| Literature DB >> 35858307 |
Joseph S Reiff, Justin C Zhang, Jana Gallus, Hengchen Dai, Nathaniel M Pedley, Sitaram Vangala, Richard K Leuchter, Gregory Goshgarian, Craig R Fox, Maria Han, Daniel M Croymans.
Abstract
Policymakers and business leaders often use peer comparison information (showing people how their behavior compares to that of their peers) to motivate a range of behaviors. Despite their widespread use, the potential impact of peer comparison interventions on recipients' well-being is largely unknown. We conducted a 5-mo field experiment involving 199 primary care physicians and 46,631 patients to examine the impact of a peer comparison intervention on physicians' job performance, job satisfaction, and burnout. We varied whether physicians received information about their preventive care performance compared to that of other physicians in the same health system. Our analyses reveal that our implementation of peer comparison did not significantly improve physicians' preventive care performance, but it did significantly decrease job satisfaction and increase burnout, with the effect on job satisfaction persisting for at least 4 mo after the intervention had been discontinued. Quantitative and qualitative evidence on the mechanisms underlying these unanticipated negative effects suggests that the intervention inadvertently signaled a lack of support from leadership. Consistent with this account, providing leaders with training on how to support physicians mitigated the negative effects on well-being. Our research uncovers a critical potential downside of peer comparison interventions, highlights the importance of evaluating the psychological costs of behavioral interventions, and points to how a complementary intervention, leadership support training, can mitigate these costs.
Keywords: field experiment; healthcare; peer comparison; well-being
Year: 2022 PMID: 35858307 PMCID: PMC9303988 DOI: 10.1073/pnas.2121730119
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 12.779
Fig. 1. Treatment effect estimates on job satisfaction and burnout. The blue and red dots reflect the estimated treatment effects of the respective conditions (vs. Control Condition) on job satisfaction (upper panel) and burnout (lower panel). Error bars reflect 95% confidence intervals.
Fig. 2. Treatment effect estimates of adding leadership support training to the peer comparison intervention. The blue dots reflect the estimated treatment effects on job satisfaction (upper panel) and burnout (lower panel) of the Peer Comparison and Leadership Training Condition (Condition 3) relative to the Peer Comparison Condition (Condition 2). Error bars reflect 95% confidence intervals.
Fig. 3. Treatment effect estimates on perceived leadership support. The blue and red dots show the estimated treatment effects in the respective conditions (relative to the Control Condition) on perceived leadership support. The error bars reflect 95% confidence intervals.
Fig. 4. Treatment effect estimates of leadership training on perceived leadership support. The blue dots show the estimated treatment effects of the Peer Comparison and Leadership Training Condition (Condition 3) relative to the Peer Comparison Condition (Condition 2). Error bars reflect 95% confidence intervals.
Descriptions of intervention(s) implemented in each condition
| Condition | Main intervention elements |
|---|---|
| 1. Control | - Monthly emails informed PCPs of their HM completion rate over the prior 3 mo, the focus measure on which they had performed the best, and the two focus measures that they could most improve on |
| 2. Peer comparison | - Same information as in the monthly emails in the Control Condition, plus information on how the PCP's performance compared to that of other physicians in the same health system |
| 3. Peer comparison and leadership training | - Same monthly emails as in the Peer Comparison Condition; additionally, leaders received training on how to support physicians |
Fig. 5. Study timeline. This timeline depicts the timing of the relevant events in the study. The L-shaped lines depict events that occurred over sustained periods of time. The performance feedback emails were initially sent at the beginning of each month, and up to two reminders were sent during the month to those who had not opened the initial emails; PCPs had approximately 2 wk to complete the quarterly surveys. The straight vertical lines depict discrete events. For ease of visualization, the email and survey dates are approximate. See for the precise dates of each email sent and survey launched.