Timothy J Daskivich1,2, Justin Houman1, Garth Fuller2,3, Jeanne T Black4, Hyung L Kim1, Brennan Spiegel2,3,5. 1. Division of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA. 2. Cedars-Sinai Center for Outcomes Research and Education (CS-CORE), Cedars-Sinai Medical Center, Los Angeles, CA, USA. 3. Department of Medicine, Division of Health Services Research, Cedars-Sinai Health System, Los Angeles, CA, USA. 4. Resource and Outcomes Management Department, Cedars-Sinai Health System, Los Angeles, CA, USA. 5. Department of Health Policy and Management, UCLA Fielding School of Public Health, Los Angeles, CA, USA.
Abstract
Objective: Patients use online consumer ratings to identify high-performing physicians, but it is unclear whether ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance. Materials and Methods: We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores. Results: Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04 to 0.04), primary care physician scores (β-coefficient range, -0.01 to 0.3), or administrator scores (β-coefficient range, -0.2 to 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted their score on another in 5 of 10 comparisons. Discussion: Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance. Conclusion: Given their poor association with clinical performance, online consumer ratings should not be used in isolation to select physicians.