Literature DB >> 31853490

Early Performance Trends After the Public Posting of Ambulatory Patient Satisfaction Reviews.

Paige G Wickner1,2,3, Christian Dankers1,3, Melanie Green4, Hojjat Salmasian1,3, Allen Kachalia5,6.   

Abstract

Keywords:  clinician–patient relationship; patient feedback; patient satisfaction; survey data

Year:  2019        PMID: 31853490      PMCID: PMC6908990          DOI: 10.1177/2374373519833649

Source DB:  PubMed          Journal:  J Patient Exp        ISSN: 2374-3735


Introduction

Patients are increasingly turning to online physician reviews to guide their choice of physicians (1). In parallel, health-care organizations have started publicly posting physician reviews as an alternative to independent review sites. In 2012, University of Utah Health began posting ambulatory patient reviews to foster transparency and trust with patients, provide clinicians with performance feedback, and demonstrate an institutional patient-centered focus (2,3). In 2016, for the same reasons, Brigham and Women's Hospital (BWH) began publicly posting ambulatory patient-experience reviews. As more health systems embark on this endeavor, little published data exist on rating trends once reviews are publicly posted. In this report, we share early data on physician ratings and reviews after a transition to public-facing ambulatory patient-experience comments. Our aims were to (a) determine whether ratings improved once they went public and (b) determine whether patients providing higher or lower ratings were more likely to leave comments.

Methods

Brigham and Women's Hospital's process takes star ratings and comments from 10 provider questions on our ambulatory survey and posts them online within the individual physician's hospital directory listing. We also post the comment posting and screening guidelines so that the public knows how the process works. All patient comments about their physician visits are posted unless they meet 1 of 4 limited criteria: offensive language, inflammatory or potentially libelous material, protected health information, or mention of other providers, trainees, or nonphysician staff. Brigham and Women's Hospital utilizes the Press Ganey Ambulatory eSurvey® to collect patient feedback after outpatient encounters. Press Ganey uses a star rating system from 1 to 5 for each question, where 1 is the lowest rating and 5 is the highest, for all of its surveys. Scores from the 10 questions about the physician are averaged into an overall star rating for the provider (4).

We analyzed data from surveys encompassing encounters from August 2012 until December 2017. This included data from 44 specialties (of which 22 went live with public comments during the study period), 1544 distinct providers, and 128,083 distinct encounters. We analyzed the data descriptively and compared ratings between groups using the Wilcoxon rank-sum test. For each specialty, the 1-year period before public comments was compared to the period after the specialty became public facing. Because specialties went public facing at different times (i.e., a staged rollout), the "post" period ranged from 6 to 16 months depending on the specialty. We also assessed trends over time (from 2012 to 2017) using a general linear regression model. In the model, the star rating was the dependent variable; the encounter date and the study arm (pre vs post) were the predictors, and specialty and time were entered as dummy variables.
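The per-specialty pre/post comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: a two-sided Wilcoxon rank-sum test with the normal approximation (average ranks for ties, tie correction omitted for brevity), applied to illustrative star ratings rather than real Press Ganey data.

```python
# Hedged sketch of the Methods' Wilcoxon rank-sum comparison; the ratings
# below are illustrative stand-ins, not actual survey data.
import math

def rank_sum_test(pre, post):
    """Two-sided Wilcoxon rank-sum test via the normal approximation."""
    n1, n2 = len(pre), len(post)
    pooled = list(pre) + list(post)
    order = sorted(range(n1 + n2), key=lambda i: pooled[i])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:  # assign average ranks to tied values
        j = i
        while j + 1 < n1 + n2 and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # ranks are 1-based
        i = j + 1
    w = sum(ranks[:n1])                        # rank sum of the "pre" sample
    mu = n1 * (n1 + n2 + 1) / 2                # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Illustrative 1-5 star ratings before and after public posting.
pre_ratings = [1, 2, 3, 4, 5] * 20
post_ratings = [2, 3, 4, 5, 5] * 20
z, p = rank_sum_test(pre_ratings, post_ratings)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice, a library routine such as SciPy's `mannwhitneyu` would be used instead; the hand-rolled version is shown only to make the test's mechanics explicit.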

Results

In the 1 year before public display of reviews at BWH, our average physician rating was 4.77 of 5. The first 4 specialties posted their physician ratings publicly in July 2016; in 2017, 18 additional specialties made their reviews public facing in a staged fashion. Comparing average hospital-wide ratings before and after the inception of the program, we saw a small but statistically significant upward trend (from 4.77 to 4.80, P value < .0001). A smaller, but also statistically significant, upward trend was observed for specialties that were not publicly displayed (on average, scores increased by a factor of 1.00033 each year, P value < .0001), and a similar baseline trend was observed for specialties that chose to display the comments publicly. Overall, the public posting of reviews was associated with a statistically significant rise in the improvement trend (odds ratio = 1.03, P value < .0001) compared to specialties that did not post publicly. Every specialty that went public had a higher average overall rating afterward (Table 1), with 13 of 22 having statistically significant changes.
Table 1.

Comparison of Ratings Before and After Public Posting of Patient Comments and Ratings, by Specialty.

| Specialty | Public Ratings Period (months) | Mean Difference in Star Ratings | P Value | Number of Providers | Number of Surveys |
|---|---|---|---|---|---|
| Allergy and immunology | 10 | 0.034 | .044 | 19 | 1861 |
| Anesthesia and pain management | 6 | 0.034 | .183 | 16 | 1593 |
| Cardiac surgery | 7 | 0.079 | .562 | 6 | 220 |
| Cardiovascular medicine | 10 | 0.022 | .047 | 86 | 7695 |
| Dermatology | 10 | 0.043 | <.001 | 44 | 16505 |
| Endocrine surgery | 7 | 0.069 | .411 | 2 | 118 |
| Endocrinology, diabetes, and hypertension | 10 | 0.045 | .009 | 38 | 3236 |
| Foot and ankle surgery | 7 | 0.037 | .042 | 4 | 994 |
| Gastrointestinal and general surgery | 7 | 0.011 | .062 | 23 | 2721 |
| Infectious disease | 10 | 0.003 | .081 | 28 | 862 |
| Neurology | 16 | 0.044 | <.001 | 88 | 7683 |
| Neurosurgery | 16 | 0.029 | .213 | 21 | 1928 |
| Obstetrics/gynecology | 10 | 0.019 | .775 | 62 | 6701 |
| Orthopedic surgery | 16 | 0.025 | .001 | 33 | 7502 |
| Plastic surgery | 7 | 0.036 | .010 | 15 | 2312 |
| Primary care | 7 | 0.037 | <.001 | 145 | 30815 |
| Psychiatry | 8 | 0.047 | .035 | 42 | 1231 |
| Renal disease | 8 | 0.012 | .155 | 33 | 1171 |
| Rheumatology | 10 | 0.004 | .051 | 33 | 4889 |
| Sleep medicine | 10 | 0.181 | .005 | 5 | 346 |
| Thoracic surgery | 7 | 0.046 | .010 | 17 | 1380 |
| Vascular and endovascular surgery | 7 | 0.137 | <.001 | 7 | 570 |
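The pre-to-post shift reported above can be illustrated with a stripped-down regression sketch. This is not the paper's model (which also included encounter date and specialty and time dummies); it is a minimal least-squares fit of rating on a single pre/post indicator, using made-up ratings, where the fitted coefficient equals the post-minus-pre difference in means.

```python
# Illustrative only: rating ~ intercept + beta * is_post by ordinary least
# squares. With a binary predictor, beta is the post-minus-pre mean shift.
def ols_shift(ratings, is_post):
    n = len(ratings)
    mean_y = sum(ratings) / n
    mean_x = sum(is_post) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(is_post, ratings))
    sxx = sum((x - mean_x) ** 2 for x in is_post)
    beta = sxy / sxx
    return mean_y - beta * mean_x, beta  # (intercept = pre mean, shift)

# Made-up overall star ratings, four encounters before and four after.
ratings = [4.7, 4.8, 4.9, 4.7, 4.8, 4.9, 4.8, 4.9]
is_post = [0, 0, 0, 0, 1, 1, 1, 1]
intercept, shift = ols_shift(ratings, is_post)
print(f"pre mean = {intercept:.3f}, post shift = {shift:+.3f}")
```

A full reproduction would add the date and dummy-variable terms, e.g. via a statistics package's OLS routine rather than this hand-rolled version.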
We also evaluated whether dissatisfied patients were more likely to leave reviews, an action that could negatively skew their physicians' online profiles. There were 40,093 five-star ratings with 5128 (12.79%) associated comments, and 132 one-star ratings with 10 (7.58%) associated comments (Table 2). Patients who assigned 5 stars to their encounter were significantly more likely to leave comments than those who assigned a 1- to 4-star rating (P value < .0001; Table 2). In addition, the number of patients who gave a high (4- or 5-star) rating far outweighed the number who gave a low (1- or 2-star) rating, both before and after the public display of the data.
Table 2.

Physician Ratings and Comments Provided by Patients.a

| Star Rating | Total Ratings | Ratings With Comment (%) | Percentage of Overall Comments |
|---|---|---|---|
| 5 | 40,093 | 5128 (12.79%) | 94.3% |
| 4 | 4841 | 206 (4.26%) | 3.78% |
| 3 | 938 | 62 (6.61%) | 1.14% |
| 2 | 365 | 31 (8.49%) | 0.57% |
| 1 | 132 | 10 (7.58%) | 0.18% |

a As a note, a rating of 1.99 was treated as 1 star and a 4.99 as 4 stars; 1 is the lowest rating and 5 the highest.
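The comment-rate comparison in Table 2 can be checked directly from the published counts. The paper reports P < .0001 without naming the test it used; the sketch below uses a pooled two-proportion z-test (our assumption, a standard choice) comparing 5-star raters against all 1- to 4-star raters combined.

```python
# Two-sided two-proportion z-test (pooled standard error), applied to the
# comment counts published in Table 2. Test choice is our assumption; the
# paper does not specify which test produced its P value.
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Return (z, p) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 5-star ratings with comments vs all 1-4-star ratings with comments.
z, p = two_prop_ztest(5128, 40093, 206 + 62 + 31 + 10, 4841 + 938 + 365 + 132)
print(f"z = {z:.1f}, p = {p:.2g}")
```

The z statistic is large, consistent with the paper's P value < .0001 for the 5-star versus lower-star comment rates.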


Discussion

To our knowledge, this is the first analysis reporting trends in specialty-specific star ratings with a transition to public-facing ratings and comments. We also found that positive ratings far exceeded negative ratings and that patients who gave a 5-star rating provided comments significantly more often than those giving a lower rating. While our early experience has given us important, high-level information on the nature of the data and the pace of change, other questions, which are harder to answer, remain. Will providers use these comments for improvement, and if so, how? It has been suggested that anonymous patient feedback cannot improve physician performance because it lacks specific context for the provider and may in fact be falsely reassuring or inaccurately alarming (5). Our oversight group has indeed debated how best, at a practice and individual level, to interpret "subjective" responses. However, as a result of our process, we have identified individual physicians whose patient-experience comment and score patterns have led to personalized training efforts. In the future, we hope to develop improved tools for providers with lower scores, as well as automated methods of rating and comment analysis that alert us when a provider is receiving concerning patterns of stars and comments.

Although we believe these data are important to share transparently with patients, how patients should best utilize this information remains unknown. The interpretation of potentially conflicting closed-question ratings and open-ended narratives, and their contribution to clinician score variation, is critical to understand (6). Equally important is how we then help educate patients on optimal use. In addition to the issues identified above, there are other limitations to our analysis. We have 1 year of unadjusted data, and the longer term effects remain unknown.
One challenge in analyzing rating data is that for physicians who are part-time clinicians or who practice in low-volume specialties, there may not be sufficient data for pattern observation. We did not separately account for physicians who appeared in only the pre- or the post-group. We also could not account for other simultaneous departmental or divisional patient-experience improvement efforts that may have affected our scores. The small numerical difference we see before versus after the public-facing comment transition is likely statistically significant because of the large sample size, and it remains unclear whether seeing a physician rated 4.75 of 5 ensures a noticeably different patient experience than seeing one rated 4.80. However, we saw the improvement across specialties with different sample sizes, suggesting that the change is not exclusively attributable to randomness. Physician awareness of the questions being asked on the survey and engagement in the comment process likely contributed to the score improvement seen in our data. We are encouraged by the positive ratings and comments, which we believe help highlight the quality and skill of our physicians. Though scores have only improved since the inception of the program, there remain areas for improvement. How to pair this practice with meaningful quality measurement, and how best to educate patients on interpretation, merit further investigation. Receiving negative feedback, even in the context of an overall positive star rating and a majority of positive comments, can be deflating. The effect of patient satisfaction ratings on provider perceptions, wellness, and practice, with their complex interactions, also needs further serious consideration (7-9). We remain optimistic that institutions can meet these challenges and create a transparent system that benefits patients and physicians alike.
References (8 in total)

1.  Will doctor rating sites improve the quality of care? No.

Authors:  Margaret McCartney
Journal:  BMJ       Date:  2009-03-17

2.  Physician and Patient Views on Public Physician Rating Websites: A Cross-Sectional Study.

Authors:  Alison M Holliday; Allen Kachalia; Gregg S Meyer; Thomas D Sequist
Journal:  J Gen Intern Med       Date:  2017-02-01       Impact factor: 5.128

3.  Transparency and Trust - Online Patient Reviews of Physicians.

Authors:  Vivian Lee
Journal:  N Engl J Med       Date:  2017-01-19       Impact factor: 91.245

4.  The impact of patient feedback on the medical performance of qualified doctors: a systematic review.

Authors:  Rebecca Baines; Sam Regan de Bere; Sebastian Stevens; Jamie Read; Martin Marshall; Mirza Lalani; Marie Bryce; Julian Archer
Journal:  BMC Med Educ       Date:  2018-07-31       Impact factor: 2.463

5.  Popularity of internet physician rating sites and their apparent influence on patients' choices of physicians.

Authors:  Christopher M Burkle; Mark T Keegan
Journal:  BMC Health Serv Res       Date:  2015-09-26       Impact factor: 2.655

6.  Impact of patient satisfaction ratings on physicians and clinical care.

Authors:  Aleksandra Zgierska; David Rabago; Michael M Miller
Journal:  Patient Prefer Adherence       Date:  2014-04-03       Impact factor: 2.711

7.  Creating the Exceptional Patient Experience in One Academic Health System.

Authors:  Vivian S Lee; Thomas Miller; Chrissy Daniels; Marilynn Paine; Brian Gresh; A Lorris Betz
Journal:  Acad Med       Date:  2016-03       Impact factor: 6.893

8.  CAHPS and Comments: How Closed-Ended Survey Questions and Narrative Accounts Interact in the Assessment of Patient Experience.

Authors:  Steven C Martino; Dale Shaller; Mark Schlesinger; Andrew M Parker; Lise Rybowski; Rachel Grob; Jennifer L Cerully; Melissa L Finucane
Journal:  J Patient Exp       Date:  2017-01-01
