
Measuring success: perspectives from three optimization programs on assessing impact in the age of burnout.

Eli M Lourie, Lindsay A Stevens, Emily C Webber.

Abstract

Electronic health record (EHR) optimization has been identified as a best practice to reduce burnout and improve user satisfaction; however, measuring success can be challenging. The goal of this manuscript is to describe the limitations of measuring optimizations and opportunities to combine assessments for a more comprehensive evaluation of optimization outcomes. The authors review lessons from 3 U.S. healthcare institutions that presented their experiences and recommendations at the American Medical Informatics Association 2020 Clinical Informatics conference, describing uses and limitations of vendor time-based reports and surveys utilized in optimization programs. Compiling optimization outcomes supports a multi-faceted approach that can produce assessments even as time-based reports and technology change. The authors recommend that objective measures of optimization must be combined with provider- and clinician-defined value to provide long-term improvements in user satisfaction and reduce EHR-related burnout.
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association.


Keywords:  burnout; efficiency; electronic health record (EHR); optimization

Year:  2020        PMID: 33655200      PMCID: PMC7903326          DOI: 10.1093/jamiaopen/ooaa056

Source DB:  PubMed          Journal:  JAMIA Open        ISSN: 2574-2531


LAY SUMMARY

This article is focused on the most effective and reliable ways to measure improvements when optimizing the electronic health record (EHR). Some measures, such as process outcomes like time spent in the EHR and time in the EHR after regular business hours, appear easy to use but present complexities in their reliability and clarity. Surveys have been found to be helpful but are also highly variable. These challenges make it difficult to use quality improvement tools to demonstrate sustainable improvements. The urgency of addressing EHR-related burnout increases the pressure to improve these measurement tools and then to combine them effectively to gauge improvements in the EHR.

INTRODUCTION

The use of electronic health records (EHRs) has been consistently identified as a major contributor to physician burnout. In response, EHR optimization programs have been adopted to help address EHR-related causes of burnout; however, the drivers of satisfaction are often challenging to measure, especially with regard to time in the EHR. A number of confounding factors make measuring success in these programs difficult. Traditional measures of efficiency, such as time spent on specific tasks, become challenging to apply to physician workflows, even with vendor-supplied time-based reports. Some time-based reports may not correlate with burnout or EHR satisfaction, and different venues and roles may have different expectations and needs around efficiency. Separating inpatient time from ambulatory time can be confusing, as can accounting for the effect of time spent teaching. Time-based reports may be limited by mobile versus desktop platforms, by whether platforms are integrated, and by the addition of new functionality (eg, electronic prior authorization, or FHIR APIs that move the "outside" record search into the EHR). Telehealth may introduce further timing confounders. Thus, time measurements are limited in accuracy, as they do not always correlate with higher user EHR satisfaction or with burnout inventory scores; nevertheless, time metrics remain the most common evaluation of optimization effectiveness. In this article, outcomes from EHR optimization efforts at 3 health systems are shared. A review of these experiences yields recommended best practices on how to incorporate time-based reports, surveys, and other metrics to promote accurate, comprehensive, and effective measures of optimization.

EHR OPTIMIZATION PROGRAMS: TARGETING BURNOUT AND GETTING “BURNED” BY TIME-BASED REPORTS

All 3 programs in this article identified reducing EHR-related causes of burnout as one of the primary outcomes they were asked to address. While the approaches differed, there was alignment around personal coaching and optimized workflows, as well as around the use of vendor EHR time-based reports to assess impact.

Stanford

In efforts to better understand and improve provider efficiency, the team at Stanford discovered that individualized training can improve providers' self-perceived knowledge and use of EHR tools and also improve their satisfaction with their EHR workload. It is well known that lack of mastery contributes to feelings of burnout, so it is logical that efforts to improve an individual's mastery of the EHR may have a positive impact on their wellness. While surveys can give a sense of whether interventions make an impact, they are time-consuming to administer and are not always fully representative of a given population. The Stanford team identified that more after-hours time correlated with a worse provider EHR experience, making it a prime candidate for an EHR-data-driven metric to track outcomes. How one defines this metric of "after-hours time," or "work outside work (WOW)," leads to variation in the data measured. The Stanford Clinician Logged-in Outside Clinic (CLOC) algorithm was able to approximate after-hours time for outpatient providers; however, it became less useful for providers who split time between inpatient and ambulatory service. Because the calculations were based on a provider's clinic schedule, time estimates for these providers were skewed by inpatient time in the system that was not captured, thus overestimating WOW. The project at Stanford did not see a statistically significant improvement in WOW time for the providers who participated in the individualized training program; however, it did see an improvement in the providers' satisfaction with their workload. This raises the question: how do we assess value as we define these metrics for success?
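The overestimation problem can be made concrete. The sketch below is a hypothetical simplification, not the published CLOC algorithm: it estimates after-hours time by subtracting clinic-schedule overlap from total active EHR time. All function names and data are invented for illustration.

```python
from datetime import datetime

def wow_minutes(active_sessions, clinic_blocks):
    """Estimate 'work outside work' (WOW) as active EHR minutes that fall
    outside scheduled clinic blocks. Sessions and blocks are (start, end)."""
    def overlap_min(a, b):
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return max((end - start).total_seconds(), 0) / 60

    total = sum((s[1] - s[0]).total_seconds() / 60 for s in active_sessions)
    in_clinic = sum(overlap_min(s, c) for s in active_sessions for c in clinic_blocks)
    return total - in_clinic

# One scheduled clinic block, 8am-5pm (illustrative data).
blocks = [(datetime(2020, 6, 1, 8), datetime(2020, 6, 1, 17))]
sessions = [
    (datetime(2020, 6, 1, 9), datetime(2020, 6, 1, 12)),   # in clinic
    (datetime(2020, 6, 1, 20), datetime(2020, 6, 1, 21)),  # evening charting
]
print(wow_minutes(sessions, blocks))  # 60.0
```

In this toy example the evening session contributes 60 minutes of WOW; an inpatient shift absent from the clinic schedule would be counted identically even though it is legitimate scheduled work, mirroring the overestimation Stanford observed for providers with mixed service.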

Children’s Hospital of Philadelphia

Children’s Hospital of Philadelphia (CHOP) also has a re-training program to improve provider efficiency and, ideally, would also like to measure WOW and time in the EHR per patient, shift, or day as a measure of success. However, time has proven a challenging metric for a number of reasons. First is the difficulty of validating the data provided by their vendor. EHR access logs capture every action a user performs and assign it a category such as orders, notes, or chart review. The assignment of these actions is not always clear: for example, "notes viewed" may be assigned to the category "Notes and Letters," while "notes viewed in chart review" may be assigned to "Clinical Review." Ideally, a time and motion study could be used to validate that these categorizations match real-life usage patterns; as of this writing, such studies have not been done by the vendors or others. Second, vendors use proprietary algorithms to generate their time data, and those algorithms change often. Because the data sets generated from the access logs are so large, the vendors do not keep them in long-term storage and therefore cannot rerun old data through new algorithms. It is then difficult or impossible to follow time metrics longitudinally, unless the study period happens not to coincide with an algorithm change. Because of these difficulties, CHOP has used qualitative survey data as its primary measurement. Although the surveys showed an increase in efficiency and satisfaction among providers who completed the program, the team continues to seek a reliable quantitative metric and to work internally and with their vendor to better validate the access log time data.
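The categorization ambiguity can be illustrated with a minimal sketch of rolling access-log events up into vendor-style time categories. The mapping and event names here are hypothetical, not any vendor's actual schema; the point is that the same clinical activity (viewing a note) lands in different buckets depending on where it was opened.

```python
from collections import defaultdict

# Hypothetical vendor mapping from logged actions to reporting categories.
ACTION_CATEGORY = {
    "note_viewed":                 "Notes and Letters",
    "note_viewed_in_chart_review": "Clinical Review",   # same activity, different bucket
    "order_placed":                "Orders",
}

def time_by_category(events):
    """Sum active seconds per vendor-assigned category.

    events: iterable of (action_name, active_seconds) tuples."""
    totals = defaultdict(float)
    for action, seconds in events:
        totals[ACTION_CATEGORY.get(action, "Other")] += seconds
    return dict(totals)

events = [("note_viewed", 120), ("note_viewed_in_chart_review", 90), ("order_placed", 30)]
print(time_by_category(events))
# {'Notes and Letters': 120.0, 'Clinical Review': 90.0, 'Orders': 30.0}
```

A longitudinal comparison breaks as soon as the mapping table (or the vendor's equivalent algorithm) changes between measurement periods, which is the validation problem CHOP describes.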

Indiana University Health

The EHR optimization program at Indiana University Health applied specialty-specific workflows and directed coaching in an attempt to reduce WOW. Indiana University Health is closely affiliated with the Indiana University School of Medicine, and this optimization project also included adoption of recommended workflows with the goal of reducing WOW as well as overall time in the EHR. Using the vendor reports required manual review to exclude providers who did hospital-based work or worked variable shift times not accounted for in the standard report. For physicians excluded from the standard reports, success was measured in other ways, such as completion of required documents, submission of bills on time, and overall time in chart per patient. These measures were somewhat useful for demonstrating that project objectives were met but did not carry the same impact. The program demonstrated an overall reduction in time in chart per patient, as well as improvement in overall operational time-based reports; however, these data were used with a definite "grain of salt" reflecting the complexities of the reporting. They were most useful as a control chart to help track overall trends, rather than as a direct reflection of individual improvement. Surveys were used to assess satisfaction with the improvement program itself. The narrative comments in surveys often revealed additional signs of burnout cited in the Maslach inventory, such as emotional exhaustion due to a lack of control. For example, one group of specialists showed high adoption of the optimized workflow on the mobile EHR (a practice recommended in the program to reduce WOW), apart from one physician. That physician stated that they preferred "control of where and how I do my work. If you give me the mobile EHR, I’ll get sucked into work even more than I am now and I’ll be more burned out." Adding to the challenge of assessment, the mobile EHR time-log data were excluded from vendor WOW calculations.
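The control-chart use of these data can be sketched with a basic individuals (XmR) chart, in which 3-sigma limits are estimated from the mean moving range. This is a generic quality-improvement technique, not Indiana University Health's actual tooling, and the weekly figures below are invented.

```python
def xmr_limits(values):
    """Individuals (XmR) chart: center line and 3-sigma control limits,
    with sigma estimated as mean moving range / 1.128 (the d2 constant)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean, mean - 3 * sigma, mean + 3 * sigma

def signals(values):
    """Return (index, value) for points outside the control limits."""
    _, lcl, ucl = xmr_limits(values)
    return [(i, v) for i, v in enumerate(values) if not lcl <= v <= ucl]

# Weekly mean time-in-chart per patient, in minutes (illustrative data):
# one unusually low week stands out against the common-cause variation.
weekly = [12.1, 11.8, 12.0, 11.6, 12.2, 11.9, 12.0, 11.7, 12.1, 7.5, 11.8, 12.0]
print(signals(weekly))  # [(9, 7.5)]
```

Used this way, the chart separates ordinary week-to-week noise from genuine shifts in the aggregate trend, which is how the program treated its time-based data: as a trend signal rather than a per-provider verdict.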
Comments (Figure 1) documented during coaching by the clinical informatics teams and coaches, and shared on a social communication channel, also provided insight into clinician-defined value.
Figure 1. Comments captured by coaches (recorded in real time on Slack).

The ongoing optimization effort at Indiana University Health is now measured using a combination of vendor-generated time-based reports and surveys, with an emphasis also placed on clinician-defined optimization.

APPROACHING EHR OPTIMIZATION AS CONTINUOUS IMPROVEMENT, WITH A MORE COMPREHENSIVE SET OF TOOLS TO MEASURE SUCCESS

EHR-related burnout is a continuing problem, and optimization programs are vital in combating it. What we have found in our organizations is that optimization should not be approached as a one-time effort but as continuous improvement. To leverage quality improvement methodologies (eg, Lean, PDSA) most effectively, we must have reliable measurement tools. All 3 of our programs strove to find ways to measure success appropriately, and all 3 needed a combination of time-based measurements and surveys to capture improvements. It is clear to us that accurate, time-based reports from vendors must be part of a multi-dimensional approach to demonstrating progress in optimization efforts. While optimization programs may yield time savings, the use of vendor time reports requires consistent, ongoing evaluation and may need significant work to ensure accuracy. These quantitative, time-based measurements are routinely available but are subject to changing vendor algorithms, inconsistent definitions of and mappings to user actions, and a lack of transparency in how access log data are validated in vendor metrics. Additionally, the inherent "noise" in how EHR actions fit into the clinical workflow is difficult to capture without costly and complex time and motion studies. As EHRs adapt and add functionality, vendors need to add improved methods of measuring the EHR burden that contributes to clinician burnout. Ideally, vendors should work internally to validate their access log data and ensure that those data accurately represent users' workflows. Finally, efforts to standardize measurements, including collaboration between vendors, are needed, as is constant discipline in measurement integrity. EHR optimization programs must have more than time-based metrics to demonstrate success in user satisfaction. In our programs, surveys were used to help close the gaps left by unreliable time metrics.
However, surveys can have limited participation, are qualitative (making it more challenging to demonstrate rigor in the results), and may themselves add to burnout in an over-surveyed physician and clinician workforce. We believe that, despite their downsides, these qualitative measures of success still have an important place in optimization. What is needed is a framework for such measurements that can benchmark programs across organizations and across EHR vendors. EHR user satisfaction is a specific measure of EHR-related burnout, and we must address its causes. Although improving EHR satisfaction scores is a key driver and can be positively impacted by the programs described in this article, among others, it does not fully address burnout, and additional components of burnout must be considered. In addition to the practices of the optimization programs we delineate here, standards for improving user satisfaction are beginning to establish "best practices," both in industry publications such as the KLAS collaborative and in the medical literature. For example, Adler-Milstein et al identified and reaffirmed that both "after-hours" time on clinic days and message volume correlated with greater odds of high exhaustion. Reducing EHR-related burnout is complex, and tools to measure success must adapt and evolve to meet the need. Measurements will need to capture ongoing commitment to continuous improvement, as well as the impact of building mastery and of allowing clinical users to determine which improvements are valuable. This requires investment in the training and technologies that have demonstrated improvements: protecting time on physicians' schedules to receive coaching, replacing outdated dictation systems with embedded voice recognition, and "smarter" interoperability.
A continuous optimization project truly committed to reducing EHR-related burnout also means aligning efforts to support physicians and other clinicians in a sustainable workload. As users improve their efficiency, the time "gained back" through optimization should be applied to clinically impactful work, not refilled with additional administrative responsibilities that might undo any reduction in burnout.

FUNDING

This was not a funded study.

AUTHOR CONTRIBUTIONS

All authors contributed sufficiently and meaningfully to the conception, design, draft, edits and revision of the manuscript. All authors approved the final version for submission.
REFERENCES (9 in total)

1.  Practicing Clinicians' Recommendations to Reduce Burden from the Electronic Health Record Inbox: a Mixed-Methods Study.

Authors:  Daniel R Murphy; Tyler Satterly; Traber D Giardina; Dean F Sittig; Hardeep Singh
Journal:  J Gen Intern Med       Date:  2019-07-10       Impact factor: 5.128

2.  Optimization Sprints: Improving Clinician Satisfaction and Teamwork by Rapidly Reducing Electronic Health Record Burden.

Authors:  Amber Sieja; Katie Markley; Jonathan Pell; Christine Gonzalez; Brian Redig; Patrick Kneeland; Chen-Tan Lin
Journal:  Mayo Clin Proc       Date:  2019-02-26       Impact factor: 7.616

3.  Advanced proficiency EHR training: effect on physicians' EHR efficiency, EHR satisfaction and job satisfaction.

Authors:  M Tariq Dastagir; Homer L Chin; Michael McNamara; Kathy Poteraj; Sarah Battaglini; Lauren Alstot
Journal:  AMIA Annu Symp Proc       Date:  2012-11-03

4.  Electronic health records and burnout: Time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians.

Authors:  Julia Adler-Milstein; Wendi Zhao; Rachel Willard-Grace; Margae Knox; Kevin Grumbach
Journal:  J Am Med Inform Assoc       Date:  2020-04-01       Impact factor: 4.497

5.  Have you got the time? Challenges using vendor electronic health record metrics of provider efficiency.

Authors:  Jonathan D Hron; Eli Lourie
Journal:  J Am Med Inform Assoc       Date:  2020-04-01       Impact factor: 4.497

6.  Association of Electronic Health Record Design and Use Factors With Clinician Stress and Burnout.

Authors:  Philip J Kroth; Nancy Morioka-Douglas; Sharry Veres; Stewart Babbott; Sara Poplau; Fares Qeadan; Carolyn Parshall; Kathryne Corrigan; Mark Linzer
Journal:  JAMA Netw Open       Date:  2019-08-02

7.  Metrics for assessing physician activity using electronic health record log data.

Authors:  Christine A Sinsky; Adam Rule; Genna Cohen; Brian G Arndt; Tait D Shanafelt; Christopher D Sharp; Sally L Baxter; Ming Tai-Seale; Sherry Yan; You Chen; Julia Adler-Milstein; Michelle Hribar
Journal:  J Am Med Inform Assoc       Date:  2020-04-01       Impact factor: 4.497

8.  Electronic health record (EHR) training program identifies a new tool to quantify the EHR time burden and improves providers' perceived control over their workload in the EHR.

Authors:  Yumi T DiAngi; Lindsay A Stevens; Bonnie Halpern-Felsher; Natalie M Pageler; Tzielan C Lee
Journal:  JAMIA Open       Date:  2019-03-21

9.  Designing An Individualized EHR Learning Plan For Providers.

Authors:  Lindsay A Stevens; Yumi T DiAngi; Jonathan D Schremp; Monet J Martorana; Roberta E Miller; Tzielan C Lee; Natalie M Pageler
Journal:  Appl Clin Inform       Date:  2017-12-20       Impact factor: 2.342

