
e-Measures: insight into the challenges and opportunities of automating publicly reported quality measures.

Terhilda Garrido, Sudheen Kumar, John Lekas, Mark Lindberg, Dhanyaja Kadiyala, Alan Whippy, Barbara Crawford, Jed Weissberg.

Abstract

Using electronic health records (EHR) to automate publicly reported quality measures is receiving increasing attention and is one of the promises of EHR implementation. Kaiser Permanente has fully or partly automated six of 13 The Joint Commission measure sets. We describe our experience with automation and the resulting time savings: a reduction of approximately 50% in the abstractor time required for one measure set alone (the surgical care improvement project). However, our experience illustrates the gap between the current and desired states of automated public quality reporting, which has important implications for measure developers, accrediting entities, EHR vendors, public/private payers, and government.

Keywords:  Automatic Data Processing; Electronic Health Records; Quality Assurance/Health Care

Year:  2013        PMID: 23831833      PMCID: PMC3912717          DOI: 10.1136/amiajnl-2013-001789

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


Introduction

Quality measurement, a key lever to improve healthcare, has traditionally relied on administrative claims data and time-consuming manual chart abstraction.1 Health information technology (IT) promises to generate quality measurement and public reporting more readily through automated data collection.2 Many believe that electronic health records (EHR) offer new potential for quality measurement.3 The US Health Information Technology for Economic and Clinical Health Act of 2009 invested US$20 billion in health IT infrastructure and Medicare and Medicaid ‘meaningful use’ (MU) incentives. The proportion of acute care hospitals adopting at least a basic EHR more than doubled between 2009 and 2011, and in 2011 85% of hospitals planned to attest to MU of certified EHR technology by 2015, which includes submitting clinical quality measures.4 Stage 1 measures include 15 metrics focusing on emergency department (ED) throughput, stroke care, venous thromboembolism (VTE) prevention, and anticoagulation.5

The taxonomy for measuring and reporting performance from EHR is evolving; we use ‘e-measures’ and ‘automated measures’ interchangeably to refer to all partly or fully automated processes for generating performance information from EHR-contained data. Multiple stakeholders are vested in their development: the consumers and communities e-measures are intended to benefit, governmental entities, and commercial payers.1 At the request of the US Department of Health and Human Services, the National Quality Forum convened diverse stakeholders to provide recommendations for retooling 113 paper-based measures to an electronic format.6

e-Measure implementation occurs at the interface between measure developers and providers using EHR. Understanding the work of retooling paper-based quality measures for automated reporting illuminates the gap between the current and desired states of e-measures; we report here Kaiser Permanente's experience with automating quality reporting.

Automated quality reporting at Kaiser Permanente

Kaiser Permanente's EHR, KP HealthConnect, enables clinicians and employees to manage the healthcare and administrative needs of nine million members across eight geographic regions in a seamless and integrated way, with resulting quality and efficiency benefits.2 7 8 Implemented beginning in 2003, KP HealthConnect is deployed program-wide; individual regions customize EHR builds to local conditions and needs. In 2010, Kaiser Permanente care reporting staff began to re-tool selected The Joint Commission (TJC) core measures for automated quality reporting. The Northern California, Southern California, and Northwest regions currently use or are developing e-measures.

Each measure comprises numerous data elements identifying population inclusion and exclusion criteria and outcomes. Kaiser Permanente selected measures for automation on the basis of their clinical significance, importance to regulatory reporting, and reliance on discrete data elements: unambiguous individual numeric or coded values, such as cardiac rate and rhythm. Discrete data greatly reduce (but do not eliminate) the possibility of inaccurate reporting, and they enable semantic understanding of the nature of the underlying information. In contrast, non-discrete data fields contain strings of indistinguishable characters; an example is the VTE-5 measure (VTE warfarin therapy discharge instructions), for which documentation takes the form of embedded text, and text mining or current natural language processing cannot meet the 100% accuracy requirement for publicly reported measures.

To date, Kaiser Permanente has focused on six core measure sets: acute myocardial infarction, ED patient flow, immunizations, the surgical care improvement project, pneumonia, and VTE prophylaxis. Kaiser Permanente has partly or fully automated 21 of 29 measures in these sets. The number of data elements per measure ranges from eight to 93, and the proportion that is discrete and mapped ranges from 43% to 100% (table 1).
We derive the balance of TJC core measure reports from administrative claims data and manual chart abstraction.
Table 1

Current state of automated quality reporting at Kaiser Permanente Northern California

TJC core measure | Required data fields | Mapped discrete data fields | Mapped data elements (%) | Time saved per case (minutes)
SCIP | 93 | 43 | 46 | 11
AMI | 41 | 20 | 49 | 10
PN | 41 | 28 | 68 | 5
IMM | 15 | 15 | 100 | 5
VTE | 67 | 23 | 43 | 14
ED | 8 | 8 | 100 | 5

AMI, acute myocardial infarction; ED, emergency department; IMM, immunizations; PN, pneumonia; SCIP, surgical care improvement project; TJC, The Joint Commission; VTE, venous thromboembolism.

Figure 1 represents the iterative five-step process of developing and maintaining selected measures for automated reporting. A more extended discussion is available online (see supplementary web-only appendix 1, available online only).
Figure 1

Steps involved in automating quality reporting at Kaiser Permanente.

Core measure interpretation

Measures first require interpretation in light of current specifications.9 Domain experts provide input: abstractors, analysts, informaticists, legal and compliance staff, and clinicians. Measure interpretation must be consistent throughout Kaiser Permanente.

Mapping

Mapping links specifications of the interpreted measure to EHR data tables. EHR database design is highly complex and its relationship to clinical documentation is often obscure. In addition, regional configurations and local documentation workflows vary.

Coding

Coding extracts required data from the mapped data tables. Convergent Medical Terminology ‘groupers’—lists of similar medications maintained over time—are intended to eliminate the need to ‘hard-code’ the constantly evolving universe of individual medications.10
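As a minimal sketch of the grouper idea (all grouper names and medication codes below are hypothetical, not actual Convergent Medical Terminology content), extraction logic references a maintained concept list rather than individual drug codes:

```python
# Illustrative sketch: a terminology "grouper" maps a stable concept name to a
# centrally maintained set of member codes, so reporting code never hard-codes
# individual medications. All names and codes here are hypothetical.
GROUPERS = {
    # Members change over time as new drugs enter the formulary.
    "ANTICOAGULANTS": {"RX:1001", "RX:1002", "RX:1003"},
    "BETA_BLOCKERS": {"RX:2001", "RX:2002"},
}

def medication_in_grouper(med_code: str, grouper_name: str) -> bool:
    """Check a charted medication code against a grouper rather than a
    hard-coded list, so extraction logic survives formulary changes."""
    return med_code in GROUPERS.get(grouper_name, set())

# Extraction code asks about the concept, not the drug:
print(medication_in_grouper("RX:1002", "ANTICOAGULANTS"))  # → True
print(medication_in_grouper("RX:9999", "ANTICOAGULANTS"))  # → False
```

When the grouper is updated centrally, every measure that references the concept picks up the change without any code edits.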

Quality assurance/validation

Validation of automated quality reports ensures accuracy and is conducted by comparing automated results to the official submission and re-examining interpretation, mapping, and coding to rectify discrepancies.

Maintaining code over time

TJC measure definitions change to reflect evolving evidence and clinical practice, and EHR updates and new releases may change database design. Diagnostic nomenclature, procedural codes, and medication identifiers may change, as can internal factors impacting automated quality reporting.

Results of automated quality reporting

With validated accuracy of 100%, potential gains to the organization result from increased efficiency. To assess these, we compared pre- and post-automation abstraction time. We measured the time required to abstract each of 20 randomly selected cases using manual abstraction methods. After partial or full automation was complete, we again measured the time required to abstract each case. Our calculations omit the time required for automation itself; for example, an 8-week collaborative effort between national reporting and regional abstraction staff fully automated five individual immunization measures. Table 1 contains the average time savings per case from partly or fully automated reporting, compared with manual chart abstraction. In addition, we estimated the costs of development and ongoing maintenance for these measures, which included time for programmers, registered health information technicians (chart abstractors), and management and oversight, as well as a portion of IT infrastructure (eg, hardware and software). We found that savings broke even with costs by the fourth year, by which point one Kaiser Permanente region had achieved an ongoing savings stream of approximately US$1 million annually.
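The breakeven logic can be sketched as follows. Only the approximately US$1 million annual savings figure comes from the text; the development and maintenance cost inputs below are hypothetical placeholders chosen so that cumulative net savings cross breakeven in the fourth year:

```python
# Illustrative breakeven sketch. Only the ~US$1M annual savings is reported
# in the text; the cost figures below are assumed placeholders.
development_cost = 2_000_000    # one-time automation build cost (assumed)
annual_maintenance = 400_000    # ongoing code/mapping upkeep (assumed)
annual_savings = 1_000_000      # reported ongoing savings stream

cumulative_net = 0
breakeven_year = None
for year in range(1, 11):
    # Each year yields savings net of maintenance; breakeven occurs when
    # the cumulative net recovers the one-time development cost.
    cumulative_net += annual_savings - annual_maintenance
    if breakeven_year is None and cumulative_net >= development_cost:
        breakeven_year = year

print(breakeven_year)  # → 4 under these assumed inputs
```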

Discussion

Our experience illustrates opportunities and challenges inherent in e-measures and gaps between the current and potential states of automated quality reporting. The five-step process we describe can serve as a template for the development of automated measures elsewhere, but it is unlikely to expedite the local development process. The time savings we observed highlight substantial opportunities for increased efficiency to augment the intrinsic benefits of performance reporting. Overhead costs of public quality reporting are significant. For example, Kaiser Permanente has reported on 50 well-established metrics for 15 years; the annual cost is approximately US$6.75 million, excluding expenses related to IT systems, storage, and oversight (unpublished data, Kaiser Permanente, 2010). We observed an approximate 50% reduction in abstraction time just for the partly automated surgical care improvement project measures; this time savings is likely to be broadly achievable. Partial abstraction saves time over a completely manual process, expanding the capacity of existing abstraction staff and allowing us to forego hiring additional abstractors despite an expanding number of quality measures. Each automated element has been rigorously tested and validated by subject matter experts and needs no manual review for confirmation. In addition, even in some instances when we cannot fully automate a field, we supply a ‘trigger location’ within the EHR for confirmation by manual review, changing abstraction from an intuitive search to a focused verification. The number of contemplated and required public reporting initiatives is growing exponentially. 
Automated quality reporting offers an opportunity to obtain the transparency and accountability benefits of increased reporting without adding to high overhead costs or depleting organizational quality improvement budgets.11 In addition, when quality measures are automated, some data are available as real-time organizational intelligence, expediting care improvement cycles. Automated reporting can also be extended to quality measures that are not uniformly publicly reported, for example, surgical site infections (SSI). The Centers for Medicare and Medicaid Services requires hospitals to report infections related to colon surgeries and abdominal hysterectomies to the National Healthcare Safety Network for eventual publication on the Centers for Medicare and Medicaid Services Hospital Compare website.12 13 Although state policies vary, the California Department of Public Health requires SSI reporting for 29 National Healthcare Safety Network-defined procedures, with data published online.14–16 An automated SSI reporting process reduced Kaiser Permanente's manual surveillance full-time equivalent effort by 80%, reflecting an aggregate savings of US$2 million (see supplementary appendix 2, available online only).

However, our experience also highlights the challenges of automating quality reporting. A recent report questions the accuracy of automated reporting across conditions.17 Achieving 100% accuracy required additional staff time to rectify discrepancies between methods; this time would probably be required in all settings. Although little has been reported on automated quality reporting across conditions, our experience is supported by existing evidence. A primary challenge is that EHR were not initially designed to calculate, compile, and report on quality measures.
Their core function is to capture, store, and track clinical data to support transactions.18 19 Consequently, much of the quality reporting supported by EHR is not fully automated.20 EHR meeting explicit stage 1 MU data capture requirements are estimated to provide approximately 35% of needed data.21 EHR that also include electronic physician notes and medication administration records may provide up to 65% of needed data.21 Similarly, across the measures reported here, an average of 61% of needed data was available as discrete elements. Our experience is likely to be generalizable to a large extent, as we rely on data that are available to providers during the course of an encounter; all certified EHR are likely to have these data. Some types of data lend themselves to discrete representation: vital signs, medications, and coded diagnoses and procedures, for instance. Other types of data are intrinsically more nuanced, such as discharge teaching. Our priority is to capture data that are a natural ‘exhaust’ from provider workflows, rather than asking physicians and nurses to interrupt natural care processes to complete a template or form.

Solutions to bridge the gap between the current and desired states of automated quality reporting are complex. One option, full automation, requires that all data elements are represented by standardized terminologies and codes within an EHR system and that the same standards are used locally and nationally.22 This is impractical. Many data elements that are difficult or impossible to automate are also essential for measure meaningfulness. For instance, an acute myocardial infarction measure relates to smoking cessation counseling, which is recorded narratively in progress notes or a teaching summary, precluding automation. If measures lack sufficient meaningfulness, physicians and other clinicians will have less incentive to drive operational and practice changes to improve performance on them.
Strong federal involvement and guidance would be required to achieve a highly coordinated approach to addressing gaps in automated quality measurement standards and processes;23 this may risk generating reporting requirements that inappropriately drive cumbersome clinical workflows. A collaborative approach is evidenced by numerous stakeholders at the national level, including the Agency for Healthcare Research and Quality and, as noted earlier, the National Quality Forum, working together and independently to advance e-measures.24 However, the continuing development of quality measures should not embrace automation at the cost of meaningful clinical detail or over-burdening clinician workflow.

Many questions remain. Even with improved standardization of terminologies and codes, EHR content, structure, and data format vary, as do local data capture and extraction procedures.25 Within a single institution, significant differences in denominators, numerators, and rates arise from different electronic data sources, and documentation habits of providers vary.26 Data entered into the EHR may not be interpreted or recognized, resulting in substantial numerator loss and underestimates of the delivery of clinical preventive services.27 EHR vendors can potentially support e-measures. However, organizations typically customize vendor-provided builds and workflows are local; data extraction will require local customization, too. Structured clinical data are often captured electronically through clinical reminders that are relatively insensitive to context and interfere with workflows, and adapting workflows to support documentation is short-sighted.28 29 For instance, data supporting the medication reconciliation measure for stage 2 MU can be found in medication actions, such as new, changed, discontinued, or adjusted medications or a refill request, from which review and reconciliation can be inferred.
A vendor-generated solution might be a check box for ‘medication reconciliation complete.’ This both adds an inefficient step to the workflow and creates the possibility of providers indicating that reconciliation occurred without conducting it. A final comment pertains to the difference between retooling quality measures that were designed for manual abstraction and developing quality measures for internal use de novo, with which Kaiser Permanente has robust experience. Given the level of exactitude required to retool manual measures for automation, de-novo development of quality measures can potentially be more straightforward. However, our experience is that the latter also incurs substantial development costs to capture variability in clinical workflows.
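The medication-reconciliation inference described above can be sketched as follows; the event structure and action vocabulary are hypothetical, not any specific vendor's data model:

```python
# Sketch: infer medication review/reconciliation from recorded medication
# actions rather than relying on a vendor-supplied "reconciliation complete"
# check box. The event schema and action names here are hypothetical.
RECONCILIATION_ACTIONS = {"new", "changed", "discontinued", "adjusted",
                          "refill_requested"}

def reconciliation_inferred(medication_events):
    """Return True if any medication action implying active review was
    recorded during the encounter."""
    return any(event["action"] in RECONCILIATION_ACTIONS
               for event in medication_events)

encounter_events = [
    {"medication": "drug A", "action": "changed"},
    {"medication": "drug B", "action": "viewed"},  # passive view: not sufficient
]
print(reconciliation_inferred(encounter_events))  # → True
```

Inferring reconciliation from actions already present in the workflow avoids both the extra documentation step and the possibility of a box being checked without the work being done.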

Conclusions

Kaiser Permanente has fully or partly automated six of 13 TJC core measure sets. Time savings from automation were substantial, but our experience illustrates the complex nature of this undertaking and the gap between the current and desired states of automated quality reporting. The goal of fully automating quality measurement may challenge the goals of supporting provider-driven efficient workflows and retaining the meaningfulness of quality measures.
References (14 in total)

1.  Kaiser Permanente's Convergent Medical Terminology.

Authors:  Robert H Dolin; John E Mattison; Simon Cohn; Keith E Campbell; Andrew M Wiesenthal; Brad Hochhalter; Diane LaBerge; Rita Barsoum; James Shalaby; Alan Abilla; Robert J Clements; Carol M Correia; Diane Esteva; John M Fedack; Bruce J Goldberg; Sridhar Gopalarao; Eza Hafeza; Peter Hendler; Enrique Hernandez; Ron Kamangar; Rafique A Kahn; Georgina Kurtovich; Gerry Lazzareschi; Moon H Lee; Tracy Lee; David Levy; Jonathan Y Lukoff; Cyndie Lundberg; Michael P Madden; Trongtu L Ngo; Ben T Nguyen; Nikhilkumar P Patel; Jim Resneck; David E Ross; Kathleen M Schwarz; Charles C Selhorst; Aaron Snyder; Mohamed I Umarji; Max Vilner; Roy Zer-Chen; Chris Zingo
Journal:  Stud Health Technol Inform       Date:  2004

2.  Variation in surgical site infection monitoring and reporting by state.

Authors:  Martin A Makary; Monica S Aswani; Andrew M Ibrahim; Julie Reagan; Elizabeth C Wick; Peter J Pronovost
Journal:  J Healthc Qual       Date:  2012-03-02       Impact factor: 1.095

3.  Escaping the EHR trap--the future of health IT.

Authors:  Kenneth D Mandl; Isaac S Kohane
Journal:  N Engl J Med       Date:  2012-06-14       Impact factor: 91.245

4.  The Kaiser Permanente Electronic Health Record: transforming and streamlining modalities of care.

Authors:  Catherine Chen; Terhilda Garrido; Don Chock; Grant Okawa; Louise Liang
Journal:  Health Aff (Millwood)       Date:  2009 Mar-Apr       Impact factor: 6.301

5.  If you build it, will they come? The Kaiser Permanente model of online health care.

Authors:  Anna-Lisa Silvestre; Valerie M Sue; Jill Y Allen
Journal:  Health Aff (Millwood)       Date:  2009 Mar-Apr       Impact factor: 6.301

6.  Development of automated quality reporting: aligning local efforts with national standards.

Authors:  Patricia C Dykes; Christine Caligtan; Andrew Novack; Debra Thomas; Linda Winfield; Gianna Zuccotti; Roberto A Rocha
Journal:  AMIA Annu Symp Proc       Date:  2010-11-13

7.  Review: electronic health records and the reliability and validity of quality measures: a review of the literature.

Authors:  Kitty S Chan; Jinnet B Fowles; Jonathan P Weiner
Journal:  Med Care Res Rev       Date:  2010-02-11       Impact factor: 3.929

8.  The impact of electronic medical records data sources on an adverse drug event quality measure.

Authors:  Michael G Kahn; Daksha Ranade
Journal:  J Am Med Inform Assoc       Date:  2010 Mar-Apr       Impact factor: 4.497

9.  The impact of emerging standards adoption on automated quality reporting.

Authors:  Paul C Fu; Daniel Rosenthal; Joshua M Pevnick; Floyd Eisenberg
Journal:  J Biomed Inform       Date:  2012-07-20       Impact factor: 6.317

10.  Validity of electronic health record-derived quality measurement for performance monitoring.

Authors:  Amanda Parsons; Colleen McCullough; Jason Wang; Sarah Shih
Journal:  J Am Med Inform Assoc       Date:  2012-01-16       Impact factor: 4.497

