
Evaluating the Reliability of EHR-Generated Clinical Outcomes Reports: A Case Study.

Chatrian Kanger1, Lisanne Brown1, Snigdha Mukherjee1, Haichang Xin2, Mark L Diana3, Anjum Khurshid1.   

Abstract

INTRODUCTION: Quality incentive programs, such as Meaningful Use, operate under the assumption that clinical quality measures can be reliably extracted from EHRs. Safety Net providers, particularly Federally Qualified Health Centers and Look-Alikes, tend to be high adopters of EHRs; however, recent reports have shown that only about 9% of FQHCs and Look-Alikes were demonstrating meaningful use as of 2013. Our experience working with the Crescent City Beacon Community (CCBC) found that many health centers relied on chart audits to report quality measures, rather than electronically generating reports directly from their EHRs, because they distrusted the data. This paper describes a step-by-step process for improving the reliability of data extracted from EHRs in order to produce trustworthy quality measure reports, to support quality improvement, and to achieve alignment with national clinical quality reporting requirements.
BACKGROUND: Lack of standardization in data capture and reporting within EHRs drives distrust in EHR-reported data. Practices or communities attempting to achieve standardization may look to CCBC's experience for guidance on where to start and the level of resources required in order to execute a data standardization project. During this data standardization project, CCBC was also launching an HIE, and lack of trust in EHR data fed distrust in the HIE's data.
METHODS: We present a case study where a five-step process was used to harmonize measures, reduce data errors, and increase trust in EHR clinical outcomes reports among a community of Safety Net providers using a common EHR. Primary outcomes were the incidence of reporting errors and the potential effect of error types on quality measure percentages. The activities and level of resources required to achieve these results were also documented by the CCBC program.
FINDINGS: Implementation of a community-wide data reporting project resulted in measure harmonization, reduced reporting burden, and error reduction in EHR-generated clinical outcomes reporting across participating clinics over a nine-month period. Increased accuracy of clinical outcomes reports provided physicians and clinical care teams with better information to guide their decision-making around quality improvement planning.
DISCUSSION: A number of challenges exist to achieving reliable population level quality reporting from EHRs at the practice, vendor, and community levels. Our experience demonstrates that quality measure reporting from EHRs is not a straightforward process, and it requires time and close collaboration between clinics and vendors to improve reliability of reports. Our experience found that practices valued the opportunity and step-wise process to validate their data locally (out of their EHRs) prior to reporting out of the HIE.
CONCLUSION AND NEXT STEPS: Communities can achieve higher levels of confidence in quality measure reporting at the population level by establishing collaborative user groups that work with EHR vendors as partners and use technical assistance to build relationships and trust in EHR-generated reports. While this paper describes the first phase of our work around improving standardization and reliability of EHR reports, vendors should continue to explore modifications for improving data capture (at the front-end) via standardized data entry templates.

Keywords:  data use and quality; health information technology; standardized data collection

Year:  2014        PMID: 25848626      PMCID: PMC4371440          DOI: 10.13063/2327-9214.1102

Source DB:  PubMed          Journal:  EGEMS (Wash DC)        ISSN: 2327-9214


Introduction

There has been an unprecedented effort to promote the adoption and meaningful use of electronic health records (EHRs) over the last five years. The Health Information Technology for Economic and Clinical Health (HITECH) Act, followed by passage of the Affordable Care Act, incentivized physicians and hospitals to adopt and meaningfully use EHRs, mostly through payments to be made by the Centers for Medicare and Medicaid Services (CMS).1 As of June 2014, CMS reported more than 24 billion dollars in incentive payments to providers.2 Meaningful use (MU), as defined by CMS, is a set of highly specific requirements that providers and hospitals must demonstrate through the use of certified EHRs in order to receive incentive payments. The requirements are split into three stages: Stage 1 focuses primarily on standardizing data capture to facilitate exchange of health information, Stage 2 focuses on advancing clinical processes through health information exchange (HIE), and Stage 3 focuses on using comprehensive data available via HIE to improve patient outcomes.3 Each stage is dependent upon the provider’s or hospital’s ability to capture and report standard national clinical quality metrics (eCQMs) using an EHR. Increasingly, provider reimbursements are being tied to performance such as achieving benchmarked satisfactory levels of quality and requiring reporting of national quality metrics. Further, HITECH and the Affordable Care Act have spurred additional programs, such as Accountable Care Organizations and other payment reform and population health benchmarking initiatives, which are all driving market demand for vendors to standardize data capture and reporting of eCQMs from EHR products. These approaches to reimbursement, based upon performance and measurement, assume that quality metrics can be extracted accurately and reliably from EHRs. 
While EHR certification programs require that vendors demonstrate capabilities to generate quality measures electronically through their systems, they address the accuracy of measure logic, reliability, and ease of measure reporting only to a limited degree. In fact, several recent studies have questioned the validity and reliability of electronically reported clinical quality measures, citing inadequate translation of measure logic into EHR reporting systems, incomplete and variable data capture, and inaccurate reports.4,5 Simply having a certified EHR, which is required to meet all stages of the MU requirements, does not guarantee that quality metrics can be reliably extracted.4–7 Safety net providers tend to be high adopters of EHRs largely due to the availability of support and resources through the Health Resources & Services Administration (HRSA) and Regional Extension Centers. One recent study found that 80 percent of federally funded health centers were using EHRs.8 But despite their support and participation with Regional Extension Centers, the Office of the National Coordinator (ONC) for Health Information Technology (HIT) reports that only 9 percent of Federally Qualified Health Centers (FQHCs) and look-alikes were demonstrating MU as of 2013.2 Our experience working with community clinics in a safety net setting in Louisiana found that our health centers' greatest barrier to meeting MU objectives was "meeting performance thresholds for clinical quality measures" due to inadequate data capture within the EHR and lack of proficiency in data extraction and reporting from their EHRs.9 As a result, practices often relied upon chart audits as their primary method for generating quality measures for reporting, rather than electronic reporting from their EHRs, because they did not trust their EHR-generated reports. These factors led us to question the assumption that quality metrics can be easily and reliably extracted from EHR systems.
Moreover, we determined that if physicians did not trust the data reported in their EHRs, they would not trust data reported out of an HIE. The goal of this paper is to describe the implementation of a data standardization process developed by the Crescent City Beacon Community (CCBC) for improving measure harmonization, data quality and trustworthiness, and reliability of data extracted from community clinic EHRs to increase the reliability of data reports, to support quality improvement (QI), and to achieve alignment with national clinical quality reporting requirements. A secondary objective is to describe the activities and level of resources needed in order to execute this type of effort based upon the CCBC experience.

Background

New Orleans is one of 17 communities awarded a Beacon Community grant from the ONC for HIT. The Louisiana Public Health Institute (LPHI) convened the CCBC in Orleans and Jefferson parishes in partnership with community clinics and hospitals, and led the evaluation of the program. LPHI’s evaluation role was to collect, aggregate, and report on clinical quality and outcome measures to demonstrate improved quality of care and population health through adoption of HIT. A primary focus of the CCBC was to increase participating community health centers’ capabilities to join the Greater New Orleans Health Information Exchange (GNOHIE). Improving the quality of the data feeding into the GNOHIE from the local health centers’ EHRs was a critical step. Eligible practices included the 17 community health centers receiving EHR-related technical assistance through CCBC, which comprised the majority of the safety net population in New Orleans and represented 100 percent of the total population contained in the GNOHIE at that time. Area hospitals were not directly engaged in CCBC’s technical assistance program and are not included in this project. Table 1 presents characteristics of the clinics involved in the project.
Table 1.

Practice Characteristics at Start of CCBC Data Standardization Process: March 2011

Practice Characteristics (Frequency, N = 17)

Practice Type
  Federally Qualified Health Center (FQHC) or Look-Alike: 13
  Academic/Institution Affiliated: 2
  Special Population (e.g., HIV/AIDS): 1
  Private, Nonprofit: 1

Practice Size (number of physicians and nurse practitioners)
  1–2: 5
  3–10: 10
  11–20: 2

EHR Types
  SuccessEHS: 13
    (# using Practice Management + EHR): 12*
    (# using Practice Management only): 1
  Allscripts: 3
    (# using Practice Management + EHR): 1
    (# using Practice Management only): 1
  Aprima: 1*
  No EHR system: 1

Length of Time Using EHR
  < 1 year: 2
  1–2 years: 1
  2–5 years: 7
  5+ years: 4
  EHR not implemented yet: 3

Note:

The CCBC data standardization process focused on the 12 SuccessEHS EHR users and the 1 Aprima site, which was included because Aprima uses Crystal Reports-based software, which is similar to SuccessEHS’s Business Objects reporting software.

Methods

Over a nine-month period, CCBC implemented a five-step data standardization process in 13 clinics that were using the same EHR system, SuccessEHS, with the intent to modify the process for clinics on other EHR systems. A schematic of our process is shown in Figure 1.
Figure 1.

CCBC Data Standardization Process

Step 1: Measure Selection and Review

Prior to participating in the CCBC program, clinics used a variety of measures for monitoring quality of care for patients. The measures were based upon the HRSA Disparities Collaborative reports for diabetes, clinic-defined cardiovascular disease (CVD) measures, or canned reports generated by their EHR system; but data managers and clinicians were often unsure of the measure specifications. In March 2011, the CCBC evaluation team at LPHI worked with a broad stakeholder group, including representatives from each clinic, to reach consensus on a common set of communitywide measures based on nationally standardized measures endorsed by groups such as the National Committee for Quality Assurance (NCQA)’s Diabetes and Heart/Stroke Recognition programs,10 the National Quality Forum (NQF), and CMS. Over a three-month period, the community arrived at a consensus on clinical quality and outcome metrics for diabetes mellitus (DM) and CVD (see Tables 2a and 2b).
Table 2a.

CCBC Diabetes Mellitus (DM) Measures11

DM measures listed below adhere to the following parameters: patients ages 18 to 75 with diabetes (type 1 or type 2) and having had at least two face-to-face visits during the 12-month measurement period.
Measure | Measure Description | NQMC ID | NCQA Benchmark
HbA1c Testing | % who received at least one HbA1c test | 007087 | ≥85%
HbA1c Poor Control (>9.0) | % whose most recent HbA1c was greater than 9.0% | 007088 | ≤15%
HbA1c Control (<8.0) | % whose most recent HbA1c was <8.0% | 007089 | ≥65%
HbA1c Control (<7.0) | % whose most recent HbA1c was <7.0% | 007090 | ≥40%
BP <140/90 | % whose most recent blood pressure reading was <140/90 mmHg | 007096 | >65%
BP <130/80 | % whose most recent blood pressure reading was <130/80 mmHg | 001598 | ≥25%
LDL-C Testing | % who had an LDL-C screening performed | 007092 | ≥85%
LDL-C <100 | % whose most recent LDL-C value was <100 mg/dL | 007093 | ≥50%
Nephropathy | % who received a nephropathy screening test or had documented evidence of nephropathy | 007094 | ≥85%
Foot Exam | % who received a foot exam | 001603 | ≥60%
Eye Exam | % who received a dilated retinal eye exam | 007091 | ≥80%
Table 2b.

CCBC Cardiovascular Disease (CVD) Measures11

CVD measures listed below adhere to the following parameters: patients ages 18 and older with CVD and having at least one face-to-face visit during the 12-month measurement period.
Measure | Measure Description | PQRI ID | NCQA Benchmark
HTN BP <140/90 | % of patients with hypertension whose most recent BP was <140/90 | PQRI 237 | ≥75%
IVD BP <140/90 | % of patients with ischemic vascular disease whose most recent BP was <140/90 | PQRI 201 | ≥75%
IVD Lipid Profile | % of patients with ischemic vascular disease with a lipid profile | PQRI 202 | ≥85%
IVD Aspirin | % of patients with ischemic vascular disease with documentation of aspirin or another antithrombotic | PQRI 6 | ≥80%
IVD Lipid Therapy | % of patients with coronary artery disease who were prescribed lipid-lowering therapy | PQRI 197 | Not applicable
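To illustrate how a specification such as "HbA1c Poor Control (>9.0)" translates into a calculation, the sketch below applies the parameters from Table 2a (ages 18 to 75, at least two visits) and counts untested patients toward the numerator, as the national specification requires. The record fields are illustrative assumptions, not the CCBC or vendor schema.

```python
# Hypothetical patient records; field names are illustrative,
# not the CCBC or SuccessEHS schema.
patients = [
    # age, DM visits in the measurement period, most recent HbA1c (None = never tested)
    {"age": 54, "dm_visits": 3, "last_hba1c": 7.2},
    {"age": 61, "dm_visits": 2, "last_hba1c": 9.8},
    {"age": 45, "dm_visits": 2, "last_hba1c": None},  # untested: counts as poor control
    {"age": 80, "dm_visits": 4, "last_hba1c": 9.5},   # age > 75: excluded from denominator
    {"age": 37, "dm_visits": 1, "last_hba1c": 6.9},   # fewer than 2 visits: excluded
]

def hba1c_poor_control(patients):
    """% of eligible DM patients whose most recent HbA1c was >9.0,
    counting patients who received no test at all toward the numerator."""
    denominator = [p for p in patients
                   if 18 <= p["age"] <= 75 and p["dm_visits"] >= 2]
    numerator = [p for p in denominator
                 if p["last_hba1c"] is None or p["last_hba1c"] > 9.0]
    return len(numerator), len(denominator), 100.0 * len(numerator) / len(denominator)

num, den, pct = hba1c_poor_control(patients)
print(f"{num}/{den} = {pct:.1f}%")  # 2/3 = 66.7%
```

Note that a naive reading of the measure title ("poor control") would miss the untested patient in the numerator, one of the common error types documented later in this paper.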

Step 2: Assess Data Reporting Capabilities and Needs

In May 2011, the CCBC conducted a pilot study to assess the feasibility of implementing the community’s agreed-upon chronic-care management interventions. A key focus of this study was to determine practices’ capacity to report on CCBC’s clinical quality and outcome metrics. Interviews were conducted over a two-week period with representatives, such as a data or HIT manager, or a QI manager for each practice, to accomplish the following: to identify the appropriate individual responsible for data reporting, to build an ongoing trusted relationship, to introduce the CCBC clinical quality and outcomes measures, to gauge proficiency with the EHR reporting modules, and to identify challenges to reporting these measures.

Step 3: Tailor Approach Based Upon Findings

Once data reporting needs were identified, the CCBC implemented the following approach to harmonize measures and data reporting for eCQMs.

Identification of data coordinators at each practice

Identifying the data coordinator (DC) or most appropriate individual for data reporting at each clinic site was essential for establishing direct communication and to avoid a “trickle down” of reporting instructions and measure specifications.

Phase in of measures

CCBC phased in eCQM reporting—DM first, followed by CVD—based upon recommendations from the DCs, given that DCs and health centers were more familiar with the DM measures and felt these would be easier to report on, whereas CVD measures were perceived to be more complex to capture within the EHRs.

Measure reference sheets

The CCBC team developed a reporting manual for DCs that contained measure reference sheets outlining the measure definitions, measure sources, technical specifications (e.g., International Classification of Diseases (ICD) and Current Procedural Terminology (CPT) codes), inclusion and exclusion criteria, and numerator and denominator calculations. The manual also included FAQs, tips, and quick reference guides containing the corresponding NCQA benchmark for comparison.

Standardized reporting templates in the EHR

The CCBC collaborated with the EHR vendor over a three-month period (May to August 2011) to build standardized reporting templates for our eCQMs using the measure reference sheets. The standard templates allowed DCs to generate clinical quality and outcomes metrics simply by entering the desired reporting start and end dates, with minimal customization required.
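A standardized template of this kind behaves like a parameterized query in which the reporting window is the only input the data coordinator supplies. A minimal sketch of that idea follows, using an illustrative SQLite schema; the actual vendor templates were Business Objects reports, not SQL written by the clinics.

```python
import sqlite3

# Illustrative lab-results table (assumed schema, not the vendor's).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hba1c_results (patient_id INTEGER, test_date TEXT, value REAL);
INSERT INTO hba1c_results VALUES
  (1, '2011-04-10', 7.1),
  (1, '2011-08-02', 6.8),
  (2, '2011-05-15', 9.6),
  (3, '2010-12-01', 8.0);
""")

def hba1c_testing_count(conn, start, end):
    # The reporting window is the only user-supplied input;
    # the rest of the query is fixed by the template.
    row = conn.execute(
        "SELECT COUNT(DISTINCT patient_id) FROM hba1c_results "
        "WHERE test_date BETWEEN ? AND ?",
        (start, end),
    ).fetchone()
    return row[0]

print(hba1c_testing_count(conn, "2011-01-01", "2011-12-31"))  # 2
```

Fixing everything except the date parameters is what removed the per-site customization that had previously produced divergent measure results.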

Training workshops

CCBC conducted quarterly data workshops (March, May, August, and November 2011) to facilitate peer-to-peer learning, to share tips, and—at times—to participate in learning sessions with the vendor's report expert to demonstrate report-query generation (e.g., "where the data is being pulled from within the EHR"), modification and validation of practice reports, and verification of codes used in queries.

Step 4: Conduct Quality Checks

The CCBC conducted quality checks using measurement validation and chart audits. At the time of this data standardization intervention, data sharing agreements were not in effect among the CCBC partners; therefore patient-encounter level data were not available to the CCBC team for direct measure calculations. Reports containing values for numerators, denominators, and percentages for each DM measure by site were submitted to the CCBC team on a quarterly basis from March 2011 through December 2011, totaling four data-reporting periods: March, June, September, and December. During the first two quarters the clinics received technical assistance through data workshops and measure reference sheets. September 2011 marked the deployment of vendor-developed, standardized-reporting templates augmented with the data workshops and instructional materials. The last two quarters, in which we were able to examine the impact of vendor engagement, are the focus of this study; 13 clinics reported on 11 DM measures. To assess whether the sites were following the specifications for each measure, the CCBC team independently calculated the measures. The data set included 156 observations (12 factors × 13 clinics) for each period. Clinics could also self-report data-reporting challenges and issues, which were incorporated into the data set. A data error was assigned if (1) data reported did not adhere to national measure technical specifications or parameters (i.e., a misspecification of the eligible patient population or miscalculation of numerators), or (2) no data could be reported for a measure (e.g., due to not having a report or query built to extract the measure values). Data errors were further classified according to error types. A data error proportion was created for each period by dividing the total number of actual errors by the total number of possible errors.
The difference between the data error proportions for the last two quarters was examined using two-sample t-tests.
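The error-proportion metric and the quarter-to-quarter comparison can be sketched in a few lines of Python. The per-clinic proportions below are made-up illustrations, not the study data, and the pooled-variance t statistic is one standard form of the two-sample t-test the paper names.

```python
import math

def error_proportion(actual_errors, possible_errors):
    # Share of possible reporting errors that actually occurred in a period.
    return actual_errors / possible_errors

def two_sample_t(x, y):
    """Two-sample t statistic with pooled variance (equal variances assumed)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)   # sample variance of y
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Hypothetical per-clinic error proportions for two consecutive quarters
# (illustrative values only, not the CCBC data).
q3 = [0.42, 0.33, 0.25, 0.36, 0.33]
q4 = [0.17, 0.08, 0.17, 0.12, 0.14]
print(round(two_sample_t(q3, q4), 2))
```

In practice a library routine (e.g., SciPy's `ttest_ind`) would be used instead of a hand-rolled statistic; the point here is only to make the calculation concrete.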

Step 5: Provide Rapid Performance Feedback

Members of the CCBC team were available in the weeks leading up to and immediately following each data submission to troubleshoot and provide rapid feedback regarding data errors to both practices and to the vendor. Health centers were also provided with trend graphs and bar charts each quarter to monitor their performance on each eCQM over time.

Findings

CCBC implemented a five-step data standardization process with the intent to improve the reliability of data extracted from EHRs to support QI by harmonizing measures across participating community clinics, by bringing clinics into alignment with national eCQM requirements, and by increasing trust in EHR-generated eCQM reports. In addition to achieving these goals, CCBC reduced the reporting burden among participating clinics and identified common data error types, their effects on apparent measure performance, and the resources necessary for a community to implement a data standardization process similar to CCBC's. Given that CCBC measures were phased in over time, only findings from the DM measures are reported in this paper.

Measure Harmonization

We found that prior to implementation of the CCBC data standardization process, practices were following similar measures for monitoring their quality of care for DM and CVD patients, but the DCs were often unsure of the specifications of each measure and the ramifications of modifying or customizing the measure settings within their EHRs. The CCBC’s thorough examination of measures with a broad stakeholder group resulted in all of the CCBC practices adopting the same set of standardized measures for DM and CVD care (see Tables 2a and 2b).

Reduced Reporting Burden

Prior to the CCBC data standardization process, 8 of the 13 (62 percent) CCBC practices were generating clinical quality measures (CQMs) via chart audit or a combination of electronic reporting with manual chart audit. DCs reported that the time it took them to generate measures for quarterly reporting ranged from “a couple of days” to “an entire week” (depending upon the number of sites for which they were responsible). After deployment of standardized electronic reporting templates via CCBC, 100 percent of practices were generating CQMs via electronic reports. DCs reported that the time it took them to generate their reports directly from their EHRs decreased to “a few hours” to “one day” per quarter (again, depending upon the number of sites for which they were responsible).

Reduced Data Errors

As shown in Figure 2, a significant reduction in the proportion of data errors reported was found between September and December 2011, from 33.8 percent to 13.5 percent once the vendor-developed, standardized-reporting template was implemented. This decrease in data errors resulted in a concurrent increase in data reporting reliability and consistency with national clinical quality and outcomes measures. Overall reporting improvements were found to be present across all 13 CCBC clinics, and data errors decreased by more than 50 percent in 11 of 13 practices between March and December 2011, a result of overall participation in the CCBC data standardization process.
Figure 2.

Mean Data Error Proportions for Diabetes Mellitus Measures among CCBC Clinics over Time

Common Error Types and Effects on Diabetes Measure Percentages

Review of the quarterly reports revealed common error types and measure miscalculations that resulted in CCBC clinics resubmitting their quarterly data to reflect improved data reporting over time. Tables 3 and 4 highlight the common error types found in the DM measure reports and the effects that each particular error type had on the measure percentage level.
Table 3.

Typology of Common Errors and Their Effects on Diabetes Measure Percentages

Error Type: Incorrect visit count parameters
Effect on Measure %: Lower visit count = lower %; higher visit count = higher %
Explanation: National measure specifications require a patient with a diagnosis of DM to have had at least two visits during the measurement period. In this study, it was common for practices to use a higher visit count, which resulted in misleading measure percentage levels. Likewise, restricting measure parameters to patients who had only one visit would likely decrease measure percentage levels.

Error Type: Use of nonstandardized or highly customized Order/CPT codes
Effect on Measure %: Lower %
Explanation: National measure specifications follow standard coding systems such as ICD-9 and CPT, which form the basis for inclusion and exclusion criteria in the EHR. In this study it was common for practices to have created customized codes without informing the DCs so that report queries could be modified, resulting in lower measure percentage levels.

Error Type: Nonstructured lab data fields
Effect on Measure %: Lower %
Explanation: Some systems default to nonstructured lab data fields if not set up properly during lab interface development into an EHR. This was found to be a common mistake, resulting in unextractable lab values that form the numerator criteria for many of the DM measures.

Error Type: Practice management configurations for uninsured or nonbillable visits
Effect on Measure %: Lower %
Explanation: In the EHR system that was common to most of the CCBC practices, any report that was tied to a "location" in the data query had to be associated with a financial group. Since some of the CCBC practices provide services to uninsured patients whose visits are not associated with a financial group, those sites experienced a lower percentage level in their DM measures. Another common challenge was that billing staff did not inform DCs when changes were made to financial groups, not realizing their impact on quality and outcomes reporting.

Error Type: Numerator miscalculation (inclusion criteria)
Effect on Measure %: Lower %
Explanation: National measures tend to have misleading titles or labels that often lead practitioners to believe that the only populations included in the measures are those mentioned in the title. For example, "HbA1c poor control" leads practitioners to count only patients with HbA1c >9.0 (poorly controlled patients), when in fact the measure also includes individuals who did not receive an HbA1c test during the measurement period. Additionally, practitioners may confuse measure requirements with clinical guidelines. For example, blood pressure measurement values must meet criteria for both systolic and diastolic limits, whereas only one value may need to be satisfied for clinical diagnosis; foot exam measures include monofilament/visual/sensory exams performed by a primary care doctor, not just visits to a specialist or podiatrist.

Error Type: Denominator miscalculation for ALL HbA1c values
Effect on Measure %: Higher %
Explanation: National measures specify that the denominator for all HbA1c measure calculations is the total number of patients with a diagnosis of DM (type 1 or type 2) during the reporting period. Once the % of patients with DM who received HbA1c testing was calculated, it was determined that all practices had mistakenly used that value as the denominator to calculate the remainder of their DM measures, such as HbA1c poor control and control; this results in a higher percentage for those measures because it excludes the patients who did not receive HbA1c testing.
Table 4.

Before and After Error Impacts on Apparent Measure Performance

Measure | Error Type | Impact on Apparent Measure Performance (% Difference Before and After)
HbA1c Control (<8.0) | Denominator miscalculation | 7.4% ± 5.6% SD, t = 4.03, p = 0.004
HbA1c Poor Control | Numerator and denominator miscalculation | 9.7% ± 6.8% SD, t = 4.26, p = 0.003
We examined the extent to which two of the most common data error types affected apparent measure performance for “HbA1c control (<8.0)” and “HbA1c poor control.” Percentage differences for measures reported before and after data error identification were calculated and compared as shown in Table 4. An error in denominator calculation for HbA1c control (<8.0) accounted for an average difference of 7.4 percent ± 5.6 percent (SD) in before and after measure percentages. In calculating HbA1c poor control, for 9 of 11 clinics we found both numerator and denominator miscalculations, resulting in an average difference of 9.7 percent ± 6.8 percent (SD) in before and after measure percentages. In addition, because the measure calculation for HbA1c poor control included patients who did not receive HbA1c testing, the relationship between the testing rate for HbA1c and the percentage change in HbA1c poor control was tested. The results indicated that higher testing rates for HbA1c resulted in a lower percentage change (smaller effect) in HbA1c poor control levels. Conversely, lower testing rates for HbA1c resulted in a greater percentage change (more effect) on HbA1c poor control levels (p < 0.01 for both results).
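The denominator miscalculation quantified in Table 4 is easy to reproduce with hypothetical counts (the numbers below are illustrative, not the study's):

```python
# Hypothetical clinic: 200 DM patients, 170 received an HbA1c test,
# and 120 of the tested patients had a most recent value < 8.0.
dm_patients = 200
tested = 170
controlled = 120

correct = 100 * controlled / dm_patients  # spec: all DM patients in the denominator
mistaken = 100 * controlled / tested      # common error: tested patients only

print(correct, round(mistaken, 1))  # 60.0 70.6
```

The gap between the two values shrinks as the testing rate approaches 100 percent, which is consistent with the relationship between HbA1c testing rates and percentage change reported above.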

Resources Required

Table 5 provides insight into the amount of resources that CCBC expended over a 12-month period in order to operationalize this data reporting project. While significant time and resources were expended in the first year in order to reach communitywide consensus on measures and to develop measure tools, reports, and procedures, CCBC anticipates that less time would be required to replicate this data reporting project with other clinics and other EHR vendors since the methodology and tools to achieve standardization have been developed—as long as other EHR vendors are responsive to addressing eCQM reporting needs (the time-limiting factor).
Table 5.

Resources Necessary to Implement CCBC’s Data Standardization Process

Role | Core Activities | Amount of Effort

Practice-Level Resources Required
Leadership | Engage and allocate personnel time | In-kind
Data or QI Manager | Participate in orientation; 1 site-visit interview; data workshops; execute and troubleshoot reports using the EHR system | 12–40 hrs per quarter (depending on the number of practices responsible for, experience and training on the system, etc.)
Medical Director/Nurse Supervisor/QI Director | Validate data reports | 2 hrs per quarter

Community-Level Resources Required
Leadership | Prioritize data standardization and allocate resources | In-kind
Data/Evaluation Manager | Develop measure sets; provide technical assistance to practice DCs; coordinate with EHR vendor to build reports and instructional materials | 0.75 FTE
Data Coordinator | Compile clinic data submissions and generate performance feedback reports, graphs, and charts | 0.25 FTE
QI Manager | Provide clinical guidance re: measure sets and EHR system layout | 0.20 FTE
Community User Group (consisting of Medical and QI Directors, research experts, clinicians) | Vet measures; provide recommendations; set priorities for clinical measures (on an as-needed basis) | Meet once per month (2 hours)
QI/Data Intern | Develop measure reference documentation for educational materials | 1.0 FTE

EHR Vendor-Level Resources Required
Reporting/Database Design Expert | Build and pretest standardized data reporting templates within EHRs; co-host data workshops; troubleshoot as needed; supply screenshots with user instructions for report templates | 40 hrs total
It is important to keep in mind that CCBC's data standardization process focused mainly on creating a communitywide learning collaborative around measure selection, standardization, education, and performance feedback to identify opportunities to improve data reporting out of EHRs. Vendor engagement was critical for success but consumed only a minimal proportion of the total time spent implementing this data standardization process, because there was a high level of buy-in and cooperation from the vendor for this Beacon Community Program work, which would also benefit their other customers nationwide. Note that this effort was expended for a limited number of eCQMs, clinics, and one EHR vendor. The level of effort required to succeed in a larger project (which may include many EHR vendors and various eligible providers) cannot be calculated by simply multiplying the reported full-time equivalents (FTEs); it must take into account variables such as the community's level of experience and proficiency with an EHR vendor, its prior history and relationships with community participants and the EHR vendor, and its familiarity with eCQMs. CCBC possessed a high level of engagement, experience, and proficiency with the primary EHR vendor in the community and had a history of successfully collaborating on innovative projects with the EHR vendor.

Discussion

The Beacon Communities have generated important lessons around capturing high quality EHR data to support performance improvement, such as measure selection and data element mapping; vendor engagement strategies; and implementing EHR data quality-improvement activities.12 For example, building upon previously known strategies to improve implementation of QI components, the Colorado Beacon Community recently reported that their providers and staff most valued community-based practice facilitation; collaborative peer-to-peer teaching and learning; and accessible, local experts with EHR technical and data extraction and use expertise in their journey to meaningfully utilize electronic clinical data.13 Our study findings support these previous findings: error proportions declined as practices delegated data expertise to a local data intermediary (CCBC) to champion efforts such as measure harmonization, education, and performance feedback. CCBC also leveraged the community practices' relationships with their EHR vendors as a group to meet reporting needs by facilitating a local learning collaborative in partnership with the vendor for technical assistance and to support standardized report creation for data extraction in their EHRs. But what we have not seen to date is a practical, detailed, step-wise process for how a community with limited resources can begin to improve their EHR-generated clinical quality reports. This paper presents CCBC's experience in implementing a data standardization process and delineates the steps taken, the time required, and the resources utilized, as well as the results of the project. CCBC's experience can provide other communities with an outline for where to start, how to tailor an approach, what resources are needed, and what types of results can be expected.
At the practice level, identifying appropriate data coordinators ensured standardized communication of instructions, measure specifications, and complex measurement vocabularies. Measures were phased in so as not to overburden practices unfamiliar with electronic reporting of quality measures, and to allow practices to start with the measures they felt most comfortable and confident reporting, creating buy-in to the overall project. Measure reference sheets ensured standardization of measure specifications and lab and procedural codes, which led participating practices to dedicate time and resources to “cleaning up” their overly customized EHRs (for example, identifying and removing customized ICD or CPT codes that were not being captured in quality measure reporting). Likewise, vendor-developed, standardized report templates reduced variation in measure interpretation and in mapping measure components to EHR system fields, which led to vendor education of practices on appropriate data entry protocols. Lastly, training workshops combined with rapid performance feedback allowed physicians and staff to identify documentation issues and cross-fertilize solutions to resolve them. These steps linked what CCBC was asking physicians and staff to do (such as modifying workflows to accommodate data capture) to actual patient care (such as having a more accurate count of their patients with DM, or identifying which labs their providers were ordering). The data standardization process thus produced more accurate and reliable EHR-generated clinical quality reports, giving physicians and clinical care teams better information to guide their decision-making around QI planning.
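As a hypothetical illustration of the kind of EHR “clean-up” that the measure reference sheets prompted, a practice could cross-check the diagnosis codes actually stored in its EHR against a measure’s published code list to flag customized local codes that quality-measure logic silently ignores. The value set and code list below are invented for illustration and are not CCBC’s actual tooling or an official measure specification.

```python
# Illustrative check: flag locally customized diagnosis codes that fall
# outside a measure's code list and therefore drop out of eCQM reports.
# MEASURE_VALUE_SET is a made-up subset of diabetes ICD-10-CM codes,
# standing in for the full code list a real measure spec enumerates.

MEASURE_VALUE_SET = {"E11.9", "E11.65", "E10.9"}

ehr_problem_list_codes = [
    "E11.9",        # standard code: counted in the denominator
    "E11.65",       # standard code: counted
    "DM-CUSTOM-1",  # customized local code: silently excluded from reports
    "E10.9",        # standard code: counted
    "250.00x",      # modified legacy ICD-9 variant: silently excluded
]

def find_unmapped_codes(local_codes, value_set):
    """Return codes recorded in the EHR that the measure logic will not count."""
    return [code for code in local_codes if code not in value_set]

unmapped = find_unmapped_codes(ehr_problem_list_codes, MEASURE_VALUE_SET)
print(unmapped)  # patients carrying only these codes vanish from the measure
```

A report like this makes the stakes concrete for clinical staff: a patient whose diabetes is documented only under a customized code simply disappears from the denominator, understating the practice’s true patient panel.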
For instance, by the end of the project, medical directors did not hesitate to declare that they felt “comfortable with these [eCQM] numbers.” Other practitioners reported that the performance feedback generated by CCBC “triggered more concentrated efforts by the providers to meet [QI] goals.” The reporting burden also diminished greatly with the introduction of vendor-developed, standardized reporting templates within the EHR. In the context of national initiatives to advance EHR adoption and facilitate HIE, it is essential to understand the common barriers and facilitators to reporting clinical quality and outcome data. The work of the CCBC can inform the work and policies of organizations and policymakers supporting these changes. In partnership with ONC’s Beacon Community Program, the CCBC sought to build the electronic infrastructure to improve reporting of clinical metrics and to develop communitywide analytics, research, and QI capability through an HIE, the Greater New Orleans Health Information Exchange (GNOHIE), consisting primarily of a group of safety net clinics whose previous experience reporting quality metrics was mostly through chart audits. However, CCBC learned that practices required a certain degree of prework around standardization of data capture and reporting within their EHRs before joining the GNOHIE. All communities face challenges in aggregating data from multiple sites and ensuring reliability for comparison purposes. Other studies have documented the challenges a community data coordinator faces with regard to quality control, data cleaning, reliability, and validity checks.14 It is challenging for communities working with multiple vendors to translate national quality metrics in a way that can be reliably compared for performance or QI activities, and many communities may not yet belong to HIEs or may lack data sharing agreements altogether.
However, our experience revealed that even in the absence of data sharing agreements, steps can be taken to achieve data reporting reliability across multiple sites and partner agencies. There is an obvious benefit to a central data repository that allows quality measures to be calculated from patient-encounter-level data rather than from aggregated numerators and denominators reported by multiple sites. Yet it is just as important that practices be able to verify their data locally before integrating them into a central database.14 The CCBC anticipates that the steps taken toward data standardization will help practices implement data entry and documentation practices using recommended EHR fields, which will facilitate the exchange of health information through the GNOHIE and also help these practices achieve meaningful use (MU). This experience from the CCBC can serve as a roadmap for other communities attempting to meet the definition of MU, which requires reporting of eCQMs and, ultimately, the exchange of health information. The CCBC relied on a collaborative approach to overcome challenges experienced at the practice, vendor, and community levels. Table 6 summarizes the challenges and solutions employed in improving the reliability of EHR-generated clinical outcomes reports.
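The trade-off described above can be sketched in a few lines: when each site reports only an aggregated numerator and denominator, the community rate is the ratio of the sums, which matches a patient-level calculation only if every site applied the measure logic identically. The site names and figures below are invented for illustration.

```python
# Community-level eCQM rate from aggregated site reports, requiring no
# patient-level data sharing agreement: sum numerators and denominators
# across sites, then divide. Figures are invented for illustration.

site_reports = {
    "Clinic A": {"numerator": 120, "denominator": 200},
    "Clinic B": {"numerator": 45,  "denominator": 90},
    "Clinic C": {"numerator": 30,  "denominator": 60},
}

def community_rate(reports):
    """Pooled quality-measure percentage from site-level numerators/denominators."""
    num = sum(r["numerator"] for r in reports.values())
    den = sum(r["denominator"] for r in reports.values())
    return 100.0 * num / den

print(f"{community_rate(site_reports):.1f}%")  # pooled rate across all sites
```

Note that the pooled rate is not the unweighted mean of the individual site rates, which is one reason standardized denominator logic across sites matters: inconsistent inclusion criteria at even one site skew the community figure in ways aggregated reporting cannot reveal.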
Table 6.

Challenges and Solutions to Improving the Reliability of EHR-Generated Clinical Outcomes Reports

Challenge → Solution

Practice Level
• Lack of experienced, dedicated data coordinators → Engage the EHR vendor via a data intermediary
• Limited proficiency with EHRs and limited familiarity with eCQMs and eCQM vocabulary → Form user groups for peer-to-peer learning and group trainings with HIT and data extraction experts
• Provider distrust in EHR eCQM data → Provide measure reference sheets, data workshops, and webinars to create transparency in measure generation

Vendor Level
• Translating measure specifications into system fields → Map measure specifications to specific fields and codes in a spreadsheet with the vendor
• System upgrades and users on different versions of the EHR system → Update and create multiple versions of standard report templates

Community Level
• Lack of standardization in reporting and inconsistent definitions → Create standardized report templates with the EHR vendor; develop measure reference sheets and manuals
• Lack of data sharing agreements → Report only measure numerators and denominators
• Staying abreast of measure updates given the vast array of measure sets → Establish measure consensus among community partners
• Ensuring usability and relevance of data reports → Motivate via performance feedback reports and charts; benchmarking

Conclusions

The introduction of EHRs into clinical practice is intended to improve patient outcomes by supporting providers’ decision-making through care coordination, timely reminders, and alerts. These functions require accurate and reliable information that enables clinicians to manage their patient populations through technology-enabled features such as automated reporting against nationally accepted quality measures. CCBC’s data standardization process provides one example of a simple but effective process that communities and practices can use to enable clinicians to utilize EHRs as tools for systematic quality reporting and improvement. EHRs may allow real-time integration of data from multiple sources for improved decision-making, but they are also complex and create new opportunities for error.15 Our experience showed that evaluating the reliability of EHR-generated clinical outcomes reports, which draw on a subset of EHR data elements such as lab values, visit dates, and patient demographics, served as a pilot for the quality of all of the data elements feeding into our HIE, as well as for the level of trust the community would place in performance reports generated from the HIE’s community data repository.

Limitations

Our study has several limitations. First, the number of clinics in our study is small (n = 13), but we believe that the information derived from the experiences of these 13 clinics is useful and informative. Second, longer exposure to the CCBC measures may have contributed in part to the reduction in the number of errors by the last two reporting periods. Third, the typology of errors identified and described in this study is drawn from clinics primarily using the same EHR vendor; further research will be needed to investigate error types among clinics using other EHR vendors. Additionally, this study does not account for errors that clinics detected on their own and did not report, or for their impacts on apparent measure performance; the actual error proportions are therefore likely higher than reported here. Additional factors, such as provider data-entry habits, lab interfaces, and EHR vendor features, should be investigated in future research to examine their effects on the validity of EHR-generated clinical outcomes reports. Finally, changes in data coordinators (DCs) among sites were not included in this study, given the high staff turnover typical of safety net clinics.

Lessons Learned

Our recommendation for other communities attempting to meet MU and similar quality reporting initiatives is to allow practices time (e.g., at least six months) to come into alignment with national quality measures, even if an EHR system is certified. Communities should not underestimate the value of data validation, measure review, reporting reliability checks, and data cleaning within EHRs prior to onboarding data into an HIE. We found invaluable a process that allowed for consensus building around measure selection and identification of technical assistance and reporting needs, followed by tailoring assistance to practice, vendor, and community needs as required. Practices and vendors may need to phase in data reporting for measure sets over time, as feasible. Facilitating relationships with EHR vendors is also critical to success, particularly for the development of standardized reporting templates and any system-level customizations required. Safety net practices and small physician practices should consider collaborating on this type of responsibility and coordination, together or via a local champion or data intermediary, rather than working individually, for increased chances of success. Such data intermediaries can also interact with local payers and quality committees to facilitate measure harmonization for common reporting purposes. Policymakers should consider the role that data intermediaries, particularly HIEs with centralized community data repositories that adhere to data validity and reliability procedures, can play in harmonizing measures across a community, simplifying data aggregation and analytics, and negotiating data sharing agreements for Quality Improvement Plans (QIPs). CCBC providers are already benefiting from participation in our data standardization efforts through facilitated reporting of quality measures to government, accreditation, and payer organizations, such as NCQA and local Medicaid programs.
In addition, increased awareness of error types and their effects on measure percentages is helping practices make the adjustments needed to improve data validity. While this paper describes the first phase of our work on improving the standardization and reliability of EHR reports, our team is embarking on a second phase, which entails using findings from the data standardization work to improve data capture via modifications to EHR templates. LPHI (the Louisiana Public Health Institute) is replicating and spreading the CCBC data standardization process to other community clinics across our state via our Health Center Controlled Network, an HRSA-funded grant program that seeks to expand and build upon the CCBC efforts to advance the adoption and meaningful use of HIT, as well as to support clinical QI, business operations, and financial sustainability among safety net providers.16
