
How electronic clinical data can improve health technology assessment.

Jonathan R Treadwell1, Eileen Erinoff1, Vivian Coates1.   

Abstract

Health technology assessments represent comprehensive summaries of available evidence and information on a technology. They are used by medical decision makers in a variety of ways, including diagnostic testing, treatment selection, care management, patient perspectives, patient safety, insurance coverage, pharmaceutical innovation, equipment planning, device purchasing, and total cost of care. Electronic clinical data, which are captured routinely by clinicians and hospitals, are only rarely incorporated into formal health technology assessments. This disconnect reveals a key opportunity. In this paper, we discuss current uses of electronic clinical data, several benefits of including them in health technology assessments, potential pitfalls of that inclusion, and the implications for better medical decisions.

Year:  2013        PMID: 25848573      PMCID: PMC4371479          DOI: 10.13063/2327-9214.1028

Source DB:  PubMed          Journal:  EGEMS (Wash DC)        ISSN: 2327-9214


Introduction

Health technology assessments (HTAs) provide critical input to a large array of medical decisions made by payers, patients, clinicians, hospitals, and manufacturers. These decisions involve diagnostic testing, treatment selection, care management, patient perspectives, patient safety, insurance coverage, pharmaceutical innovation, equipment planning, device purchasing, and total cost of care. Good HTAs provide an independent and unbiased summary of existing information, leading to better decisions that maximize the health of patients. HTAs include numerous components surrounding a technology, such as the indicated patient populations, current standard(s) of care, motivations for the proposed technology, syntheses of relevant evidence, and discussions of guidelines, previous HTAs, ongoing trials, and cost/reimbursement.

The most important component of an HTA, at present, is the evidence synthesis. This typically consists of a summary of existing research results for a technology or intervention. Often, however, the published evidence does not permit firm conclusions because of low quantity, poor study design and reporting, inconsistent results among studies in the body of evidence, use of surrogate endpoints rather than patient-oriented endpoints, and follow-up that is too short to truly assess the impact on patient health. A statement that the “evidence is insufficient,” therefore, is an all-too-common refrain in HTAs, and HTA authors are typically loath to go beyond the evidence. Decision makers, who often fund HTAs, are then left holding the bag. Thus, most medical decisions are still made in the absence of good evidence. This raises a question: Can the addition of electronic clinical data (ECD) help fill the gap, providing a more complete information set for decision-making?

Our perspective is that of ECRI Institute, an independent, nonprofit applied health services research organization that has worked since the 1960s with U.S. federal and state agencies and thousands of hospitals, health plans and health care delivery systems, private foundations, and ministries of health worldwide. Decision makers are routinely faced with the challenge of evaluating whether to adopt or reimburse for aggressively marketed drugs, devices, and procedures that quickly diffuse into practice. Our HTAs are designed to provide decision makers with information on health technologies and services across a breadth of health conditions, medical/surgical treatments, behavioral health interventions, drugs, devices, health services, and care settings. Our approach has been to develop a variety of information tools, ranging from concise, rapid-turnaround HTAs to large-scale systematic reviews and comparative laboratory evaluations, when appropriate, on more than 1,000 topics.

Given the advent of electronic health records (EHRs), we explore in this commentary potential uses of ECD to improve HTAs and inform decision-making. We discuss current uses of ECD, possible benefits of using ECD in HTAs, potential pitfalls in such use, and how incorporating ECD in HTAs might yield better medical decisions.

Current Use of ECD in HTAs and by Decision Makers

Most HTAs currently use ECD only indirectly: they synthesize data from studies that may have used ECD. For example, most HTAs of metabolic (bariatric) surgery include the well-known Swedish Obese Subjects study,1 a nonrandomized study of ECD describing virtually all patients in Sweden who received metabolic surgery from 1987 to 2001. HTAs often summarize data from registries, which are centralized databases of ECD (e.g., the HTA by Singh et al. [2011]2 on the epidemiology of knee and hip arthroplasty). Decision makers currently use ECD to help inform many types of decisions and to replace or supplement HTAs. For example:

- In an effort to reduce long-term risks of cardiovascular disease due to diabetes, Kaiser Permanente Northern California used 2.5 years of data from electronic medical records to perform a Markov model cost-effectiveness analysis.3
- Geisinger Health System (Danville, Pa.) developed its ProvenHealth Navigator computer system to permit real-time feedback on many aspects of patient care, such as referral tracking, emergency department visits, hospital readmissions, and medical expenses.4 Geisinger also designed a nine-component EHR bundle to coordinate multidisciplinary diabetes care, assess quality metrics, and improve adherence to clinical practice guidelines.5
- An analysis of Veterans Administration electronic records identified 1,000 patients in a five-month period who were past due for a resupply of statins. Upon further review, about 20 percent had no sound reason for not refilling their prescriptions; the records search enabled fast identification and clinical review.6
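The Veterans Administration refill screen above amounts to a simple rule applied over medication records. A minimal sketch follows; the field names, dates, and 14-day grace period are illustrative assumptions, not the VA's actual schema or logic.

```python
from datetime import date, timedelta

# Illustrative medication records; field names are invented, not a real EHR schema.
records = [
    {"patient_id": "A1", "drug": "statin", "last_fill": date(2024, 1, 5), "days_supply": 90},
    {"patient_id": "B2", "drug": "statin", "last_fill": date(2024, 5, 20), "days_supply": 30},
]

def past_due(record, today, grace_days=14):
    """True when the dispensed supply ran out more than grace_days ago."""
    due = record["last_fill"] + timedelta(days=record["days_supply"] + grace_days)
    return today > due

# Flag statin patients for clinical review, as in the example above.
flagged = [r["patient_id"] for r in records
           if r["drug"] == "statin" and past_due(r, today=date(2024, 6, 1))]
print(flagged)
```

A production screen would query the EHR database directly, but the flagging logic reduces to this comparison of fill date plus supply against today's date.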

Benefits of ECD in HTAs

HTAs are often driven by a series of key questions about a technology. These questions reflect the pressing clinical issues that the evidence may or may not answer. One way ECD could improve HTAs is by enabling reviewers to answer key questions not addressed by published studies. In our systematic review of surgical treatments for inguinal hernia,7 one key question involved whether a surgical procedure to repair a known hernia on one side should include an exploratory procedure on the contralateral side (to determine whether the asymptomatic side had already herniated or was about to herniate). No studies have compared a surgical exploratory approach with a wait-and-see approach for the contralateral side. If ECD had been available, however, it might have enabled us to analyze the data and answer the question.

Another example concerns rare harms of treatments: most published studies are too small to detect rare harms. Granted, the U.S. Food and Drug Administration performs surveillance to capture harms (see a summary of drug-related surveillance at http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Surveillance/ucm090385.htm), but such efforts are insufficient according to a 2009 report from the U.S. Government Accountability Office.8 ECD-derived research may be sufficiently powerful to detect rare harms and estimate their likelihood.

Furthermore, ECD exist at the individual patient level, rather than at the summarized group level. Data on individual patients may permit analyses of new questions not answerable from the overall means, such as why a treatment works for some patients but not others.9 This is known as heterogeneity of treatment effect (HTE), and it has received insufficient attention from both primary researchers and systematic reviewers.10 Varadhan et al. (2013)11 recently lamented the ongoing confusion among the various purposes and methods of HTE analyses and proposed a framework to guide future work.
The framework includes predictive HTE analysis, which aims to estimate each individual patient’s chance of benefit or harm. Such estimates are obviously patient oriented, yet HTAs only rarely even attempt them. Armed with individual-level ECD, HTA developers could more easily do so.

Another way ECD could improve HTAs is by predicting the clinical impact of the uptake of a new treatment (or the phasing out of an outdated one). For example, metabolic surgery is widely believed to reverse the effects of type 2 diabetes, to the point that after surgery many patients no longer require diabetes medications.12 ECD could provide baseline information on a population’s rates of morbid obesity and diabetes, as well as how many patients annually undergo metabolic surgery. An HTA would first summarize published studies of metabolic surgery and then estimate the chance that a given obese patient with diabetes would no longer need diabetes medication. These estimates could be combined to model hypothetical scenarios of uptake; for example, if 20 percent more of these patients at a given health system underwent metabolic surgery, how many fewer diabetes medications would be prescribed, and what would the cost implications be? Such questions are addressed formally within a decision model, which HTA authors occasionally use.13 For example, Trikalinos and Lau (2007)14 constructed a decision model to inform a systematic review on the diagnosis and treatment of obstructive sleep apnea. Such models, however, require inputs that are sometimes not provided in the published literature. Incorporating ECD may supply those inputs and therefore allow more robust decision models.
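The hypothetical uptake scenario above reduces to simple arithmetic once ECD supplies the inputs. In this sketch, every number (baseline surgery volume, remission rate, medication cost) is an invented assumption, not a figure from the paper.

```python
# Hypothetical uptake scenario: 20 percent more metabolic surgeries.
# Every input below is an illustrative assumption.
current_surgeries_per_year = 500  # baseline annual metabolic surgeries in the system
remission_rate = 0.6              # assumed chance surgery ends the need for diabetes meds
annual_med_cost = 1_200.0         # assumed per-patient yearly medication cost

extra_surgeries = current_surgeries_per_year * 0.20   # the "20 percent more" scenario
meds_avoided = extra_surgeries * remission_rate       # regimens no longer prescribed
savings = meds_avoided * annual_med_cost              # crude annual cost implication
print(f"{meds_avoided:.0f} fewer medication regimens, about ${savings:,.0f} per year")
```

A real decision model would add uncertainty ranges and time horizons around each input, but the point stands: ECD can supply the baseline rates that published literature often omits.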
ECD could also improve HTAs by enabling decision makers to determine how to apply an HTA’s results to a local population.15 This is related to the distinction between efficacy (how well a treatment works under optimal conditions) and effectiveness (how well a treatment works under typical conditions). Published research studies often employ stringent patient inclusion criteria (e.g., non-smokers ages 20–60 years without diabetes or hypertension). Thus, an evidence synthesis of such studies may not apply to other patient populations (e.g., those older than age 60). ECD could include such patients who had received the same treatment as patients in the published studies, and analyses of the ECD could answer questions such as (1) does the treatment work differently for patients who were ineligible for the trials, (2) are harms differentially likely for trial-ineligible patients, and (3) is the treatment being administered in settings similar to those in the trials? On a related note, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group has published guidance on how nonrandomized studies (which are often based on ECD) can be used to inform judgments about applicability.16

ECD can also be used to summarize practice variation by clinical specialty or geographic region. This is rarely addressed by published efficacy studies, so few HTAs discuss it. However, if an analysis of ECD demonstrates areas of substantial variation (e.g., higher mastectomy rates in certain parts of the United States),17 then HTA users could act accordingly (e.g., greater education efforts in those regions toward compliance with national guidelines on mastectomy). ECD analysis could also reveal valid reasons for the geographic variation, such as better patient outcomes in areas with nonnormative rates, and this discovery could inform future research.

Finally, ECD may help address a ubiquitous problem in HTA: reporting bias.
This can take the form of publication bias or selective outcome reporting bias. Publication bias occurs when authors selectively publish their findings, resulting in systematically different results between published and unpublished studies.18 Thus, a summary of the published evidence alone is inaccurate. Selective outcome reporting bias can occur when authors measure an outcome but choose not to include data on that outcome in the published report (possibly because the data on that outcome did not support the authors’ overall conclusions).19 This also can result in biased estimates of effects. ECD may help clarify the influence of these biases on the published literature, since data are often collected without regard to future publication.

Realizing these benefits of ECD may require substantial funding support from organizations such as the Patient-Centered Outcomes Research Institute (PCORI). Many projects already funded by PCORI have utilized claims data, an important source of ECD.20 Another important source of ECD is registry data, and PCORI has posted guidance on the optimal analysis of such data.21 Further, a recent PCORI topic brief discussing research on people with multiple chronic conditions (MCCs) stated: “There are few issues more critical to the US healthcare system and to the US economy than developing better ways to care for people with MCCs. Substantial efforts are needed to develop evidence from clinical trials, observational studies, and ‘big data’ sources, to feed into systematic reviews and clinical practice guidelines.”22
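One common check for the publication bias discussed above is an Egger-style funnel-plot regression: regress each study's standardized effect on its precision, and an intercept far from zero suggests small-study asymmetry. The effect sizes and standard errors below are fabricated for illustration.

```python
# Egger-style asymmetry check on fabricated effect sizes and standard errors.
effects = [0.50, 0.42, 0.61, 0.30, 0.80]   # invented study effect estimates
ses     = [0.10, 0.12, 0.20, 0.08, 0.30]   # invented standard errors

# Standardized effect (y) regressed on precision (x) by ordinary least squares.
y = [e / s for e, s in zip(effects, ses)]
x = [1.0 / s for s in ses]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx  # an intercept far from zero suggests asymmetry
print(f"Egger intercept: {intercept:.2f}")
```

In practice one would use a weighted regression with a formal significance test (e.g., via statsmodels), but the intercept logic is the same.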

Pitfalls of Using ECD in HTAs

Many pitfalls also exist with respect to using ECD in HTAs. Authors of HTAs should recognize the risks of interpreting data that are collected outside the structure and oversight of a research study. Ioannidis (2013)23 warned about ECD: “There are so many biases in these data, ranging from measurement errors, to mis-classification, confounding by indication, and zillions of selection biases, that consent bias is only one among many deadly contributors to the composite analytical chaos.”23 These concerns, and the attendant risk of false conclusions, apply to most retrospectively collected nonrandomized data. Thus, HTA authors seeking to incorporate ECD should be keenly aware of ways to measure the risk of bias, and they should be willing to exclude data at high risk of bias. Various guidance documents exist for this purpose, such as Chapter 13 of the Cochrane Handbook for Systematic Reviews of Interventions,24 recent guidance for Evidence-Based Practice Centers,25,26 and the Newcastle-Ottawa Scale.27

Perhaps the most widely cited problem with ECD is the inference of cause. Patients who received one treatment may have had better health outcomes than patients who received a different treatment because their prognosis was better in the first place, not as a result of the treatment they received. Thus, a good ECD analysis will control for prognosis (e.g., age, comorbidities, other treatments received) before attempting to measure the impact of a treatment choice. This is only one source of potential bias, however. Others include the self-selected nature of some ECD (i.e., perhaps the data exist in a database because the outcomes were good), the lack of blinding (i.e., knowledge of treatment affecting subsequent management and/or compliance), and inconsistent measurement techniques (i.e., the same outcome measured in different ways over time).

Other risks involve patient confidentiality and informed consent.
If only a few patients exist with a certain constellation of medical conditions, test results, and interventions, then confidentiality may be compromised, and HTA authors should be aware of how to prevent this. Also, when data are collected outside of a research study (e.g., routine clinical practice data from electronic medical records), many patients never sign consent forms permitting the use of their clinical data for research purposes. Currently there is controversy about using ECD for research purposes without explicit consent.23,28–31
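The prognosis adjustment described above can be sketched as a toy stratified comparison; the patient rows, treatments, and age strata below are all fabricated to show how a crude difference can vanish once prognosis is controlled.

```python
# Toy illustration of controlling for prognosis (here, age group) before
# comparing treatments. All rows are fabricated.
patients = [
    # (treatment, age_group, recovered)
    ("A", "young", True), ("A", "young", True), ("A", "old", False),
    ("B", "young", True), ("B", "old", False), ("B", "old", False),
]

def rate(rows):
    """Recovery rate for a list of patient rows."""
    return sum(r[2] for r in rows) / len(rows) if rows else 0.0

# Crude comparison ignores that treatment A went mostly to younger,
# better-prognosis patients.
crude_a = rate([p for p in patients if p[0] == "A"])  # ~0.67
crude_b = rate([p for p in patients if p[0] == "B"])  # ~0.33

def adjusted(treatment):
    """Average the within-stratum recovery rates (equal stratum weights)."""
    strata = ("young", "old")
    return sum(rate([p for p in patients if p[0] == treatment and p[1] == s])
               for s in strata) / len(strata)

# After stratifying by prognosis, the apparent advantage of A disappears.
print(crude_a, crude_b, adjusted("A"), adjusted("B"))
```

Real analyses use regression or propensity-score methods over many covariates, but the principle is the same: compare like with like before attributing outcome differences to treatment.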

How Would This Produce Better Decisions?

Using ECD in HTAs would produce better medical decisions in at least three ways:

- Better application to local settings. Published trials involve specialized populations who are relatively easy to manage clinically (e.g., few comorbidities). Decision makers armed with knowledge of their own patient population (e.g., rates of various comorbidities) may be best suited to assess whether and how trial results and ECD are best applied to their patients.
- Better evidence generation. Examples of ECD use in HTAs may motivate hospitals and payers to collect their own tailored ECD using more structured methods and more comprehensive health outcomes. This bolsters financial incentives for evidence generation, such as the Coverage with Evidence Development program used by the Centers for Medicare & Medicaid Services.32
- Validation of existing policies. ECD has the potential to confirm or validate medical coverage policy decisions, and it can also motivate policy changes. For example, new evidence from ECD that a covered treatment is associated with important health risks might motivate payers to modify or eliminate coverage for that treatment.

HTAs are useful to decision makers, but judicious use of ECD could make them more useful.
References (17 in total)

1. Erika Check Hayden. Informed consent: a broken contract. Nature. 2012.

2. Sheetal Parekh-Bhurke; Chun S Kwok; Chun Pang; Lee Hooper; Yoon K Loke; Jon J Ryder; Alex J Sutton; Caroline B Hing; Ian Harvey; Fujian Song. Uptake of methods to deal with publication bias in systematic reviews has increased over time, but there is still much scope for improvement. J Clin Epidemiol. 2011.

3. An-Wen Chan; Douglas G Altman. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ. 2005.

4. Katherine Baicker; Kasey S Buckles; Amitabh Chandra. Geographic variation in the appropriate use of cesarean delivery. Health Aff (Millwood). 2006.

5. Glenn D Steele; Jean A Haynes; Duane E Davis; Janet Tomcavage; Walter F Stewart; Tom R Graf; Ronald A Paulus; Karena Weikel; Janet Shikles. How Geisinger's advanced medical home model argues the case for rapid-cycle innovation. Health Aff (Millwood). 2010.

6. Ravi Varadhan; Jodi B Segal; Cynthia M Boyd; Albert W Wu; Carlos O Weiss. A framework for the analysis of heterogeneity of treatment effect in patient-centered outcomes research. J Clin Epidemiol. 2013.

7. Holger J Schünemann; Peter Tugwell; Barnaby C Reeves; Elie A Akl; Nancy Santesso; Frederick A Spencer; Beverley Shea; George Wells; Mark Helfand. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Res Synth Methods. 2013.

8. Mark A Rothstein; Abigail B Shoben. Does consent bias research? Am J Bioeth. 2013.

9. Thomas S Rector; Sean Nugent; Michele Spoont; Siamak Noorbaloochi; Hanna E Bloomfield. Screening electronic veterans' health records for medication discontinuation. Am J Manag Care. 2012.

10. Richard J Willke; Zhiyuan Zheng; Prasun Subedi; Rikard Althin; C Daniel Mullins. From concepts, theory, and evidence of heterogeneity of treatment effects to methodological approaches: a primer. BMC Med Res Methodol. 2012.
