
Codifying healthcare – big data and the issue of misclassification.

Karim S Ladha, Matthias Eikermann

Abstract

The rise of electronic medical records has led to a proliferation of large observational studies that examine the perioperative period. In contrast to randomized controlled trials, these studies can provide quick, cheap and easily obtainable information on a wide variety of patients and are reflective of everyday clinical practice. However, it is important to note that the data used in these studies are often generated for billing or documentation purposes, such as insurance claims or the electronic anesthetic record. The reliance on codes to define diagnoses in these studies may lead to false inferences or conclusions. Researchers should specify the code assignment process and be aware of potential error sources when undertaking studies using secondary data sources. While misclassification may be a shortcoming of using large databases, it does not prevent their use in conducting meaningful effectiveness research with direct consequences for medical decision making.


Year:  2015        PMID: 26667619      PMCID: PMC4678724          DOI: 10.1186/s12871-015-0165-y

Source DB:  PubMed          Journal:  BMC Anesthesiol        ISSN: 1471-2253            Impact factor:   2.217


Background

The rise of electronic medical records has led to a proliferation of large observational studies that examine the perioperative period. In contrast to randomized controlled trials (RCTs), these studies can provide quick, cheap and easily obtainable information on a wide variety of patients and are reflective of everyday clinical practice. Additionally, with their large sample sizes, these databases allow us to study rare but serious events, such as reintubation, that are difficult to detect in RCTs. However, it is important to note that the data used in these studies are often generated for billing or documentation purposes, such as insurance claims or the electronic anesthetic record. In other words, they are “found data”: data not collected primarily for research. This renders the results of these studies susceptible to issues and biases not faced in traditional RCTs. The study by Thomas et al. recently published in BMC Anesthesiology [1] highlights one of these concerns, namely misclassification, or measurement error. In their study, the authors examined trends in International Classification of Diseases, 9th edition (ICD-9) coding of sepsis and compared them to trends in clinically defined sepsis at a single tertiary center. They discovered an increase in the medical coding of sepsis over time that was not accompanied by a concomitant increase in clinically defined sepsis. This work highlights the caution that must be taken when using administrative databases to study disease trends and outcomes, but it also has several limitations that should be considered when determining its implications.

Main text

Nosology refers to the discipline of the systematic classification of diseases. While the field has ancient roots, its introduction into Western medicine is credited to Thomas Sydenham in the 17th century [2]. The importance of nosology has continued to grow over time, and the field has become particularly relevant as technology plays an ever more prominent role in the delivery of healthcare. ICD-9 codes are perhaps the most commonly used classification scheme in perioperative epidemiologic research. The generation of these codes is susceptible to error at several points along the path from patient admission to inclusion in a database [3]. The concern is that if researchers build studies on these error-prone codes, false conclusions may follow. It has been suggested that validation studies be routinely performed to establish the accuracy of specific ICD-9 codes before using them in an analysis [4]. Such a study compares administrative codes against data abstracted from chart review. The work of Thomas et al. [1] falls short of invalidating codes for sepsis, since the authors did not investigate the accuracy of coding but rather its use over time. Thus, it is unclear what is responsible for the discrepancy they discovered; it could be that coding for sepsis simply became more accurate over time.

Validation studies are not a panacea for misclassification bias. First, validation studies are usually undertaken at a single center, since large national databases are typically de-identified. It is plausible, indeed likely, that coding practices differ across institutions, as coders have varying levels of training and experience; the generalizability of validation studies is therefore unclear.
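Such a chart-review comparison reduces to a 2×2 table of code status against the gold standard. As a minimal sketch, with purely hypothetical counts, the table yields the usual accuracy measures:

```python
# Accuracy of an administrative code against chart review (gold standard).
# All counts are hypothetical, for illustration only.
def validity_measures(tp, fp, fn, tn):
    """2x2 table: tp/fn split the chart-confirmed cases, fp/tn the non-cases."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases that received the code
        "specificity": tn / (tn + fp),  # non-cases correctly left uncoded
        "ppv": tp / (tp + fp),          # coded patients who truly have the disease
        "npv": tn / (tn + fn),          # uncoded patients who are truly disease-free
    }

# Hypothetical validation study: 1,000 charts reviewed
m = validity_measures(tp=80, fp=40, fn=20, tn=860)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.8, 'specificity': 0.956, 'ppv': 0.667, 'npv': 0.977}
```

Note that predictive values depend on disease prevalence: the same code can look accurate in one population and poor in another, which is one more reason a single-center validation study may generalize badly.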
The issue becomes murkier still for diseases that lack strict diagnostic criteria, such as acquired muscle weakness in the intensive care unit [5], which creates variation in clinician documentation as well. Second, there are no set criteria or cut-offs that define acceptable accuracy of a particular code for use in a study. The validity of a specific code can be described in terms of its sensitivity, specificity, negative predictive value and positive predictive value, and which of these measures matters most depends on the question being asked of the data. Finally, some would argue that the level of accuracy is less important than the pattern of error. If misclassification is random, or non-differential, it has traditionally been argued that estimates are biased towards the null, although this notion has been challenged [6].
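The traditional bias-towards-the-null claim can be illustrated with a small deterministic example (all numbers hypothetical): applying the same code sensitivity and specificity to exposure classification in both outcome groups shrinks the observed risk ratio.

```python
# Deterministic sketch of non-differential exposure misclassification
# biasing a risk ratio towards the null. All numbers are hypothetical.
def observed_rr(cases_e, n_e, cases_u, n_u, se, sp):
    """Risk ratio after reclassifying exposure with the same sensitivity
    (se) and specificity (sp) among cases and among group totals alike."""
    oc_e = se * cases_e + (1 - sp) * cases_u   # cases observed as exposed
    oc_u = (1 - se) * cases_e + sp * cases_u   # cases observed as unexposed
    on_e = se * n_e + (1 - sp) * n_u           # totals observed as exposed
    on_u = (1 - se) * n_e + sp * n_u           # totals observed as unexposed
    return (oc_e / on_e) / (oc_u / on_u)

true_rr = (200 / 1000) / (100 / 1000)                    # true risk ratio: 2.0
obs = observed_rr(200, 1000, 100, 1000, se=0.8, sp=0.9)  # ≈ 1.598
print(round(true_rr, 3), round(obs, 3))
```

The observed ratio of roughly 1.6 understates the true ratio of 2.0. As reference [6] cautions, however, this attenuation is an expectation under specific conditions, not a guarantee in any single study.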

Conclusion

While misclassification is a threat to the validity of a study, it is not a sufficient reason to dismiss observational research using administrative datasets. To do so would be to forfeit a major opportunity to gain insights into how to make healthcare delivery more efficient and safer. Rather, misclassification should be viewed as one source of potential bias that must be weighed when interpreting the results of these studies. Although validation studies may provide insight into the accuracy of some codes, it is neither practical nor possible to validate every single ICD-9 code used in a particular investigation. One potential solution is to perform sensitivity analyses that quantify how sensitive effect estimates are to misclassification [7]. The practice of evidence-based medicine is the application of the best available knowledge; this entails systematically identifying and evaluating the appropriate literature and integrating it with clinical expertise [8]. The traditional evidence-based pyramid ranks evidence from the top (meta-analyses of well-performed RCTs) to the bottom (expert opinion), yet each type of evidence carries its own benefits and disadvantages [9]. In practice there is no perfect defense against misclassification, and, as with any study design, repeated investigation of the same question using a variety of databases and analytic techniques is likely the best route to sound causal inference.
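Reference [7] develops graphical and Bayesian approaches to such sensitivity analyses; as a simpler illustration of the underlying idea, the classic Rogan-Gladen correction back-calculates the prevalence implied by an observed, code-based prevalence across a grid of plausible code accuracies (the observed prevalence and the grid below are hypothetical):

```python
# How sensitive is a code-based prevalence estimate to coding accuracy?
# Rogan-Gladen back-correction under non-differential misclassification;
# the observed prevalence and the (se, sp) grid are hypothetical.
def rogan_gladen(p_obs, se, sp):
    """True prevalence implied by observed prevalence p_obs, given code
    sensitivity se and specificity sp."""
    return (p_obs + sp - 1) / (se + sp - 1)

p_obs = 0.10  # e.g. 10% of discharges carry the code
for se in (0.60, 0.80, 0.95):
    for sp in (0.95, 0.99):
        corrected = rogan_gladen(p_obs, se, sp)
        print(f"se={se:.2f} sp={sp:.2f} -> corrected prevalence={corrected:.3f}")
# Across this grid the corrected estimate ranges from ~0.056 to ~0.153,
# even though the observed prevalence is fixed at 0.10.
```

When the assumed sensitivity and specificity are inconsistent with the observed data, the corrected value can fall outside [0, 1], which is itself a useful signal that the assumed accuracies are implausible.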
References (8 in total)

1.  Nosology for our day: its application to chronic obstructive pulmonary disease.

Authors:  Gordon L Snider
Journal:  Am J Respir Crit Care Med       Date:  2003-03-01       Impact factor: 21.405

2.  Proper interpretation of non-differential misclassification effects: expectations vs observations.

Authors:  Anne M Jurek; Sander Greenland; George Maldonado; Timothy R Church
Journal:  Int J Epidemiol       Date:  2005-03-31       Impact factor: 7.196

3.  Sensitivity analysis of misclassification: a graphical and a Bayesian approach.

Authors:  Haitao Chu; Zhaojie Wang; Stephen R Cole; Sander Greenland
Journal:  Ann Epidemiol       Date:  2006-07-13       Impact factor: 3.797

4.  Measuring diagnoses: ICD code accuracy.

Authors:  Kimberly J O'Malley; Karon F Cook; Matt D Price; Kimberly Raiford Wildes; John F Hurdle; Carol M Ashton
Journal:  Health Serv Res       Date:  2005-10       Impact factor: 3.402

5.  The Importance of Validation Studies in Perioperative Database Research.

Authors:  Mark D Neuman
Journal:  Anesthesiology       Date:  2015-08       Impact factor: 7.892

6.  Acquired Muscle Weakness in the Surgical Intensive Care Unit: Nosology, Epidemiology, Diagnosis, and Prevention. (Review)

Authors:  Hassan Farhan; Ingrid Moreno-Duarte; Nicola Latronico; Ross Zafonte; Matthias Eikermann
Journal:  Anesthesiology       Date:  2016-01       Impact factor: 7.892

7.  Evidence based medicine: a movement in crisis?

Authors:  Trisha Greenhalgh; Jeremy Howick; Neal Maskrey
Journal:  BMJ       Date:  2014-06-13

8.  Temporal trends in the systemic inflammatory response syndrome, sepsis, and medical coding of sepsis.

Authors:  Benjamin S Thomas; S Reza Jafarzadeh; David K Warren; Sandra McCormick; Victoria J Fraser; Jonas Marschall
Journal:  BMC Anesthesiol       Date:  2015-11-24       Impact factor: 2.217

