Jonathan H Chen1, Muthuraman Alagappan2, Mary K Goldstein3, Steven M Asch4, Russ B Altman5. 1. Department of Medicine, Stanford University, Stanford, CA, USA. Electronic address: jonc101@stanford.edu. 2. Internal Medicine Residency Program, Beth Israel Deaconess Medical Center, Boston, MA USA. 3. Geriatrics Research Education and Clinical Center, Veteran Affairs Palo Alto Health Care System, Palo Alto, CA, USA; Primary Care and Outcomes Research (PCOR), Stanford University, Stanford, CA, USA. 4. Department of Medicine, Stanford University, Stanford, CA, USA; Center for Innovation to Implementation (Ci2i), Veteran Affairs Palo Alto Health Care System, Palo Alto, CA, USA. 5. Department of Medicine, Stanford University, Stanford, CA, USA; Departments of Bioengineering and Genetics, Stanford University, Stanford, CA, USA.
Abstract
OBJECTIVE: Determine how varying longitudinal historical training data can impact prediction of future clinical decisions. Estimate the "decay rate" of clinical data source relevance. MATERIALS AND METHODS: We trained a clinical order recommender system, analogous to Netflix or Amazon's "Customers who bought A also bought B..." product recommenders, based on a tertiary academic hospital's structured electronic health record data. We used this system to predict future (2013) admission orders based on different subsets of historical training data (2009 through 2012), relative to existing human-authored order sets. RESULTS: Predicting future (2013) inpatient orders is more accurate with models trained on just one month of recent (2012) data than with 12 months of older (2009) data (ROC AUC 0.91 vs. 0.88, precision 27% vs. 22%, recall 52% vs. 43%, all P < 10^-10). Algorithmically learned models from even the older (2009) data were still more effective than existing human-authored order sets (ROC AUC 0.81, precision 16%, recall 35%). Training with more longitudinal data (2009-2012) was no better than using only the most recent (2012) data, unless applying a decaying weighting scheme with a "half-life" of data relevance of about 4 months. DISCUSSION: Clinical practice patterns (automatically) learned from electronic health record data can vary substantially across years. Gold standards for clinical decision support are elusive moving targets, reinforcing the need for automated methods that can adapt to evolving information. CONCLUSIONS AND RELEVANCE: Prioritizing small amounts of recent data is more effective than using larger amounts of older data for future clinical predictions.
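The abstract reports a decaying weighting scheme with a data-relevance "half-life" of about 4 months, but does not spell out the functional form. A minimal sketch, assuming exponential decay and a hypothetical co-occurrence-count recommender of the "bought A also bought B" type, could look like this (the function names and the choice of weighting co-occurrence counts are illustrative assumptions, not the authors' published implementation):

```python
def decay_weight(age_months: float, half_life_months: float = 4.0) -> float:
    """Relevance weight for a training example recorded `age_months` ago.

    With a 4-month half-life, a 4-month-old example carries half the
    weight of a brand-new one, and a 12-month-old example about 1/8.
    """
    return 0.5 ** (age_months / half_life_months)


def weighted_cooccurrence(encounter_ages_months) -> float:
    """Age-weighted co-occurrence count for an order pair (A, B).

    Each historical encounter containing both orders contributes its
    decayed weight instead of a flat count of 1, so recent practice
    patterns dominate the recommendation score.
    """
    return sum(decay_weight(age) for age in encounter_ages_months)
```

For example, two encounters containing the same order pair, one new and one 4 months old, would yield a weighted count of 1.5 rather than 2, which is one way a model trained on 2009-2012 data could be made to behave like a model trained on recent 2012 data alone.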