Li-Heng Fu1, Jessica Schwartz2, Amanda Moy3, Chris Knaplund3, Min-Jeoung Kang4, Kumiko O Schnock4, Jose P Garcia5, Haomiao Jia6, Patricia C Dykes4, Kenrick Cato2, David Albers7, Sarah Collins Rossetti8. 1. Department of Biomedical Informatics, Columbia University, New York, NY, United States. Electronic address: lf2608@cumc.columbia.edu. 2. School of Nursing, Columbia University, New York, NY, United States. 3. Department of Biomedical Informatics, Columbia University, New York, NY, United States. 4. Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, MA, United States; Harvard Medical School, Boston, MA, United States. 5. Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, MA, United States. 6. School of Nursing, Columbia University, New York, NY, United States; Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, NY, United States. 7. Department of Biomedical Informatics, Columbia University, New York, NY, United States; Department of Pediatrics, Section of Informatics and Data Science, University of Colorado, Aurora, CO, United States. 8. Department of Biomedical Informatics, Columbia University, New York, NY, United States; School of Nursing, Columbia University, New York, NY, United States.
Abstract
OBJECTIVES: This review aims to: 1) evaluate the quality of model reporting, 2) provide an overview of methodology for developing and validating Early Warning Score Systems (EWSs) for adult patients in acute care settings, and 3) highlight the strengths and limitations of the methodologies, as well as identify future directions for EWS derivation and validation studies. METHODOLOGY: A systematic search was conducted in PubMed, the Cochrane Library, and CINAHL. Only peer-reviewed articles and clinical guidelines on developing and validating EWSs for adult patients in acute care settings were included. In total, 615 articles were retrieved and reviewed by five of the authors. Selected studies were evaluated against the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist and analyzed according to their study design, predictor selection, outcome measurement, modeling methodology, and validation strategy. RESULTS: A total of 29 articles were included in the final analysis. Twenty-six reported on the development and validation of a new EWS, while three reported on validation and model modification. Only eight studies met more than 75% of the items on the TRIPOD checklist. Three major techniques were used to build the predictive algorithms: 1) clinical-consensus models (n = 6), 2) regression models (n = 15), and 3) tree models (n = 5). The number of predictors included in the EWSs ranged from 3 to 72, with a median of 7. Twenty-eight models included vital signs, while 11 included laboratory data. Pulse oximetry, mental status, and other variables extracted from electronic health records (EHRs) were among the other frequently used predictors. In-hospital mortality, unplanned transfer to the intensive care unit (ICU), and cardiac arrest were commonly used clinical outcomes.
Twenty-eight studies conducted some form of model validation, either within the study or against other widely used EWSs. Only three studies validated their model on an external database separate from the derivation database. CONCLUSION: This literature review demonstrates that cohort characteristics, predictor and outcome selection, and the metrics used for model validation vary greatly across EWS studies. There is no consensus on the optimal strategy for developing such algorithms, since data-driven models with acceptable predictive accuracy are often site-specific. A standardized checklist for clinical prediction model reporting exists, but few studies have aligned their published reporting with it. Data-driven models are subject to biases in the underlying EHR data; it is therefore particularly important to provide detailed study protocols and to acknowledge, leverage, or reduce potential biases in the data used for EWS development, in order to improve transparency and generalizability.
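To illustrate the clinical-consensus models discussed above, the following is a minimal sketch of how a threshold-based early warning score might be computed from vital signs. All scoring bands, point values, and variable names here are hypothetical and chosen for illustration only; they do not reproduce any specific EWS covered by this review.

```python
# Illustrative, hypothetical threshold-based early warning score.
# Each vital sign maps to points via ordered (low, high, points) bands;
# values falling outside every band receive the maximum of 3 points.

def score_vital(value, bands):
    """Return the points for the first band containing value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # severely abnormal: outside all defined bands

# Hypothetical scoring bands per vital sign (not from any published EWS).
BANDS = {
    "respiratory_rate": [(12, 20, 0), (9, 11, 1), (21, 24, 2)],
    "heart_rate":       [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)],
    "systolic_bp":      [(111, 219, 0), (101, 110, 1), (91, 100, 2)],
    "temperature_c":    [(36.1, 38.0, 0), (35.1, 36.0, 1), (38.1, 39.0, 1)],
}

def early_warning_score(vitals):
    """Sum the per-vital points; a higher total flags higher risk."""
    return sum(score_vital(vitals[name], bands) for name, bands in BANDS.items())

normal = {"respiratory_rate": 16, "heart_rate": 72,
          "systolic_bp": 120, "temperature_c": 37.0}
deteriorating = {"respiratory_rate": 26, "heart_rate": 118,
                 "systolic_bp": 88, "temperature_c": 39.4}

print(early_warning_score(normal))         # 0
print(early_warning_score(deteriorating))  # 11
```

Regression- and tree-based EWSs replace these hand-set bands with weights or splits learned from data, which is one reason their predictive accuracy can be site-specific.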