Xinran Liu1,2, James Anstey1, Ron Li3, Chethan Sarabu4,5, Reiri Sono2, Atul J Butte6. 1. Division of Hospital Medicine, University of California, San Francisco, San Francisco, California, United States. 2. University of California, San Francisco, San Francisco, California, United States. 3. Division of Hospital Medicine, Stanford University, Stanford, California, United States. 4. doc.ai, Palo Alto, California, United States. 5. Department of Pediatrics, Stanford University, Stanford, California, United States. 6. Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, California, United States.
Abstract
BACKGROUND: Machine learning (ML) has captured the attention of many clinicians who may not have formal training in this area but are otherwise increasingly exposed to ML literature that may be relevant to their clinical specialties. ML papers that follow an outcomes-based research format can be assessed using clinical research appraisal frameworks such as PICO (Population, Intervention, Comparison, Outcome). However, the PICO framework strains when applied to ML papers that create new ML models, which are akin to diagnostic tests. There is a need for a new framework to help assess such papers. OBJECTIVE: We propose a new framework to help clinicians systematically read and evaluate medical ML papers whose aim is to create a new ML model: ML-PICO (Machine Learning, Population, Identification, Crosscheck, Outcomes). We describe how the ML-PICO framework can be applied to appraising literature describing ML models for health care. CONCLUSION: The relevance of ML to practitioners of clinical medicine is steadily increasing alongside a growing body of literature. Therefore, it is increasingly important for clinicians to be familiar with how to assess and best utilize these tools. In this paper we have described a practical framework for reading ML papers that create a new ML model (or diagnostic test): ML-PICO. We hope that clinicians can use it to better evaluate the quality and utility of ML papers. Thieme. All rights reserved.