
Patient-Specific Explanations for Predictions of Clinical Outcomes.

Mohammadamin Tajgardoon, Malarkodi J Samayamuthu, Luca Calzoni, Shyam Visweswaran.

Abstract

BACKGROUND: Machine learning models that predict clinical outcomes can be made more useful by augmenting each prediction with a simple and reliable patient-specific explanation.
OBJECTIVES: This article evaluates the quality of explanations of predictions using physician reviewers. The predictions are obtained from a machine learning model developed to predict dire outcomes (severe complications, including death) in patients with community-acquired pneumonia (CAP).
METHODS: Using a dataset of patients diagnosed with CAP, we developed a predictive model for dire outcomes. For a set of 40 patients who were predicted to be at either very high or very low risk of developing a dire outcome, we applied an explanation method to generate patient-specific explanations. Three physician reviewers independently evaluated each explanatory feature in the context of the patient's data and were instructed to disagree with a feature if they disagreed with the magnitude of support, the direction of support (supportive versus contradictory), or both.
RESULTS: The model used for generating predictions achieved an F1 score of 0.43 and an area under the receiver operating characteristic curve (AUROC) of 0.84 (95% confidence interval [CI]: 0.81-0.87). Interreviewer agreement was strong between two reviewers (Cohen's kappa coefficient = 0.87) and fair to moderate between the third reviewer and the others (Cohen's kappa coefficients = 0.49 and 0.33). Agreement rates between reviewers and generated explanations, defined as the proportion of explanatory features with which the majority of reviewers agreed, were 0.78 for actual explanations and 0.52 for fabricated explanations; the difference between the two rates was statistically significant (chi-square = 19.76, p-value < 0.01).
CONCLUSION: There was good agreement among physician reviewers on patient-specific explanations that were generated to augment predictions of clinical outcomes. Such explanations can be useful in interpreting predictions of clinical outcomes.
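As a minimal illustration (not taken from the paper, and using hypothetical ratings), the Cohen's kappa statistic reported in the RESULTS can be computed from two reviewers' agree/disagree labels as observed agreement corrected for chance agreement:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in set(labels_a) | set(labels_b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical agree/disagree ratings from two reviewers on 10 explanatory features.
a = ["agree"] * 7 + ["disagree"] * 3
b = ["agree"] * 6 + ["disagree"] * 4
print(round(cohens_kappa(a, b), 3))  # prints 0.783
```

Kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the paper's 0.87 is described as strong and 0.33-0.49 as fair to moderate.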


Keywords:  clinical decision support system; machine learning; patient-specific explanation; predictive model

Year:  2019        PMID: 34095753      PMCID: PMC8174671          DOI: 10.1055/s-0039-1697907

Source DB:  PubMed          Journal:  ACI open        ISSN: 2566-9346

