Laura Moss, David Corsar, Martin Shaw, Ian Piper, Christopher Hawthorne.
Abstract
Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these properties contribute to model trust and clinical acceptance, and regulation increasingly stipulates a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains that sophisticated predictive models offer to neurocritical care provision can be realized, it is imperative that the interpretability of these models is fully considered.
Keywords: Algorithms; Artificial intelligence; Clinical decision-making; Critical care; Machine learning
Year: 2022 PMID: 35523917 PMCID: PMC9343258 DOI: 10.1007/s12028-022-01504-4
Source DB: PubMed Journal: Neurocrit Care ISSN: 1541-6933 Impact factor: 3.532
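The abstract contrasts accurate but opaque black box models with interpretable alternatives. As a minimal illustration of intrinsic interpretability (the property credited to the decision tree of Gao et al. in the table below), the following Python sketch trains a shallow decision tree on synthetic data; the feature names (icp, map, gcs) and all values are hypothetical and are not drawn from any study cited in this record.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic stand-ins for physiological features (hypothetical names/values).
X = rng.normal(size=(500, 3))      # columns: icp, map, gcs (illustrative only)
y = (X[:, 0] > 0.5).astype(int)    # toy outcome driven entirely by "icp"

# A shallow tree stays human-readable: its fitted splits ARE the explanation,
# so a clinician can audit the full decision path behind any prediction.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["icp", "map", "gcs"]))
```

A black box counterpart, such as a deep neural network, offers no analogous rule listing; the post hoc techniques listed in the table below (e.g., SHAP, DeepLIFT) are instead used to approximate one after training.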
Selected examples of the use of interpretable machine learning approaches to neurointensive care unit (NICU) data
| Article | Study population | Data set(s) | Predicted variable | Machine learning algorithm applied | Interpretability technique(s) |
|---|---|---|---|---|---|
| Overweg et al. | ICU and TBI | CENTER-TBI, MIMIC-III | ICU/NICU mortality | BNN | HorseshoeBNN, a novel approach proposed by the authors: a horseshoe prior is added to induce sparsity in the first layer of the BNN, enabling feature selection |
| Caicedo-Torres and Gutierrez | ICU | MIMIC-III | ICU mortality | Multiscale deep convolutional neural network (ConvNet) | DeepLIFT, visualizations |
| Thorsen-Meyer et al. | ICU | 5 Danish medical and surgical ICUs | All-cause 90-day mortality | Recurrent neural network with LSTM architecture | SHAP |
| Wang et al. | ICU patients diagnosed with cardiovascular disease | MIMIC-III | Survival | LSTM network | Counterfactual explanations |
| Fong et al. | ICU | eICU Collaborative Research Database and 5 ICUs in Hong Kong | Hospital mortality | XGBoost | SHAP |
| Che et al. | Pediatric ICU patients with acute lung injury | Pediatric ICU at Children’s Hospital Los Angeles | Mortality, ventilator-free days | Interpretable mimic learning (using gradient boosting trees) | Partial dependence plots, feature importance, intrinsic interpretability of tree structure |
| Shickel et al. | ICU | UFHealth, MIMIC-III | In-hospital mortality | RNN with GRU | Modified GRU-RNN network with final self-attention mechanism (to identify feature importance) |
| Farzaneh et al. | TBI | ProTECT III | Functional outcome (GOSE at 6 months) | XGBoost | SHAP |
| Gao et al. | TBI | NICU at Cambridge University Hospitals, Cambridge, UK | Mortality at 6 months post brain injury | Decision tree | Intrinsic interpretability of model |
| Thoral et al. | ICU | AmsterdamUMCdb | ICU readmission and/or death, both within 7 days of ICU discharge | XGBoost | SHAP |
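Three of the rows above pair XGBoost with SHAP (Fong et al., Farzaneh et al., Thoral et al.). The following is a hedged sketch of that post hoc workflow, assuming the open-source xgboost and shap Python packages and using synthetic data; the features and the mortality-style label are illustrative, not taken from the cited studies.

```python
import numpy as np
import shap
import xgboost

rng = np.random.default_rng(0)

# Synthetic cohort: 1,000 "patients", 4 unnamed illustrative features.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy binary outcome

model = xgboost.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# TreeExplainer computes SHAP values exactly and efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one attribution per patient, per feature

# Mean absolute SHAP value per feature gives a global importance ranking.
print(np.abs(shap_values).mean(axis=0))
```

In the cited studies, per-patient attributions of this kind are typically rendered with shap.summary_plot to produce the familiar beeswarm plot of feature importance.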