Peipei Chen1,2, Wei Dong3, Jinliang Wang4, Xudong Lu1,2, Uzay Kaymak1,2, Zhengxing Huang5.
Abstract
BACKGROUND: The interpretability of results predicted by machine learning models is vital, especially in critical fields like healthcare. With the increasing adoption of electronic healthcare records (EHR) by medical organizations over the last decade, which has accumulated abundant electronic patient data, neural networks and deep learning techniques are gradually being applied to clinical tasks to exploit the huge potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret.
Keywords: Attention mechanism; Clinical prediction; Deep learning; Interpretability
Mesh:
Year: 2020 PMID: 32646437 PMCID: PMC7346336 DOI: 10.1186/s12911-020-1110-7
Source DB: PubMed Journal: BMC Med Inform Decis Mak ISSN: 1472-6947 Impact factor: 2.796
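The abstract and keywords point to an attention mechanism as the route to interpretability. A minimal numpy sketch of the general idea — a softmax attention layer that scores each input feature before an MLP, so the normalized weights double as per-feature contribution scores — is given below. The architecture, layer sizes, and data here are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

n_features = 8                       # hypothetical feature count
x = rng.normal(size=n_features)      # one patient's feature vector

# Attention layer: score each input feature, normalize with softmax
W_att = rng.normal(size=(n_features, n_features))
alpha = softmax(W_att @ x)           # per-feature attention weights, sum to 1

# Re-weight the inputs before the MLP hidden layer
x_weighted = alpha * x
W_h = rng.normal(size=(4, n_features))
h = np.maximum(0.0, W_h @ x_weighted)     # ReLU hidden layer
w_out = rng.normal(size=4)
p = 1.0 / (1.0 + np.exp(-(w_out @ h)))    # predicted probability (e.g. readmission)

print(alpha.round(3), float(p))
```

Because `alpha` sums to 1 across features, it can be read directly as the relative contribution of each feature to the prediction, which is what the heat maps and bar plots below visualize.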
Fig. 1 The overview of the proposed model
Fig. 2 The (a) Accuracy and (b) AUC of the proposed model with different numbers of hidden layers in comparison with baseline models
The prediction performance of all the models (mean ± standard deviation)
| Models | Accuracy | Precision | Recall | F1 | AUC |
|---|---|---|---|---|---|
| MLP_attention | 0.795 ± 0.059 | | | | |
| MLP | 0.651 ± 0.028 | 0.692 ± 0.022 | 0.741 ± 0.030 | 0.683 ± 0.041 | |
| LR | 0.655 ± 0.027 | 0.700 ± 0.019 | 0.792 ± 0.043 | 0.743 ± 0.024 | 0.684 ± 0.039 |
| SDAE | 0.623 ± 0.025 | 0.670 ± 0.018 | 0.782 ± 0.038 | 0.722 ± 0.022 | 0.658 ± 0.033 |
The p-value of paired t-test between the proposed model and baseline models
| Models | MLP_attention | MLP | LR | SDAE |
|---|---|---|---|---|
| MLP_attention | – | 0.003 | 0.005 | 0.0008 |
| MLP | | – | 0.009 | 0.005 |
| LR | | | – | 0.003 |
| SDAE | | | | – |
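The p-values above come from paired t-tests, which compare two models on the same cross-validation folds. A stdlib-only sketch of the paired t statistic, using hypothetical per-fold AUCs rather than the paper's data:

```python
import math

# Hypothetical per-fold AUCs for two models evaluated on the same CV folds
auc_a = [0.82, 0.79, 0.81, 0.77, 0.80]   # e.g. the proposed model
auc_b = [0.70, 0.68, 0.66, 0.69, 0.67]   # e.g. a baseline

# Paired t-test works on the per-fold differences
d = [a - b for a, b in zip(auc_a, auc_b)]
n = len(d)
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)   # sample variance
t_stat = mean_d / math.sqrt(var_d / n)

print(round(t_stat, 2))
```

A t statistic above the critical value (about 2.78 for 4 degrees of freedom at α = 0.05, two-sided) corresponds to a p-value below 0.05, matching how the table's entries are interpreted.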
Fig. 3 The heat map showing the contribution (attention weight) of each feature for readmission identified by the proposed model for 50 randomly selected patients
Fig. 4 The bar plots of the attention weights of each patient feature for two randomly selected patients
The top-ranked features of the two randomly selected patients in Fig. 4 (attention weights > 0.02)
| Patient 1 | | | Patient 2 | | |
|---|---|---|---|---|---|
| Feature ID | Name | Attention weights | Feature ID | Name | Attention weights |
| 21 | NT-proBNP | 0.114 | 21 | NT-proBNP | 0.192 |
| 32 | Sodium | 0.035 | 7 | SBP | 0.040 |
| 12 | CHD | 0.028 | 87 | Left ventricular end-systolic volume | 0.033 |
| 103 | Spironolactone | 0.027 | 15 | Diabetes | 0.028 |
| 89 | Left ventricular end-diastolic volume index | 0.025 | 60 | Platelet count | 0.028 |
| 94 | CCB | 0.024 | – | – | – |
| 78 | Left atrial diameter | 0.023 | – | – | – |
The top-ten globally ranked features of the proposed model and LR
| MLP_attention | | | LR | |
|---|---|---|---|---|
| Feature ID | Name | Frequency | Feature ID | Name |
| 21 | NT-proBNP | 736 | 21 | NT-proBNP |
| 94 | CCB | 374 | 7 | |
| 78 | Left atrial diameter | 225 | 41 | Lactate dehydrogenase |
| 103 | Spironolactone | 180 | 57 | Monocytes ratio |
| 7 | SBP | 167 | 3 | Height |
| 32 | Sodium | 146 | 8 | DBP (Diastolic blood pressure) |
| 85 | Interventricular septal thickness | 146 | 5 | BMI |
| 87 | Left ventricular end-systolic volume | 140 | 34 | Phosphorus |
| 10 | Anemia | 135 | 59 | Basophil ratio |
| 60 | Platelet count | 126 | 60 | Platelet count |
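The Frequency column suggests the global ranking for MLP_attention was obtained by counting how often each feature lands among a patient's top attention weights across the cohort. A sketch of that counting scheme on synthetic weights — the top-k rule and all numbers here are assumptions for illustration:

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(1)
n_patients, n_features, top_k = 100, 10, 3

# Hypothetical per-patient attention weights, normalized so each row sums to 1
raw = rng.random((n_patients, n_features))
raw[:, 2] += 2.0                                 # make feature 2 dominate, for illustration
att = raw / raw.sum(axis=1, keepdims=True)

# Count how often each feature appears in a patient's top-k attention weights
counts = Counter()
for weights in att:
    counts.update(np.argsort(weights)[-top_k:].tolist())

global_rank = [f for f, _ in counts.most_common()]
print(global_rank[:3], counts[2])
```

Under this scheme a feature's frequency is bounded by the number of patients, consistent with the counts in the table being well below the dataset size times k.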