| Literature DB >> 35330435 |
Anna Markella Antoniadi1,2, Miriam Galvin3, Mark Heverin3, Lan Wei1, Orla Hardiman2,3,4, Catherine Mooney1,2.
Abstract
Amyotrophic Lateral Sclerosis (ALS), also known as Motor Neuron Disease (MND), is a rare and fatal neurodegenerative disease. As ALS is currently incurable, treatment aims mainly to alleviate symptoms and improve quality of life (QoL). We designed a prototype Clinical Decision Support System (CDSS) to alert clinicians when a person with ALS is experiencing low QoL, in order to inform and personalise the support they receive. Explainability is important for the success of a CDSS and its acceptance by healthcare professionals. The aim of this work is to announce our prototype (C-ALS), supported by a first short evaluation of its explainability. Given the lack of similar studies and systems, this work is a valid proof-of-concept that will lead to future work. We developed a CDSS that was evaluated by members of the team of healthcare professionals who provide care to people with ALS in the ALS/MND Multidisciplinary Clinic in Dublin, Ireland. We conducted a user study in which participants were asked to review the CDSS and complete a short survey with a focus on explainability. Healthcare professionals demonstrated some uncertainty in understanding the system's output. Based on their feedback, we altered the explanation provided in the updated version of our CDSS. C-ALS provides local explanations of its predictions in a post-hoc manner, using SHAP (SHapley Additive exPlanations). The CDSS predicts the risk of low QoL in the form of a probability; a bar plot shows the feature importance for the specific prediction, along with verbal guidelines on how to interpret the results. Additionally, we provide the option of a global explanation of the system's function in the form of a bar plot showing the average importance of each feature. C-ALS is available online for academic use.
Keywords: ALS: Amyotrophic Lateral Sclerosis; CDSS; Explainable AI; MND; XAI; artificial intelligence; clinical decision support systems; explainability; machine learning; quality of life
Year: 2022 PMID: 35330435 PMCID: PMC8955774 DOI: 10.3390/jpm12030435
Source DB: PubMed Journal: J Pers Med ISSN: 2075-4426
Figure 1. Architecture of C-ALS (input, process, and output). The input feature space consists of the three features that were identified to be predictive of QoL (the patient’s age at disease onset, the primary caregiver’s employment status before the onset of their caregiving duties, and the patient’s ALSFRS-R score for orthopnoea). The three features are used by the XGBoost model to predict the outcome in the form of a probability, while SHAP is used to provide local explanations for the specific prediction in the form of a graphical representation.
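The SHAP values behind a local explanation like Figure 1's output can be illustrated with a dependency-free sketch. The model, weights, and feature values below are hypothetical stand-ins (the real C-ALS model is a gradient-boosted XGBoost tree, not a linear score); with only three features, exact Shapley values can be computed by enumerating every feature coalition, which is the quantity SHAP's TreeExplainer computes efficiently for tree models.

```python
from itertools import combinations
from math import factorial

def risk_model(x):
    # Hypothetical linear risk score over the three (scaled) features:
    # age at onset, caregiver employment status, orthopnoea ALSFRS-R score.
    # A linear model makes the Shapley arithmetic checkable by hand.
    w = (0.30, 0.25, 0.15)  # hypothetical weights, not from the paper
    return 0.20 + sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    v(S) evaluates the model with features in S taken from the patient
    and the rest held at baseline (population-average) values.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

patient = (0.8, 1.0, 0.4)   # hypothetical scaled feature values
baseline = (0.5, 0.5, 0.5)  # hypothetical population averages
phi = shapley_values(risk_model, patient, baseline)
```

By the efficiency property, the per-feature contributions plus the baseline prediction sum exactly to the patient's predicted risk; these contributions are what a SHAP bar plot such as the one in C-ALS displays.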
Figure 2. Screenshot of the first page of C-ALS. The first page describes the CDSS and three exemplar patients and allows the user to input the feature values to obtain a prediction. The prediction, along with explanations, opens in a new window.
Figure 3. Screenshot of a prediction explained by version 1 of C-ALS.
Overall CDSS evaluation. Responses to 5-point Likert-scale questions (1: strongly disagree, 5: strongly agree).
| Question | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Would you use a CDSS that may fall short in accuracy (i.e., sometimes make a wrong prediction) provided that an explanation is provided? | 0 (0%) | 2 (25%) | 6 (75%) | 0 (0%) | 0 (0%) |
| Would you find a CDSS that assesses the QoL of a patient with ALS useful for your decision-making regarding the patient’s and caregiver’s support provision? | 0 (0%) | 1 (12.5%) | 4 (50%) | 2 (25%) | 1 (12.5%) |
| Regarding our CDSS, would the provided output and explanation help you justify your clinical decision-making (e.g., to patients and colleagues)? | 0 (0%) | 1 (12.5%) | 4 (50%) | 3 (37.5%) | 0 (0%) |
| Does the visual representation of the CDSS output help you understand the predictions? | 0 (0%) | 0 (0%) | 3 (37.5%) | 5 (62.5%) | 0 (0%) |
| Does the visual representation of the CDSS output help you rationalise the predictions? | 1 (12.5%) | 0 (0%) | 2 (25%) | 5 (62.5%) | 0 (0%) |
| Does the explanation provided add towards your trust of model predictions? | 0 (0%) | 0 (0%) | 5 (62.5%) | 3 (37.5%) | 0 (0%) |
| Does the explanation provided help you decide on actionable steps you can undertake? | 0 (0%) | 1 (12.5%) | 5 (62.5%) | 2 (25%) | 0 (0%) |
Figure 4. Screenshot of a prediction explained by version 2 of C-ALS.