| Literature DB >> 35194568 |
Abstract
The principle behind artificial intelligence is mimicking human intelligence: performing tasks, recognizing patterns, and predicting outcomes by learning from data acquired from various sources. Artificial intelligence and machine learning algorithms have been widely used in autonomous driving, recommender systems in electronic commerce and social media, fintech, natural language understanding, and question answering systems. Artificial intelligence is also gradually changing the landscape of healthcare research (Yu et al. in Nat Biomed Eng 2:719-731, 2018). The rule-based approach, which relied on the curation of medical knowledge and the construction of robust decision rules, has drawn significant attention in disease diagnosis and clinical decision support for half a century. In recent years, machine learning algorithms such as deep learning, which can account for complex interactions between features, have shown promise in predictive modeling in healthcare (Deo in Circulation 132:1920-1930, 2015). Although many of these artificial intelligence and machine learning algorithms can achieve remarkably high performance, they are often difficult to adopt fully in practical clinical environments because some of them lack explainability. Explainable artificial intelligence (XAI) is emerging to help communicate a model's internal decisions, behavior, and actions to health care professionals. By explaining prediction outcomes, XAI earns the trust of clinicians, who can learn how to apply the predictive model in practical situations instead of blindly following its predictions. Owing to the complexity of medical knowledge, there remain many scenarios to explore in making XAI effective in clinical settings.
Keywords: Explainable artificial intelligence; Healthcare informatics; Human–computer interactions; Predictive modeling
Year: 2022 PMID: 35194568 PMCID: PMC8832418 DOI: 10.1007/s41666-022-00114-1
Source DB: PubMed Journal: J Healthc Inform Res ISSN: 2509-498X
Information-based explanation questions
| Input | What database is being utilized for the source of the data? What are the limitations of the utilized databases? What are the inclusion/exclusion criteria for the patient population? What are the characteristics of the patient population? What is the sample size? Is the data documentation consistent? What is the data latency? Is the data validated by clinical stakeholders? Is there any missing data? What is the ratio of training and testing data? How is the ground truth labeled? |
| Output | What type of output does the predictive model produce? What is the definition and logic of the output in the specific health problem? How do you interpret the presentation of the model output by the end users? What are the features that contributed to the model output? Which part of the medical workflow is the appropriate place to present the output of the model? How is the risk analysis affected by a model output? |
| Performance | What is the accuracy, sensitivity, specificity, and reliability of the model? How often do errors occur? What are the common errors in the model? Can the errors be identified quickly and how? How does the system monitor the model performance? How does the model perform with different clinical scenarios or patient populations? How can we improve the performance of the model? |
| How | How are the variables selected in the predictive model? How are the parameters in the model configured? How does the model deal with missing or inconsistent data? How can the end users access the features that contributed to a specific model output? What kind of algorithm is used? How does the model impact the current medical workflow? |
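As an illustration (not from the paper), the question "What are the features that contributed to the model output?" has a direct answer for a simple linear model: each feature's additive contribution to the logit is its weight times its value. A minimal sketch, with hypothetical weights and patient values chosen purely for demonstration:

```python
import math

# Hypothetical logistic model for an illustrative readmission-risk task.
# Weights, bias, and patient values are made up for this sketch.
WEIGHTS = {"age": 0.04, "prior_admissions": 0.8, "hemoglobin": -0.3}
BIAS = -2.0

def predict_risk(patient):
    """Logistic prediction: sigmoid of bias plus sum of weight * value."""
    z = BIAS + sum(WEIGHTS[f] * v for f, v in patient.items())
    return 1.0 / (1.0 + math.exp(-z))

def feature_contributions(patient):
    """Per-feature additive contribution to the logit (weight * value),
    sorted by absolute magnitude -- one simple way to answer 'which
    features contributed to this output?' for a linear model."""
    return sorted(
        ((f, WEIGHTS[f] * v) for f, v in patient.items()),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    )

patient = {"age": 70, "prior_admissions": 2, "hemoglobin": 11.0}
risk = predict_risk(patient)
contribs = feature_contributions(patient)
```

For nonlinear models such as deep networks, the contributions are no longer simple products, which is why model-agnostic attribution methods are an active topic in XAI; this sketch only shows the linear special case.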
Instance-based clarification questions
| Why | Why does a particular patient have this prediction? Why do patient A and patient B have the same prediction? |
| Why not | Why is a particular patient not predicted in this class? Why is a particular patient predicted in this class but not another class? Why do patient A and patient B have different predictions? |
| What if | What does the prediction model predict if there are changes in the patient (e.g., treatment)? What does the prediction model predict if there is a patient with certain features (e.g., demographic features)? What if a parameter is adjusted in a certain variable? What if a different combination of variables is utilized in the algorithm? |
| How to be that | What kind of patient gets a different prediction? How should the features change for this patient to get a different prediction? |
| How to still be this | What change is permitted for this patient to still get the same prediction? What is the limit of the feature(s) a patient can have to still get the same prediction? What feature(s) must be present or absent to guarantee the same prediction? What kind of patient gets this prediction? |
Fig. 1 Dialogue between the XAI system (explainer) and health care professionals (explainee): information-based explanation and instance-based clarification
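The instance-based "what if" and "how to be that" questions above can be probed mechanically on a model, as in this minimal sketch (not from the paper; the threshold model, coefficients, and feature grid are all hypothetical):

```python
# Counterfactual probing on a hypothetical linear threshold model.
# Weights, bias, and patient values are illustrative, not from the paper.

def classify(patient, weights, bias, threshold=0.0):
    """Binary class: 1 (e.g., high risk) if the linear score reaches the threshold."""
    score = bias + sum(weights[f] * patient[f] for f in weights)
    return 1 if score >= threshold else 0

def what_if(patient, feature, new_value, weights, bias):
    """'What if' query: re-run the model with a single feature changed."""
    changed = dict(patient, **{feature: new_value})
    return classify(changed, weights, bias)

def minimal_change(patient, feature, candidates, weights, bias):
    """Smallest candidate change to `feature` that flips the prediction --
    a crude answer to 'how should the features change for this patient
    to get a different prediction?'."""
    base = classify(patient, weights, bias)
    for value in sorted(candidates, key=lambda v: abs(v - patient[feature])):
        if what_if(patient, feature, value, weights, bias) != base:
            return value
    return None  # no candidate flips the prediction

weights = {"sbp": 0.02, "age": 0.03}   # hypothetical coefficients
bias = -6.0
patient = {"sbp": 160, "age": 80}      # scores -0.4, so predicted class 0
flip_sbp = minimal_change(patient, "sbp", [150, 170, 180, 190], weights, bias)
```

Real clinical XAI systems need counterfactuals that are also plausible and actionable (e.g., treatments that can actually be changed), which this brute-force search over a candidate grid does not address.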
Other questions
| Others | What is the problem the model is trying to solve? What measures can be utilized to demonstrate the impact of the model? What is the education and communication plan for the impacted users? What is the reporting strategy for the model? Are the metrics and measures of the model application defined? Is the solution scalable? |