| Literature DB >> 32089788 |
Andreas Holzinger¹, Georg Langs², Helmut Denk³, Kurt Zatloukal³, Heimo Müller¹,³.
Abstract
Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself: classic AI represented comprehensible, retraceable approaches, but their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability for statistical black-box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI: to reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide the definitions necessary to discriminate between explainability and causability, as well as a use case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction.
Keywords: artificial intelligence; causability; explainability; explainable AI; histopathology; medicine
Year: 2019 PMID: 32089788 PMCID: PMC7017860 DOI: 10.1002/widm.1312
Source DB: PubMed Journal: Wiley Interdiscip Rev Data Min Knowl Discov ISSN: 1942-4795
Figure 1 The best-performing statistical approaches today are black boxes and do not foster understanding, trust, and error correction (above). This implies an urgent need not only for explainable models, but also for explanation interfaces; as a measure of the quality of human-AI interaction we need the concept of causability, analogous to usability in classic human-computer interaction
Figure 2 An overview of how deep learning models can be probed for information regarding uncertainty, attribution, and prototypes
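As a concrete illustration of the "attribution" probe named in the caption above, the following is a minimal toy sketch (not taken from the paper) of gradient-times-input attribution: each input feature's relevance to a prediction is estimated as the output's sensitivity to that feature, scaled by the feature's value. The model, weights, and inputs here are all hypothetical stand-ins; real pipelines would use a trained network and automatic differentiation instead of finite differences.

```python
import math

def model(x, w):
    """Toy 'model' standing in for a trained network: a weighted sum
    squashed through a sigmoid, producing a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def attribution(x, w, eps=1e-5):
    """Gradient-times-input attribution, with the gradient approximated
    by a forward finite difference on each input dimension."""
    base = model(x, w)
    attrs = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        grad = (model(perturbed, w) - base) / eps  # d(output)/d(x_i)
        attrs.append(grad * x[i])                  # scale by input value
    return attrs

# Hypothetical usage: positive attribution marks features pushing the
# score up, negative attribution marks features pushing it down, and
# a zero-valued input receives zero attribution.
x = [1.0, 0.5, 0.0]
w = [2.0, -1.0, 3.0]
print(attribution(x, w))
```

Such per-feature relevance scores are what attribution heatmaps visualize over image regions; they highlight decision-relevant parts of the input but, as the article argues, do not by themselves constitute a causal explanation for a human expert.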
Figure 3 Features in a histology slide annotated by a human expert pathologist
| Term | Definition |
| --- | --- |
| Explainability | In a technical sense, highlights the decision-relevant parts of the representations used by the algorithms and the active parts of the algorithmic model, that is, those that contribute either to the model's accuracy on the training set or to a specific prediction for one particular observation. It does not refer to an explicit human model. |
| Causability | The extent to which an explanation of a statement to a human expert achieves a specified level of causal understanding with effectiveness, efficiency, and satisfaction in a specified context of use. |