
The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.

Aniek F Markus, Jan A Kors, Peter R Rijnbeek.

Abstract

Artificial intelligence (AI) has huge potential to improve the health and well-being of people, but adoption in clinical practice is still limited. Lack of transparency is identified as one of the main barriers to implementation, as clinicians should be confident that the AI system can be trusted. Explainable AI has the potential to overcome this issue and can be a step towards trustworthy AI. In this paper, we review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems for the health-care domain and contribute to formalization of the field of explainable AI. We argue that the reason to demand explainability determines what should be explained, as this in turn determines the relative importance of the properties of explainability (i.e. interpretability and fidelity). Based on this, we propose a framework to guide the choice between classes of explainable AI methods (explainable modelling versus post-hoc explanation; model-based, attribution-based, or example-based explanations; global and local explanations). Furthermore, we find that quantitative evaluation metrics, which are important for objective standardized evaluation, are still lacking for some properties (e.g. clarity) and types of explanations (e.g. example-based methods). We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice, and complementary measures might be needed to create trustworthy AI in health care (e.g. reporting data quality, performing extensive (external) validation, and regulation).
Copyright © 2020 The Authors. Published by Elsevier Inc. All rights reserved.
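
To make the abstract's distinction between the two method classes concrete, the sketch below is a minimal illustration (not taken from the paper; scikit-learn and synthetic data are assumed): an inherently interpretable logistic regression whose coefficients serve as a global, model-based explanation, contrasted with a random-forest "black box" that is explained only after training via permutation importance, a global, attribution-based post-hoc explanation.

    # Minimal illustrative sketch (assumptions: scikit-learn, synthetic data).
    # Not code from the paper; it only exemplifies the two classes of methods.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Explainable modelling: the model itself is interpretable; its
    # coefficients act as a global, model-based explanation.
    glass_box = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Coefficients (model-based, global):", glass_box.coef_[0])

    # Post-hoc explanation: attribute the behaviour of an opaque model
    # after training (attribution-based, global).
    black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    result = permutation_importance(black_box, X_test, y_test,
                                    n_repeats=10, random_state=0)
    print("Permutation importances (attribution-based, global):",
          result.importances_mean)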


Keywords:  Explainable artificial intelligence; Explainable modelling; Interpretability; Post-hoc explanation; Trustworthy artificial intelligence

Year:  2020        PMID: 33309898     DOI: 10.1016/j.jbi.2020.103655

Source DB:  PubMed          Journal:  J Biomed Inform        ISSN: 1532-0464            Impact factor:   6.317


Related records: 16 in total

1.  Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application.

Authors:  Vyacheslav Kharchenko; Herman Fesenko; Oleg Illiashenko
Journal:  Sensors (Basel)       Date:  2022-06-27       Impact factor: 3.847

2.  (Review) Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review.

Authors:  Haomin Chen; Catalina Gomez; Chien-Ming Huang; Mathias Unberath
Journal:  NPJ Digit Med       Date:  2022-10-19

3.  A mental models approach for defining explainable artificial intelligence.

Authors:  Michael Merry; Pat Riddle; Jim Warren
Journal:  BMC Med Inform Decis Mak       Date:  2021-12-09       Impact factor: 2.796

4.  Integrating artificial intelligence in bedside care for covid-19 and future pandemics.

Authors:  Michael Yu; An Tang; Kip Brown; Rima Bouchakri; Pascal St-Onge; Sheng Wu; John Reeder; Louis Mullie; Michaël Chassé
Journal:  BMJ       Date:  2021-12-31

5.  (Review) Overview of Explainable Artificial Intelligence for Prognostic and Health Management of Industrial Assets Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

Authors:  Ahmad Kamal Mohd Nor; Srinivasa Rao Pedapati; Masdi Muhammad; Víctor Leiva
Journal:  Sensors (Basel)       Date:  2021-12-01       Impact factor: 3.576

6.  Development and Structure of an Accurate Machine Learning Algorithm to Predict Inpatient Mortality and Hospice Outcomes in the Coronavirus Disease 2019 Era.

Authors:  Stephen Chi; Aixia Guo; Kevin Heard; Seunghwan Kim; Randi Foraker; Patrick White; Nathan Moore
Journal:  Med Care       Date:  2022-05-01       Impact factor: 2.983

7.  Editorial: Artificial Intelligence in Positron Emission Tomography.

Authors:  Hanyi Fang; Kuangyu Shi; Xiuying Wang; Chuantao Zuo; Xiaoli Lan
Journal:  Front Med (Lausanne)       Date:  2022-01-31

8.  Evolution of hospitalized patient characteristics through the first three COVID-19 waves in Paris area using machine learning analysis.

Authors:  Camille Jung; Jean-Baptiste Excoffier; Mathilde Raphaël-Rousseau; Noémie Salaün-Penquer; Matthieu Ortala; Christos Chouaid
Journal:  PLoS One       Date:  2022-02-22       Impact factor: 3.240

9.  Use of unstructured text in prognostic clinical prediction models: a systematic review.

Authors:  Tom M Seinen; Egill A Fridgeirsson; Solomon Ioannou; Daniel Jeannetot; Luis H John; Jan A Kors; Aniek F Markus; Victor Pera; Alexandros Rekkas; Ross D Williams; Cynthia Yang; Erik M van Mulligen; Peter R Rijnbeek
Journal:  J Am Med Inform Assoc       Date:  2022-06-14       Impact factor: 7.942

10.  (Review) Human Factors and Technological Characteristics Influencing the Interaction of Medical Professionals With Artificial Intelligence-Enabled Clinical Decision Support Systems: Literature Review.

Authors:  Michael Knop; Sebastian Weber; Marius Mueller; Bjoern Niehaves
Journal:  JMIR Hum Factors       Date:  2022-03-24
