
The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions.

Sebastian Bruckert, Bettina Finzel, Ute Schmid.

Abstract

The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has led to wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases within biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms are black boxes, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the whole prediction process. This poses a serious challenge for human decision makers, who must develop trust, which is much needed in life-changing decision tasks. This paper addresses the question of how expert companion systems for decision support can be designed to be interpretable, and therefore transparent and comprehensible, for humans. In addition, an approach to interactive ML and human-in-the-loop learning is demonstrated that integrates human expert knowledge into ML models, so that humans and machines act as companions within a critical decision task. We especially address the problem of Semantic Alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations and interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework for the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a hands-on cookbook for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods for the medical domain and include a realization concept of our framework.
The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process that provides human ML users not only with trust but also with stronger participation in the learning process.
Copyright © 2020 Bruckert, Finzel and Schmid.

Keywords:  companion; explainable artificial intelligence; interactive ML; interpretability; medical decision support; medical diagnosis; trust

Year:  2020        PMID: 33733193      PMCID: PMC7861251          DOI: 10.3389/frai.2020.507973

Source DB:  PubMed          Journal:  Front Artif Intell        ISSN: 2624-8212


References:  14 in total

1.  Explanation and understanding. (Review)

Authors:  Frank C Keil
Journal:  Annu Rev Psychol       Date:  2006       Impact factor: 24.137

2.  An Observational Study of Deep Learning and Automated Evaluation of Cervical Images for Cancer Screening.

Authors:  Liming Hu; David Bell; Sameer Antani; Zhiyun Xue; Kai Yu; Matthew P Horning; Noni Gachuhi; Benjamin Wilson; Mayoore S Jaiswal; Brian Befano; L Rodney Long; Rolando Herrero; Mark H Einstein; Robert D Burk; Maria Demarco; Julia C Gage; Ana Cecilia Rodriguez; Nicolas Wentzensen; Mark Schiffman
Journal:  J Natl Cancer Inst       Date:  2019-09-01       Impact factor: 13.506

3.  A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.

Authors:  Kristin E Schaefer; Jessie Y C Chen; James L Szalma; P A Hancock
Journal:  Hum Factors       Date:  2016-03-22       Impact factor: 2.888

4.  Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data.

Authors:  Milena A Gianfrancesco; Suzanne Tamang; Jinoos Yazdany; Gabriela Schmajuk
Journal:  JAMA Intern Med       Date:  2018-11-01       Impact factor: 21.873

5.  Elicitation of neurological knowledge with argument-based machine learning.

Authors:  Vida Groznik; Matej Guid; Aleksander Sadikov; Martin Možina; Dejan Georgiev; Veronika Kragelj; Samo Ribarič; Zvezdan Pirtošek; Ivan Bratko
Journal:  Artif Intell Med       Date:  2012-10-12       Impact factor: 5.326

6.  Interactive machine learning for health informatics: when do we need the human-in-the-loop?

Authors:  Andreas Holzinger
Journal:  Brain Inform       Date:  2016-03-02

7.  Can machine-learning improve cardiovascular risk prediction using routine clinical data?

Authors:  Stephen F Weng; Jenna Reps; Joe Kai; Jonathan M Garibaldi; Nadeem Qureshi
Journal:  PLoS One       Date:  2017-04-04       Impact factor: 3.240

8.  Identifying Clinical Terms in Medical Text Using Ontology-Guided Machine Learning.

Authors:  Aryan Arbabi; David R Adams; Sanja Fidler; Michael Brudno
Journal:  JMIR Med Inform       Date:  2019-05-10

9.  Deep neural networks outperform human expert's capacity in characterizing bioleaching bacterial biofilm composition.

Authors:  Antoine Buetti-Dinh; Vanni Galli; Sören Bellenberg; Olga Ilie; Malte Herold; Stephan Christel; Mariia Boretska; Igor V Pivkin; Paul Wilmes; Wolfgang Sand; Mario Vera; Mark Dopson
Journal:  Biotechnol Rep (Amst)       Date:  2019-03-07

10.  Resolving challenges in deep learning-based analyses of histopathological images using explanation methods.

Authors:  Miriam Hägele; Philipp Seegerer; Sebastian Lapuschkin; Michael Bockmayr; Wojciech Samek; Frederick Klauschen; Klaus-Robert Müller; Alexander Binder
Journal:  Sci Rep       Date:  2020-04-14       Impact factor: 4.379

Cited by:  2 in total

1.  Effects of a Differential Diagnosis List of Artificial Intelligence on Differential Diagnoses by Physicians: An Exploratory Analysis of Data from a Randomized Controlled Study.

Authors:  Yukinori Harada; Shinichi Katsukura; Ren Kawamura; Taro Shimizu
Journal:  Int J Environ Res Public Health       Date:  2021-05-23       Impact factor: 3.390

2.  Interactive System for Similarity-Based Inspection and Assessment of the Well-Being of mHealth Users.

Authors:  Subash Prakash; Vishnu Unnikrishnan; Rüdiger Pryss; Robin Kraft; Johannes Schobel; Ronny Hannemann; Berthold Langguth; Winfried Schlee; Myra Spiliopoulou
Journal:  Entropy (Basel)       Date:  2021-12-17       Impact factor: 2.524
