
Principles and Practice of Explainable Machine Learning.

Vaishak Belle; Ioannis Papantonis

Abstract

Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and it currently drives applications in areas as diverse as computational biology, law, and finance. However, this highly positive impact comes with a significant challenge: how do we understand the decisions suggested by these systems so that we can trust them? In this report, we focus specifically on data-driven methods, machine learning (ML) and pattern recognition models in particular, in order to survey and distill the results and observations from the literature. The purpose of this report can be especially appreciated by noting that ML models are increasingly deployed in a wide range of businesses. However, with the increasing prevalence and complexity of methods, business stakeholders have, at the very least, a growing number of concerns about the drawbacks of models, data-specific biases, and so on. Analogously, data science practitioners are often unaware of approaches emerging from the academic literature, or may struggle to appreciate the differences between methods, and so end up using industry standards such as SHAP. Here, we have undertaken a survey to help industry practitioners (but also data scientists more broadly) better understand the field of explainable machine learning and apply the right tools. Our latter sections build a narrative around a putative data scientist and discuss how she might go about explaining her models by asking the right questions. In terms of organization, after motivating the area broadly, we discuss the main developments, including the principles that allow us to study transparent models vs. opaque models, as well as model-specific and model-agnostic post-hoc explainability approaches. We also briefly reflect on deep learning models, and conclude with a discussion of future research directions.
Copyright © 2021 Belle and Papantonis.
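The abstract names SHAP, a post-hoc, model-agnostic attribution tool, as a de facto industry standard. As a rough illustration of the Shapley-value idea behind such tools, the sketch below (a from-scratch toy, not the SHAP library or the paper's own code; the function `shapley_values`, the toy `model`, and the baseline convention are all illustrative assumptions) computes exact Shapley attributions for a tiny model by enumerating feature coalitions, treating absent features as set to a baseline value:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x).

    Features absent from a coalition are replaced by their baseline
    value -- one common convention in post-hoc explainability methods.
    Exact enumeration is exponential in the number of features, so
    this is only feasible for very small inputs.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight for a coalition of this size.
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

def model(v):
    # Toy "opaque" model: a weighted sum plus an interaction term.
    return 2.0 * v[0] + 1.0 * v[1] + v[0] * v[2]

phi = shapley_values(model, x=[1.0, 3.0, 2.0], baseline=[0.0, 0.0, 0.0])
# By the efficiency property, the attributions sum to f(x) - f(baseline);
# the interaction term's contribution is split between features 0 and 2.
```

In practice, libraries approximate these values by sampling coalitions, since exact enumeration over all 2^n feature subsets is intractable for realistic models.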

Keywords:  black-box models; explainable AI; machine learning; survey; transparent models

Year:  2021        PMID: 34278297      PMCID: PMC8281957          DOI: 10.3389/fdata.2021.688969

Source DB:  PubMed          Journal:  Front Big Data        ISSN: 2624-909X


References (7 in total)

1.  Multiple additive regression trees with application in epidemiology.

Authors:  Jerome H Friedman; Jacqueline J Meulman
Journal:  Stat Med       Date:  2003-05-15       Impact factor: 2.373

2.  Neural network explanation using inversion.

Authors:  Emad W Saad; Donald C Wunsch
Journal:  Neural Netw       Date:  2006-10-06

3.  Interpretable Deep Models for ICU Outcome Prediction.

Authors:  Zhengping Che; Sanjay Purushotham; Robinder Khemani; Yan Liu
Journal:  AMIA Annu Symp Proc       Date:  2017-02-10

4.  An incremental explanation of inference in Bayesian networks for increasing model trustworthiness and supporting clinical decision making.

Authors:  Evangelia Kyrimi; Somayyeh Mossadegh; Nigel Tai; William Marsh
Journal:  Artif Intell Med       Date:  2020-01-31       Impact factor: 5.326

5.  Improving the explainability of Random Forest classifier - user centered approach.

Authors:  Dragutin Petkovic; Russ Altman; Mike Wong; Arthur Vigil
Journal:  Pac Symp Biocomput       Date:  2018

6.  Applications of Bayesian network models in predicting types of hematological malignancies.

Authors:  Rupesh Agrahari; Amir Foroushani; T Roderick Docking; Linda Chang; Gerben Duns; Monika Hudoba; Aly Karsan; Habil Zare
Journal:  Sci Rep       Date:  2018-05-03       Impact factor: 4.379

Review 7.  Neurorobots as a Means Toward Neuroethology and Explainable AI.

Authors:  Kexin Chen; Tiffany Hwu; Hirak J Kashyap; Jeffrey L Krichmar; Kenneth Stewart; Jinwei Xing; Xinyun Zou
Journal:  Front Neurorobot       Date:  2020-10-19       Impact factor: 2.650

Cited by (15 in total; 10 shown)

1.  Predicting Bulk Average Velocity with Rigid Vegetation in Open Channels Using Tree-Based Machine Learning: A Novel Approach Using Explainable Artificial Intelligence.

Authors:  D P P Meddage; I U Ekanayake; Sumudu Herath; R Gobirahavan; Nitin Muttil; Upaka Rathnayake
Journal:  Sensors (Basel)       Date:  2022-06-10       Impact factor: 3.847

2.  EdgeSHAPer: Bond-centric Shapley value-based explanation method for graph neural networks.

Authors:  Andrea Mastropietro; Giuseppe Pasculli; Christian Feldmann; Raquel Rodríguez-Pérez; Jürgen Bajorath
Journal:  iScience       Date:  2022-08-30

3.  Calculation of exact Shapley values for support vector machines with Tanimoto kernel enables model interpretation.

Authors:  Christian Feldmann; Jürgen Bajorath
Journal:  iScience       Date:  2022-08-27

4.  Machine Learning for Understanding and Predicting Injuries in Football.

Authors:  Aritra Majumdar; Rashid Bakirov; Dan Hodges; Suzanne Scott; Tim Rees
Journal:  Sports Med Open       Date:  2022-06-07

5.  Machine Learning Approaches for Hospital Acquired Pressure Injuries: A Retrospective Study of Electronic Medical Records.

Authors:  Joshua J Levy; Jorge F Lima; Megan W Miller; Gary L Freed; A James O'Malley; Rebecca T Emeny
Journal:  Front Med Technol       Date:  2022-06-16

Review 6.  Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations.

Authors:  Anastasiya Kiseleva; Dimitris Kotzinos; Paul De Hert
Journal:  Front Artif Intell       Date:  2022-05-30

Review 7.  Translating the Machine: Skills that Human Clinicians Must Develop in the Era of Artificial Intelligence.

Authors:  Tariq M Aslam; David C Hoyle
Journal:  Ophthalmol Ther       Date:  2021-11-22

8.  Interpretable Clinical Decision Support System for Audiology Based on Predicted Common Audiological Functional Parameters (CAFPAs).

Authors:  Mareike Buhl
Journal:  Diagnostics (Basel)       Date:  2022-02-11

9.  Machine Learning Algorithms: Prediction and Feature Selection for Clinical Refracture after Surgically Treated Fragility Fracture.

Authors:  Hirokazu Shimizu; Ken Enda; Tomohiro Shimizu; Yusuke Ishida; Hotaka Ishizu; Koki Ise; Shinya Tanaka; Norimasa Iwasaki
Journal:  J Clin Med       Date:  2022-04-05       Impact factor: 4.241

10.  EXP-Crowd: A Gamified Crowdsourcing Framework for Explainability.

Authors:  Andrea Tocchetti; Lorenzo Corti; Marco Brambilla; Irene Celino
Journal:  Front Artif Intell       Date:  2022-04-22
