
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.

Cynthia Rudin

Abstract

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.


Year:  2019        PMID: 35603010      PMCID: PMC9122117          DOI: 10.1038/s42256-019-0048-x

Source DB:  PubMed          Journal:  Nat Mach Intell        ISSN: 2522-5839


References:  5 in total

1.  Definitions, methods, and applications in interpretable machine learning.

Authors:  W James Murdoch; Chandan Singh; Karl Kumbier; Reza Abbasi-Asl; Bin Yu
Journal:  Proc Natl Acad Sci U S A       Date:  2019-10-16       Impact factor: 11.205

2.  The World Health Organization Adult Attention-Deficit/Hyperactivity Disorder Self-Report Screening Scale for DSM-5.

Authors:  Berk Ustun; Lenard A Adler; Cynthia Rudin; Stephen V Faraone; Thomas J Spencer; Patricia Berglund; Michael J Gruber; Ronald C Kessler
Journal:  JAMA Psychiatry       Date:  2017-05-01       Impact factor: 21.596

3.  Population-Level Prediction of Type 2 Diabetes From Claims Data and Analysis of Risk Factors.

Authors:  Narges Razavian; Saul Blecker; Ann Marie Schmidt; Aaron Smith-McLallen; Somesh Nigam; David Sontag
Journal:  Big Data       Date:  2015-12       Impact factor: 2.128

4.  Modeling recovery curves with application to prostatectomy.

Authors:  Fulton Wang; Cynthia Rudin; Tyler H Mccormick; John L Gore
Journal:  Biostatistics       Date:  2019-10-01       Impact factor: 5.899

5.  Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Authors:  John R Zech; Marcus A Badgeley; Manway Liu; Anthony B Costa; Joseph J Titano; Eric Karl Oermann
Journal:  PLoS Med       Date:  2018-11-06       Impact factor: 11.069

Cited by:  112 in total

1.  Explaining a series of models by propagating Shapley values.

Authors:  Hugh Chen; Scott M Lundberg; Su-In Lee
Journal:  Nat Commun       Date:  2022-08-03       Impact factor: 17.694

2.  Life-threatening ventricular arrhythmia prediction in patients with dilated cardiomyopathy using explainable electrocardiogram-based deep neural networks.

Authors:  Arjan Sammani; Rutger R van de Leur; Michiel T H M Henkens; Mathias Meine; Peter Loh; Rutger J Hassink; Daniel L Oberski; Stephane R B Heymans; Pieter A Doevendans; Folkert W Asselbergs; Anneline S J M Te Riele; René van Es
Journal:  Europace       Date:  2022-10-13       Impact factor: 5.486

3.  Causal machine learning for healthcare and precision medicine. (Review)

Authors:  Pedro Sanchez; Jeremy P Voisey; Tian Xia; Hannah I Watson; Alison Q O'Neil; Sotirios A Tsaftaris
Journal:  R Soc Open Sci       Date:  2022-08-03       Impact factor: 3.653

4.  PIP: Pictorial Interpretable Prototype Learning for Time Series Classification.

Authors:  Alireza Ghods; Diane J Cook
Journal:  IEEE Comput Intell Mag       Date:  2022-01-12       Impact factor: 9.809

5.  Mitigating Bias in Radiology Machine Learning: 3. Performance Metrics.

Authors:  Shahriar Faghani; Bardia Khosravi; Kuan Zhang; Mana Moassefi; Jaidip Manikrao Jagtap; Fred Nugen; Sanaz Vahdati; Shiba P Kuanar; Seyed Moein Rassoulinejad-Mousavi; Yashbir Singh; Diana V Vera Garcia; Pouria Rouzrokh; Bradley J Erickson
Journal:  Radiol Artif Intell       Date:  2022-08-24

6.  Artificial intelligence decision points in an emergency department.

Authors:  Hansol Chang; Won Chul Cha
Journal:  Clin Exp Emerg Med       Date:  2022-09-30

7.  EdgeSHAPer: Bond-centric Shapley value-based explanation method for graph neural networks.

Authors:  Andrea Mastropietro; Giuseppe Pasculli; Christian Feldmann; Raquel Rodríguez-Pérez; Jürgen Bajorath
Journal:  iScience       Date:  2022-08-30

8.  All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously.

Authors:  Aaron Fisher; Cynthia Rudin; Francesca Dominici
Journal:  J Mach Learn Res       Date:  2019       Impact factor: 5.177

9.  Translational NLP: A New Paradigm and General Principles for Natural Language Processing Research.

Authors:  Denis Newman-Griffis; Jill Fain Lehman; Carolyn Rosé; Harry Hochheiser
Journal:  Proc Conf       Date:  2021-06

10.  Artificial intelligence in hospitals: providing a status quo of ethical considerations in academia to guide future research.

Authors:  Milad Mirbabaie; Lennart Hofeditz; Nicholas R J Frick; Stefan Stieglitz
Journal:  AI Soc       Date:  2021-06-28
