
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations.

Neil Jethani, Mukund Sudarshan, Yindalon Aphinyanaphongs, Rajesh Ranganath.

Abstract

While the need for interpretable machine learning has been established, many common approaches are slow, lack fidelity, or are hard to evaluate. Amortized explanation methods reduce the cost of providing interpretations by learning a global selector model that returns feature importances for a single instance of data. The selector model is trained to optimize the fidelity of the interpretations, as evaluated by a predictor model for the target. Popular methods learn the selector and predictor models in concert, which we show allows predictions to be encoded within interpretations. We introduce EVAL-X, a method to quantitatively evaluate interpretations, and REAL-X, an amortized explanation method; both learn a predictor model that approximates the true data-generating distribution given any subset of the input. We show that EVAL-X can detect when predictions are encoded in interpretations, and demonstrate the advantages of REAL-X through quantitative and radiologist evaluation.
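The core idea the abstract describes, training a predictor on randomly masked inputs so it approximates the target distribution given any feature subset, and then scoring an interpretation by that predictor's fidelity, can be illustrated with a toy sketch. This is not the paper's implementation: the data, the zero-plus-mask-channel masking scheme, and the plain logistic-regression predictor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumption): the label depends only on feature 0.
X = rng.normal(size=(2000, 4))
y = (X[:, 0] > 0).astype(float)

def mask_inputs(X, keep_prob, rng):
    """Randomly drop features; dropped entries are zeroed and a
    parallel mask channel records which features were kept."""
    mask = rng.random(X.shape) < keep_prob
    return np.concatenate([X * mask, mask.astype(float)], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Predictor: logistic regression trained on randomly masked inputs, so it
# learns to predict y from arbitrary feature subsets (the ingredient the
# abstract attributes to EVAL-X / REAL-X, here in heavily simplified form).
w = np.zeros(X.shape[1] * 2)
for _ in range(300):  # full-batch gradient descent on the logistic loss
    Xm = mask_inputs(X, keep_prob=0.5, rng=rng)
    p = sigmoid(Xm @ w)
    w -= 0.1 * Xm.T @ (p - y) / len(y)

def fidelity(selected):
    """Score an interpretation: keep only the selected features and
    measure the masked predictor's average log-likelihood (higher = more faithful)."""
    mask = np.zeros_like(X, dtype=bool)
    mask[:, selected] = True
    Xm = np.concatenate([X * mask, mask.astype(float)], axis=1)
    p = np.clip(sigmoid(Xm @ w), 1e-6, 1 - 1e-6)
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

good = fidelity([0])  # the truly predictive feature
bad = fidelity([3])   # an irrelevant feature
```

Because the predictor was trained across random subsets rather than jointly with a selector, a selected-feature set is scored only on the information it actually carries, which is what blocks the prediction-encoding failure mode the abstract highlights.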

Year:  2021        PMID: 33954293      PMCID: PMC8096519     

Source DB:  PubMed          Journal:  Proc Mach Learn Res


  4 in total

1.  Evaluating the Visualization of What a Deep Neural Network Has Learned.

Authors:  Wojciech Samek; Alexander Binder; Grégoire Montavon; Sebastian Lapuschkin; Klaus-Robert Müller
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2017-11       Impact factor: 10.451

2.  Mastering the game of Go without human knowledge.

Authors:  David Silver; Julian Schrittwieser; Karen Simonyan; Ioannis Antonoglou; Aja Huang; Arthur Guez; Thomas Hubert; Lucas Baker; Matthew Lai; Adrian Bolton; Yutian Chen; Timothy Lillicrap; Fan Hui; Laurent Sifre; George van den Driessche; Thore Graepel; Demis Hassabis
Journal:  Nature       Date:  2017-10-18       Impact factor: 49.962

3.  Predicting effects of noncoding variants with deep learning-based sequence model.

Authors:  Jian Zhou; Olga G Troyanskaya
Journal:  Nat Methods       Date:  2015-08-24       Impact factor: 28.547

4.  Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Authors:  John R Zech; Marcus A Badgeley; Manway Liu; Anthony B Costa; Joseph J Titano; Eric Karl Oermann
Journal:  PLoS Med       Date:  2018-11-06       Impact factor: 11.069

