
Human Evaluation of Models Built for Interpretability.

Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel J Gershman, Finale Doshi-Velez.

Abstract

Recent years have seen a surge of interest in interpretable machine learning systems built on models that can be understood, at least to some degree, by domain experts. However, exactly what kinds of models are truly human-interpretable remains poorly understood. This work advances our understanding of precisely which factors make models interpretable in the context of decision sets, a specific class of logic-based model. We conduct carefully controlled human-subject experiments in two domains across three tasks based on human simulatability, through which we identify specific types of complexity that affect performance more heavily than others; these trends are consistent across tasks and domains. These results can inform the choice of regularizers during optimization to learn more interpretable models, and their consistency suggests that there may exist common design principles for interpretable machine learning systems.
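For readers unfamiliar with the model class, a decision set is an unordered collection of independent if-then rules, and a simulatability task asks a person to predict the model's output for a given input by applying those rules by hand. Below is a minimal sketch in Python; the rules, feature names, and default label are invented for illustration and are not taken from the paper's experimental materials.

    # Minimal sketch of a decision set: an unordered collection of
    # independent if-then rules. All rules and feature names here are
    # hypothetical, chosen only to illustrate the model class.

    decision_set = [
        # (condition, label) pairs; each condition tests input features.
        (lambda x: x["spicy"] and x["sweet"], "dessert"),
        (lambda x: x["salty"] and not x["sweet"], "snack"),
    ]
    DEFAULT_LABEL = "other"  # returned when no rule applies

    def predict(x):
        """Return the label of a matching rule, else the default label.

        A participant in a simulatability task does this by hand: scan
        the rules, find one whose condition holds, and read off its label.
        """
        for condition, label in decision_set:
            if condition(x):
                return label
        return DEFAULT_LABEL

    print(predict({"spicy": True, "sweet": True, "salty": False}))   # dessert
    print(predict({"spicy": False, "sweet": False, "salty": True}))  # snack

Complexity factors of the kind the abstract mentions (for instance, the number of rules or the number of conditions per rule) are quantities a regularizer could penalize during optimization.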


Year:  2019        PMID: 33623933      PMCID: PMC7899148     

Source DB:  PubMed          Journal:  Proc AAAI Conf Hum Comput Crowdsourc


References  (6 in total)

1.  Minimization of Boolean complexity in human concept learning.

Authors:  J Feldman
Journal:  Nature       Date:  2000-10-05       Impact factor: 49.962

2.  The magical number seven plus or minus two: some limits on our capacity for processing information.

Authors:  G A Miller
Journal:  Psychol Rev       Date:  1956-03       Impact factor: 8.934

3.  (Review) Interface design principles for usable decision support: a targeted review of best practices for clinical prescribing interventions.

Authors:  Jan Horsky; Gordon D Schiff; Douglas Johnston; Lauren Mercincavage; Douglas Bell; Blackford Middleton
Journal:  J Biomed Inform       Date:  2012-09-17       Impact factor: 6.317

4.  Interpretable Decision Sets: A Joint Framework for Description and Prediction.

Authors:  Himabindu Lakkaraju; Stephen H Bach; Jure Leskovec
Journal:  KDD       Date:  2016-08

5.  A feature-integration theory of attention.

Authors:  A M Treisman; G Gelade
Journal:  Cogn Psychol       Date:  1980-01       Impact factor: 3.468

6.  Human Evaluation of Models Built for Interpretability.

Authors:  Isaac Lage; Emily Chen; Jeffrey He; Menaka Narayanan; Been Kim; Samuel J Gershman; Finale Doshi-Velez
Journal:  Proc AAAI Conf Hum Comput Crowdsourc       Date:  2019-10-28
Cited by  (4 in total)

1.  (Review) A Review of Recent Deep Learning Approaches in Human-Centered Machine Learning.

Authors:  Tharindu Kaluarachchi; Andrew Reis; Suranga Nanayakkara
Journal:  Sensors (Basel)       Date:  2021-04-03       Impact factor: 3.576

2.  What is Interpretability?

Authors:  Adrian Erasmus; Tyler D P Brunet; Eyal Fisher
Journal:  Philos Technol       Date:  2020-11-12

3.  The effect of machine learning explanations on user trust for automated diagnosis of COVID-19.

Authors:  Kanika Goel; Renuka Sindhgatta; Sumit Kalra; Rohan Goel; Preeti Mutreja
Journal:  Comput Biol Med       Date:  2022-05-08       Impact factor: 6.698

4.  Human Evaluation of Models Built for Interpretability.

Authors:  Isaac Lage; Emily Chen; Jeffrey He; Menaka Narayanan; Been Kim; Samuel J Gershman; Finale Doshi-Velez
Journal:  Proc AAAI Conf Hum Comput Crowdsourc       Date:  2019-10-28
