
Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator.

William K Diprose1, Nicholas Buist2, Ning Hua3, Quentin Thurier3, George Shand4, Reece Robinson3.   

Abstract

OBJECTIVE: Implementation of machine learning (ML) may be limited by patients' right to "meaningful information about the logic involved" when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods.
MATERIALS AND METHODS: We designed a survey for physicians with a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control) and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association.
RESULTS: The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability (P < .001), between physician understanding and trust (P < .001), and between explainability and trust (P < .001). ML outputs that used model-agnostic explainability methods were preferred by 88% of physicians when compared with the control condition; however, no particular ML explainability method had a greater influence on intended physician behavior.
CONCLUSIONS: Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations but the explainability method did not alter intended physician behavior.
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.
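The abstract reports Cochran-Mantel-Haenszel (CMH) tests of association among understanding, explainability, and trust. As a rough illustration only (the study analyzed ordinal survey ratings; the data below are invented, and the pairing of "understood" with "trusted" stratified by ML output condition is an assumption for the sketch), a stratified 2x2 CMH test can be run with statsmodels:

```python
# Hypothetical sketch of a Cochran-Mantel-Haenszel test, in the spirit of the
# study's analysis. All counts are invented for illustration.
# Each 2x2 table: rows = understood (yes/no), cols = trusted (yes/no),
# one stratum per ML output condition.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

tables = [
    np.array([[25, 10], [8, 27]]),   # control (no explanation)
    np.array([[40, 12], [6, 30]]),   # explainability method A
    np.array([[38, 14], [7, 29]]),   # explainability method B
]

# Test of a common null odds ratio across strata (CMH test statistic).
result = StratifiedTable(tables).test_null_odds()
print(f"CMH statistic = {result.statistic:.2f}, p = {result.pvalue:.4g}")
```

With these fabricated counts the association is strong in every stratum, so the test rejects the null; the actual study's P < .001 results were computed on the real survey responses.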

Keywords:  artificial intelligence; decision support; explainability; interpretability; medicine

Year:  2020        PMID: 32106285      PMCID: PMC7647292          DOI: 10.1093/jamia/ocz229

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


References: 36 in total (first 10 shown)

1.  Clinical decision-making: coping with uncertainty.

Authors:  A F West; R R West
Journal:  Postgrad Med J       Date:  2002-06       Impact factor: 2.401

2.  Assessing the motivation of MDs to use computer-based support at the point-of-care in the Emergency Department.

Authors:  Dympna M O'Sullivan; Julie S Doyle; Wojtek J Michalowski; Szymon A Wilk; Ken J Farion; Craig E Kuziemsky
Journal:  AMIA Annu Symp Proc       Date:  2011-10-22

3.  Adoption of clinical decision support systems in a developing country: Antecedents and outcomes of physician's threat to perceived professional autonomy.

Authors:  Pouyan Esmaeilzadeh; Murali Sambasivan; Naresh Kumar; Hossein Nezakati
Journal:  Int J Med Inform       Date:  2015-04-08       Impact factor: 4.046

4.  Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?

Authors:  Hannah R Sullivan; Scott J Schweikart
Journal:  AMA J Ethics       Date:  2019-02-01

5.  What This Computer Needs Is a Physician: Humanism and Artificial Intelligence.

Authors:  Abraham Verghese; Nigam H Shah; Robert A Harrington
Journal:  JAMA       Date:  2018-01-02       Impact factor: 56.272

6.  How Should AI Be Developed, Validated, and Implemented in Patient Care?

Authors:  Michael Anderson; Susan Leigh Anderson
Journal:  AMA J Ethics       Date:  2019-02-01

7.  Explainable machine-learning predictions for the prevention of hypoxaemia during surgery.

Authors:  Scott M Lundberg; Bala Nair; Monica S Vavilala; Mayumi Horibe; Michael J Eisses; Trevor Adams; David E Liston; Daniel King-Wai Low; Shu-Fang Newman; Jerry Kim; Su-In Lee
Journal:  Nat Biomed Eng       Date:  2018-10-10       Impact factor: 25.671

8.  Relationship between nursing documentation and patients' mortality.

Authors:  Sarah A Collins; Kenrick Cato; David Albers; Karen Scott; Peter D Stetson; Suzanne Bakken; David K Vawdrey
Journal:  Am J Crit Care       Date:  2013-07       Impact factor: 2.228

9.  Neural hypernetwork approach for pulmonary embolism diagnosis.

Authors:  Matteo Rucco; David Sousa-Rodrigues; Emanuela Merelli; Jeffrey H Johnson; Lorenzo Falsetti; Cinzia Nitti; Aldo Salvi
Journal:  BMC Res Notes       Date:  2015-10-29

10.  Personalized glucose forecasting for type 2 diabetes using data assimilation.

Authors:  David J Albers; Matthew Levine; Bruce Gluckman; Henry Ginsberg; George Hripcsak; Lena Mamykina
Journal:  PLoS Comput Biol       Date:  2017-04-27       Impact factor: 4.475

Cited by: 19 in total (first 10 shown)

1.  Adapting the stage-based model of personal informatics for low-resource communities in the context of type 2 diabetes.

Authors:  Meghan Reading Turchioe; Marissa Burgermaster; Elliot G Mitchell; Pooja M Desai; Lena Mamykina
Journal:  J Biomed Inform       Date:  2020-09-20       Impact factor: 6.317

2.  Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review.

Authors:  Seyedeh Neelufar Payrovnaziri; Zhaoyi Chen; Pablo Rengifo-Moreno; Tim Miller; Jiang Bian; Jonathan H Chen; Xiuwen Liu; Zhe He
Journal:  J Am Med Inform Assoc       Date:  2020-07-01       Impact factor: 4.497

3.  Building trust in research through information and intent transparency with health information: representative cross-sectional survey of 502 US adults.

Authors:  Sabrina Mangal; Leslie Park; Meghan Reading Turchioe; Jacky Choi; Stephanie Niño de Rivera; Annie Myers; Parag Goyal; Lydia Dugdale; Ruth Masterson Creber
Journal:  J Am Med Inform Assoc       Date:  2022-08-16       Impact factor: 7.942

4.  Personalized Surgical Transfusion Risk Prediction Using Machine Learning to Guide Preoperative Type and Screen Orders.

Authors:  Sunny S Lou; Hanyang Liu; Chenyang Lu; Troy S Wildes; Bruce L Hall; Thomas Kannampallil
Journal:  Anesthesiology       Date:  2022-07-01       Impact factor: 8.986

5.  [Review] Systematic review of current natural language processing methods and applications in cardiology.

Authors:  Meghan Reading Turchioe; Alexander Volodarskiy; Jyotishman Pathak; Drew N Wright; James Enlou Tcheng; David Slotwiner
Journal:  Heart       Date:  2022-05-25       Impact factor: 7.365

6.  Examining the effect of explanation on satisfaction and trust in AI diagnostic systems.

Authors:  Lamia Alam; Shane Mueller
Journal:  BMC Med Inform Decis Mak       Date:  2021-06-03       Impact factor: 2.796

7.  Artificial intelligence in breast cancer screening: primary care provider preferences.

Authors:  Nathaniel Hendrix; Brett Hauber; Christoph I Lee; Aasthaa Bansal; David L Veenstra
Journal:  J Am Med Inform Assoc       Date:  2021-06-12       Impact factor: 4.497

8.  Deployment of artificial intelligence for radiographic diagnosis of COVID-19 pneumonia in the emergency department.

Authors:  Morgan Carlile; Brian Hurt; Albert Hsiao; Michael Hogarth; Christopher A Longhurst; Christian Dameff
Journal:  J Am Coll Emerg Physicians Open       Date:  2020-11-05

9.  Stroke risk prediction using machine learning: a prospective cohort study of 0.5 million Chinese adults.

Authors:  Matthew Chun; Robert Clarke; Benjamin J Cairns; David Clifton; Derrick Bennett; Yiping Chen; Yu Guo; Pei Pei; Jun Lv; Canqing Yu; Ling Yang; Liming Li; Zhengming Chen; Tingting Zhu
Journal:  J Am Med Inform Assoc       Date:  2021-07-30       Impact factor: 4.497

10.  Exploring perceptions of healthcare technologies enabled by artificial intelligence: an online, scenario-based survey.

Authors:  Alison L Antes; Sara Burrous; Bryan A Sisk; Matthew J Schuelke; Jason D Keune; James M DuBois
Journal:  BMC Med Inform Decis Mak       Date:  2021-07-20       Impact factor: 2.796


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.