Nathaniel Hendrix1, Brett Hauber1,2, Christoph I Lee3,4,5, Aasthaa Bansal1, David L Veenstra1. 1. The Comparative Health Outcomes, Policy & Economics (CHOICE) Institute, University of Washington School of Pharmacy, Seattle, Washington, USA. 2. RTI Health Solutions, Research Triangle Park, North Carolina, USA. 3. Department of Radiology, University of Washington School of Medicine, Seattle, Washington, USA. 4. Department of Health Services, University of Washington School of Public Health, Seattle, Washington, USA. 5. Hutchinson Institute for Cancer Outcomes Research, Seattle, Washington, USA.
Abstract
BACKGROUND: Artificial intelligence (AI) is increasingly being proposed for use in medicine, including breast cancer screening (BCS). Little is known, however, about referring primary care providers' (PCPs') preferences for this technology. METHODS: We identified the most important attributes of AI BCS for ordering PCPs using qualitative interviews: sensitivity, specificity, radiologist involvement, understandability of AI decision-making, supporting evidence, and diversity of training data. We invited US-based PCPs to participate in an internet-based experiment designed to force participants to trade off among the attributes of hypothetical AI BCS products. Responses were analyzed with random parameters logit and latent class models to assess how different attributes affect the choice to recommend AI-enhanced screening. RESULTS: Ninety-one PCPs participated. Sensitivity was most important, and most PCPs viewed radiologist participation in mammography interpretation as important. Other important attributes were specificity, understandability of AI decision-making, and diversity of data. We identified 3 classes of respondents: "Sensitivity First" (41%) found sensitivity to be more than twice as important as other attributes; "Against AI Autonomy" (24%) wanted radiologists to confirm every image; "Uncertain Trade-Offs" (35%) viewed most attributes as having similar importance. A majority (76%) accepted the use of AI in a "triage" role that would allow it to filter out likely negatives without radiologist confirmation. CONCLUSIONS AND RELEVANCE: Sensitivity was the most important attribute overall, but other key attributes should be addressed to produce clinically acceptable products. We also found that most PCPs accept the use of AI to make determinations about likely negative mammograms without radiologist confirmation.