
The medical algorithmic audit.

Xiaoxuan Liu1, Ben Glocker2, Melissa M McCradden3, Marzyeh Ghassemi4, Alastair K Denniston5, Lauren Oakden-Rayner6.   

Abstract

Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. We propose a medical algorithmic audit framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of errors, and anticipating their potential consequences. We suggest several approaches for testing algorithmic errors, including exploratory error analysis, subgroup testing, and adversarial testing, and provide examples from our own work and previous studies. The medical algorithmic audit is a tool that can be used to better understand the weaknesses of an artificial intelligence system and put in place mechanisms to mitigate their impact. We propose that safety monitoring and medical algorithmic auditing should be a joint responsibility between users and developers, and encourage the use of feedback mechanisms between these groups to promote learning and maintain safe deployment of artificial intelligence systems.
Copyright © 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.


Year:  2022        PMID: 35396183     DOI: 10.1016/S2589-7500(22)00003-6

Source DB:  PubMed          Journal:  Lancet Digit Health        ISSN: 2589-7500


  3 in total

1.  Generative adversarial networks and synthetic patient data: current challenges and future perspectives.

Authors:  Anmol Arora; Ananya Arora
Journal:  Future Healthc J       Date:  2022-07

2.  [Review] Sex trouble: Sex/gender slippage, sex confusion, and sex obsession in machine learning using electronic health records.

Authors:  Kendra Albert; Maggie Delano
Journal:  Patterns (N Y)       Date:  2022-08-12

3.  [Review] Expectations for Artificial Intelligence (AI) in Psychiatry.

Authors:  Scott Monteith; Tasha Glenn; John Geddes; Peter C Whybrow; Eric Achtyes; Michael Bauer
Journal:  Curr Psychiatry Rep       Date:  2022-10-10       Impact factor: 8.081

