
Commentary: Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent?

Michael D Abramoff, Zachary Mortensen, Chris Tava

Abstract


Keywords:  artificial intelligence; autonomous; bias; equity; healthcare; informed consent; patient outcome

Year:  2021        PMID: 34901082      PMCID: PMC8651697          DOI: 10.3389/fmed.2021.765936

Source DB:  PubMed          Journal:  Front Med (Lausanne)        ISSN: 2296-858X



Introduction

The recent paper by Ursin et al. (1) brings up crucial issues about the ethics of healthcare AI, and more specifically autonomous AI. These issues include the responsibilities and liabilities of disclosing information to patients. We appreciate the authors illustrating these issues with IDx-DR, which, as the first autonomous AI approved by the US Food and Drug Administration (FDA), crystallized so many issues around “a computer making a medical decision,” as the authors carefully point out.

Ethics in Healthcare AI

During the development, validation, and implementation of IDx-DR (Digital Diagnostics Inc), we started with an ethical framework built on the principles of non-maleficence, autonomy, and justice, which continues to be developed in various publications (2–4). This framework made it possible to track metrics around safety, equity, efficiency, transparency, validity, and accountability, allowing AI to be done the right way. This has led to validation of this biomarker-based AI under FDA oversight and its inclusion in standards of care. An important milestone was reimbursement at the $55 level by publicly funded insurance in the United States. This required an understanding of the value of “autonomous AI work” by all stakeholders, and has led to rapid and increasingly widespread implementation (5–7). The ethical framework thus continues to serve all stakeholders well, as we continue to jointly develop considerations and requirements for healthcare AI.

In this context, it is interesting to contrast healthcare autonomous AI with another type of digital technology, social media. Healthcare autonomous AI was grounded from the start in an ethical framework, and the technology stack was then built according to this framework. Social media, by contrast, started with the technology, and only now, almost two decades later, are we beginning to grapple with its ethical consequences.

a. IDx-DR is a fully autonomous AI system. While the authors refer to IDx-DR as “AI-aided DR diagnosis,” it is in fact a fully autonomous AI system, as explained above. As a consequence, Digital Diagnostics assumes liability for the performance of the AI, as is now also required by the American Medical Association's AI Policy (8). We remain convinced that clarifying this liability issue helps foster acceptance by physicians and other stakeholders.

b. AI bias. The authors rightfully bring up the problem of undesirable bias, including racial and ethnic bias. In Digital Diagnostics' ethical framework, including metrics for equity, we recognize that the bias problem applies to the entire AI lifecycle. This includes the choice of disease and disease severity to be diagnosed; AI algorithm design, including the use of priors such as biomarkers instead of prior-less, blank-slate black-box algorithms; the distribution of the training sets; rigorous validation for improved outcome metrics, including equity; and the choice of where the AI is implemented after regulatory approval (2, 9). As illustrated in these studies, IDx-DR is a biomarker-based AI system, and explicitly not a black-box system, avoiding the latter's many risks, including catastrophic failure and risk of bias (10–12).

c. Patient informed consent. The authors are correct that informed consent of patients, notifying them that an AI will be used, should be considered. For IDx-DR, both operators of the AI system and the physicians ordering it are trained in how to discuss the use of IDx-DR with patients. In fact, Digital Diagnostics has developed an AI facts label as part of the diagnostic output, so as to maximize transparency about which AI algorithms are used, their accuracy, and the relevant scientific evidence of their use and benefit.

d. CE Certification. Finally, IDx-DR was certified for autonomous use in the European Economic Area per its CE Certificate (13) and complies with GDPR Article 22.

Author Contributions

Ethics in healthcare AI section attributed to MDA. Section A: IDx-DR is a fully autonomous AI system was written by CT and ZM. Section B: AI bias was written by MDA and CT. Section C: Patient informed consent was written by MDA and ZM. Section D: CE certification was written by MDA. All authors contributed to the article and approved the submitted version.

Funding

This work was supported in part by the Robert C. Watzke MD Professorship (to MDA) and Research to Prevent Blindness, Inc., New York, New York (unrestricted grant to the Department of Ophthalmology and Visual Sciences, University of Iowa).

Conflict of Interest

MDA is a founder, executive chairman, consultant, investor, and shareholder of Digital Diagnostics. CT is a shareholder of Digital Diagnostics. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References (7 in total)

1.  Adversarial attacks on medical machine learning.

Authors:  Samuel G Finlayson; John D Bowers; Joichi Ito; Jonathan L Zittrain; Andrew L Beam; Isaac S Kohane
Journal:  Science       Date:  2019-03-22       Impact factor: 47.728

2.  Identifying Ethical Considerations for Machine Learning Healthcare Applications.

Authors:  Danton S Char; Michael D Abràmoff; Chris Feudtner
Journal:  Am J Bioeth       Date:  2020-11       Impact factor: 11.229

3.  Lessons Learned About Autonomous AI: Finding a Safe, Efficacious, and Ethical Path Through the Development Process.

Authors:  Michael D Abràmoff; Danny Tobey; Danton S Char
Journal:  Am J Ophthalmol       Date:  2020-03-12       Impact factor: 5.258

4.  11. Microvascular Complications and Foot Care: Standards of Medical Care in Diabetes-2020.

Authors: 
Journal:  Diabetes Care       Date:  2020-01       Impact factor: 19.112

5.  Foundational Considerations for Artificial Intelligence Using Ophthalmic Images.

Authors:  Michael D Abràmoff; Brad Cunningham; Bakul Patel; Malvina B Eydelman; Theodore Leng; Taiji Sakamoto; Barbara Blodi; S Marlene Grenon; Risa M Wolf; Arjun K Manrai; Justin M Ko; Michael F Chiang; Danton Char
Journal:  Ophthalmology       Date:  2021-08-31       Impact factor: 14.277

6.  Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices.

Authors:  Michael D Abràmoff; Philip T Lavin; Michele Birch; Nilay Shah; James C Folk
Journal:  NPJ Digit Med       Date:  2018-08-28

7.  Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent?

Authors:  Frank Ursin; Cristian Timmermann; Marcin Orzechowski; Florian Steger
Journal:  Front Med (Lausanne)       Date:  2021-07-21
