| Literature DB >> 35362822 |
Torbjørn Gundersen, Kristine Bærøe.
Abstract
This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.
Keywords: Artificial intelligence; Collaboration; Deliberation; Ethical design; Machine learning; Medical ethics; Professional responsibility
Year: 2022 PMID: 35362822 PMCID: PMC8975759 DOI: 10.1007/s11948-022-00369-2
Source DB: PubMed Journal: Sci Eng Ethics ISSN: 1353-3452 Impact factor: 3.777
The main components of the ideal of shared decision-making
| Short description of the components of shared decision-making | Expanded descriptions of the components of what is required of doctors in shared decision-making. These can be perceived as minimum standards | How AI can undermine the conditions for shared decision-making |
|---|---|---|
| (a) Understanding the patient’s condition | Doctors must understand the connection between patients’ conditions and the need for potential interventions on a general, technical, and normative level, and as translated into the particular contexts of individual patients. | If the clinical outcome of AI is beyond what doctors are able to understand themselves, their clinical competence is undermined, and with it a crucial presupposition for why patients have reason to trust them in the first place (Kerasidou, |
| (b) Trust in evidence | Doctors must base their decisions on sources of evidence they trust to make sure the information is relevant and adequate. | If doctors suggest treatments on the basis of AI sources of information they cannot fully account for, they force patients to place blind trust in their recommendations. This is just another version of paternalism. |
| (c) Due assessment of benefits and risks | Doctors must understand all relevant information on benefits and risks and the trade-offs between them. | If doctors cannot fully understand how, and why, AI has reached an outcome, say, a classification of an x-ray, uncertainty regarding assessments of risks, benefits, and trade-offs will follow. This, in turn, undermines patients’ reasons to have confidence in their judgment in their role as the expert in the relation. |
| (d) Accommodating patient’s understanding, communication, and deliberation | Doctors must convey assessments of risks and benefits to patients in a clear and accessible manner, ensure they have understood the information, and invite them to share their thoughts and deliberate together on the matter. | If AI systems make it hard for doctors to understand how, and why, they reach their outcomes, doctors cannot facilitate patients’ understanding either. Rather, they will have to paternalistically require that the patient accept that the AI ‘knows best’. |
The four models
| | Transformative and disruptive technology | Ethical attention | Division of labor | Benefits | Challenges |
|---|---|---|---|---|---|
| Ordinary evidence model | No | Mainly in use, not design | Distinct | Fits well with widely held notions of professional responsibility | Lacks a proper response to challenges pertaining to algorithmic risk, transparency, and accountability |
| Ethical design model | Yes | Mainly in design, not use | Distinct | Takes the distinct ethical challenges of medical AI seriously | Technocratic view on ethical choices and the problem of formalizing ethics |
| Collaborative model | Yes | Both in design and use | Integrated | Alleviates some of the accountability problem and promotes shared decision-making | No proper response to severe ethical risks |
| Public deliberation model | Yes | Both in design and use, and the public sphere | Partly distinct, partly integrated | Can deal with “meta-ethical risks” | The model needs more organizational specification |