Milad Mirbabaie, Lennart Hofeditz, Nicholas R. J. Frick, Stefan Stieglitz.
Abstract
The application of artificial intelligence (AI) in hospitals yields many advantages but also confronts healthcare with ethical questions and challenges. While various disciplines have conducted specific research on the ethical considerations of AI in hospitals, the literature still requires a holistic overview. By conducting a systematic discourse approach highlighted by expert interviews with healthcare specialists, we identified the status quo of interdisciplinary research in academia on ethical considerations and dimensions of AI in hospitals. We found 15 fundamental manuscripts by constructing a citation network for the ethical discourse, and we extracted actionable principles and their relationships. We provide an agenda to guide academia, framed under the principles of biomedical ethics. We provide an understanding of the current ethical discourse of AI in clinical environments, identify where further research is pressingly needed, and discuss additional research questions that should be addressed. We also guide practitioners to acknowledge AI-related benefits in hospitals and to understand the related ethical concerns. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00146-021-01239-4.
Keywords: Artificial intelligence; Discourse approach; Ethics; Healthcare; Hospitals
Year: 2021 PMID: 34219989 PMCID: PMC8238382 DOI: 10.1007/s00146-021-01239-4
Source DB: PubMed Journal: AI Soc ISSN: 0951-5666
Fig. 1 Adapted discourse approach based on Larsen et al. (2019) to derive a research agenda
Formal grouping of research questions to guide future research on ethical dimensions of AI in hospitals
| Bioethical principles | Actionable principles | Exemplary research questions |
|---|---|---|
| Beneficence | Vigilance; Security; Privacy; Avoiding bias and harms | 1. How can the principle of fairness be defined in the context of using AI in hospitals? 2. Which medical data should be used to derive AI recommendations for therapeutic and treatment processes? 3. How can AI systems inform decisions made by healthcare professionals? 4. How can disadvantages to patients belonging to certain minority groups be removed or reduced? 5. In which application domains of digital health can AI be introduced as decision support systems to enhance hospital procedures and patient treatment? 6. To what extent can AI assist with difficult therapy decisions for certain patient groups? |
| Non-maleficence | Privacy; Security; Vigilance | 1. What are possible harms caused by using AI in hospitals? 2. How can bias within the medical data used by AI be recognized and resolved by healthcare professionals? 3. How could a control mechanism for decision support for physicians through AI in hospitals be designed and developed? 4. How can the awareness of vigilance regarding AI used in hospitals be increased? 5. How can it be ensured that medical information is not retrieved by third parties? 6. To what extent can external data manipulations within AI datasets be detected and prevented by physicians? |
| Justice | Humanity; Feasibility; Interoperability/generalizability | 1. How can AI applications in hospitals contribute to the common good of a society? 2. How can the common good be defined and interpreted by AI applied in clinical environments? 3. Which guidelines are essential to ensure the common good when using AI in hospitals? 4. To what extent can physicians be psychologically relieved of moral dilemmas when using AI in hospitals? 5. How is AI able to improve the doctor-patient relationship in hospitals? 6. How can existing AI applications in hospitals be transferred to other conditions, departments, countries, and cultures? 7. To what extent are generalizable AI results ensured? |
| Autonomy | Accountability; (Social) responsibility; (Legal) liability; Interventions; Informed consent; Education | 1. To what extent do physicians perceive themselves to be losing their autonomy when AI is applied in hospitals? 2. How should the application of AI in hospitals be transparently presented to medical experts and patients? 3. Who can be held accountable and socially responsible for AI-driven decisions, and under which clinical conditions? 4. How can the legal liability for using AI in hospitals be clarified and implemented in a legal foundation? 5. Who is accountable and responsible for ensuring legal alignment when using AI in hospitals? 6. How can AI accompany its outputs with concrete recommendations for use in medical interventions? 7. How can it be ensured that both the physicians and the patients are aware of the consequences when consenting to the use of AI in a hospital? 8. How should AI applications be designed to be utilized only under voluntary conditions among clinicians and patients? 9. How do we need to educate and train physicians to ensure an ethical use of AI in hospitals? 10. What kind of training increases trustworthiness in using AI in hospitals? |
Identified fundamental manuscripts of the discourse on the ethical use of AI in healthcare
| Authors | Count | Score |
|---|---|---|
| Vayena et al. | 8 | 2.667 |
| Ting et al. | 8 | 2 |
| Char et al. | 6 | 2 |
| McKinney et al. | 2 | 2 |
| Zeng et al. | 2 | 2 |
| Yu et al. | 5 | 1.667 |
| Gulshan et al. | 8 | 1.6 |
| Reddy et al. | 3 | 1.5 |
| Yu and Kohane | 3 | 1.5 |
| Schiff and Borenstein | 3 | 1.5 |
| Parikh et al. | 3 | 1.5 |
| Luxton | 3 | 1.5 |
| He et al. | 3 | 1.5 |
| Froomkin et al. | 3 | 1.5 |
| Cath | 4 | 1.334 |
Fig. 2 Citation network of the 15 fundamental manuscripts
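The core of a citation-network analysis like the one in Fig. 2 can be sketched in plain Python. This is an illustrative reconstruction only: the record above does not describe the study's exact network-building procedure or its "Score" metric, and the article IDs and edges below are hypothetical.

```python
# Hedged sketch: represent a citation network as a (citing, cited) edge
# list and rank papers by in-degree, i.e. how often each paper is cited
# within the corpus. IDs and edges are invented for illustration; the
# study's actual construction and scoring are not given in this record.

def in_degree(edges):
    """Count incoming citations for every paper in a (citing, cited) edge list."""
    counts = {}
    for citing, cited in edges:
        counts[cited] = counts.get(cited, 0) + 1
        counts.setdefault(citing, 0)  # papers that only cite get a count of 0
    return counts

edges = [
    ("Reddy2020", "Vayena2018"),
    ("Schiff2019", "Vayena2018"),
    ("He2019", "Ting2019"),
    ("Reddy2020", "Ting2019"),
]

# Rank most-cited first; ties broken alphabetically for reproducibility.
ranking = sorted(in_degree(edges).items(), key=lambda kv: (-kv[1], kv[0]))
print(ranking[0])  # → ('Ting2019', 2)
```

In the actual study, the most-cited nodes of such a network were taken as the "fundamental manuscripts" of the discourse; the in-degree here plays the role of the "Count" column above.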
Sample overview of expert interviews with physicians and senior-level experts
| Interviewee | Gender | Age | Tenure (years) | Position | Discipline | Hospital | Duration |
|---|---|---|---|---|---|---|---|
| E1 | f | 31 | 3.5 | Resident doctor | Obstetric care | University Hospital of Frankfurt, Germany | 28:17 |
| E2 | f | 38 | 7 | Senior physician | Cranio-maxillofacial surgery | University Hospital of Dusseldorf, Germany | 31:38 |
| E3 | f | 35 | 5 | Senior physician | Cranio-maxillofacial surgery | University Hospital of Dusseldorf, Germany | 30:30 |
| E4 | f | 31 | 2 | Resident doctor | Cranio-maxillofacial surgery | University Hospital of Dusseldorf, Germany | 35:42 |
| E5 | m | 67 | 20 | Chief physician | Anesthesia | Retired | 42:33 |
| E6 | m | 44 | 17 | Head of Corporate Communications | Digitization Think Tank | Clinical Center Dortmund, Germany | 32:41 |
Interview guideline (German interview questions have been translated into English)
| Phase | Research goal | Questions |
|---|---|---|
| Briefing | Welcoming the interviewee and providing general information about the research and a brief introduction to the topic | – |
| Demographic data | Gaining an understanding of the interviewee, including their position within the hospital and areas of responsibility | a. Could you please introduce yourself? b. What is your current position in the hospital? c. What responsibilities does your position involve? d. How long have you been working in this position / in this hospital? |
| Ethical considerations in healthcare and hospitals | Ethical considerations physicians are confronted with and whether they follow a certain code of ethics | a. What ethical considerations are you confronted with in your everyday work? b. What ethical code do you follow? |
| Ethical considerations and technology | Ethical problems technology raises and how technology can help resolve ethical issues | a. Which technologies are used in your hospital to support your work? b. Which technologies do you rely on for your decisions? c. Which ethical problems can technology cause? What questions arise? d. Which ethical problems can a technology help to solve? |
| Ethical considerations and AI | Specific questions on the application of AI in hospitals, which factors are crucial for deployment, and which ethical guidelines must be followed | a. What do you associate with the term “artificial intelligence”? (An explanation of AI and current examples was provided to ensure the same knowledge among all participants.) b. For which tasks can AI be used as support in hospitals? c. Which tasks can AI be allowed to take over independently, and which not? d. Which factors must AI consider when being used in hospitals? Which rules must be obeyed? e. What is AI not allowed to decide for itself? What outcomes need to be prevented? What negative consequences may result? f. What are ethical conditions, requirements, and challenges for the application of AI in hospitals? g. Which morally reprehensible decisions should AI not derive? h. Which moral decisions could an AI make better than a human being? |
| AI and future perspectives | Future ways of implementing AI in hospitals to improve clinical procedures | a. For what purposes would you like to use AI in hospitals? b. Which decision would you rather follow, that of a human or an AI? Please elaborate. c. How do you think the role of AI in hospitals will change in the future? |
| Debriefing | Debriefing of the interviewee and explanation of the research background, with the possibility for the interviewee to ask further questions or give closing remarks | a. What other questions did you expect but were not asked? b. Do you have further questions / comments on the topic? |
Ethical principles for the use of AI in hospitals extracted from the fundamental manuscripts
| Type of issue | Principle | References | Description |
|---|---|---|---|
| Regulatory issues | Accountability | Cath | The determination of who is accountable for errors, who is socially responsible for the outcome of an AI, and which legal obligations have to be taken into account should be ensured |
| | Responsibility | Cath | |
| | (Legal) liability | Schiff and Borenstein | |
| | Privacy | Cath | The protection of users’ data and compliance with general data protection regulations should be ensured |
| Normative issues | Avoiding bias and harms | Cath | The prevention of damage to one or more patients from the use of AI in healthcare should be ensured |
| | Patient safety | He et al. | |
| | Fairness | Cath | The avoidance of discrimination against patients should be ensured through algorithmic fairness |
| | Informed consent | Schiff and Borenstein | It should be ensured that physicians are able to explain the exact use of an AI so that patients know what they are consenting to |
| Technical issues | Interoperability and generalizability | He et al. | It should be ensured that the training data for an AI represents a large population to provide interoperable and generalizable systems |
| | Iterative controllability and updatability | Yu and Kohane | It should be ensured that AI in hospitals is always controlled by trained physicians and updated without disrupting clinical workflows |
| | Vigilance | Yu et al. | It should be ensured that responsible physicians frequently monitor the AI system |
| | Security | Zeng et al. | It should be ensured that the system has a certain level of robustness against cyber-attacks |
| Organizational issues | Feasibility and humanity | Gulshan et al. | It should be determined if and how AI is capable of improving care in hospitals |
| | Education of an AI-literate workforce | He et al. | It should be ensured that healthcare professionals are well trained and educated in the fields of medical informatics and statistics |
| | Interventions | Parikh et al. | It should be ensured that the output of a predictive AI is accompanied by guidance for medical interventions |
| | Explainability | Vayena et al. | It should be ensured that the use of AI in hospitals is understandable to the patient |
| | Transparency | Cath | The visibility and explanation of the general logic of machine learning algorithms should be ensured |
| | Trustworthiness | Yu and Kohane | It should be ensured that the patients and the physicians who use AI trust the systems’ predictions |
Fig. 3 Visualization of the relationship between actionable ethical principles for using AI in hospitals and bioethical principles according to Beauchamp and Childress (2019) and Floridi et al. (2018)
Ranking of identified articles according to their number of citations
| Number of citations | Number of papers |
|---|---|
| 8 | 2 |
| 6 | 2 |
| 5 | 2 |
| 4 | 7 |
| 3 | 23 |
| 2 | 115 |
| 1 | 2713 |
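A distribution like the one in the table above can be tallied from per-article citation counts in a few lines of Python. The per-article counts below are invented for illustration; they are not the study's data.

```python
# Sketch: count how many papers received each citation count, yielding
# rows like the ranking table above. The input list is hypothetical.
from collections import Counter

citation_counts = [8, 8, 6, 6, 5, 5, 4, 3, 3, 2, 1, 1, 1]  # invented sample
distribution = Counter(citation_counts)

# Rows analogous to the table: (number of citations, number of papers),
# ordered from most- to least-cited.
rows = sorted(distribution.items(), reverse=True)
print(rows)  # → [(8, 2), (6, 2), (5, 2), (4, 1), (3, 2), (2, 1), (1, 3)]
```

The long tail visible in the actual table (2,713 papers cited exactly once) is typical of citation data and motivates restricting the analysis to the handful of highly cited, "fundamental" manuscripts.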