Anastasiya Kiseleva1,2, Dimitris Kotzinos2, Paul De Hert1.
Abstract
The lack of transparency is one of artificial intelligence (AI)'s fundamental challenges, but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate different but neighboring concepts that include, besides transparency, explainability and interpretability. Yet there is no common taxonomy either within one field (such as data science) or between different fields (law and data science). In certain areas like healthcare, the requirements of transparency are crucial because the decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision of how to tackle the issue of AI's transparency in healthcare, and we propose a single point of reference for both legal scholars and data scientists on transparency and related concepts. Based on an analysis of European Union (EU) legislation and the computer science literature, we submit that transparency shall be considered a "way of thinking" and an umbrella concept characterizing the process of AI's development and use. Transparency shall be achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, data governance and management, and documentation. This approach to transparency is general in nature, but transparency measures shall always be contextualized. By analyzing transparency in the healthcare context, we submit that it shall be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed across different layers (the insider, internal, and external layers, respectively). The transparency-related accountabilities shall be built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks.
These frameworks correspond to different layers of the transparency system. The requirement of informed medical consent correlates to the external layer of transparency, and the Medical Devices Framework is relevant to the insider and internal layers. We investigate these frameworks to inform AI developers about what is already expected of them with regard to transparency. We also identify gaps in the existing legislative frameworks concerning AI's transparency in healthcare and suggest solutions to fill them.
Keywords: accountability; artificial intelligence (AI); explainability; healthcare; informed medical consent; interpretability; medical devices; transparency
Year: 2022 PMID: 35707765 PMCID: PMC9189302 DOI: 10.3389/frai.2022.879603
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Figure 1: Activities associated in the EU legislation (listed in Annex I) with transparency measures.
Figure 2: XAI Word Cloud created by Adadi and Berrada (2018).
Figure 3: Multilayered System of AI's Transparency in Healthcare.
Summary of the informed medical consent requirement applicable to artificial intelligence (AI)'s external transparency.
| Who to whom: | Healthcare professional (physician) to patient |
| When: | Before an intervention into the person's health; |
| What: | Information as to the purpose and nature of the intervention, as well as its consequences and risks; |
| How: | Appropriate information; |
| Why: | To enable free and informed consent/rejection of the intervention. |
Although the article does not directly name the healthcare professional as the one required to provide information to the patient, this is implied because the professional is the one who carries out the intervention into health as part of their professional duties.
Summary of the Medical Devices Framework (illustrated by the MDR) requirements relevant to AI's internal transparency measures.
| Who to whom: | AI provider to healthcare professional (device's user) and patient; |
| When: | When a device is placed on the market and being used; |
| What: | Provision of information about: |
| How: | Information relevant to the user and tailored to his technical knowledge, experience, education, or training. Instructions for use shall be written in terms readily understood by the intended user and, where appropriate, supplemented with drawings and diagrams. |
| Why: | • To enable healthcare providers to make choices, including diagnosis and treatment decisions; |
Summary of the MDF requirements (illustrated by the MDR) relevant to AI's insider transparency measures.
| Who to whom: | AI providers to themselves; |
| When: | During the whole life cycle of the AI device; |
| What: | – Information provision (as specified at the internal transparency level); – Development of explanations for AI systems; |
| How: | Record-keeping and documentation shall be carried out in a way that enables notified bodies to audit the activities of the AI provider and verify the quality and safety of AI devices. Explanations shall be provided to the maximum technically possible extent and in a way that enables further tailoring of explanations to users. Information shall be provided as specified at the internal transparency level. |
| Why: | • To hold AI providers accountable; |