| Literature DB >> 34796318 |
Jean-Christophe Bélisle-Pipon, Vincent Couture, Marie-Christine Roy, Isabelle Ganache, Mireille Goetghebeur, I. Glenn Cohen.
Abstract
The application of artificial intelligence (AI) may revolutionize the healthcare system: enhancing efficiency by automating routine tasks, decreasing health-related costs, broadening access to healthcare delivery, targeting patient needs more precisely, and assisting clinicians in their decision-making. For these benefits to materialize, governments and health authorities must regulate AI and conduct appropriate health technology assessment (HTA). Many authors have highlighted that AI health technologies (AIHTs) challenge traditional evaluation and regulatory processes. To inform and support HTA organizations and regulators in adapting their processes to AIHTs, we conducted a systematic review of the literature on the challenges posed by AIHTs in HTA and health regulation. Our research question was: What makes artificial intelligence exceptional in HTA? The current body of literature appears to portray AIHTs as exceptional to HTA. This exceptionalism is expressed along five dimensions: 1) AIHTs' distinctive features; 2) their systemic impacts on health care and the health sector; 3) the increased expectations towards AI in health; 4) the new ethical, social, and legal challenges that arise from deploying AI in the health sector; and 5) the new evaluative constraints that AI poses to HTA. Thus, AIHTs are perceived as exceptional because of their technological characteristics and potential impacts on society at large. As AI implementation by governments and health organizations carries risks of generating new challenges and amplifying existing ones, there are strong arguments for taking the exceptional aspects of AIHTs into consideration, especially as their impacts on the healthcare system will be far greater than those of drugs and medical devices.
As AIHTs are increasingly introduced into the health care sector, there is a window of opportunity for HTA agencies and scholars to consider AIHTs' exceptionalism and to work towards deploying only clinically, economically, and socially acceptable AIHTs in the health care system.
Keywords: artificial intelligence; ethical; exceptionalism; health regulation; health technology assessment; social and legal implications
Year: 2021 PMID: 34796318 PMCID: PMC8594317 DOI: 10.3389/frai.2021.736697
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Search strategy.
| Concepts | Terms |
|---|---|
Legend. PB = PubMed; EM = Embase; OJ = Journals@Ovid Full Text; WoS = Web of Science; iHTAd = International HTA Database.
Selection criteria.
| Criterion | Specifics |
|---|---|
| Date | 2016–2020 (5 years) |
| Language | English; French |
| Study design | Descriptive; Experimental; Opinion/Perspective; Empirical Research; Literature Review |
| Type of publication | Original research; Commentary; Editorial |
FIGURE 1. PRISMA Flowchart. AI = artificial intelligence; ELSI = ethical, legal, and social implications; HTA = health technology assessment.
FIGURE 2. The five main aspects of artificial intelligence health technologies’ exceptionalism.
A summary of the five main aspects of AIHT’s exceptionalism.
| Aspect | Key considerations | Examples from the reviewed sample |
|---|---|---|
| 1. AIHTs’ Distinctive Features from Traditional Health Technologies | AIHTs differ from traditional health technologies in their capacity to continuously learn, their potential ubiquity throughout the health care system, the opaqueness of their recommendations, and the ambiguity of their definition | Locked AIHTs may become outdated from the moment they are prevented from evolving, increasing the chance of contextual bias in real-life contexts |
| Locked algorithms will always yield the same result when fed the same data | | |
| Algorithms will need to be regularly updated (at high or even prohibitive cost) as medical knowledge advances and new datasets become accessible, at the risk of their use otherwise becoming malpractice. Updating or replacing an AIHT will involve additional post-acquisition costs for the clinics and hospitals that purchased it. The consequences of an outdated algorithm are more difficult to manage than those of a drug or other health product that must be withdrawn from the market | | |
| 2. Systemic Impacts on Health | AI may have systemic effects felt across an entire health care system, or across health care systems in several jurisdictions, initiating extensive and lasting transformations likely to affect all actors working in, using, or financing the health system. In addition, AIHTs can have systemic real-world consequences for patients and for non-ill or non-frequent users of the health care system. However, AI will not address everything that bears on people’s overall well-being | AI’s role in health surveillance, care optimization, prevention, public health, and telemedicine will cause AIHTs to affect non-ill or non-frequent users of the health care system. An AIHT trained on medico-administrative data in a context where physicians have often modified their billing to enter the highest-paying codes for clinical procedures would lead the algorithm to infer that these codes represent the usual, standard, or common practice to be recommended, thus introducing a bias in the algorithm and leading to a cascade of non-cost-effective recommendations |
| Mistakes due to AIHTs used in clinical care and within the health care system have the potential to widely affect the patient population, suggesting it is all the more necessary that all algorithms be submitted to extensive scrutiny. In addition, “tropic effects” (i.e., code-embedded propensities towards certain behaviors or effects) may increase the risk of inappropriate treatment and care, and may result in importing AIHT-fueled standards and practices that are exogenous and non-idiosyncratic to local organizations. Furthermore, the large-scale systematization of certain behaviors may end up resulting in significant costs and harms | | |
| Some authors suggest AIHTs should be regarded as a “health system transformation lever” for improving health care and a key enabler of learning healthcare systems (LHS) | | |
| 3. Increased Expectations | “Automation bias” describes the belief that an AI-generated outcome is inherently better than a human one. This is reinforced by the technological imperative, i.e., the pressure to use a new technology simply because it exists | The adoption and impact of AIHTs are unlikely to be uniform or to improve performance in all health care contexts because of the technology’s distinctive features, its systemic effects on health care organizations, and the human biases associated with the use of these technologies. AIHTs can significantly affect and highlight particularities of the workflow and design of individual hospital systems, causing them not to respond in the intended way. Therefore, AIHTs represent great challenges for deciding whether marketing authorization is justified |
| AI is currently in an era of promises rather than of fulfillment of what is expected from it. The possible consequences of this hype can be very significant, but HTA agencies and regulators have an important role to play | | |
| 4. New Ethical, Legal and Social Challenges | AIHTs present new ethical, legal, and social challenges in the context of health care delivery, by calling into question the roles of patients, HCPs, and decision-makers, and by conflicting with medicine’s ethos of transparency | Patients who compare very well with historic patient data will be the ones benefiting most from AIHTs, calling for caution with regard to patient and disease heterogeneity |
| Key AIHT-stemmed ethical challenges in care delivery are: AI-fostered potential bias; patient privacy protection; trust of clinicians and the general public in machine-led medicine; and new health inequalities | Practical and procedural ethical guidance for supporting HTA of AIHTs has not yet been thoroughly defined. For instance, the role of distributive justice in HTA for AIHTs is not well specified | |
| Unlike most other health technologies, AI forces the questioning of the very essence of humans. It also raises new existential questions regarding the role of regulators and public decision-makers. AIHTs’ unparalleled autonomy intensifies ethical and regulatory challenges | AI-stemmed existential questioning includes the reflection that more and more clinicians are having about the proper role of healthcare professionals and what it means to be a doctor, a nurse, etc.; and, from the patients’ perspective, what it means to be cared for by machines and to feel more and more like a number in a vast system run by algorithms | |
| AIHTs are often opaque, which poses serious problems for their acceptance, regulation, and implementation in the health care system. AI’s benefits for health care will come at the price of raising ethical issues specific to the technology | | |
| 5. New Evaluative Constraints | AIHTs raise new evaluative constraints at the technological level due to the data and infrastructure required | The adoption and impact of AIHTs are unlikely to be uniform or to improve performance in all health care contexts because of the technology’s distinctive features, its systemic effects on health care organizations, and the human biases associated with the use of these technologies. Therefore, AIHTs represent great challenges for deciding whether marketing authorization is justified, and force the question of whether marketing authorization at the 10,000-foot level for the product is appropriate and efficient, as opposed to authorization for more specific uses closer to the impacted communities and the point of delivery |
| This high level of complexity requires special regulation of AIHTs, specifically adapted to their complexity | |