Ulrich von Ulmenstein, Max Tretter, David B. Ehrlich, Christina Lauppert von Peharnik.
Abstract
Current technological and medical advances lend substantial momentum to efforts to attain new medical certainties. Artificial Intelligence can enable unprecedented precision and capabilities in forecasting the health conditions of individuals. But, as we lay out, this novel access to medical information threatens to exacerbate adverse selection in the health insurance market. We conduct an interdisciplinary conceptual analysis to study how this risk might be averted, considering legal, ethical, and economic angles. We ask whether it is viable and effective to ban or limit AI and its medical use, as well as to limit medical certainties, and find that neither of these limitation-based approaches provides an entirely sufficient resolution. Hence, we argue that this challenge must not be neglected in future discussions regarding medical applications of AI forecasting, that it should be addressed at a structural level, and we encourage further research on the topic.
Keywords: adverse selection; artificial intelligence; health insurance; healthcare system (HCS); medical certainties
Year: 2022 PMID: 35978652 PMCID: PMC9376350 DOI: 10.3389/frai.2022.913093
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Figure 1. Illustration of the choice an individual may face between either statutory or private health insurance and the corresponding price determinants, income or health risk (created by the authors).
Figure 2. Illustration of how AI produces new medical certainties from medical data, how these certainties are used by insurance companies and individual policyholders, and how this ultimately exacerbates adverse selection (created by the authors).
Figure 3. Illustration of the different possibilities of banning or limiting medical AI, or the access to or use of medical certainties, in order to stop them from exacerbating adverse selection, as well as the different reasons why these options fail to do so (created by the authors).