| Literature DB >> 32558657 |
Onur Asan, Alparslan Emrah Bayrak, Avishek Choudhury.
Abstract
Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable, though imperfect, clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is the one mechanism that shapes clinicians' use and adoption of AI. Trust is a psychological mechanism for dealing with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust in AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use. ©Onur Asan, Alparslan Emrah Bayrak, Avishek Choudhury. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 19.06.2020.
Keywords: FDA policy; bias; health care; human-AI collaboration; technology adoption; trust
Year: 2020 PMID: 32558657 DOI: 10.2196/15154
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428