| Literature DB >> 34807907 |
Darius-Aurel Frank, Christian T. Elbæk, Caroline Kjær Børsting, Panagiotis Mitkidis, Tobias Otterbring, Sylvie Borau.
Abstract
The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce resources in healthcare. Medical Artificial Intelligence (AI) promises much-needed relief, but only if the technology is adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals chose medical AI over human physicians in a hypothetical triage phase of COVID-19 pre-hospital entrance. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust of and perceived uniqueness neglect from human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures, without eroding trust in human physicians.
Year: 2021 PMID: 34807907 PMCID: PMC8608336 DOI: 10.1371/journal.pone.0259928
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Summary statistics of main predictors and choice for Denmark and France.
| Measure | Agent | Denmark Mean | Denmark SD | France Mean | France SD |
|---|---|---|---|---|---|
| Trust | Human | .76 | .20 | .85 | .16 |
| Trust | AI | .56 | .25 | .63 | .25 |
| Uniqueness neglect | Human | .47 | .24 | .69 | .23 |
| Uniqueness neglect | AI | .52 | .24 | .70 | .25 |

| Physician choice | Denmark Count | Denmark % | France Count | France % |
|---|---|---|---|---|
| Human | 512 | 90.46 | 542 | 90.79 |
| AI | 54 | 9.54 | 55 | 9.21 |
Note: Responses standardized on a scale from 0 to 1.
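The note above describes min-max rescaling of responses onto a 0–1 scale. A minimal sketch of such a rescaling follows; the 7-point raw response scale used here is an assumption for illustration only, as the record does not state the original scale.

```python
# Sketch of min-max rescaling of Likert-type responses onto [0, 1],
# as the table note describes. The 7-point raw scale is an assumption.
def rescale(x, scale_min=1, scale_max=7):
    """Map a raw response onto [0, 1] via min-max rescaling."""
    return (x - scale_min) / (scale_max - scale_min)

print([rescale(x) for x in (1, 4, 7)])  # → [0.0, 0.5, 1.0]
```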
Summary of logistic regression models on adoption of medical AI.
| Predictors | Model 1 OR | Model 1 95% CI | Model 2 OR | Model 2 95% CI | Model 3 OR | Model 3 95% CI |
|---|---|---|---|---|---|---|
| Intercept | .03 | .02–.05 | .03 | .02–.05 | .03 | .02–.05 |
| Uniqueness neglect [AI] | .75 | .55–1.01 | .77 | .57–1.05 | .76 | .56–1.03 |
| Uniqueness neglect [Human] | 1.46 | 1.09–1.97 | 1.42 | 1.06–1.93 | 1.44 | 1.07–1.96 |
| Trust [AI] | 7.41 | 4.85–11.80 | 7.55 | 4.89–12.17 | 7.44 | 4.82–12.00 |
| Trust [Human] | .31 | .23–.41 | .31 | .23–.41 | .31 | .23–.41 |
| Anti-COVID-19 policy support | | | .87 | .67–1.14 | .87 | .67–1.14 |
| Belief in conspiracy theories | | | .86 | .65–1.13 | .85 | .63–1.11 |
| Open-mindedness | | | 1.79 | 1.08–3.00 | 1.92 | 1.14–3.26 |
| Trait optimism | | | 1.11 | .82–1.52 | 1.09 | .80–1.50 |
| Social belonging | | | .61 | .45–.84 | .64 | .46–.89 |
| Self-esteem | | | 1.14 | .86–1.52 | 1.15 | .86–1.54 |
| COVID-19 risk perception | | | 1.04 | .82–1.33 | 1.00 | .78–1.27 |
| Political ideology | | | 1.09 | .86–1.38 | 1.12 | .88–1.42 |
| Age | | | | | .81 | .63–1.03 |
| Sex | | | | | .92 | .72–1.16 |
| Socioeconomic Status (SES) | | | | | 1.00 | .80–1.25 |
| Rural residence | | | | | .83 | .64–1.05 |
| Country [France] | | | | | .97 | .61–1.54 |
| Observations | 1129 | | 1129 | | 1129 | |
| R² Tjur | .20 | | .22 | | .23 | |
| AIC | 544.31 | | 544.35 | | 547.88 | |
| BIC | 569.45 | | 609.72 | | 638.41 | |
Note: OR = 1: predictor does not affect medical AI adoption; OR > 1: predictor is associated with higher odds of medical AI adoption; OR < 1: predictor is associated with lower odds of medical AI adoption. 95% CI = 95% confidence interval estimate for the precision of the OR.
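To make the odds-ratio interpretation concrete, an unadjusted country OR can be recovered from the raw choice counts in the summary statistics table (512 human vs. 54 AI in Denmark; 542 vs. 55 in France). This is a sketch for illustration only, not the covariate-adjusted estimate reported in Model 3, though the two turn out to be close.

```python
# Unadjusted odds ratio of choosing medical AI in France vs. Denmark,
# computed from the raw counts in the summary statistics table.
import math

denmark_ai, denmark_human = 54, 512
france_ai, france_human = 55, 542

odds_denmark = denmark_ai / denmark_human
odds_france = france_ai / france_human
or_france_vs_denmark = odds_france / odds_denmark  # ≈ 0.96

# Approximate 95% CI on the log-odds scale (Woolf method)
se = math.sqrt(1/denmark_ai + 1/denmark_human + 1/france_ai + 1/france_human)
ci_lo = math.exp(math.log(or_france_vs_denmark) - 1.96 * se)
ci_hi = math.exp(math.log(or_france_vs_denmark) + 1.96 * se)
print(f"OR = {or_france_vs_denmark:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
```

The unadjusted estimate lands near the adjusted Country [France] OR of .97 in the regression table, consistent with country having little effect on adoption.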
Fig 1. Odds ratios shown for predictors of medical AI adoption in Model 3.
Note: Dots indicate odds ratios and lines indicate 95% confidence intervals, red (blue) indicates negative (positive) coefficients, * p < .05, ** p < .01, *** p < .001.