Thomas Ploug, Anna Sundby, Thomas B Moeslund, Søren Holm.
Abstract
BACKGROUND: Certain types of artificial intelligence (AI), that is, deep learning models, can outperform health care professionals in particular domains. Such models hold considerable promise for improved diagnostics, treatment, and prevention, as well as more cost-efficient health care. They are, however, opaque in the sense that their exact reasoning cannot be fully explicated. Different stakeholders have emphasized the importance of the transparency/explainability of AI decision making. Transparency/explainability may come at the cost of performance. There is a need for a public policy regulating the use of AI in health care that balances the societal interests in high performance as well as in transparency/explainability. A public policy should consider the wider public's interests in such features of AI.
Keywords: artificial intelligence; explainability; performance; population preferences; public policy; transparency
Year: 2021 PMID: 34898454 PMCID: PMC8713089 DOI: 10.2196/26611
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428
Figure 1. An example of a choice task with 3 concepts. AI: artificial intelligence.
Importance of attributes and part-worth utilities of levels.
| Attribute | Importance (%) | Levels (part-worth utility) |
| --- | --- | --- |
| Type | 3.0 | Diagnostics (0.123); treatment planning (–0.123) |
| Explanation | 27.3 | Equally explainable as physician's decision (1.106); not as explainable as physician's decision (–0.270); no explanation available (–0.836) |
| Performance | 6.6 | System decision significantly better than physician's (0.267); system decision somewhat better than physician's (0.052); system decision equally good as physician's (–0.319) |
| Responsibility | 46.8 | Physician responsible for decision (1.900); system responsible for decision (–1.900) |
| Discrimination | 14.8 | System tested for biased decisions (0.602); system not tested for biased decisions (–0.602) |
| Severity of disease | 1.5 | System use only when less severe disease (0.060); system use both when less severe and when very severe disease (–0.060) |
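The importance percentages in the table follow the standard conjoint-analysis convention: an attribute's importance is the range of its part-worth utilities (best level minus worst level) as a share of the sum of all attributes' ranges. A minimal sketch using the aggregate part-worths from the table; note that the paper's percentages are averages of per-respondent importances, so this aggregate calculation reproduces the published values only approximately.

```python
# Sketch: conventional conjoint-analysis attribute importance,
# computed from the aggregate part-worth utilities in the table above.
# The published percentages average per-respondent importances, so
# these results match them only approximately.
part_worths = {
    "Type": [0.123, -0.123],
    "Explanation": [1.106, -0.270, -0.836],
    "Performance": [0.267, 0.052, -0.319],
    "Responsibility": [1.900, -1.900],
    "Discrimination": [0.602, -0.602],
    "Severity of disease": [0.060, -0.060],
}

# Range = best level minus worst level, per attribute.
ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
total = sum(ranges.values())

# Importance = attribute's share of the total utility range, in percent.
importance = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, pct in importance.items():
    print(f"{attr}: {pct:.1f}%")
```

Responsibility dominates (≈48% by this aggregate calculation versus the reported 46.8%), while Explanation (≈25% vs 27.3%) far outweighs Performance (≈7% vs 6.6%), consistent with the paper's conclusion that respondents prioritize explainability over raw performance.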
Respondent trust and opinions about AIa (N=1027).
| Opinion | None/not at all, n (%) | Very little, n (%) | Little, n (%) | Some, n (%) | A lot/certainly, n (%) | Don't know, n (%) |
| --- | --- | --- | --- | --- | --- | --- |
| I have trust in the health care system. | 7 (0.7) | 35 (3.4) | 102 (9.9) | 438 (42.6) | 424 (41.3) | 21 (2.1) |
| I have trust in physicians. | 2 (0.2) | 29 (2.8) | 72 (7.0) | 412 (40.1) | 502 (48.9) | 10 (1.0) |
| I have trust in technology. | 4 (0.4) | 32 (3.1) | 129 (12.6) | 519 (50.5) | 313 (30.5) | 30 (2.9) |
| I believe that AI will lead to unemployment. | 122 (11.9) | 216 (21.0) | 251 (24.4) | 169 (16.5) | 95 (9.3) | 174 (16.9) |
| I believe that AI will cause unintentional harm to humans. | 55 (5.4) | 206 (20.1) | 303 (29.5) | 181 (17.6) | 67 (6.5) | 215 (20.9) |
| I believe that AI will lead to loss of control to machines. | 86 (8.4) | 167 (16.3) | 249 (24.2) | 241 (23.5) | 141 (13.7) | 143 (13.9) |
| I believe that AI will lead to increased data collection and mass surveillance. | 22 (2.1) | 30 (2.9) | 106 (10.3) | 309 (30.1) | 435 (42.4) | 125 (12.2) |
| I believe that AI will lead to more jobs. | 119 (11.6) | 180 (17.5) | 300 (29.2) | 164 (16.0) | 55 (5.3) | 209 (20.4) |
| I believe that AI will lead to longer lives. | 69 (6.7) | 114 (11.1) | 243 (23.6) | 284 (27.7) | 92 (9.0) | 225 (21.9) |
| I believe that AI will lead to more quality of life. | 82 (8.0) | 119 (11.6) | 279 (27.2) | 265 (25.8) | 93 (9.0) | 189 (18.4) |
| I believe that AI will lead to peace and political stability. | 225 (21.9) | 209 (20.4) | 232 (22.6) | 66 (6.4) | 20 (1.9) | 275 (26.8) |
aAI: artificial intelligence.
Respondent characteristics and the importance of attributes.a
| Attribute (average weight) | Gender | Age | Level of education | Urban/rural background | Chronic disease | Inpatient last year | GPb visits last year | Trust scale | Fear scale | Hope scale |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Type (0.12268) | —c | — | — | — | — | — | — | — | — | r=–.097 |
| Explanation (1.10638) | — | — | — | — | — | — | — | — | — | — |
| Performance (0.31895) | Md=.337; Fe=.300 | More important with lower age | Lowest level of education=.179; highest level of education=.659 | Most rural=.271; most urban=.354 | — | — | — | r=.093 | r=.170 | r=.243 |
| Responsibility (1.90018) | — | — | — | Most rural=1.940; most urban=1.792 | — | — | — | — | — | — |
| Discrimination (0.60190) | M=.542; F=.682 | — | — | — | — | — | — | — | — | r=–.120 |
| Severity of disease (0.06042) | — | — | Lowest level of education=.122; highest level of education=–.226 | — | — | — | — | r=–.099 | — | r=–.168 |
aNumerical data only shown for cells where there is a statistically significant difference.
bGP: general practitioner.
cNot applicable.
dM: male.
eF: female.