Emma K Frost, Stacy M Carter.
Abstract
BACKGROUND: Healthcare is a rapidly expanding area of application for Artificial Intelligence (AI). Although there is considerable excitement about its potential, there are also substantial concerns about the negative impacts of these technologies. Since screening and diagnostic AI tools now have the potential to fundamentally change the healthcare landscape, it is important to understand how these tools are being represented to the public via the media.
Keywords: Artificial intelligence; Diagnosis; Ethics; Frame analysis; Media framing; Screening
Year: 2020 PMID: 33302942 PMCID: PMC7725880 DOI: 10.1186/s12911-020-01353-1
Source DB: PubMed Journal: BMC Med Inform Decis Mak ISSN: 1472-6947 Impact factor: 2.796
Nisbet's [19] framing typology
| Nisbet frame | Nisbet’s description of this frame^a |
|---|---|
| Social progress | “A means of improving quality of life or solving problems; alternative interpretation as a way to be in harmony with nature instead of mastering it” |
| Economic development | “An economic investment; market benefit or risk; or a point of local, national, or global competitiveness” |
| Conflict and strategy | “A game among elites, such as who is winning or losing the debate; or a battle of personalities or groups (usually a journalist-driven interpretation)” |
| Morality and ethics | “A matter of right or wrong; or of respect or disrespect for limits, thresholds, or boundaries” |
| Scientific and technical uncertainty | “A matter of expert understanding or consensus; a debate over what is known versus unknown; or peer-reviewed, confirmed knowledge versus hype or alarmism” |
| Pandora’s box/Frankenstein’s monster/runaway science | “A need for precaution or action in face of possible catastrophe and out-of-control consequences; or alternatively as fatalism, where there is no way to avoid the consequences or chosen path” |
| Public accountability and governance | “Research or policy either in the public interest or serving special interests, emphasizing issues of control, transparency, participation, responsiveness, or ownership; or debate over proper use of science and expertise in decision-making (“politicization”)” |
| Middle way | “A third way between conflicting or polarized views or options” |
^a These frames and descriptions are taken directly from Nisbet [19]
Terms used for media article search
| (" AI " OR "ARTIFICIAL INTELLIGENCE" OR "MACHINE LEARNING") AND ("SCREENING TEST" OR "SCREENING FOR" OR "DIAGNOSIS OF" OR "DIAGNOSING" OR "TEST FOR" OR "TESTING FOR") AND ("HEALTH" OR "HEALTHCARE") |
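The search string above is three AND-ed groups of OR-ed phrases. As a minimal sketch (not the authors' actual search tooling, and the function and variable names are illustrative assumptions), the same matching logic can be expressed as:

```python
# Hedged sketch: approximates the media-article search string as three
# AND-ed groups of OR-ed phrases, matched case-insensitively.
# The padded " ai " term mirrors the quoted " AI " in the search string,
# which requires "ai" to appear as a standalone word.
AI_TERMS = [" ai ", "artificial intelligence", "machine learning"]
TEST_TERMS = ["screening test", "screening for", "diagnosis of",
              "diagnosing", "test for", "testing for"]
HEALTH_TERMS = ["health", "healthcare"]

def matches_query(text: str) -> bool:
    """Return True if the text contains at least one phrase from each group."""
    # Pad with spaces so the whole-word " ai " pattern can match at the edges.
    t = f" {text.lower()} "
    return all(any(term in t for term in group)
               for group in (AI_TERMS, TEST_TERMS, HEALTH_TERMS))
```

A real database search (e.g. in a news archive) would also handle stemming, punctuation, and proximity, which this sketch deliberately ignores.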
Fig. 1 Flow diagram of inclusion process
Health conditions addressed in each article
| Health condition | Count | % total |
|---|---|---|
| Cancers (multiple) | 16 | 11.8 |
| Cardiovascular disease | 9 | 6.6 |
| Colorectal cancer | 8 | 5.9 |
| Breast cancer | 7 | 5.1 |
| Mental health | 7 | 5.1 |
| Alzheimer's disease | 6 | 4.4 |
| Lung cancer | 6 | 4.4 |
| Diabetic retinopathy | 5 | 3.7 |
| Kidney disease | 5 | 3.7 |
| Prostate cancer | 4 | 2.9 |
| Eye conditions | 3 | 2.2 |
| Bowel cancer | 2 | 1.5 |
| COVID-19 | 2 | 1.5 |
| Intracranial haemorrhage | 2 | 1.5 |
| Neonatal conditions | 2 | 1.5 |
| Suicide | 2 | 1.5 |
| Various^a | 21 | 15.4 |
| Other | 29 | 21.3 |
| Total | 136 | 100.0 |
^a Articles were coded as ‘various’ if they did not address a technology for one specific health condition, e.g. articles discussing AI screening in general, or technologies used to screen for a wide range of diseases
Tally of articles in each frame
| Frame | Count (%) | Nisbet frame | Count (%) |
|---|---|---|---|
| Frame 1—Social progress | 131 (96.3) | Social progress | 131 (96.3) |
| Frame 2—Economic development/conflict and strategy | 59 (43.4) | Economic development | 59 (43.4) |
| | | Conflict and strategy | 1 (0.7) |
| Frame 3—Alternative perspectives | 9 (6.6) | Morality and ethics | 4 (2.9) |
| | | Scientific and technical uncertainty | 5 (3.7) |
| | | Pandora’s box/Frankenstein’s monster/runaway science | 6 (4.4) |
| | | Public accountability and governance | 5 (3.7) |
| | | Middle way | 3 (2.2) |
Descriptions of frames from Nisbet [19]
ELSIs discussed in the nine alternative perspectives articles
| Article no. (ref); title | Short description of ELSIs |
|---|---|
| A143 | Historical bias—algorithms that use historical data may produce biased outputs (e.g. algorithms may find a relationship between a disease and a minority group that has historically had worse access to healthcare) |
| | Black box systems—problems arise when doctors cannot access information about the features algorithms use to produce outputs |
| | Physician deskilling—doctors may become over-reliant on algorithms to make decisions and lose the skills to make those decisions without the aid of algorithms |
| A22 | Harm to patients—if AI fails to integrate into workflows or is poorly validated for clinical use, it may lead to worse patient outcomes |
| | Value tension between health and for-profit enterprise—AI is proprietary, and there is a value collision with the bedside clinician |
| | Impact on clinician workflow—AI may be given authority over clinician workflow (e.g. patients’ insurers may only reimburse for the treatments an algorithm recommends, meaning clinicians lose their ability to exercise their own discretion in treating patients) |
| | Exacerbation of human bias—when algorithms are not designed to take structural inequalities into account, they will produce flawed results |
| A93 | Concerns about data privacy—routine use of AI tools will raise the need for better data protection regulations |
| A91 | Lack of involvement with medical research—concerns that developers of AI are not using normal channels for testing and disseminating algorithms; the claims they make to consumers are unvalidated, and the safety of innovations is not regulated |
| | Poor transparency protocols in tech companies |
| | Value tension between health and for-profit enterprise—tech emphasises disruption and convenience, whereas healthcare emphasises safety; the values behind AI development conflict with the Hippocratic oath |
| | Harm to patients—poorly implemented algorithms may lead to iatrogenic health impacts |
| A3 | Need for better data protection regulations |
| | Value tension between public and for-profit values |
| A113 | Concerns about data privacy |
| A117 | Concerns about data privacy—private companies requesting access to public healthcare data |
| A8 | Concerns about data privacy |
| A260 | Inaccuracy of AI techniques |