Jerry M Spiegel, Rodney Ehrlich, Annalee Yassi, Francisco Riera, James Wilkinson, Karen Lockhart, Stephen Barker, Barry Kistnasamy.
Abstract
Although Artificial Intelligence (AI) is being increasingly applied, considerable distrust about introducing "disruptive" technologies persists. Intrinsic and contextual factors influencing where and how such innovations are introduced therefore require careful scrutiny to ensure that health equity is promoted. To illustrate one such critical approach, we describe and appraise an AI application - the development of computer assisted diagnosis (CAD) to support more efficient adjudication of compensation claims from former gold miners with occupational lung disease in Southern Africa. In doing so, we apply a bio-ethical lens that considers the principles of beneficence, non-maleficence, autonomy and justice, and add explicability as a core principle. We draw on the AI literature, our research on CAD validation and process efficiency, as well as apprehensions of users and stakeholders. Issues of concern included AI accuracy, biased training of AI systems, data privacy, impact on human skill development, transparency and accountability in AI use, as well as intellectual property ownership. We discuss ways in which each of these potential obstacles to successful use of CAD could be mitigated. We conclude that efforts to overcome technical challenges in applying AI must be accompanied from the outset by attention to ensuring its ethical use.
Year: 2021 PMID: 34249620 PMCID: PMC8252970 DOI: 10.5334/aogh.3206
Source DB: PubMed Journal: Ann Glob Health ISSN: 2214-9996 Impact factor: 2.462
The case for and against use of an AI application (Computer Assisted Diagnosis of silicosis and tuberculosis) for assessment of claims for occupational lung disease in miners.
| THE CASE AGAINST USING AI | COUNTER-ARGUMENTS OR MITIGATION | ACTIONS NEEDED |
|---|---|---|
| AI is of value only if accurate. | Rigorous monitoring and evaluation would need to continue after introduction to ensure satisfactory sensitivity and specificity – so that CAD systems continue to improve with feedback. | Committing resources for ongoing monitoring and evaluation of real-world applications. |
| Privacy and security of personal information could be compromised. | Privacy and security of data are equally of concern in systems that do not use AI. Arguably data protection measures could more easily be put in place in data-driven systems. | Protocols covering access to data need to be written/agreed upon by all users. |
| AI training could be subject to bias – for example, if trained against “gold standards” that are themselves inaccurate. | The AI systems need to be repeatedly assessed for accuracy against different and independent “gold standards” to avoid the biases of any one group of experts. | Willingness to share databases alongside ongoing resource commitment. |
| Reliance on AI could decrease availability of needed skilled experts and lead to de-skilling of clinical judgment. | If the system is used for triage, rather than replacement of human expertise, it would serve to make specialists’ time more efficient and reduce the cost burden of specialist services. Specialists would need to understand the limits of AI to avoid over-reliance on the AI. | In the compensation context, there needs to be a strong understanding amongst all stakeholders that the intent is for triage rather than screening out. Ongoing monitoring is needed to ensure that complacency doesn’t take hold. Also, specialists should be trained to expect and look for false negatives and false positives. |
| Transparency could be diminished such that users are disempowered. | Assumptions inherent in the systems should be transparent, including accuracy, i.e. sensitivity and specificity for each type of assessment. Accountability would need to remain with clinicians who use the system and the medical professionals who sign off on cases. | Sustained commitment to openness and transparency is needed. |
| Proprietary ownership of AI could make it prohibitively expensive for the public sector. | As public sector data are being used to train AI systems, the public sector has a claim to affordable access. | A change to payment provisions may be needed: AI companies depend on royalty revenue, so access provisions need to be specified for public interest uses. |
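The ongoing accuracy monitoring called for in the table can be illustrated with a minimal sketch. The counts below are hypothetical (not from the study); the sketch only shows how sensitivity and specificity would be computed when comparing CAD readings against an independent reference standard:

```python
# Minimal sketch with hypothetical counts: sensitivity and specificity
# of a CAD reading against an expert-panel reference standard.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of truly diseased cases the CAD flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of truly healthy cases the CAD clears."""
    return tn / (tn + fp)

# Hypothetical monitoring batch: 90 reference-positive and 910
# reference-negative chest films.
tp, fn = 81, 9      # CAD flagged 81 of the 90 true cases
tn, fp = 846, 64    # CAD cleared 846 of the 910 non-cases

print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # 81/90 = 0.90
print(f"specificity = {specificity(tn, fp):.2f}")   # 846/910 ≈ 0.93
```

In a triage setting such as the one described, a drop in sensitivity on a monitoring batch would be the key warning sign, since false negatives screen out claimants who should reach a specialist.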