Harshana Liyanage, Siaw-Teng Liaw, Jitendra Jonnagaddala, Richard Schreiber, Craig Kuziemsky, Amanda L Terry, Simon de Lusignan.
Abstract
BACKGROUND: Artificial intelligence (AI) is heralded as an approach that might augment or substitute for the limited processing power of the human brain of primary health care (PHC) professionals. However, there are concerns that AI-mediated decisions may be hard to validate and challenge, or may result in rogue decisions.
Year: 2019 PMID: 31022751 PMCID: PMC6697547 DOI: 10.1055/s-0039-1677901
Source DB: PubMed Journal: Yearb Med Inform ISSN: 0943-4747
Examples of benefit use cases in which AI can be leveraged in a primary care setting as suggested by the panel members.
| Themes | Examples of benefit use cases of AI in primary care setting |
|---|---|
| Decision support to improve primary health care processes | a) Improving accessibility by triaging primary care patients and conducting a preliminary analysis to suggest a likely diagnosis. |
| Pattern recognition in imaging results | a) Automatic detection of tumours using whole slide digital pathology images |
| Predictive modelling performed on primary care health data | a) Detection of high risk for mental health disorders / cardiovascular disease (a minimal predictive-modelling sketch follows this table) |
| Business analytics for primary care provider | a) AI applications that operate on routinely collected administrative data could provide regular feedback to practice managers, business owners, and individual clinicians (doctors, nurses, and others) to reduce variability and improve quality of care |
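To make the predictive-modelling theme above concrete, the following is a minimal sketch, not taken from the article, of risk flagging from routinely collected data using synthetic values and a simple logistic regression; the variable names, coefficients, and the 0.5 review threshold are illustrative assumptions.

```python
# Illustrative sketch only: synthetic data and a simple logistic regression
# mimicking the "predictive modelling" theme (e.g. flagging high cardiovascular risk).
# Features, coefficients, and the 0.5 threshold are assumptions, not from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic routinely collected primary care variables.
age = rng.integers(30, 85, n)
systolic_bp = rng.normal(130, 15, n)
smoker = rng.integers(0, 2, n)

# Synthetic outcome: event probability rises with age, blood pressure, and smoking.
logit = 0.04 * (age - 55) + 0.03 * (systolic_bp - 130) + 0.8 * smoker - 1.0
event = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a simple risk model and flag patients above an arbitrary review threshold.
X = np.column_stack([age, systolic_bp, smoker])
model = LogisticRegression().fit(X, event)
risk = model.predict_proba(X)[:, 1]
flagged = np.flatnonzero(risk > 0.5)
print(f"{len(flagged)} of {n} synthetic patients flagged for clinician review")
```

In practice such a model would be trained and validated on real, representative primary care records and would feed a clinician-facing review list rather than acting autonomously, in line with the concerns listed in the risk table below.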
Examples of risk use cases in which AI could pose a risk to patients in primary care, as suggested by the panel members.
| Themes | Examples of risk use cases of AI in primary care setting |
|---|---|
| AI technology currently available for deployment in primary care is not yet able to replace human decision making in clinical scenarios | a) Interpreting the results of an analysis using AI without an understanding of the primary health care context |
| Risk of medical errors | a) Potential for errors in prescribing. If a doctor prescribes an adult dose of a medication for a child and the AI has no guideline against which to detect the error, it could propagate the error into that child's future care and the care of other children on the same medication. This happens with human experts and specialists, and can also happen in a learning AI scenario |
| Risk of bias | a) The data behind the constructed AI knowledge model may be biased, or not compatible with the patient to whom the clinician applies the AI: e.g., a model learned in a population with specific sub-phenotypes may not be adequate for another population, or a model learned with past data models (ICD-9) may not be adequate for, or generalizable to, new data models (ICD-10) |
| Risk of secondary effects of utilising AI | a) Insurance providers using AI to charge higher premiums or even to exclude certain people from insurance |
Consensus statements generated from the analysis of Round 1's responses (agreement shown in green, equivocation in brown, and disagreement in red).
| Statement 1 - The most prevalent use of AI currently in primary care is for predictive modelling (e.g. detection of high risk for mental health disorders / cardiovascular disease) based on knowledge inferred from large clinical datasets. |
| Statement 2 - AI in primary care is currently needed more to manage provision of care (e.g. triage) than for clinical decision support. |
| Statement 3 - AI applications can be incorporated more easily in business analytics in primary care than analytics to support the clinical process. |
| Statement 4 - AI applications should be capable of assessing and adapting to the preferences of a clinician (e.g. learning the preferred medication that a clinician prescribes for a male adult hypertensive patient). |
| Statement 5 - (Over) reliance on AI applications to make clinical decisions can be harmful to patients. |
| Statement 6 - Current AI applications mainly operate as black boxes (from the perspective of clinicians) and therefore need regular scrutiny by users (e.g. clinicians and managers). |
| Statement 7 - Current datasets used to train and test AI applications are not representative of the populations in which they are applied. |
| Statement 8 - Clinical decisions made by AI applications may lead to unnecessary treatments which are not those recommended by evidence-based guidelines. |
| Statement 9 - Ethics committees (or institutional risk management committees) should be trained in formal processes to assess the ethical processing of data in AI applications. |
| Statement 10 - Data governance committees should also oversee AI applications. |
| Statement 11 - Data processing in AI applications needs to be monitored closely. |
| Statement 12 - Data output display needs to be assessed for fidelity and quality. |
| Statement 13 - Mechanisms to identify biases in unsupervised algorithms need to be implemented in all AI applications (a minimal sketch of one such check follows this list). |
| Statement 14 - Advances in AI application in primary care will lead to improvement of a) clinical decision making; b) risk assessment; c) care processes; d) continuity of care; e) coordination of care; f) safety of care; and g) managerial processes in health care. |
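Statements 7 and 13 call for datasets that represent the target population and for mechanisms to identify bias. One very simple screen of that kind is sketched below, under the assumption that subgroup shares in a training extract can be compared with reference population shares; the age bands, reference proportions, and 0.05 tolerance are illustrative, not taken from the article.

```python
# Illustrative sketch only: a dataset-representativeness screen, one possible
# "mechanism to identify biases" in the spirit of Statements 7 and 13.
# Subgroup labels, reference shares, and the 0.05 tolerance are assumptions.
from collections import Counter

def representativeness_gaps(dataset_groups, reference_shares):
    """Compare each subgroup's share of the training data with its share of the
    reference (e.g. registered practice) population and return the differences."""
    counts = Counter(dataset_groups)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - ref
            for group, ref in reference_shares.items()}

# Hypothetical age-band composition of a training extract vs. the reference population.
training_age_bands = ["18-39"] * 200 + ["40-64"] * 600 + ["65+"] * 200
reference = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}

for band, gap in representativeness_gaps(training_age_bands, reference).items():
    status = "FLAG" if abs(gap) > 0.05 else "ok"
    print(f"{band}: deviation from reference share = {gap:+.2f} ({status})")
```

A governance or data committee of the kind described in Statements 10 and 11 could run such checks routinely before an AI application is trained or updated; more elaborate bias audits would also examine model outputs across subgroups.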