| Literature DB >> 30559686 |
Lindsey C McKernan, Ellen W Clayton, Colin G Walsh.
Abstract
In the United States, suicide rates increased by 24% in the past 20 years, and suicide risk identification at the point of care remains a cornerstone of the effort to curb this epidemic (1). Because risk identification is hampered by symptom under-reporting, timing, or lack of screening, healthcare systems rely increasingly on risk scoring and now artificial intelligence (AI) to assess risk. AI is the science of solving problems and accomplishing tasks, through automated or computational means, that normally require human intelligence. This science is decades old and includes both traditional predictive statistics and machine learning. Only in the last few years has it been applied rigorously to suicide risk prediction and prevention. Applying AI in this context raises significant ethical concerns, particularly in balancing beneficence with respect for personal autonomy. To navigate the ethical issues raised by suicide risk prediction, we provide recommendations in three areas (communication, consent, and controls) for both providers and researchers (2).
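The abstract contrasts traditional predictive statistics with machine learning as two forms of AI-based risk scoring. A minimal sketch of the simpler end of that spectrum is below: a logistic regression over structured EHR-style features. Every feature name, data value, and threshold is a synthetic illustration, not the authors' model or feature set.

```python
# Minimal sketch of point-of-care risk scoring: traditional predictive
# statistics (logistic regression) over structured EHR-style features.
# Feature names, data, and labels are synthetic illustrations only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [prior_attempts, depression_score, er_visits_past_year]
X = rng.integers(0, 5, size=(200, 3)).astype(float)
# Synthetic labels standing in for a documented suicide-related outcome.
y = (X @ np.array([0.9, 0.4, 0.3]) + rng.normal(0.0, 1.0, size=200) > 3.0).astype(int)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[2.0, 4.0, 1.0]])      # hypothetical feature values
risk = model.predict_proba(new_patient)[0, 1]  # estimated probability of the outcome
print(f"Estimated risk score: {risk:.2f}")
```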
Keywords: artificial intelligence; code of ethics; ethics; machine learning; suicide
Year: 2018 PMID: 30559686 PMCID: PMC6287030 DOI: 10.3389/fpsyt.2018.00650
Source DB: PubMed Journal: Front Psychiatry ISSN: 1664-0640 Impact factor: 4.157
Recommendations for risk mitigation when applying AI for suicide prevention in healthcare settings (an illustrative sketch of selected Consent and Controls items follows the table).
| Area | Recommendations | Suggested evaluation |
| --- | --- | --- |
| Consent | Develop informed consent for patients to sign detailing the actions and limitations of AI | Develop consent forms at all literacy levels and test for understanding |
| | Develop similar consent for providers | Develop patient education materials that detail the purpose of AI and evaluate for understanding |
| | Provide patients with an "opt-out" of AI monitoring | |
| | Provide time limits or expiration dates for consent | |
| | Re-consent each year as the technology evolves | |
| | Have consent documents approved by experts and a medical review board | |
| Controls | Adopt standards for suicide monitoring with AI, such as determining what percentage of at-risk individuals will be monitored | Compare a provider-informed model vs. an AI-only model to assess whether feedback improves accuracy |
| | Form an AI oversight panel with multidisciplinary membership | |
| | Request provider feedback routinely and update systems accordingly | |
| | Create a system for providers to defer or activate risk monitoring with an explanation | |
| | Log model successes and failures; re-train models accordingly | |
| Communication | Conduct focus groups with stakeholders to assess the appropriateness and utility of integrating AI into healthcare | Develop provider materials and elicit feedback on their appropriateness |
| | Provide communication materials for providers to use when discussing AI and the monitoring process | |
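Several of the Consent and Controls items above describe concrete system behavior: consent that expires and can be opted out of, provider deferral with a recorded explanation, and logging of model successes and failures to drive re-training. The sketch below shows one way such records might be structured; every class, field, and value is a hypothetical illustration, since the paper prescribes policies rather than an implementation.

```python
# Illustrative sketch of three recommendations from the table: consent with an
# opt-out and a time limit, provider deferral with an explanation, and a
# success/failure log to inform re-training. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIMonitoringConsent:
    patient_id: str
    opted_out: bool = False
    granted_on: date = field(default_factory=date.today)
    valid_days: int = 365  # re-consent each year as the technology evolves

    def is_active(self) -> bool:
        # Consent lapses at its time limit and whenever the patient opts out.
        return (not self.opted_out
                and date.today() <= self.granted_on + timedelta(days=self.valid_days))

@dataclass
class MonitoringAudit:
    deferrals: list = field(default_factory=list)  # (provider_id, reason) pairs
    outcomes: list = field(default_factory=list)   # (predicted risk, observed) pairs

    def defer(self, provider_id: str, reason: str) -> None:
        # Providers may defer monitoring but must record an explanation.
        self.deferrals.append((provider_id, reason))

    def log_outcome(self, predicted_risk: float, observed: bool) -> None:
        # Accumulated successes and failures feed periodic model re-training.
        self.outcomes.append((predicted_risk, observed))

consent = AIMonitoringConsent(patient_id="px-001")
audit = MonitoringAudit()
if consent.is_active():
    audit.log_outcome(predicted_risk=0.72, observed=True)
else:
    audit.defer(provider_id="dr-042", reason="Consent expired or patient opted out")
```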