| Literature DB >> 33644301 |
Brian R Jackson, Ye Ye, James M Crawford, Michael J Becich, Somak Roy, Jeffrey R Botkin, Monica E de Baca, Liron Pantanowitz.
Abstract
Growing numbers of artificial intelligence applications are being developed and applied to pathology and laboratory medicine. These technologies introduce risks and benefits that must be assessed and managed through the lens of ethics. This article describes how long-standing principles of medical and scientific ethics can be applied to artificial intelligence using examples from pathology and laboratory medicine.
Keywords: algorithms; artificial intelligence; big data; ethics; machine learning; privacy
Year: 2021 PMID: 33644301 PMCID: PMC7894680 DOI: 10.1177/2374289521990784
Source DB: PubMed Journal: Acad Pathol ISSN: 2374-2895
Ethical Principles Relevant to AI.
| Core principle | Description |
|---|---|
| Patient autonomy (respect for persons) | Acknowledging patients have decision-making capacity |
| Beneficence and nonmaleficence | Realistic prospect of benefit for the patient |
| Justice | Equitable distribution of costs, risks, and benefits across populations |
| Scientific inquiry | Creating knowledge and sharing knowledge |
Abbreviation: AI, artificial intelligence.
Mechanisms to Assure Adherence to Ethical Principles.
| Level of accountability | Examples |
|---|---|
| Individual accountability | Professional codes of ethics and codes of conduct |
| Organizational accountability | Institutional policies and procedures for conflict of interest, management of external business relationships, transparency to stakeholders |
| Regulatory accountability | Government regulation, regulation by professional entities |
Summary of Principles for the Ethical Development and Use of AI in Pathology and Laboratory Medicine.
- Developers of AI systems should proactively inform patients and the public of how their data are collected and used to develop and validate their systems.
- Clinical organizations and developers should provide for informed individuals to control whether and how their personal data are used in the development of pathology AI systems.
- Developers, validators, and implementers of pathology AI systems should ensure that their systems provide measurable benefit to patients and/or populations, while minimizing risks and harms.
- Developers, validators, and implementers of pathology AI systems should ensure that both the development processes and the developed systems themselves promote fair treatment across all populations. This includes fair distribution of benefits, risks, and harms across all populations whose data are used for development of these systems, as well as those impacted by the use of such systems.
- Developers and implementers of pathology AI systems should ensure that their systems are sufficiently transparent and auditable to confirm that the above principles are being followed. They should also establish formal auditing processes and provide for public transparency of any findings (eg, through scientific publication).
- Developers, validators, and implementers of pathology AI systems should follow scientific norms of broad knowledge sharing and research integrity.
- Developers, validators, and implementers of pathology AI systems should establish formal oversight mechanisms, akin to institutional review boards, to ensure accountability to these ethical principles.
- Organizations engaged in developing, validating, implementing, selling, or purchasing pathology AI systems should hold each other accountable to this set of ethical principles through formal mechanisms such as contracts. This includes requirements of transparency, auditability and auditing, and prohibitions against reidentification and other misuse.