| Literature DB >> 35360424 |
Nithesh Naik1,2, B M Zeeshan Hameed2,3, Dasharathraj K Shetty4, Dishant Swain5, Milap Shah2,6, Rahul Paul7, Kaivalya Aggarwal8, Sufyan Ibrahim2,9, Vathsala Patil10, Komal Smriti10, Suyog Shetty3, Bhavan Prasad Rai2,11, Piotr Chlosta12, Bhaskar K Somani2,13.
Abstract
The legal and ethical issues that Artificial Intelligence (AI) raises for society include privacy and surveillance, bias and discrimination, and, perhaps most philosophically challenging, the role of human judgment. The use of newer digital technologies has raised concerns that they may become a new source of inaccuracy and data breaches. In healthcare, mistakes in procedure or protocol can have devastating consequences for the patient who is the victim of the error. This is crucial to remember because patients come into contact with physicians at the moments in their lives when they are most vulnerable. Currently, there are no well-defined regulations in place to address the legal and ethical issues that may arise from the use of artificial intelligence in healthcare settings. This review attempts to address these pertinent issues, highlighting the need for algorithmic transparency, privacy and protection of all the beneficiaries involved, and cybersecurity of associated vulnerabilities.
Keywords: artificial intelligence; ethical issues; legal issues; machine learning; social issues
Year: 2022 PMID: 35360424 PMCID: PMC8963864 DOI: 10.3389/fsurg.2022.862322
Source DB: PubMed Journal: Front Surg ISSN: 2296-875X
Figure 1: Various ethical and legal conundrums involved with the usage of artificial intelligence in healthcare.
Considerations for ethical review of healthcare-based machine learning research: procedural and conceptual changes (31).
| Consideration | Description |
| --- | --- |
| Group-based approval | Access is provided to specific, qualified individuals grouped under a common governance structure, subject to defined conditions and with a specific aim in mind. |
| PHI (Protected Health Information) protection | PHI that is not required is deleted, preserving the option of examining raw or masked data. |
| Broad goal without pre-determined methodology | Allows comparison of alternative methodologies to aid implementation and avoids biasing study outputs. |
| Data-access frameworks | A greater emphasis on data governance, with accountability achieved through access and rationale records. |
| Pre-specified, frequent data retrieval without repeated amendments | Ascertains whether the model is learning from the most recent patterns in health data. |

| Consideration | Description |
| --- | --- |
| Prospective non-interventional trial application as a template | Patients do not receive treatments, and machine learning results do not reach the treating team in time to influence decision-making or the trial's evaluation. |
| Goal of the trial | To determine whether the model is feasible and can be used in clinical settings. |
| Model validation | Technical performance and calibration are evaluated using ML best practices. |
| Clinical evaluation | Comparing silent predictions to real-time patient labeling provides evidence of the model's clinical usefulness. |

| Consideration | Description |
| --- | --- |
| Goal of the trial | To determine whether the model is more effective than the existing standard of treatment. |
| Generalizability | The goal is to demonstrate the generalizability of the approach rather than of the model itself. |
| Disaggregated performance metrics | Patient safety and justice depend on performance indicators reported separately for patient subgroups. |
| Clinically relevant evaluation | Disaggregated performance measures will guide clinician acceptance, ensuring patient safety and justice. |