| Literature DB >> 35720034 |
Abstract
With the continuous deepening of artificial intelligence (AI) in the medical field, the social risks arising from the development and application of medical AI products have become increasingly prominent, posing hidden threats to the protection of civil rights, social stability, and healthy development. China's existing risk-regulation theories face many new problems when dealing with such risks. By introducing the theory of risk administrative law, this paper analyzes the social risks of medical AI, organically combines the principle of risk prevention with benefit measurement, and systematically and flexibly reconstructs the theoretical system of medical AI social risk assessment. The paper completes the following work: (1) it reviews and sorts out works and papers on medical AI ethics and medical AI risk, and surveys the current state of medical AI social risk regulation in China and abroad to support follow-up research; (2) it introduces the relevant techniques of artificial neural networks (ANNs) and constructs a risk assessment index system for medical AI; (3) using a self-designed dataset, it employs a trained neural network model to assess risk. The experimental results show that the error of the constructed back-propagation neural network (BPNN) model is relatively small, indicating that the algorithm model developed in this research is worth popularizing and applying.
Year: 2022 PMID: 35720034 PMCID: PMC9200582 DOI: 10.1155/2022/5413202
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.809
Figure 1. Artificial neuron model.
Figure 2. BP neural network structure diagram.
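The artificial neuron of Figure 1 computes a weighted sum of its inputs plus a bias and passes the result through an activation function; these neurons are then stacked into the BP network of Figure 2. A minimal sketch in Python (the input values, weights, and the choice of a sigmoid activation are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sigmoid(z):
    # A common choice of activation; the figure itself does not fix one.
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # Weighted sum of the inputs plus a bias, passed through the activation.
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, 0.2, 0.8])    # example inputs
w = np.array([0.4, -0.6, 0.9])   # example connection weights
b = 0.1                          # bias
y = neuron(x, w, b)              # sigmoid(0.9) ≈ 0.711
```

A BP (back-propagation) network is simply layers of such neurons, with the weights adjusted by propagating the output error backwards through the layers.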
Medical AI product development risk assessment system.
| Risk indicator | Label |
|---|---|
| Weakening of doctors' practical skills | R1 |
| Weakening of doctors' moral responsibility | R2 |
| Violation of patient privacy | R3 |
| Challenge to patients' humanity | R4 |
| Failure to guarantee patients' rights | R5 |
| Lack of doctor-patient trust | R6 |
| Weakening of humanistic care | R7 |
| Impact on the scientific rigor of patient care research | R8 |
| Exacerbation of the unequal distribution of medical resources | R9 |
| Risk of unemployment in simple-labor jobs | R10 |
| Risk of an increased medical burden on patients | R11 |
| Distortion of medical data collection | R12 |
| Risk of medical data breaches | R13 |
| Generation of ethical and moral hazards | R14 |
Training results for different numbers of hidden layer nodes.
| Number of hidden layer nodes | Mean squared error |
|---|---|
| 5 | 0.0000977 |
| 6 | 0.0000869 |
| 7 | 0.0000619 |
| 8 | 0.0000602 |
| 9 | 0.0000788 |
| 10 | 0.0000872 |
| 11 | 0.0000948 |
| 12 | 0.0000982 |
| 13 | 0.0000965 |
| 14 | 0.0000990 |
| 15 | 0.0000974 |
| 25 | 0.0000893 |
| 35 | 0.0000989 |
| 45 | 0.0000948 |
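The table reflects a standard model-selection experiment: train the BP network once per candidate hidden-layer size and compare the resulting mean squared errors. The sweep can be sketched as follows; the synthetic data, toy target, learning rate, and epoch count are stand-in assumptions, not the paper's self-designed dataset or training setup:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mse(n_hidden, X, y, epochs=3000, lr=0.1):
    """Train a one-hidden-layer BP network by batch gradient descent
    on the squared error and return the final training MSE."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)       # hidden activations
        out = h @ W2 + b2              # linear output layer
        err = out - y                  # prediction error
        # Backpropagate: output-layer gradients, then hidden-layer gradients.
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * h * (1.0 - h)
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    pred = sigmoid(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Synthetic stand-in data: 100 samples of 14 indicator scores, with the
# overall risk taken as their mean (a toy target, not the paper's data).
X = rng.uniform(0.0, 1.0, (100, 14))
y = X.mean(axis=1, keepdims=True)

results = {n: train_mse(n, X, y) for n in (5, 8, 15)}
```

The hidden size with the lowest error is then kept for the final model; in the table above that is 8 nodes.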
Figure 3. BP neural network training error curve.
Figure 4. Variation curve of training simulation error of the neural network.
Figure 5. Training sample simulation results.
Figure 6. Test sample simulation results.