Hongjun Guan, Liye Dong, Aiwu Zhao.
Abstract
While artificial intelligence (AI) technology can enhance social wellbeing and progress, it also generates ethical decision-making dilemmas such as algorithmic discrimination, data bias, and unclear accountability. In this paper, we identify the ethical risk factors of AI decision making from a qualitative-research perspective, construct a risk-factor model of AI decision-making ethical risks using grounded theory, and explore the mechanisms of interaction between risks through system dynamics, on the basis of which we propose risk-management strategies. We find that technological uncertainty, incomplete data, and management errors are the main sources of ethical risk in AI decision making, and that the intervention of risk-governance elements can effectively block the social risks arising from algorithmic, technological, and data risks. Accordingly, we propose strategies for governing the ethical risks of AI decision making from the perspectives of management, research, and development.
Keywords: artificial intelligence decision making; ethical risk; mechanism of action; risk factors
Year: 2022 PMID: 36135147 PMCID: PMC9495402 DOI: 10.3390/bs12090343
Source DB: PubMed Journal: Behav Sci (Basel) ISSN: 2076-328X
Figure 1. Process of grounded theory.
Figure 2. Three-level coding process in NVivo.
Examples of open coding and scoping.
| No | Initial Scope | Initial Concept |
|---|---|---|
| 1 | Algorithmic discrimination risk | Human-caused discrimination, data-driven discrimination, discrimination caused by machine self-learning, discriminatory algorithm design, non-discriminatory algorithm design, data bias, prejudice, discrimination, user equality |
| 2 | Algorithmic security risks | Algorithm vulnerabilities, malicious exploitation, algorithm design, training, algorithm opacity, algorithm uncontrollability, algorithm unreliability |
| 3 | Algorithm interpretability risk | Informed human interest and subjectivity, algorithmic transparency, algorithmic verifiability |
| 4 | Algorithmic decision-making risk | Algorithm prediction and incorrect decision making, unpredictability of algorithm results, algorithm termination mechanisms |
| 5 | Risk of algorithm abuse/misuse | Algorithm abuse, algorithm misuse, code insecurity, technical malpractice/misuse, over-reliance on algorithms |
| 6 | Technical defect risk | Limited technical competence, inadequate technical awareness, technical failures, inadequate technical manipulation, technical misuse, technical defects, technical immaturity, “black box”, technical uncertainty |
| 7 | Data risk | Hacking, compliant data behavior, biased data omissions, lack of hardware stability, data management gaps, poor data security, image recognition, voice recognition, smart home, data adequacy, false information |
| 8 | Privacy breach risk | Privacy breach due to data resource exploitation, privacy breach due to data management vulnerability, data breach, privacy breach, user knowledge, user consent |
| 9 | Managing risk | Deficiencies in the management of application subjects, inadequate risk management capabilities, lack of supervision, legal loopholes, poor risk-management capabilities, inadequate safety and security measures, inadequate liability mechanisms |
| 10 | Unemployment risk | Machines replacing humans, mass unemployment |
| 11 | Risk of ecological imbalance | High energy consumption in the development of AI, problem of asymmetry in biodiversity |
| 12 | Imbalance in the social order | Imbalance in the social order, social stratification, and solidification due to technological wealth disparity, imbalance in human–computer relations, social order, disruption of equity, uncontrolled ethical norms |
| 13 | Autonomously controlled risk in human decision making | Substitute human decision making, machine emotions, AI entrusted with the ability to make decisions on human affairs, lack of ethical judgment on decision outcomes, participants and influencers of human decisions, changes in the rights of decision subjects |
| 14 | Risk governance | Educational reform, ethical norms, technical support, legal regulation, international cooperation |
| 15 | Risk of unclear liability | Improper attribution of responsibility, unclear attribution of responsibility for safety, debate over the identification of rights and responsibilities of smart technologies, review and identification of attribution of responsibility, complex ethical subject matter |
| 16 | Risk of inadequate decision-making mechanisms | Inadequate ethical norms and frameworks, inadequate ethical institution building |
| 17 | Decision judgment deficiency risk | Inadequate ethical judgment, poorly described algorithms for ethical implications, faulty instructions, complex algorithmic models, human-centered ethical decision-making frameworks |
| 18 | Decision making in team risk | Expert governance structures reveal limitations and shortcomings, illogical expert decision making structures, low levels of expert accountability |
| 19 | Consensus risk in decision making | Humans often disagree on solutions to real ethical dilemmas, no consensus, a crisis of confidence |
| 20 | Risk prevention | Enhance bottom-line thinking and risk awareness, strengthen the study and judgment of potential risks of AI development, carry out timely and systematic risk monitoring and assessment, establish an effective risk warning mechanism, improve the ability to control and dispose of ethical AI risks |
| 21 | Risk management | Awareness and culture of ethical risk management; establish a risk management department, risk identification, and assessment and handling; establish an ethical risk oversight department; development of internal policies and systems related to ethical risks; establish open lines of communication and consultation; establish a review mechanism for partners and risk reporting; focus on cultural factors and the significance of ethical risk management; governance with coordination |
| 22 | Ethical norms | Fairness, justice, harmony, security, accountability, traceability, reliability, control, right to control, good governance, social wellbeing |
Axial coding and main scope.
| No | Main Scope | Initial Scope |
|---|---|---|
| 1 | Algorithm risk | Algorithmic discrimination risk; algorithmic security risk; algorithm interpretability risk; algorithmic decision-making risk; risk of algorithm abuse/misuse |
| 2 | Data risk | Data risk |
| 3 | Technology risk | Technical defect risk |
| 4 | Social risk | Unemployment risk; risk of ecological imbalance; imbalance in the social order; privacy breach risk |
| 5 | Management risk | Autonomously controlled risk in human decision making; risk of unclear liability; managing risk |
| 6 | Decision risk | Risk of inadequate decision-making mechanisms; decision judgment deficiency risk; decision making in team risk; consensus risk in decision making |
| 7 | Risk management | Risk prevention; risk management; risk governance; ethical norms |
Figure 3. Conceptual model of ethical risk factors for AI decision making.
Figure 4. Structural model of the dimensions of ethical risk factors regarding AI decision making.
Figure 5. Risk subsystem causality diagram. Loop 1: degree of social risk (DSR) → data risk rate (DRR) → algorithmic discrimination risk (ADR) → rate of algorithm risk (RAR) → rate of management risk (RMR) → risk of decision failure (RDF) → rate of decision risk (RDR) → DSR; Loop 2: DSR → DRR → ADR → RAR → RMR → privacy leakage (PL) → risk of imbalance in the social order (RISO) → DSR.
Figure 6. Risk management system causality diagram. Loop 3: degree in risk management (DRM) → DSR → ADR → RAR → RMR → RDF (PL) → RDR (RISO) → DSR → risk management (RM) → DRM; Loop 4: DRM → incident rate of technical defect (IRTD) → rate of technical defect (RTD) (→ data leakage risk (DLR) → DSR → ADR) → RAR → RMR → RDF (PL) → RDR (RISO) → DSR → RM → DRM; Loop 5: DRM → DLR → DSR → ADR → RAR → RMR → RDF (PL) → RDR (RISO) → DSR → RM → DRM; Loop 6: DRM → IRTD → degree of algorithm interpretability (DAI) → risk of algorithm abuse/misuse (RA A/M) → risk of algorithm security incidents (RASI) → RAR → RMR → RDF → RDR → DSR → RM → DRM. Note: parenthesized nodes indicate the simultaneous alternative path (shown underlined in the original figure).
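The loop descriptions above can be checked mechanically. As an illustrative sketch (the node abbreviations follow the figure captions; this is not the authors' code, and only the two subsystem loops are shown), each loop can be stored as an ordered node sequence and verified to close back on its starting node:

```python
# Causal loops from Figure 5, written as ordered node sequences.
# Abbreviations follow the caption (DSR, DRR, ADR, RAR, RMR, RDF, RDR, PL, RISO).
LOOPS = {
    "Loop 1": ["DSR", "DRR", "ADR", "RAR", "RMR", "RDF", "RDR"],
    "Loop 2": ["DSR", "DRR", "ADR", "RAR", "RMR", "PL", "RISO"],
}

def loop_edges(nodes):
    """Turn a node sequence into directed edges, wrapping back to the start."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)]) for i in range(len(nodes))]

def is_closed_loop(nodes):
    """A feedback loop visits each node once and returns to its first node."""
    edges = loop_edges(nodes)
    return edges[-1][1] == nodes[0] and len(set(nodes)) == len(nodes)

for name, nodes in LOOPS.items():
    assert is_closed_loop(nodes), name
```

Representing loops this way makes the shared prefix of Loops 1 and 2 (DSR → DRR → ADR → RAR → RMR) explicit: the two loops branch only after the rate of management risk.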
Figure 7. Risk subsystem flow diagram.
Figure 8. Risk management system flow diagram.
Ethical risk variables and equations for AI decision making.
| No | Variable | Type | Relationship Equation |
|---|---|---|---|
| 1 | Quality of decision-making teams | Constant | 1 |
| 2 | Employee accident risk rate | Constant | 0.01 |
| 3 | Decision-making mechanism | Constant | 0.8 (assuming a 0.2 flaw in the decision-making mechanism) |
| 4 | Incident rate of a technical defect | Constant | 0.2 (technical risk management can reduce most of the risk of technical defects) |
| 5 | Degree of algorithm interpretability | Auxiliary variable | “The incident rate of technical defect” × 0.5 + 0.2 (design discrimination in the algorithm itself + algorithmic black box issues) |
| 6 | Rate of algorithm abuse/misuse | Auxiliary variable | “Employee accident risk rate” + “Degree of algorithm interpretability” |
| 7 | Rate of algorithm security incidents | Auxiliary variable | “The rate of algorithm abuse/misuse” × 2 (algorithm abuse/misuse rate accelerates algorithm security incidents) |
| 8 | Decision consensus rate | Auxiliary variable | “Quality of decision-making teams” × 0.8 (assumes 80% consistency of decision making in absolute teams) |
| 9 | Risk of unclear liability for accidents | Auxiliary variable | “Decision consensus rate” × 0.2 (the higher the consensus rate of decision making, the lower the risk of liability accidents) |
| 10 | Algorithmic discrimination risk | Auxiliary variable | “Data risk rate” × 0.8 + 0.2 (much of the algorithmic discrimination comes from input data + algorithmic design discrimination) |
| 11 | Rate of a technical defect | Auxiliary variable | “The incident rate of technical defect” × 0.95 + “Employee accident risk rate” × 0.05 (a large part of this is due to technical defects and a small part to problems with the designers themselves) |
| 12 | Rate of algorithm risk | Auxiliary variable | “The rate of algorithm security incidents” + “The rate of technical defect” × 0.1 + “Algorithmic discrimination risk” × 0.1 (the algorithmic risk rate is in addition to the risk rate summarized by the current data. There are also risks that may be caused by future technologies and algorithms) |
| 13 | Rate of management risk | Auxiliary variable | “Risk of unclear liability for accidents” + “The rate of algorithm risk” + 0.2 (unclear responsibility for accidents and algorithmic risks can both contribute to management failures, coupled with the risks inherent in management) |
| 14 | Data leakage | Auxiliary variable | “The rate of technical defect” × 0.5 + 0.2 |
| 15 | Data risk rate | Auxiliary variable | “Data leakage” × 2 + “Degree of social risk” (data breaches accelerate data risk and pose an extreme risk to the data generated; a higher degree of social risk also increases data risk) |
| 16 | Risk of imbalance in the social order | Auxiliary variable | “Privacy leakage” × 0.3 + 0.1 (privacy breaches can create social injustice by causing public panic and enabling problems such as big-data-enabled price discrimination) |
| 17 | Privacy leakage | Auxiliary variable | “The rate of management risk” × 0.9 + 0.1 (privacy breaches are largely the result of mismanagement) |
| 18 | Risk of decision failure | Auxiliary variable | “The rate of management risk” × 0.9 − “Decision-making mechanism” (management failures can lead to decision failure; the decision-making mechanism offsets this, e.g., a mechanism value of 0.5 would at least halve the risk of poor decision making) |
| 19 | Rate of decision risk | Auxiliary variable | “Risk of decision failure” × 0.9 + 0.1 (decision failure is a large part of the cause of decision risk) |
| 20 | Incidence of social risks | Rate variable | “The rate of decision risk” + “The risk of imbalance in the social order” + 0.1 |
| 21 | Degree of social risk | Level variable | INTEG (“Incidence of social risks”, 1) |
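The risk-subsystem equations above form a closed feedback loop that can be iterated numerically. The following Python sketch is our own illustrative translation of the table, not the authors' simulation model; `INTEG` is approximated by Euler integration with step `dt`:

```python
def simulate_risk(steps=5, dt=1.0):
    """Euler iteration of the pre-governance risk subsystem (table rows 1-21)."""
    quality_teams = 1.0        # quality of decision-making teams (constant)
    employee_accident = 0.01   # employee accident risk rate (constant)
    decision_mechanism = 0.8   # decision-making mechanism (constant)
    irtd = 0.2                 # incident rate of a technical defect (constant)
    dsr = 1.0                  # degree of social risk, INTEG(..., 1)
    history = [dsr]
    for _ in range(steps):
        rtd = irtd * 0.95 + employee_accident * 0.05  # rate of technical defect
        data_leakage = rtd * 0.5 + 0.2
        drr = data_leakage * 2 + dsr                  # data risk rate
        adr = drr * 0.8 + 0.2                         # algorithmic discrimination risk
        dai = irtd * 0.5 + 0.2                        # degree of algorithm interpretability
        rasi = (employee_accident + dai) * 2          # algorithm security incidents
        rar = rasi + rtd * 0.1 + adr * 0.1            # rate of algorithm risk
        unclear = (quality_teams * 0.8) * 0.2         # unclear-liability risk
        rmr = unclear + rar + 0.2                     # rate of management risk
        pl = rmr * 0.9 + 0.1                          # privacy leakage
        riso = pl * 0.3 + 0.1                         # social-order imbalance risk
        rdf = rmr * 0.9 - decision_mechanism          # risk of decision failure
        rdr = rdf * 0.9 + 0.1                         # rate of decision risk
        incidence = rdr + riso + 0.1                  # incidence of social risks (rate)
        dsr += incidence * dt                         # INTEG("Incidence of social risks", 1)
        history.append(dsr)
    return history
```

Because the level variable has only an inflow in this subsystem, the degree of social risk rises monotonically under these equations, which is the pre-governance behavior the model is designed to exhibit.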
Ethical risk governance variables and equations for AI decision making.
| No | Variable | Type | Relationship Equation |
|---|---|---|---|
| 1 | Incident rate of a technical defect | Auxiliary variable | 1 − “Degree of risk management” + 0.1 (technical risk management reduces the risk of technical defects) |
| 2 | Data leakage | Auxiliary variable | “The rate of technical defect” − “Degree of risk management” (technical risks, including human factors, cause data breaches, but risk management reduces the extent of a breach) |
| 3 | Risk prevention rate | Auxiliary variable | “Degree of social risk” × 0.9 + 0.1 (the higher the degree of social risk, the more strongly ethical norms and management systems reinforce the risk prevention rate) |
| 4 | Risk management rate | Rate variable | “Risk prevention rate” × 0.5 + 0.1 |
| 5 | Degree of social risk | Level variable | INTEG (“Incidence of social risks” − “Risk management rate”, 1) |
| 6 | Degree in risk management | Level variable | INTEG (1 − “Risk management rate”, 1) |
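The governance model replaces the constant technical-defect rate and data-leakage equation with the damped versions above and adds an outflow to the social-risk level. A self-contained illustrative sketch (again our own Euler translation of the tables, not the authors' code) shows the qualitative effect:

```python
def simulate_governed(steps=3, dt=1.0):
    """Euler iteration of the risk system with the governance feedback added."""
    quality_teams = 1.0
    employee_accident = 0.01
    decision_mechanism = 0.8
    dsr = 1.0   # degree of social risk, INTEG(..., 1)
    drm = 1.0   # degree in risk management, INTEG(..., 1)
    history = [dsr]
    for _ in range(steps):
        irtd = 1 - drm + 0.1                 # defect rate now damped by management
        rtd = irtd * 0.95 + employee_accident * 0.05
        data_leakage = rtd - drm             # management reduces the breach extent
        drr = data_leakage * 2 + dsr
        adr = drr * 0.8 + 0.2
        dai = irtd * 0.5 + 0.2
        rasi = (employee_accident + dai) * 2
        rar = rasi + rtd * 0.1 + adr * 0.1
        rmr = (quality_teams * 0.8) * 0.2 + rar + 0.2
        pl = rmr * 0.9 + 0.1
        riso = pl * 0.3 + 0.1
        rdf = rmr * 0.9 - decision_mechanism
        rdr = rdf * 0.9 + 0.1
        incidence = rdr + riso + 0.1
        prevention = dsr * 0.9 + 0.1         # risk prevention rate
        rm_rate = prevention * 0.5 + 0.1     # risk management rate
        dsr += (incidence - rm_rate) * dt    # INTEG(incidence - rm_rate, 1)
        drm += (1 - rm_rate) * dt            # INTEG(1 - rm_rate, 1)
        history.append(dsr)
    return history
```

With the governance outflow, the degree of social risk falls from its initial value rather than accumulating, which is the contrast the pre- and post-governance figures illustrate.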
Figure 9. Pre-governance risk development.
Figure 10. Risk development after governance.