Luciano Floridi, Josh Cowls, Thomas C. King, Mariarosaria Taddeo.
Abstract
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Keywords: AI4SG; Artificial intelligence; Ethics; Privacy; Safety; Social good; Transparency
Year: 2020 PMID: 32246245 PMCID: PMC7286860 DOI: 10.1007/s11948-020-00213-5
Source DB: PubMed Journal: Sci Eng Ethics ISSN: 1353-3452 Impact factor: 3.525
Summary of seven factors supporting AI4SG and the corresponding best practices
| Factors | Corresponding best practices | Corresponding ethical principle |
|---|---|---|
| Falsifiability and incremental deployment | Identify falsifiable requirements and test them in incremental steps from the lab to the “outside world” | Nonmaleficence |
| Safeguards against the manipulation of predictors | Adopt safeguards which (i) ensure that non-causal indicators do not inappropriately skew interventions, and (ii) limit, when appropriate, knowledge of how inputs affect outputs from AI4SG systems, to prevent manipulation | Nonmaleficence |
| Receiver-contextualised intervention | Build decision-making systems in consultation with users interacting with and impacted by these systems; with understanding of users’ characteristics, the methods of coordination, the purposes and effects of an intervention; and with respect for users’ right to ignore or modify interventions | Autonomy |
| Receiver-contextualised explanation and transparent purposes | Choose a Level of Abstraction for AI explanation that fulfils the desired explanatory purpose and is appropriate to the system and the receivers; then deploy arguments that are rationally and suitably persuasive for the receiver to deliver the explanation; and ensure that the goal (the system’s purpose) for which an AI4SG system is developed and deployed is knowable to receivers of its outputs by default | Explicability |
| Privacy protection and data subject consent | Respect the threshold of consent established for the processing of datasets of personal data | Nonmaleficence; autonomy |
| Situational fairness | Remove from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety, or other ethical imperatives | Justice |
| Human-friendly semanticisation | Do not hinder the ability for people to semanticise (that is, to give meaning to, and make sense of) something | Autonomy |
| | Name | References | Areas | Relevant factor(s) |
|---|---|---|---|---|
| A | Field optimization of the protection assistant for wildlife security | Fang et al. | Environmental sustainability | (1), (3) |
| B | Identifying students at risk of adverse academic outcomes | Lakkaraju et al. | Education | (4) |
| C | Health information for homeless youth to reduce the spread of HIV | Yadav et al. | Poverty, public welfare, public health | (4) |
| D | Interactive activity recognition and prompting to assist people with cognitive disabilities | Chu et al. | Disability, public health | (3), (4), (7) |
| E | Virtual teaching assistant experiment | Eicher et al. | Education | (4), (6) |
| F | Detecting evolutionary financial statement fraud | Zhou and Kapoor | Finance, crime | (2) |
| G | Tracking and monitoring hand hygiene compliance | Haque et al. | Health | (5) |