| Literature DB >> 34822096 |
Marianna Capasso, Steven Umbrello.
Abstract
Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regards to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can adopt to design these systems to avoid harming and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence. In contrast, higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills is used to illustrate this design approach. It provides an exemplar of how designers and engineers can begin to orientate their design programs of these technologies towards the social good.Entities:
Keywords: Artificial intelligence; Medical AI; Nudging; Technoethics
Year: 2021 PMID: 34822096 PMCID: PMC8613457 DOI: 10.1007/s11019-021-10062-z
Source DB: PubMed Journal: Med Health Care Philos ISSN: 1386-7423
AI for Social Good: meaning and factors
| AI4SG factor | AI4SG factor imperative |
|---|---|
| 1. Falsifiability and incremental deployment | AI4SG designers should identify falsifiable requirements and test them in incremental steps from the lab to the “outside world” (Floridi et al.) |
| 2. Safeguards against the manipulation of predictors | AI4SG designers should adopt safeguards that (i) ensure that non-causal indicators do not inappropriately skew interventions and (ii) limit, when appropriate, knowledge of how inputs affect outputs from AI4SG systems, to prevent manipulation (Floridi et al.) |
| 3. Receiver-contextualised intervention | AI4SG designers should build decision-making systems in consultation with the users interacting with, and impacted by, these systems; with an understanding of users’ characteristics, of the methods of coordination, and of the purposes and effects of an intervention; and with respect for users’ right to ignore or modify interventions (Floridi et al.) |
| 4. Receiver-contextualised explanation and transparent purposes | AI4SG designers should choose a Level of Abstraction for AI explanation that fulfils the desired explanatory purpose and is appropriate to the system and its receivers; then deploy arguments that are rationally and suitably persuasive for the receivers, to deliver the explanation; and ensure that the goal (the system’s purpose) for which an AI4SG system is developed and deployed is knowable by default to the receivers of its outputs (Floridi et al.) |
| 5. Privacy protection and data subject consent | AI4SG designers should respect the threshold of consent established for the processing of datasets of personal data (Floridi et al.) |
| 6. Situational fairness | AI4SG designers should remove from relevant datasets variables and proxies that are irrelevant to an outcome, except when their inclusion supports inclusivity, safety, or other ethical imperatives (Floridi et al.) |
| 7. Human-friendly semanticisation | AI4SG designers should not hinder the ability for people to semanticise (that is, to give meaning to and make sense of) something (Floridi et al.) |
Fig. 1 Relationship between higher-order values of the EU HLEG on AI and AI4SG norms. Source: Umbrello and van de Poel (2021)
Fig. 2 The recursive VSD tripartite framework employed in this study. Source: Umbrello (2020)
Fig. 3 Values hierarchy. Source: van de Poel (2013)
Fig. 4 AI4SG-VSD design process. Source: Umbrello and van de Poel (2021)
Fig. 5 Translating the value of Fairness to design requirements through AI4SG norms