| Literature DB >> 35052036 |
Luís Moniz Pereira, The Anh Han, António Barata Lopes.
Abstract
We present a summary of research that we have conducted employing AI to better understand human morality. This summary outlines the theoretical fundamentals and considers how to regulate the development of powerful new AI technologies. The aim of the latter research is benevolent AI, with a fair distribution of the benefits associated with the development of these and related technologies, avoiding disparities of power and wealth due to unregulated competition. Our approach avoids the statistical models employed by other approaches to solving moral dilemmas, because these are "blind" to natural constraints on moral agents and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies involving factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, and counterfactual collaboration. Since these are basic elements of most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of the cultural specificities of their constituents.
Keywords: AI governance; artificial intelligence; evolutionary game theory; human morality; machine ethics
Year: 2021 PMID: 35052036 PMCID: PMC8774644 DOI: 10.3390/e24010010
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
Prisoners’ Dilemma.
| | Prisoner B: Silence | Prisoner B: Confesses |
|---|---|---|
| Prisoner A: Silence | Prison for six years, for each of them | A: prison for ten years; B: prison for two years |
| Prisoner A: Confesses | A: prison for two years; B: prison for ten years | Prison for eight years, for each of them |
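The dilemma in the table can be checked mechanically: with these sentences (fewer years is better), confessing is each prisoner's best response to either action of the other, so mutual confession is the only Nash equilibrium, even though mutual silence would leave both better off. A minimal sketch, with illustrative variable names not taken from the paper:

```python
# Payoffs from the Prisoners' Dilemma table above, as years in prison
# (lower is better). Actions: 0 = stay silent, 1 = confess.
# years[(a, b)] = (A's sentence, B's sentence).
years = {
    (0, 0): (6, 6),    # both silent
    (0, 1): (10, 2),   # A silent, B confesses
    (1, 0): (2, 10),   # A confesses, B silent
    (1, 1): (8, 8),    # both confess
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither prisoner can shorten
    their own sentence by unilaterally switching actions."""
    a_ok = years[(a, b)][0] <= years[(1 - a, b)][0]
    b_ok = years[(a, b)][1] <= years[(a, 1 - b)][1]
    return a_ok and b_ok

equilibria = [profile for profile in years if is_nash(*profile)]
print(equilibria)  # [(1, 1)] -> both confess, although mutual silence costs less
```

The check confirms the tension the paper's evolutionary game-theoretic studies build on: individually rational play (confess, confess) is collectively worse than cooperation (silence, silence).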