| Literature DB >> 35588025 |
Abstract
Recent advancements in artificial intelligence (AI) have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to alleviate these issues, both on a practical and a theoretical level of analysis. First, we describe two approaches to machine ethics: the philosophical approach and the engineering approach, and show how tensions between the two arise due to discipline-specific practices and aims. Using the concept of disciplinary capture, we then discuss potential promises and pitfalls to cross-disciplinary collaboration. Drawing on recent work in philosophy of science, we finally describe how metacognitive scaffolds can be used to avoid epistemological obstacles and foster innovative collaboration in AI ethics in general and machine ethics in particular.
Keywords: AI ethics; Applied epistemology; Artificial moral agent; Disciplinary perspectives; Interdisciplinarity; Machine ethics
Year: 2022 PMID: 35588025 PMCID: PMC9120092 DOI: 10.1007/s11948-022-00378-1
Source DB: PubMed Journal: Sci Eng Ethics ISSN: 1353-3452 Impact factor: 3.777
List of topics (left column) with possible questions and answers (right column) that can be used to describe, analyze, and compare views central to different approaches to machine morality (inspired by Baalen and Boon (2019)):

| Topic | Questions (Q) and possible answers (A) |
| --- | --- |
| Consciousness | Q: What sort of consciousness is sufficient/necessary for morality? A: Dualism, physicalism, functionalism, computationalism, behaviorism |
| Autonomy | Q: What kind of autonomy is sufficient/necessary for moral agency and responsibility? A: Self-legislative (Kantian), independent from human supervision (AI) |
| Rationality | Q: What sort of rationality is sufficient/necessary for morality? A: Instrumental (rational agent), human-like rationality, Hobbesian empiricism, Kantian rationalism |
| Normative ethics | Q: What is morally good? A: Maximization of well-being (utilitarianism), duties and rights (deontology), virtues and flourishing (virtue ethics) |
| Metaethics | Q1: What is the nature of moral judgements? A: Universalism, relativism, nihilism. Q2: What is the meaning of moral terms? A: Cognitivism, non-cognitivism. Q3: Is moral knowledge possible? A: Empiricism, rationalism, intuitionism, skepticism. Q4: What is the nature of ethics? A: Philosophical, social, psychological, biological. Q5: How is morality evaluated? A: Societal good, human experts, moral law |
| Implementation | Q: How should ethics be implemented in machines? A: Top-down, bottom-up, hybrid |
| Technology | Q: What are the most suitable technical methods for developing moral machines? A: Logical reasoning, probability, machine learning, optimization |
| Research aim | Q: What is the overall aim of the research? A: Epistemic, normative, critical, theoretical, practical, constructive, monetary |
| Justification | Q: How is the research justified? A: Inevitability, harm-prevention, public trust, preventing immoral use, moral superiority of AMAs, better understanding of morality |
| Technological assessment | Q: How realistic is the explored artificial moral agent? A: Theoretically possible in the long-term, practically feasible with current technology |