
Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use.

Christian Herzog

Abstract

In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially feed back negatively on the human social system and on human moral thought itself and its value, e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments, and devaluing character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical use. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, an unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and to further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as a device equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators' as preliminary forms of AMAs are likely to diminish the breadth and depth of the concepts employed in moral arguments. Again, I base this claim on the idea that the currently dominant economic system rewards increases in productivity. However, increases in efficiency will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation, and, hence, over-reliance on them will narrow human moral thought. In addition, as the third risk, I will argue that an increased disregard of the interior of a moral agent may ensue, a trend that can already be observed in the literature.

Keywords:  AI ethics; Artificial intelligence; Artificial moral agents; Machine ethics; Robot ethics

Year:  2021        PMID: 33496885      PMCID: PMC7838071          DOI: 10.1007/s11948-021-00283-z

Source DB:  PubMed          Journal:  Sci Eng Ethics        ISSN: 1353-3452            Impact factor:   3.525


References:  6 in total

1.  The intelligence of the moral intuitions: comment on Haidt (2001).

Authors:  David A Pizarro; Paul Bloom
Journal:  Psychol Rev       Date:  2003-01       Impact factor: 8.934

2.  Automation bias - a hidden issue for clinical decision support system use. (Review)

Authors:  Kate Goddard; Abdul Roudsari; Jeremy C Wyatt
Journal:  Stud Health Technol Inform       Date:  2011

3.  Semantics derived automatically from language corpora contain human-like biases.

Authors:  Aylin Caliskan; Joanna J Bryson; Arvind Narayanan
Journal:  Science       Date:  2017-04-14       Impact factor: 47.728

4.  Building Moral Robots: Ethical Pitfalls and Challenges.

Authors:  John-Stewart Gordon
Journal:  Sci Eng Ethics       Date:  2019-01-30       Impact factor: 3.525

5.  The Artificial Moral Advisor. The "Ideal Observer" Meets Artificial Intelligence.

Authors:  Alberto Giubilini; Julian Savulescu
Journal:  Philos Technol       Date:  2017-12-08

6.  Critiquing the Reasons for Making Artificial Moral Agents.

Authors:  Aimee van Wynsberghe; Scott Robbins
Journal:  Sci Eng Ethics       Date:  2018-02-19       Impact factor: 3.525

