
The Moral Choice Machine.

Patrick Schramowski, Cigdem Turan, Sophie Jentzsch, Constantin Rothkopf, Kristian Kersting.

Abstract

Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? In this study, we show that applying machine learning to human texts can extract deontological ethical reasoning about "right" and "wrong" conduct. We create a template list of prompts and responses, such as "Should I [action]?", "Is it okay to [action]?", etc., with corresponding answers of "Yes/no, I should (not)." and "Yes/no, it is (not)." The model's bias score is the difference between the model's score of the positive response ("Yes, I should") and that of the negative response ("No, I should not"). For a given choice, the model's overall bias score is the mean of the bias scores of all question/answer templates paired with that choice. Specifically, the resulting model, called the Moral Choice Machine (MCM), calculates the bias score at the sentence level using embeddings of the Universal Sentence Encoder, since the moral value of an action depends on its context. It is objectionable to kill living beings, but it is fine to kill time. It is essential to eat, yet one might not eat dirt. It is important to spread information, yet one should not spread misinformation. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical, and moral choices, even with context information. Indeed, training the Moral Choice Machine on different temporal news and book corpora from the year 1510 to 2008/2009 demonstrates the evolution of moral and ethical choices over different time periods, for both atomic actions and actions with context information. By training it on different cultural sources, such as the Bible and the constitutions of different countries, the dynamics of moral choices across cultures, including attitudes toward technology, are revealed. In short, moral biases can be extracted, quantified, tracked, and compared across cultures and over time.
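The bias-score computation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released code: the `embed` function is a toy hashed bag-of-words stand-in for the Universal Sentence Encoder, the template list is abbreviated to two of the question/answer pairs quoted above, and similarity is plain cosine similarity.

```python
import zlib
import numpy as np

DIM = 64  # toy embedding dimension (assumption; USE produces 512-d vectors)

def embed(sentence):
    # Toy stand-in for the Universal Sentence Encoder: a hashed,
    # unit-normalised bag-of-words vector. The real MCM uses USE embeddings.
    vec = np.zeros(DIM)
    for token in sentence.lower().split():
        vec[zlib.crc32(token.encode()) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def cosine(a, b):
    # Embeddings are already unit-normalised, so the dot product suffices.
    return float(np.dot(a, b))

# Abbreviated template list of prompts with positive/negative answers.
TEMPLATES = [
    ("Should I {a}?", "Yes, I should.", "No, I should not."),
    ("Is it okay to {a}?", "Yes, it is.", "No, it is not."),
]

def bias_score(action):
    """Mean over templates of sim(question, positive) - sim(question, negative)."""
    scores = []
    for question, pos, neg in TEMPLATES:
        q_emb = embed(question.format(a=action))
        scores.append(cosine(q_emb, embed(pos)) - cosine(q_emb, embed(neg)))
    return sum(scores) / len(scores)
```

With a real sentence encoder, a positive score indicates the corpus-derived bias favours the affirmative answer for that action; with the toy embedding here, only the mechanics of the computation carry over.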
Copyright © 2020 Schramowski, Turan, Jentzsch, Rothkopf and Kersting.

Keywords:  AI; fairness in machine learning; machine learning; moral bias; natural language processing; text-embedding models

Year:  2020        PMID: 33733154      PMCID: PMC7861227          DOI: 10.3389/frai.2020.00036

Source DB:  PubMed          Journal:  Front Artif Intell        ISSN: 2624-8212


  5 in total

1.  Measuring individual differences in implicit cognition: the implicit association test.

Authors:  A G Greenwald; D E McGhee; J L Schwartz
Journal:  J Pers Soc Psychol       Date:  1998-06

2.  Semantics derived automatically from language corpora contain human-like biases.

Authors:  Aylin Caliskan; Joanna J Bryson; Arvind Narayanan
Journal:  Science       Date:  2017-04-14       Impact factor: 47.728

3.  The role of a "common is moral" heuristic in the stability and change of moral norms.

Authors:  Björn Lindström; Simon Jangard; Ida Selbing; Andreas Olsson
Journal:  J Exp Psychol Gen       Date:  2017-09-11

4.  How Moral Perceptions Influence Intergroup Tolerance: Evidence From Lebanon, Morocco, and the United States.

Authors:  Nadine Obeid; Nichole Argo; Jeremy Ginges
Journal:  Pers Soc Psychol Bull       Date:  2017-03

5.  Math = male, me = female, therefore math ≠ me.

Authors:  Brian A Nosek; Mahzarin R Banaji; Anthony G Greenwald
Journal:  J Pers Soc Psychol       Date:  2002-07
