
The rat-a-gorical imperative: Moral intuition and the limits of affective learning.

Joshua D Greene

Abstract

Decades of psychological research have demonstrated that intuitive judgments are often unreliable, thanks to their inflexible reliance on limited information (Kahneman, 2003, 2011). Research on the computational underpinnings of learning, however, indicates that intuitions may be acquired by sophisticated learning mechanisms that are highly sensitive and integrative. With this in mind, Railton (2014) urges a more optimistic view of moral intuition. Is such optimism warranted? Elsewhere (Greene, 2013) I've argued that moral intuitions offer reasonably good advice concerning the give-and-take of everyday social life, addressing the basic problem of cooperation within a "tribe" ("Me vs. Us"), but that moral intuitions offer unreliable advice concerning disagreements between tribes with competing interests and values ("Us vs. Them"). Here I argue that a computational perspective on moral learning underscores these conclusions. The acquisition of good moral intuitions requires both good (representative) data and good (value-aligned) training. In the case of inter-tribal disagreement (public moral controversy), the problem of bad training looms large, as training processes may simply reinforce tribal differences. With respect to moral philosophy and the paradoxical problems it addresses, the problem of bad data looms large, as theorists seek principles that minimize counter-intuitive implications, not only in typical real-world cases, but in unusual, often hypothetical, cases such as some trolley dilemmas. In such cases the prevailing real-world relationships between actions and consequences are severed or reversed, yielding intuitions that give the right answers to the wrong questions. Such intuitions, which we may experience as the voice of duty or virtue, may simply reflect the computational limitations inherent in affective learning.
I conclude, in optimistic agreement with Railton, that progress in moral philosophy depends on our having a better understanding of the mechanisms behind our moral intuitions.
Copyright © 2017 Elsevier B.V. All rights reserved.

Keywords:  Consequentialism; Deontology; Ethics; Machine learning; Model-free learning; Moral judgment; Normative ethics; Reinforcement learning; Utilitarianism

Year:  2017        PMID: 28343626     DOI: 10.1016/j.cognition.2017.03.004

Source DB:  PubMed          Journal:  Cognition        ISSN: 0010-0277


Related articles: 5 in total

1.  Model-free decision making is prioritized when learning to avoid harming others.

Authors:  Patricia L Lockwood; Miriam C Klein-Flügge; Ayat Abdurahman; Molly J Crockett
Journal:  Proc Natl Acad Sci U S A       Date:  2020-10-14       Impact factor: 11.205

2.  When do caregivers ignore the veil of ignorance? An empirical study on medical triage decision-making.

Authors:  Azgad Gold; Binyamin Greenberg; Rael Strous; Oren Asman
Journal:  Med Health Care Philos       Date:  2021-01-04

3.  [Review] The intractable problems with brain death and possible solutions.

Authors:  Ari R Joffe; Gurpreet Khaira; Allan R de Caen
Journal:  Philos Ethics Humanit Med       Date:  2021-10-09       Impact factor: 2.464

4.  The Characteristics of Moral Judgment of Psychopaths: The Mediating Effect of the Deontological Tendency.

Authors:  Shenglan Li; Daoqun Ding; Ji Lai; Xiangyi Zhang; Zhihui Wu; Chang Liu
Journal:  Psychol Res Behav Manag       Date:  2020-03-09

5.  Moral judgements of fairness-related actions are flexibly updated to account for contextual information.

Authors:  Milan Andrejević; Daniel Feuerriegel; William Turner; Simon Laham; Stefan Bode
Journal:  Sci Rep       Date:  2020-10-20       Impact factor: 4.379

