
Frontiers research topic on the neurobiology of choice.

Julia Trommershäuser

Abstract

Year: 2011    PMID: 22013409    PMCID: PMC3189640    DOI: 10.3389/fnins.2011.00119

Source DB: PubMed    Journal: Front Neurosci    ISSN: 1662-453X    Impact factor: 4.677


Research on economic decision-making seeks to understand how subjects choose between plans of action (lotteries, gambles, prospects) that have economic consequences. The key difficulty in making such decisions is that typically no plan of action available to the decision-maker guarantees a specific outcome; rather, consequences are risky or uncertain. This Frontiers Research Topic on The Neurobiology of Choice combines contributions from researchers in neurobiology and in behavioral and computational neuroscience that discuss the neural computations underlying decision-making and adaptive behavior. Placing motor and cognitive decisions in a common theoretical framework brings into sharp relief one apparent difference between them. Researchers have long argued that the choices humans and animals make in sensorimotor tasks are nearly optimal, in the sense of approaching maximal expected utility or of complying with principles of statistical inference. In contrast, work on traditional economic decision-making tasks often focuses on situations in which participants violate the predictions of expected utility theory, for instance by misrepresenting the frequency of rare events or through interference from emotional factors (Kirk et al., 2011).
More recently, researchers in psychology and neuroscience have started to apply these theoretical principles to studying choice behavior and its neural basis in the laboratory, for instance in electrophysiological studies of animals making choices for primary reward such as juice (Milstein and Dorris, 2011; Opris et al., 2011) and in neuroimaging studies of humans making choices for money (Delgado et al., 2011). Meanwhile, a largely separate group of researchers, working in the field of sensorimotor control, has also recently drawn on statistical decision theory and reinforcement learning to reformulate the problem of hand and eye movement control (Stoloff et al., 2011). An important question in both fields is how decisions are shaped by learning (Delgado et al., 2011) and by delays in reward. The latter type of task concerns the discounting of larger future rewards relative to smaller rewards available immediately (Ray and Bossaerts, 2011). Bayesian decision theory describes how to select between possible courses of action on the basis of a specified loss function, e.g., expected utility, in many circumstances (Pezzulo and Rigoli, 2011). However, in most decision tasks neither the outcomes associated with different plans of action nor the probabilities of their occurrence are known to the decision-maker before the decision is made. Under these conditions, it is necessary to learn about the available outcomes from trial-and-error experience (Stoloff et al., 2011). The field of reinforcement learning (e.g., Sutton and Barto, 1998; Balleine et al., 2008; Niv and Montague, 2008) extends decision-theoretic accounts to situations involving learning.
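As a concrete illustration of the decision-theoretic prescription described above (choose the course of action that maximizes expected utility under a specified utility function), the following minimal sketch uses hypothetical lotteries and a concave utility function of my own choosing; none of the numbers come from the cited studies:

```python
import math

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

# A concave utility function models risk aversion (an illustrative assumption).
u = lambda x: math.sqrt(x)

# Hypothetical options: a sure $4 vs. a 50/50 gamble on $9 or nothing.
lotteries = {
    "sure_thing": [(1.0, 4.0)],
    "gamble": [(0.5, 9.0), (0.5, 0.0)],
}

# Decision theory prescribes the expected-utility-maximizing option.
best = max(lotteries, key=lambda k: expected_utility(lotteries[k], u))
print(best)  # → sure_thing (EU 2.0 beats the gamble's EU 1.5)
```

Under a risk-averse utility the sure outcome wins even though both options have the same expected monetary value ($4 vs. $4.50 is close; here the gamble's concave-utility value is strictly lower), which is the kind of systematic deviation from expected-value maximization that these tasks probe.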
This theoretical framework, and its underlying statistical principles, has been used to explain the role of learning both in traditional choice tasks (e.g., Behrens et al., 2007; Dayan and Daw, 2008) and in sensorimotor adaptation (e.g., Körding et al., 2007). Reinforcement learning theories also play an important role in another key area of current work in decision-making: the study of the neural processes underlying these functions. Notably, the dopaminergic system is involved both in motivated decisions (Kurniawan et al., 2011) and in movement, though how these functions relate is a subject of ongoing research and controversy. In addition to dopaminergic recordings, monkey work on learning about decisions from rewards has focused on frontal cortex (e.g., Lee and Seo, 2007) and on posterior parietal cortex, which is classically thought to belong to the so-called dorsal visual processing stream. Beyond the underlying theoretical parallels between the two fields, and the growing interest in both in similar learning processes and common neural mechanisms, two recent developments make the time ripe to begin building a bridge between research on decision-making and research on optimal motor control. The first is the availability of new experimental tools such as functional MRI for measuring the neural processes underlying human and non-human decision behavior, both during the decision process and following choice (Hansen et al., 2011; Santos et al., 2011). The second is the development of new analytical tools, in particular the growing application of behavioral and computational methods from psychophysics and Bayesian decision theory to decision-making (Baldassi and Simoncini, 2011). This has created a situation in which researchers across fields have started to use a common set of conceptual tools for defining problems, building computational models, and designing and analyzing experiments.
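The trial-and-error learning that reinforcement learning formalizes can be sketched with a minimal delta-rule learner on a two-armed bandit. The update's prediction-error term is the quantity often linked to dopaminergic signaling in this literature; the learning rate, exploration rate, and reward probabilities below are illustrative assumptions, not values from the cited studies:

```python
import random

def run_bandit(reward_probs, alpha=0.1, epsilon=0.1, n_trials=5000, seed=0):
    """Learn action values for a Bernoulli bandit with a delta-rule update."""
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)  # one value estimate per action
    for _ in range(n_trials):
        # Epsilon-greedy choice: mostly exploit, occasionally explore.
        if rng.random() < epsilon:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=lambda i: q[i])
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        # Delta rule: nudge the estimate by the reward prediction error (r - q[a]).
        q[a] += alpha * (r - q[a])
    return q

q = run_bandit([0.8, 0.2])
print(q)  # estimates approach the true reward probabilities 0.8 and 0.2
```

The point of the sketch is that value estimates converge toward the options' true expected rewards purely from sampled outcomes, with no prior knowledge of the payoff probabilities, which is exactly the regime the paragraph above describes.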
References: 14 in total (first 10 shown)

1.  The dynamics of memory as a consequence of optimal adaptation to a changing body.

Authors:  Konrad P Kording; Joshua B Tenenbaum; Reza Shadmehr
Journal:  Nat Neurosci       Date:  2007-05-13       Impact factor: 24.884

2. [Review] Mechanisms of reinforcement learning and decision making in the primate dorsolateral prefrontal cortex.

Authors:  Daeyeol Lee; Hyojung Seo
Journal:  Ann N Y Acad Sci       Date:  2007-03-08       Impact factor: 5.691

3. [Review] Decision theory, reinforcement learning, and the brain.

Authors:  Peter Dayan; Nathaniel D Daw
Journal:  Cogn Affect Behav Neurosci       Date:  2008-12       Impact factor: 3.282

4.  Learning the value of information in an uncertain world.

Authors:  Timothy E J Behrens; Mark W Woolrich; Mark E Walton; Matthew F S Rushworth
Journal:  Nat Neurosci       Date:  2007-08-05       Impact factor: 24.884

5.  Persistency of priors-induced bias in decision behavior and the FMRI signal.

Authors:  Kathleen A Hansen; Sarah F Hillenbrand; Leslie G Ungerleider
Journal:  Front Neurosci       Date:  2011-03-08       Impact factor: 4.677

6.  Reward sharpens orientation coding independently of attention.

Authors:  Stefano Baldassi; Claudio Simoncini
Journal:  Front Neurosci       Date:  2011-02-08       Impact factor: 4.677

7.  The value of foresight: how prospection affects decision-making.

Authors:  Giovanni Pezzulo; Francesco Rigoli
Journal:  Front Neurosci       Date:  2011-06-30       Impact factor: 4.677

8.  Effect of reinforcement history on hand choice in an unconstrained reaching task.

Authors:  Rebecca H Stoloff; Jordan A Taylor; Jing Xu; Arne Ridderikhoff; Richard B Ivry
Journal:  Front Neurosci       Date:  2011-03-23       Impact factor: 4.677

9.  Motor Planning under Unpredictable Reward: Modulations of Movement Vigor and Primate Striatum Activity.

Authors:  Ioan Opris; Mikhail Lebedev; Randall J Nelson
Journal:  Front Neurosci       Date:  2011-05-09       Impact factor: 4.677

10.  Investigating the role of the ventromedial prefrontal cortex in the assessment of brands.

Authors:  José Paulo Santos; Daniela Seixas; Sofia Brandão; Luiz Moutinho
Journal:  Front Neurosci       Date:  2011-06-03       Impact factor: 4.677


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.