
Multiplexing signals in reinforcement learning with internal models and dopamine.

Hiroyuki Nakahara

Abstract

A fundamental challenge for computational and cognitive neuroscience is to understand how reward-based learning and decision-making occur, and how accrued knowledge and internal models of the environment are incorporated into them. Remarkable progress has been made in the field, guided by the midbrain dopamine reward prediction error hypothesis and the underlying reinforcement learning framework, which does not involve internal models ('model-free'). Recent studies, however, have begun to address more complex decision-making processes that are integrated with model-free decision-making, and to include internal models of environmental reward structures and of the minds of other agents, encompassing model-based reinforcement learning and generalized prediction errors. Even dopamine, a classic model-free signal, may carry multiplexed signals that draw on model-based information and contribute to representational learning of reward structure.
Copyright © 2014 Elsevier Ltd. All rights reserved.
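The abstract's central concept, the model-free reward prediction error (RPE), can be sketched as a standard temporal-difference (TD) update. This is an illustrative toy example, not code from the reviewed paper; the states, values, and learning parameters below are hypothetical.

```python
# Minimal sketch of a TD(0) reward prediction error (RPE),
# the 'model-free' dopamine-like signal discussed in the abstract.
# All names and values are illustrative assumptions.

def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta is the dopamine-like prediction error."""
    delta = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * delta  # learn from the error
    return delta

# Toy two-state example: a reward repeatedly follows state 0.
values = {0: 0.0, 1: 0.0}
errors = [td_update(values, 0, 1, reward=1.0) for _ in range(50)]
# As the value estimate converges, the prediction error shrinks:
# large delta for surprising reward, small delta once it is predicted.
```

Model-based schemes, by contrast, would compute the prediction from an explicit model of the environment's reward structure rather than from a cached value table like `values` here.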

Year:  2014        PMID: 24463329     DOI: 10.1016/j.conb.2014.01.001

Source DB:  PubMed          Journal:  Curr Opin Neurobiol        ISSN: 0959-4388            Impact factor:   6.627


Related articles: 12 in total

1.  Ventral striatal dopamine reflects behavioral and neural signatures of model-based control during sequential decision making.

Authors:  Lorenz Deserno; Quentin J M Huys; Rebecca Boehme; Ralph Buchert; Hans-Jochen Heinze; Anthony A Grace; Raymond J Dolan; Andreas Heinz; Florian Schlagenhauf
Journal:  Proc Natl Acad Sci U S A       Date:  2015-01-20       Impact factor: 11.205

2.  The receptive field is dead. Long live the receptive field?

Authors:  Adrienne Fairhall
Journal:  Curr Opin Neurobiol       Date:  2014-03-04       Impact factor: 6.627

3. [Review] Model-based predictions for dopamine.

Authors:  Angela J Langdon; Melissa J Sharpe; Geoffrey Schoenbaum; Yael Niv
Journal:  Curr Opin Neurobiol       Date:  2017-10-31       Impact factor: 6.627

4.  Reinforcement learning with associative or discriminative generalization across states and actions: fMRI at 3 T and 7 T.

Authors:  Jaron T Colas; Neil M Dundon; Raphael T Gerraty; Natalie M Saragosa-Harris; Karol P Szymula; Koranis Tanwisuth; J Michael Tyszka; Camilla van Geen; Harang Ju; Arthur W Toga; Joshua I Gold; Dani S Bassett; Catherine A Hartley; Daphna Shohamy; Scott T Grafton; John P O'Doherty
Journal:  Hum Brain Mapp       Date:  2022-07-21       Impact factor: 5.399

5.  Context-Dependent Multiplexing by Individual VTA Dopamine Neurons.

Authors:  Yves Kremer; Jérôme Flakowski; Clément Rohner; Christian Lüscher
Journal:  J Neurosci       Date:  2020-08-28       Impact factor: 6.167

6. [Review] Dopamine Does Double Duty in Motivating Cognitive Effort.

Authors:  Andrew Westbrook; Todd S Braver
Journal:  Neuron       Date:  2016-02-17       Impact factor: 17.173

7.  Dual reward prediction components yield Pavlovian sign- and goal-tracking.

Authors:  Sivaramakrishnan Kaveri; Hiroyuki Nakahara
Journal:  PLoS One       Date:  2014-10-13       Impact factor: 3.240

8.  Predictive representations can link model-based reinforcement learning to model-free mechanisms.

Authors:  Evan M Russek; Ida Momennejad; Matthew M Botvinick; Samuel J Gershman; Nathaniel D Daw
Journal:  PLoS Comput Biol       Date:  2017-09-25       Impact factor: 4.475

9.  Dopamine transients are sufficient and necessary for acquisition of model-based associations.

Authors:  Melissa J Sharpe; Chun Yun Chang; Melissa A Liu; Hannah M Batchelor; Lauren E Mueller; Joshua L Jones; Yael Niv; Geoffrey Schoenbaum
Journal:  Nat Neurosci       Date:  2017-04-03       Impact factor: 24.884

10. [Review] The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning.

Authors:  Helen M Nasser; Donna J Calu; Geoffrey Schoenbaum; Melissa J Sharpe
Journal:  Front Psychol       Date:  2017-02-22
