
Efficient coding of cognitive variables underlies dopamine response and choice behavior.

Joseph J Paton; Christian K Machens; Asma Motiwala; Sofia Soares; Bassam V Atallah

Abstract

Reward expectations based on internal knowledge of the external environment are a core component of adaptive behavior. However, internal knowledge may be inaccurate or incomplete due to errors in sensory measurements. Some features of the environment may also be encoded inaccurately to minimize the representational costs associated with their processing. In this study, we investigated how reward expectations are affected by features of internal representations by studying behavior and dopaminergic activity while mice make time-based decisions. We show that several possible representations allow a reinforcement learning agent to model animals' overall performance during the task. However, only a small subset of highly compressed representations simultaneously reproduced the co-variability in animals' choice behavior and dopaminergic activity. Strikingly, these representations predict an unusual distribution of response times that closely matches animals' behavior. These results inform how constraints on representational efficiency may be expressed in the encoding of dynamic cognitive variables used for reward-based computations.
© 2022. The Author(s), under exclusive licence to Springer Nature America, Inc.
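The abstract frames dopaminergic activity as reporting reward prediction errors computed by a reinforcement learning agent over an internal representation of elapsed time. As a reading aid, here is a minimal, generic TD(0) sketch of that computation. This is not the authors' task model: the state discretization, learning rate, and reward timing below are illustrative assumptions.

```python
# Generic TD(0) sketch of a reward-prediction-error (RPE) computation,
# the signal classically attributed to midbrain dopamine neurons.
# Illustrative only: states discretize time within a trial, and the
# reward arrives at a fixed late time step (both assumptions).
import numpy as np

n_states = 10             # discretized time steps within a trial
alpha, gamma = 0.1, 0.98  # learning rate, temporal discount

V = np.zeros(n_states)    # learned value of each time state

for trial in range(500):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0  # reward just before trial end
        delta = r + gamma * V[s + 1] - V[s]    # TD error = RPE
        V[s] += alpha * delta                  # value update

# Values ramp up toward the rewarded time; once learned, the RPE at
# reward delivery shrinks, as in classic dopamine recordings.
print(V.round(3))
```

The paper's contribution concerns which compressed state representations (replacing the uniform time bins above) reproduce both choice variability and dopaminergic activity; the sketch only shows the underlying RPE mechanics.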

Year:  2022        PMID: 35668173     DOI: 10.1038/s41593-022-01085-7

Source DB:  PubMed          Journal:  Nat Neurosci        ISSN: 1097-6256            Impact factor:   28.771


References: 41 in total

1.  Discrete coding of reward probability and uncertainty by dopamine neurons.

Authors:  Christopher D Fiorillo; Philippe N Tobler; Wolfram Schultz
Journal:  Science       Date:  2003-03-21       Impact factor: 47.728

2.  Midbrain dopamine neurons encode a quantitative reward prediction error signal.

Authors:  Hannah M Bayer; Paul W Glimcher
Journal:  Neuron       Date:  2005-07-07       Impact factor: 17.173

3.  Representation and timing in theories of the dopamine system.

Authors:  Nathaniel D Daw; Aaron C Courville; David S Touretzky
Journal:  Neural Comput       Date:  2006-07       Impact factor: 2.026

4.  Stimulus representation and the timing of reward-prediction errors in models of the dopamine system.

Authors:  Elliot A Ludvig; Richard S Sutton; E James Kehoe
Journal:  Neural Comput       Date:  2008-12       Impact factor: 2.026

5.  [Review] A neural substrate of prediction and reward.

Authors:  W Schultz; P Dayan; P R Montague
Journal:  Science       Date:  1997-03-14       Impact factor: 47.728

6.  [Review] Neural Circuitry of Reward Prediction Error.

Authors:  Mitsuko Watabe-Uchida; Neir Eshel; Naoshige Uchida
Journal:  Annu Rev Neurosci       Date:  2017-04-24       Impact factor: 12.449

7.  Reinforcement learning with Marr.

Authors:  Yael Niv; Angela Langdon
Journal:  Curr Opin Behav Sci       Date:  2016-10

8.  A cellular mechanism of reward-related learning.

Authors:  J N Reynolds; B I Hyland; J R Wickens
Journal:  Nature       Date:  2001-09-06       Impact factor: 49.962

9.  A causal link between prediction errors, dopamine neurons and learning.

Authors:  Elizabeth E Steinberg; Ronald Keiflin; Josiah R Boivin; Ilana B Witten; Karl Deisseroth; Patricia H Janak
Journal:  Nat Neurosci       Date:  2013-05-26       Impact factor: 24.884

10.  Dopamine reward prediction error responses reflect marginal utility.

Authors:  William R Stauffer; Armin Lak; Wolfram Schultz
Journal:  Curr Biol       Date:  2014-10-02       Impact factor: 10.834

