| Literature DB >> 30824227 |
Angela Radulescu, Yael Niv, Ian Ballard.
Abstract
Compact representations of the environment allow humans to behave efficiently in a complex world. Reinforcement learning models capture many behavioral and neural effects but do not explain recent findings showing that structure in the environment influences learning. In parallel, Bayesian cognitive models predict how humans learn structured knowledge but do not have a clear neurobiological implementation. We propose an integration of these two model classes in which structured knowledge learned via approximate Bayesian inference acts as a source of selective attention. In turn, selective attention biases reinforcement learning towards relevant dimensions of the environment. An understanding of structure learning will help to resolve the fundamental challenge in decision science: explaining why people make the decisions they do.
Keywords: Bayesian inference; approximate inference; category learning; corticostriatal circuits; dopamine; representation learning; rule learning; striatum
Year: 2019 PMID: 30824227 PMCID: PMC6472955 DOI: 10.1016/j.tics.2019.01.010
Source DB: PubMed Journal: Trends Cogn Sci ISSN: 1364-6613 Impact factor: 20.229
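The mechanism proposed in the abstract — attention weights over stimulus dimensions gating a reinforcement learning update — can be illustrated with a minimal sketch. This is not the authors' model: the softmax attention rule, the use of per-dimension weight spread as a stand-in for an approximate posterior over "which dimension is relevant", and all parameter values (`alpha`, `beta`, the task layout) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_dims = 3      # stimulus dimensions (e.g., color, shape, texture)
n_features = 3  # features per dimension
alpha = 0.3     # RL learning rate
beta = 5.0      # attention sharpness (softmax inverse temperature)

# Feature weights learned by RL, one per (dimension, feature).
w = np.zeros((n_dims, n_features))

def attention(evidence, beta):
    """Softmax over per-dimension relevance evidence -> attention weights."""
    e = np.exp(beta * (evidence - evidence.max()))
    return e / e.sum()

# Illustrative task: only dimension 0 is relevant; its feature 1 is rewarded.
for trial in range(500):
    stim = rng.integers(n_features, size=n_dims)  # one feature per dimension
    # Spread of learned weights within a dimension serves as a crude proxy
    # for posterior evidence that the dimension predicts reward.
    evidence = w.max(axis=1) - w.min(axis=1)
    phi = attention(evidence, beta)
    value = sum(phi[d] * w[d, stim[d]] for d in range(n_dims))
    reward = 1.0 if stim[0] == 1 else 0.0
    delta = reward - value  # reward prediction error
    for d in range(n_dims):
        # Attention gates the update, biasing learning toward relevant dims.
        w[d, stim[d]] += alpha * phi[d] * delta
```

After training, attention concentrates on the relevant dimension and the rewarded feature carries the largest weight, illustrating how attention and value learning can bootstrap each other.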