| Literature DB >> 33058757 |
Silvia Bernardi, Marcus K. Benna, Mattia Rigotti, Jérôme Munuera, Stefano Fusi, C. Daniel Salzman.
Abstract
The curse of dimensionality plagues models of reinforcement learning and decision making. The process of abstraction solves this by constructing variables describing features shared by different instances, reducing dimensionality and enabling generalization in novel situations. Here, we characterized neural representations in monkeys performing a task described by different hidden and explicit variables. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training, which requires a particular geometry of neural representations. Neural ensembles in prefrontal cortex, hippocampus, and simulated neural networks simultaneously represented multiple variables in a geometry reflecting abstraction but that still allowed a linear classifier to decode a large number of other variables (high shattering dimensionality). Furthermore, this geometry changed in relation to task events and performance. These findings elucidate how the brain and artificial systems represent variables in an abstract format while preserving the advantages conferred by high shattering dimensionality.
Keywords: abstraction; anterior cingulate cortex; artificial neural networks; dimensionality; disentangled representations; factorized representations; hippocampus; mixed selectivity; prefrontal cortex; representational geometry
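The operational definition above — train a linear decoder on a subset of task conditions, then test it on conditions it never saw — can be sketched on synthetic data. This is an illustrative toy, not the paper's analysis pipeline: the variable names, the factorized geometry, and the least-squares readout are all assumptions chosen to show why an abstract (factorized) representation yields high cross-condition generalization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Factorized ("abstract") geometry: each task variable contributes its own
# coding direction in neural state space (hypothetical, for illustration).
axis_value = rng.normal(size=n_neurons)    # direction coding the decoded variable
axis_context = rng.normal(size=n_neurons)  # direction coding the held-out variable

def make_trials(value, context):
    """Noisy population responses for one (value, context) condition."""
    mean = value * axis_value + context * axis_context
    return mean + 0.5 * rng.normal(size=(n_trials, n_neurons))

# Train the decoder for `value` only on conditions with context = +1 ...
X_train = np.vstack([make_trials(+1, +1), make_trials(-1, +1)])
y_train = np.array([+1] * n_trials + [-1] * n_trials)

# ... and test it on conditions with context = -1, never seen in training.
X_test = np.vstack([make_trials(+1, -1), make_trials(-1, -1)])
y_test = y_train.copy()

# Ridge-regularized least-squares readout as the linear classifier.
w = np.linalg.solve(X_train.T @ X_train + 1e-3 * np.eye(n_neurons),
                    X_train.T @ y_train)
ccgp = np.mean(np.sign(X_test @ w) == y_test)
print(f"cross-condition generalization accuracy: {ccgp:.2f}")
```

Because the two variables are coded along independent directions, the decoder trained in one context transfers almost perfectly to the other, so generalization accuracy stays near 1.0; an entangled geometry would drive it toward chance (0.5).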
Year: 2020 PMID: 33058757 DOI: 10.1016/j.cell.2020.09.031
Source DB: PubMed Journal: Cell ISSN: 0092-8674 Impact factor: 41.582