| Literature DB >> 33508698 |
Dhruva V Raman, Timothy O'Leary.
Abstract
Synapses and neural connectivity are plastic and shaped by experience. But to what extent does connectivity itself influence the ability of a neural circuit to learn? Insights from optimization theory and AI shed light on how learning can be implemented in neural circuits. Though abstract in their nature, learning algorithms provide a principled set of hypotheses on the necessary ingredients for learning in neural circuits. These include the kinds of signals and circuit motifs that enable learning from experience, as well as an appreciation of the constraints that make learning challenging in a biological setting. Remarkably, some simple connectivity patterns can boost the efficiency of relatively crude learning rules, showing how the brain can use anatomy to compensate for the biological constraints of known synaptic plasticity mechanisms. Modern connectomics provides rich data for exploring this principle, and may reveal how brain connectivity is constrained by the requirement to learn efficiently.
Year: 2021 PMID: 33508698 PMCID: PMC8202511 DOI: 10.1016/j.conb.2020.12.017
Source DB: PubMed Journal: Curr Opin Neurobiol ISSN: 0959-4388 Impact factor: 6.627
Figure 1. (a) Schematic of a learning circuit with four behavioural outputs (denoted by colour) and feedback signals targeting connections that adapt during learning. Each row depicts a feedback signal with a different degree of coarseness. Top: A scalar (coarse) feedback signal provides information on overall behavioural performance to all connections. In this scenario the credit assignment problem is most onerous, because the individual contributions of circuit connections to behavioural performance are hard to disentangle. Middle: A more detailed vector feedback signal specifies how changes in each of the behavioural outputs contribute to overall performance. Bottom: Separate subsets of the learning system inform separate behavioural outputs. Now vector feedback helps even a perturbation-based learning rule, as the number of synapses per behavioural output decreases by a factor of four. (b) Schematic wiring diagram of the extended MB circuit taken from [13••]. Separate compartments are innervated by separate neuromodulators encoding distinct forms of feedback on behavioural performance.
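The coarse-feedback scenario in the top row of Figure 1(a) can be illustrated with a minimal sketch (not taken from the paper; the task, network sizes, perturbation amplitude, and trial count are all assumptions): a perturbation-based learning rule in which every synapse is adjusted using only a single scalar performance signal.

```python
# Minimal sketch of perturbation-based learning from a scalar (coarse)
# feedback signal. Illustrative assumptions only; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4                     # four behavioural outputs, as in Figure 1(a)
W_target = rng.standard_normal((n_out, n_in))
x = rng.standard_normal((n_in, 200))   # batch of hypothetical sensory inputs
y = W_target @ x                       # desired behavioural outputs

def performance(W):
    """Scalar feedback: one number summarising overall behavioural error."""
    return np.mean((W @ x - y) ** 2)

W = np.zeros((n_out, n_in))
sigma = 0.05                           # perturbation amplitude (assumed)
for _ in range(5000):
    dW = sigma * rng.standard_normal(W.shape)
    # Credit assignment by trial and error: keep a random perturbation of
    # all weights only if the single scalar performance signal improves.
    if performance(W + dW) < performance(W):
        W += dW
```

Because every synapse receives the same scalar signal, credit assignment is slow; the vector-feedback and compartmentalised schemes in the lower rows of Figure 1(a) correspond to perturbing and evaluating smaller subsets of weights independently, which shrinks the search space per behavioural output.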
Figure 2. (a) Embedding an input-output mapping into a higher-dimensional space reduces the difficulty of feedback-based learning. Noisy gradient descent, with a fixed signal-to-noise ratio, learns faster and reaches higher steady-state performance in a higher-dimensional network. The effect is not limited to the particular linear network architecture shown, but also holds for multilayer, nonlinear networks. Figure reproduced from [5]. (b) A redundancy-enhancing neural circuit motif studied in [8], and its correspondence with biological circuits in the Drosophila mushroom body and the mammalian cerebellar cortex. Inhibitory neurons and sparse input connectivity both maximise the representational dimension of the network, allowing for more efficient pattern separation. Figure reproduced from [8].
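The fixed signal-to-noise effect in panel (a) can be sketched with a toy construction (not the authors' simulation; the random expansion, learning rate, SNR value, and step count are all assumptions): gradient descent on a linear readout where the update noise is held at a fixed norm ratio to the true gradient, and the low-dimensional task is embedded in a network of adjustable dimension.

```python
# Toy sketch of noisy gradient descent with a fixed signal-to-noise
# ratio (illustrative assumptions; not reproduced from the paper).
import numpy as np

def noisy_gd(n_hidden, steps=2000, lr=0.05, snr=2.0, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((4, 500))            # low-dimensional inputs
    v = rng.standard_normal((1, 4))
    y = v @ x                                    # realisable target mapping
    # A fixed random expansion embeds the task in n_hidden dimensions.
    A = rng.standard_normal((n_hidden, 4)) / np.sqrt(n_hidden)
    h = A @ x
    w = np.zeros((1, n_hidden))                  # learned readout weights
    for _ in range(steps):
        err = w @ h - y
        g = err @ h.T / h.shape[1]               # exact gradient of the MSE
        noise = rng.standard_normal(g.shape)
        # Rescale noise so its norm is a fixed fraction (1/snr) of |g|.
        noise *= np.linalg.norm(g) / (snr * np.linalg.norm(noise) + 1e-12)
        w -= lr * (g + noise)                    # fixed-SNR noisy update
    return float(np.mean((w @ h - y) ** 2))
```

The intuition matches the figure: a noise vector of fixed relative norm has a smaller component along the task-relevant subspace when the network is larger, so sweeping `n_hidden` (for example comparing `noisy_gd(8)` against `noisy_gd(128)`) tends to show faster learning and lower final error in the higher-dimensional network.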