
Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks.

Authors: Paolo Muratore, Cristiano Capone, Pier Stanislao Paolucci

Abstract

Recurrent spiking neural networks (RSNNs) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, with the aim of improving our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This makes the learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial in order to progressively minimize the mean squared error (MSE), we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure, since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark.
Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
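The abstract describes a synapse-local rule obtained by gradient ascent on the likelihood of a target spike pattern, with the network clamped (teacher-forced) to that target. A minimal sketch of this idea, under assumed details not given in the record (a sigmoidal stochastic spiking neuron, an exponentially filtered presynaptic trace, and a randomly generated target pattern standing in for the externally projected sequence):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 40, 150                    # neurons, time steps (arbitrary illustration sizes)
tau, dt = 5.0, 1.0                # synaptic trace time constant and step (ms)
alpha = np.exp(-dt / tau)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical target spike pattern: the "internal sequence" the network must mimic.
s_tgt = (rng.random((T, N)) < 0.05).astype(float)

def log_likelihood(w):
    """Log-likelihood of the target spikes under the teacher-forced network."""
    x, ll = np.zeros(N), 0.0
    for t in range(T):
        p = sigmoid(w @ x)        # firing probability given the clamped history
        ll += np.sum(s_tgt[t] * np.log(p + 1e-12)
                     + (1 - s_tgt[t]) * np.log(1 - p + 1e-12))
        x = alpha * x + s_tgt[t]  # presynaptic trace driven by the target spikes
    return ll

w = rng.normal(0.0, 0.1 / np.sqrt(N), (N, N))
eta = 1e-3
ll_before = log_likelihood(w)

for _ in range(50):               # gradient ascent on the likelihood
    x, dw = np.zeros(N), np.zeros_like(w)
    for t in range(T):
        p = sigmoid(w @ x)
        # Local rule: (postsynaptic error) x (presynaptic trace)
        dw += np.outer(s_tgt[t] - p, x)
        x = alpha * x + s_tgt[t]
    w += eta * dw

ll_after = log_likelihood(w)      # increases as the network learns the pattern
```

Because the presynaptic traces are fixed by the clamped target, each neuron's likelihood is concave in its incoming weights, which is one reason this target-based scheme can converge in very few presentations. This is a toy reconstruction, not the authors' implementation: the paper's spike-dependent and voltage-dependent variants, and the neuron model, differ in detail.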


Year:  2021        PMID: 33592040      PMCID: PMC7886200          DOI: 10.1371/journal.pone.0247014

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


References (35 in total)

1.  Matching recall and storage in sequence learning with spiking neural networks.

Authors:  Johanni Brea; Walter Senn; Jean-Pascal Pfister
Journal:  J Neurosci       Date:  2013-06-05       Impact factor: 6.167

2.  Connectivity reflects coding: a model of voltage-based STDP with homeostasis.

Authors:  Claudia Clopath; Lars Büsing; Eleni Vasilaki; Wulfram Gerstner
Journal:  Nat Neurosci       Date:  2010-01-24       Impact factor: 24.884

3.  A cellular mechanism for cortical associations: an organizing principle for the cerebral cortex.

Authors:  Matthew Larkum
Journal:  Trends Neurosci       Date:  2012-12-25       Impact factor: 13.837

4.  Learning precisely timed spikes.

Authors:  Raoul-Martin Memmesheimer; Ran Rubin; Bence P Olveczky; Haim Sompolinsky
Journal:  Neuron       Date:  2014-04-24       Impact factor: 17.173

5.  Learning multiple variable-speed sequences in striatum via cortical tutoring.

Authors:  James M Murray; G Sean Escola
Journal:  Elife       Date:  2017-05-08       Impact factor: 8.140

6.  State-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons.

Authors:  Cristiano Capone; Matteo di Volo; Alberto Romagnoni; Maurizio Mattia; Alain Destexhe
Journal:  Phys Rev E       Date:  2019-12       Impact factor: 2.529

7.  A circuit for detection of interaural time differences in the brain stem of the barn owl.

Authors:  C E Carr; M Konishi
Journal:  J Neurosci       Date:  1990-10       Impact factor: 6.167

8.  A solution to the learning dilemma for recurrent networks of spiking neurons.

Authors:  Guillaume Bellec; Franz Scherr; Anand Subramoney; Elias Hajek; Darjan Salaj; Robert Legenstein; Wolfgang Maass
Journal:  Nat Commun       Date:  2020-07-17       Impact factor: 14.919

9.  SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

Authors:  Friedemann Zenke; Surya Ganguli
Journal:  Neural Comput       Date:  2018-04-13       Impact factor: 2.026

10.  Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model.

Authors:  Cristiano Capone; Elena Pastorelli; Bruno Golosio; Pier Stanislao Paolucci
Journal:  Sci Rep       Date:  2019-06-20       Impact factor: 4.379

Citing articles (3 in total)

1.  Error-based or target-based? A unified framework for learning in recurrent spiking networks.

Authors:  Cristiano Capone; Paolo Muratore; Pier Stanislao Paolucci
Journal:  PLoS Comput Biol       Date:  2022-06-21       Impact factor: 4.779

2.  Linking Brain Structure, Activity, and Cognitive Function through Computation.

Authors:  Katrin Amunts; Javier DeFelipe; Cyriel Pennartz; Alain Destexhe; Michele Migliore; Philippe Ryvlin; Steve Furber; Alois Knoll; Lise Bitsch; Jan G Bjaalie; Yannis Ioannidis; Thomas Lippert; Maria V Sanchez-Vives; Rainer Goebel; Viktor Jirsa
Journal:  eNeuro       Date:  2022-03-11

3.  MAP-SNN: Mapping spike activities with multiplicity, adaptability, and plasticity into bio-plausible spiking neural networks.

Authors:  Chengting Yu; Yangkai Du; Mufeng Chen; Aili Wang; Gaoang Wang; Erping Li
Journal:  Front Neurosci       Date:  2022-09-20       Impact factor: 5.152

