Learning beyond finite memory in recurrent networks of spiking neurons.

Peter Tino, Ashley J S Mills.

Abstract

We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in a RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
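The extraction step mentioned in the abstract — grouping similar spike trains observed in the recurrent layer into abstract states and reading off a Moore machine from them — can be illustrated with a short sketch. The following is a minimal, illustrative reconstruction of that idea, not the authors' exact procedure: the function name, the choice of plain k-means on fixed-length spike-time vectors, and the input/output conventions are all assumptions made here for the example.

```python
# Minimal sketch (assumed details, not the paper's exact algorithm):
# cluster recurrent-layer spike-time vectors into abstract states, then
# tabulate (state, input symbol) -> next state transitions and a
# majority output symbol per state, Moore-machine style.

import numpy as np
from collections import defaultdict

def extract_machine(spike_vectors, inputs, outputs, n_states, seed=0):
    """spike_vectors: (T, d) recurrent-layer spike-time vectors, one per step.
    inputs/outputs:  length-T sequences of input and output symbols.
    Returns per-step state labels, a transition table, and state outputs."""
    rng = np.random.default_rng(seed)
    X = np.asarray(spike_vectors, dtype=float)

    # Plain k-means on the spike-time vectors (an illustrative clustering choice).
    centroids = X[rng.choice(len(X), n_states, replace=False)]
    for _ in range(50):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for k in range(n_states):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)

    # Tabulate observed transitions: (state, next input symbol) -> next state.
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(len(labels) - 1):
        counts[(labels[t], inputs[t + 1])][labels[t + 1]] += 1
    transitions = {k: max(v, key=v.get) for k, v in counts.items()}

    # Majority output symbol per state (the Moore-machine state output).
    out_counts = defaultdict(lambda: defaultdict(int))
    for s, o in zip(labels, outputs):
        out_counts[s][o] += 1
    state_output = {s: max(c, key=c.get) for s, c in out_counts.items()}

    return labels, transitions, state_output
```

In this reading, each cluster of spike-time vectors plays the role of one abstract processing state; if the tabulated transitions and state outputs reproduce the target MM, the machine has been induced, and any mismatches indicate where the induced mechanism is weak, in the diagnostic sense described in the abstract.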

Entities:  

Mesh:

Year:  2006        PMID: 16483409     DOI: 10.1162/089976606775623360

Source DB:  PubMed          Journal:  Neural Comput        ISSN: 0899-7667            Impact factor:   2.026


  2 in total

1.  Building functional networks of spiking model neurons.  (Review)

Authors:  L F Abbott; Brian DePasquale; Raoul-Martin Memmesheimer
Journal:  Nat Neurosci       Date:  2016-03       Impact factor: 24.884

2.  The chronotron: a neuron that learns to fire temporally precise spike patterns.

Authors:  Răzvan V Florian
Journal:  PLoS One       Date:  2012-08-06       Impact factor: 3.240

