
Localist attractor networks.

R S Zemel, M C Mozer.

Abstract

Attractor networks, which map an input space to a discrete output space, are useful for pattern completion--cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors, and they readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).
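The dynamics the abstract describes can be illustrated with a minimal sketch. This is not the paper's exact formulation (it omits, for instance, the blending of the external observation into each update); it assumes a simplified rule in which the state moves to a responsibility-weighted mixture of stored attractor prototypes, with Gaussian responsibilities whose width `sigma` is annealed toward zero. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def localist_attractor_step(y, W, pi, sigma):
    """One update of a simplified localist attractor net.

    y     : current state vector, shape (d,)
    W     : attractor prototypes, one per row, shape (k, d)
    pi    : prior weight per attractor, shape (k,) -- priming could be
            modelled by raising the prior of a recently visited attractor
    sigma : width of each attractor's Gaussian, annealed toward zero
    """
    # responsibility of each attractor for the current state (softmax
    # over negative squared distances, scaled by the priors)
    d2 = np.sum((W - y) ** 2, axis=1)
    log_q = np.log(pi) - d2 / (2.0 * sigma ** 2)
    q = np.exp(log_q - log_q.max())
    q /= q.sum()
    # new state: responsibility-weighted mixture of the prototypes
    return q @ W

# toy run: a corrupted pattern settles onto the nearest stored pattern
rng = np.random.default_rng(0)
W = np.array([[1.0, 1.0, -1.0],
              [-1.0, 1.0, 1.0]])            # two stored attractors
pi = np.array([0.5, 0.5])                    # uniform priors
y = W[0] + 0.3 * rng.standard_normal(3)      # noisy version of pattern 0
for sigma in np.linspace(1.0, 0.05, 30):     # anneal sigma downward
    y = localist_attractor_step(y, W, pi, sigma)
print(np.round(y, 2))                        # ends near the first pattern
```

Because each attractor is encoded by its own prototype row rather than distributed across shared weights, adding or removing an attractor is a single-row edit, which is the ease-of-use property the abstract emphasizes.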

Year:  2001        PMID: 11359644     DOI: 10.1162/08997660151134325

Source DB:  PubMed          Journal:  Neural Comput        ISSN: 0899-7667            Impact factor:   2.026


  2 in total

1.  Flexible kernel memory.

Authors:  Dimitri Nowicki; Hava Siegelmann
Journal:  PLoS One       Date:  2010-06-11       Impact factor: 3.240

2.  Memory dynamics in attractor networks.

Authors:  Guoqi Li; Kiruthika Ramanathan; Ning Ning; Luping Shi; Changyun Wen
Journal:  Comput Intell Neurosci       Date:  2015-04-19
