| Literature DB >> 25180550 |
Overcoming Catastrophic Interference in Connectionist Networks Using Gram-Schmidt Orthogonalization
Vipin Srivastava, Suchitra Sampath, David J. Parker.
Abstract
Connectionist models of memory storage have been studied for many years, and aim to provide insight into potential mechanisms of memory storage by the brain. A problem faced by these systems is that as the number of items to be stored increases across a finite set of neurons/synapses, the cumulative changes in synaptic weight eventually lead to a sudden and dramatic loss of the stored information (catastrophic interference, CI) as the previous changes in synaptic weight are effectively lost. This effect does not occur in the brain, where information loss is gradual. Various attempts have been made to overcome the effects of CI, but these generally use schemes that impose restrictions on the system or its inputs rather than allowing the system to intrinsically cope with increasing storage demands. We show here that catastrophic interference arises from interference among the stored patterns, which becomes catastrophic when the number of patterns stored exceeds a critical limit. However, when Gram-Schmidt orthogonalization is combined with the Hebb-Hopfield model, the model attains the ability to eliminate CI. This approach differs from previous orthogonalization schemes used in connectionist networks, which essentially reflect sparse coding of the input. Here CI is avoided in a network of fixed size without setting limits on the rate or number of patterns encoded, and without separating encoding and retrieval, thus offering the advantage of allowing associations between incoming and stored patterns. PACS Nos.: 87.10.+e, 87.18.Bb, 87.18.Sn, 87.19.La.
Year: 2014 PMID: 25180550 PMCID: PMC4152133 DOI: 10.1371/journal.pone.0105619
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
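To make the storage scheme described in the abstract concrete, the following is a minimal Python sketch (not the authors' code; the function names and the one-step sign test for retrieval are illustrative assumptions) of a Hebb-Hopfield network that stores Gram-Schmidt-orthogonalized versions of incoming +/-1 patterns and checks retrieval against the raw patterns:

import numpy as np

def orthogonalize(pattern, basis):
    """Gram-Schmidt: subtract the incoming pattern's projections onto the
    already-stored directions, keeping only its novel component."""
    v = pattern.astype(float)
    for b in basis:                # basis vectors are unit-normalized
        v -= (v @ b) * b
    norm = np.linalg.norm(v)
    return v / norm if norm > 1e-10 else None

def store(W, pattern, basis):
    """Hebbian (outer-product) update applied to the orthogonalized pattern."""
    v = orthogonalize(pattern, basis)
    if v is not None:              # skip patterns already in the stored span
        basis.append(v)
        W += np.outer(v, v)
    return W

def retrieves(W, pattern):
    """One-step retrieval test: the post-synaptic potentials h = W @ pattern
    should have the same signs as the raw pattern being probed."""
    return np.array_equal(np.sign(W @ pattern), pattern)

# Example: 200 neurons, 150 random +/-1 patterns -- well past the ~0.14 N
# load at which an unmodified Hopfield network breaks down.
rng = np.random.default_rng(0)
N, p = 200, 150
patterns = rng.choice([-1, 1], size=(p, N))
W, basis = np.zeros((N, N)), []
for xi in patterns:
    W = store(W, xi, basis)
print(sum(retrieves(W, xi) for xi in patterns) / p)   # -> 1.0

Because the stored orthonormal components make the weight matrix a projector onto the span of the patterns seen so far, presenting any raw learnt pattern reproduces it exactly for p < N, consistent with the behaviour reported in Figure 2B below.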
Figure 1. Schematic representation of h_i, the post-synaptic potential on an arbitrary site i when one of the learnt patterns, ξ^ν, is presented to check for retrieval, versus the noise term in eqn. (4).
The shaded areas represent the domains where ξ_i^ν h_i will be positive definite. The bounds on the noise term slide up and down with variations in p and N, enabling, at least in principle, plasticity to control CI to some extent.
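Eqn. (4) itself is not reproduced in this record. Assuming the standard Hebb-Hopfield form (an assumption about the paper's notation, not a quotation from it), the post-synaptic potential in the caption splits into a unit signal plus the crosstalk noise term it is plotted against:

% Assumed Hebbian weights over p stored patterns on N neurons:
%   J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu}, \qquad J_{ii} = 0.
% Presenting learnt pattern \xi^{\nu} then gives, up to an O(1/N) correction,
h_i = \sum_{j \ne i} J_{ij}\, \xi_j^{\nu}
    = \underbrace{\xi_i^{\nu}}_{\text{signal}}
    + \underbrace{\frac{1}{N} \sum_{\mu \ne \nu} \sum_{j \ne i}
        \xi_i^{\mu}\, \xi_j^{\mu}\, \xi_j^{\nu}}_{\text{crosstalk (noise)}}

Retrieval of bit i is stable when ξ_i^ν h_i > 0, i.e. when the crosstalk does not overwhelm the unit signal; this is the positive-definite region shaded in the figure.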
Figure 2. Simulation results for a system of 1000 neurons.
(A) Hopfield network showing memory breakdown due to catastrophic interference amongst the stored patterns: the fraction of input patterns that is retrieved drops rapidly around the load parameter p/N = 0.14. The results are shown for three sets of patterns, and the inset shows the results averaged over 50 sets of patterns. (B) Hopfield network with Gram-Schmidt orthogonalization of the incoming patterns. All the learnt patterns are retrieved perfectly until p = N, when the retrieval fraction drops abruptly to zero. The inset shows a magnified view close to load parameter p/N = 1 to highlight the abruptness of the drop. Note that the system does not learn the raw patterns as they are presented but their orthogonalized versions, whereas retrieval is checked against the raw patterns.
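The qualitative contrast between panels A and B can be reproduced with a short sweep. This is again a sketch, not the authors' simulation: a QR factorization stands in for sequential Gram-Schmidt (it yields the same orthonormal span), and a one-step sign test stands in for full asynchronous dynamics, so the raw network's collapse appears at a somewhat lower load than the p/N = 0.14 seen under full dynamics.

import numpy as np

def retrieval_fraction(N, p, orthogonalize, rng):
    """Store p random +/-1 patterns in an N-neuron network and return the
    fraction passing a one-step sign-retrieval test on the raw patterns."""
    xis = rng.choice([-1.0, 1.0], size=(p, N))
    if orthogonalize:
        # QR gives an orthonormal basis for the same span that sequential
        # Gram-Schmidt would build from the incoming patterns.
        vs = np.linalg.qr(xis.T)[0].T
    else:
        vs = xis / np.sqrt(N)          # plain Hebb-Hopfield storage
    W = vs.T @ vs                      # sum of outer products
    np.fill_diagonal(W, 0.0)           # usual Hopfield convention, J_ii = 0
    stable = np.sign(xis @ W) == xis   # retrieval is checked on raw patterns
    return stable.all(axis=1).mean()

rng = np.random.default_rng(1)
N = 1000
for load in (0.05, 0.14, 0.20, 0.99):
    p = int(load * N)
    print(f"p/N = {load:.2f}: raw {retrieval_fraction(N, p, False, rng):.2f}, "
          f"orthogonalized {retrieval_fraction(N, p, True, rng):.2f}")

The raw network's fully-retrieved fraction should collapse between the first two loads, while the orthogonalized network stays at 1.0 for every p < N.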