Rory Finnegan, Suzanna Becker.
Abstract
The hippocampus has been the focus of memory research for decades. While the functional role of this structure is not fully understood, it is widely recognized as being vital for rapid yet accurate encoding and retrieval of associative memories. Since the discovery of adult hippocampal neurogenesis in the dentate gyrus by Altman and Das in the 1960s, many theories and models have been put forward to explain the functional role it plays in learning and memory. These models postulate different ways in which new neurons are introduced into the dentate gyrus and different functional roles for them in learning and memory. Few if any previous models have incorporated the unique properties of young adult-born dentate granule cells and their developmental trajectory. In this paper, we propose a novel computational model of the dentate gyrus that incorporates the developmental trajectory of adult-born dentate granule cells, including changes in synaptic plasticity, connectivity, excitability, and lateral inhibition, using a modified version of the restricted Boltzmann machine. Our results show superior performance on memory reconstruction tasks for both recently and distally learned items when the unique characteristics of young dentate granule cells are taken into account. Even though the hyperexcitability of the young neurons generates more overlapping neural codes, reducing pattern separation, their unique properties nonetheless contribute to reducing retroactive and proactive interference at both short and long time scales. The sparse connectivity is particularly important for generating distinct memory traces for highly overlapping patterns that are learned within the same context.
Keywords: computational modeling; dentate gyrus; neurogenesis; restricted Boltzmann machines; sparse coding
Year: 2015 PMID: 26500511 PMCID: PMC4593858 DOI: 10.3389/fnsys.2015.00136
Source DB: PubMed Journal: Front Syst Neurosci ISSN: 1662-5137
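The abstract describes a modified restricted Boltzmann machine in which young dentate granule cells receive only sparse afferent connectivity. The authors' actual implementation is not reproduced here; the following is a minimal NumPy sketch of one standard way to impose fixed sparse connectivity on an RBM trained with one-step contrastive divergence (CD-1), via a binary weight mask. All layer sizes, the sparsity level, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 200, 120
sparsity = 0.1  # fraction of visible units each "young" hidden unit connects to

# Fixed binary connectivity mask: sparse afferents for the hidden layer.
mask = (rng.random((n_visible, n_hidden)) < sparsity).astype(float)
W = 0.01 * rng.standard_normal((n_visible, n_hidden)) * mask
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.05):
    """One CD-1 weight update; the mask keeps absent synapses pinned at zero."""
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                      # hidden probabilities given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float) # stochastic hidden sample
    v1 = sigmoid(h0 @ W.T + b_v)                     # one-step reconstruction
    ph1 = sigmoid(v1 @ W + b_h)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1)) * mask
    b_v += lr * (v0 - v1)
    b_h += lr * (ph0 - ph1)

v = (rng.random(n_visible) < 0.5).astype(float)
cd1_step(v)
```

Because the same mask multiplies both the initial weights and every gradient, pruned connections can never grow back, which is the simplest way to hold connectivity sparse throughout training.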
Figure 1. Gompertz function.
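The equation in this caption did not survive extraction. For reference, the standard three-parameter Gompertz growth function, of the kind typically used to model a sigmoidal maturation trajectory, is (the symbols $a$, $b$, $c$ here are generic, not necessarily the paper's notation):

```latex
g(t) = a\, e^{-b\, e^{-c t}}
```

where $a$ is the asymptote, $b$ shifts the curve along $t$, and $c$ sets the growth rate.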
Figure 2. Simulation 1: performance of the models with and without sparse coding on within-session pattern reconstruction tests. The models were trained sequentially on 11 groups of 90 patterns and tested on noisy versions of the training patterns, both after each group (to measure proactive interference) and after all groups were completed (to measure retroactive interference). (A) Proactive interference: input reconstruction accuracy during training. (B) Retroactive interference: input reconstruction accuracy on each group after training. (C) Relationship between post-training reconstruction accuracy and hidden unit activation overlap. (D) Distribution of post-training accuracy over all groups.
Figure 3. Simulation 2: performance of the models with and without neurogenesis and sparse connectivity on within-session pattern reconstruction tests. The models were trained sequentially on 11 groups of 90 patterns and tested on noisy versions of the training patterns, both after each group (to measure proactive interference) and after all groups were completed (to measure retroactive interference). (A) Proactive interference: input reconstruction accuracy during training. (B) Retroactive interference: input reconstruction accuracy on each group after training. (C) Relationship between post-training reconstruction accuracy and hidden unit activation overlap. (D) Distribution of post-training accuracy over all groups.
Figure 4. Simulation 2: performance of the models with and without neurogenesis and sparse connectivity on across-session pattern reconstruction tests. The models were trained sequentially on 11 groups of 90 patterns and tested on noisy versions of the training patterns, both after each group (to measure proactive interference) and after all groups were completed (to measure retroactive interference). (A) Proactive interference: input reconstruction accuracy during training. (B) Retroactive interference: input reconstruction accuracy on each group after training. (C) Relationship between post-training reconstruction accuracy and hidden unit activation overlap. (D) Distribution of post-training accuracy over all groups.
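The testing protocol shared by Figures 2–4 (sequential training on 11 groups of 90 patterns, with noisy-cue reconstruction tests after each group for proactive interference and after all groups for retroactive interference) can be sketched as follows. The `train`/`reconstruct` interface, the bit-flip noise level, and the pattern dimensions are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(patterns, flip_prob=0.1):
    """Corrupt binary patterns by flipping each bit with probability flip_prob."""
    flips = rng.random(patterns.shape) < flip_prob
    return np.where(flips, 1 - patterns, patterns)

def accuracy(model, patterns):
    """Mean fraction of bits correctly reconstructed from noisy cues."""
    recon = model.reconstruct(add_noise(patterns))
    return float((recon == patterns).mean())

def run_session(model, groups):
    """Train group-by-group; test after each group and again after all groups."""
    proactive, retroactive = [], []
    for g in groups:
        model.train(g)
        proactive.append(accuracy(model, g))    # tested right after learning
    for g in groups:
        retroactive.append(accuracy(model, g))  # tested after all training
    return proactive, retroactive
```

Testing immediately after each group exposes interference from previously learned groups (proactive), while re-testing every group at the end exposes interference from subsequently learned groups (retroactive).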
Post-training summary statistics for the three simulations.
| Comparison | Mean accuracies | 99% CI of difference |
| --- | --- | --- |
| RBM vs. SparseRBM | (0.844, 0.884) | (0.03, 0.054)* |
| SparseRBM vs. Neurogenesis | (0.883, 0.938) | (0.035, 0.057)* |
| SparseRBM vs. Neurogenesis sparsely connected | (0.883, 0.938) | (0.04, 0.065)* |
| Neurogenesis vs. Neurogenesis sparsely connected | (0.93, 0.938) | (0.006, 0.01)* |
| SparseRBM vs. Neurogenesis | (0.883, 0.934) | (0.04, 0.06)* |
| SparseRBM vs. Neurogenesis sparsely connected | (0.883, 0.932) | (0.037, 0.058)* |
| Neurogenesis vs. Neurogenesis sparsely connected | (0.934, 0.932) | (−0.004, 0.0) |
Mean accuracies of each pair of models and 99% bootstrapped confidence intervals around the difference between means are shown; asterisks (*) indicate statistically significant differences (those whose confidence intervals do not include 0). The confidence intervals were generated by calculating the difference in mean performance of each pair of models across 20 repeated simulations with different randomly generated training and test sets. From these 20 repeated simulations, we generated 10,000 bootstrapped resamples to obtain bootstrapped estimates of the distributions of the mean differences.
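The resampling scheme described in the note (20 repeated simulations, 10,000 bootstrap resamples, 99% percentile interval on the difference in means) can be sketched in Python. The accuracy values below are synthetic placeholders, and resampling the runs in pairs is an assumption about the procedure, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(acc_a, acc_b, n_boot=10_000, alpha=0.01):
    """Percentile bootstrap CI for the difference in mean accuracy (b - a)."""
    acc_a, acc_b = np.asarray(acc_a), np.asarray(acc_b)
    n = len(acc_a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample paired runs with replacement
        diffs[i] = acc_b[idx].mean() - acc_a[idx].mean()
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic accuracies for 20 repeated simulations of two hypothetical models.
acc_rbm = rng.normal(0.844, 0.01, size=20)
acc_sparse = rng.normal(0.884, 0.01, size=20)
lo, hi = bootstrap_ci(acc_rbm, acc_sparse)
print(f"99% CI for mean difference: ({lo:.3f}, {hi:.3f})")
# A difference is declared significant when the interval excludes 0.
```

With a true mean difference well away from zero, as in the synthetic data here, the resulting interval excludes 0, matching the significance criterion used in the table.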