Joel T. Kaardal, Frédéric E. Theunissen, Tatyana O. Sharpee.
Abstract
The signal transformations that take place in high-level sensory regions of the brain remain enigmatic because of the many nonlinear transformations that separate responses of these neurons from the input stimuli. One would like to have dimensionality reduction methods that can describe responses of such neurons in terms of operations on a large but still manageable set of relevant input features. A number of methods have been developed for this purpose, but often these methods rely on the expansion of the input space to capture as many relevant stimulus components as statistically possible. This expansion leads to a lower effective sampling, thereby reducing the accuracy of the estimated components. Alternatively, so-called low-rank methods explicitly search for a small number of components in the hope of achieving higher estimation accuracy. Even with these methods, however, noise in the neural responses can force the models to estimate more components than necessary, again reducing the methods' accuracy. Here we describe how a flexible regularization procedure, together with an explicit rank constraint, can strongly improve the estimation accuracy compared to previous methods suitable for characterizing neural responses to natural stimuli. Applying the proposed low-rank method to responses of auditory neurons in the songbird brain, we find multiple relevant components making up the receptive field for each neuron and characterize their computations in terms of logical OR and AND computations. The results highlight potential differences in how invariances are constructed in visual and auditory systems.
Keywords: auditory cortex; computational neuroscience; dimensionality reduction; neural coding; receptive fields
Year: 2017 PMID: 28824408 PMCID: PMC5534486 DOI: 10.3389/fncom.2017.00068
Source DB: PubMed Journal: Front Comput Neurosci ISSN: 1662-5188 Impact factor: 2.380
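The abstract's low-rank approach builds on the second-order maximum noise entropy (MNE) model, in which the spike probability is a logistic function of a quadratic form in the stimulus and the quadratic kernel J is factored into a rank-r product. The sketch below illustrates that structure under those assumptions; the variable names (a, h, U, V) and function names are ours, not the paper's.

```python
import numpy as np

def mne_spike_prob(s, a, h, J):
    """Second-order MNE model: P(spike | s) = 1 / (1 + exp(a + h.s + s'Js))."""
    return 1.0 / (1.0 + np.exp(a + h @ s + s @ J @ s))

def low_rank_kernel(U, V):
    """Rank-constrained parameterization J = U V^T, with U, V of shape (d, r),
    so that rank(J) <= r by construction."""
    return U @ V.T
```

Factoring J this way is what makes the rank constraint explicit: the model never has more than r quadratic components, regardless of the stimulus dimensionality d.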
Low-rank MNE block coordinate descent algorithm (globally optimal domain)
[The 22-step pseudocode for this algorithm did not survive extraction; the surviving fragments reference a regularization parameter λ, a tolerance ϵ, an update a ← …, and a call to an interior-point method with inputs including ϵ and λ.]
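The block coordinate descent scheme named above alternately optimizes the MNE negative log-likelihood over the two low-rank factors, one block at a time. As a rough, hedged illustration of that alternation (using plain gradient steps rather than the interior-point solver the algorithm actually calls, and with all names ours):

```python
import numpy as np

def mne_nll(a, h, U, V, S, y):
    """Bernoulli negative log-likelihood of the low-rank MNE model,
    where rows of S are stimuli and y holds binary spike responses."""
    q = a + S @ h + np.einsum('ni,nj,ij->n', S, S, U @ V.T)
    p = 1.0 / (1.0 + np.exp(q))          # P(spike | s) = sigmoid(-q)
    tiny = 1e-12
    return -np.mean(y * np.log(p + tiny) + (1.0 - y) * np.log(1.0 - p + tiny))

def bcd_step(a, h, U, V, S, y, lr=0.01):
    """One block coordinate descent sweep: a gradient step on U with V held
    fixed, then a gradient step on V with the updated U held fixed."""
    n = len(y)
    # U block: grad_U NLL = S^T diag(w) S V, with w_n = (y_n - p_n)/n
    q = a + S @ h + np.einsum('ni,nj,ij->n', S, S, U @ V.T)
    p = 1.0 / (1.0 + np.exp(q))
    w = (y - p) / n
    U = U - lr * (S.T @ (w[:, None] * (S @ V)))
    # V block: recompute residuals with the new U, then step V
    q = a + S @ h + np.einsum('ni,nj,ij->n', S, S, U @ V.T)
    p = 1.0 / (1.0 + np.exp(q))
    w = (y - p) / n
    V = V - lr * (S.T @ (w[:, None] * (S @ U)))
    return U, V
```

Each block subproblem is smooth in its own factor, which is what makes the alternating scheme attractive relative to optimizing the full unconstrained J.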
Low-rank MNE block coordinate descent algorithm (locally optimal domain)
[The 25-step pseudocode for this algorithm did not survive extraction; the surviving fragments reference a tolerance ϵ, a call to an interior-point method with inputs including ϵ, an if-branch, and a failure counter with updates σ ← 0 and σ ← σ + 1.]
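The surviving fragments of the locally optimal variant show a failure counter σ that is reset (σ ← 0) and incremented (σ ← σ + 1), consistent with early stopping on cross-validation performance; the parameter table below lists σmax = 3 allowed failures. A minimal sketch of that bookkeeping, assuming σ counts consecutive failures to improve the best cross-validation loss (function name and exact halting convention ours):

```python
def stopping_iteration(cv_losses, sigma_max=3):
    """Return the iteration index at which training halts: sigma counts
    consecutive failures to improve the best cross-validation loss so far
    (sigma <- 0 on improvement, sigma <- sigma + 1 otherwise), and training
    stops once sigma exceeds sigma_max."""
    best = float('inf')
    sigma = 0
    for t, loss in enumerate(cv_losses):
        if loss < best:
            best = loss
            sigma = 0          # improvement resets the counter
        else:
            sigma += 1         # another failure to improve
            if sigma > sigma_max:
                return t
    return len(cv_losses) - 1  # never exceeded the budget; ran to the end
```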
Figure 1. (A) Two model neurons were generated from a synthetic receptive field, one with high SNR and another with low SNR. (B) Low-rank MNE, full-rank MNE, and STC models were optimized for both neurons, and the mean top four largest-magnitude components of J were plotted. (C) These mean components correspond to the largest-variance eigenvalues in the eigenvalue spectra. The dashed lines in the eigenvalue spectra correspond to the eigenvalues of the ground truth, JGT. (D) Low-rank models with maximum rank ranging over r = 1…8 were trained on four different jackknives of the data set, where for each jackknife the data set was split into a training and a cross-validation set. The predictive power of each jackknife's trained models, evaluated on its cross-validation set, saturates when r ≥ ropt = 4. (E) The receptive field reconstructions from the three methods are compared quantitatively based on each model's predictive power on the test sets (top) and the subspace overlap with the ground truth (bottom).
Figure 2. Low-rank MNE, full-rank MNE, and STC models were optimized on a dataset of 50 avian auditory forebrain neurons. Logical OR and logical AND functional basis (FB) models were fit using linear combinations of the subspace components of each method. Logical AND FB models from two example neurons are shown (A,B). The quality of each model is measured as the difference between the mean negative log-likelihood of the model and that of the linear MNE model, evaluated on the test sets; the plots summarize predictive ability across the population of neurons (C). A bar plot quantifies the number of neurons in the population best fit by each model (D). Note that low-rank and linear MNE models outperform all STC and full-rank MNE models across the population.
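The logical OR and logical AND models of Figure 2 combine the per-component stimulus projections through soft OR/AND nonlinearities. The paper's exact parameterization is not reproduced in this record, so the following is only a generic noisy-OR / product-of-sigmoids sketch of the two computations (all names, thresholds, and functional forms are ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logical_or_response(s, features, thresholds):
    """Soft OR: high response if ANY feature projection exceeds its
    threshold (noisy-OR combination of per-feature activations)."""
    g = sigmoid(features @ s - thresholds)   # per-feature activation in (0, 1)
    return 1.0 - np.prod(1.0 - g)

def logical_and_response(s, features, thresholds):
    """Soft AND: high response only if ALL feature projections exceed
    their thresholds (product of per-feature activations)."""
    g = sigmoid(features @ s - thresholds)
    return np.prod(g)
```

With two features, a stimulus aligned with only one of them drives the OR model strongly but barely excites the AND model, which is the qualitative distinction the figure's model comparison exploits.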
Figure 3. The difference between the negative log-likelihoods of the best logical AND (LAND) and logical OR (LOR) models, averaged across test sets, is plotted against Tr(J), where J is averaged across jackknives. The horizontal dashed line separates neurons best fit by logical OR models (above) from neurons best fit by logical AND models (below). The vertical dashed line separates neurons whose eigenvalue spectra of the mean J are dominated by negative variance (left) from those dominated by positive variance (right).
Figure 4. The data samples for each neuron are divided into 70% training (green), 20% cross-validation (blue), and 10% test (red) sets. The samples that appear in each set are varied across the 4 jackknives.
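The resampling scheme in Figure 4 can be sketched as follows, assuming each jackknife draws an independent random permutation of the sample indices (a minimal implementation; the function name and seeding are ours):

```python
import numpy as np

def jackknife_splits(n_samples, n_jackknives=4, fracs=(0.70, 0.20, 0.10), seed=0):
    """Split sample indices into train / cross-validation / test sets,
    re-drawn for each jackknife so set membership varies between jackknives."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_jackknives):
        idx = rng.permutation(n_samples)
        n_tr = int(round(fracs[0] * n_samples))
        n_cv = int(round(fracs[1] * n_samples))
        splits.append((idx[:n_tr],                 # 70% training
                       idx[n_tr:n_tr + n_cv],      # 20% cross-validation
                       idx[n_tr + n_cv:]))         # remaining 10% test
    return splits
```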
Summary of parameter values used in our application of Algorithm 2.

| Parameter | Value | Description |
| --- | --- | --- |
| r | Varies | Maximum rank of J |
| π1, …, πr | Varies | Constraint signs |
| ϵmax | 0.5 | Maximum value of the regularization parameters |
|  | 501 | Number of different values the regularization parameter can assume, forming a uniform grid from 0 to ϵmax |
|  | 70% of samples | Indices of data samples that form the training set |
|  | 20% of samples | Indices of data samples that form the cross-validation set |
|  | 20 | Maximum number of iterations of the block coordinate descent algorithm |
| δp | 0 (machine precision) | Convergence precision |
| σmax | 3 | Number of allowed failures to improve cross-validation performance |
Since the convergence precision, δp, is set to machine precision, termination is in practice governed by the iteration cap and the cross-validation failure limit σmax rather than by the convergence threshold itself.
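Per the table, the regularization parameter is searched over a uniform grid of 501 values from 0 to ϵmax = 0.5, with the winning value chosen on the cross-validation set. A sketch of that grid and selection step (the selection function and its name are ours):

```python
import numpy as np

# Uniform regularization grid from the parameter table:
# 501 values from 0 to eps_max = 0.5, spaced 0.001 apart.
EPS_MAX = 0.5
eps_grid = np.linspace(0.0, EPS_MAX, 501)

def pick_epsilon(cv_loss_per_eps):
    """Choose the grid value that minimizes cross-validation loss."""
    return eps_grid[int(np.argmin(cv_loss_per_eps))]
```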
Statistical approach for choosing ropt
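The pseudocode for this procedure did not survive extraction, but Figure 1D states the criterion it formalizes: cross-validated predictive power saturates once r ≥ ropt. A hedged sketch of one such saturation rule (the tolerance, names, and exact rule are our assumptions, not the paper's statistical test):

```python
import numpy as np

def choose_rank(mean_cv_power, tol=0.01):
    """Smallest rank whose mean cross-validated predictive power is within
    `tol` of the best achieved over all ranks. Entry i of `mean_cv_power`
    is assumed to be the mean (across jackknives) power at rank i + 1."""
    power = np.asarray(mean_cv_power)
    good = np.nonzero(power >= power.max() - tol)[0]
    return int(good[0]) + 1   # +1 because index 0 corresponds to rank 1
```

Applied to a saturating curve like the one in Figure 1D, this picks the knee of the curve rather than the largest rank tried, which is the point of the rank constraint.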