Stefano Recanatesi1, Serena Bradde2,3, Vijay Balasubramanian2, Nicholas A Steinmetz4, Eric Shea-Brown1,5. 1. Center for Computational Neuroscience, University of Washington, Seattle, WA 98195, USA. 2. David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, PA 19104, USA. 3. American Physical Society, Ridge, NY 11709, USA. 4. Department of Biological Structure, University of Washington, Seattle, WA 98195, USA. 5. Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA.
Abstract
A fundamental problem in science is uncovering the effective number of degrees of freedom in a complex system: its dimensionality. A system's dimensionality depends on its spatiotemporal scale. Here, we introduce a scale-dependent generalization of a classic enumeration of latent variables, the participation ratio. We demonstrate how the scale-dependent participation ratio identifies the appropriate dimension at local, intermediate, and global scales in several systems, such as the Lorenz attractor, hidden Markov models, and switching linear dynamical systems. We show analytically how, at different limiting scales, the scale-dependent participation ratio relates to well-established measures of dimensionality. Applied to neural population recordings across multiple brain areas and brain states, this measure reveals fundamental trends in the dimensionality of neural activity, for example, in behaviorally engaged versus spontaneous states. Our method unifies widely used measures of dimensionality and applies broadly to multivariate data across several fields of science.
In many branches of science, complex systems are characterized by simultaneous values of many observables evolving over time. For example, the operational state of a living cell may be summarized by the expression levels of myriad proteins. Likewise, the instantaneous activity levels of the many neurons in a brain region summarize its state. The dynamics of these systems can be much lower dimensional. For example, at the coarsest scale, the overall dynamics of a brain area may be described just by slow fluctuations in the mean neural firing rate, i.e., a single dynamical variable. At an intermediate scale, the same dynamics could consist of several characteristic firing patterns evolving smoothly on a fixed d-dimensional manifold embedded in the state space. If we knew the underlying dynamical system, we could derive the relevant manifold at each scale from first principles. But in many of the most exciting complex systems becoming accessible to experimental study, our goal is to discover the dynamical system, a task that starts by determining the number of effective latent variables, i.e., the dimensionality of the system.

One approach to this problem has roots in point-set topology: a manifold is d dimensional if the number of uniformly sampled points in a region of characteristic length L scales as L^d. This fact leads to the definition of the capacity dimension in terms of the number n(ϵ) of Euclidean boxes of side length ϵ needed to cover the system’s trajectory in its embedding space: d_cap = lim_{ϵ→0} ln n(ϵ)/ln(1/ϵ). This intuitive quantity is difficult to compute in more than three dimensions, while sampling of dynamical systems is often too coarse to directly estimate the ϵ → 0 limit.6,7,8 A variation on this idea, easier to estimate in high dimensions, is the correlation dimension ν, determined from the scaling of the number of pairs of data points with separations less than r, in the r → 0 limit.
In this way, the capacity dimension and the correlation dimension both give local, fine-scale measures of dimension.

A second class of approaches starts with the correlation matrix between observations. For example, techniques related to principal-component analysis define the effective dimension as the number of eigenmodes of the correlation matrix that capture most of the variance. A substantial literature analyzes how to choose the threshold that defines “most,” but the resulting arbitrariness makes it challenging to define the dimension associated with different scales of observation. A dimension can be defined more naturally from the participation ratio (PR), which counts the effective dimensions along which data are spread as the ratio of the square of the first moment to the second moment of the eigenvalue probability density function. Here, we generalize this notion to a measure of effective dimension at different observation scales and show that it interpolates between the correlation dimension at small scales and the PR dimension globally. We show that the new quantity has intuitive meanings when applied to dynamical systems, including the Lorenz attractor, and to clustered systems like hidden Markov models. We then apply our method to elucidate the structure of neural population activity in different brain areas and states.
Results
Generalizing the notion of dimension to all scales
Consider T observations of N observables x(t) ∈ R^N, sampled from a data distribution p(x). For example, x(t) could be generated by a dynamical process with specified initial conditions. The empirical covariance matrix over the observations is C, an N × N matrix with C_ij = ⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩. The eigenvalues of C are λ_a, a = 1, …, N, and the associated spectral density is ρ(λ) = (1/N) Σ_a δ(λ − λ_a). The PR dimension,

D_PR = (Σ_a λ_a)² / Σ_a λ_a²,  (Equation 2.1)

is defined as the ratio between the square of the first moment and the second moment of the spectral density: D_PR measures the concentration of the eigenvalue distribution and quantifies how many eigenmodes are needed to substantially capture the data distribution, a similar notion to counting eigenmodes (or principal components) that capture most of the variance.

To extend to a scale-dependent measure of dimension, we first consider a ball of radius r, B_r(x̄), around a point x̄. The local covariance matrix of points within this ball is C_ij(x̄, r) = (1/M) Σ_{x ∈ B_r(x̄)} (x_i − μ_i)(x_j − μ_j), where M counts points in B_r(x̄) and μ is their average. In local principal-component analysis, the dominant eigenvectors of this matrix determine the local subspace in which the distribution is localized. Likewise, computing the PR dimension of this covariance measures a local dimension. Averaging over all starting points x̄ yields the scale-dependent PR (sdPR),

D_PR(r) = ⟨ (Σ_a λ_a(x̄, r))² / Σ_a λ_a(x̄, r)² ⟩_x̄,  (Equation 2.2)

an effective dimension up to scale r. Evaluating D_PR(r) for the Lorenz attractor and the noisy two-dimensional (2D) spiral illustrates its scale-dependent properties. For the Lorenz attractor, D_PR(r) is roughly 2 across scales (Figure 1B), reflecting dense coverage of a 2-manifold by this chaotic system and agreeing at small scales with the Lyapunov dimension arising from its dynamics. For the spiral, D_PR(r) starts from 2 for small r, reflecting the spread of local noise, then dips to approach 1, reflecting the line making the spiral, and returns to 2 at large scales r, reflecting the overall embedding of the spiral (Figure 1E).
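The global PR and the sdPR defined above translate directly into code. The following is a minimal numerical sketch (the function names, the random subsampling of reference points, and the minimum-points cutoff are our own illustrative choices, not a reference implementation):

```python
import numpy as np

def participation_ratio(X):
    """Global PR dimension: (sum of eigenvalues)^2 / sum of squared eigenvalues,
    computed as trace(C)^2 / trace(C @ C) for the covariance matrix C."""
    C = np.atleast_2d(np.cov(X, rowvar=False))
    return np.trace(C) ** 2 / np.trace(C @ C)

def scale_dependent_pr(X, r, n_ref=100, min_points=10, seed=0):
    """sdPR at scale r: average the PR of the local covariance over balls of
    radius r centered on a random subset of reference points."""
    rng = np.random.default_rng(seed)
    refs = X[rng.choice(len(X), size=min(n_ref, len(X)), replace=False)]
    prs = []
    for x0 in refs:
        ball = X[np.linalg.norm(X - x0, axis=1) <= r]
        if len(ball) >= min_points:  # need enough points for a stable covariance
            prs.append(participation_ratio(ball))
    return float(np.mean(prs)) if prs else float("nan")

# sanity check: an isotropic 3D Gaussian should have dimension ~3 at all scales
X = np.random.default_rng(0).standard_normal((5000, 3))
print(participation_ratio(X), scale_dependent_pr(X, r=1.0))
```

Increasing n_ref reduces estimator variance at the cost of more local covariance computations; this trade-off is discussed further under limitations below.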
Figure 1
The scale-dependent dimensionality
(A) Lorenz attractor: 10^5 points sampled from the attractor.
(B) The scale-dependent PR (sdPR) dimension is stable across scales.
(C) The scale-dependent correlation dimension matches the Lyapunov dimension 2.05 at small scales but vanishes at large scales.
(D) Noisy spiral: 10^5 points sampled from the 2D spiral with local noise.
(E) The sdPR dimension is 2 at small scales reflecting spread due to noise, approaches 1 at intermediate scales reflecting the line, and 2 at large scales reflecting the coarse-grained spiral.
(F) The scale-dependent correlation dimension interpolates non-monotonically between 2 and 0.
Details of the sdPR are in Section S1.
Alternatively, the correlation dimension can be generalized across scales. Let δ_ij = |x(t_i) − x(t_j)| be distances between samples, and define the correlation integral at distance r as C(r) = ⟨H(r − δ_ij)⟩_{i<j} = ∫_0^r p(δ) dδ. Here, H is the Heaviside step function, and p(δ) is the distribution of pairwise distances. We expect that C(r) ∝ r^d for small r, where d is the dimension of the manifold supporting the data. Then, the correlation dimension ν is defined as ν = lim_{r→0} ln C(r)/ln r. Although ν is defined in the r → 0 limit, it is sometimes extended to general r as the log-log derivative ν(r) = d ln C(r)/d ln r. To overcome sampling constraints as r → 0, one seeks a plateau in ν(r) at small but finite r, a valid approach if the dimension is relatively stable across a range of scales. This method effectively treats low-dimensional strange attractors, but sampling remains a challenge for high-dimensional manifolds, and the problem is compounded if the effective dimension varies with scale. Likewise, for any bounded dataset, ν(r) → 0 at large r simply because the data look point-like at large scales. (Technically, the correlation integral C(r) → 1 by construction at large r, so its derivative vanishes.) Indeed, applying ν(r) to the Lorenz attractor, we recover the expected dimension just bigger than 2 at small scales, but the value declines to zero at large scales at which the compact attractor is effectively point-like (Figure 1C).
By contrast, D_PR(r) (Figure 1B) remains roughly constant across scales, as it performs a kind of adaptive rescaling (via the denominator in Equation 2.1). Similarly, ν(r) declines to zero at large scales for the noisy spiral (Figure 1F), while the PR dimension (Figure 1E) stably captures the essentially 2D large-scale structure. Note that ν(r) also produces difficult-to-interpret oscillations at intermediate scales.

At small scales, we can derive a universal expression for the PR via a tangent space approximation to the data manifold that enables a new link between the PR and the correlation dimension. We approximate the local distribution as a Gaussian, p(x) ∝ exp(−(x − μ)ᵀ C⁻¹ (x − μ)/2), where μ is the mean and C is the covariance. Rotating and centering the data so that μ = 0 and C is diagonal, each component of the sampled vectors will be normally distributed: x_a ∼ N(0, λ_a), where λ_a is an eigenvalue of C. Thus, in this limit, D_PR = (Σ_a λ_a)²/Σ_a λ_a².

In the same limit, the correlation dimension is determined by the distribution of squared Euclidean distances between sampled points. For any pair, the distance squared is a sum of squared separations in each coordinate. Since the coordinates are Gaussian distributed, the squared separations are Gamma distributed: Δ_a² ∼ Gamma(k_a, θ_a), with shape k_a = 1/2 and scale θ_a = 4λ_a for the ath coordinate direction. The convolution of independent Gamma distributions for each coordinate gives the overall distribution of squared Euclidean distances between sampled points as an approximate Gamma distribution with shape parameter given by the Welch-Satterthwaite equation,20,21,22,23 simplifying here to k = (Σ_a λ_a)²/(2 Σ_a λ_a²) = D_PR/2. Exploiting this formula to compute the correlation integral in the r → 0 limit (cf. Section S3) yields

ν = lim_{r→0} D_PR(r).

Thus, the sdPR coincides with the correlation dimension, a well-accepted local notion of dimension, at small scales. In Section S2 and Figure S1, we also relate the sdPR to the Renyi dimension for small scales. We will next show in several tractable models that the sdPR, unlike the correlation and Renyi dimensions, generalizes naturally across scales.
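This small-scale correspondence can be checked numerically. The sketch below (our own illustration; sample size and quantile window are arbitrary choices) draws from an isotropic 3D Gaussian, where D_PR = 3, and verifies that the log-log slope of the correlation integral at small r approximates it:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
lams = np.array([1.0, 1.0, 1.0])          # isotropic 3D Gaussian: D_PR = 3
X = rng.standard_normal((4000, 3)) * np.sqrt(lams)

d_pr = lams.sum() ** 2 / (lams ** 2).sum()   # participation ratio from the spectrum

dists = pdist(X)                          # all pairwise Euclidean distances
# correlation integral C(r) = fraction of pairs with distance < r;
# fit its log-log slope over a small-r window
r_grid = np.quantile(dists, [0.001, 0.002, 0.005, 0.01, 0.02])
C = np.array([(dists < r).mean() for r in r_grid])
slope = np.polyfit(np.log(r_grid), np.log(C), 1)[0]

print(d_pr, slope)   # the slope should approximate D_PR = 3 at small scales
```

Note that the agreement is exact here because the distribution is isotropic; for strongly anisotropic Gaussians, the Gamma approximation describes the correlation integral over a range of small-but-finite scales.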
Dimensionality across scales
We numerically computed ν(r) and D_PR(r) for isotropic multidimensional Gaussians. As shown in Figures 2A and S2A, these examples illustrate the relationship between sdPR and the correlation dimension. While ν(r) decreases at larger scales, D_PR(r) remains constant. Furthermore, we see how, limiting the sampled statistics to 50,000 points, it is not possible to achieve a plateau in ν(r) at small scales, even for the largest dimensions considered; this means that capturing the dimensionality of the system from the correlation dimension in Figure 2A (left) is difficult, if not impossible. In the case of 2D Gaussians with increased elongation along one of the coordinate axes (Figures 2B and S2B), the correlation dimension accurately quantifies the local two dimensions in the small-r regime but fails to quantify the distribution’s skewness, which effectively makes it 1D at large scales. By contrast, D_PR(r) quantifies both the local two dimensions and the skewness, which results in a reduced dimension at larger scales.
Figure 2
The correlation dimension versus the PR dimension
(A) (Left) Scale-dependent correlation dimension and (right) sdPR of the multidimensional isotropic d-dimensional Gaussian distribution. Different lines correspond to increasing dimension d, cf. legend.
(B) (Left) Scale-dependent correlation dimension and (right) sdPR for the 2D skewed Gaussian distribution. The first eigenvalue of the diagonalized covariance matrix of this distribution is fixed, while the second varies according to the legend, determining the elongation of the distribution.
(C) (Left) Scale-dependent correlation dimension and (right) sdPR of the d-dimensional Gaussian distribution with scale-free power-law spectrum. Different lines correspond to increasing dimension d, cf. legend. In this case, the eigenvalues are distributed according to a power law with exponent α, so that the ath eigenvalue is λ_a ∝ a^{−α}. For each case, 50,000 points were randomly sampled. Dashed lines are extrapolations of the plotted curves in the left panel (right panel extrapolations not shown).
(C) (Right inset) PR dimension as a function of the exponent α for a scale-free spectrum of the covariance eigenvalues, λ_a ∝ a^{−α}, where a is the index of the sorted eigenvalues. Then, D_PR(r) at large scales converges to ζ(α)²/ζ(2α).
A key further test of the sdPR is to understand how it behaves in scale-free systems. Thus, we considered multidimensional Gaussians whose covariance matrices have power-law-distributed eigenvalues, resembling structures near criticality. In these cases, the sdPR equals the full rank of the covariance matrix at small scales and declines at large scales (Figures 2C and S2C). The global PR (Equation 2.1), at large scales, can be quantitatively captured; with a scale-free spectrum λ_a ∝ a^{−α}, we find D_PR = ζ(α)²/ζ(2α), where ζ is the Riemann zeta function. This global dimension declines monotonically with the scaling exponent α: larger α creates a more skewed distribution that is effectively lower dimensional when viewed at large scales (Figure 2C, right, inset).
In contrast to this consistent behavior of the sdPR, the correlation dimension declines to zero at large scales, failing to capture the data dimension (Figure 2C, left). Section S3 of the supplemental experimental procedures contains another scale-free example, as well as an example based on receptive field maps of neural populations (see also Figure S3).
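The zeta-function limit for the global PR of a scale-free spectrum can be checked directly; in the sketch below (exponent and truncation are arbitrary illustrative choices), the PR of a long truncated power-law spectrum is compared with ζ(α)²/ζ(2α):

```python
import numpy as np
from scipy.special import zeta

alpha = 2.0
a = np.arange(1, 200001, dtype=float)
lams = a ** -alpha                              # scale-free spectrum lambda_a = a^(-alpha)

finite = lams.sum() ** 2 / (lams ** 2).sum()    # PR of the truncated spectrum
analytic = zeta(alpha) ** 2 / zeta(2 * alpha)   # zeta(2)^2 / zeta(4) = 2.5 exactly

print(finite, analytic)
```

Repeating this with larger α gives a smaller limiting dimension, matching the monotonic decline shown in the inset of Figure 2C.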
Limitations of the study
The comparison between the sdPR and correlation dimension highlighted advantages of the sdPR over the correlation dimension. However, there are also some limitations in evaluating the sdPR, especially concerning its computational complexity.

The primary advantage of the sdPR over the correlation dimension is that it gives an estimate of the dimensionality across all scales. The disadvantages stem from the fact that D_PR(r) is a local point-wise dimensionality estimator: it is calculated as an average over balls of radius r centered on multiple data points (see Equation 2.2). Both the sdPR and correlation dimensions require the computation of second-order statistics: the covariance matrix for the sdPR and the matrix of squared distances for the correlation dimension. These two quantities carry the same information, since δ_ij² = G_ii + G_jj − 2G_ij, where G is the Gram matrix of the centered data, and computing either scales polynomially in the number of data points T and the data dimensionality N. Thus, the increased computational cost of the sdPR coming from the average over points is the main difference between the costs of the correlation dimension and the sdPR.

However, the computational cost of the sdPR, arising from the point-wise average in its definition, can be limited in practice, as the estimation need not be carried out around all points but just on a relatively small subset of n₀ reference points. An analysis of the bias over independent repetitions yields a fast decay of the estimation bias with n₀ (cf. Section S3.4 and Dahmen et al.). Given this rapid convergence, it is enough to select a modest n₀ when T is on the order of thousands or higher.

A second consequence of the sdPR being a local point-wise estimator is that the minimum scale accessible to the sdPR is larger than for the correlation dimension. The minimum distance for which the correlation dimension can be computed is the smallest pairwise distance in the dataset. Meanwhile, D_PR(r) computes distances only within a ball of radius r around a subset of initial points and then averages over the initial points; the minimum scale for the sdPR is therefore the smallest r at which each ball contains enough points to estimate a local covariance matrix. Thus, the correlation dimension can probe finer scales than the sdPR dimension. Despite this apparent limitation, however, in practice the sdPR often appears to converge faster to the true dimensionality at small scales, especially for higher-dimensional manifolds (cf. Figure 2A).
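The practical point about reference points can be illustrated as follows (again a sketch; the test distribution, radius, and n₀ values are our own arbitrary choices): the sdPR estimate changes little once a modest number of reference points is used.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((5000, 5))          # isotropic 5D Gaussian, true PR = 5

def pr(Y):
    C = np.cov(Y, rowvar=False)
    return np.trace(C) ** 2 / np.trace(C @ C)

def sdpr(X, r, n_ref, min_points=10):
    refs = X[rng.choice(len(X), n_ref, replace=False)]
    vals = []
    for x0 in refs:
        ball = X[np.linalg.norm(X - x0, axis=1) <= r]
        if len(ball) >= min_points:          # skip balls too sparse for a covariance
            vals.append(pr(ball))
    return float(np.mean(vals))

# estimates with increasing numbers of reference points n0
ests = {n: sdpr(X, r=2.0, n_ref=n) for n in (10, 100, 1000)}
print(ests)
```

The three estimates agree to within a few percent, consistent with the rapid decay of the subsampling bias discussed above.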
A measure of the local and global dimension
We next show how the sdPR can separate dimensionality driven by local dynamics that are partly influenced by noise and global dynamics that are dominated by the latent structure of a dynamical system. To illustrate, we considered data generated by an underlying hidden Markov model (HMM) with dynamics hopping between latent states (Figures 3A and S4; Section S4). The system is observed through measurements of noisy “emissions” distributed around well-separated means associated with each latent state. Locally, the dimension should be determined by the statistical spread of the data and hence equal the dimension of the emission noise in the observation space. Globally, the dimension should be related to the number of latent states and the separation of their emissions, relative to the magnitude and structure of the noise. The sdPR uncovers precisely this structure (example in Figure 3B; further details and examples in Section S4) and indicates a characteristic scale for transitioning between local and global dimensions, set by separability of the emission distributions.
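As a concrete illustration of this local/global split, the sketch below replaces the full HMM with a static Gaussian-mixture surrogate for its emissions (an assumption on our part: the covariance-based PR depends only on the marginal point cloud, not on the transition dynamics; all parameter values are arbitrary). Within a state, the PR recovers the full 30-dimensional emission noise; across states, it collapses to a few dimensions set by the handful of state means:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, dim, sigma = 5, 30, 0.1
means = 5.0 * rng.standard_normal((n_states, dim))      # well-separated state means

states = rng.integers(0, n_states, size=6000)           # latent state sequence
X = means[states] + sigma * rng.standard_normal((6000, dim))  # noisy emissions

def pr(Y):
    C = np.cov(Y, rowvar=False)
    return np.trace(C) ** 2 / np.trace(C @ C)

local_dim = np.mean([pr(X[states == s]) for s in range(n_states)])  # within a state
global_dim = pr(X)                                                  # across states

print(local_dim, global_dim)  # local ~ emission-noise dim; global set by the clusters
```

The separation of the state means relative to sigma sets the characteristic scale at which the sdPR would cross over between these two values.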
Figure 3
Scale-dependent dimensionality of hidden Markov models and switching linear dynamical systems
(A) Example of a hidden Markov model (HMM) with five hidden states. Right: latent space states diagram.
(B) sdPR for an HMM with 2 and 10 states, with observations made in a 30-dimensional observation space. At small versus large scales, the dimension is related to the structures of the observation space (where the noisy observations are high dimensional) and the state space (where the number of clusters influences the dimensionality).
(C) Example of time course of 2D latent space of switching linear dynamical system (SLDS) with two states (red and blue). Right: latent space dynamics.
(D) sdPR of the dynamics of one state in the SLDS (a rotating latent dynamical system). Observations have added Gaussian noise, and therefore the local dimensionality is higher, decays to 1 (the dimensionality of a single trajectory), and finally grows to 2, capturing the 2D circular geometry of the latent dynamics.
Figures S4 and S5 generalize the analysis of HMM and SLDS to consider different latent and observation space dimensions, noise distributions, and numbers of states.
For many applications, we must contend with the challenge that a “state” of the system may itself involve a characteristic dynamical trajectory with changing internal variables. Thus, we studied the sdPR of switching linear dynamical systems (SLDSs) (Figures 3C and S5; Section S4), in which the latent states of an HMM describe dynamical trajectories following an ordinary differential equation (ODE). We found that, for intermediate scales r, D_PR(r) identifies the system as 1D, which reflects the fact that trajectories in each state follow deterministic low-dimensional dynamics. Figure 3D demonstrates this for the case of switching rotating dynamics; the increase in dimensionality at the very smallest scales reflects the addition of emission noise.
Thus, in this case, similar to Figures 1D and 1E, the local dimension is driven by stochastic observations, the global dimension is driven by the overall geometry of the dynamical system, and at intermediate scales, the structure of dynamical trajectories is revealed.
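This scale profile can be reproduced with a toy surrogate: points on a circle (standing in for the rotational latent orbit) with small added observation noise. The radii, noise level, and estimator details below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 20000)
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((20000, 2))

def pr(Y):
    C = np.cov(Y, rowvar=False)
    return np.trace(C) ** 2 / np.trace(C @ C)

def sdpr(X, r, n_ref=50):
    refs = X[rng.choice(len(X), n_ref, replace=False)]
    return float(np.mean([pr(X[np.linalg.norm(X - x0, axis=1) <= r])
                          for x0 in refs]))

small, mid, large = sdpr(X, 0.02), sdpr(X, 0.3), sdpr(X, 3.0)
print(small, mid, large)  # ~2 (noise), dips toward 1 (the orbit), ~2 (full circle)
```

The smallest scale is dominated by the 2D observation noise, the intermediate scale resolves the 1D trajectory, and the largest scale recovers the 2D circular geometry, mirroring the SLDS result in Figure 3D.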
The dimensionality of neuronal data
We next applied our sdPR method to recordings of the simultaneous activity of thousands of neurons, made possible by recent technological advances. Others have examined the global dimension of the underlying systems; here, we use the sdPR to reveal differences in latent dynamics across scales, in different brain areas and states. Thus, we analyzed 37 Neuropixels recordings in mice from two sensory areas (visual thalamus and cortex) and two decision areas (frontal cortex and midbrain). Animals were either in a “spontaneous” state (awake, no task) or in an “engaged” state (performing two-alternative forced-choice tasks; Section S5).

We binned neural spike counts in 100 ms windows and subdivided neurons into groups of 100 to compare across sessions sampling variable numbers of units. Intriguingly, we first found that the sdPR depended systematically on scale in the visual cortex, midbrain, and frontal cortex, decreasing at larger scales (1.63 ± 0.05, mean ± SEM), but was roughly scale invariant in the thalamus (Figure 4A). Here, distances between neural response vectors quantify scale in the functional space of neural activity, rather than physically on the cortical sheet, suggesting that activity in deeper areas is structured in groups of similar patterns spread broadly over lower-dimensional functional manifolds, like our models with skewed or scale-invariant data covariance. The thalamus, a peripheral sensory area directly reflecting visual input, instead shows scale-invariant dimensionality, like our models with isotropic data covariance, and recalling the spatial scale invariance of natural images. Moreover, the closeness of the thalamus to the input may explain its significantly higher dimension than the visual cortex at every scale (see Section S5 and Figures S6 and S7), despite the massive expansion in the number of neurons involved in cortical as opposed to thalamic representation.
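For concreteness, the binning and grouping step can be sketched as follows on synthetic Poisson spike trains (the rates, duration, and array layout are placeholders of our own; the actual analysis uses the Neuropixels recordings described above):

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, duration, bin_s = 250, 120.0, 0.1        # 120 s of activity, 100 ms bins

# placeholder data: homogeneous-Poisson spike times per neuron at 1-10 Hz
rates = rng.uniform(1.0, 10.0, n_neurons)
spike_times = [np.sort(rng.uniform(0.0, duration, rng.poisson(r * duration)))
               for r in rates]

# bin spike counts in 100 ms windows -> (neurons, time bins) count matrix
n_bins = int(round(duration / bin_s))
edges = np.linspace(0.0, duration, n_bins + 1)
counts = np.stack([np.histogram(st, bins=edges)[0] for st in spike_times])

# subdivide neurons into non-overlapping groups of 100 for cross-session comparison
group_size = 100
groups = [counts[i:i + group_size].T
          for i in range(0, n_neurons - group_size + 1, group_size)]

print(counts.shape, len(groups), groups[0].shape)
```

Each group is a (time bins × neurons) matrix of population response vectors, the point cloud to which the sdPR is then applied.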
Interestingly, the frontal cortex, a key player in planning and executive control, had higher dimensionality than the midbrain in engaged, but not spontaneous, conditions—prominently at smaller scales accessible to our analysis despite limited data, as shown in Figure 4B.
Figure 4
Scale-dependent dimensionality of neural activity
(A) sdPR across regions (scale normalized from 0 to 1; Section S5).
(B) Comparison between regions with 95% confidence intervals.
(C) Difference in dimensionality between engaged and spontaneous conditions (green) and between spontaneous and passive conditions (yellow). Shading = 95% confidence interval (CI).
Extended analysis in Figures S6 and S7.
To assess dimensionality changes within brain areas but across behavioral states, we computed differences between engaged and spontaneous conditions (Figures 4C, S6, and S7), finding significant deviation in the frontal cortex and midbrain, which are involved in decision making, but not in sensory areas. Specifically, task-driven neural activity was lower dimensional, especially at small and intermediate scales, recalling Mazzucato et al. We also considered a “passive” condition in which decision task stimuli were presented in randomized order without eliciting behaviors. The difference between spontaneous and passive conditions was not significantly different from zero (Figure 4C, yellow), demonstrating that task engagement, rather than stimulus presentation alone, influences response dimension.

Interpreting our results on neural data (Figure 4) in light of the dynamical model results in Figure 3 leads to hypotheses about the underlying system dynamics. For example, in the engaged condition, neurons in the frontal cortex and midbrain display reduced dimensionality at intermediate scales. This suggests reduced complexity of the underlying latent dynamics (number of states or latent space dimensionality in HMM and SLDS models) in task-driven conditions, aligning with intuitive notions of focus on a prescribed sensorimotor behavior.
Discussion
Many fields employ analogs of the global PR as a measure of effective intrinsic dimension. In physics, this ratio was first introduced in atomic spectroscopy and then used as a measure of localization in condensed matter. In quantum information, a similar quantity is called “purity” and measures the degree of mixedness of states. In economics, the Herfindahl-Hirschman Index measures market concentration of an industrial sector. In sociology, the related Simpson Index quantifies diversity, while in politics, it is a measure of the effective number of parties. In machine learning, the same quantity serves as a metric of expressivity for learning kernels, and in neuroscience, it measures the global dimension of neural activity.

Thus, the PR is used to quantify dimension in a remarkably wide variety of disciplines. However, complex systems behave differently at different scales, and thus their dimension is not necessarily characterized by a single number. Our D_PR(r) measures a “running dimension,” capturing the effective number of latent degrees of freedom required to summarize observables at different scales. Importantly, it also approaches the well-known correlation dimension at the smallest scales. Moreover, the sdPR dimension can be computed as a simple, exact functional of a system’s second-order statistics and can be derived analytically in many cases (e.g., Dahmen et al. and Hu and Sompolinsky). Overall, our approach may be used to analyze multivariate data in a range of domains, and we expect that it will reveal new geometrical aspects of data across many fields.
Experimental procedures
Resource availability
Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Stefano Recanatesi (stefano.recanatesi@gmail.com).
Materials availability
This study did not generate new unique reagents or materials.
Data and code availability
Original code has been deposited at https://github.com/ubcbraincircuits/accelnet-jupyternotebooks-dimensionality and is publicly available as of the date of publication. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request. This paper analyzes existing, publicly available electrophysiology data deposited in Neurodata Without Borders (NWB) format at https://figshare.com/articles/steinmetz/9598406.
References

Jun, J.J., Steinmetz, N.A., Siegle, J.H., Denman, D.J., Bauza, M., Barbarits, B., Lee, A.K., Anastassiou, C.A., Andrei, A., Aydın, Ç., et al. (2017). Fully integrated silicon probes for high-density recording of neural activity. Nature 551, 232-236.

Okun, M., Steinmetz, N.A., Cossell, L., Iacaruso, M.F., Ko, H., Barthó, P., Moore, T., Hofer, S.B., Mrsic-Flogel, T.D., Carandini, M., and Harris, K.D. (2015). Diverse coupling of neurons to populations in sensory cortex. Nature 521, 511-515.