
Hierarchical timescales in the neocortex: Mathematical mechanism and biological insights.

Songting Li, Xiao-Jing Wang

Abstract

A cardinal feature of the neocortex is the progressive increase of the spatial receptive fields along the cortical hierarchy. Recently, theoretical and experimental findings have shown that the temporal response windows also gradually enlarge, so that early sensory neural circuits operate on short timescales whereas higher-association areas are capable of integrating information over a long period of time. While an increased receptive field is accounted for by spatial summation of inputs from neurons in an upstream area, the emergence of timescale hierarchy cannot be readily explained, especially given the dense interareal cortical connectivity known in the modern connectome. To uncover the required neurobiological properties, we carried out a rigorous analysis of an anatomically based large-scale cortex model of macaque monkeys. Using a perturbation method, we show that the segregation of disparate timescales is defined in terms of the localization of eigenvectors of the connectivity matrix, which depends on three circuit properties: 1) a macroscopic gradient of synaptic excitation, 2) distinct electrophysiological properties between excitatory and inhibitory neuronal populations, and 3) a detailed balance between long-range excitatory inputs and local inhibitory inputs for each area-to-area pathway. Our work thus provides a quantitative understanding of the mechanism underlying the emergence of timescale hierarchy in large-scale primate cortical networks.
Copyright © 2022 the Author(s). Published by PNAS.

Keywords:  detailed excitation–inhibition balance of long-range cortical connections; eigenvector localization; interareal heterogeneity; large-scale cortical network; timescale hierarchy

Year:  2022        PMID: 35110401      PMCID: PMC8832993          DOI: 10.1073/pnas.2110274119

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


The brain is delicately structured to integrate and process both spatial and temporal information received from the external world. For spatial information processing, neurons along cortical visual pathways possess increasingly large spatial receptive fields. The underlying mechanism is well understood: neurons in higher-level visual areas receive input from many neurons with smaller receptive fields in lower-level visual areas, thereby aggregating information across space (1). More recently, a computational model (2) revealed that the timescale over which neural integration occurs also gradually increases from area to area along the cortical hierarchy. The model was based on the anatomically measured directed and weighted interareal connectivity of the macaque cortex (3) and incorporated heterogeneity of synaptic excitation calibrated by the spine count per pyramidal neuron (4). In the model, decay times increase progressively along the cortical hierarchy as signals propagate through the network, and the temporal hierarchy can change dynamically in response to different types of sensory inputs (e.g., somatosensory versus visual input yields different hierarchies of timescales) (2). By manipulating the parameters of the model, simulations further demonstrated that anatomical properties both within and between regions can affect the hierarchy of timescales in neuronal population activity (2). A hierarchy of temporal receptive windows is functionally desirable: circuit dynamics can operate on short timescales in early sensory areas to encode and process rapidly changing external stimuli, whereas parietal and frontal areas can accumulate information over a relatively long period of time during decision-making and other cognitive processes (5, 6).
Despite the accumulating evidence for a timescale hierarchy across cortical areas in mice (7, 8), monkeys (9–15), and humans (16–23), its underlying mechanism remains unclear. In particular, since interareal connections are dense, with roughly 65% of all possible connections present in the macaque cortex (3) and an even higher connection density in the mouse cortex (24), what circuit properties are required to ensure that dynamical modes with disparate time constants are spatially localized? How do intraareal anatomical properties determine the intrinsic timescale of each area, and how do these intrinsic timescales remain segregated rather than mixed in the presence of dense interareal connections? In this work, we addressed these questions through a mathematical analysis of the model (2). Using a perturbation method, we identified the key required conditions, in particular a detailed excitation–inhibition balance for long-distance interareal connections that is experimentally testable.

The Multiareal Model and Hierarchical Timescales Phenomenon

We first review the mathematical form of the multiareal model of the macaque cortex and the hierarchical timescales phenomenon captured by this model (2). The macaque cortical network model contains a subnet of 29 areas widely distributed from sensory to association areas in the macaque cortex, and each area includes both excitatory and inhibitory neuronal populations. The neuronal population dynamics in the ith area are described as

    τ_E dv_E^i/dt = −v_E^i + β_E [I_E^i]_+,
    τ_I dv_I^i/dt = −v_I^i + β_I [I_I^i]_+,

where v_E^i and v_I^i are the firing rates of the excitatory and inhibitory populations in the ith area, respectively; τ_E and τ_I are their time constants, respectively; and β_E and β_I are the slopes of the frequency–current (f-I) curve for the excitatory and inhibitory populations, respectively. The f-I curve takes the form of a rectified linear function with [I]_+ = max(I, 0). In addition, I_ext,E^i and I_ext,I^i are the external currents, and I_E^i and I_I^i are the synaptic currents, which follow

    I_E^i = (1 + η h_i)(w_EE v_E^i + μ_EE Σ_j FLN_ij v_E^j) − w_EI v_I^i + I_ext,E^i,
    I_I^i = (1 + η h_i)(w_IE v_E^i + μ_IE Σ_j FLN_ij v_E^j) − w_II v_I^i + I_ext,I^i,

where w_pq is the local coupling strength from the q population to the p population within each area (p, q ∈ {E, I}). FLN_ij is the fraction of labeled neurons (FLN) from area j to area i, reflecting the strength of long-range input (3), and μ_EE and μ_IE are scaling parameters that control the strengths of long-range input to the excitatory and inhibitory populations, respectively. Both local and long-range excitatory inputs to an area are scaled by its position in the hierarchy, quantified by h_i (a value normalized between 0 and 1), based on the observation that the hierarchical position of an area highly correlates with the number of spines on pyramidal neurons in that area (2, 4). A constant η maps the hierarchy h_i into excitatory connection strengths. Note that both local and long-range projections are scaled by hierarchy, rather than just local projections, following the observation that the proportion of local to long-range connections is approximately conserved across areas (25). The values of all the model parameters are specified in SI Appendix. By simulating the model, it has been observed in ref. 2 that the decay time of the neuronal response in each area increases progressively along the visual cortical hierarchy when a pulse input is given to area V1, as shown here in Fig. 1A. Early visual areas show fast and transient responses, while prefrontal areas show slower responses and longer integration times, with traces lasting for several seconds after the stimulation. In addition, white-noise input to V1 is also integrated with a hierarchy of timescales, as revealed by computing the autocorrelation of neuronal activity in each area (2). As shown in Fig. 1B, the activity of early sensory areas shows rapid decay of the autocorrelation with time lag, while that of association areas shows slow decay. In Fig. 1C, by fitting single or double exponentials to the decay of the autocorrelation curves (2), the dominant timescale of each area is seen to increase approximately along the hierarchy, and thus a hierarchy of widely disparate timescales emerges from this model. It is worth noting, however, that the timescale does not change monotonically with the anatomically defined hierarchy (x axis); the precise pattern is sculpted by the measured interareal wiring properties.
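As a concrete illustration, the rate equations reviewed above can be integrated numerically. The sketch below is an assumption-laden toy version: the parameter values are illustrative numbers in the range used by related macaque multiareal rate models (not taken from this paper's SI), and a hypothetical 3-area feedforward chain stands in for the measured 29-area FLN matrix, so this reproduces the qualitative behavior only.

```python
import numpy as np

# Toy sketch of the two-population rate model reviewed above.
# All parameter values and the 3-area chain are illustrative assumptions,
# not the published 29-area macaque model.
n = 3
tau_E, tau_I = 0.02, 0.01            # membrane time constants (s)
beta_E, beta_I = 0.066, 0.351        # f-I slopes (Hz/pA)
wEE, wEI, wIE, wII = 24.3, 19.7, 12.2, 12.5   # local couplings (pA/Hz)
muEE, muIE = 33.7, 25.3              # long-range coupling scales (pA/Hz)
eta = 0.68                           # maps hierarchy h into excitation
h = np.linspace(0.0, 1.0, n)         # normalized hierarchical positions
FLN = np.diag(np.full(n - 1, 0.1), -1)   # hypothetical feedforward chain

def simulate(I_ext_E, T=2.0, dt=1e-4):
    """Euler-integrate the rate equations; I_ext_E maps time -> input vector."""
    steps = int(T / dt)
    vE, vI = np.zeros(n), np.zeros(n)
    trace = np.empty((steps, n))
    scale = 1.0 + eta * h
    for s in range(steps):
        I_E = scale * (wEE * vE + muEE * FLN @ vE) - wEI * vI + I_ext_E(s * dt)
        I_I = scale * (wIE * vE + muIE * FLN @ vE) - wII * vI
        vE = vE + dt / tau_E * (-vE + beta_E * np.maximum(I_E, 0.0))
        vI = vI + dt / tau_I * (-vI + beta_I * np.maximum(I_I, 0.0))
        trace[s] = vE
    return trace

def pulse(t):
    """Brief current pulse into the lowest ("V1"-like) area, then free decay."""
    return np.array([20.0, 0.0, 0.0]) if t < 0.1 else np.zeros(n)

trace = simulate(pulse)
```

Plotting `trace` column by column shows the qualitative signature described in the text: the low-hierarchy area responds fast and transiently, while the high-hierarchy area responds weakly but decays slowly.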
Fig. 1.

The hierarchical timescales phenomenon simulated in the macaque multiareal model. (A) A pulse of input to area V1 is propagated along the hierarchy, displaying increasing decay times as it proceeds. (B) Autocorrelation of area activity in response to white-noise input to V1. (C) The dominant time constants in all areas, extracted by fitting single or double exponentials to the autocorrelation curves (2). In A–C, areas are arranged and colored by position in the anatomical hierarchy.

Note that, although the multiareal model is nonlinear owing to the rectified linear f-I curve, the stimuli in our simulations drive all neuronal population activities above the firing threshold, with positive input currents to all areas. Therefore, the stimuli essentially drive the network dynamics into the linear regime. Before performing the mathematical analysis to understand the mechanism underlying the emergence of hierarchical timescales in the simulations, to simplify the notation we rewrite the network dynamics in the linear regime in the form

    dv/dt = W v + I_ext,

where v = (v_E^1, …, v_E^n, v_I^1, …, v_I^n)^T with n = 29, and the connectivity matrix has the block form

    W = [ D_EE + F_EE   D_EI ]
        [ D_IE + F_IE   D_II ],

with D_EE, D_EI, D_IE, D_II being four diagonal matrices whose ith diagonal elements are

    (D_EE)_ii = [β_E (1 + η h_i) w_EE − 1]/τ_E,   (D_EI)_ii = −β_E w_EI/τ_E,
    (D_IE)_ii = β_I (1 + η h_i) w_IE/τ_I,         (D_II)_ii = −(β_I w_II + 1)/τ_I,

respectively, and matrices F_EE and F_IE being two nondiagonal matrices whose ith-row–jth-column elements are

    (F_EE)_ij = β_E (1 + η h_i) μ_EE FLN_ij/τ_E,   (F_IE)_ij = β_I (1 + η h_i) μ_IE FLN_ij/τ_I,

respectively. Note that the matrices D_EE, D_EI, D_IE, and D_II reflect local intraareal interactions, while the matrices F_EE and F_IE reflect long-range interareal interactions. In addition, the elements in D_EE, D_IE, F_EE, and F_IE depend on the area hierarchy h_i, while the elements in D_EI and D_II are constant. Finally, the external input vector is I_ext = (β_E I_ext,E^1/τ_E, …, β_E I_ext,E^n/τ_E, β_I I_ext,I^1/τ_I, …, β_I I_ext,I^n/τ_I)^T. Denoting the eigenvalues and eigenvectors of the connectivity matrix W as λ_k and ξ_k (k = 1, 2, …, 2n), respectively, i.e., W ξ_k = λ_k ξ_k, the analytical solution can be obtained as

    v_i(t) = Σ_k ξ_k^i [ a_k e^{λ_k t} + ∫_0^t e^{λ_k (t − s)} b_k(s) ds ],

where v_i and ξ_k^i are the ith elements of v and ξ_k, respectively, and a_k and b_k are the coefficients of the initial condition and the external input, respectively, represented in the coordinate system of the eigenvectors {ξ_k}.
Note that, from this solution, each area integrates input current with the same set of time constants, determined by the real parts of the eigenvalues, i.e., τ_k = −1/Re(λ_k). Therefore, the characteristic timescale of each area across the network is expected to be similar in the general case. To obtain distinct timescales in each area requires 1) the localization of the eigenvectors ξ_k, i.e., most of the elements in each ξ_k are close to zero, and 2) the near orthogonality of all pairs of eigenvectors, i.e., the nonzero elements nearly do not overlap for different k. By computing the eigenvalues and eigenvectors of the matrix W, as shown in Fig. 2A, the timescale pool derived from the eigenvalues can be classified into two groups: one group shows a quite fast timescale of about 2 ms, and the other includes relatively slow timescales ranging from tens to hundreds of milliseconds. In addition, we are particularly interested in the excitatory population because the majority of neurons in the cortex are excitatory. We observe that the magnitude of the eigenvectors corresponding to the fast timescales is nearly zero for the excitatory population in each area, while the eigenvectors corresponding to the slow timescales are weakly localized and weakly orthogonal, i.e., each eigenvector has a few nonzero elements that almost do not overlap with the nonzero elements of the other eigenvectors. According to the solution above, this pattern of eigenvectors gives rise to the disparate timescales of the excitatory neuronal population in each cortical area. We next perform a mathematical analysis to investigate the sufficient conditions for 1) the vanishing magnitude of the excitatory component of the fast-eigenmode eigenvectors and 2) the weak localization and orthogonality of the excitatory component of the slow-eigenmode eigenvectors in this network system.
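The block form of W and the extraction of the timescale pool from its eigenvalues, τ_k = −1/Re(λ_k), can be sketched directly. The snippet below uses the same illustrative 3-area chain and assumed parameter values as before (not the measured 29-area network), and shows the two-group structure of the timescale pool: a tight fast group near 2 ms and a spread-out slow group.

```python
import numpy as np

# Build the linearized connectivity matrix W in the block form above for a
# hypothetical 3-area chain (illustrative parameters, not the real network).
n = 3
tau_E, tau_I, beta_E, beta_I = 0.02, 0.01, 0.066, 0.351
wEE, wEI, wIE, wII = 24.3, 19.7, 12.2, 12.5
muEE, muIE, eta = 33.7, 25.3, 0.68
h = np.linspace(0.0, 1.0, n)
scale = 1.0 + eta * h
FLN = np.diag(np.full(n - 1, 0.1), -1)

D_EE = np.diag((beta_E * scale * wEE - 1.0) / tau_E)
D_EI = np.diag(np.full(n, -beta_E * wEI / tau_E))
D_IE = np.diag(beta_I * scale * wIE / tau_I)
D_II = np.diag(np.full(n, -(beta_I * wII + 1.0) / tau_I))
F_EE = (beta_E * muEE / tau_E) * scale[:, None] * FLN
F_IE = (beta_I * muIE / tau_I) * scale[:, None] * FLN

W = np.block([[D_EE + F_EE, D_EI], [D_IE + F_IE, D_II]])
lam, xi = np.linalg.eig(W)
timescales = np.sort(-1.0 / lam.real)    # seconds, ascending: fast group first
```

With these assumed parameters the first n timescales cluster near 2 ms while the remaining n spread from tens to hundreds of milliseconds, mirroring the two groups described in the text.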
Fig. 2.

Eigenvectors of the network connectivity matrix and their approximations from the perturbation analysis. (A) Eigenvectors of the network connectivity matrix W. Each column shows the amplitude of an eigenvector at the 29 areas, with corresponding timescale labeled below. (B) Eigenvectors of W calculated from the first-order perturbation analysis. (C) Similarity measure defined as the inner product of the corresponding eigenvectors in A and B.


Perturbation Analysis of the Model

We note that the parameters of the model give rise to two small quantities, ϵ and δ, which serve as small parameters for the perturbation analysis below; ϵ measures the similarity of the intrinsic dynamics of the excitatory and inhibitory populations, and δ measures the residual imbalance between long-range excitation and local inhibition. We first study the network in the absence of long-range interactions among areas. In this scenario, we study the 2 × 2 block matrix

    D = [ D_EE   D_EI ]
        [ D_IE   D_II ],

in which each block is a diagonal matrix defined above. Viewing ϵ as a small parameter, we can prove the following proposition: for sufficiently small ϵ, D has n slow eigenvalues, collected in a diagonal matrix Λ_u, and n fast eigenvalues, collected in a diagonal matrix Λ_v. It is straightforward to prove that the matrix D can be diagonalized by a block matrix P, i.e., P^{-1} D P = Λ = diag(Λ_u, Λ_v), where P is built from the identity matrix I and two diagonal matrices A and B that each satisfy a quadratic matrix equation; we choose the solution of A expressed via the square root of a diagonal matrix, defined as taking the square root of its elements. Because ϵ is small, the elements of A and B are small as well. From this diagonalization, the eigenvalues of D fall into two well-separated scales, belonging to Λ_u and Λ_v, respectively. As the timescales of the network system are given by τ_i = −1/Re(λ_i) (λ_i being the ith diagonal element of Λ), the separation of scales between Λ_u and Λ_v explains why the intrinsic timescale pool can be classified into two groups, which is mainly determined by the distinct electrophysiological properties of the excitatory and inhibitory neuronal populations within each area, as described by ϵ. In addition, the analysis shows that the eigenvalues in Λ_v with large magnitude (fast timescales) are insensitive to the hierarchy level because, to leading order, they are determined by D_II, whose elements do not depend on h. Therefore, the gradient of h across areas barely affects the fast-timescale pool. In contrast, the eigenvalues in Λ_u with small magnitude (slow timescales) are sensitive to the hierarchy level because, to leading order, they involve the elements of D_EE and D_IE, which depend on h. Therefore, the gradient of h across areas increases the range of the slow-timescale pool.
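The two claims above, that the fast eigenvalue group barely depends on h while the slow group varies strongly with it, can be checked on a single disconnected area. The snippet below computes the eigenvalues of the local 2 × 2 linearized matrix at the bottom and top of the hierarchy, using the same illustrative parameter values as the earlier sketches (assumed, not the paper's).

```python
import numpy as np

# Numerical check, for a single disconnected area, that the fast eigenvalue
# is nearly h-independent while the slow eigenvalue varies strongly with h.
# Parameter values are illustrative assumptions.
tau_E, tau_I, beta_E, beta_I = 0.02, 0.01, 0.066, 0.351
wEE, wEI, wIE, wII = 24.3, 19.7, 12.2, 12.5
eta = 0.68

def local_eigs(h):
    """Eigenvalues of the 2x2 local linearized matrix, sorted ascending
    (most negative = fast mode first, slow mode second)."""
    scale = 1.0 + eta * h
    D = np.array([
        [(beta_E * scale * wEE - 1.0) / tau_E, -beta_E * wEI / tau_E],
        [beta_I * scale * wIE / tau_I, -(beta_I * wII + 1.0) / tau_I],
    ])
    return np.sort(np.linalg.eigvals(D).real)

fast0, slow0 = local_eigs(0.0)   # bottom of the hierarchy
fast1, slow1 = local_eigs(1.0)   # top of the hierarchy
```

With these values the fast eigenvalue shifts by only a few percent between h = 0 and h = 1, while the slow eigenvalue (and hence the slow intrinsic timescale) changes by roughly an order of magnitude, consistent with the argument above.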
Further, the slow timescales of the areas in this disconnected network are segregated and follow the hierarchical order of h, as the corresponding eigenvectors are perfectly localized and orthogonal to each other. Now we consider the multiareal network in the presence of long-range interactions. Adding long-range connectivity to the local connectivity matrix D changes the eigenvalues and eigenvectors of D, which can be analyzed as follows. By multiplying W by P^{-1} and P on its two sides, we obtain the transformed matrix Γ = P^{-1} W P, which consists of the diagonal matrix Λ plus perturbation blocks Σ_uu, Σ_uv, and Σ_vu built from the long-range matrices F_EE and F_IE together with Λ, A, and B defined above. Denoting one of the eigenvectors of the matrix Γ as ζ = (u, v)^T and the corresponding eigenvalue as λ, we have Γζ = λζ. The remaining block of Γ is of higher order compared with Λ_u, Λ_v, Σ_uu, Σ_uv, and Σ_vu, so it can be dropped, with an error in the eigenvalues and eigenvectors that is at most of higher order. Consequently, we obtain two coupled equations, in vector form, for the components u and v. To describe the eigenvector properties of these equations, we first introduce the definitions of weak localization and weak orthogonality as follows: A vector x is weakly localized if it can be represented as x = c e_k + δ r for some k, where c is a constant number, r is a constant vector, δ is a small parameter, and e_k represents the natural basis vector with only the kth element being 1 and the others zero. Two vectors x and y are weakly orthogonal to each other if their inner product satisfies ⟨x, y⟩ = O(δ), where δ is a small parameter. With the concepts of weak localization and weak orthogonality defined above, we introduce the following proposition describing the properties of ζ: if all matrices are analytic with respect to ϵ and δ, and if the diagonal elements of Λ_u are simple (pairwise distinct), then there exist n eigenvectors whose u component vanishes to leading order, with λ correspondingly close to the fast eigenvalues in Λ_v, and there exist n eigenvectors whose u components are weakly localized and weakly orthogonal to each other, with λ correspondingly close to the slow eigenvalues in Λ_u. It is noted that u = 0 is a trivial solution of the first equation.
For the slow modes, after a suitable rescaling the equation becomes an eigenvalue problem for u alone, in which u is an eigenvector of Λ_u plus a perturbation of order δ. Viewing the perturbation as a small correction to Λ_u, the leading order of λ is the same as that of the n diagonal elements of Λ_u. Since Λ_u has n simple eigenvalues, so does the perturbed matrix, and its eigenvalues and eigenvectors are therefore analytic with respect to the perturbation parameter δ (26); i.e., they can be expanded in power series of δ near zero. Therefore, to the leading order, λ equals an eigenvalue of the diagonal matrix Λ_u, and u equals the corresponding eigenvector; accordingly, u = e_k + O(δ), where e_k represents the kth natural basis vector. It is straightforward to verify that such u are weakly localized and weakly orthogonal to each other. If we denote the unit-length eigenvectors of the connectivity matrix W as ξ_k, and denote the corresponding eigenvalues as λ_k (the same as those of the matrix Γ after the similarity transform), then we have the following: under the same conditions as above, for eigenvalues close to the slow group Λ_u, the excitatory components of the corresponding ξ_k are weakly localized and weakly orthogonal to each other, and for eigenvalues close to the fast group Λ_v, the excitatory components of the corresponding ξ_k vanish to leading order. This follows from the similarity transform: ξ = P ζ relates the eigenvectors of W linearly to those of Γ, and because the elements of A and B in P are small, normalizing ξ to unit length preserves, to leading order, the localization and orthogonality of the u component (for slow modes) and the vanishing of the excitatory component (for fast modes). Note that these conclusions hold for sufficiently small ϵ and δ near zero.
However, the convergence radius of the power series in δ has not been specified. Although it is difficult to calculate the convergence radius, we can compute the first-order term of the power series in δ to obtain the first-order perturbation solution of u, and thereby of ξ, which helps us gain insight into when the weak localization and orthogonality of the eigenvectors approximately break down. Collecting the terms of order δ, without loss of generality we assume that the zeroth-order eigenvalue is the kth element on the diagonal of Λ_u, and we normalize u so that its kth element equals unity. Left-multiplying the first-order equation by e_j^T (j ≠ k), we obtain the jth element of the first-order correction,

    u_j^(1) = (Σ_uu)_jk / (λ_k^(0) − λ_j^(0)),   j ≠ k,

where (Σ_uu)_jk is the element in the jth row and kth column of the perturbation matrix. For the first-order perturbation solution to be valid, we expect δ u_j^(1) to be small compared with unity; otherwise the separation of orders no longer holds in the power series (i.e., the first-order term becomes comparable to the zeroth-order term). In such a case, the spectral gap λ_k^(0) − λ_j^(0) must be large enough compared with the elements of Σ_uu. The spectral gap of Λ_u is attributable to the gradient of excitation across areas, or simply to h. In Fig. 2B, we show that the eigenvectors solved using the perturbation theory to first-order accuracy agree well with the original eigenvectors in most cases (Fig. 2C). However, some eigenvectors show less similarity to the original eigenvectors where the first-order perturbation theory breaks down, for the reason discussed above.
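The first-order formula above is the standard eigenvector perturbation result for a diagonal matrix with simple eigenvalues, and it can be verified numerically on a small synthetic example (random Σ and a well-separated spectrum, both chosen here for illustration, unrelated to the cortical model's actual matrices).

```python
import numpy as np

# Verify the first-order eigenvector perturbation formula for
# A = Lambda + delta * Sigma:  xi_k ≈ e_k + delta * sum_{j!=k}
# Sigma[j,k] / (lam_k - lam_j) e_j.  Accuracy requires delta*|Sigma[j,k]|
# to be small relative to the spectral gaps, as discussed in the text.
rng = np.random.default_rng(0)
n, delta = 5, 1e-3
lam0 = np.array([-2.0, -5.0, -9.0, -14.0, -20.0])   # simple, well separated
Sigma = rng.standard_normal((n, n))
A = np.diag(lam0) + delta * Sigma

# first-order approximation of the eigenvectors, column k per mode
V1 = np.eye(n)
for k in range(n):
    for j in range(n):
        if j != k:
            V1[j, k] = delta * Sigma[j, k] / (lam0[k] - lam0[j])
V1 = V1 / np.linalg.norm(V1, axis=0)

# exact eigenvectors, reordered to match the descending order of lam0
lam, V = np.linalg.eig(A)
V = V[:, np.argsort(-lam.real)]
similarity = np.abs(np.sum(np.conj(V) * V1, axis=0))   # |<exact, approx>|
```

Shrinking the gaps in `lam0` toward `delta * Sigma` makes `similarity` degrade, which is exactly the breakdown regime identified in the text: a small spectral gap (a shallow gradient of h) defeats the first-order approximation.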

Biological Interpretations of the Three Requirements

From the above analysis, three conditions are required to obtain weakly localized and orthogonal eigenvectors and thereby maintain the hierarchy of timescales: 1) small ϵ, 2) small δ, and 3) a gradient of h across areas. We briefly summarize the roles of the three conditions in proving eigenvector localization and orthogonality in the perturbation analysis, as illustrated in Fig. 3. As shown in Fig. 3B, to remove the intraareal interactions between the excitatory and inhibitory populations within each area, we first change the coordinate system from (v_E, v_I) to (u, v) with the transform matrix P. In the new coordinate system, there is no local interaction between the dynamical variables u_i and v_i. Furthermore, considering a directed long-range projection from area j to area i (Fig. 3C), small δ leads to weak interaction from u_j to u_i, and small ϵ additionally leads to an even weaker interaction from v_j to u_i that can be removed with a negligible error. A gradient of h leads to a nonzero spectral gap between area i and area j. Together, the three conditions result in the weak localization and orthogonality of the u component in the (u, v) coordinate system, as proved above. Finally, as shown in Fig. 3D, small ϵ ensures that the leading orders of v_E and u are the same, and thus v_E is also weakly localized and orthogonal, similar to u.
Fig. 3.

Schematic illustration of the steps to prove weakly localized and orthogonal eigenvectors of the connectivity matrix W. (A) Directed interaction from area j to area i in the original model. (B) One-way interaction from area j to area i after changing the coordinate system from (v_E, v_I) to (u, v). (C) Small δ leads to weak interaction from u_j to u_i, small ϵ additionally leads to an even weaker, negligible interaction from v_j to u_i, and a gradient of h leads to a nonzero spectral gap between area i and area j. Accordingly, together they lead to the weak localization and orthogonality of the u component in the (u, v) coordinate system. (D) One-way interaction from area j to area i after changing the coordinate system from (u, v) back to (v_E, v_I). To the leading order, one has v_E ≈ u. In this step, small ϵ ensures that the leading orders of v_E and u are identical, and so are their localization and orthogonality properties. In A–D, the width of the lines codes the interaction strength, and light-colored lines and nodes are not important in the proofs.

We next discuss the biological interpretations of the three conditions. First, small ϵ indicates that the electrophysiological properties of excitatory and inhibitory neurons are different, in particular their membrane time constants and the slopes of their gain functions. The substantial difference in electrophysiological properties between excitatory and inhibitory neurons is supported by experimental evidence; i.e., inhibitory neurons have a larger gain-function slope and a smaller membrane time constant (27–30). Second, small δ indicates a balanced condition between the interareal excitatory and intraareal inhibitory inputs. When the presynaptic excitatory input from the jth area is increased, its influence on the excitatory population activity in the ith area in the steady state can be calculated in a straightforward way and is proportional to δ.
This indicates that the signal from the jth area has only a small influence on the activity of the excitatory population in the ith area, because the long-range excitatory input is balanced with, and canceled by, the local inhibitory synaptic input, leading to small net inputs in each signal pathway, as shown in Fig. 4. This condition corresponds to a detailed balance of excitation and inhibition that may benefit signal control and gating, as proposed in previous studies (31). The importance of excitation–inhibition balance for the timescale hierarchy is supported by a recent study showing that an imbalance of excitation and inhibition can substantially change intrinsic timescales across brain areas, an effect associated with manifestations of psychosis such as hallucinations and delusions (32).
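The cancellation can be illustrated with a back-of-the-envelope steady-state calculation: the direct long-range excitatory current onto an area's excitatory population is compared with the inhibitory current recruited (via the local I population) by the same long-range input. The parameter values are the same illustrative assumptions used in the earlier sketches; the common hierarchy/FLN prefactor multiplies both pathways and is omitted.

```python
# Sketch of the detailed-balance cancellation: long-range excitation onto E
# is largely canceled by the local inhibition that the same input recruits.
# Parameter values are illustrative assumptions, as in the earlier sketches.
beta_I = 0.351
wEI, wII = 19.7, 12.5          # local I->E and I->I couplings
muEE, muIE = 33.7, 25.3        # long-range scales onto E and I

drive = 1.0                    # presynaptic rate increase (arbitrary units)
direct_exc = muEE * drive      # long-range excitatory current onto E

# steady-state inhibitory rate recruited by the long-range input onto I
# (holding the local E rate fixed, i.e., to first order)
dvI = beta_I * muIE * drive / (1.0 + beta_I * wII)
recruited_inh = wEI * dvI      # resulting inhibitory current onto E

net = direct_exc - recruited_inh     # residual, playing the role of delta
imbalance = abs(net) / direct_exc
```

With these assumed values the recruited inhibition cancels more than 95% of the direct long-range excitation, leaving a small net input per pathway, which is the quantitative content of the "small δ" condition.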
Fig. 4.

The illustration of detailed balance between interareal excitation and intraareal inhibition. The projection from V1 to V4 is shown as an example. (A) One-way interaction from V1 to V4. V1 receives external Gaussian input. The excitatory population in V4 receives balanced excitatory interareal inputs from V1 (dark red) and intraareal inhibitory inputs from the inhibitory population in V4 (dark blue). Other excitatory and inhibitory interactions in this circuit are colored by light red and blue, respectively. (B) Simulation of the synaptic currents received by the V4 excitatory population induced by V1 activity. The interareal excitatory inputs (red) are balanced with the intraareal inhibitory inputs (blue), leading to small net inputs (black).

Third, the gradient of h parameterizes the gradient of synaptic excitation across areas in the model, supported by the fact that h is proportional to the spine count per pyramidal neuron across areas (2, 4) in the form of a macroscopic gradient (33). The gradient of synaptic excitation has two consequences: 1) it gives rise to the hierarchy of intrinsic timescales across areas when each area is disconnected from the rest of the cortex, and 2) it stabilizes the localization of the intrinsic timescale of each area in the presence of long-range connections. From the perturbation analysis, the degree of eigenvector localization is determined by the competition between the strength of the long-range connections, encoded in the matrix Σ_uu, and the spectral gap of the matrix Λ_u. Therefore, the long-range connections tend to delocalize the eigenvectors and thus break the timescale hierarchy, but the heterogeneity of the local recurrent excitation level weakens this delocalizing effect in a divisive fashion.
In fact, heterogeneity or randomness in local node properties has been shown to produce localized eigenvectors in models of physical systems; a well-known instance is Anderson localization (34), which describes the transition from a conducting medium (corresponding to delocalized eigenvectors) to an insulating medium (corresponding to localized eigenvectors). A similar mechanism has been identified in a study of eigenvector localization in an idealized neural network with a simple node for each cortical area (35).
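The competition between local heterogeneity and long-range coupling can be demonstrated on a generic toy matrix (unrelated to the cortical model's parameters), measuring localization with the inverse participation ratio, IPR = Σ_i |ξ_i|^4 of a unit-norm eigenvector: IPR ≈ 1 for a perfectly localized vector and ≈ 1/n for a fully delocalized one.

```python
import numpy as np

# Toy demonstration: with weak random coupling, a gradient in the local
# (diagonal) decay rates keeps eigenvectors localized, whereas identical
# local rates let the coupling mix them. Matrix sizes and scales are
# arbitrary illustration choices.
rng = np.random.default_rng(1)
n, coupling = 20, 0.05
C = coupling * rng.random((n, n))      # weak random "long-range" coupling
np.fill_diagonal(C, 0.0)

def mean_ipr(diagonal):
    """Mean inverse participation ratio of the eigenvectors of diag + C."""
    _, V = np.linalg.eig(np.diag(diagonal) + C)
    V = V / np.linalg.norm(V, axis=0)
    return float(np.mean(np.sum(np.abs(V) ** 4, axis=0)))

ipr_flat = mean_ipr(-np.ones(n))                  # degenerate local rates
ipr_graded = mean_ipr(-np.linspace(1.0, 7.0, n))  # graded local rates
```

The graded case yields IPR near 1 (localized modes, each carrying its own timescale), while the degenerate case yields a much smaller IPR (modes mixed by the coupling), mirroring the divisive role of the spectral gap discussed above.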

Discussion

In this work, we investigated the requirements for the emergence of a hierarchy of temporal response windows in a multiareal model of the macaque cortex (2). The original model is a nonlinear dynamical system owing to its rectified linear f-I curve, but it becomes essentially linear when the neural population activities are all above the firing threshold, as was the case in our simulations of the hierarchical timescales phenomenon. This fact enabled us to define the time constants precisely from the eigenmodes of the connectivity matrix and to carry out a detailed mathematical analysis identifying biologically interpretable conditions. (Rectified) linear models have been broadly used in theoretical and experimental neuroscience (36–38). Although microscopic neural activity is nonlinear in general, a recent study (39) showed that linear models can capture macroscopic cortical dynamics in the resting state more accurately than nonlinear model families, including neural field models describing the spatiotemporal average of individual neuronal activities. Nonlinear models are more general for capturing neural circuits; however, for a nonlinear model the time constants of the system are not uniquely defined. A linear model can be understood as a linearization of a nonlinear dynamical system around an internal state, such as the resting state of the brain. In contrast to previous computational models studying the emergence of timescales (35), the model we studied is anatomically more realistic, as it incorporates 1) experimental measurements of directed and weighted anatomical connectivity, 2) a gradient of synaptic excitation reflected by spine counts of pyramidal neurons across areas, and 3) both excitatory and inhibitory neuronal populations.
By performing a rigorous perturbation analysis, we showed that the segregation of timescales is attributable to the localization of the eigenvectors of the connectivity matrix, and that the parameter regime in which this occurs has three crucial properties: 1) a macroscopic gradient of synaptic excitation, 2) distinct electrophysiological properties of excitatory versus inhibitory neuronal populations, and 3) a detailed balance between long-range excitatory inputs and local inhibitory inputs for each area-to-area pathway. The theoretically identified biological conditions for the segregation of timescales enable us to make experimentally testable predictions. First, the condition of a macroscopic gradient of synaptic excitation implies that a shallower gradient of synaptic excitation should lead to less localized eigenvectors. Consequently, the difference between the time constants of a pair of areas should be larger when their synaptic excitation or hierarchical levels are less similar, which can be directly tested in experiments. Second, the condition of distinct electrophysiological properties between excitatory and inhibitory neuronal populations implies that changes in neuronal physiology will affect the segregation of timescales. This condition can be tested experimentally by using genetic tools to knock down or knock out specific genes that change the firing properties of neurons (40). Third, the condition of a detailed balance of excitation and inhibition implies that areas with unbalanced excitation and inhibition could exhibit altered hierarchical time constants. With growing evidence for excitation/inhibition imbalance in schizophrenia (41, 42), this condition is consistent with recent experiments showing that intrinsic time constants are substantially changed in schizophrenia patients (32). This condition can be further tested in animal models, using genetic tools to disturb the excitation–inhibition balance (43).
It is worth mentioning that, although the specific pattern of interareal connectivity does not substantially affect eigenvector localization according to the perturbation analysis, it shapes the timescale hierarchy qualitatively. In particular, in the presence of long-range connections the timescale hierarchy does not exactly follow the anatomical hierarchy of areas monotonically, as shown in Fig. 1. Furthermore, within a brain region, time constants are heterogeneous across individual neurons (44, 45). To better relate the model to experimentally observed timescales in specific cortical areas, the roles of long-range connections, cell types, and other circuit properties require further elucidation. It has been noted that neuronal activity propagates along the hierarchy with significant attenuation in the model of ref. 2. The attenuation can be alleviated by tuning the model parameters to the regime of strong global balanced amplification (GBA) (46) (parameters in Materials and Methods). Balanced amplification was originally introduced for a local neural network and is associated with strong nonnormality of the system, where eigenmodes are far from orthogonal to each other (47). A quantity called κ measures the degree of nonnormality of a matrix (48) (κ = 1 for a normal matrix; the larger the κ value, the more nonnormal the system). The κ value of the original model (2) is close to 1, so that model is only slightly nonnormal; by contrast, κ is much larger for the model in the strong GBA regime. Therefore, the enhancement of signal propagation in the model correlates with the increase of the nonorthogonality of the eigenvectors, or the nonnormality of the connectivity matrix. In the strong GBA regime, the deviation from detailed balance is about 10 times larger than its original value, suggesting that the detailed balance condition is less well satisfied. In such a case, the localization of timescales may no longer exist in this linear model. 
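One standard normality measure consistent with the stated property (κ = 1 exactly for normal matrices, by Schur's inequality) is the ratio of the Frobenius norm of a matrix to the 2-norm of its eigenvalue vector. Whether this is the precise definition used in ref. 48 is not stated in this excerpt, so treat the formula below as an illustrative choice.

```python
import numpy as np

def kappa(A):
    """Nonnormality measure ||A||_F / ||eigenvalues(A)||_2.
    By Schur's inequality this ratio is >= 1, with equality iff A is normal;
    larger values indicate a more nonnormal matrix."""
    ev = np.linalg.eigvals(A)
    return np.linalg.norm(A, "fro") / np.linalg.norm(ev)

# A symmetric (hence normal) matrix gives kappa = 1 ...
S = np.array([[1.0, 2.0], [2.0, -1.0]])
print(kappa(S))  # 1.0

# ... while a strongly feedforward motif (hidden feedback of the kind
# underlying balanced amplification) is highly nonnormal:
B = np.array([[-1.0, 8.0], [0.0, -1.0]])
print(kappa(B))  # ~5.7
```

The feedforward motif `B` has both eigenvalues at -1 yet a large off-diagonal weight, the signature of transient amplification without eigenvalue instability.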
The situation is different in nonlinear models (2, 49, 50), in which inputs may be amplified by strongly recurrent circuit dynamics to enhance signal propagation, or the routing of information may be selectively gated for a subset of connection pathways in a goal-directed manner (31, 51), while the conditions for a timescale hierarchy remain satisfied. For a nonlinear system, however, eigenmodes can be defined only with respect to a particular network state. Consequently, the time constants observed in single neurons are no longer unique and may differ, for instance, between rest and a cognitive process. It remains to be seen to what extent the conditions identified here hold across the brain's various internal states, while the precise pattern of timescales can be flexibly varied to meet behavioral demands.
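The state dependence of time constants in a rectified-linear network can be sketched by linearizing around different operating points: the local Jacobian depends on which units are above threshold. The weights and inputs below are hypothetical illustrations, not the published parameters.

```python
import numpy as np

# Rectified-linear rate model tau * dr/dt = -r + relu(W r + I).
# The Jacobian, and hence the local time constants, depend on which units
# are above threshold at the chosen operating point.  All numbers are
# illustrative placeholders.
tau = 0.02
W = np.array([[0.3, 0.2], [0.2, 0.3]])

def local_timescales(r, I):
    active = (W @ r + I) > 0              # units above firing threshold
    gain = np.diag(active.astype(float))  # relu slope: 1 if active, else 0
    J = (-np.eye(2) + gain @ W) / tau     # Jacobian at this operating point
    return np.sort(-1.0 / np.linalg.eigvals(J).real)

# Same network, two operating points: both units active vs. one silenced.
print(local_timescales(np.array([1.0, 1.0]), np.array([1.0, 1.0])))
print(local_timescales(np.array([1.0, 0.0]), np.array([1.0, -2.0])))
```

The two operating points give different timescale spectra from the same connectivity, which is the sense in which a nonlinear system has no unique time constants.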

Materials and Methods

Model Parameters.

In the macaque cortical network model, we set the excitatory and inhibitory time constants τE and τI (in ms), the f-I curve slopes βE and βI (in Hz/pA), the local connection weights wEE, wIE, wEI = 19.7 pA/Hz, and wII (in pA/Hz), the long-range connection weights μEE and μIE (in pA/Hz), and the excitation-gradient scaling parameter η. We set μEE and μIE (in pA/Hz) for the strong balanced amplification regime (46) introduced in Discussion. Some of the parameters are derived from experimental measurements in primary visual cortex (37). The FLN values are obtained from experimental measurements of macaque cortical connectivity (3). The hierarchy value h of each cortical area is obtained by fitting a generalized linear model that assigns hierarchical values to areas (2), such that the differences in hierarchical values predict the fractions of supragranular layer neurons (SLN) measured in experiment (52).
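As a minimal sketch of the type of threshold-linear E-I rate model these parameters enter, a single area can be simulated as below; all numerical values are placeholders chosen for illustration, not the published parameter values.

```python
import numpy as np

# One-area threshold-linear E-I rate model of the general form used in the
# large-scale network:
#   tau_E dvE/dt = -vE + beta_E * relu(wEE*vE - wEI*vI + I_ext)
#   tau_I dvI/dt = -vI + beta_I * relu(wIE*vE - wII*vI + I_bg)
# All numerical values are placeholders, not the published parameters.
tau_E, tau_I = 0.02, 0.01                     # s
beta_E, beta_I = 0.05, 0.3                    # Hz/pA
wEE, wEI, wIE, wII = 16.0, 19.7, 12.0, 12.0   # pA/Hz

def step(vE, vI, I_ext, I_bg, dt=1e-4):
    dvE = (-vE + beta_E * max(wEE * vE - wEI * vI + I_ext, 0.0)) / tau_E
    dvI = (-vI + beta_I * max(wIE * vE - wII * vI + I_bg, 0.0)) / tau_I
    return vE + dt * dvE, vI + dt * dvI

vE = vI = 0.0
for _ in range(20000):  # 2 s of simulated time, forward Euler
    vE, vI = step(vE, vI, I_ext=10.0, I_bg=5.0)
print(vE, vI)  # steady-state E and I rates for these placeholder inputs
```

In the full network each area receives, in addition, long-range excitatory input weighted by the FLN values and scaled along the hierarchy by the gradient parameter.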

1.  Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks.

Authors:  Alex Roxin; Nicolas Brunel; David Hansel
Journal:  Phys Rev Lett       Date:  2005-06-16       Impact factor: 9.161

2.  A place for time: the spatiotemporal structure of neural dynamics during natural audition.

Authors:  Greg J Stephens; Christopher J Honey; Uri Hasson
Journal:  J Neurophysiol       Date:  2013-08-07       Impact factor: 2.714

Review 3.  Macroscopic gradients of synaptic excitation and inhibition in the neocortex.

Authors:  Xiao-Jing Wang
Journal:  Nat Rev Neurosci       Date:  2020-02-06       Impact factor: 34.870

4.  A Large-Scale Circuit Mechanism for Hierarchical Dynamical Processing in the Primate Cortex.

Authors:  Rishidev Chaudhuri; Kenneth Knoblauch; Marie-Alice Gariel; Henry Kennedy; Xiao-Jing Wang
Journal:  Neuron       Date:  2015-10-01       Impact factor: 17.173

5.  Inter-areal Balanced Amplification Enhances Signal Propagation in a Large-Scale Circuit Model of the Primate Cortex.

Authors:  Madhura R Joglekar; Jorge F Mejias; Guangyu Robert Yang; Xiao-Jing Wang
Journal:  Neuron       Date:  2018-03-22       Impact factor: 17.173

6.  Amplification of local changes along the timescale processing hierarchy.

Authors:  Yaara Yeshurun; Mai Nguyen; Uri Hasson
Journal:  Proc Natl Acad Sci U S A       Date:  2017-08-15       Impact factor: 11.205

7.  Topographic mapping of a hierarchy of temporal receptive windows using a narrated story.

Authors:  Yulia Lerner; Christopher J Honey; Lauren J Silbert; Uri Hasson
Journal:  J Neurosci       Date:  2011-02-23       Impact factor: 6.167

8.  Weight consistency specifies regularities of macaque cortical networks.

Authors:  N T Markov; P Misery; A Falchier; C Lamy; J Vezoli; R Quilodran; M A Gariel; P Giroud; M Ercsey-Ravasz; L J Pilaz; C Huissoud; P Barone; C Dehay; Z Toroczkai; D C Van Essen; H Kennedy; K Knoblauch
Journal:  Cereb Cortex       Date:  2010-11-02       Impact factor: 5.357

9.  Genetic controls balancing excitatory and inhibitory synaptogenesis in neurodevelopmental disorder models.

Authors:  Cheryl L Gatto; Kendal Broadie
Journal:  Front Synaptic Neurosci       Date:  2010-06-07

10.  A hierarchy of intrinsic timescales across primate cortex.

Authors:  John D Murray; Alberto Bernacchia; David J Freedman; Ranulfo Romo; Jonathan D Wallis; Xinying Cai; Camillo Padoa-Schioppa; Tatiana Pasternak; Hyojung Seo; Daeyeol Lee; Xiao-Jing Wang
Journal:  Nat Neurosci       Date:  2014-11-10       Impact factor: 24.884

