
Irregular dynamics in up and down cortical states.

Jorge F Mejias1, Hilbert J Kappen, Joaquin J Torres.   

Abstract

Complex coherent dynamics is present in a wide variety of neural systems. A typical example is the voltage transitions between up and down states observed in cortical areas in the brain. In this work, we study this phenomenon via a biologically motivated stochastic model of up and down transitions. The model consists of a simple bistable rate dynamics in which the synaptic current is modulated by short-term synaptic processes that introduce stochasticity and temporal correlations. A complete analysis of our model, both with mean-field approaches and numerical simulations, shows the appearance of complex transitions between high (up) and low (down) neural activity states, driven by the synaptic noise, with permanence times in the up state distributed according to a power law. We show that the experimentally observed large fluctuations in up and down permanence times can be explained as the result of sufficiently noisy dynamical synapses with sufficiently large recovery times. Static synapses cannot account for this behavior, nor can dynamical synapses in the absence of noise.


Year:  2010        PMID: 21079740      PMCID: PMC2975677          DOI: 10.1371/journal.pone.0013651

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Neural systems, even in the absence of external stimuli, can exhibit a wide variety of coherent collective behaviors, as in vivo and in vitro experiments show [1]–[3]. One of the most prominent examples is the spontaneous transition between two different voltage states, namely up and down states, observed in individual single-neuron recordings as well as in local field potential measurements. Such behavior, which is generated within the cortex, may provide a framework for neural computations [4], and could also coordinate some sleep rhythms into a coherent rhythmic sequence of recurring cortical and thalamocortical activities [3], [5], [6]. The phenomenon of up and down transitions has been measured in a number of situations, such as in the primary visual cortex of anesthetized animals [7], [8], during slow-wave sleep [1], [5], [6], in the somatosensory cortex of awake animals [9], or in slice preparations under different experimental protocols [3], [10], [11], to name a few. The origin of such structured neuronal activity is still unclear, although several studies have shown that both intrinsic cell properties [12]–[14] and the high level of recurrence present in actual neural circuits [3], [15], [16] may contribute to the generation of up and down transitions. In particular, the contribution of reverberations in recurrent neural networks to the appearance of these transitions could depend strongly on synaptic properties. It is known, for instance, that excitatory synapses with slow dynamics (such as synapses mediated by NMDA receptors) may play a relevant role in the generation of persistent activity or up cortical states [17]. On the other hand, several modeling studies indicate that activity-dependent synaptic mechanisms, such as short-term synaptic depression and facilitation, can induce voltage transitions between up and down neural states as well [16], [18]–[20].
Many crucial aspects of up and down transitions, however, remain to be understood. For instance, in vivo experiments in the cat visual cortex show that the permanence times in the depolarized (up) state present a high variability, and can range from milliseconds to seconds [7]. A similar level of irregularity has also been found recently in in vivo recordings of up-down transitions in the rat auditory cortex [21], as well as in sleep-wake transitions [15], [22], [23], where power-law distributions of the duration of wake states have been measured. Such complexity in the time series of the neuron membrane potentials remains far from being explained, and could reflect scale invariance in permanence times, which could in turn be a (preliminary) indication of criticality. In fact, many recent studies have shown criticality in different contexts in the brain [24], [25], as well as in neural network models which present self-organization and criticality properties [26]–[28], and criticality has even been reported to occur in sleep-wake transitions in in vivo conditions [22], [23]. Although the irregularity of the dynamics of up and down states is not a sufficient condition for criticality, a concrete characterization of such irregularity may be a convenient starting point for future work on this topic. To study in detail the relevant issue of irregular up and down cortical dynamics, we propose in this work a minimal model for up and down transitions in neural media. We consider a simple bistable rate model whose stable solutions represent two possible voltage states of the mean membrane potential of the network. More precisely, such states correspond, respectively, to high and low levels of activity in the network (that is, the up and down cortical states).
In addition, we consider that the synaptic connections between neurons of the network present short-term synaptic depression (STD) mechanisms, which introduce temporal correlations, as well as synaptic stochasticity, into the dynamics of the system [29]–[32]. A complete analysis of this simple mathematical model shows, both numerically and within a theoretical probabilistic approach, the appearance of power-law dependences in the distribution of permanence times in the up state. Our results show that the appearance of such scale-free distributions is due to the complex interplay between several factors, including synaptic stochasticity and the temporal correlations introduced by STD. The emergence of power-law dependences could, indeed, explain the high variability of permanence times in the up state suggested by experiments [7], [21].

Methods

Our starting point is a bistable rate model, which mimics the dynamics of the electrical activity of a population of interconnected excitatory neurons (although it can be easily extended to other situations) with two stable levels of activity. The model takes the form of equation (1) [33], where is the mean firing rate of the (homogeneous) neural population, is the maximum level of activity which can be reached by the population (in the absence of noise), is the synaptic coupling strength in the absence of STD, and is the firing threshold of the neurons in the population. The variable is a Gaussian white noise of zero mean and standard deviation , which takes into account the inner stochasticity of the neural population (caused by other sources of uncontrolled noise in the system). The parameter is the population time constant, which may be assumed to be around the duration of the synaptic current pulse [34], [35]. For generality, we set , and therefore time and frequency are given in units of and , respectively. The term represents the transduction function, which gives the nonlinear effect that the mean postsynaptic current (coming from recurrent connections of the neural population) induces in the network mean firing rate. With this form for , the up and down stable levels of activity correspond to and , respectively. On the other hand, the variable in equation (1) takes into account the dynamical modification of the strength of the synaptic connections on short time scales due to high network activity, usually called short-term synaptic plasticity. Based on the model proposed in [29], [36] for short-term depression, and following previous studies of the dynamics of neural populations [16], we assume that evolves according to equation (2), where is the characteristic time scale of the STD mechanism, and is a parameter related to the reliability of synaptic transmission.
According to experimental measurements of these parameters in the somatosensory cortex of the rat [36], we set and unless specified otherwise. Assuming, for instance, a population time constant of , which would approximately correspond to the duration of a fast synaptic current pulse mediated by AMPA receptors, we obtain , which is within the physiological range measured in [36]. The last term on the right-hand side of equation (2) is added to the original model in [36] to include some level of stochasticity in this otherwise deterministic description of synaptic transmission. The inclusion of such a term constitutes a simple way of accounting for the stochasticity due, for instance, to the unreliability of synaptic transmission [31], [32], the stochastic properties of receptor-transmitter interactions [37], the sparse connectivity of cortical circuits [38], [39], or other sources of noise not yet considered (see the Discussion Section for more details). The parameter controls the strength of this fluctuating term, and is a Gaussian white noise with zero mean and variance one. Equations (1) and (2) constitute our minimal model of an excitatory neural network with stochastic depressing synapses. The simplifications assumed in this model allow us to obtain analytical expressions for the quantities of interest, and concretely for the probability distribution of permanence times in the up state, denoted by . Bistable systems in the presence of different sources of noise have been studied theoretically in detail in many works [40]–[44]. Here, however, we employ a probabilistic approach which is very appropriate for the computation of the distribution of permanence times. In the following, we will derive an approximate expression for within this approach. First, we obtain the potential function and the conditions in which the dynamics of the system is driven by the variable .
After that, we compute the probability distribution of ruin times of which, as we will see, leads to the probability distribution of permanence times in the up state, namely .
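Since the display equations are not reproduced in this extract, the sketch below shows one way dynamics of this kind can be integrated numerically: a minimal Euler-Maruyama scheme coupling a noisy bistable rate equation (a tanh transduction of the depressed recurrent current) to a Tsodyks-Markram-style depression equation with an additive noise term. All symbol names (nu, x, J, theta, U, tau_rec, delta, sigma) and all parameter values are illustrative assumptions, not the published ones.

```python
import numpy as np

def simulate(T=2000.0, dt=0.05, tau=1.0, nu_max=1.0, J=2.0, theta=1.0,
             gain=3.0, sigma=0.2, tau_rec=100.0, U=0.5, delta=0.05, seed=0):
    """Euler-Maruyama sketch of the rate (nu) / depression (x) dynamics."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xi_nu = rng.standard_normal(n)   # noise driving the rate equation
    xi_x = rng.standard_normal(n)    # noise driving the synaptic variable
    nu = np.empty(n)
    x = np.empty(n)
    nu[0], x[0] = 0.0, 1.0
    sdt = np.sqrt(dt)
    for i in range(n - 1):
        # Sigmoidal transduction of the depressed recurrent current
        H = 0.5 * nu_max * (1.0 + np.tanh(gain * (J * x[i] * nu[i] - theta)))
        nu[i + 1] = nu[i] + dt / tau * (H - nu[i]) + sigma * sdt * xi_nu[i]
        # Recovery toward 1, activity-dependent depletion, additive noise
        x[i + 1] = (x[i] + dt * ((1.0 - x[i]) / tau_rec - U * x[i] * nu[i])
                    + delta * sdt * xi_x[i])
        x[i + 1] = min(max(x[i + 1], 0.0), 1.0)  # keep x a valid fraction
    return nu, x

nu, x = simulate()
```

With sufficiently strong rate noise the trace can switch between a low and a high activity branch while x slowly depletes and recovers; the parameter values above are chosen only to keep the sketch stable, not to reproduce the paper's figures.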

A. The potential function

In order to compute the potential function of the dynamics (1,2) (namely ), one can note that, for realistic values of , the dynamics of is very slow compared to that of . We can therefore write equation (1) as equation (3), where we have adiabatically eliminated from the dynamics of . The extrema of are given by the solutions of equation (4). In the following, we choose , with and . With this choice, one can easily check from equation (3) that the potential becomes symmetric in around when . Equation (4) may have one or three solutions, depending on the slope of the hyperbolic tangent and on the value of . In order to obtain three solutions of (4) (that is, the bistable regime), the maximal slope of the hyperbolic tangent must be large enough; concretely, the condition must be fulfilled. In addition, the threshold term must be neither too small nor too large, so that has three crossing points with the straight line rather than one. This last condition can be written, as a first approach, as and , where are the values at which the curvature of the hyperbolic tangent is maximal and minimal, respectively. The points can be easily computed from the third derivative of , equation (5). By setting this derivative to zero we obtain equation (6). Using now these values for , the conditions and can be written as equation (7), which implies that, in order to have one maximum and two minima in , the variable must be in the range , with the boundaries given by equation (8). From equation (8), one can see that the range of that allows for three extrema in the potential is given by equation (9). The condition implies , which is therefore a sufficient condition to obtain a double well potential for some value of . One can find, however, a small discrepancy between this approximate prediction and the actual properties of . The discrepancy appears because we have assumed that a sufficient condition for the existence of the three fixed point solutions of equation (4) is that and , and such an assumption is only approximately correct.
Plotting the potential directly as a function of reveals that the condition to obtain a double well potential for is , rather than . Assuming that the above condition () is satisfied, three different shapes of the potential function can be found, as figure 1A illustrates. When , the potential function presents only one minimum, located around . Similarly, for the potential also presents a single minimum, but now located around . Finally, for the potential takes a double well shape, with the maximum located around and the minima located around and , respectively.
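The shape classification just described can be checked numerically. The sketch below freezes the slow depression variable x, integrates dU/dnu = nu - H(nu; x) to obtain the adiabatic potential of equation (3), and counts its local minima. The concrete sigmoid H (a tanh with gain a, coupling J and threshold theta) and all parameter values are illustrative assumptions.

```python
import numpy as np

def potential(x, a=3.0, J=2.0, theta=1.0, nu_grid=None):
    """Adiabatic potential U(nu) for a frozen value of the depression x."""
    if nu_grid is None:
        nu_grid = np.linspace(-0.1, 1.1, 2001)
    H = 0.5 * (1.0 + np.tanh(a * (J * x * nu_grid - theta)))
    dU = nu_grid - H  # minus the deterministic drift of the rate equation
    # Trapezoidal integration of dU/dnu along the grid
    U = np.concatenate(([0.0],
                        np.cumsum(0.5 * (dU[1:] + dU[:-1]) * np.diff(nu_grid))))
    return nu_grid, U

def count_minima(U):
    """Number of interior local minima of the sampled potential."""
    return int(np.sum((U[1:-1] < U[:-2]) & (U[1:-1] < U[2:])))

# For small x only the down well survives; for larger x a double-well
# shape appears, qualitatively as in figure 1A.
wells = {x: count_minima(potential(x)[1]) for x in (0.6, 1.0)}
```

With these assumed parameters, x = 0.6 yields a single (down) well and x = 1.0 yields the symmetric double well; the x values at which the regimes change depend entirely on the assumed sigmoid.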
Figure 1

Considerations for the mean-field approach.

(A) Potential function , as a function of the mean firing rate and for different values of . One can appreciate the different regimes explained in the main text. Other parameters are and . (B) An Ornstein-Uhlenbeck (OU) process (see equation 10) with and . A typical return event (with return time ) and a first passage event (with first passage time ) are indicated for illustrative purposes. For the first passage time, the threshold (depicted as a blue dashed line) was fixed to 0.15.

It is worth noting that , with being the mean value of . Due to this, if the range is small compared with the fluctuations of , namely , the potential function will spend most of the time in the regimes and , with the double well regime appearing only when the system jumps from one of these regimes to the other (that is, when ). A direct consequence of this is that the mean firing rate will essentially switch between the up and down states (that is, and ), and that this switching will be driven by the dynamics of , as figure 2 illustrates. Therefore, one expects that the distribution of permanence times of in the up (down) state becomes approximately equal to the distribution of permanence times of in the () regime, as long as is satisfied. Due to this equivalence, in order to compute we only need to compute the distribution of permanence times of the variable in the regime, denoted as .
Figure 2

Time series showing the dynamics of our system.

(A) Time series of the mean firing rate of the neural population for deterministic depressing synapses. The temporal evolution of the variable is also plotted for illustration purposes. (B) Histogram of the mean firing rate, which shows the existence of two well defined states of activity in and , corresponding to the down and up states respectively. The values of the parameters are and . (C) Same as (A), but with a certain level of intrinsic stochasticity on the dynamics of the synapses (concretely, we set ). The two-headed arrow shows a typical interval of permanence in the up state, denoted by . (D) Same as (B), but for . The other parameters take the same values as in (A) and (B).

On the other hand, it should be noted that, since is a fraction of available neurotransmitters, its value should be kept within the range . In practice, this means that the value of must not be too large, so in order to make one has to restrict to small values. In the results presented here, remains within its realistic range of values, and imposing ad hoc restrictions so that is always within the range does not affect the results obtained here.

B. Distribution of permanence times

In order to compute the distribution of permanence times of in the (or ) regime, one can assume that the firing rate takes its mean value in equation (2). This is a reasonable approximation, since is much slower than for realistic values of the parameters. With this approximation, and after the rescaling , equation (2) can be written as equation (10), which is the equation of the Ornstein-Uhlenbeck (OU) process (see [45] for details), with being the correlation time and . Therefore, computing the distribution of permanence times in the up state for our system is equivalent to obtaining the distribution of the so-called ruin times for the OU process [46], [47], which may be defined as follows: if we consider a stochastic process starting at from , the ruin time is the interval , where is the time at which returns to for the first time. Since is a stochastic process, the ruin times are stochastic quantities which follow a certain probability distribution. The strategy employed here to calculate the distribution of ruin times is based on the relation between the ruin time and the first passage time, which is the typical time that a stochastic process needs to arrive at a certain threshold value when starting from a given initial condition [47]. Because of the symmetry of the OU process, the distributions of ruin times are equivalent whether one considers excursions of the variable in the region or in the region. If we consider excursions in the region, we can set a small positive threshold near zero (that is, ), in such a way that the typical ruin time will be approximately equal to the corresponding first passage time, as figure 1B illustrates. The excursions in the region typically lead to very short first passage times (since is too small), which we do not take into account in our calculations by considering only large enough ruin times.
The computation of the first passage time for the OU process with a small threshold can be carried out using the relation in equation (11), where is the conditional probability distribution of the OU process, and is the first passage time distribution. This equation can be solved by taking into account the property of the Laplace transformation given in equation (12), where is the Laplace transform of . By solving the Fokker-Planck equation associated with equation (10), one can obtain the conditional probability for the OU process, equation (13), where , with being the standard deviation of . From expression (13), and assuming that is large enough (more precisely, assuming that , which is a valid hypothesis since most of the permanence times in the up state are much lower than ), one arrives at equation (14). We denote and . Employing the Laplace transformation in and , the expressions in equation (15) are obtained. Now, taking into account the property (12) in equation (11), the expression for is given by equation (16). Finally, for small one can approximate . With this approximation, one can easily perform the inverse Laplace transformation of equation (16) and obtain the distribution of first passage times for the OU process, equation (17). A similar expression may be obtained from more classical derivations of the first passage time of the OU process (see, for instance, [48]). In order to obtain the distribution of ruin times of the OU process, one has to consider a small (but positive) value of , which leads to . The distribution of ruin times of the variable , namely , and therefore the distribution of permanence times in the up state, namely , are then given by equation (18), which corresponds to a power-law probability distribution for . Summarizing, the following three conditions must be fulfilled to obtain a power-law dependence in with exponent : (i) large enough values of , which ensures that the dynamics of is much slower than that of ; (ii) large enough values of , and in particular , according to the condition and the definitions and ; this condition can be achieved even with very small values of , since can be arbitrarily small (by increasing , for instance); (iii) the condition must hold, to ensure the existence of two well defined (up-down) states. All these conditions may be achieved (up to a point) with realistic values of the model parameters, indicating that power-law distributions of permanence times in the up state could plausibly be found in actual cortical media.
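The correspondence between up-state permanence times and OU ruin times can be illustrated numerically. The sketch below simulates an OU process (as in equation (10)) and collects the durations of its excursions above a small threshold, which approximate the ruin times; the correlation time, noise amplitude, and threshold used here are illustrative assumptions.

```python
import numpy as np

def ou_excursion_times(tau_c=50.0, amp=0.1, thr=0.01, T=20000.0, dt=0.1,
                       seed=1):
    """Durations of excursions of an OU process above a small threshold."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    xi = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = 0.0
    a, b = 1.0 - dt / tau_c, amp * np.sqrt(dt)
    for i in range(n - 1):
        y[i + 1] = a * y[i] + b * xi[i]     # Euler step of the OU process
    above = (y > thr).astype(int)
    edges = np.flatnonzero(np.diff(above))  # alternating up/down crossings
    if above[0]:
        edges = edges[1:]                   # drop a leading down-crossing
    starts, stops = edges[::2], edges[1::2]
    return (stops - starts[:len(stops)]) * dt

times = ou_excursion_times()
```

The resulting excursion durations span several orders of magnitude, from the integration step up to a few correlation times, which is the qualitative feature the derivation above formalizes; estimating the exponent of their distribution would require longer runs and careful binning.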

Results

As stated in the previous sections, equations (1–2) govern the dynamics of our simplified neural system. A typical time series of this model, for the case of deterministic synapses (that is, ), is depicted in figure 2A. In this case, the mean firing rate of the population is characterized by a periodic switching between up and down states. This type of periodic behavior was already found and analyzed in previous theoretical studies [14], [16], [18] and yields bimodal histograms for the mean firing rate of the neural population (see figure 2B), as the experiments indicate [3]. However, these approaches ignore the stochastic nature of synaptic transmission, and other forms of stochasticity at the synaptic level, which seem to be crucial for information processing in neural systems [31], [32], [49]. Considering a certain level of synaptic stochasticity in addition to STD in our model, one obtains a qualitatively different emergent behavior, as shown in figure 2C for . The mean firing rate then presents complex switching between up and down states, and in particular a high variability in the permanence times in the up state. When deterministic synapses are considered (that is, ), the dynamics of the mean firing rate becomes quasi-periodic, as reported in [16], [18], [19], for instance. This type of dynamics naturally leads to exponential distributions for the permanence times. More precisely, for our model is similar (except for the term ) to the one analyzed in [16], which shows periodic oscillations of the network mean firing rate. In our case, however, the term introduces a certain level of stochasticity which turns these periodic oscillations into quasi-periodic oscillations. This leads to the exponential distributions for the permanence times in the up state. When is increased, on the other hand, the stochasticity of the synapses leads to the appearance of power-law distributions for the duration of the up states.
This behavior is shown in figure 3A, where low values of correspond to exponential distributions for , while larger values of give , as predicted by our theoretical calculations. Such power-law distributions may explain the high variability of permanence times in the up state, which has been observed in a number of in vivo experiments, such as in the cat visual cortex [7] and the rat auditory cortex [21], to name a few. Interestingly, similar power-law dependences have been observed during sleep-wake transitions in vivo when one measures the distribution of permanence times in the wake state [22], [23]. On the other hand, the exponential-like distributions obtained for the case are not able to explain this variability in the duration of up states.
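The measurement behind these distributions can be made concrete. The helper below extracts permanence times from a sampled rate trace as the durations of maximal runs above a threshold, in the spirit of the criterion described in the caption of figure 3; the threshold value and the toy trace are arbitrary illustrations.

```python
import numpy as np

def up_state_durations(nu, dt, thr):
    """Durations of maximal runs with nu > thr in a sampled trace."""
    # Pad with zeros so every run has both an opening and a closing edge
    above = np.concatenate(([0], (np.asarray(nu) > thr).astype(int), [0]))
    edges = np.flatnonzero(np.diff(above))
    starts, stops = edges[::2], edges[1::2]
    return (stops - starts) * dt

# Toy trace with two up episodes, of 3 and 1 samples respectively
trace = np.array([0.0, 0.1, 0.9, 0.8, 0.95, 0.1, 0.9, 0.0])
durs = up_state_durations(trace, dt=1.0, thr=0.5)
```

Applied to a long simulated trace, the histogram of these durations is the empirical estimate of the permanence-time distribution discussed above.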
Figure 3

Probability distributions of permanence times in the up state.

(A) Probability distribution , obtained with numerical simulations, for different values of the noise strength . One can see that high values of lead to the appearance of power-law distributions with , as the mean-field solution predicts. For numerical simulations, we employed time series of duration and averaged over trials. The values of the other parameters were and . To compute , we have considered that the up state has been reached during a period (with ) if during this period. We set . (B) Probability distributions of permanence times in the up state, for different values of and fixed and . In order to fix and , we have conveniently modified and , respectively, for each value of . We employed time series of duration and averaged over trials. Other parameters are and .

By looking at the data for in figure 3A, one can observe a small deviation of the numerical results (blue points) with respect to the theoretically predicted slope (solid line) for very large values of . Such a deviation is due to the fact that the separation of timescales between the dynamics of and (a necessary condition to obtain power-law dependences in ) is only approximate when considering realistic values of the parameters (and in particular, realistic values of ). More precisely, the approximation fails when the activity of the system falls in the occasional periods of very long permanence times in the up state (that is, for large enough values of , comparable with ). In order to study the effect of the separation of timescales between and , we have computed for different (increasing) values of while keeping fixed values of and (this can be done by properly modifying and with , respectively). As a consequence, the only effect of increasing will be a clearer separation of timescales between and .
The results are shown in figure 3B, where one can see that larger values of (that is, a clearer separation of timescales) lead to a displacement of the effective cut-off towards higher values of , as expected, and a clearer power-law distribution emerges. It is worth noting that the appearance of an effective cut-off in under realistic conditions does not represent an unrealistic feature of the model; rather, it constitutes a prediction about the effective range of permanence times which are expected to occur in actual neural systems. Indeed, for realistic values of the parameters, our results predict permanence times in the up state of up to , which is the maximum permanence time observed in experimental realizations [7]. Larger permanence times in the up state (of about seconds, for instance) should be expected to appear only as a consequence of input-driven mechanisms (such as persistent activity associated with working memory tasks [50], [51]), and not as a consequence of spontaneous transitions between different voltage levels, which are the focus of this work. For a better characterization of the dynamics of the system, one can use other statistical measures, such as the autocorrelation function of , which can be defined as in equation (19). Here, indicates a temporal average. The autocorrelation function is depicted in figure 4A for the cases of deterministic depressing synapses () and stochastic depressing synapses (). For , presents two well-defined peaks at , which indicate a strong periodicity of the time series (as can be seen in figure 2A). On the contrary, the inclusion of a certain level of intrinsic stochasticity in the dynamics of introduces more pronounced temporal correlations in the dynamics of the system. This reflects the existence of long stays in the up state, which occur with higher probability for high enough values of , as we have already discussed.
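A direct estimator of an autocorrelation function of the kind defined in equation (19) can be sketched as follows; subtracting the mean and normalizing by the variance are conventions chosen here for illustration.

```python
import numpy as np

def autocorrelation(sig, max_lag):
    """Normalized autocorrelation <sig(s) sig(s+t)> - <sig>^2 over lags."""
    sig = np.asarray(sig, dtype=float)
    sig = sig - sig.mean()
    var = np.mean(sig * sig)
    return np.array([np.mean(sig[:len(sig) - k] * sig[k:]) / var
                     for k in range(max_lag + 1)])

# A periodic signal shows a peak at the lag equal to its period, analogous
# to the peaks seen in figure 4A for the deterministic case.
t = np.arange(2000)
c = autocorrelation(np.cos(2 * np.pi * t / 50.0), 100)
```

For the noisy model trace, the same estimator would reveal the slower, broader correlations discussed in the text instead of sharp periodic peaks.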
Figure 4

Autocorrelation and power spectra.

(A) Autocorrelation function of the mean firing rate for deterministic () and stochastic () synapses, in the presence of STD. (B) Power spectra of the mean firing rate for the two cases illustrated in (A). For both panels, we have averaged over time series of duration each, and we have fixed and .

The spectral properties of the dynamics can be analyzed as well, via the power spectrum defined in equation (20). As one could expect, the power spectrum for the case presents a pronounced peak around a certain frequency, which in the particular case presented in figure 4B is . The power spectrum for higher values of shows, however, different properties from the case . For instance, figure 4B (which considers ) indicates an approximate power-law behavior for the power spectrum, with . This scale-free dependence can be understood by considering that, if is algebraic with exponent , the corresponding power spectrum is also algebraic, with exponent , where an equation relates both exponents [44]. In our particular case, since , one obtains a theoretical prediction of for the exponent of the power spectrum. The theoretical relation between and presented above, however, is only valid under the so-called single interval approximation, which implies that the integration variable in equation (20) is smaller than the permanence time (see [44] for details). This condition does not strictly hold for our system (where ranges over several scales), and it may therefore introduce deviations of the theoretically predicted value of (which is around ) with respect to the value found in simulations (of around ). Besides the level of synaptic stochasticity, i.e. , other parameters of the model could have an important effect on the dynamics as well. The parameter , for instance, controls the level of stochasticity of the dynamics of , and therefore one should expect that increasing its value could strongly influence the probability distribution .
This is shown in figure 5A, where an increase of disrupts the appearance of power-law dependences, and exponential distributions appear instead. This change in occurs because high levels of additive noise make the system jump more frequently from one state to the other, and therefore long stays in the up state (and thus distributions with long power-law tails) rarely occur.
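A power-spectrum estimate of the kind used in figure 4B can be sketched with a plain FFT periodogram; the normalization and frequency units below are illustrative choices, not necessarily those of equation (20).

```python
import numpy as np

def power_spectrum(sig, dt):
    """Periodogram estimate of the power spectrum of a sampled signal."""
    sig = np.asarray(sig, dtype=float)
    sig = sig - sig.mean()                     # remove the DC component
    spec = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    return freqs, spec

# A periodic rate trace concentrates its power at the oscillation
# frequency, analogous to the peak in figure 4B for the deterministic case.
n = np.arange(1024)
freqs, spec = power_spectrum(np.sin(2 * np.pi * 8 * n / 1024), dt=1.0)
```

For the stochastic regime, averaging such periodograms over many independent trace segments (as done for figure 4B) reduces the estimator variance enough to read off an approximate spectral slope on a log-log plot.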
Figure 5

Influence of other parameters of the model.

(A) Probability distributions of permanence times in the up state, for different values of . Other parameters are and . (B) Same as in (A), but for different values of . The other parameters take the same values as in (A), except for . (C) Probability distribution as a function of and . The three different regimes are shown with different colors (see main text for details). Other parameters are and . For all panels, we have averaged over times series of duration each.

The parameters involved in the dynamics of also affect the probability distributions . The parameter , for instance, is responsible for the modulation of via the mean firing rate (see equation (2)), and therefore it can influence the dynamics of both and . As one may see in figure 5B, when takes low values a bump in emerges for high . Such a deviation from the power-law dependence indicates that long stays in the up state occur more frequently than in the power-law case. Looking at equation (2), one can see that an increase of the mean firing rate decreases the variable via the parameter . Therefore, if takes lower values the decrement of will be smaller. As a consequence, the stays of in the regime (see the Methods Section) will last longer, and the stays of the system in the up state will also last longer, causing the observed deviation from the power-law tendency. It should be noted, however, that the values of which allow the appearance of power-law dependences in in our model agree with the values of measured in actual cortical media where up and down transitions are observed [36]. We have also analyzed in detail the effect that varying has on the probability distribution of permanence times. Note that, contrary to the previous analysis, we have now varied the parameter while all the other parameters are kept fixed.
This implies that modifying will now affect the separation of timescales between and , as well as the concrete value of and the amplitude of the noisy term of equation (2) (namely ). The results are shown in figure 5C, where one can distinguish three different regimes as a function of the particular value of . For low (red region in the figure), the probability distributions show an exponential decay for large permanence times. The reason for this decay is that, for low , the variable does not perform long excursions in the region (see Methods Section), and therefore the probability of having large values of decreases and the power-law behavior for is not obtained. As is increased, long excursions of begin to occur, and we obtain a power-law behavior (green region in the figure). Finally, one can see that, for even larger values of (blue region in the figure), the probability distribution of permanence times in the up state presents a power-law dependence with being an increasing function of . Such a dependence cannot be explained by our previous theoretical predictions, which are based on the assumption that the system is in the bistable regime, and it deserves the detailed analysis presented in the next section.
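The model discussed in the text (a bistable rate equation whose synaptic current is modulated by a noisy short-term-depression variable) can be sketched with a simple Euler-Maruyama integration. The equations and every parameter value below are our illustrative reconstruction with generic symbol names (`nu` for the rate, `x` for the depression variable, `tau_rec` for the recovery time, `sigma` for the noise amplitude); the paper's exact symbols and numbers are not preserved in this copy, so this is only a sketch of the kind of dynamics involved:

```python
import numpy as np

def simulate_up_down(t_max=2000.0, dt=0.1, tau_nu=10.0, tau_rec=400.0,
                     u=0.5, w=2.0, i0=-0.3, sigma=0.15, seed=0):
    """Euler-Maruyama integration of a rate/depression pair:

        tau_nu * dnu/dt = -nu + tanh(w * x * nu + i0)
        dx/dt           = (1 - x) / tau_rec - u * x * nu + sigma * xi(t)

    All equations and parameter values are illustrative reconstructions,
    not taken from the paper."""
    rng = np.random.default_rng(seed)
    n = int(round(t_max / dt))
    nu = np.empty(n)
    x = np.empty(n)
    nu[0], x[0] = 0.1, 1.0
    for k in range(n - 1):
        nu[k + 1] = nu[k] + dt / tau_nu * (-nu[k] + np.tanh(w * x[k] * nu[k] + i0))
        x_next = (x[k] + dt * ((1.0 - x[k]) / tau_rec - u * x[k] * nu[k])
                  + sigma * np.sqrt(dt) * rng.standard_normal())
        x[k + 1] = min(max(x_next, 0.0), 1.0)  # fraction of resources stays in [0, 1]
    return nu, x

nu, x = simulate_up_down(t_max=200.0)
```

With the slow recovery time much larger than the rate time constant, the noise enters through the slow synaptic variable, which is the ingredient the text identifies as essential for the irregular switching.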

Further analysis

In the Methods Section, we established several conditions which must be fulfilled in order to obtain power-law dependences for . In particular, our previous analysis indicates that the condition must hold in order to have a potential function with three extrema (bistable regime). However, as we will see in the following, power-law expressions for may appear even if the potential function has only one extremum in (concretely, one minimum), although the origin of such power-law distributions is different from the one considered in previous sections. When (which occurs for or , for instance), the potential function has only one minimum in , whose location strongly depends on . An approximate expression for the location of this minimum as a function of can be obtained by expanding the hyperbolic tangent of the fixed-point expression of (see equation (4)) in its argument (which is small in this limit), yielding equation (21), where is the value of which corresponds to the minimum of the potential function. Therefore, as varies around , the location of the minimum of the potential also varies in the same way around . As an example, time series of both and are shown in figure 6A for a given set of parameters which satisfies . In these time series, the variable fluctuates around the value , which is fully determined by (that is, becomes a slave variable of ). The predictions of equation (21) agree reasonably well with simulations and with the numerical evaluation of the fixed points of equation (1), as figure 6B shows.
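The small-argument expansion described here can be written out explicitly. The following is a hedged reconstruction with generic symbols (rate $\nu$, depression variable $x$, coupling $w$, bias $I_0$); the paper's actual equations (4) and (21) are not preserved in this copy, so take this only as an illustration of the kind of linearization involved:

```latex
\nu^{*} = \tanh\!\left( w\,x\,\nu^{*} + I_{0} \right)
\;\approx\; w\,x\,\nu^{*} + I_{0}
\quad\Longrightarrow\quad
\nu^{*}_{\min}(x) \;\approx\; \frac{I_{0}}{\,1 - w\,x\,},
\qquad w\,x < 1 .
```

In this generic form, the single minimum of the potential sits at $\nu^{*}_{\min}(x)$ and tracks the slow variable $x$, which is why the rate behaves as a slave variable of the synaptic state in this regime.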
Figure 6

Behavior of the system when the condition holds.

(A) Time series of the variables and . (B) The same time series, represented on the plane, illustrating that is a slave variable of (although some level of inner stochasticity in is still present). The green line corresponds to the approximate expression (21), while the blue line is the numerical evaluation of the fixed-point solutions of (see equation (4)). The inset shows the situation in which the system displays the bistable dynamics analyzed in the previous section. (C) The potential function as a function of for different values of . One can appreciate the existence of only one minimum, whose location is controlled by . (D) Histograms of the mean firing rate of the system for different values of . For the cases shown in this panel, the condition is only satisfied for the case . For all panels, , and unless otherwise specified.

Since now behaves as a stochastic variable which does not present a clearly bistable dynamics, the numerical computation of the distribution of permanence times will depend on the exact value of above which the system is considered to be in the up state. As we have seen before, this threshold value takes the form (see caption of figure 3), where usually takes a value between and . While the results presented for (that is, the bistable regime) are quite robust for different values of , in the regime this parameter does have some effect on , which indicates the difficulty of accurately analyzing the up and down dynamics in this case. In figure 7A, one observes that the distribution also shows a power-law behavior for and different values of , for a set of parameter values which satisfies (that is, the monostable regime). The concrete value of depends strongly on and also has a weaker dependence on , as figure 7B illustrates. This type of power-law behavior, appearing in the monostable regime, corresponds to the blue region in figure 5C as well.
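The threshold-based segmentation described above (the system counts as "up" whenever the rate exceeds a threshold value) can be sketched as a small helper that extracts permanence times from a rate trace; the function name and the simple crossing rule are ours, not the paper's:

```python
import numpy as np

def up_state_durations(nu, dt, threshold):
    """Return the durations of contiguous runs where nu > threshold."""
    up = nu > threshold
    # Indices where the up/down label changes between consecutive samples.
    edges = np.flatnonzero(np.diff(up.astype(int)))
    # Add the trace boundaries so runs touching the ends are counted too.
    bounds = np.concatenate(([0], edges + 1, [len(nu)]))
    runs = [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
    return dt * np.array([b - a for a, b in runs if up[a]])

# Toy trace: two up episodes of 2 and 3 samples (dt = 0.1).
durations = up_state_durations(np.array([0.0, 1, 1, 0, 0, 1, 1, 1, 0]), 0.1, 0.5)
```

A histogram of the durations returned by such a helper is what the text compares against exponential and power-law forms.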
Figure 7

Statistics of permanence times in the up state for .

(A) Probability distribution of permanence times in the up state in the regime, for and different values of . One can see that power-law relations appear. (B) Dependence of on for the conditions presented in (A). The inset shows the dependence of on the parameter for the case . We have averaged over time series of duration each. Other parameters are and .

It is worth noting that actual recordings of up and down transitions do not present a clear distinction between up and down states, and several nontrivial methods are commonly employed to discriminate between the two states [52]. Therefore, the results found for the regime could indeed reflect the behavior of actual cortical up-down transitions, showing power-law dependences in with and indicating that the concrete nature of the transitions is a synaptic-driven monostable dynamics. For a complete characterization of the model, one can summarize all the observed behaviors in a phase plot such as the one presented in figure 8A. A total of four different behaviors can be found in the space. The first one concerns dynamics of whose permanence times in the up state follow an exponential distribution (labeled "E" in the figure). If the noise amplitude is sufficiently high, one can increase the value of to reach the regime "C", in which the dependence is obtained. By increasing further, the probability distribution takes the form , with (regime denoted by "S"), as we have already seen in figure 6. Finally, we also observe that when the depression time scale is not large enough (and ), a regime of quasi-periodic time series of is obtained, with a well-defined duration of up states (regime denoted by "P"). The boundaries between the different regimes have been obtained by visual inspection of for different values of and .
In particular, the regime "P" is characterized by the appearance of a bump in the probability distribution at some value of (which reflects a preferred duration of the up state), and the existence of such a bump has been used as the criterion to distinguish between regimes "P" and "E". Similarly, we assumed that the regimes "C" and "S" correspond to the situation in which a power-law behavior extending over two decades or more is found for . This criterion, together with an estimation of the slope of the power law via standard Levenberg-Marquardt fitting algorithms, allows one to distinguish between regimes "E", "C" and "S".
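The slope estimation mentioned above can be sketched with SciPy's `curve_fit`, which uses a Levenberg-Marquardt solver by default for unconstrained problems. The fit is done on log-transformed data; the synthetic data and the exact exponent below are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def powerlaw_exponent(t, p):
    """Fit log10(p) = log10(a) - alpha * log10(t) and return alpha.
    Without bounds, curve_fit defaults to Levenberg-Marquardt."""
    def line(logt, loga, alpha):
        return loga - alpha * logt
    (_, alpha), _ = curve_fit(line, np.log10(t), np.log10(p))
    return alpha

# Synthetic check: samples of p(t) = t^(-2) over three decades
# should recover an exponent close to 2.
t = np.logspace(0.0, 3.0, 50)
alpha = powerlaw_exponent(t, t ** -2.0)
```

In practice one would first restrict the fit to the range where the distribution actually looks straight on log-log axes, consistent with the two-decade criterion stated in the text.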
Figure 8

The different dynamical regimes of the model.

(A) Phase plot showing the different behaviors found in our system. These behaviors correspond to time series of for which permanence times in the up state follow an exponential distribution (E), a power-law distribution with (C), or a power-law distribution with (S). In addition, a phase with a well-defined duration of the up state is found (P). Panel (B) depicts some of these behaviors; from top to bottom one can see situations P, E and C. Other parameters are and .

It must be clarified, however, that actual up and down cortical transitions most likely present a richer repertoire of dynamical regimes than the one obtained with our simplified model. It is known, for instance, that attractor neural networks with dynamic synapses may exhibit different dynamics corresponding to memory, non-memory and switching regimes [18], [19]. In this work, we have extensively explored different regimes of switching behavior and their implications for the up and down dynamics observed in the cortex. The memory and non-memory regimes, however, can also be found in our simplified model by assuming that . After taking these limits, the system will be in the memory regime if the potential function is bistable, or in the non-memory regime if is monostable.

Discussion

We have shown that the experimentally observed large fluctuations in up and down permanence times can be explained as the result of sufficiently noisy dynamical synapses with sufficiently large recovery times. Our study suggests that a power-law distribution for these permanence times may emerge as a consequence of these two ingredients. Static synapses cannot account for this behavior, nor can dynamical synapses in the absence of noise. The origin of up and down cortical transitions is still unclear, although different factors that may influence their occurrence have recently been reported. It is known, for instance, that inhibitory GABAergic currents strongly contribute to the temporal coding and spike timing precision of cortical networks during up states of activity [3], [53], [54]. Several modeling studies also show the relevance of inhibitory interneurons in the generation of many types of oscillations in the brain (see for instance [55]). However, other studies indicate that most of the main features of up and down transitions depend strongly on synaptic plasticity mechanisms, both long-term and short-term ones [16], [56], and that the transitions appear even in the absence of inhibition [16]. In this work we have made the common assumption that the effects of inhibition can be treated as additive and can be incorporated into the threshold of the neuron. This is known to be a valid approximation in mean-field neural network analysis, but it may fail when precise timing and details of the dynamical aspects of the neuron affect the inhibition [57], [58]. Regarding synaptic characteristics, recent works show that synaptic fluctuations could play an important role in the generation of transitions between up and down states [14], [59], [60]. Since our model introduces stochasticity in the synaptic dynamics in a highly simplified manner, however, the last term in equation (2) should not be associated only with unreliability in synaptic transmission.
Indeed, we have assumed that other sources of stochasticity may contribute to this fluctuating term in the mean-field quantities and . For instance, it is widely known that connectivity in actual cortical media is highly sparse. This feature implies that, in order to obtain the mean-field quantity , the average must be performed over a number of synapses, with ranging over connections per neuron [38]. In this situation, the fluctuations of would be of order , which leads to a range of for the values given above. As we have seen, our results state that a value of is enough to obtain power-law distributions (see figure 3), which lies within this range. Therefore, topology-induced fluctuations constitute an important source of stochasticity which could be responsible for the appearance of power-law distributions in . Other sources of stochasticity at the synaptic level, such as the stochastic properties of receptor-transmitter interactions, may also contribute to the last term of equation (2). Moreover, the low activity rates typical of cortical media lead to a poor time-averaging of the incoming input, and therefore the fluctuations at the postsynaptic level will be large at short time scales (of the order of the typical synaptic integration time constant). On the other hand, the amplitude of the noisy term, , does not need to be very high to induce the appearance of power-law distributions in . As we have stated above, a sparse connectivity already induces a level of stochasticity within the desired range, for instance. Furthermore, the noisy term could even be arbitrarily small: according to our theoretical predictions, a necessary condition for power-law distributions is that the fluctuations of be much larger than (see Methods Section). Since may be lowered to arbitrary levels (by increasing , for instance), even a small noisy term in the dynamics of may induce power-law distributions.
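The scaling argument in this paragraph (fluctuations of a mean over many synapses shrink as one over the square root of their number) can be illustrated numerically. The connectivity values used below are purely illustrative, since the paper's exact figures for the number of connections per neuron are not preserved in this copy:

```python
import numpy as np

# Averaging over K independent synaptic contributions leaves relative
# fluctuations of order 1/sqrt(K). Example K values are illustrative only.
scales = {k_syn: 1.0 / np.sqrt(k_syn) for k_syn in (10**3, 10**4)}
for k_syn, s in scales.items():
    print(f"K = {k_syn:>5d}  ->  fluctuation scale 1/sqrt(K) = {s:.4f}")
```

For connectivities in the thousands this gives fluctuation amplitudes of a few percent, which is the order of magnitude the text argues is already sufficient to produce power-law statistics.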
It is also known that short-term synaptic mechanisms, such as short-term depression and facilitation, usually play a role in the efficient processing of information. In particular, they may be relevant in many tasks, such as signal detection and coding [29], [61]–[63] or switching between different previously stored activity patterns [19], [64]. However, their role in the transitions between cortical states has been pointed out only by a few studies [15], [16], [65], and their possible effects on the statistics of the transitions, which is the focus of our work, have been ignored. To the best of our knowledge, the present study is the first one which analyzes, even in a simplified manner, the strong effect of synaptic stochasticity (in a general sense) and dynamic synapses on the statistics of the up and down transitions. The possible role of other short-term synaptic mechanisms, such as STF, has not been addressed yet and constitutes an interesting open issue. In our analysis we assumed that the dynamics is symmetric in the up and down states. This is in contradiction with experimental evidence [66], which shows that power-law distributions are obtained for permanence times in the up state, while permanence times in the down state are exponentially distributed. However, this discrepancy disappears when one considers a more realistic transduction function which gives an asymmetric potential for the dynamics, so that the up-down symmetry is broken. More detailed studies considering, for instance, some of the biologically realistic aspects discussed above should be performed to test our predictions. In particular, a more elaborate study considering realistic neuron models (such as the Hodgkin-Huxley model [67]) and stochastic STD models (see [29], [49], for instance) is necessary, as well as more detailed experimental studies which may confirm our predictions.
From a general point of view, evidence of criticality has recently been found in an increasing number of neural systems, such as in the functional connectivity of the living human brain [24], in critical avalanches of neuronal activity [25], or in sleep-wake transitions [23], to name a few. According to the results presented in this work, transitions between up and down cortical states could also present some relevant properties typical of systems at criticality. Some of these properties have already been measured in experiments, such as a high sensitivity of the system to external stimuli [8], or the presence of power-law dependences in the power spectra of the neural dynamics [53]. It is worth noting that other kinds of probability distributions for , such as a log-normal distribution, could also satisfactorily explain the irregularity in the up states found in experiments. Our study shows the importance of some biophysical factors, such as the neurotransmitter recovery time and the inherent synaptic stochasticity, and predicts a power-law dependence of as a consequence of such factors. However, further study is needed to investigate other mechanisms, not taken into account in this work, which could influence the permanence times in the up state. In a more general sense, our results may provide a new perspective on the phenomenon of up and down transitions (and a theoretical framework) that could serve to reconcile the main experimental findings and help toward a deeper understanding of this complex dynamics of brain activity.
References (56 in total; first 10 shown)

1.  Population dynamics of spiking neurons: fast transients, asynchronous states, and locking.

Authors:  W Gerstner
Journal:  Neural Comput       Date:  2000-01       Impact factor: 2.026

2.  Impact of intrinsic properties and synaptic factors on the activity of neocortical networks in vivo.

Authors:  I Timofeev; F Grenier; M Steriade
Journal:  J Physiol Paris       Date:  2000 Sep-Dec

3.  A new hypothesis for sleep: tuning for criticality.

Authors:  Barak A Pearlmutter; Conor J Houghton
Journal:  Neural Comput       Date:  2009-06       Impact factor: 2.026

Review 4.  Cellular basis of working memory.

Authors:  P S Goldman-Rakic
Journal:  Neuron       Date:  1995-03       Impact factor: 17.173

5.  Excitatory and inhibitory interactions in localized populations of model neurons.

Authors:  H R Wilson; J D Cowan
Journal:  Biophys J       Date:  1972-01       Impact factor: 4.033

6.  Neuron activity related to short-term memory.

Authors:  J M Fuster; G E Alexander
Journal:  Science       Date:  1971-08-13       Impact factor: 47.728

7.  Common scale-invariant patterns of sleep-wake transitions across mammalian species.

Authors:  Chung-Chuan Lo; Thomas Chou; Thomas Penzel; Thomas E Scammell; Robert E Strecker; H Eugene Stanley; Plamen Ch Ivanov
Journal:  Proc Natl Acad Sci U S A       Date:  2004-12-06       Impact factor: 11.205

8.  Intracellular analysis of relations between the slow (< 1 Hz) neocortical oscillation and other sleep rhythms of the electroencephalogram.

Authors:  M Steriade; A Nuñez; F Amzica
Journal:  J Neurosci       Date:  1993-08       Impact factor: 6.167

9.  Structure of spontaneous UP and DOWN transitions self-organizing in a cortical network model.

Authors:  Siu Kang; Katsunori Kitano; Tomoki Fukai
Journal:  PLoS Comput Biol       Date:  2008-03-07       Impact factor: 4.475

10.  Robust off- and online separation of intracellularly recorded up and down cortical states.

Authors:  Yamina Seamari; José A Narváez; Francisco J Vico; Daniel Lobo; Maria V Sanchez-Vives
Journal:  PLoS One       Date:  2007-09-12       Impact factor: 3.240

