
Reliable neuronal systems: the importance of heterogeneity.

Johannes Lengler, Florian Jug, Angelika Steger.

Abstract

For every engineer it goes without saying: in order to build a reliable system we need components that consistently behave precisely as they should. It is also well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type come with huge variances in their properties, and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both these facts add variance to neuronal processes, and that this variance is not a handicap of neural systems; instead, predictable and reliable functional behavior of neural systems depends crucially on this variability. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals, and to process them faster and more energy-efficiently. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.


Year:  2013        PMID: 24324621      PMCID: PMC3851464          DOI: 10.1371/journal.pone.0080694

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

A main difference between computers and the human brain is that computers are composed of extremely reliable components with failure rates as small as [1], while the failure rate of vesicle release at a synaptic site is – [2], meaning that failure of vesicle release is the rule rather than the exception. Another difference is that computers contain billions of identical gates, while neurons in the brain are highly individual [3], [4]. These seemingly different aspects of the brain have a joint effect: both add variance to signal processing. At first glance, this appears to be a hindrance to reliable neuronal networks. However, in other contexts variance has proven to enhance the inherent information in a system. In particular, it has been shown that extrinsic noise can lead to more reliable and efficient signal processing in the crayfish [5] and other animals [6], [7]. This effect is known as “stochastic resonance” and is particularly well established in the theory of coupled oscillators [8]. In this article we demonstrate a similar, but intrinsic, mechanism for neuronal networks. Our simulations show that the heterogeneity of neurons and the unreliability of synaptic transmission increase the speed, responsiveness, and even robustness of networks of spiking neurons as depicted in Figure 1.
Figure 1

Network.

Schematic representation of our recurrent network (cf. text).

Over the last decades a lot of theoretical work has been invested in understanding neuronal coding and signal processing [9], [10]. Most of these investigations studied neural behavior in abstract networks consisting of highly simplified and identical neurons. This approach is based on a principle that has proved very successful in mathematics and many other disciplines: first understand how a system works in a pure setting, and then generalize it step by step in order to transfer it to a noisier real-world scenario. In this paper we argue that in the case of neuroscience such an approach may well lead to misconceptions about fundamental principles of information processing in the brain. Our simulations of populations of neurons whose connectivities and properties are closely matched to biological data (cf. methods) show that variance in the synapses and neurons crucially changes the dynamics of the network. For example, the spikes in reliable, homogeneous networks tend to synchronize to a precision of a few milliseconds (and thus to a precision considerably higher than observed in behaving humans and animals). The same network with unreliable and heterogeneous synapses and neurons, on the other hand, reduces these correlations (cf. Fig. 2 c,d). We also show that the amount of input activity needed to elicit activity is significantly smaller, and thus more energy efficient, in a heterogeneous setup (Fig. 3 a–c). The same is true for the time it takes a population of neurons to react to an external stimulation (Fig. 3d). The differences become even more distinctive when the output of one population is used as the input of another, as depicted in Figure 4.
Figure 2

Response to Poisson Input.

(a) Input-response curve of a heterogeneous (blue) and homogeneous (red) population for pure Poisson input (250 neurons) of varying rates. The shaded areas show the standard deviation over 100 trials, each lasting s. The synaptic weights are chosen such that the network has a high dynamic range. (b–d) Behavior of the network in response to flawed Poisson input of Hz; the -axis measures the synchronization (cf. text). (b) The output rate of the heterogeneous network (blue) remains unaffected, while the homogeneous network reacts with an increase of the output rate and an increasing variance; shaded areas show the standard deviation over 100 trials. (c) The reason for the behavior in (b): the coefficient of variation (CV) of the interspike times increases only very slowly in the heterogeneous case (blue curve), but quickly moves to high values in the homogeneous case (red curve). The low CV values of the heterogeneous network are not caused by a single parameter: eliminating variance only from neuronal properties (purple) or just making synapses 100% reliable (brown) increases the CV values only slightly above the blue curve. (Curves show mean values of the experiments in (b).) (d) Data points from the experiments in (b); the -axis corresponds to the CV value of the input. As the input is identical for the homogeneous and heterogeneous networks, each input gives rise to a blue point (heterogeneous network) and a red point (homogeneous network) at the same -value; the plot shows that the heterogeneous network has strictly smaller CV values. (e,f) Behavior if the output of the network is fed as input to an additional network; here we study the effect on a sequence of up to eight such populations. (e) Coefficient of variation in various populations for Poisson input with a given rate; blue: heterogeneous network, red: homogeneous network, green: input; curves show means of 20 trials. Note that we show populations , , , and for the homogeneous network, and populations , , , and for the heterogeneous network. (f) Cross-correlation for flawed Poisson input () as a function of the bin size (in ms); curves show means of the cross-covariances of 20 experiments, each using 20 trials to compute the cross-covariance.

Figure 3

Response to Flanks.

Behavior of a heterogeneous (blue) and homogeneous (red) population in response to a single input flank. (a–c) The -axes denote the number of input neurons that spike, the -axes the number of neurons that spike within the population; input spikes are randomly distributed within an interval of (a) ms, (b) ms, (c) ms; shaded regions show the standard deviation over 100 trials. Note that a broader input distribution leads to more spikes, at the price of a later activation of the population: (d) shows the time of the first spike in the population as a function of the number of input neurons (-axis) and the size of the input interval ( ms: solid lines, ms: dashed lines, ms: dotted lines; blue: heterogeneous network, red: homogeneous network). The curves start at the input size at which all 100 trials produced at least one spike. The heterogeneous network can be activated by fewer input spikes, and it reacts faster.

Figure 4

Feed Forward Model.

A feed-forward chain of several populations (here for ), as used in Fig. 2 (e,f).


Our results can also explain a discrepancy between experimental results and theoretical modeling and simulation in current efforts to understand neuronal signal propagation. While some experiments show decorrelating effects [13], most simulations report increasing correlations [14]–[20]. Those that found propagation modes of stable or decreasing correlation needed to incorporate additional assumptions such as (i) a high level of extrinsic noise [21], (ii) simultaneous convergence of multiple signals [20], [22], or (iii) unrealistically strong feed-forward synapses [19] (see also the section on related work). Here we show that decorrelation is actually possible without any of these additional assumptions: the existing discrepancy vanishes if we incorporate a realistic amount of heterogeneity in the network. Another, perhaps more surprising result is that variability in the neuronal parameters guarantees stability. In order to study this we considered the dynamics of a recurrent network (see below) in response to distorted input. By this we mean the following. Two types of input that are both intensively studied are (i) ‘Poisson input’, where the input neurons spike independently, and (ii) synchronized input, where the input arrives as a ‘flank’, meaning that the input neurons spike more or less simultaneously. Both modes have been observed in biological neural systems [23], and most likely both play important roles in information processing in the brain. While it is easy to generate independent Poisson spikes in simulations, it is still unclear how neuronal ensembles in the brain could generate Poisson spiking: wherever activity arises there are lateral/local connections and thus some dependencies between spike times. That is, even if Poisson-like spiking occurs it will most likely be flawed by synchronization (as is well known from simulations [18]). In Fig. 2 c–f we show the effect of homogeneity/heterogeneity in such a scenario. We find that a heterogeneous network is not only immune to a certain amount of synchronized activity but can even remove or weaken it, while homogeneous networks amplify it.

Results and Discussion

We simulated populations of excitatory and inhibitory conductance-based leaky integrate-and-fire neurons randomly interconnected with a connectivity of [24]. The neuronal and synaptic models come with many parameters, all of which we drew randomly for each neuron and synapse from a distribution based on physiological data in mammalian cortex (“heterogeneous network”, see methods). Moreover, we incorporated the fact that vesicle release at a synaptic site is unreliable [2]. In a second setup, we set all parameters to the mean values of their distributions and made the vesicle release reliable (“homogeneous network”), normalizing the synaptic weights such that the expected postsynaptic current was the same in both setups. We studied the reaction of a population to perfect Poisson input and to Poisson input that was flawed by spontaneous synchronized activity, in which all input neurons produced a spike within 5 ms. We control the amount of synchronization by a parameter giving the fraction of spikes belonging to the flanks (so means Poisson spiking, and for all spikes belong to flanks). The input was modeled by excitatory neurons, each projecting to randomly chosen neurons of the target population. The mean synaptic weights of lateral connections were chosen such that excitation and inhibition were balanced [25]. That is, the total activity in the population shows input normalization [26] for a large range of input rates (Fig. 2 a). This does not mean that all neurons behave the same. For example, in the experiment from Figure 2 a with the heterogeneous setup and Hz input, there were more than neurons firing with less than Hz, while neurons fired with Hz, among them with Hz. For other input rates, and also for the homogeneous setup, the corresponding numbers were very similar. Finally, the transmission delays of spikes were drawn randomly in both cases, as they depend on the geometry of the network and thus differ even if all neurons are identical (see methods).
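The “flawed Poisson” input described above can be sketched as follows. This is our own illustration, not the authors' code: the function name and the generation scheme (independent background spikes mixed with synchronized flank events of 5 ms width, with a parameter q giving the fraction of spikes belonging to flanks) are assumptions matching the textual description.

```python
import numpy as np

def flawed_poisson_input(n_neurons, rate_hz, q, duration_s,
                         flank_width_s=0.005, seed=0):
    """Spike trains that are Poisson except for a fraction q of spikes
    concentrated in synchronized 'flanks' (each neuron spikes once per
    flank, all within flank_width_s). q = 0 gives pure Poisson input;
    q = 1 makes every spike part of a flank. Returns one sorted array
    of spike times per input neuron."""
    rng = np.random.default_rng(seed)
    bg_rate = (1.0 - q) * rate_hz        # independent background spike rate
    flank_rate = q * rate_hz             # per-neuron rate of flank spikes;
                                         # one spike per neuron per flank,
                                         # so this is also the flank event rate
    n_flanks = rng.poisson(flank_rate * duration_s)
    flank_times = rng.uniform(0.0, duration_s, size=n_flanks)
    trains = []
    for _ in range(n_neurons):
        n_bg = rng.poisson(bg_rate * duration_s)
        bg = rng.uniform(0.0, duration_s, size=n_bg)
        jitter = rng.uniform(0.0, flank_width_s, size=n_flanks)
        trains.append(np.sort(np.concatenate([bg, flank_times + jitter])))
    return trains
```

Note that the expected per-neuron rate is rate_hz regardless of q; only the temporal structure changes.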
We found that the global response of a population to Poisson input is very similar in the heterogeneous and the homogeneous case in terms of average output rate (Fig. 2 a). This was no surprise, as in a population of some thousand neurons the law of large numbers should diminish the effect of variations in the neuronal parameters. As a measure of synchrony, we computed the average cross-covariance (CC) of the binned spike trains of pairs of neurons (see methods for the reasons), as done in the experimental paper [13]. The CC is a measure of the correlation between neuronal activity in a small time interval (“bin”) over several trials. A high CC indicates that in every bin neurons tend to fire jointly. A CC close to zero indicates that the precise spike time of one neuron does not have strong implications for the spike times of other neurons in the same bin. The CC generally tends to be smaller for small bin sizes, since the number of spikes per bin is small in this case, and has been criticized for this reason (see [27] for a review). Therefore, we also compute the coefficient of variation (CV) of the interspike intervals of all spike events in each population. The CV is a measure of the irregularity of the interspike intervals and does not suffer from the drawbacks of the CC. It equals 1 for Poisson spiking and is greater than 1 for correlated spike times. Note that the interspike intervals are not taken between spikes of the same neuron, but between any neurons of the network (see methods for the reasons). We observed that the CV is slightly higher for the homogeneous network than in the heterogeneous case. In addition, the CV reacts much more strongly to flawed Poisson input in the homogeneous case (Fig. 2 c–d). We also observed that this increase was not due to a single parameter, but was caused by the interplay of many different sources of variance (Fig. 2 c). Moreover, different sources of variance add up in a non-linear way.
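The two synchrony measures can be sketched as follows; this is a minimal illustration of the described quantities (function names are ours), with the CV taken over the pooled interspike intervals of the whole population and the CC taken as the mean pairwise correlation of binned spike counts.

```python
import numpy as np

def population_cv(spike_times):
    """CV of interspike intervals of the pooled population spike train,
    i.e. intervals between consecutive spikes of *any* neuron.
    Approximately 1 for Poisson spiking, > 1 for synchronized spiking."""
    pooled = np.sort(np.concatenate(spike_times))
    isi = np.diff(pooled)
    return isi.std() / isi.mean()

def mean_cross_correlation(spike_times, duration_s, bin_s):
    """Mean pairwise Pearson correlation of binned spike counts
    (the binned cross-correlation measure, cf. [13])."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    counts = np.array([np.histogram(t, bins=edges)[0] for t in spike_times])
    c = np.corrcoef(counts)
    n = len(spike_times)
    return (c.sum() - np.trace(c)) / (n * (n - 1))  # mean off-diagonal entry
```

For independent Poisson trains, population_cv is close to 1 and mean_cross_correlation close to 0; synchronized flanks push both upward.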
For the CV, variance in the inhibitory neurons is especially important, since synchronous inhibition can drastically diminish the output of the population. The picture became even more distinctive when we let the signal propagate along a feed-forward network of several such populations, as depicted in Figure 4. It is known that synchrony tends to increase along such a feed-forward network (see the section on related work). We studied a sequence of eight neuronal populations, with the excitatory cells of each population projecting to randomly chosen neurons in the next population (see methods). In this way we could investigate how correlations evolve when a signal is propagated through several populations. While in the homogeneous network the CV increased as the signal propagated, the heterogeneous network remained close to Poisson (CV approximately one) even in subsequent populations (Fig. 2e). Fig. 2f shows the evolution of the cross-correlation over several populations in a flawed Poisson setup ( = 0.2). The cross-correlations of the homogeneous network (red) were larger than those of the input (green) and increased from population to population. The heterogeneous network (blue), on the other hand, decorrelated the small disturbances in the input and then remained close to Poisson. Note that this implies that in the heterogeneous case all eight populations behaved very similarly, while the increasing synchronization in the homogeneous network led to significant changes in activity between populations (data not shown). At first sight it may seem that such a decorrelating network would perform poorly in processing input flanks like the ones appearing in gamma oscillations. However, recent work on purely excitatory networks has shown that varying spike thresholds can improve sensitivity to the input [28]. Our simulations show that this effect persists in decorrelating networks with balanced excitation and inhibition.
When we tested the reaction of the networks to input flanks, the heterogeneous networks not only showed a stronger response (Fig. 3 a–c), but also reacted faster (Fig. 3d). Moreover, the heterogeneous network was activated by fewer spikes, which makes it more energy efficient. Note that flanks in the input like those in Fig. 3 (which may be seen as carriers of information), as well as the synchronization in the flawed Poisson inputs, are characterized by a high level of synchrony. The difference lies in the time scale: while for flawed Poisson input the spikes are synchronized up to a few milliseconds, the spikes of the input flanks in Fig. 3 are scattered over 10–30 ms. Moreover, as the synchronization within Poisson input arises spontaneously and randomly [18], it does not carry meaningful information. The symmetry-breaking properties of heterogeneous networks make it possible to distinguish between these two cases: they desynchronize the spontaneous synchronization and at the same time increase the effect of flanks as carriers of information.

Related Work

Although a lot of theoretical work has been invested in understanding neural signal propagation [10], [17]–[20], [29]–[34], no simulation could explain the decorrelation observed in vivo [13]. Most simulations reported increasing correlations [14]–[20], and those that found propagation modes of stable or decreasing correlation needed to incorporate assumptions incompatible with the experimental setup, like a high level of extrinsic noise [21], simultaneous convergence of multiple signals [20], [22], or unrealistically strong feed-forward synapses, up to -fold stronger than lateral synapses [19]. For other types of recurrent physical networks, a bit more is known. In particular, synchronization effects have been studied for networks of coupled oscillators [8], [35]–[38], which have sometimes been interpreted as networks of neuronal ensembles, and for networks of neurons whose membrane potentials are directly coupled to each other [39]. Hansel and Mato studied synchronization in networks of rate-based approximations to neurons [40]. Networks of spiking neurons have also been investigated, for purely excitatory neurons [41], [42] and for purely inhibitory neurons receiving constant input currents [42], [43]. Closest to our work are the embedded synfire chains considered in [44] and the Dale networks studied in [45]. Both are networks of spiking neurons, similar to the present paper. However, as both groups model the synaptic currents as delta peaks and use uniform synaptic delays for all connections, the impact of synaptic time dynamics on synchronization was not investigated in these studies. For the various networks, several properties have been shown to desynchronize the network dynamics in simulations, including noise [39], [41], large system size [36], [46], and heterogeneity of synaptic strengths [8], [35], [40], [44], of connectivity [44], [45], and of other biophysical parameters [8], [37], [38], [42], [43].
Recently, Mejias and Longtin [28] studied the effect of varying spike thresholds on synchrony in purely excitatory networks. They found that higher variance leads to stronger synchronization, an effect opposite to the one we observe. One important difference in our setup is that we use a balanced system of inhibition and excitation, cf. Figure 2 a. Mejias and Longtin observe that increasing the variance in spike thresholds also increases the output rates, and for single neurons it is well known [47] that synchronization rises with the output rate. In our balanced system the main effect of the variances is not a change of the output rate but a more subtle reaction of the population to varying input strengths, which in turn results in more asynchronous behaviour. In behaving animals and humans, the activity of clusters of neurons is oscillatory with frequencies of Hz [48]–[51], with most excitatory neurons firing highly irregularly [52], named the synchronous irregular (SI) state in [47]. Many experiments also reported strong spike count correlations at the single-cell level, for example of pyramidal cells in V1 with similar receptive fields [12], [53], [54]. Recently, Ecker et al. [55], using permanently implanted tetrodes, reported in contrast to these results that the correlations are in fact negligibly low when a high temporal resolution ( ms) is applied. They reasoned that previous, contradictory findings were an artifact of measurement [55] or analysis techniques [56], or were due to exceptionally high and polysynaptic input from the LGN [53]. In light of their findings, Ecker et al. speculated about an active decorrelation process in the brain. Nevertheless, Cohen and Kohn [27] have in turn challenged the measurements and the interpretation of Ecker et al., so that a conclusive bottom line cannot yet be drawn. Our experiments may be viewed as support for the speculations in [55]. Our results can be explained as follows.
In accordance with the law of large numbers, the variance in the parameters plays only a negligible role if we study simple input-output systems without complicated dynamics. For the considered input ranges, our system is of this type, despite the recurrent connections. In particular, the input-response curve does not change much (Fig. 2a). However, variations in the neuronal parameters do have a symmetry-breaking effect that tremendously influences the local reactivity to changes in the input, cf. Fig. 2 c–d. Concretely, in a homogeneous network interneurons tend to react groupwise, thus easily over- or underreacting to pyramidal activity. When each interneuron has different integration properties, the interneurons can counterbalance the pyramidal activity more accurately. And as it is well known that the functionality of the neural system crucially depends on a careful and balanced interplay of excitation and inhibition [11], [12], such symmetry-breaking effects make the system react in a more subtle and balanced way than a homogeneous setup does.

Conclusion

While the benefits of high variance are generally accepted in terms of the biodiversity of ecological systems [57], the potential benefits for neural signal processing are still largely unexplored. We hope that a systematic exploration will be as fruitful as the study of noise in the field of stochastic resonance. In this paper we have taken a first step by showing that heterogeneity can enhance the speed, responsiveness, and, counterintuitively, robustness of networks of spiking neurons. Our simulations show that various kinds of variance, from variances in neuronal parameters to the unreliability of synapses, contribute to these effects. Quantifying the effects of the various parameters is hard, as the contributions do not seem to add up linearly, but depend on each other. We leave a more thorough study of these interdependencies to future work.

Methods

1 Neuron Model

1.1 Leaky Integrate and Fire Dynamics

All model neurons in our simulations are conductance-based leaky integrate-and-fire (LIF) neurons [58], [59]; Gerstner gives a thorough overview of LIF neurons [60]. All simulations were implemented within the NEST framework [61]. The dynamics of the conductance-based LIF model are governed by the following differential equation:

    C_m dV(t)/dt = -(C_m / tau_m) (V(t) - E_rest) + I_syn(t),

where V is the membrane voltage, E_rest is the resting potential, tau_m is the membrane time constant, and C_m is the capacitance of the neuron's membrane. The post-synaptic current (PSC) I_syn is determined by the time-dependent voltage and the time-dependent membrane conductances,

    I_syn(t) = g_ex(t) (E_ex - V(t)) + g_inh(t) (E_inh - V(t)),

where E_ex and E_inh are the reversal potentials of excitatory ions and inhibitory (potassium) ions, respectively. We did not estimate conductances and capacitances separately, but only their quotients (cf. section 2.5). If synaptic input raises the membrane potential above the threshold potential V_thresh, the cell elicits an action potential (spike), and all neurons the cell projects to receive conductance changes that express excitatory postsynaptic currents (EPSCs) if the projecting cell is excitatory, or inhibitory postsynaptic currents (IPSCs) if the projecting cell is an interneuron (see also below). After the generation of such a spike event, the neuron undergoes an absolute refractory period of milliseconds (ms) in which it is incapable of generating further spikes. At the end of the absolute refractory period the cell's membrane potential is reset to V_reset. The conductance induced by the excitatory synapses is given by

    g_ex(t) = C_m * sum_i w_i * sum_{t_s} eps_ex(t - t_s),

where every presynaptic spike event t_s contributes a response eps_ex given by Equation (5) below, and w_i is the dimensionless strength of the i-th connection, defined as the integrated conductance change induced at the soma divided by its capacitance C_m. The conductance g_inh is given analogously by

    g_inh(t) = C_m * sum_{i=1}^{N_inh} w_i * sum_{t_s} eps_inh(t - t_s),

where N_inh is the number of inhibitory synapses, and eps_inh is given by Equation (6) below.
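As a minimal sketch of these membrane dynamics (not the NEST implementation used in the paper), the subthreshold voltage can be integrated with a forward-Euler scheme; all numerical parameter values below are generic placeholders, not the paper's values, and the conductances are passed as conductance/capacitance quotients in keeping with the remark above.

```python
import numpy as np

def simulate_lif(g_ex, g_inh, dt=1e-4,
                 E_rest=-0.070, E_ex=0.0, E_inh=-0.080,
                 V_thresh=-0.050, V_reset=-0.060,
                 tau_m=0.015, t_ref=0.002):
    """Forward-Euler integration of the conductance-based LIF equation
    (after dividing by C_m):
        dV/dt = -(V - E_rest)/tau_m + g_ex(E_ex - V) + g_inh(E_inh - V),
    where g_ex, g_inh are arrays of conductance/capacitance quotients (1/s).
    Threshold crossing emits a spike, resets V, and enforces an absolute
    refractory period. Returns (spike times in s, voltage trace)."""
    n = len(g_ex)
    V = np.full(n, E_rest)
    spikes, ref_until = [], -1.0
    for i in range(1, n):
        t = i * dt
        if t < ref_until:
            V[i] = V_reset                       # absolute refractory period
            continue
        dV = (-(V[i-1] - E_rest) / tau_m
              + g_ex[i-1] * (E_ex - V[i-1])
              + g_inh[i-1] * (E_inh - V[i-1]))
        V[i] = V[i-1] + dt * dV
        if V[i] >= V_thresh:                     # spike: record, reset, refract
            spikes.append(t)
            ref_until = t + t_ref
            V[i] = V_reset
    return np.array(spikes), V
```

With zero input the voltage stays at the resting potential; with sufficiently strong constant excitatory drive the neuron fires regularly.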
The response curve eps_ex consists of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartate (NMDA) components for excitatory synapses,

    eps_ex(t) = (1 - gamma) eps_AMPA(t) + gamma eps_NMDA(t),

where gamma determines the ratio between AMPA- and NMDA-mediated conductance changes. For inhibitory synapses the response curve is given by gamma-aminobutyric acid (GABA) alone, without an NMDA component. AMPA- and GABA-triggered conductance changes are modeled by (normalized) single exponentials [62],

    eps(t) = (1/tau) exp(-(t - Delta)/tau)   for t >= Delta (and 0 otherwise),

with typical decay time constants of ms at pyramidal cells [62]–[64] and ms at interneurons [62]. Note that the integral over eps is normalized to 1. The term Delta accounts for the axonal, synaptic, and dendritic delay of the synaptic connection. NMDA-triggered currents are modeled by double exponentials [65],

    eps_NMDA(t) = K (exp(-(t - Delta)/tau_decay) - exp(-(t - Delta)/tau_rise))   for t >= Delta,

with a rise time constant of ms and a decay time constant of ms [64], [65]. The constant

    K = 1 / (tau_decay - tau_rise)

normalizes the integral over eps_NMDA to 1.
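The single- and double-exponential response curves can be written down directly. The kernels below are a sketch with placeholder time constants (the paper's actual values are not reproduced here), normalized so that each integrates to 1 as stated above.

```python
import numpy as np

def eps_single_exp(t, tau, delay=0.0):
    """Normalized single-exponential response (AMPA/GABA): the kernel
    (1/tau) * exp(-(t - delay)/tau) for t >= delay, integrating to 1."""
    s = t - delay
    return np.where(s >= 0.0, np.exp(-s / tau) / tau, 0.0)

def eps_double_exp(t, tau_rise, tau_decay, delay=0.0):
    """Normalized double-exponential response (NMDA): difference of two
    exponentials scaled by K = 1/(tau_decay - tau_rise) so that the
    integral over the kernel equals 1."""
    s = t - delay
    k = 1.0 / (tau_decay - tau_rise)
    return np.where(s >= 0.0,
                    k * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)),
                    0.0)
```

A quick numerical integration confirms that both kernels have unit area, independent of the chosen time constants and delay.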

2 Parameters

Table 1 and Table 2 provide the values for the parameters of neurons and synapses that we used in our simulations. Each parameter was drawn uniformly at random from an interval. The mean value and the upper and lower bound of the interval are given in the tables. Note that the standard deviation of a uniform distribution on an interval [a, b] is (b - a)/sqrt(12).
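As a quick check of this relation, one can draw a parameter uniformly from an interval and compare the empirical standard deviation with (b - a)/sqrt(12); the interval bounds below (a resting potential in mV) are illustrative only, not values from the tables.

```python
import numpy as np

# Heterogeneous parameter drawn uniformly from [a, b]; the analytic
# standard deviation of such a distribution is (b - a)/sqrt(12).
rng = np.random.default_rng(0)
a, b = -75.0, -65.0                      # illustrative interval bounds (mV)
samples = rng.uniform(a, b, size=1_000_000)
analytic_sd = (b - a) / np.sqrt(12.0)    # = 10/sqrt(12), about 2.89 mV
assert abs(samples.std() - analytic_sd) < 0.01
```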
Table 1

Neuronal parameters.

[The parameter names and numerical values of this table were lost in extraction; the table gives, for excitatory and inhibitory cells separately, the mean of each neuronal parameter together with its unit (membrane potentials in mV, time constants in ms).]
Table 2

Synaptic parameters. For a specification of and the amplitude see section 2.5.

[The numerical values of this table were lost in extraction; the table gives, for each connection type (AP, AI, PP, PI, IP, II), the PSP amplitude (in mV) and a synaptic time constant (in ms).]

A: afferent pyramidal cells (neuron from previous population in the propagation chain, also used for connections between external input and first population). P: pyramidal cells. I: interneurons.

In the subsequent sections we provide an overview of experimental data on cortical pyramidal cells and cortical basket cells that justify the choice of these values. All the collected animal data was measured in cats, ferrets, and rodents.

2.1 The Resting Potential

There is a long list of publications containing in vitro resting potential data for various animals [62], [66]–[81]. The in vitro spectrum of measured resting potentials in pyramidal cells ranges from mV to mV. We chose mV for pyramidal cells [68], [80]–[84] and mV for interneurons [73]. This is also consistent with in vivo measurements [85], although there is again a wide range, from mV [86] to mV [87]. For inhibitory cells, resting potentials measured in vitro range from to mV [62], [68], [71], [74]–[78], [88]–[91], with standard deviations from mV [77] to mV [88]. Measurements in vivo show slightly less negative resting potentials of – mV [92], possibly due to ongoing background activity. Unfortunately, the sample size () in [92] was too small to give reliable information on the standard deviation.

2.2 The Threshold Potential

In vitro data for the threshold potential of individual excitatory or inhibitory neurons can be found in [72], [73], [79]–[81], [93]. Detailed in vivo data in rat prefrontal cortex can be found in Degenetais et al. [85]. For pyramidal cells they find parameter ranges from mV to mV. In vitro data for basket cells can be found in [73] ( mV) or [93] ( mV). Since varying the threshold potential has essentially the same effect as varying the resting potential, we restricted variations to the resting potential.

2.3 Membrane Time Constant (Leakage)

A vast amount of data on membrane time constants, measured at various sites in different animals, can be found for excitatory cells [62], [66]–[68], [70], [71], [76], [78], [80], [81], [94] and for inhibitory cells [62], [68], [71], [75], [76], [78], [89]–[91], [94], [95]. For pyramidal cells the cited studies report values ranging from ms [70] to ms [62]. A majority of this data supports a membrane time constant of about to ms: [78], [94] in monkey, [66], [68] in guinea pig, and [76], [94] in rats. For inhibitory cells, values range from ms [91] to ms [75]. Most studies support mean values of ms [62], [68], [71], [76], [78], [89], [90], [94] and standard deviations of ms [62], [75], [76], [78], [89]–[91], [94], [95].

2.4 Refractory Period

Measurements of the absolute refractory period are rare. Values of ms have been found in vivo in rat hippocampal pyramidal cells [96], and fast-spiking neurons tend to have refractory periods similar to those of regular spiking neurons [97]. Note that the terminology of absolute and relative refractory period in the second reference deviates from our terminology.

2.5 Synaptic strength

Rather than estimating the synaptic capacitance and conductance separately, we fitted the total integral over their quotient , which we call the synaptic efficacy . Intuitively, this parameter corresponds to the weight of the synapse. Table 2 shows the amplitude of the EPSP change that a single incoming spike evokes (for mean values of the other synaptic and neuronal parameters) if the voltage is initially at the resting potential. We did not derive the synaptic efficacies from the literature; instead we chose them such that inhibition in our populations balanced excitation (the gain in activity is close to for a wide range of inputs). This was important for the homogeneous network to function properly, but not for the inhomogeneous one, which also showed a large stable operating range if the gain was positive (data not shown). Although we did not directly fit the efficacies to the literature, all values used are within the reported bounds. Depending on the nature of the synaptic currents, the efficacy will be denoted with appropriate suffixes like , , , or , and and for total excitatory and inhibitory currents.
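The notion of efficacy as a total integral can be sketched as follows (an assumption for illustration: the postsynaptic current is taken to be a single decaying exponential, which need not be the paper's exact kernel). For such a kernel the integral, and hence the efficacy, equals the peak current times the decay time constant.

```python
import numpy as np

def efficacy(I_peak, tau, dt=0.001, T=200.0):
    """Numerically integrate an assumed exponential PSC
    I(t) = I_peak * exp(-t / tau) over [0, T] (times in ms)."""
    t = np.arange(0.0, T, dt)
    return np.sum(I_peak * np.exp(-t / tau)) * dt

# For an exponential kernel the analytic integral is I_peak * tau,
# so the numerical value is close to 1.0 * 5.0 = 5.0:
eff = efficacy(I_peak=1.0, tau=5.0)
```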

2.6 AMPA mediated PSPs and PSCs

The AMPA time constant that we use is rarely estimated directly; more often the half-width of the EPSP is reported. Results for pyramidal cells are ms [63], ms [98], ms [62], ms [99], and ms [99]; for interneurons ms [62] and ms [99]. In general, the time constant and its variance are higher if only NMDA is blocked, but not other glutamate receptors such as kainate receptors [99]. We used the data from Karayannis et al. [62] since they measured both pyramidal cells and interneurons. Furthermore, the corresponding rise times and half-widths of AMPA mediated PSPs in [62] match those of other studies [76], [100], except that those studies find higher variances.
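A rough conversion between the two quantities discussed above can be sketched as follows (a simplification for illustration: it treats the EPSP as dominated by a single exponential decay and neglects the finite rise time, so it underestimates the true half-width):

```python
import math

def halfwidth_from_tau(tau):
    """Time (ms) for a pure exponential exp(-t / tau) to fall to
    half its peak value; a crude proxy for the EPSP half-width."""
    return tau * math.log(2.0)

# e.g. a 5 ms decay constant corresponds to a half-decay time of ~3.5 ms
hw = halfwidth_from_tau(5.0)
```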

2.7 NMDA mediated PSCs

We chose the values of to match rise times between ms [64] and ms [65]. Decay rates are reported in the range between ms [64] and ms [65].
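A common way to obtain a PSC with separate rise and decay behavior, which may serve as a sketch of the NMDA kinetics described above (an assumption here, not necessarily the paper's exact kernel; the time constants below are illustrative placeholders), is a difference of two exponentials:

```python
import numpy as np

def nmda_psc(t, tau_r, tau_d):
    """Difference-of-exponentials PSC shape: fast rise governed by
    tau_r, slow decay governed by tau_d (times in ms, tau_d > tau_r)."""
    return np.exp(-t / tau_d) - np.exp(-t / tau_r)

t = np.arange(0.0, 200.0, 0.1)
I = nmda_psc(t, tau_r=5.0, tau_d=100.0)
t_peak = t[np.argmax(I)]   # the peak time grows with tau_r
```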

2.8 AMPA/NMDA Ratio

We used data about the AMPA/NMDA ratio for synapses onto pyramidal neurons in [65], [101], [102], and in a review by Thomson et al. [103] for synapses onto inhibitory neurons. For pyramidal neurons ranges from [101] to [65]. For excitatory connections onto inhibitory neurons the NMDA component seems to be much smaller or even absent [103]. Therefore we did not incorporate NMDA components in the latter case.

2.9 GABA mediated PSCs

As with the EPSC, there is only sparse data on the decay time constant of inhibitory postsynaptic currents (IPSCs). For pyramidal cells, Wang et al. [75] find ms; for fast spiking cells, Tamas et al. [88] report ms. This is consistent with the finding of Thomson et al. [104] that the IPSP rise time is about twice as large in pyramidal cells as in interneurons.

2.10 Synaptic Sites and Reliability

There are typically synaptic contacts between any two pyramidal cells in layer 5 [101], [105], [106], each of them having potentially more than one vesicle release site. The transmission probability of the total dendritic tree within one layer is [107], while it is for projections between layers. Unfortunately, estimates on quantal count and quantal release probability are highly contradictory (see [108] for an overview). Therefore, we decided to assume vesicle release sites [106] with a resting release probability of for all glutamatergic synapses, regardless of the innervating cell type. For GABAergic synapses, we assumed vesicle release sites with a resting release probability of to account for the low transmission failure rate of basket cells [71].
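Under the assumption of independent release sites, the per-site release probability and the number of sites combine into the overall transmission probability as follows (a minimal sketch; the site counts and probabilities below are placeholders, not the paper's fitted values):

```python
def transmission_probability(n_sites, p_release):
    """Probability that at least one of n_sites independent vesicle
    release sites releases, i.e. that the synapse transmits at all:
    1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_release) ** n_sites

# e.g. 5 independent sites at p = 0.3 transmit with probability ~0.83,
# so redundancy across sites can mask high per-site failure rates
p = transmission_probability(5, 0.3)
```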

2.11 Latencies

The time constant accounts for the time lag between the generation of an action potential in a presynaptic neuron and the arrival of the EPSC at the postsynaptic soma. Hence it includes axonal, synaptic and dendritic delays. For close-by neurons (), connections between pyramidal cells have a delay between ms and ms [82], [109], [110]. Connections to and from basket cells have been reported to be faster: between ms and ms from pyramidal cells to basket cells [74], [75], [109]; between ms and ms from basket cells to pyramidal cells [75], [77], [90], [94]; and between ms and ms from basket cells to basket cells (no standard deviation given) [109]. In all the latter cases, a majority of measurements supports a value of roughly ms [75], [77], [94], [109].

3 Feed-Forward Network

3.1 Populations

The network consisted of a feed-forward chain of up to populations. Each population consisted of pyramidal cells and interneurons. Within each population, each neuron projected to each other neuron with probability [111], as depicted in Figure 4. This probability was the same for pyramidal cells and interneurons. Moreover, each pyramidal cell in population projected to each (excitatory or inhibitory) neuron in population with probability . Interneurons did not project to other populations, as cortical basket cells rarely project into other layers [112].
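The within-population wiring rule can be sketched as an independent Bernoulli draw per ordered neuron pair (a sketch only; the connection probability and population size below are placeholders, since the paper takes its value from [111]):

```python
import numpy as np

def random_connectivity(n_neurons, p_conn, rng=None):
    """Boolean connectivity matrix: C[i, j] is True iff neuron i
    projects to neuron j; each ordered pair is connected independently
    with probability p_conn, with no self-connections."""
    if rng is None:
        rng = np.random.default_rng(0)
    C = rng.random((n_neurons, n_neurons)) < p_conn
    np.fill_diagonal(C, False)
    return C

C = random_connectivity(100, 0.1)
# the expected number of connections is p_conn * n * (n - 1)
```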

3.2 Input

There are 250 excitatory input neurons giving input to the network. For Figure 2 (f,g) the input goes only to the first population. As with the connections between populations, each input neuron projects to each neuron in the next population with probability . For Figure 2, each input neuron emits a Poisson spike train of fixed rate. For Figure 3, a random subset of the input neurons each emits a spike at a randomly chosen time in some small, predefined interval. We speak of a “flank” of input spikes in this case.
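The two input regimes described above can be sketched as follows (illustrative only; the rates, interval, and neuron counts are placeholders, since the paper's actual values are given in the figures):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_train(rate, T, rng):
    """Spike times of a homogeneous Poisson process of the given
    rate (Hz) over [0, T] seconds."""
    n = rng.poisson(rate * T)
    return np.sort(rng.uniform(0.0, T, size=n))

def flank(n_neurons, t0, width, rng):
    """A 'flank' of input: each of n_neurons fires exactly once, at a
    random time inside the small interval [t0, t0 + width]."""
    return t0 + rng.uniform(0.0, width, size=n_neurons)

spikes = poisson_train(rate=20.0, T=1.0, rng=rng)    # ~20 spikes on average
times = flank(n_neurons=50, t0=0.1, width=0.005, rng=rng)
```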

4 Measures of synchrony

The coefficient of variation (CV) of the spike time intervals was computed as follows. Let T_i be the time between the i-th and the (i+1)-st pyramidal spike (not necessarily of the same neuron). Then we computed the mean μ, the standard deviation σ, and the coefficient of variation as

CV = σ / μ.

Many experimental and theoretical papers consider the CV of the spike time intervals of a single neuron (between two spikes of this specific neuron), and possibly average this value over many neurons. This individual CV serves other purposes and should not be confused with the population-CV that we compute. In particular, the individual CVs cannot serve as measures of synchrony: consider a perfectly synchronous system of neurons which all spike at exactly the same times, but these times are random. Then each individual neuron will have a CV close to . On the contrary, the population-CV of such a system will be extremely high: while most spike time intervals are close to , there are some (comparably) extremely long time intervals in which the complete system is silent.

The cross-correlation (CC) of the binned spike times was computed as follows. We repeated the experiment times. Then we binned the time of the experiments with some bin size (for the value of , see figures). For each neuron i and each trial k we counted the number of spikes n_i^k(t) of neuron i that occurred in the t-th bin of trial k, and we computed the trial average n̄_i(t) of this count. Then the cross-correlation between two neurons i and j in the t-th bin was calculated as

CC_t(i, j) = ⟨(n_i^k(t) − n̄_i(t)) · (n_j^k(t) − n̄_j(t))⟩_k / (σ_i(t) σ_j(t)),

where σ_i(t) and σ_j(t) are the standard deviations of n_i^k(t) and n_j^k(t) over trials, respectively. Finally, the cross-correlation CC was computed as the mean of CC_t(i, j), taken over all neuron pairs i, j and all bins t.
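The population-CV above can be sketched directly, together with the thought experiment of perfectly synchronous but randomly timed volleys (a sketch under assumed toy data; neuron counts and times are illustrative):

```python
import numpy as np

def population_cv(spike_times):
    """Population-CV: pool ALL spike times across neurons, take the
    inter-spike intervals of the pooled train, and return sigma / mu."""
    t = np.sort(np.concatenate(spike_times))
    isi = np.diff(t)
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)
# 10 neurons firing in perfect synchrony at 20 common random times: most
# pooled intervals are exactly 0 (within a volley), a few are long (between
# volleys), so the population-CV is large even though each single neuron's
# own spike train looks no different from the asynchronous case.
volleys = np.sort(rng.uniform(0.0, 1.0, size=20))
sync = [volleys.copy() for _ in range(10)]
async_ = [np.sort(rng.uniform(0.0, 1.0, size=20)) for _ in range(10)]
# population_cv(sync) is much larger than population_cv(async_)
```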
References (100 in total)

1.  Efficacy of thalamocortical and intracortical synaptic connections: quanta, innervation, and reliability.

Authors:  Z Gil; B W Connors; Y Amitai
Journal:  Neuron       Date:  1999-06       Impact factor: 17.173

2.  Pyramidal cell communication within local networks in layer 2/3 of rat neocortex.

Authors:  Carl Holmgren; Tibor Harkany; Björn Svennenfors; Yuri Zilberter
Journal:  J Physiol       Date:  2003-06-17       Impact factor: 5.182

3.  Synaptic efficacy during repetitive activation of excitatory inputs in primate dorsolateral prefrontal cortex.

Authors:  Guillermo González-Burgos; Leonid S Krimer; Nathaniel N Urban; Germán Barrionuevo; David A Lewis
Journal:  Cereb Cortex       Date:  2004-03-28       Impact factor: 5.357

Review 4.  Neuronal computations with stochastic network states.

Authors:  Alain Destexhe; Diego Contreras
Journal:  Science       Date:  2006-10-06       Impact factor: 47.728

5.  Topology and dynamics of the canonical circuit of cat V1.

Authors:  T Binzegger; R J Douglas; K A C Martin
Journal:  Neural Netw       Date:  2009-07-18

6.  Synchrony in excitatory neural networks.

Authors:  D Hansel; G Mato; C Meunier
Journal:  Neural Comput       Date:  1995-03       Impact factor: 2.026

7.  Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex.

Authors:  D A McCormick; B W Connors; J W Lighthall; D A Prince
Journal:  J Neurophysiol       Date:  1985-10       Impact factor: 2.714

8.  Anatomical, physiological, molecular and circuit properties of nest basket cells in the developing somatosensory cortex.

Authors:  Yun Wang; Anirudh Gupta; Maria Toledo-Rodriguez; Cai Zhi Wu; Henry Markram
Journal:  Cereb Cortex       Date:  2002-04       Impact factor: 5.357

9.  Dopamine modulation of perisomatic and peridendritic inhibition in prefrontal cortex.

Authors:  Wen-Jun Gao; Yun Wang; Patricia S Goldman-Rakic
Journal:  J Neurosci       Date:  2003-03-01       Impact factor: 6.167

10.  Spike transmission and synchrony detection in networks of GABAergic interneurons.

Authors:  M Galarreta; S Hestrin
Journal:  Science       Date:  2001-06-22       Impact factor: 47.728

