A Sanzeni1,2,3, M H Histed3, N Brunel2,4. 1. Center for Theoretical Neuroscience, Columbia University, New York, New York, USA. 2. Department of Neurobiology, Duke University, Durham, North Carolina, USA. 3. National Institute of Mental Health Intramural Program, NIH, Bethesda, Maryland, USA. 4. Department of Physics, Duke University, Durham, North Carolina, USA.
Abstract
Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e., if the mean number of synapses per neuron K is large and synaptic efficacy is of the order of 1/√K. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of the order of 1/log(K). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine-tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as input increases.
Each neuron in the cortex receives inputs from hundreds to thousands of presynaptic neurons. If these inputs summed to produce a large net current, the central limit theorem implies that fluctuations should be small compared to the mean, leading to regular firing, as observed in vitro under constant current injection [1,2]. Cortical activity, however, is highly irregular, with a coefficient of variation of interspike intervals (CV of ISI) close to one [3,4]. To explain the observed irregularity, it has been proposed that neural networks operate in a balanced state, where strong feedforward and recurrent excitatory inputs are canceled by recurrent inhibition and firing is driven by fluctuations [5,6]. At the single-neuron level, for this state to emerge, input currents must satisfy two constraints. First, excitatory and inhibitory currents must be fine-tuned to produce an average input below threshold. Specifically, if K and J represent the average number of input connections per neuron and the synaptic efficacy, respectively, the relative difference between excitatory and inhibitory presynaptic inputs must be of the order of 1/(KJ). Second, input fluctuations should be large enough to drive firing. It has been shown that the balanced state emerges dynamically (without fine-tuning) in randomly connected networks of binary units [7,8] and in networks of current-based spiking neurons [9,10], provided that coupling is strong and recurrent inhibition is powerful enough to counterbalance instabilities due to recurrent excitation. However, these results are all derived assuming that the firing of a presynaptic neuron produces a fixed amount of synaptic current, hence neglecting the dependence of synaptic current on the membrane potential, a key aspect of neuronal biophysics.
In real synapses, synaptic inputs are mediated by changes in conductance, due to the opening of synaptic receptor channels on the membrane, and synaptic currents are proportional to the product of the synaptic conductance and a driving force which depends on the membrane potential. Models that incorporate this description are referred to as having “conductance-based synapses”. Large synaptic conductances have been shown to have major effects on the stationary [11] and dynamical [12] responses of single cells and form the basis of the “high-conductance state” [13–19] that has been argued to describe in vivo data well [20–22] (but see Ref. [23] and Sec. IX). At the network level, conductance modulation plays a role in controlling signal propagation [24], input summation [25], interactions between traveling waves [26], and firing statistics [27]. However, most of the previously mentioned studies rely exclusively on numerical simulations, and, in spite of a few attempts at analytical descriptions of networks of conductance-based neurons [17,28–32], an understanding of the behavior of such networks when coupling is strong is still lacking. Here, we investigate networks of strongly coupled conductance-based neurons. We find that, for synaptic efficacies of the order of 1/√K, fluctuations are too weak to sustain firing, questioning the relevance of the balanced state idea to cortical dynamics. Our analysis, on the other hand, shows that stronger synapses [of the order of 1/log(K)] generate irregular firing when coupling is strong. We characterize the properties of networks with such a scaling, showing that they match properties observed in the cortex, and discuss constraints induced by synaptic time constants. The model generates qualitatively different predictions compared to the current-based model, which could be tested experimentally.
MODELS OF SINGLE-NEURON AND NETWORK DYNAMICS
Membrane potential dynamics
We study the dynamics of networks of leaky integrate-and-fire (LIF) neurons with conductance-based synaptic inputs. The membrane potential V_j of the jth neuron in the network follows the equation
where C is the neuronal capacitance; E_L, E_E, and E_I are the reversal potentials of the leak, excitatory, and inhibitory currents, respectively; while g_L, g_E, and g_I are the leak, excitatory, and inhibitory conductances, respectively. Assuming instantaneous synapses (the case of finite synaptic time constants is discussed in Sec. VIII), excitatory and inhibitory conductances are given by
In Eq. (2), τ is the single-neuron membrane time constant, the a_jm are dimensionless measures of the synaptic strength between neuron j and neuron m, and S_m(t) = Σ_k δ(t − t_m^k) represents the sum of all the spikes generated at times t_m^k by neuron m. Every time the membrane potential V_j reaches the firing threshold θ, the jth neuron emits a spike, and its membrane potential is set to a reset value V_r, where it stays for a refractory period τ_rp; after this time, the dynamics resumes, following Eq. (1). We use a_jm = a for all excitatory synapses and a_jm = ga for all inhibitory synapses. In the homogeneous case, each neuron receives synaptic inputs from K_E = K excitatory and K_I = γK inhibitory cells. In the network case, each neuron receives K_X = K additional excitatory inputs from an external population firing with Poisson statistics with rate ν_X. We use excitatory and inhibitory neurons with the same biophysical properties; hence, the above assumptions imply that the firing rates of excitatory and inhibitory neurons are equal: ν_E = ν_I = ν. Models taking into account the biophysical diversity between the excitatory and inhibitory populations are discussed in Appendix D. When heterogeneity is taken into account, the above-defined values of K represent the means of Gaussian distributions. We use the following single-neuron parameters: τ_rp = 2 ms, θ = −55 mV, V_r = −65 mV, E_E = 0 mV, E_L = −75 mV, E_I = −80 mV, and τ = 20 ms. We explore various scalings of a with K and, in all cases, assume that a ≪ 1. When a ≪ 1, an incoming spike produced by an excitatory presynaptic neuron produces a jump in the membrane potential of amplitude a(E_E − V), where V is the voltage just before spike arrival. In the cortex, V ~ −60 mV and the average amplitudes of postsynaptic potentials are on the order of 0.5–1.0 mV [33–39]. Thus, we expect realistic values of a to be on the order of 0.01.
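The dynamics of Eqs. (1) and (2) can be simulated directly. The sketch below is a minimal Euler scheme for a single conductance-based LIF neuron driven by independent Poisson inputs with instantaneous synapses, using the single-neuron parameters listed above; the function name, the drive `nu_x`, and the integration step are illustrative choices, not part of the model definition.

```python
import numpy as np

def simulate_lif_conductance(a=0.01, K=1000, g=12.0, eta=1.8, gamma=0.25,
                             nu_x=5.0, T=2.0, dt=1e-4, seed=0):
    """Minimal conductance-based LIF with instantaneous synapses.

    Each excitatory spike jumps V by a*(E_E - V); each inhibitory spike
    by g*a*(E_I - V). Returns the firing rate and the CV of the ISI.
    """
    rng = np.random.default_rng(seed)
    # Single-neuron parameters from the text
    tau_L, tau_rp = 20e-3, 2e-3          # membrane and refractory times (s)
    theta, V_r = -55.0, -65.0            # threshold and reset (mV)
    E_L, E_E, E_I = -75.0, 0.0, -80.0    # reversal potentials (mV)

    rate_E = K * nu_x                    # total excitatory input rate (spk/s)
    rate_I = gamma * K * eta * nu_x      # total inhibitory input rate (spk/s)

    V, refr, n_spk, isis, last = E_L, 0.0, 0, [], None
    for i in range(int(T / dt)):
        if refr > 0:                     # absolute refractory period
            refr -= dt
            continue
        nE = rng.poisson(rate_E * dt)    # E spikes arriving in this bin
        nI = rng.poisson(rate_I * dt)    # I spikes arriving in this bin
        V += -(V - E_L) / tau_L * dt     # leak toward E_L
        V += nE * a * (E_E - V) + nI * g * a * (E_I - V)
        if V >= theta:
            n_spk += 1
            t = i * dt
            if last is not None:
                isis.append(t - last)
            last = t
            V, refr = V_r, tau_rp
    nu = n_spk / T
    cv = float(np.std(isis) / np.mean(isis)) if len(isis) > 1 else float("nan")
    return nu, cv
```

With inhibition-dominated inputs as here, the mean voltage sits well below threshold and spikes are rare and fluctuation-driven; the jump form of the updates also makes explicit why each excitatory spike moves V by at most a(E_E − V).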
Diffusion and effective time constant approximations
We assume that each cell receives projections from a large number of cells (K ≫ 1), neurons are sparsely connected and fire approximately as Poisson processes, each incoming spike provides a small change in conductance (a ≪ 1), and temporal correlations in synaptic inputs can be neglected. Under these assumptions, we can use the diffusion approximation and approximate the conductances as
where r_E and r_I are the firing rates of presynaptic E and I neurons, respectively, and ζ_E and ζ_I are independent Gaussian white noise terms with zero mean and unit variance density. In the single-neuron case, we take r_E = ν, r_I = ην, where η represents the ratio of I to E input rate. In the network case, r_E = ν_X + ν and r_I = ν, where ν_X is the external rate, while ν is the firing rate of excitatory and inhibitory neurons in the network, determined self-consistently (see below). We point out that, for some activity levels, the assumption of Poisson presynaptic firing made in the derivation of Eq. (3) breaks down, as neurons in the network show interspike intervals with CV significantly different from one [e.g., see Fig. 3(c)]. However, comparisons between mean field results and numerical simulations (see Appendix E) show that neglecting non-Poissonianity [as well as other contributions discussed above Eq. (3)] generates quantitative but not qualitative discrepancies, with a magnitude that decreases with coupling strength. Moreover, in Appendix B, we show that if a ≪ 1, the firing of neurons in the network matches that of a Poisson process with a refractory period and, hence, when ν ≪ 1/τ_rp, deviations from Poissonianity become negligible.
FIG. 3.
Response of networks of conductance-based neurons for large K. (a) Scaling relation defined by the self-consistency condition given by Eqs. (14) and (19) (black line); values of parameters used in (b)–(d) (colored dots). Constant scaling (a ~ K⁰, dotted line) and the scaling of the balanced state model (a ~ 1/√K, dashed line) are shown for comparison. (b),(c) Firing rate and CV of the ISI as a function of the external input, obtained from Eqs. (10) and (15) (colored lines), with the strong-coupling limit solution of Eqs. (20) and (16) (black line). (d) Probability distribution of the membrane potential obtained from Eq. (17). In (b)–(d), dotted and dashed lines represent quantities obtained with the scalings J ~ K⁰ and J ~ 1/√K, respectively, for the values of K and J indicated in (a) (black dots). Parameters: γ = 1/4 and g = 30.
Using the diffusion approximation, Eq. (1) reduces to
where ζ is a white noise term, with zero mean and unit variance density, while
In Eq. (4), τ is an effective membrane time constant, while μ and σ²(V) represent the average and the variance of the synaptic current generated by incoming spikes, respectively. The noise term in Eq. (4) can be decomposed into an additive and a multiplicative component. The latter has an effect on membrane voltage statistics that is of the same order as the contribution coming from synaptic shot noise [40], a factor which is neglected in deriving Eq. (3). Therefore, for a consistent analysis, we neglect the multiplicative component of the noise in the above derivation; this leads to an equation of the form of Eq. (4) with the substitution
This approach is termed the effective time constant approximation [40]. Note that the substitution of Eq. (6) greatly simplifies mathematical expressions, but it is not a necessary ingredient for the results presented in this paper. In fact, all our results can be obtained without having to resort to this approximation (see Appendixes A, B, and D).
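The effective time constant approximation can be made concrete in a few lines. In the sketch below, mean conductances are taken proportional to presynaptic rates as in Eq. (3), and μ and τ are computed as conductance-weighted averages; the expression used for the noise variance is one consistent choice under the approximation (evaluated at μ, per Eq. (6)), written here as an assumption rather than the paper's exact formula.

```python
def effective_params(a, K, g, gamma, r_E, r_I,
                     tau_L=20e-3, E_L=-75.0, E_E=0.0, E_I=-80.0):
    """Mean field quantities under the effective time constant
    approximation (sketch; the sigma2 expression is one assumed form).

    Mean conductances relative to leak: g_E/g_L = a*K*tau_L*r_E and
    g_I/g_L = g*a*gamma*K*tau_L*r_I.
    """
    gE = a * K * tau_L * r_E                 # excitatory conductance / g_L
    gI = g * a * gamma * K * tau_L * r_I     # inhibitory conductance / g_L
    g_tot = 1.0 + gE + gI                    # total conductance / g_L
    tau_eff = tau_L / g_tot                  # effective membrane time constant
    mu = (E_L + gE * E_E + gI * E_I) / g_tot # equilibrium voltage (mV)
    # Input variance density, evaluated at mu (assumed form):
    sigma2 = tau_L * (a**2 * K * r_E * (E_E - mu)**2
                      + (g * a)**2 * gamma * K * r_I * (E_I - mu)**2) / g_tot**2
    return mu, tau_eff, sigma2
```

Note how μ is a weighted average of the three reversal potentials, so it stays pinned between E_I and E_E however large K becomes, while τ shrinks as total conductance grows; this is the mechanism behind the behavior described in the following sections.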
Current-based model
The previous definitions and results translate directly to current-based models, with the only difference that the dependency of excitatory and inhibitory synaptic currents on the membrane potential is neglected (see Ref. [10] for more details). Therefore, Eq. (1) becomes
where
represent the excitatory and inhibitory input currents. Starting from Eq. (7), making assumptions analogous to those discussed above and using the diffusion approximation [10], the dynamics of current-based neurons is given by an equation of the form of Eq. (4) with
Note that, unlike what happens in conductance-based models, τ is a fixed parameter and does not depend on the network firing rate or external drive. Another difference between the current-based and conductance-based models is that in the latter, but not the former, σ depends on V; as discussed above, this difference is neglected in the main text, where we use the effective time constant approximation.
BEHAVIOR OF SINGLE-NEURON RESPONSE FOR LARGE K
We start our analysis by investigating the effects of synaptic conductance on single-neuron response. We consider a neuron receiving K (γK) excitatory (inhibitory) inputs, each with synaptic efficacy J (gJ), from cells firing with Poisson statistics with a rate
and analyze its membrane potential dynamics in the frameworks of current-based and conductance-based models. In both models, the membrane potential V follows a stochastic differential equation of the form of Eq. (4); differences emerge in the dependencies of τ, μ, and σ on the parameters characterizing the connectivity, K and J. In particular, in the current-based model, the different terms in Eq. (8) can be written as
where the prefactors are independent of J and K. In the conductance-based model, the efficacy of excitatory and inhibitory synapses depends on the membrane potential as J = a(E − V), where E is the corresponding synaptic reversal potential; the different terms in Eq. (4), under the assumption that Ka ≫ 1, become of the order of
Here, all these terms depend on parameters in a completely different way than in the current-based case. As we show below, these differences drastically modify how the neural response changes as K and J are varied and, hence, the size of J ensuring a finite response for a given value of K. The dynamics of a current-based neuron is shown in Fig. 1(a)(i), with parameters leading to irregular firing. Because of the chosen parameter values, the mean excitatory and inhibitory inputs approximately cancel each other, generating a subthreshold average input and fluctuation-driven spikes, which leads to irregular firing. If all parameters are kept fixed while K is increased (J ~ K⁰), the response changes drastically [Fig. 1(a)(ii)], since the mean input becomes much larger than threshold and firing becomes regular. To understand this effect, we analyze how the terms in Eq. (4) are modified as K increases. The evolution of the membrane potential in time is determined by two terms: a drift term −(V − μ)/τ, which drives the membrane potential toward its mean value μ, and a noise term, which leads to fluctuations around this mean value. Increasing K modifies the equilibrium value μ of the drift force and the input noise, which increase proportionally to KJ(1 − γgη) and KJ²(1 + γg²η), respectively [Figs. 1(b) and 1(c)]. This observation suggests that, to preserve irregular firing as K is increased, two ingredients are needed. First, the rates of excitatory and inhibitory inputs must be fine-tuned to maintain a mean input below threshold; this can be achieved by choosing γgη − 1 ~ 1/(KJ). Second, the amplitude of input fluctuations should be preserved; this can be achieved by scaling synaptic efficacy as J ~ 1/√K. Once these two conditions are met, irregular firing is restored [Fig. 1(a)(iii)]. Importantly, in a network with J ~ 1/√K, irregular firing emerges without fine-tuning, since rates dynamically adjust to balance excitatory and inhibitory inputs and maintain mean inputs below threshold [7,8].
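The two conditions above can be checked with simple arithmetic. The sketch below evaluates the proportionalities KJ(1 − γgη) and KJ²(1 + γg²η) at a reference point and after a 50-fold increase of K with J ~ 1/√K and a retuned input difference; prefactors are dropped and all numerical values are illustrative.

```python
import math

def current_based_stats(K, J, g, gamma, eta):
    """Mean input and input variance of a current-based neuron,
    up to K- and J-independent prefactors (illustrative)."""
    mean = K * J * (1.0 - gamma * g * eta)       # mean input
    var = K * J**2 * (1.0 + gamma * g**2 * eta)  # input noise variance
    return mean, var

# Reference point: g = gamma = 1 and 1 - g*gamma*eta = 0.075
K0, J0, g0, gamma0 = 1_000, 0.3, 1.0, 1.0
eta0 = (1.0 - 0.075) / (g0 * gamma0)
m0, v0 = current_based_stats(K0, J0, g0, gamma0, eta0)

# Increase K 50-fold with the balanced-state scaling J ~ 1/sqrt(K) and
# the input difference retuned so that 1 - g*gamma*eta ~ 1/(K*J):
K1 = 50 * K0
J1 = J0 / math.sqrt(K1 / K0)
eta1 = (1.0 - 0.075 * (K0 * J0) / (K1 * J1)) / (g0 * gamma0)
m1, v1 = current_based_stats(K1, J1, g0, gamma0, eta1)
# mean input is preserved exactly; the variance changes only through the
# small shift in eta, so it is preserved up to a few percent
```

This makes the argument quantitative: with J ~ 1/√K alone the variance K J² is invariant, and the 1/(KJ) tuning of the rate difference keeps the mean input fixed as well.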
FIG. 1.
Effects of coupling strength on the firing behavior of current-based and conductance-based neurons. (a) Membrane potential of a single current-based neuron for (i) J = 0.3 mV, K = 10³, g = γ = 1, and η such that 1 − gγη = 0.075; (ii) as in (i) with K = 5 × 10⁴; (iii) as in (i) with K = 5 × 10⁴, scaled synaptic efficacy (J ~ 1/√K, which gives J = 0.04 mV), and input difference 1 − gγη = 0.01. (b),(c) Effect of coupling strength on the drift force and input noise in a current-based neuron. (d) Membrane potential of a single conductance-based neuron for fixed input difference (1 − gγη = −2.8) and (i) a = 0.01, K = 10³; (ii) K = 5 × 10⁴; (iii) K = 5 × 10⁴ and scaled synaptic efficacy (a ~ 1/√K, a = 0.001). (e),(f) Effect of coupling strength on the drift force and input noise in a conductance-based neuron. In (a) and (d), dashed lines represent the threshold and reset (black) and the equilibrium value of the membrane potential (green). In (a)(ii) and (d)(ii), light purple traces represent dynamics in the absence of a spiking mechanism. Input fluctuations in (c) and (f) represent input noise per unit time, i.e., the integral of the noise term of Eq. (4) computed over an interval Δt and normalized by Δt.
We now show that the above solution does not work once synaptic conductance is taken into account. The dynamics of a conductance-based neuron in response to the inputs described above is shown in Fig. 1(d)(i). As in the current-based neuron, it features irregular firing, with mean input below threshold and spiking driven by fluctuations, and firing becomes regular for larger K with all other parameters unchanged [Fig. 1(d)(ii)]. However, unlike in the current-based neuron, the mean input remains below threshold at large K; regular firing is produced by large fluctuations, which saturate the response and produce spikes that are regularly spaced because of the refractory period. These observations can be understood by inspecting the equation for the membrane potential dynamics [Eq. (4)]: increasing K leaves the equilibrium value of the membrane potential μ invariant but increases the drift force and the input noise amplitude as Ka and a√K, respectively [Figs. 1(e) and 1(f)]. Since the equilibrium membrane potential is fixed below threshold, response properties are determined by the interplay between drift force and input noise, which have opposite effects on the probability of spike generation. The response saturation observed in Fig. 1(d)(ii) shows that, as K increases at fixed a, fluctuations dominate over the drift force. On the other hand, the scaling a ~ 1/√K leaves the amplitude of fluctuations unchanged but generates a restoring force of the order of √K [Fig. 1(e)], which dominates and completely abolishes firing at strong coupling [Fig. 1(d)(iii)]. Results in Fig. 1 show that the response of a conductance-based neuron when K is large depends on the balance between drift force and input noise. The scalings a ~ O(1) and a ~ 1/√K each leave one of the two contributions dominant, suggesting that an intermediate scaling could keep a balance between them. Below, we derive such a scaling, showing that it preserves the firing rate and CV of ISI when K becomes large.
A SCALING RELATION THAT PRESERVES SINGLE-NEURON RESPONSE FOR LARGE K
We analyze under what conditions the response of a single conductance-based neuron is preserved when K is large. For a LIF neuron described by Eqs. (4)–(6), the single-cell transfer function, i.e., the dependency of the firing rate ν on the external drive ν_X, is given by [41,42]
with
In the biologically relevant case of a ≪ 1, Eq. (10) simplifies significantly, using the fact that v_max, the distance between the average membrane potential and the threshold in units of input fluctuations, is of the order of 1/√a. Therefore, v_max is large when a is small; in this limit, the firing rate is given by the Kramers escape rate [43], and Eq. (10) becomes
where we define rescaled quantities whose motivation is that they remain of the order of 1 in the small-a limit, provided the external inputs ν_X are at least of the order of 1/(aKτ). When the external inputs are such that ν_X ≫ 1/(aKτ), these quantities become independent of ν_X, a, and K and are given by
The firing rate given by Eq. (12) remains finite when a is small and/or K is large if the product of the prefactor and the exponential term remains of the order of one; this condition leads to the following scaling relationship:
i.e., a should be of the order of 1/log K. In Appendix C, we show that expressions analogous to Eq. (12) can be derived in integrate-and-fire neuron models which feature additional intrinsic voltage-dependent currents, as long as synapses are conductance based and input noise is small (a ≪ 1). Examples of such models include the exponential integrate-and-fire neuron, with its spike-generating exponential current [44], and models with voltage-gated subthreshold currents [23]. Moreover, we show that, in these models, firing remains finite if a ~ 1/log(K), and voltage-dependent currents generate corrections to the logarithmic scaling which are negligible when coupling is strong. In Fig. 2(a), we compare the scaling defined by Eq. (14) with the 1/√K scaling of current-based neurons. At low values of K, the values of a obtained with the two scalings are similar; at larger values of K, the synaptic strength defined by Eq. (14) decays as a ~ 1/log(K), i.e., synapses are stronger in the conductance-based model than in the current-based model. Examples of single-neuron transfer functions computed from Eq. (10) for different coupling strengths are shown in Figs. 2(b) and 2(c). Responses are nonlinear at onset and close to saturation. As predicted by the theory, scaling a with K according to Eq. (14) preserves the firing rate over a region of inputs that increases with the coupling strength [Figs. 2(c) and 2(d)], while the average membrane potential remains below threshold [Fig. 2(d)]. The quantity (θ − μ)/σ represents the distance from threshold of the equilibrium membrane potential in units of input fluctuations; Eq. (14) implies that this distance increases with the coupling strength. When K is very large, the effective membrane time constant, which is of the order of τ ~ 1/(aKν), becomes small and firing is driven by fluctuations that, on the timescale of this effective membrane time constant, are rare.
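The logarithmic scaling can be illustrated numerically. In the toy self-consistency below, the rate factor is taken to be √K · exp(−c/a), with c a placeholder constant standing in for the condition derived from Eq. (12) (not the paper's exact expression); solving for the a that holds this factor fixed as K grows yields a ∝ 1/log K exactly.

```python
import math

def solve_a(K, c=0.2, target=1.0, lo=1e-4, hi=1.0):
    """Bisection for the synaptic strength a at which the toy rate
    factor sqrt(K)*exp(-c/a) equals a fixed target. c and target are
    placeholder constants, not fitted to the paper."""
    f = lambda a: math.sqrt(K) * math.exp(-c / a) - target
    assert f(lo) < 0 < f(hi)          # valid bracket for the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Ks = [10**3, 10**4, 10**5, 10**6]
a_vals = [solve_a(K) for K in Ks]
# a decreases with K, and a*log(K) stays constant (= 2c here),
# i.e., a ~ 1/log(K)
products = [a * math.log(K) for a, K in zip(a_vals, Ks)]
```

The closed-form solution of this toy condition is a = 2c/log K, so the bisection is only there to mimic solving an implicit self-consistency condition numerically.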
FIG. 2.
The scaling of Eq. (14) preserves the response of a single conductance-based neuron for large K. (a) The scaling relation preserving firing in conductance-based neurons [Eq. (14), solid line]; constant scaling (a ~ K⁰, dotted line) and the scaling of the balanced state model (a ~ 1/√K, dashed line) are shown for comparison. Colored dots indicate values of a and K used subsequently. (b)–(h) Response of conductance-based neurons for different values of the coupling strength and synaptic efficacy (colored lines). The scaling of Eq. (14) preserves how the firing rate (b),(c), the equilibrium value of the membrane potential (d), and the CV of the interspike interval distribution (e) depend on the external input rate ν_X. This invariance is achieved by increasing the drift force (f) and input fluctuations (g) in a way that weakly decreases (logarithmically in K) membrane potential fluctuations (h). Different scalings either saturate or suppress the response [(b); black lines correspond to K = 10⁵ and a values as in (a)]. Parameters: a = 0.01 for K = 10³, g = 12, η = 1.8, and γ = 1/4.
We next investigate if the above scaling preserves irregular firing by analyzing the CV of interspike intervals. This quantity is given by [10]
and, for the biologically relevant case of a ≪ 1 and μ < θ, reduces to (see Appendix B for details)
i.e., the CV is close to one at low rates, and it decays monotonically as the neuron approaches saturation. Critically, Eq. (16) depends on the coupling strength only through ν; hence, any scaling relation preserving the firing rate also produces a CV of the order of one at low rates. We validate this result numerically in Fig. 2(e). We now investigate how Eq. (14) preserves irregular firing in conductance-based neurons. We have shown that increasing K at fixed a produces large input and membrane fluctuations, which saturate firing; the scaling a ~ 1/√K preserves input fluctuations but, because of the strong drift force, suppresses membrane potential fluctuations and, hence, firing. The scaling of Eq. (14), at every value of K, yields the value of a that balances the contributions of drift and input fluctuations, so that membrane fluctuations are of the right size to preserve the rate of threshold crossing. Note that, unlike what happens in the current-based model, both input fluctuations and drift force increase with K [Figs. 2(f) and 2(g)], while the membrane potential distribution, which is given by [45]
slowly becomes narrower [Fig. 2(h)]. This result can be understood by noticing that, when a ≪ 1 and neglecting the contribution due to the refractory period, Eq. (17) reduces to
Hence, the probability distribution becomes Gaussian when coupling is strong, with a variance σ² ~ a. We note that, since a is of the order of 1/log K, the width of the distribution becomes small only for unrealistically large values of K.
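The link between firing statistics and the refractory period noted above (and in Appendix B) can be checked directly: for a Poisson process with rate λ and an absolute refractory period τ_rp, each ISI is τ_rp plus an exponential interval, so the CV equals 1 − ντ_rp, close to one at low rates and decaying toward saturation. A short numerical check (rate values are illustrative):

```python
import numpy as np

def cv_refractory_poisson(lam, tau_rp=2e-3, n=200_000, seed=0):
    """CV of the ISI for a Poisson process with rate lam and an absolute
    refractory period tau_rp. Each ISI = tau_rp + Exp(1/lam), so the
    exact CV is 1 - nu*tau_rp with nu = 1/mean(ISI)."""
    rng = np.random.default_rng(seed)
    isi = tau_rp + rng.exponential(1.0 / lam, size=n)  # sampled ISIs (s)
    nu = 1.0 / isi.mean()                              # firing rate (spk/s)
    return float(isi.std() / isi.mean()), float(nu)

cv_low, nu_low = cv_refractory_poisson(lam=5.0)     # low rate: CV near 1
cv_high, nu_high = cv_refractory_poisson(lam=400.0) # near saturation: CV < 1
```

This reproduces the qualitative content of Eq. (16): the CV depends on coupling only through ν and drops toward zero as ν approaches 1/τ_rp.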
ASYNCHRONOUS IRREGULAR ACTIVITY IN NETWORK RESPONSE AT STRONG COUPLING
We have so far considered the case of a single neuron subjected to stochastic inputs. We now show how the above results generalize to the network case, where inputs to a neuron are produced by a combination of external and recurrent inputs. We consider networks of recurrently connected excitatory and inhibitory neurons, firing at rate ν, stimulated by an external population firing with Poisson statistics at rate ν_X. Using again the diffusion approximation, the response of a single neuron in the network is given by Eq. (10) [and, hence, Eq. (12)] with
If all neurons in a given population are described by the same single-cell parameters and the network is in an asynchronous state in which cells fire at a constant rate, Eq. (10) provides an implicit equation whose solution is the network transfer function. Example solutions are shown in Fig. 3(b) (numerical validation of the mean field results is provided in Appendix E). In Appendix D, we prove that firing in the network is preserved when coupling is strong if parameters are rescaled according to Eq. (14). Moreover, we show that response nonlinearities are suppressed and the network response in the strong-coupling limit (i.e., when K goes to infinity) is given, up to saturation, by
The parameter ρ, which is obtained by solving Eq. (12) self-consistently (see Appendix D for details), is the response gain in the strong-coupling limit. Finally, our derivation implies that Eq. (14) preserves irregular firing and creates a probability distribution of membrane potentials whose width decreases only logarithmically as K increases [Figs. 3(c) and 3(d) and numerical validation in Appendix E], as in the single-neuron case. While this logarithmic decrease is a qualitative difference from the current-based balanced state, in which the width stays finite in the large-K limit, in practice, for realistic values of K, realistic fluctuations of the membrane potential (a few mV) can be observed in both cases. We now turn to the question of what happens in networks with different scalings between a and K. Our analysis of single-neuron response described above shows that scalings different from that of Eq. (14) fail to preserve firing for large K, as they let either input noise or drift dominate. However, the situation in networks might be different, since recurrent interactions could, in principle, adjust the statistics of input currents such that irregular firing at low rates is preserved when coupling becomes strong. Thus, we turn to the analysis of the network behavior when a scaling a ~ K^−α is assumed. For α ≤ 0, the dominant contribution of input noise at the single-neuron level (Figs. 1 and 2) generates saturation of the response and regular firing in the network (Fig. 3). This can be understood by noticing that, for large K, the argument of the exponential in Eq. (12) becomes negligible and the self-consistency condition defining the network rate is solved by ν = 1/τ_rp. For α > 0, the network response for large K is determined by two competing elements. On the one hand, input drift dominates and tends to suppress firing (Figs. 1 and 2). On the other hand, for the network to be stable, inhibition must dominate recurrent interactions [9].
Hence, any suppression in network activity reduces recurrent inhibition and tends to increase neural activity. When these two elements conspire to generate a finite network response, the exponential factor in Eq. (12) must be of the order of one. In this scenario, the network activity exhibits the following features (Fig. 3): (i) the mean inputs drive neurons very close to threshold; (ii) the response of the network to external inputs is linear and, up to corrections of the order of K^−α, given by
(iii) firing is irregular [because of Eq. (16)]; (iv) the membrane potential distribution is narrow, with a variance of the order of a ~ K^−α [because of Eq. (18)]. Therefore, scalings different from that in Eq. (14) can produce asynchronous irregular activity in networks of conductance-based neurons, but this leads to networks with membrane potentials narrowly distributed close to threshold, a property which seems at odds with what is observed in the cortex [46–51].
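Solving the implicit equation for the network rate can be sketched as a bisection on ν − F(ν), here with a toy inhibition-dominated transfer function F standing in for Eq. (10); all constants (w_e, w_i, h, s, ν_max) are placeholders, chosen only so that F decreases with ν and a unique stable fixed point exists.

```python
import math

def net_rate(nu_x, w_e=2.0, w_i=4.0, h=1.0, s=2.0, nu_max=500.0):
    """Self-consistent population rate nu = F(nu; nu_x) for a toy
    inhibition-dominated transfer function (placeholder constants).
    Since F is decreasing in nu, g(nu) = nu - F(nu) is increasing and
    changes sign on [0, nu_max], so bisection finds the fixed point."""
    F = lambda nu: nu_max / (1.0 + math.exp(-(w_e * nu_x - w_i * nu - h) / s))
    lo, hi = 0.0, nu_max                 # g(lo) < 0 < g(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because inhibition enters with the larger weight (w_i > w_e), any increase in ν suppresses F, which is the stabilizing feedback invoked in the text; the fixed point also increases monotonically with the external drive ν_X.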
ROBUST LOG-NORMAL DISTRIBUTION OF FIRING RATES IN NETWORKS WITH HETEROGENEOUS CONNECTIVITY
Up to this point, we have assumed that the number of connections is equal for all neurons. In real networks, however, this number fluctuates from cell to cell. The goal of this section is to analyze the effects of heterogeneous connectivity in networks of conductance-based neurons. We investigate numerically the effects of connection heterogeneity as follows. We choose a Gaussian distribution of the number of connections per neuron, with mean K and variance ΔK² for excitatory connections and mean γK and variance γ²ΔK² for inhibitory connections. The connectivity matrix is constructed by first drawing E and I in-degrees at random from these Gaussian distributions for each neuron and then selecting E/I presynaptic neurons at random. We then simulate the network dynamics and measure the distribution of rates and of the CV of the ISI in the population. Results for different values of CV = ΔK/K are shown in Figs. 4(a)–4(c). For small and moderate values of connection heterogeneity, increasing the CV broadens the distributions of rates and of the CV of the ISI, but both distributions remain peaked around a mean rate that is close to that of homogeneous networks [Figs. 4(a) and 4(b)]. For larger CV, on the other hand, the distribution of rates changes its shape, with a large fraction of neurons moving to very low rates while others increase their rates [Fig. 4(a)], and the distribution of the CV of the ISI becomes bimodal, with a peak at low CV corresponding to the high-rate neurons, while the peak at a CV close to 1 corresponds to neurons with very low firing rates [Fig. 4(b)].
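The connectivity construction described above can be sketched as follows. The clipping of in-degrees to [0, N] is an added numerical guard, and allowing a neuron to select itself as a presynaptic partner is a simplification of this sketch, not a feature of the paper's networks.

```python
import numpy as np

def heterogeneous_in_degrees(N, K, cv_k, gamma=0.25, seed=0):
    """Draw E and I in-degrees per neuron from Gaussians with mean K
    (gamma*K) and standard deviation cv_k*K (gamma*cv_k*K), then pick
    that many presynaptic partners uniformly without replacement."""
    rng = np.random.default_rng(seed)
    kE = np.clip(rng.normal(K, cv_k * K, size=N), 0, N).round().astype(int)
    kI = np.clip(rng.normal(gamma * K, gamma * cv_k * K, size=N),
                 0, N).round().astype(int)
    # Lists of presynaptic indices, one array per postsynaptic neuron
    pre_E = [rng.choice(N, size=k, replace=False) for k in kE]
    pre_I = [rng.choice(N, size=k, replace=False) for k in kI]
    return kE, kI, pre_E, pre_I
```

For example, `heterogeneous_in_degrees(N=2000, K=200, cv_k=0.1)` produces excitatory in-degrees with mean near 200 and relative spread near 0.1, the quenched disorder whose consequences are analyzed next.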
FIG. 4.
Effects of heterogeneous connectivity on the network response. (a),(b) Distributions of ν and of the CV of the ISI computed from network simulations (dots) and from the mean field analysis [(a), black lines] for different values of CV [values are indicated by dots in (c)]. (c) Δν/ν (green, left axis) and fraction of quiescent cells (brown, right axis) computed from network simulations as a function of CV. For small CV, Δν/ν increases linearly, as predicted by the mean field analysis; deviations from linear scaling emerge at larger CV, when a significant fraction of cells becomes quiescent. The deviation from linear scaling at low CV is due to a sampling error in estimating the firing rate from simulations. (d) Critical heterogeneity CV_c as a function of K computed from the mean field theory (green, left axis), with a rescaled according to Eq. (14). For large K, CV_c decays proportionally to a (brown, right axis). When K is too low, the network is silent. In (a)–(c), K = 10³, g = 20, a = 1.6 × 10⁻³, N_E = N_I/γ = 10K, and ν_X = 0.05/τ. In network simulations, the dynamics is run for 20 s using a time step of 50 μs. Parameters in (d) are as in Fig. 3.
To characterize more systematically the change in the distribution of rates with CV_K, we measure, for each value of CV_K, the fraction of quiescent cells, defined as the fraction of cells that do not spike during 20 s of the simulated dynamics [Fig. 4(c)]. This analysis shows that the number of quiescent cells and, hence, the distribution of rates change abruptly as CV_K increases above a critical value CV_K*. Importantly, unlike our definition of the fraction of quiescent cells, this abrupt change is a property of the network that is independent of the duration of the simulation. To understand these numerical results, we perform a mean field analysis of the effects of connection heterogeneity on the distribution of rates (Appendix F). This analysis captures quantitatively the numerical simulations [Fig. 4(a)] and shows that, in the limit of small CV_K and a, rates in the network are given by
where ν0 is the population average in the absence of heterogeneity, z is a Gaussian random variable, and the prefactor Ω is independent of a, K, and ν. The exponent in Eq. (22) represents a quenched disorder in the distance from threshold of the single-cell mean membrane potential μ, in units of input noise. As shown in Appendix F, Eq. (22) implies that the distribution of rates is log-normal, a feature consistent with experimental observations [52-54] and with distributions of rates in networks of current-based LIF neurons [55]. It also implies that the relative width of the distribution, Δν/ν, should increase linearly with CV_K, a prediction confirmed by numerical simulations [Fig. 4(c)]. The derivation in Appendix F also provides an explanation for the change in the shape of the distribution at larger CV_K. For larger heterogeneity, the small-CV_K approximation is no longer valid, and fluctuations in input connectivity produce cells for which μ is far from threshold θ; these cells fire either at an extremely low rate (μ < θ) or regularly (μ > θ). The latter generate the peak at low values of the CV of the ISI seen for large values of CV_K. The quantity CV_K* represents the level of connection heterogeneity above which significant deviations from the asynchronous irregular state emerge, i.e., above which large fractions of neurons show extremely low or regular firing. Equation (22) suggests that CV_K* should increase linearly with a. We validate this prediction with our mean field model by computing the minimal value of CV_K at which 1% of the cells fire at a rate of 10^−3 spk/s [Fig. 4(d)]. Note that the derivation of Eq. (22) assumes only a to be small and does not depend on the scaling relation between a and K. On the other hand, the fact that CV_K* increases linearly with a makes the state emerging in networks of conductance-based neurons with a ~ 1/log(K) significantly more robust to connection fluctuations than that emerging with a ~ K^−α, for which CV_K* ~ K^−α, and than networks of current-based neurons, where CV_K* ~ 1/√K [56].
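The linear growth of Δν/ν with CV_K in the small-CV_K limit can be checked directly from the log-normal form implied by Eq. (22). The sketch below assumes the form ν = ν0 exp(Ω CV_K z) with z a standard Gaussian; the values of ν0 and Ω are illustrative, not fitted to the mean field theory.

```python
import numpy as np

# Illustrative check of the log-normal rate distribution implied by Eq. (22):
# nu_i = nu0 * exp(Omega * CV_K * z_i), with z_i ~ N(0, 1).
# For small CV_K, the relative width of the rate distribution, delta_nu / nu,
# should grow linearly with CV_K, with slope Omega.

rng = np.random.default_rng(0)
nu0, Omega = 5.0, 2.0          # illustrative values (spk/s, dimensionless)
z = rng.standard_normal(1_000_000)

def relative_width(cv_k):
    rates = nu0 * np.exp(Omega * cv_k * z)
    return rates.std() / rates.mean()

for cv_k in (0.01, 0.02, 0.04):
    print(cv_k, relative_width(cv_k) / cv_k)  # ratio approaches Omega for small CV_K
```

For small CV_K, the standard log-normal identity std/mean = √(exp(σ²) − 1) ≈ σ, with σ = Ω CV_K, reproduces the linear scaling of Δν/ν measured in Fig. 4(c).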
Note that, while in randomly connected networks CV_K ~ 1/√K, a larger degree of heterogeneity is observed in cortical networks [50,56-62]. Our results show that networks of conductance-based neurons could be much more robust to such heterogeneities than networks of current-based neurons.
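The abruptness of the transition to a large quiescent fraction can be illustrated by naively extrapolating the log-normal form of Eq. (22) beyond its small-CV_K domain of validity; the values of ν0, Ω, and the observation window T below are illustrative assumptions, not fitted quantities.

```python
import math

# Fraction of quiescent cells implied by a log-normal rate distribution:
# with nu = nu0 * exp(Omega * CV_K * z), a cell is "quiescent" over a window
# of duration T when its expected spike count nu * T is below 1, i.e. when
# z < ln(1 / (nu0 * T)) / (Omega * CV_K).

def quiescent_fraction(cv_k, nu0=5.0, Omega=2.0, T=20.0):
    threshold = math.log(1.0 / (nu0 * T)) / (Omega * cv_k)
    return 0.5 * (1.0 + math.erf(threshold / math.sqrt(2.0)))  # Gaussian CDF

for cv_k in (0.1, 0.5, 1.0, 2.0):
    print(cv_k, quiescent_fraction(cv_k))
```

Because the threshold argument scales as 1/CV_K, the quiescent fraction is essentially zero at small CV_K and then grows steeply, qualitatively matching the abrupt change seen in Fig. 4(c).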
COMPARISON WITH EXPERIMENTAL DATA
The relation between synaptic efficacy and the number of connections per neuron has recently been studied experimentally in a culture preparation [63]. That study found that cultures with larger K have weaker synapses than cultures with smaller K (Fig. 5). In what follows, we compare these data with the scalings expected in networks of current-based and conductance-based neurons and discuss implications for in vivo networks.
FIG. 5.
Comparison of predictions of the current-based and conductance-based models in describing experimental data from cultures. (a) Strengths of excitatory (EPSP) and inhibitory (IPSP) postsynaptic potentials recorded in Ref. [63], compared with best fits using scaling relationships derived from networks with current-based synapses (dashed line) and conductance-based synapses (continuous line). Root mean square (rms) errors and best fit parameters are rms = 2.2 mV, g = 1.1, and J_0 = 20 mV for the current-based model, and rms = 2.4 mV and g = 3.4 for the conductance-based model. (b) Distance from threshold in units of input fluctuations, (θ − μ)/σ_V, predicted by the conductance-based model as a function of K. (c) Ratio between excitatory and leak conductance as a function of K, for ν_E = ν_I = 1 spk/s (black) and ν_E = ν_I = 5 spk/s (gray), obtained with a rescaled as in Eq. (14) (continuous line) and as a ~ 1/√K (dashed line). (d) Ratio between τ_V and τ_L as a function of K; parameters and scaling as in (c).
In the current-based model, the strengths of excitatory and inhibitory postsynaptic potentials as a function of K can be written as J_E = J_0/√K and J_I = gJ_E, respectively. In the conductance-based model, these quantities become J_E = (E_E − V)a and J_I = g(V − E_I)a, where a is given by Eq. (14) while, for the dataset of Ref. [63], V ~ −60 mV, J_E ~ J_I, E_E ~ 0 mV, and E_I ~ −80 mV. For each model, we infer free parameters from the data with a least-squares optimization in logarithmic scale (best fit, g = 1.1 and J_0 = 20 mV in the current-based model and g = 3.4 in the conductance-based model) and compute the expected synaptic strength as a function of K [lines in Fig. 5(a)]. Our analysis shows that the performances of the current-based and the conductance-based models in describing the data, over the range of K explored in the experiment, are similar, with the former being slightly better than the latter (root mean square error 2.2 vs 2.4 mV). This result is consistent with the observation made in Ref. [63] that, when fitted with a power law J ~ K^−β, the data are best described by β = 0.59 but are compatible with a broad range of values (95% confidence interval [0.47:0.70]). Note that, even though both models give similar results for PSP amplitudes in the range of values of K present in cultures (approximately 50–1000), they give significantly different predictions for larger values of K. For instance, for K = 10 000, J_E is expected to be approximately 0.2 mV in the current-based model and approximately 0.7 mV in the conductance-based model. In Fig. 5(b), we plot the distance between the equilibrium membrane potential μ and threshold θ, in units of input fluctuations, as a function of K, using the value of a obtained above; we find that the expected value in vivo, where K ~ 10^3–10^4, is in the range 2–3. In Figs. 5(c) and 5(d), we plot how the total synaptic excitatory conductance and the effective membrane time constant change as a function of K.
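The log-scale fitting procedure can be sketched as follows. Both scaling hypotheses have the one-parameter form J(K) = C · s(K), with s(K) = 1/√K for the current-based model and, at leading order, s(K) ~ 1/log K for the conductance-based one, so the least-squares estimate of log C is simply the mean log residual. The (K, J) pairs below are synthetic placeholders generated from the current-based scaling, not the measurements of Ref. [63].

```python
import numpy as np

# Sketch of the model comparison in Fig. 5(a). The synthetic data follow the
# current-based scaling exactly, so that model fits with (near) zero error,
# while the 1/log(K) shape leaves a visible residual over the same K range.
K = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
J_data = 20.0 / np.sqrt(K)  # synthetic EPSP amplitudes, mV

def fit_amplitude(shape):
    # One-parameter fit J(K) = C * shape(K); least squares in log scale.
    logC = np.mean(np.log(J_data) - np.log(shape))
    J_fit = np.exp(logC) * shape
    rms = np.sqrt(np.mean((J_fit - J_data) ** 2))  # rms error in mV
    return np.exp(logC), rms

J0, rms_current = fit_amplitude(1.0 / np.sqrt(K))      # current-based: J ~ J0/sqrt(K)
C, rms_conductance = fit_amplitude(1.0 / np.log(K))    # conductance-based: J ~ C/log(K)
print(J0, rms_current, rms_conductance)
```

On the real data, both shapes fit comparably over K ~ 50–1000; this deterministic example only illustrates the fitting machinery and how the rms values in Fig. 5(a) are obtained.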
Both quantities change significantly faster under the conductance-based scaling [g_E/g_L ~ K/log(K); τ_V/τ_L ~ log(K)/K] than under the scaling of the current-based model (g_E/g_L ~ √K; τ_V/τ_L ~ 1/√K). For K in the range 10^3–10^4 and mean firing rates in the range 1–5 spk/s, the total synaptic conductance is found to be between about 2 and 50 times the leak conductance, while the effective membrane time constant is found to be smaller than the membrane time constant by a factor of 2–50. We compare these values with available experimental data in Sec. IX.
EFFECTS OF FINITE SYNAPTIC TIME CONSTANTS
Results discussed in previous sections show that the effective membrane time constant τ_V decreases with presynaptic activity and with coupling strength. This observation raises the question of whether the assumption of negligible synaptic time constants made in our analysis is reasonable. Synaptic decay time constants of experimentally recorded postsynaptic currents range from a few milliseconds (for AMPA and GABA_A receptor-mediated currents) to tens of milliseconds (for GABA_B and NMDA receptor-mediated currents; see, e.g., Ref. [64]); i.e., they are comparable to the membrane time constant already at weak coupling, where τ_V ~ τ_L is typically in the range 10–30 ms [65]. Interestingly, experiments suggest that synaptic dynamics might be faster in physiological conditions (e.g., Ref. [66] finds a 0.5 ms decay time constant for the AMPA receptor at 35°C). Nonetheless, in the strong-coupling limit, the effective membrane time constant goes to zero, and so our assumption of negligible synaptic time constants clearly breaks down in that limit. In this section, we analyze models with finite coupling strength and show that synaptic dynamics modifies the drift-diffusion balance characteristic of conductance-based models, making it input dependent. At the end of the section, we discuss how this input-dependent drift-diffusion balance can be preserved in the strong-coupling limit. With finite synaptic time constants, the temporal evolution of conductances in Eq. (2) is replaced by
where τ_E and τ_I are the decay time constants of E and I synaptic conductances, respectively. The single-neuron membrane potential dynamics is described by Eqs. (1) and (23). Here, for simplicity, we take excitatory and inhibitory synaptic currents to have the same decay time constant: τ_E = τ_I = τ_S. Figure 6(a) shows how the synaptic time constant modifies the mean firing rate of single integrate-and-fire neurons in response to K (γK) excitatory (inhibitory) inputs with synaptic strength a (ga) and frequency ν_X (ην_X). The figure shows that, though the mean firing rate is close to predictions obtained with instantaneous synapses for low ν_X, deviations emerge as the input increases, and firing is strongly suppressed for large ν_X. To understand these numerical results, we resort again to the diffusion approximation [67,68], together with the effective time constant approximation [11,69], to derive a simplified expression for the single-neuron membrane potential dynamics with finite synaptic time constant (details in Appendix G):
where τ_V, μ, and σ_V are as in the case of negligible synaptic time constant [Eq. (5)], while z is an Ornstein-Uhlenbeck process with correlation time τ_S. Thus, compared to the instantaneous synapse case [Eq. (4)], input fluctuations with frequency larger than 1/τ_S are suppressed, and, for large τ_S/τ_V, the membrane potential dynamics is given by
i.e., the membrane potential is essentially slaved to a time-dependent effective reversal potential given by the rhs of Eq. (25) [14]. Note that Eq. (25) is valid only in the subthreshold regime. When the rhs of Eq. (25) exceeds the threshold, the neuron fires a burst of action potentials whose frequency, in the strong-coupling limit, is close to the inverse of the refractory period [70]. As ν_X increases, the equilibrium value μ remains constant while τ_V decreases, leading to a suppression of membrane fluctuations [Figs. 6(a) and 6(c)] and, in turn, to the suppression of response observed in Fig. 6(a). Therefore, the filtering of synaptic input induced by synaptic dynamics breaks the drift-diffusion balance which supports firing in conductance-based neurons. In Appendix H, we show that the suppression of the single-neuron firing rate described here cannot be prevented by short-term synaptic plasticity.
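The suppression of membrane fluctuations by synaptic filtering can be illustrated with a direct simulation of an Eq. (24)-like dynamics: a first-order membrane filter driven by Ornstein-Uhlenbeck noise with correlation time τ_S. The normalization of z below (variance 1/(2τ_S), recovering white noise as τ_S → 0) and all parameter values are illustrative assumptions; with this convention, the stationary fluctuation size is σ_V √(τ_V/(τ_V + τ_S)), which vanishes as τ_V drops below τ_S.

```python
import numpy as np

# Euler-Maruyama sketch of a dynamics of the form of Eq. (24):
#   tau_V dV/dt = -(V - mu) + sigma * sqrt(2 tau_V) * z(t),
# with z an Ornstein-Uhlenbeck process of correlation time tau_S (assumed
# normalization: Var z = 1/(2 tau_S), i.e., white noise as tau_S -> 0).
# The stationary std of V should approach sigma * sqrt(tau_V / (tau_V + tau_S)).

def membrane_std(tau_V, tau_S, sigma=1.0, mu=0.0, dt=1e-4, T=50.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    kicks = rng.standard_normal(steps) * np.sqrt(dt) / tau_S
    V = np.empty(steps)
    v, z = mu, 0.0
    for i in range(steps):
        z += -z * dt / tau_S + kicks[i]
        v += (-(v - mu) + sigma * np.sqrt(2.0 * tau_V) * z) * dt / tau_V
        V[i] = v
    return V[steps // 5:].std()  # discard the initial transient

# tau_S = 10 ms; compare a slow membrane (tau_V = 20 ms) with a fast one (2 ms)
print(membrane_std(0.020, 0.010), membrane_std(0.002, 0.010, seed=1))
```

The fast-membrane case (τ_V ≪ τ_S) yields the smaller fluctuations, consistent with the suppression of single-neuron response at large ν_X; in the network case [Fig. 6(b)], this suppression is compensated by reduced recurrent inhibition, which moves μ toward threshold.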
FIG. 6.
Effects of the synaptic time constant on single-neuron and network responses. (a) Single-neuron response as a function of input rate ν_X, computed numerically from Eqs. (1) and (23). Different colors correspond to different values of τ_S (purple, 1 ms; blue, 2 ms; red, 5 ms). Firing rates (first row) match predictions obtained for instantaneous synapses (lines) for small τ_S/τ_V; significant deviations and response suppression emerge for larger τ_S/τ_V. The effective membrane time constant (τ_V, second row) decreases with the input rate and reaches the value τ_S/τ_V ~ 1 (dashed line) at lower levels of external drive when τ_S is larger. The equilibrium value of the membrane potential (μ, third row) increases with the input rate and is independent of τ_S (the black dotted line represents the spiking threshold). The magnitude of fluctuations of the membrane potential (σ_V, fourth row) has a nonmonotonic relationship with the input rate and peaks at a value of ν_X for which τ_V is of the same order as τ_S. (b) Analogous to (a) but in the network case. Firing rates are no longer suppressed as τ_S/τ_V increases but approach the response scaling predicted by Eq. (21) (dashed line). As discussed in the text, high firing rates are obtained by an increase of μ toward threshold. (c) Examples of membrane potential dynamics for a single neuron in the absence of spiking mechanisms and for two different values of τ_S. Colors correspond to increasing ν_X = 5 (blue), 40 (orange), and 100 spk/s (green). High-frequency fluctuations are suppressed as ν_X increases. (d) Analogous to (c) but in the network case and for ν_X = 5, 40, and 100 spk/s. Increasing ν_X reduces recurrent inhibition and produces membrane potential trajectories which are increasingly closer to the firing threshold. Simulation parameters are K = 10^3, a = 0.01, g = 12, η = 1.4, and γ = 1/4 (single neuron); K = 10^3, a = 0.002, g = 22, and γ = 1/4 (network).
Simulations are performed with the simulator brian2 [71], with neurons receiving inputs from independent Poisson units firing at rates Kν_X and γKην_X in the single-neuron case, or Kν_X in the network case. Network simulations use N = 10K excitatory and inhibitory neurons.
We next examine the effect of a finite synaptic time constant on the network response. Numerically computed responses in networks of neurons with a finite synaptic time constant are shown in Fig. 6(b). The network response is close to the prediction obtained with instantaneous synapses for small τ_S/τ_V, and deviations emerge for τ_S/τ_V ~ 1. Hence, analogously to the single-neuron case, network properties discussed in the case of instantaneous synapses remain valid for low inputs. However, unlike the single-neuron case, no suppression appears for larger τ_S/τ_V. This lack of suppression in the network response, analogously to the one we discuss in networks with instantaneous synapses and a ~ K^−α, is a consequence of the fact that, to have stable dynamics when K is large, inhibition must dominate recurrent interactions [9]. In this regime, any change which would produce suppression of the single-neuron response (e.g., an increase of ν_X) lowers recurrent inhibition and increases the equilibrium value of the membrane potential μ [Figs. 6(b) and 6(d)]. The balance between these two effects determines the network firing rate and, when τ_S/τ_V ≫ 1, generates a response which, up to small corrections (see the derivation in Appendix G), is given by Eq. (21) [dashed line in Fig. 6(b)]. Similarly to what happens in networks with instantaneous synapses and a ~ K^−α, this finite response emerges because recurrent interactions set μ very close to threshold, at a distance that matches the size of the membrane potential fluctuations [Eq. (25)]. Hence, as the input to the network increases, recurrent interactions restore the drift-diffusion balance by adjusting the membrane potential mean μ close to threshold, so that fluctuations can sustain firing.
Moreover, the single-neuron membrane potential correlation time approaches τ_S, and firing becomes bursty, with periods of regular spiking randomly interspersed in time. We next discuss the effects of the value of τ_S and of the coupling strength on how the model response evolves with inputs; this discussion is relevant for both the single-neuron and the network model. In Appendix G, using existing analytical expansions [67,68,70,72] and numerical simulations, we show that neural responses obtained with finite τ_S are in good agreement with predictions obtained using a short synaptic time constant approximation for τ_S/τ_V ≲ 0.1 and are captured by predictions obtained with a large synaptic time constant approximation for τ_S/τ_V ≳ 1. The input value at which τ_S/τ_V ~ 1, i.e., ν_X ~ 1/(aKτ_S), determines the input range over which the model expresses one of the two behaviors. Therefore, models with a larger (smaller) τ_S or coupling strength have a smaller (larger) region of inputs in which their response is captured by results obtained with instantaneous synapses (Figs. 6 and 7). Importantly, when biologically relevant parameters are considered (e.g., Fig. 6), both the small and the large τ_S/τ_V behaviors are expected to appear. In fact, biological synapses span a wide range of parameters, and most neuron types typically express both fast and slow synaptic receptors; in this condition, fast synapses (characterized by τ_S of a few milliseconds) are the ones that drive rapid membrane potential fluctuations and, hence, firing. Assuming aK ~ 10, we find that the transition from small to large τ_S/τ_V in the cortex is expected to appear for inputs ν_X ~ 1/(aKτ_S) ~ 10–100 spk/s, which is compatible with experimentally observed firing rates [23,46-54].
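The back-of-the-envelope estimate of the transition input can be made explicit; aK ~ 10 is taken from the text, and the millisecond values of τ_S are the assumed range for fast receptors.

```python
# Transition input nu_trans ~ 1 / (a K tau_S), evaluated for aK ~ 10 and
# fast synaptic time constants between 1 and 10 ms.
aK = 10.0
nu_fast = 1.0 / (aK * 1e-3)   # tau_S = 1 ms  -> ~100 spk/s
nu_slow = 1.0 / (aK * 1e-2)   # tau_S = 10 ms -> ~10 spk/s
print(nu_fast, nu_slow)
```

This bracketing reproduces the 10–100 spk/s range quoted above for the crossover between the small- and large-τ_S/τ_V behaviors.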
FIG. 7.
Single-neuron and network responses with finite synaptic time constants, when both a and τ_S are rescaled with K. (a) Single-neuron response as a function of input rate ν_X, computed numerically from Eqs. (1) and (23). Different colors correspond to different values of K (10^3, purple; 10^4, light blue; 10^5, yellow; 10^6, red) with a and τ_S scaled as in Eqs. (14) and (26); for K = 10^3, a = 0.01 and τ_S = 1 ms (which fixes the value of τ_S*). The scaling relation described in the main text preserves the response properties observed in Fig. 6. (b) Analogous to (a) but in the network case; colors correspond to K = 500, 10^3, 2 × 10^3, and 4 × 10^3. For K = 10^3, a = 0.002 and τ_S = 1 ms.
We next investigate whether and under which conditions the input-dependent behavior described in this section is preserved in the strong-coupling limit. For large inputs, the membrane potential dynamics of Eq. (25) becomes independent of a for large K, and, hence, the model behavior is independent of the scaling relation used. For low inputs and finite coupling, the model behaves as in the case of instantaneous synapses, and, therefore, response properties can be preserved in the strong-coupling limit only if a ~ 1/log(K). With this scaling, the value of ν_X separating the low and large input regimes decreases with coupling strength as log(K)/(Kτ_S). This is problematic because, as coupling increases, the model loses its low-input behavior and converges to a pathological state in which, for all inputs, membrane potential fluctuations become small, the single-neuron response is suppressed, and, in the network case, the membrane potential is squeezed close to threshold. Thus, to preserve the input-dependent behavior in the strong-coupling limit, the synaptic time constant should decrease with coupling strength as
where τ_S* is a constant independent of a and K. In Fig. 7, we show that the scaling of Eq. (26) preserves the input-dependent response as coupling increases. The activity-dependent drift-diffusion balance described here produces features that are not present in models with instantaneous synapses and that can be tested experimentally (see Table I for a summary). First, the increase of μ with inputs is absent in strongly coupled networks with instantaneous synapses and is consistent with the increase of the membrane potential observed in cortical circuits with the strength of sensory stimuli [23,49]. Second, with instantaneous synapses, the decay time constant of the autocorrelation of the membrane potential is of the order of τ_V and, hence, decreases, without bounds, as 1/ν_X with inputs. The finite synaptic time constant modifies the input dependence of the autocorrelation time constant: It decreases with ν_X for low inputs and becomes constant (of the order of τ_S) for larger inputs. Third, with a finite synaptic time constant, firing becomes more bursty as input increases; this effect should be more prominent in networks with stronger coupling (e.g., prefrontal cortex). Fourth, synaptic dynamics makes the robustness of the network response to connection heterogeneity input dependent: For small inputs, τ_S/τ_V ≪ 1 and CV_K* ~ 1/log(K); for large inputs, τ_S/τ_V ≫ 1 and CV_K* ~ 1/√K (derivation in Appendix G). Therefore, the model predicts that networks of neurons with heterogeneous connections and a log-normal distribution of rates for low inputs (e.g., Refs. [52-54]) should show an increasing number of silent and regular spiking cells as the input strength increases.
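The scaling law of Eq. (26) follows from a one-line argument; as a sketch (assuming the transition input ν_X ~ 1/(aKτ_S) and the scaling a ~ 1/log K discussed above):

```latex
% Transition input between the small- and large-\tau_S/\tau_V regimes:
%   \nu_X^{\mathrm{trans}} \sim \frac{1}{a K \tau_S}, \qquad a \sim \frac{1}{\log K}.
% Demanding that \nu_X^{\mathrm{trans}} stay finite as K \to \infty gives
\tau_S \sim \tau_S^{*}\,\frac{\log K}{K},
% with \tau_S^{*} a constant independent of a and K, as in Eq. (26).
```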
TABLE I.
Overview of networks of current-based and conductance-based neurons. The synaptic time constant strongly affects response properties in networks of conductance-based neurons. Properties similar to what is observed in the cortex emerge in these networks if a ~ 1/log K and input rates are lower than or comparable to 1/τ_S* [defined in Eq. (26)]. The model predicts that response properties should gradually change as the input to the network increases and, for large inputs, should coincide with those indicated in the last line of the table. In the table, the quantities related to the membrane potential are the mean distance from threshold (θ − μ), the size of temporal fluctuations (σ_V), and the membrane potential correlation time constant (τ_V).
Synaptic model: current-based (balanced state model). Ratio of synaptic and membrane time constants (τ_S/τ_V): constant, independent of ν_X, a, and K. Synaptic strength: J ~ 1/√K. Membrane potential statistics: θ − μ ~ σ_V ~ 1; τ_V ~ τ_L. Activity structure: irregular firing, CV of ISI ~ 1. Heterogeneity of in-degree supported: CV_K* ~ 1/√K.
Synaptic model: conductance-based, with τ_S/τ_V ≪ 1 [satisfied for ν_X ≪ 1/τ_S*; always satisfied for instantaneous synapses (τ_S* = 0)]. Synaptic strength: a ~ 1/log K. Membrane potential statistics: θ − μ ~ 1; σ_V ~ 1/log K; τ_V ~ log(K)/K. Activity structure: irregular firing, CV of ISI ~ 1. Heterogeneity of in-degree supported: CV_K* ~ 1/log K.
Synaptic model: conductance-based, with τ_S/τ_V ≪ 1, as above. Synaptic strength: a ~ K^−α, α > 0. Membrane potential statistics: θ − μ ~ σ_V ~ K^(−α/2); τ_V ~ K^(α−1). Activity structure: irregular firing, CV of ISI ~ 1. Heterogeneity of in-degree supported: CV_K* ~ K^−α.
Synaptic model: conductance-based, with τ_S/τ_V ≫ 1 (for ν_X ≫ 1/τ_S*). Synaptic strength: any scaling. Membrane potential statistics: θ − μ ~ σ_V ~ 1/√K; τ_V ~ τ_S. Activity structure: irregular bursting. Heterogeneity of in-degree supported: CV_K* ~ 1/√K.
DISCUSSION
In this work, we analyzed networks of strongly coupled conductance-based neurons. The study of this regime is motivated by the experimental observation that K is large in cortex, with single neurons typically receiving inputs from thousands of presynaptic cells. We showed that the classical balanced state idea [5,6], which was developed in the context of current-based models and features synaptic strengths of the order of 1/√K [7,8], results in current fluctuations of very small amplitude, which can generate firing in networks only if the mean membrane potential is extremely close to threshold. This is inconsistent with intracellular recordings in the cortex, which show large membrane potential fluctuations (see, e.g., Refs. [21,46-51]). To overcome this problem, we introduced a new scaling relation which, in the case of instantaneous synaptic currents, maintains firing by preserving the balance of input drift and diffusion at the single-neuron level. With this scaling, the network response automatically shows multiple features that are observed in the cortex in vivo: irregular firing, a wide distribution of rates, and a membrane potential with a non-negligible distance from threshold and non-negligible fluctuation size. When finite synaptic time constants are included in the model, we showed that these properties are preserved for low inputs but are gradually modified as inputs increase: The membrane potential mean approaches threshold, while its fluctuations decrease in size and develop non-negligible temporal correlations.
These properties, which are summarized in Table I, provide a list of predictions that could be tested experimentally by analyzing the membrane potential dynamics as a function of input strength in cortical neurons. When synaptic time constants are negligible with respect to the membrane time constant, our theory shows properties that are analogous to those of the classical balanced state model: a linear transfer function, a CV of the order of one, and a distribution of membrane potentials with finite width. However, these properties emerge from a different underlying dynamics than in the current-based model. In current-based models, the mean input current is at a distance of the order of one from threshold, in units of input fluctuations. In conductance-based models, this distance increases with coupling strength, and firing is generated by large fluctuations at strong coupling. The different operating mechanism manifests itself in two ways: in the strength of synapses needed to sustain firing and in the robustness to connection heterogeneity, as we discuss in the next paragraphs. The scaling relation determines how strong synapses should be to sustain a given firing rate, for a given value of K. In current-based neurons, irregular firing is produced as long as synaptic strengths are of the order of 1/√K. In conductance-based neurons, stronger synapses are needed, with a scaling which approaches 1/log(K) for large K. We showed that both scaling relations are in agreement with data obtained from culture preparations [63], which are limited to relatively small networks, and argued that the differences might be important in vivo, where K should be larger. In current-based models, the mean input current must be set at an appropriate level to produce irregular firing; this constraint is realized by recurrent dynamics in networks with random connectivity and strong enough inhibition [7-9].
However, in networks with structural heterogeneity, with connection heterogeneity larger than 1/√K, the variability in mean input currents produces significant departures from the asynchronous irregular state, with large fractions of neurons becoming silent or firing regularly [56]. This problem is relevant in cortical networks [56], where significant heterogeneity of in-degrees has been reported [50,57-62], and different mechanisms have been proposed to solve it [56]. Here, we showed that networks of conductance-based neurons also generate irregular activity without any need for fine-tuning and, furthermore, can support irregular activity with substantial structural heterogeneity, up to the order of 1/log(K). Therefore, these networks are more robust to connection heterogeneity than the current-based model and do not need additional mechanisms to sustain the asynchronous irregular state. When the synaptic time constant is much larger than the effective membrane time constant, we showed that, regardless of synaptic strength, the size of membrane potential fluctuations decreases and firing in the network is preserved by a reduction of the distance from threshold of the mean membrane potential. Moreover, the robustness to heterogeneity in connection fluctuations decreases substantially (the maximum supported heterogeneity becomes of the order of 1/√K), and the membrane potential dynamics becomes correlated over a timescale fixed by the synaptic time constant. The network response at low rates is well approximated by that of networks with instantaneous synapses, and the regime of large synaptic time constant is reached gradually, as the input to the network increases (Fig. 6). This observation provides a list of predictions on how properties of cortical networks should evolve with input strength (summarized in Table I) that are testable experimentally.
While some of these predictions require new experiments to be validated, we point out that one of them, namely that the equilibrium value of the membrane potential should increase with inputs, is consistent with the increase of the membrane potential observed in cortical circuits with the strength of sensory stimuli [23,49]. In conductance-based models, we showed that response properties observed at finite coupling survive in the strong-coupling limit K → ∞ only if unitary conductances obey a specific scaling law [Eq. (14)] and synaptic time constants also obey a scaling law [Eq. (26)]. While there is evidence in cortical cultures that average synaptic strengths decay with increasing connectivity [63], no such evidence exists, to our knowledge, to support decreasing synaptic time constants with increasing connectivity. However, it is well known that synaptic decay time constants depend on the subunit composition of the receptors (see, e.g., Ref. [73] for GABA receptors, Ref. [74] for NMDA receptors, and Ref. [75] for AMPA receptors), and subunit composition can depend on synaptic activity (e.g., Ref. [76]). It is thus tempting to speculate that both scaling laws could be implemented in neurobiological circuits. If such plasticity exists, our theory predicts that it should produce smaller synaptic time constants in networks with larger K. In our analytical calculations, we have neglected correlations between neurons and assumed that the network operates in the asynchronous regime. This assumption is consistent with observations that correlations between cells in cortex in vivo can in some cases be small, i.e., of the order of 0.01 [77,78]. It is also consistent with the results of our numerical simulations, which show good agreement with the calculations in networks with connection probabilities of 0.1, of the same order of magnitude as observed connection probabilities in cortex.
However, correlations between neurons can vary significantly between cortical states, layers, and firing rates, with many studies finding average correlation coefficients of the order of 0.1 or more (e.g., Ref. [79]). Intriguingly, weak but nonzero correlations between inputs, of the order of 0.1, have been argued to be necessary to quantitatively capture the amplitude of membrane potential fluctuations observed in the cat cortex [21]. Understanding how correlations affect the results obtained in our work is an important problem which should be addressed in the future. Experimental evidence suggests that the response to multiple inputs in cortex is nonlinear (for an overview, see Ref. [80]). Such nonlinearities, which are thought to be fundamental to perform complex computations, cannot be captured by the classical balanced state model, as it features a linear transfer function [7,8]. Several studies have shown how relaxing assumptions underlying the classical balanced state model can lead to nonlinear responses. In particular, moderate coupling and a power-law single-neuron input-output function [80-82], short-term plasticity [83], and differential inputs to subsets of excitatory neurons [84] can lead to nonlinearities. We have recently shown [85] that nonlinear responses appear in networks of current-based spiking neurons when coupling is moderate, and only at response onset or close to single-neuron saturation. Here, we have shown that response onset and saturation nonlinearities also appear in networks of conductance-based neurons when coupling is moderate. In addition, we have found that synaptic time constants provide an additional source of nonlinearity, with nonlinear responses emerging as the network transitions between response onset and saturation. A full classification of the nonlinearities generated in these networks is outside the scope of this work but could be performed by generalizing the approach developed in Ref.
[85]. The strength of coupling in a network, both in the current-based model [81,85] and in the conductance-based model (e.g., Fig. 3), determines the structure of its response and, hence, the computations it can implement. Recent theoretical work, analyzing experimental data in the framework of current-based models, has suggested that the cortex operates in a regime of moderate coupling [82,86], where response nonlinearities are prominent. In conductance-based models, the effective membrane time constant can be informative about the strength of coupling in a network, as it decreases with coupling strength. Results from in vivo recordings in the cat parietal cortex [21] showed evidence that the single-neuron response is sped up by network interactions. In particular, measurements are compatible with an inhibitory conductance approximately 3 times larger than the leak conductance and support the idea that the cortex operates in a "high-conductance state" [22]. This limited increase in conductance supports the idea of moderate coupling in cortical networks, in agreement with what was found in previous work [82,86]. More recent studies have, however, obtained results that seem at odds with the high-conductance state idea. Recent whole cell recordings have reported that an intrinsic voltage-gated conductance, whose strength decreases with membrane potential, contributes to the modulation of the neuronal conductance of cells in the primary visual cortex of awake macaques and anesthetized mice [23]. During spontaneous activity, this intrinsic conductance is the dominant contribution to the cell conductance and drives its (unexpected) decrease with increased depolarization. During activity driven by sensory stimuli, on the other hand, modulations coming from synaptic interactions overcome the effect of the intrinsic conductance, and the neuronal conductance increases with increased depolarization. The decrease in conductance observed during spontaneous activity in Ref.
[23] seems incompatible with previous experimental results [22], and it is still unclear which differences between experimental preparations underlie these discrepancies. While a resolution of this discrepancy will require additional experimental work, we point out that our work is relevant for both scenarios. In fact, our analysis shows that voltage-dependent currents, such as those produced by voltage-gated channels [23] or during spike generation [44], affect quantitatively, but not qualitatively, the single-neuron response. Moreover, our theory explains the mechanisms shaping response properties at finite coupling and identifies a scaling relation that preserves these properties in the strong-coupling limit. Therefore, the results described in this contribution appear to be a general property of networks of spiking neurons with conductance-based synapses, and they should be relevant for a wide range of single-neuron models and coupling strengths. Understanding the dynamical regime of operation of the cortex is an important open question in neuroscience, as it constrains which computations can be performed by a network [81]. Most theories of neural networks have been derived using rate models or current-based spiking neurons. Our work provides theoretical tools to investigate the dynamics of strongly coupled conductance-based neurons, and it suggests predictions that could be tested experimentally.