Alan Eric Akil1, Robert Rosenbaum2,3, Krešimir Josić1,4. 1. Department of Mathematics, University of Houston, Houston, Texas, United States of America. 2. Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, United States of America. 3. Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, United States of America. 4. Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America.
Abstract
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory–inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity–induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike–timing dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity–induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space, with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
Cortical neuronal activity is irregular, correlated, dominated by a low dimensional component [1–6], and characterized by a balance between excitation and inhibition [7–12]. Such balance is now widely thought to give rise to stable, irregular neural activity [1, 7, 12–18]. Early theoretical work focused on irregular asynchronous dynamics, with large networks exhibiting vanishing correlations [13, 19]. However, more recent extensions have shown how correlated dynamics can be generated both endogenously and exogenously, while preserving irregular single cell activity [20–28], demonstrating the existence of both asynchronous and correlated states in balanced networks.

Correlated firing can also produce changes in synaptic weights [29, 30]. For instance, spike–timing dependent plasticity (STDP) is driven by patterns in the timing of pre– and post–synaptic spikes [31, 32]. However, we still lack a theory that relates STDP to changes in neural activity, and the resulting neural computations. Hence, the effects of STDP are often analyzed through simulations [29, 33–35]. Analytical treatments have been proposed for a number of cases, starting with the description of mean synaptic dynamics of a single integrate–and–fire neuron receiving feed–forward input from a collection of Poisson neurons [36]. These results have been extended to small networks [34], and networks of Poisson neurons [37–40]. Other work provided analytical treatments of specific plasticity rules, such as homeostatic inhibitory plasticity [41, 42]. Using linear response and motif resumming techniques [43], Ocker et al. developed a theory describing the evolution of mean weights in recurrent neural networks of noisy integrate–and–fire neurons under STDP [44]. This approach relies on the assumptions that the input to individual cells is dominated by white noise, that local synaptic input is weak, and that the integral of the STDP function is small. Related results were obtained by treating neural firing as a Poisson process [37–39, 45]. In particular, Ravid Tannenbaum et al. showed that synfire chains and self–connected assemblies can emerge autonomously in recurrent networks of Poisson neurons [46]. Montangie et al. showed that a more realistic form of STDP based on spike triplets also leads to the autonomous emergence of assemblies [47].

Here, we develop a complementary theory describing the evolution of synaptic weights and associated mean rates in tightly balanced networks in both correlated and asynchronous states. We combine the mean–field theory of firing rates and correlations in balanced networks [13, 14, 23, 24, 48–50] with an averaging approach assuming a separation of timescales between changes in spiking activity and the evolution of synaptic weights [30]. We show how the weights and the network dynamics co–evolve under different classical rules, such as Hebbian plasticity, Kohonen’s rule, and a form of inhibitory plasticity [31, 32, 41, 51, 52]. In general, the predictions of our theory agree well with empirical simulations. We also explain when the mean–field theory fails, leading to disagreements with simulations, and we develop a semi–analytic extension of the theory that explains these disagreements.

We find that spike train correlations, in general, have a mild effect on the synaptic weights and firing rates, in agreement with previous work [44, 53].
We also show that for some STDP rules, synaptic competition can introduce correlations between synaptic weights and firing rates, resulting in the formation of a stable manifold of fixed points in weight space, and hence asymptotic weight distributions that depend on the initial state. Finally, we apply this theory to show how inhibitory STDP [41] can lead to a reestablishment of an asynchronous, balanced state that is broken by optogenetic stimulation of a neuronal subpopulation [54]. We thus extend the classical theory of balanced networks to understand how synaptic plasticity shapes their dynamics.
Materials and methods
Review of mean–field theory in balanced networks
In mammals, local cortical networks can comprise thousands of cells, with each neuron receiving thousands of inputs from cells within the local network, and from other cortical layers, areas, and thalamus [55]. Predominantly excitatory, long–range inputs would lead to high, regular firing unless counteracted by local inhibition. To reproduce the sparse, irregular activity observed in cortex, model networks often exhibit a balance between excitatory and inhibitory inputs [13, 14, 19, 23, 48, 56–58]. This balance can be achieved robustly and without tuning when synaptic weights are scaled like 1/√N, where N is the network size [13, 14]. In this balanced state mean excitatory and inhibitory inputs cancel one another, and the activity is asynchronous [19]. Inhibitory inputs can also track excitation at the level of cell pairs, cancelling each other in time, and produce a correlated state [1, 23].

We first review the mean–field description of asynchronous and correlated states in balanced networks, and provide expressions for firing rates and spike count covariances averaged over subpopulations that accurately describe networks of more than a few thousand neurons [13, 14, 23, 24, 48–50]: Let N be the total number of neurons in a recurrent network composed of Ne excitatory and Ni inhibitory neurons. Cells in this recurrent network also receive input from Nx external Poisson neurons firing at rate rx, and with pairwise correlation cx (See Fig 1, and S1 Appendix for more details). We assume that Nb scales linearly with N for b = e, i, x. Let p be the probability of a synaptic connection, and Jab = jab/√N the weight of a synaptic connection from a neuron in population b = e, i, x to a neuron in population a = e, i. For simplicity we assume that both the probabilities, p, and weights, Jab, are constant across pairs of subpopulations.
Fig 1
A plastic, balanced network in asynchronous and correlated regimes.
: A recurrent network of excitatory, E, and inhibitory, I, neurons is driven by an external feedforward layer, X, of correlated Poisson neurons. : Raster plot of all neurons in a network of N = 5000 neurons in an asynchronous state. E cells in blue, I neurons in red. : Same as (), but in a correlated state. : Mean steady state EE synaptic weight, j, in an asynchronous state. : Mean E and I firing rates for different network sizes, N, in an asynchronous state. : Mean EE, II and EI spike count covariances in an asynchronous state. : Same as () but for a network in a correlated state. Solid lines represent simulations, and dashed lines are values obtained using Eqs (3), (5) and (21). All empirical results were averaged over 10 realizations. In the asynchronous state cx = 0, and in the correlated state cx = 0.1. Unless otherwise stated, colors carry the same meaning in all figures.
We define the recurrent, and feedforward mean–field connectivity matrices as
W = [[wee, wei], [wie, wii]],  Wx = [wex, wix]ᵀ,

where wab = p jab qb and qb = Nb/N is the fraction of neurons in population b. Let r̄ = [re, ri] be the vector of mean excitatory and inhibitory firing rates. The mean external and recurrent inputs to a cell are then X = √N Wx rx and R = √N W r̄, respectively, and the mean total synaptic input to any neuron is given by I = X + R. We next make the ansatz that in the balanced state the mean input and firing rates remain finite as the network grows, i.e., I, r̄ ∼ O(1) [13, 14, 23, 48–50]. This is only achieved when external and recurrent synaptic inputs are in balance, that is when

r̄ → −W⁻¹ Wx rx as N → ∞,  (3)
provided the resulting rates are also non–negative [13, 14]. Eq (3) holds in both the asynchronous and correlated states.

We define the mean spike count covariance matrix as:
where Cab is the mean spike count covariance between neurons in populations a = e, i and b = e, i, respectively, counted over time windows of size Twin. Throughout all simulations and theoretical predictions we set Twin = 250 ms; however, the theory accommodates other time window sizes. From [23, 24] it follows that in large networks, to leading order in 1/N (See [19, 59–61] for similar expressions derived for similar models),
In Eq (5), Fa is the Fano factor of the spike counts averaged over neurons in populations a = e, i over time windows of size Twin. The second term in Eq (5) accounts for intrinsically generated covariability [23] within excitatory or inhibitory populations (note that this term does not refer to variances, but instead to mean covariances between spike trains in the same subpopulations). The matrix Γ has the same structure as C and represents the covariance between external inputs (See Baker et al., 2019 Appendix A for a detailed derivation of this term [23]).

If external neural activity is uncorrelated (cx = 0), the mean spike count covariances are O(1/N), so that they vanish in large networks, and the network is in an asynchronous regime. If external neural activity is correlated with mean pairwise correlation coefficient cx ≠ 0, then to leading order in N the mean covariances remain O(1), and the network is in a correlated state. Eq (5) can be extended to cross–spectral densities as shown in S1 Appendix and by Baker et al. [23].
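As an illustration of the mean–field rate prediction, Eq (3) reduces to a 2 × 2 linear solve once the mean–field connectivity matrices are specified. The sketch below (Python/NumPy) uses illustrative connection weights, not the parameter values of the simulations reported here.

```python
import numpy as np

# Mean-field connectivity with entries w_ab = p * j_ab * q_b, as defined above.
# The numerical weight values are illustrative placeholders.
p = 0.1
q_e, q_i, q_x = 0.8, 0.2, 0.2
j_ee, j_ei, j_ie, j_ii = 25.0, -150.0, 112.5, -250.0
j_ex, j_ix = 180.0, 135.0

W = p * np.array([[j_ee * q_e, j_ei * q_i],
                  [j_ie * q_e, j_ii * q_i]])
Wx = p * np.array([j_ex * q_x, j_ix * q_x])

r_x = 0.01  # external rate in spikes/ms

# Balanced-state rates, Eq (3): r = -W^{-1} Wx r_x (finite as N grows)
r = -np.linalg.solve(W, Wx * r_x)
print("predicted (r_e, r_i) in spikes/ms:", r)   # both entries should be positive
```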
Network model
For illustration, we used recurrent networks of N exponential integrate–and–fire (EIF) neurons (See S1 Appendix), 80% of which were excitatory (E) and 20% inhibitory (I) [23, 24, 35, 54, 62]. The initial connectivity structure was random: each connection was made independently with probability p. Initial synaptic weights were therefore independent. We set p = 0.1 for all a, b = e, i, and denote by J_{jk}^{ab} the weight of a synapse between presynaptic neuron k in population b = e, i, x and postsynaptic neuron j in population a = e, i. We modeled postsynaptic currents using an exponential kernel, K_a(t) ∝ e^{−t/τa} H(t), for each a = e, i, x, where H(t) is the Heaviside function.
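A minimal sketch of how such a random initial connectivity can be generated, here for the EE block only; the unscaled weight value j_ee is an illustrative placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5000                     # network size (as in Fig 1)
N_e = int(0.8 * N)           # excitatory cells
p = 0.1                      # connection probability
j_ee = 25.0                  # unscaled EE weight; illustrative value

# Bernoulli(p) connectivity; every existing EE synapse starts at j_ee / sqrt(N).
mask_ee = rng.random((N_e, N_e)) < p
J_ee = (j_ee / np.sqrt(N)) * mask_ee

print("mean number of EE inputs per cell:", mask_ee.sum(axis=1).mean())
```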
Synaptic plasticity rules
To model activity–dependent changes in synaptic weights we used eligibility traces to define the propensity of a synapse to change [63–67]. The eligibility trace, x_j^a, of neuron j in population a evolves according to

dx_j^a/dt = −x_j^a/τSTDP + S_j^a(t),

for a = e, i, where S_j^a(t) is the sequence of spikes of neuron j. The eligibility trace, and the time constant, τSTDP, define a period following a spike in the pre– or post–synaptic cell during which a synapse can be modified by a spike in its counterpart.

Our theory of synaptic plasticity allows any synaptic weight to be subject to constant drift, changes due to pre– or post–synaptic activity only, and/or changes due to pairwise interactions in activity between the pre– and post–synaptic cells (zero, first, and second order terms, respectively, in Eq (10)). The theory can be extended to account for other types of interactions. Each synaptic weight therefore evolves according to a generalized STDP rule:
where η is the learning rate that defines the timescale of synaptic weight changes, A0, A, and B are functions of the synaptic weight, and a, b = e, i. For instance, one of the second–order terms represents the contribution due to a spike in post–synaptic cell j in the inhibitory subpopulation, weighted by the value of the eligibility trace of the pre–synaptic cell k in the excitatory subpopulation. Higher order interactions are at the heart of triplet rules [47, 68–70], and other types of interactions may also be important, e.g., for calcium–based update rules [71, 72]. For simplicity we here focus on pairwise interactions between spikes and eligibility traces, and leave extensions to more complex rules for future work.

This general formulation captures a range of classical plasticity rules as special cases: Table 1 shows that different choices of parameters yield Hebbian [31, 32, 51], anti–Hebbian, as well as Oja’s [73], and other rules (See Fig A in S1 Appendix for illustrations of the STDP function of each rule in Table 1). The BCM rule [68], and other rules [69, 70] that depend on interactions beyond second order will be considered elsewhere.
Table 1
Examples of STDP rules.
A number of different plasticity rules can be obtained as special cases of the general form given in Eq (10).
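To make the structure of Eq (10) concrete, the sketch below implements a purely pairwise (second–order) update driven by pre– and post–synaptic eligibility traces for a single synapse. The coefficients B_pre and B_post are illustrative placeholders rather than the parameters of any rule in Table 1.

```python
import numpy as np

tau_stdp = 200.0      # ms, eligibility-trace time constant
eta = 1e-4            # learning rate
B_post = +1.0         # weight change when the post-synaptic cell fires
B_pre = -0.5          # weight change when the pre-synaptic cell fires

def run_pair(pre_spikes, post_spikes, T, dt=0.1, w0=1.0):
    """Evolve a single synapse given pre/post spike times (in ms)."""
    n = int(T / dt)
    pre = np.zeros(n)
    post = np.zeros(n)
    pre[(np.asarray(pre_spikes) / dt).astype(int)] = 1.0
    post[(np.asarray(post_spikes) / dt).astype(int)] = 1.0
    x_pre = x_post = 0.0
    w = w0
    for t in range(n):
        # eligibility traces decay and jump at spikes
        x_pre += dt * (-x_pre / tau_stdp) + pre[t]
        x_post += dt * (-x_post / tau_stdp) + post[t]
        # pairwise updates: a post spike reads the pre trace, and vice versa
        w += eta * (B_post * x_pre * post[t] + B_pre * x_post * pre[t])
    return w

# pre-before-post pairings potentiate under these illustrative coefficients
print(run_pair(pre_spikes=[100.0, 300.0], post_spikes=[110.0, 310.0], T=500.0))
```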
Dynamics of mean synaptic weights in balanced networks
To understand how the dynamics of the network and synaptic weights co–evolve, we derived effective equations for the firing rates, spike count covariances, and synaptic weights using Eqs (3) and (5). The following is an outline; details can be found in S1 Appendix.

We assumed that changes in synaptic weights occur on longer timescales than the dynamics of the eligibility traces and the correlation timescale, i.e., 1/η ≫ Twin [30, 38–40, 45, 74]. Let ΔT be a time increment larger than the timescale of eligibility traces, τSTDP, and Twin, but smaller than 1/η, so that the difference quotient of the weights and time is given by [30]:
The difference in timescales allows us to assume that the firing rates and covariances are in quasi–equilibrium. We used 1/η = 10^5 ms, and τSTDP = 200 ms, with correlation time window width Twin = 250 ms. Our derivations require τSTDP ≪ ΔT ≪ 1/ηab; however, an exact numerical value for ΔT is neither used nor needed (See S1 Appendix: “What happens when timescales are not separated?”). Replacing the terms on the right hand side of Eq (11) with their averages over time, and over different network subpopulations, we obtain the following mean–field equation for the weights:
where
and K̃(f) is the Fourier transform of the synaptic kernel, K(t). Recall that ⟨Sα, Sβ⟩(f) is the average cross spectral density of spike trains in populations α and β. The cross spectral density (CSD) of a pair of spike trains is defined as the Fourier transform of the covariance function between the two spike trains; when evaluated at f = 0, the CSD is proportional to the spike count covariance between the two spike trains (See S1 Appendix).

For example, classical Hebbian EE plasticity in Table 1 leads to the following mean–field equation,
Eqs (3), (5) and (12) thus self–consistently describe the macroscopic dynamics of the balanced network. There are two approaches to analyzing this coupled system of ordinary differential equations: (1) solve directly for the steady–states of the system; or (2) apply numerical integration to obtain the evolution of the system in time. To obtain the equilibria, we first express the firing rates and covariances in terms of the plastic weight J using the mean–field description of the balanced network, Eqs (3) and (5). We next substitute the results into Eq (12), set the time derivative of the weight to zero, and find the roots. We then use the resulting mean synaptic weight, the root of Eq (12), to obtain the corresponding rates and covariances from Eqs (3) and (5). Alternatively, we can solve the system iteratively over time and obtain the time evolution of rates, covariances, and weights. Starting at an arbitrary value of J(t), we proceed in the same way as in the first approach, but instead of setting the derivative to zero, we use J(t) to compute the value of the derivative at time t, and use it to update the mean weight at the next time step, J(t + ΔT). We then use J(t + ΔT) to update the rates and covariances. We repeat this process until convergence (See S1 Appendix: “Transient dynamics of synaptic weights” for sample trajectories under different rules, and for our criterion to determine stationarity of synaptic weights).
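A minimal sketch of the second, iterative approach, assuming the plasticity rule and the mean–field matrices are supplied by the user; `plasticity_rhs`, `build_W`, and the toy rule in the usage example are placeholders introduced here for illustration, not functions or parameters from the paper.

```python
import numpy as np

def balanced_rates(W, Wx, r_x):
    """Quasi-equilibrium rates from Eq (3): r = -W^{-1} Wx r_x."""
    return -np.linalg.solve(W, Wx * r_x)

def integrate_mean_weight(j0, plasticity_rhs, build_W, Wx, r_x, dT=1e3, n_steps=200):
    """Forward-Euler iteration of the mean plastic weight.

    plasticity_rhs(j, rates) stands in for the population-averaged right-hand
    side of Eq (12); build_W(j) returns the mean-field connectivity matrix with
    the plastic entry set to j. Rates are recomputed every step, reflecting the
    assumed quasi-equilibrium of the fast network dynamics.
    """
    j = j0
    traj = []
    for _ in range(n_steps):
        rates = balanced_rates(build_W(j), Wx, r_x)   # fast variables at equilibrium
        j = j + dT * plasticity_rhs(j, rates)         # slow weight update
        traj.append(j)
    return np.array(traj)

# Toy usage with a made-up relaxation rule (not a rule from Table 1):
traj = integrate_mean_weight(
    j0=10.0,
    plasticity_rhs=lambda j, r: 1e-3 * (20.0 * r[0] - j),
    build_W=lambda j: 0.1 * np.array([[0.8 * j, -30.0], [90.0, -50.0]]),
    Wx=np.array([3.6, 2.7]),
    r_x=0.01)
print("mean EE weight converges to:", traj[-1])
```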
Perturbative analysis
We next show how rates and spike count covariances are impacted by perturbations in synaptic weights. At steady state, the average firing rates in a balanced network with mean–field connectivity matrix W are given by Eq (3), r̄ = −W⁻¹ Wx rx. We assume that the mean–field connectivity matrix receives a small perturbation. Using Neumann’s approximation [75], (I + H)⁻¹ ≈ I − H, which holds to first order for any square matrix H with ‖H‖ < 1, and ignoring terms of second and higher order, we obtain a first–order approximation of the inverse of the perturbed connectivity matrix, where I is the identity matrix of appropriate size. We use this approximation to estimate the perturbed rates and spike count covariances using Eqs (3) and (5). The 2 × 2 mean–field connectivity matrix, W, must be non–singular for the balanced state to exist and for Neumann’s approximation to hold [13]. While the non–singularity of W is a non–restrictive condition for two neural populations, the mean–field connectivity matrix can become singular in some models with several neural sub–populations [49, 54].
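As a concrete check of the approximation, the sketch below compares the exact change in balanced rates with the first-order Neumann estimate when a single entry of an illustrative mean-field matrix is perturbed; all numerical values are assumptions.

```python
import numpy as np

W = np.array([[2.0, -3.0], [9.0, -5.0]])     # illustrative mean-field matrix
Wx = np.array([3.6, 2.7])
r_x = 0.01

def rates(Wm):
    return -np.linalg.solve(Wm, Wx * r_x)

eps = 0.05
dW = np.array([[eps, 0.0], [0.0, 0.0]])      # perturb the EE entry only

r0 = rates(W)
r_exact = rates(W + dW)

# Neumann: (W + dW)^{-1} = (I + W^{-1} dW)^{-1} W^{-1} ≈ (I - W^{-1} dW) W^{-1}
W_inv = np.linalg.inv(W)
r_approx = -(np.eye(2) - W_inv @ dW) @ W_inv @ (Wx * r_x)

print("exact change:      ", r_exact - r0)
print("first-order change:", r_approx - r0)
```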
Comparison of theory with numerical experiments
We define the spike trains of individual neurons in the population as sums of Dirac delta functions, Si(t) = Σj δ(t − ti,j), where ti,j is the time of the j-th spike of neuron i. Assuming the system has reached equilibrium, we partition the interval over which activity has been measured into K equal subintervals, and define the spike count covariance between two cells as the empirical covariance of their spike counts across these subintervals, where n_i^k denotes the spike count of neuron i in subinterval, or time window, k, and n̄_i is the average spike count over all subintervals. In simulations we used subintervals of size Twin = 200 ms, although the theory applies to sufficiently long subintervals, and can be extended to shorter intervals as well. The spike count covariance thus captures shared fluctuations in firing rates between the two neurons [76].
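A minimal sketch of this estimator, assuming spike times are given in milliseconds; the function name, the normalization by the number of windows, and the synthetic spike trains are illustrative choices.

```python
import numpy as np

def spike_count_cov(spikes_i, spikes_j, T, T_win=200.0):
    """Empirical spike count covariance between two spike-time arrays (ms).

    The interval [0, T] is split into windows of length T_win, spikes are
    counted per window, and the covariance of the two count sequences is
    returned (normalizing by the number of windows).
    """
    edges = np.arange(0.0, T + T_win, T_win)
    n_i, _ = np.histogram(spikes_i, bins=edges)
    n_j, _ = np.histogram(spikes_j, bins=edges)
    return np.mean((n_i - n_i.mean()) * (n_j - n_j.mean()))

# two independent, illustrative spike trains: the covariance should be near zero
rng = np.random.default_rng(1)
s1 = np.sort(rng.uniform(0.0, 1e5, 500))
s2 = np.sort(rng.uniform(0.0, 1e5, 800))
print(spike_count_cov(s1, s2, T=1e5))
```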
Results
We next apply the theory described in the Methods to show how synaptic weights coevolve with firing rates in balanced networks under different plasticity rules. We start with an example of excitatory plasticity which has been the main focus of experimental and theoretical studies, and show that our theory can be used to determine the stability of balanced networks under commonly used excitatory STDP rules. More recently, inhibitory plasticity has been proposed to play an important role in regulating the dynamics of neural networks. Our approach provides a theoretical foundation for some of these findings. Finally, we show that our theory can be used to make experimental predictions by considering a plastic network under optogenetic stimulation, and demonstrating that our framework can describe the dynamics of networks in such biologically relevant regimes.
Balanced networks under excitatory plasticity
Excitatory plasticity plays a central role in theories of learning, but can lead to instabilities [31, 32, 34, 44]. Our theory predicts the stability of the balanced state, the fixed point of the system, and the effect of the plasticity rule on the dynamics of the network.

We consider a network in a correlated state with excitatory–to–excitatory (EE) weights that evolve according to Kohonen’s rule [52, 77]. This rule was first introduced in artificial neural networks [78], and was later shown to lead to the formation of self–organizing maps in model biological networks [78, 79]. We use our theory to show that Kohonen’s rule leads to stable asynchronous or correlated balanced states, and verify these predictions in simulations.

Kohonen’s rule can be implemented by letting EE synaptic weights evolve according to [52] (See Table 1),
where β > 0 is a parameter that can change the fixed point of the system (See S1 Appendix: “Saddle–node bifurcation of excitatory weights in Kohonen’s STDP rule”). This STDP rule is competitive: weight updates only occur when the pre–synaptic neuron is active, so that the most active pre–synaptic neurons change their synapses more than less active pre–synaptic cells.

The mean–field approximation describing the evolution of synaptic weights given in Eq (12) has the form:
The fixed point of Eq (21) can be found by substituting the expressions for the rates and covariances in the balanced state (Eqs (3) and (5)). The steady–state rates and covariances can then be recovered from the resulting weights.
Equilibria of correlated balanced networks under excitatory STDP
Our theory predicts that the network attains a stable balanced state, and it predicts the average rates, weights, and covariances at this equilibrium (Fig 1) (See S1 Appendix: “Statistics and dynamics of balanced networks under pairwise STDP rules” for empirical distributions under Kohonen’s and other rules). These predictions agree with numerical simulations in both the asynchronous and correlated states (Fig 1). As expected, predictions improve with network size, N, and spike count covariances scale as 1/N in the asynchronous state (Fig 1). Similar agreement holds in the correlated state, including the impact of the correction introduced in Eq (21) (Fig 1).

The predictions of the theory hold in all cases we tested (See S1 Appendix: “Asymptotic behavior in weight–dependent Hebbian STDP”). Understanding when plasticity will support a stable balanced state allows one to implement Kohonen’s rule in complex contexts and tasks, without the emergence of pathological states (See S1 Appendix: “Classical Hebbian STDP leads to unstable dynamics”).
Dynamics of correlated balanced networks under excitatory STDP
We next asked whether and how the equilibrium and its stability are affected by correlated inputs to a plastic balanced network. In particular, we used our theory to determine whether changes in synaptic weights are driven predominantly by the firing rates of the pre– and post–synaptic cells, or by correlations in their activity. We also asked whether correlations in neural activity can change the equilibrium, the speed of convergence to the equilibrium, or both.

We first address the role of correlations. As shown in the previous section, our theory predicts that a plastic balanced network remains stable under Kohonen’s rule, and that the mean EE weights increase by 10–20% when input correlations are increased. Both predictions were confirmed by simulations (Fig 2). The theory also predicted that this increase in synaptic weights results in negligible changes in firing rates, which simulations again confirmed (Fig 2).
Fig 2
Spike count covariances mildly impact the fixed point of synaptic weights and firing rates.
: The rate of change of EE weights as function of the weight, jee, at different levels of input correlations, cx. : Mean steady–state EE synaptic weight for a range of input correlations, cx. : Mean E and I firing rates as a function of input correlations. : Same as () but for an EE STDP rule with all coefficients involving order 2 interactions set equal to 1, and all other coefficients set equal to zero. : Mean E and I firing rates as a function of mean EE synaptic weights. : Mean spike count covariances between E spike trains, I spike trains, and between E–I spike trains as a function of EE synaptic weight, jee. Solid lines represent simulations (except in , ), dashed lines are values obtained from theory (Eqs (3), (5) and (21)), and dotted lines were obtained from the perturbative analysis. Note that in all panels, ‘Exc weight’ refers to jee rather than Jee, as the former does not depend on N.
How large is the impact of correlations in plastic balanced networks more generally? To address this question, we assumed that only pairwise interactions affect EE synapses, as first order interactions depend only on rates after averaging. We thus set B ≡ 1, and all other coefficients to zero in Eq (10). While the network does not reach a stable state under this arbitrary plasticity rule, it allows us to estimate the contribution of rates and covariances to the evolution of synaptic weights. Here B can have any nonzero value, since it scales both the rate and covariance terms. Under these conditions, our theory predicts that the rate term is at least an order of magnitude larger than the correlation term (even when rates themselves are small, i.e., when jee is small), so correlations have only a weak impact on the dynamics of synaptic weights (Fig 2). Therefore, our theory predicts that, in general, changes in synaptic weights will largely be driven by changes in firing rate patterns, rather than changes in pairwise correlations.

We next ask the opposite question: How do changes in synaptic weights impact firing rates and covariances? The full theory (see Eqs (3) and (5), and the perturbative analysis in Materials and methods) predicts that the potentiation of EE weights leads to large increases in rates and spike count covariances. This prediction was again confirmed by numerical simulations (Fig 2). This observation holds generally: STDP rules that result in large changes in synaptic weights will produce large changes in rates and covariances.

Our theory thus shows that, in general, weight dynamics can be moderately affected by correlations when these are large enough (See S1 Appendix: “General impact of correlations in weight dynamics” for a similar analysis of Classical Hebbian STDP). In turn, changes in synaptic weights will generally change the structure of correlated activity in a balanced network.
Balanced networks under inhibitory plasticity
Next, we show that in its basic form our theory can fail in networks subject to inhibitory STDP, and how the theory can be extended to capture such dynamics. The failure is due to correlations between weights and pre–synaptic rates, which are typically ignored [13, 14, 23, 48–50], but can cause the mean–field description of network dynamics to become inaccurate. This is similar to the breakdown of balanced state theory in the presence of correlations between in– and out–degrees discussed by Vegué and Roxin, 2019 [80].

To illustrate this, we consider a balanced network subject to homeostatic plasticity [41]. This type of plasticity has been shown to stabilize the asynchronous balanced state and conjectured to play a role in the maintenance of memories [35, 41, 81]. Following [41] we assume that EI weights evolve according to:
where αe is a constant that determines the target firing rates of E cells, and Jnorm is a normalization constant. Note that Jnorm < 0, so the fraction in Eq (22) is positive as long as the EI weights remain negative. In a departure from the rule originally proposed by Vogels et al. [41], we chose to multiply the time derivative by the current weight value. This modification creates an unstable fixed point at zero, prevents EI weights from changing signs, and keeps the analysis mathematically tractable (See S1 Appendix: “Modification to the inhibitory STDP rule” for details). An alternative way to prevent weights from changing sign would be to place a hard bound at zero, but this would create a discontinuity in the vector field of Jei, complicating the analysis.

Under the rule described by Eq (22) a lone pre–synaptic spike depresses the synapse, while near–coincident pre– and post–synaptic spikes potentiate the synapse (See Fig A in S1 Appendix). Changes in EI weights steer the rates of individual excitatory cells to the target value ρe. Indeed, individual EI weights are potentiated if post–synaptic firing rates are higher than ρe, and depressed if the rate is below ρe. Our theory predicts that the network converges to a stable balanced state (Fig 3). Correlations again have only a mild impact on the evolution of synaptic weights (Fig 3).
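To illustrate why this rule is homeostatic, the sketch below writes a Vogels–Sprekeler–style update in terms of the inhibitory weight magnitude (in our convention EI weights are negative, so the signed updates differ by a sign), together with a rate-level iteration of its average drift. The relation αe = 2ρeτSTDP follows Vogels et al. [41]; all numerical values, and the assumption that inhibition reduces the post-synaptic rate linearly, are illustrative.

```python
tau_stdp = 200.0                 # ms, eligibility-trace time constant
rho_e = 0.005                    # target excitatory rate (spikes/ms)
alpha_e = 2 * rho_e * tau_stdp   # depression offset, following Vogels et al. [41]
eta = 1e-3                       # learning rate

def on_pre_spike(w, x_post):
    """Inhibitory (pre) spike: depress unless the post trace exceeds alpha_e."""
    return w + eta * w * (x_post - alpha_e)

def on_post_spike(w, x_pre):
    """Excitatory (post) spike: potentiate in proportion to the pre trace."""
    return w + eta * w * x_pre

# Average drift of the weight magnitude, ignoring pre/post correlations:
# E[x_post] = r_post * tau, E[x_pre] = r_pre * tau, so
#   dw/dt ~ eta * w * r_pre * tau * (2 * r_post - 2 * rho_e),
# which vanishes exactly when the post-synaptic rate reaches the target.
r_pre = 0.01                     # inhibitory (pre-synaptic) rate, spikes/ms
w, r0, g = 1.0, 0.02, 0.01       # assume inhibition lowers the post rate as r0 - g*w
for _ in range(200_000):
    r_post = max(r0 - g * w, 0.0)
    w += eta * w * r_pre * tau_stdp * (2 * r_post - 2 * rho_e)
print("post rate after adaptation:", r0 - g * w, "target:", rho_e)
```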
Fig 3
Correlations between synaptic weights and inhibitory rates lead to the formation of a manifold in weight space.
: The rate of change of EI weights as a function of the weights themselves. The contributions of the covariance (blue) is considerably smaller than the contribution of the rate (red), and the theory predicts a stable fixed point. : Evolution of inhibitory weights showing that different initial weights converge to different fixed points. Also, weights starting at different initial conditions converge to equilibrium at different times for fixed 1/ηei = 105 ms. : A manifold of fixed points in space emerges due to correlations between weights and rates. Solid line represents simulations, dashed line are values obtained from the modified theory (Eqs (23) and (5), and mean–field equation for weights under iSTDP in Table 1). Inset: Final distribution of EI weights for a network with initial weights (yellow), and (blue). Modified theory predicts the manifold of fixed points. : Same as , but obtained from simulations. Lines represent trajectories from different initial weights (red dots). Inset: Total recurrent input to E neurons, for a range of initial weights. Mean recurrent input to E cells, . The mean input deviates from the total input due to emergent correlations between weights and rates, Cov(Jei, ri) = Re,total − Re,mean. : The weights of individual EI synapses corresponding to the same post–synaptic E cell as a function of the equilibrium firing rates of pre–synaptic I neurons. Each color represents a different simulation of the network with different initial EI weight. Equilibrium inhibitory weights and pre–synaptic rates are correlated (Blue: R2 = 0.952, Red: R2 = 0.9865, Yellow: R2 = 0.979). : Sample trajectories of the jei − jii system for a network of N = 104 neurons in an asynchronous state. Simulations with different initial weights (dashed lines), converge to a fixed point close to the one predicted by the theory (solid line).
Although our theory predicts a single stable fixed point for the average EI weight, simulations show that weights converge to a different average depending on the initial EI weights (Fig 3, solid lines). A manifold of stable fixed points emerges due to synaptic competition, which is a consequence of heterogeneity in inhibitory firing rates in the network: Weights of highly active pre–synaptic inhibitory cells are potentiated more strongly compared to those of lower firing cells (Fig 3). Thus while inhibitory rates and EI weights are initially uncorrelated, correlations emerge as the excitatory rates approach their target. Networks with different initial EI synaptic weights converge to different final distributions, and the emergent correlations between weights and rates drive the system to different fixed points (Fig 3).

We used a semi–analytical approach to confirm that correlations between weights and rates explain the discrepancy between the predictions of the mean–field theory and simulations. To do so we introduced a correlation–dependent correction term into the expression for the rates:
where the correction involves the covariance between EI weights and inhibitory rates, Cov(Jei, ri). The average covariances between weights and rates obtained numerically explain the departure from the mean–field predictions (Fig 3). The corrected equation (Eq (23)) predicts mean equilibrium weights that agree well with simulations (Fig 3, dashed line).

We next asked whether the mean–field theory provides a good description of network dynamics in the absence of correlations between weights and rates. Such correlations disappear in a network with homogeneous inhibitory firing rates. Finding an initial distribution of weights that results in a balanced state with uniform inhibitory firing rates is non–trivial, and may not be possible outside of unstable regimes exhibiting rate–chaos where mean–field theory ceases to be valid [82]. However, allowing II synapses to evolve under the same plasticity rule we used for EI synapses can homogenize inhibitory firing rates: If we let
all inhibitory responses approach a common target rate, effectively removing the variability in I rates. The evolution of the mean II and EI synaptic weights is now given by
We conjectured that if inhibitory rates converge to a common target, synaptic competition would be eliminated, and no correlations between weights and rates would emerge. This in turn would remove the main obstacle to the validity of a mean–field description. The fixed point of these equations can again be obtained using Eqs (3) and (5), which predict that the network remains in a stable balanced state (asynchronous or correlated). We also require ηei ≥ ηii, since when ηei is much slower than ηii, the network becomes unstable as homogeneous inhibitory weights and rates are not able to stabilize the heterogeneous distribution of E activity (See S1 Appendix: “Stability of iSTDP in EI and II connections”). We chose the same STDP timescale for both EI and II synapses, and our predictions agree with the results of simulations (Fig 3). The stable manifold of fixed points is replaced by a single stable fixed point, and the average weights and rates approach a state that is independent of the initial weight distribution.

This model of inhibitory plasticity is likely a large oversimplification. Synapses of different interneuron subtypes are likely subject to different plasticity rules operating on different timescales [17, 83], and would therefore not lead to uniform inhibitory firing rates. The mean–field theory we presented here can be extended to account for multiple inhibitory subtypes with different plasticity rules.

We next show that the balanced network subject only to EI plasticity is robust to perturbing inputs. Our theory predicts, and simulations confirm, that this learning rule maintains balance when non–plastic networks do not, and it can return the network to its original state after stimulation.
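The covariance correction introduced in Eq (23) can be illustrated directly: the summed inhibitory input to a cell depends on the average of the products Jei·ri, which differs from the product of the averages by the covariance between weights and pre-synaptic rates. The sketch below uses synthetic weights and rates; all magnitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_inputs = 1000

# Synthetic pre-synaptic inhibitory rates and EI weights correlated with them,
# mimicking the state reached under the iSTDP rule. Only the comparison
# between the two averages matters here.
r_i = rng.gamma(shape=4.0, scale=0.003, size=n_inputs)              # spikes/ms
J_ei = -1.0 - 50.0 * r_i + 0.05 * rng.standard_normal(n_inputs)     # negative weights

input_per_synapse = np.mean(J_ei * r_i)     # what a post-synaptic cell actually receives
mean_field = np.mean(J_ei) * np.mean(r_i)   # what the uncorrected mean-field theory uses

print("E[J*r]            :", input_per_synapse)
print("E[J]*E[r]         :", mean_field)
print("difference (= Cov):", input_per_synapse - mean_field)
print("np.cov check      :", np.cov(J_ei, r_i, bias=True)[0, 1])
```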
Inhibitory plasticity adapts response to stimuli
Thus far, we analyzed the dynamics of plastic networks in isolation. However, cortical networks are constantly driven by sensory input, as well as feedback from other cortical and sub–cortical areas. We next ask whether and how balance is restored if a subset of pyramidal neurons is stimulated [54].

In experiments using optogenetics not all target neurons express the channelrhodopsin 2 (ChR2) protein [84–87]. Thus stimulation separates the target population, e.g., pyramidal cells, into stimulated and unstimulated subpopulations. Although classical mean–field theory produced singular solutions, Ebsch et al. showed that the theory can be extended, and that a non–classical balanced state is realized: Balance at the level of population averages (E and I) is maintained, while balance at the level of the three subpopulations is broken [54]. Since local connectivity is not tuned to account for the extra stimulation (optogenetics), local synaptic input cannot cancel external input to the individual subpopulations. However, the input averaged over the stimulated and unstimulated excitatory population is cancelled.

We show that inhibitory STDP, as described by Eq (22), can restore balance in the inputs to the stimulated and unstimulated subpopulations. Similarly, Vogels et al. showed numerically that such plasticity restores balance in memory networks [41]. Here, we present an accompanying theory that describes the evolution of rates, covariances, and weights before, during, and after stimulation, and confirm the predictions of the theory numerically.

We assume that a subpopulation of pyramidal neurons in a correlated balanced network receives a transient excitatory input. This could be a longer transient input from another subnetwork, or an experimentally applied stimulus. To model this drive, we assume that the network receives input from two populations of Poisson neurons, X1 and X2. The first population drives all neurons in the recurrent network, and was denoted by X above. The second population, X2, provides an additional input to a subset of excitatory cells in the network, for instance ChR2 expressing pyramidal neurons (Eexpr in Fig 4). The resulting connectivity matrix between the stimulated (e1), unstimulated (e2) and inhibitory (i) subpopulations, and the feed–forward input weight matrix have the form:
where the individual entries are defined as before.
Fig 4
Framework for STDP in balanced networks describes the dynamics of networks receiving optogenetic input.
: A recurrent network of excitatory, E, and inhibitory, I, neurons is driven by an external feedforward layer X1 of uncorrelated Poisson neurons. Neurons that express ChR2 are driven by optogenetic input, which is modeled as an extra layer of Poisson neurons denoted by X2. : Evolution of mean synaptic weights over the course of the experiment. : Evolution of mean firing rates. Inhibitory STDP maintains E rates near the target, . : Evolution of mean excitatory, external, inhibitory, and total currents. Balance is transiently disrupted at stimulus onset and offset, but it is quickly restored by iSTDP. : Mean spike count correlations before, during, and after stimulation remain very weak for all pairs. : The distribution of spike count correlations also remains nearly unchanged with weak mean correlations before, during, and after stimulation. Solid lines represent simulations, dashed lines are values obtained from theory (Eqs (3), (5), (27) and (28)).
The mean–field equation relating firing rates to average weights and input (Eq (3)) holds, with the vector of rates and the input vector now each having three components, one for each subpopulation. Similarly, mean spike count covariances are now represented by a 3 × 3 matrix that satisfies Eq (5). The mean E1I and E2I weights evolved according to the inhibitory plasticity rule described above (Eqs (27) and (28)).

We simulated a network of N = 10^4 neurons in an asynchronous state. A subpopulation of 4000 E cells receives transient input. Solving Eqs (27) and (28) predicts that inhibitory plasticity will alter EI synaptic weights so that the firing rates of both the Eexpr and the Enon-expr subpopulations approach the target firing rate before, during, and after stimulation. Once the network reaches steady state the mean inputs to each subpopulation cancel. Thus changes in EI weights restore balance at the level of individual subpopulations, or “detailed balance,” consistent with previous studies [41, 81]. Simulations confirm these predictions (Fig 4).

When the input is removed, the inhibitory weights onto cells in the Eexpr subpopulation converge to their pre–stimulus values, returning Eexpr rates to the target value, and reestablishing balance (Fig 4). Correlations remain low before, during, and after stimulation (Fig 4), suggesting that at equilibrium the network is in the asynchronous state.

Our theory thus describes how homeostatic inhibitory STDP increases the stability and robustness of balanced networks to perturbations by balancing inputs at the level of individual cells, maintaining balance in regimes in which non–plastic networks cannot. We presented an example in which only one subpopulation is stimulated. However, the theory can be extended to any number of subpopulations in asynchronous or correlated balanced networks receiving a variety of transient stimuli.
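The restoration of detailed balance described above can be illustrated by solving the balance conditions of the three subpopulations directly: once iSTDP pins both excitatory subpopulations at the target rate, the remaining equations determine the inhibitory rate and the EI weight onto each subpopulation. All parameter values in the sketch below are illustrative placeholders.

```python
# Check that subpopulation-specific EI weights can cancel the mean input to the
# stimulated (e1) and unstimulated (e2) excitatory cells separately, as the
# iSTDP rule does above. Rates are in spikes/ms; the connection probability
# cancels from the balance conditions, so it is omitted.
q_e1, q_e2, q_i = 0.4, 0.4, 0.2          # population fractions
q_x1, q_x2 = 0.2, 0.2                    # external populations X1 and X2
j_ee, j_ie, j_ii = 25.0, 112.5, -250.0
j_ex, j_ix = 180.0, 135.0
r_x1, r_x2 = 0.01, 0.005                 # X2 is the optogenetic drive to e1
rho_e = 0.005                            # target E rate enforced by iSTDP

# Balance of the inhibitory population fixes the inhibitory rate:
#   j_ie*(q_e1 + q_e2)*rho_e + j_ii*q_i*r_i + j_ix*q_x1*r_x1 = 0
r_i = -(j_ie * (q_e1 + q_e2) * rho_e + j_ix * q_x1 * r_x1) / (j_ii * q_i)

# Balance of e1 and e2 then fixes the EI weight onto each subpopulation:
ext_e1 = j_ex * q_x1 * r_x1 + j_ex * q_x2 * r_x2   # e1 receives X1 and X2
ext_e2 = j_ex * q_x1 * r_x1                        # e2 receives X1 only
rec_ee = j_ee * (q_e1 + q_e2) * rho_e
j_e1i = -(rec_ee + ext_e1) / (q_i * r_i)
j_e2i = -(rec_ee + ext_e2) / (q_i * r_i)

print("inhibitory rate:", r_i)
print("EI weight onto stimulated vs. unstimulated E:", j_e1i, j_e2i)
# Stronger (more negative) inhibition onto the stimulated cells restores
# "detailed balance" while both E subpopulations sit at the target rate.
```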
Discussion
We have developed an analytical framework that predicts the impact of a general class of STDP rules on the weights and dynamics of balanced networks. The balanced state is generally maintained under synaptic weight changes, as long as the rates remain bounded. Additionally, we found that correlations in spiking activity can introduce a small shift in the steady state, and change how quickly the fixed point is reached.

One of the most important issues in understanding neural dynamics is establishing conditions under which the network remains active, yet stable as synaptic weights change. The theory we developed can help us address these questions, but it does have limitations. Since we used a mean–field approach, we can only capture first moments. While mean weight stability may not imply stable network dynamics (consider the case when weight variance diverges in Classical Hebbian STDP in S1 Appendix), instability in the mean weights does imply that the network is also unstable.

As we mentioned, our theory can be used to show that small modifications to weight updates can stabilize different STDP rules. The question remains whether Hebbian EE plasticity can be stabilized through an interaction with STDP rules at different synapses. For instance, Litwin–Kumar and Doiron used a triplet voltage STDP rule that was stabilized by hard constraints and weight normalization to produce network assemblies [35]. This rule by itself led to stable but pathological behavior, and they introduced iSTDP to restore a balanced, asynchronous network state. While such voltage–based triplet rules are outside the scope of the present study, we could use extensions of the mean–field theory to describe the impact of second and higher order moments on the evolution of weights, and network dynamics [88]. Our theory suggests that classical pairwise Hebbian STDP cannot be stabilized by other STDP rules such as iSTDP.

In the tight balance regime, large excitatory and inhibitory inputs cancel on average [15], resulting in a fluctuation–driven state exhibiting irregular spiking. This cancellation is achieved when synaptic weights are scaled by 1/√N and external input is strong [13, 14, 19, 89]. Our main assumption was that synaptic weights change slowly compared to firing rates. As this assumption holds generally, we believe that our approach can be extended to other dynamical regimes. For instance, stabilized supralinear networks (SSNs) operate in a loosely balanced regime where the net input is comparable in size to the excitatory and inhibitory inputs, and firing rates depend nonlinearly on inputs. Balanced networks and SSNs can behave differently, as they operate in different regimes. However, as shown in [56], SSNs and balanced networks may be derived from the same model under appropriate choices of parameters. In other words, the tight balanced solution can be realized in an SSN, and SSN–like solutions can be attained in a balanced network. This suggests that an extension of our theory of plasticity rules to SSNs should be possible.

We obtained a mean–field description of the balanced network by averaging over the entire inhibitory and excitatory subpopulations, and a single external population providing correlated inputs.
As shown in the last section, the theory can naturally be extended to describe networks consisting of multiple physiologically or functionally distinct subpopulations, as well as multiple input sources.

The mean–field description cannot capture the effect of some second order STDP rules, as synaptic competition can correlate synaptic weights and pre–synaptic rates. We have shown that this can lead to different initial weight distributions converging to different equilibria. This can be interpreted as the maintenance of a previous state of the network over time.

The present theory relies on a separation of timescales between spiking dynamics and weight changes. Such timescale separation is supported by a number of experiments [30–32, 90, 91]. We show in the Appendix (See S1 Appendix: “What happens when timescales are not separated?”) that reducing this timescale separation and increasing the size of weight updates leads to a breakdown of the theory, and can result in network instability.

In mammalian brains, timescales of weight changes may not always be separated from rate and correlation timescales. The size and timescale of weight updates is likely to depend on many factors that can modulate STDP, such as spiking patterns, synapse type, brain area, network state, neuromodulation, and others. Separation of timescales may not be pronounced in certain non–cortical areas, such as the hippocampus, which can be rapidly modified [91]. For example, Petersen et al., 1998 and Froemke et al., 2006 found significant changes in putative synaptic weights over short timescales in hippocampal CA1/CA3 slices [92] and in visual cortical slices subject to multispike pre– and post–synaptic bursts [93], respectively. However, it is possible that the rate of change of synaptic weights may be overestimated in vitro [91].

How is our separation of timescales assumption affected when rapid compensatory processes are needed for homeostasis, given that experiments show that homeostasis is a process that is even slower than the timescale of STDP? Experimental evidence suggests that homeostatic processes can take hours or days [42, 81, 90, 91, 94–98]. On the other hand, theoretical models show that synaptic plasticity can be unstable in the presence of such slow homeostasis, and needs to be coupled with rapid compensatory processes such as inhibitory STDP [91, 94]. The separation of timescales in our theory still puts synaptic dynamics on the “fast” side of the spectrum, as it separates network dynamics that occur over milliseconds from weight dynamics that take place over seconds or minutes. Hence, the assumption of timescale separation is still valid in our implementation of homeostatic inhibitory plasticity.

In plastic networks, correlations between weights and other features such as in–degrees or out–degrees can emerge [80]. We have shown how the theory can capture the case in which synaptic weights and pre–synaptic rates are correlated. While we were not able to find analytical expressions for these correlations, we showed that a second–order correction is sufficient to explain the observed dynamics. Eventually, the mean–field theory would need to be extended to account for higher order network motifs and their potential correlations with synaptic weights and firing rates. This might be possible by extending our approach, but we leave these extensions for future work.

We have assumed that connection probabilities are homogeneous, which translates to a narrow distribution of in–degrees.
Cortical networks are heterogeneous, and broad distributions of in–degrees can break the classical balanced state [49, 50]. Balance can be restored with the introduction of homeostatic plasticity [49], or by including heterogeneous out–degrees correlated with in–degrees [50]. As we mentioned previously, in such cases our theory would need to be extended to account for possible emerging correlations between weights and in–degrees or out–degrees. We relegate such extensions to future work.

A natural question that arises is why correlations between weights and pre–synaptic rates only seem to play a role in iSTDP. In the examples of excitatory STDP we analyzed (Kohonen’s rule and the weight–dependent Hebbian rule), weights at equilibrium are determined by other parameters (weight–dependent Hebbian rule) or rates (Kohonen’s rule). Therefore weights are updated until those steady state values are achieved, yielding values independent of initial conditions. On the other hand, in the case of the inhibitory plasticity rule, inhibitory weights at equilibrium are determined by the firing rates alone. Since the firing rate vectors are lower–dimensional than the weight matrices, the equilibrium solution does not fully determine the weight matrices. This is shown in the Fig 3 inset, where different distributions of weights can result in the same equilibrium firing rate when weights and pre–synaptic rates become correlated.

We have shown that different plasticity rules can result in distinct firing rate distributions in different subpopulations. As shown by Mongillo et al. this can result in an increase or decrease in sensitivity of activity patterns and memories to perturbation of different synapse classes [99].

Partial stimulation of a population of E neurons has been shown to break balance due to the inability of the network to cancel inputs when weights are static [54]. Ebsch et al. showed how classical balanced network theory can be modified to account for effects of input perturbations that break the classical balanced state [54]. Vogels et al. [41] (in addition to subsequent studies [35, 42, 81, 100–102]) showed empirically using simulations that iSTDP can restore balance. We here provide a theoretical framework that describes the evolution of rates and weights before, during, and after a perturbation that breaks balance.

A number of mathematical theories have been proposed to describe the coevolution of weights, rates, and the structure of correlations under STDP in recurrent neural networks [37–39, 44–47, 74]. All of these approaches require knowledge of neurons’ transfer functions (f–I curves and/or correlation susceptibility functions). Often neurons are assumed to be Poissonian, and their responses to inputs (f–I curves) are prescribed [37–39, 45–47, 74]. Other work [44] uses Fokker–Planck techniques to compute transfer functions. These approaches rely on an assumption that the input to each neuron is relatively weak or dominated by Gaussian white noise [103, 104]. Efficient, direct Fokker–Planck approaches are not available for two–dimensional integrate–and–fire models such as those with adaptation currents, though one–dimensional approximations have been derived [105, 106]. Some previous work [44] also assumes that STDP curves are approximately anti–symmetric, i.e., there is a cancellation between the positive and negative parts of the curves (as in Panel A in Fig A in S1 Appendix).

Our approach uses balanced network theory to avoid the computation of transfer functions.
As such, the resulting theory does not require an assumption of weak synaptic interactions or dominant Gaussian white noise input, but can be applied to networks with highly non–Gaussian, temporally correlated input (such as the networks in the correlated state considered here). Moreover, the balanced network theory we used is accurate for a range of neuron models, including those with adaptation currents [54, 107], and different STDP curves (as in Panels B–F in Fig A in S1 Appendix). However, balanced network theory relies on large N asymptotics, which yielded accurate approximations for N ∼ 10,000 in our case (Fig 1), but becomes less accurate in smaller networks. Our approach is not appropriate for modeling neural circuits that do not exhibit excitatory–inhibitory balance, as observed in some disease states, some developmental stages, and in some sub–cortical neuronal networks. Finally, we used a mean–field approach that only yields approximations to population–averaged firing rates, synaptic weights, and covariances, while other approaches [37–39, 44–47, 74] give approximations to these quantities at the level of individual neurons. Despite these limitations, our analytical approach was sufficient for answering the questions related to the interaction between excitatory–inhibitory balance, correlated neuronal activity, and plasticity that we considered.

We found that even in the correlated state, when the network receives temporally correlated input, changes in synaptic weights are dominated by firing rates, with correlations playing a secondary role (See Fig 2). These findings are in agreement with previous work on STDP mentioned above [44, 53]. Results by Ocker et al. were obtained in recurrent neural networks in different dynamical regimes and under different assumptions (see above for more details), while Graupner et al. used networks of two neurons with varying natural firing patterns.

The theoretical framework we presented is flexible, and can describe more intricate dynamics in circuits containing multiple inhibitory subtypes, and multiple plasticity rules, as well as networks in different dynamical regimes. Moreover, the theory can be extended to plasticity rules that depend on third order interactions [69, 70], such as the BCM rule [68]. This may produce richer dynamics, and change the impact of correlations.
Conclusion
We developed a second–order theory of spike–timing dependent plasticity for classical asynchronous and correlated balanced networks [13, 14, 19, 23]. Assuming that synaptic weights change slowly, we derived a set of equations describing the evolution of firing rates, correlations, and synaptic weights in the network. We showed that, when the mean–field assumptions are satisfied, these equations accurately describe the network's state, stability, and dynamics. However, some plasticity rules, such as inhibitory STDP, can introduce correlations between synaptic weights and rates. Although these correlations violate the assumptions of mean–field theory, we showed how to account for and explain their effects. Additional plasticity rules can decorrelate synaptic weights and rates, reestablishing the validity of classical mean–field theory. Lastly, we showed that inhibitory STDP allows networks to maintain balance, and preserves the network's structure and dynamics when subsets of neurons are transiently stimulated. Our approach is flexible and can be extended to capture interactions between multiple populations subject to different plasticity rules.
Review of mean–field theory in balanced networks and supporting results.
This supplementary text contains (1) a review of classical mean–field theory of firing rates and spike count covariances in balanced networks; (2) the derivation of the equation that describes mean synaptic weights, a derivation of conditions under which synaptic weights do not change sign under inhibitory STDP, and general remarks on how synaptic weights can be affected by changes in rates or covariances; and (3) supporting results on the separation of timescales, transient synaptic weight dynamics, stability of weights under Kohonen's rule, statistics and stability of synaptic weights under several STDP rules, the general impact of correlations on synaptic weights, a network undergoing iSTDP in which synaptic weights change sign, and the stability of iSTDP acting on EI and II synaptic weights.

Fig A. STDP windows of different plasticity rules. A: Change in synaptic weights as a function of the relative timing of pre– and post–synaptic spikes for classical Hebbian STDP (same as the weight–dependent Hebbian rule). B: Same as A, but for inhibitory STDP. C: Same as A, but for Kohonen's rule when weights are below the parameter β. D: As in C, but when weights are above β. E: Same as A, but for Oja's rule when weights are below the parameter β. F: As in E, but when weights are above β. (PDF)