The estimation of parameters controlling the electrical properties of biological neurons is essential to determine their complement of ion channels and to understand the function of biological circuits. By synchronizing conductance models to time series observations of the membrane voltage, one may construct models capable of predicting neuronal dynamics. However, identifying the actual set of parameters of biological ion channels remains a formidable theoretical challenge. Here, we present a regularization method that improves convergence towards this optimal solution when data are noisy and the model is unknown. Our method relies on the existence of an offset in parameter space arising from the interplay between model nonlinearity and experimental error. By tuning this offset, we induce saddle-node bifurcations from sub-optimal to optimal solutions. This regularization method increases the probability of finding the optimal set of parameters from 67% to 94.3%. We also reduce parameter correlations by implementing adaptive sampling and stimulation protocols compatible with parameter identifiability requirements. Our results show that the optimal model parameters may be inferred from imperfect observations provided the conditions of observability and identifiability are fulfilled.
This is a PLOS Computational Biology Methods paper.
Introduction
Data assimilation is increasingly important in quantitative biology to infer unmeasurable microscopic quantities from the observation of macroscopic variables. It has successfully obtained quantitative neuron models by synchronizing model equations to membrane voltage oscillations [1-3] and inferred the connectivity of neuron populations from electroencephalographic recordings of brain activity [4, 5]. Models constructed from time series analysis have been reported to accept multi-valued parameter solutions [6, 7]. The identification of the optimal solution, among all others producing equivalent outcomes, is currently a roadblock on the way to resolving the phenotype of neurons and biocircuits. A different, yet related, problem is that, under ordinary conditions, biocircuits may exhibit functional overlap [8, 9], redundancies [10] and compensation [11]. This further increases the need to determine whether experimental protocols exist which can yield actual biocircuit parameters. Criteria for identifying the true parameters of such systems would allow classifying neuronal phenotypes [12, 13] and unknown cell types [2, 14], and understanding the effect of channelopathies on neuron dynamics [15] in Alzheimer's disease [16-18], seizures [19, 20], and Parkinson's disease [15, 21]. We now briefly review the theoretical challenges of estimating parameters with inverse methods before summarizing our solutions.

Neuron-based conductance models are described by nonlinear differential equations:

dx_l/dt = f_l(x_1(t), …, x_L(t); p_1, …, p_K; u(t)),  l = 1…L.   (1)
The x_1(t), …, x_L(t) are the state variables, including the membrane voltage, ionic gate variables and synaptic currents; the p_1, …, p_K are model parameters; and u(t) is the control vector whose components are the current protocols injected in one or more neurons. Takens' embedding theorem states that information about a dynamic system is preserved within the time series recording of its output over a finite duration [22, 23]. This warrants the existence of a unique parameter solution provided the following conditions are satisfied:

Observability

The system modelled by Eq 1 is observable if its initial conditions can be estimated from observations of its state dynamics over a finite time interval [24-26]. If the neuron membrane voltage, Vexp(t), is the state variable being measured, one defines a measurement function Vexp(t) = h(x_1(t), …, x_L(t), p_1, …, p_K) = x_1(t) which relates Vexp(t) to the L-dimensional state vector x and the K-dimensional parameter vector p. Since parameters may be viewed as constant state variables satisfying dp_k/dt = 0, the state of the system is an (L + K)-dimensional vector. A single measurement of Vexp at time t, however, does not contain all the information needed to determine all vector components. The missing information may be recovered by constructing an (L + K)-dimensional embedding vector based either on the derivatives of the observed state variable, x_1(t), dx_1/dt, …, or on its delay coordinates, x_1(t), …, x_1(t − (L + K)τ). This vector is then embedded in the time series Vexp(t) and its derivatives, or Vexp(t), …, Vexp(t − (L + K)τ), respectively. Takens' theorem specifies that the embedding space must have at least 2(L + K) + 1 dimensions for the system to be observable [22, 23, 27], although simulations by Parlitz et al. [25, 26] have shown that an embedding space equal to the number of state variables is generally sufficient.
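The delay-coordinate construction can be sketched in a few lines of code. The trace, embedding dimension and delay below are illustrative stand-ins (in practice the dimension would be set by L + K and the trace would be the recorded membrane voltage):

```python
import numpy as np

def delay_embedding(v, dim, tau):
    """Build delay-coordinate vectors [v(t), v(t+tau), ..., v(t+(dim-1)*tau)]
    from a scalar time series, following Takens' construction."""
    n = len(v) - (dim - 1) * tau
    return np.stack([v[i * tau : i * tau + n] for i in range(dim)], axis=1)

# toy "observed voltage" trace (hypothetical, not from the RVLM model)
t = np.linspace(0.0, 10.0, 2001)
v = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)

E = delay_embedding(v, dim=7, tau=5)  # dim would be L + K in practice
print(E.shape)                        # one row per embedded time point
```

Each row of `E` is one embedding vector; the observability condition amounts to requiring enough such rows (samples) relative to the embedding dimension.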
The time series which are assimilated usually hold n = 10,000 − 100,000 data points [1-3], which amply fulfils the observability requirement, n ≫ 2(L + K) + 1, for L + K < 100 typically. Twin experiments have verified that the assimilation of large data sets [28-30] infers the original model parameters of well-posed problems [31].

Identifiability

Any two parameter sets p1 ≠ p2 are identifiable if they result in different state trajectories x1(t) ≠ x2(t) given the same driving force, I(t), and the same initial conditions x1(0) = x2(0). Parameter identifiability is highly dependent on the choice of driving force [32]. However, the driving force criteria that make parameters identifiable have not been studied so far, partly because most investigations have focused on self-sustaining oscillators [8, 33].

Local minima in the cost function

Variational cost functions are often riddled with local minima [34] giving sub-optimal parameter solutions. The probability of a parameter search arriving at such a false solution is enhanced by the presence of experimental error, particularly when this error becomes comparable to or greater than the error introduced by sub-optimal parameters. In this situation, minimizing the cost function alone is unable to resolve optimal from sub-optimal parameter solutions. A regularization method is thus needed to recover the optimal solution.

Ill-defined problems

The model equations of biological neurons are unknown [1, 2]. The guessed conductance models carry model error whose effect on parameter solutions needs evaluating. Secondly, unknown models carry the risk of over-specifying ion channels and failing to meet identifiability criteria [5, 35, 36].

Here we address the problem of multi-valued solutions in the optimization of neuron-based conductance models. The effects of experimental and model error on these solutions are demonstrated from general considerations on the cost function.
We then use an exemplar conductance model to demonstrate the enhancement of convergence towards the optimum parameter solution. The model is a variant of the multichannel conductance models which have been shown to successfully assimilate biological neurons ranging from songbird neurons [1, 2] and hippocampal neurons [3, 37] to respiratory neurons [3]. The exemplar model displays the same multiplicity of sub-optimal solutions encountered in all neuron-based conductance models, including those derived from Hodgkin-Huxley equations [1, 2, 37, 38] or analog device equations [3, 39]. We began by performing random Monte Carlo simulations of the posterior distribution function (PDF) of model parameters estimated from noisy data. We show that the interplay of model nonlinearity, experimental error and model error shifts the maximum likelihood expectation (MLE) and standard deviation of the estimated parameters. The realization of noise across the measurement window is found to shift the locations of the local and global minima relative to one another on the data misfit error surface. Experimental error also tilts the principal axes of the surfaces of constant data misfit error centred on each minimum. We use these findings to regularize convergence towards the optimum parameter solution when parameter search would otherwise stop at a local minimum near the global minimum. This method increases the probability of convergence towards the true global minimum from 67% to 94.3%. We also reduced the correlations between parameters by over an order of magnitude by increasing the duration of the assimilation window while keeping the size of the problem constant. For this we introduced an adaptive sampling rate which applies a longer time step during intervals of sub-threshold oscillations.
Our simulations also show that models configured with sub-optimal parameters output membrane voltage oscillations which are always distinguishable from those of models configured with optimal parameters. Hence even biocircuits exhibiting functional overlap under normal conditions [6, 8, 9, 40] may have their parameters fully determined under appropriate external stimulation with the regularization method we introduce here.

The paper is structured as follows. The first section describes the effects of experimental error and model error on the data misfit surface. We calculate the parameter offset δp as a function of the amplitude (σ) and realization (ζ) of additive noise and of model error. The second section computes the posterior distribution functions of extracted parameters and investigates their shape, MLE and covariance. The third section describes the regularization method that uses the above parameter offset to enhance the probability of convergence to the optimal parameter solution. The fourth section describes the adaptive sampling method we use to enhance parameter identifiability. The last section discusses predictions made by models configured with optimal and sub-optimal parameters. The results show that under appropriate conditions of stimulation, the oscillations produced by disparate sets of parameters are always distinguishable.
Results
Noise-induced shift in parameter solutions
One defines a least-squares cost function to measure the distance between the membrane voltage state variable of the model, V(t, x(0), p), and the experimentally observed membrane voltage, Vexp(t). Here x(0) denotes the initial conditions of the state variables of the model. The cost function is evaluated at each mesh point i = 0…n of the assimilation window:

C(x(0), p) = (1/2n) Σ_{i=0…n} { [V(t_i, x(0), p) − Vexp(t_i)]² + u²(t_i) },   (2)
where the x_l(t), l = 1…L, are the state variables of the neuron-based conductance model and the p_k, k = 1…K, are the parameters of the model. State variables are evaluated at discrete times t_i = iT/n, i = 0…n, across the assimilation window of duration T. They typically include the membrane voltages, gate variables and synaptic currents of conductance models. The function u(t) is a Tikhonov regularization term [41] which smoothes convergence over successive iterations by eliminating positive values of the conditional Lyapunov exponents [42]. u(t) is evaluated at discrete times like the other state variables, but under the constraint that it varies smoothly rather than according to Eq 1 (see Methods section).

In order to separate the contributions of experimental error and model error, we introduce the useful membrane voltage, V(t), that is, the voltage that would be measured by an ideal current clamp (Fig 1(a)). This approach allows us to separate experimental error, ϵexp(t) = Vexp(t) − V(t), from model error, ϵ_M(t, x(0), p) = V(t, x(0), p) − V(t). Experimental error, ϵexp(t), covers patch clamp noise, thermal fluctuations, stochastic processes associated with the opening and closing of ion channels, the binding of signalling molecules to receptors, and long term membrane potentiation [43]. We model it below with n + 1 random variables ϵ(t_i), i = 0…n, each of which follows a normal distribution, N(0, σ²), with zero mean and standard deviation σ. Individual realizations of noise across the assimilation window are labelled ζ. This least-squares cost function is only suitable for uncorrelated noise. Temporally correlated noise, or more generally temporally correlated measurements, can be treated in the same way by substituting the least-squares cost function with a cost function incorporating an error conditioning covariance matrix [44], which accounts for correlations between measurements through finite off-diagonal terms.
Unlike experimental error, model error depends on the model parameters. The cost function may thus be expanded with respect to the model parameters as:

C(x(0), p) = (1/2n) Σ_{i=0…n} [ϵ_M(t_i, x(0), p) − ϵexp(t_i)]²,

to separate the error contributions from model and measurements.
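The behaviour of the least-squares misfit under additive Gaussian noise can be sketched numerically. The voltage trace and noise amplitude below are illustrative assumptions, and the Tikhonov control term is simply set to zero:

```python
import numpy as np

def data_misfit(v_model, v_exp, u=None):
    """Least-squares cost C = (1/2n) * sum[(V_model - V_exp)^2 + u^2],
    with an optional Tikhonov control term u(t) (omitted here)."""
    r = v_model - v_exp
    c = 0.5 * np.mean(r ** 2)
    if u is not None:
        c += 0.5 * np.mean(u ** 2)
    return c

rng = np.random.default_rng(0)
v_true = np.tanh(np.linspace(-3, 3, 1000)) * 60.0 - 20.0    # toy voltage trace, mV
sigma = 0.5                                                 # noise amplitude, mV
v_exp = v_true + rng.normal(0.0, sigma, size=v_true.shape)  # one realization zeta

# with a perfect model the residual is pure noise, so C ~ sigma^2 / 2:
# this is the "noise entropy" floor F of the misfit surface
print(data_misfit(v_true, v_exp))
```

Repeating this with a different seed (a different realization ζ) changes the misfit value slightly, which is the mechanism behind the realization-dependent parameter offsets discussed below.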
Fig 1
Data misfit surface perturbed by experimental and model error.
(a) Membrane voltage, Vexp(t_i), recorded at discrete times t_i, i = 0…n (cross symbols); useful membrane voltage, V(t), obtained from an ideal measurement apparatus (black line); membrane voltage state variable of the conductance model, V(t, x(0), p) (red line). Experimental error: ϵexp(t) = Vexp(t) − V(t). Model error: ϵ_M(t) = V(t, x(0), p) − V(t). (b) Lines of constant data misfit, δC = f(σ, ζ), about the global minimum p0*. Different noise realizations, ζ1 (ζ2), shift the global minimum to pσζ1* (pσζ2*). Noise also tilts the principal axes of the data misfit ellipsoid (red/blue arrows) and modifies the principal semi-axes (λ_κ, λ_κ′). (c) RVLM neuron model membrane voltage Vexp (black line) induced by current injection I(t) (blue line). Additive noise ϵ is incorporated in the model data. (d) Posterior distribution function π(p_k) of parameter p_k, k = 1…K.
One now considers how perturbations of the useful signal by experimental error and model error modify the cost function in the vicinity of a local/global minimum. Labelling the true global minimum at zero noise p0*, we compute the data misfit δC(δp) = C(p0* + δp) − C(p0*). This gives the perturbation of the cost function by noise. The first three terms in the expansion about the true minimum p0*,

δC(δp) = F + G·δp + ½ δpᵀ H δp,
include the offset F representing the signal noise entropy, a finite gradient G arising from the interplay between model nonlinearity and the realization of noise, and the Hessian H perturbed by experimental and model errors. Writing e(t_i) = ϵexp(t_i) − ϵ_M(t_i) for the total error, these three terms are:

F = (1/2n) Σ_i e²(t_i),
G_k = −(1/n) Σ_i e(t_i) ∂V(t_i)/∂p_k,
H_kl = (1/n) Σ_i [ ∂V(t_i)/∂p_k ∂V(t_i)/∂p_l − e(t_i) ∂²V(t_i)/∂p_k∂p_l ],

where all derivatives are evaluated at p0*.
The surface of constant data misfit, δC = f(σ, ζ) (Fig 1(b)), is a K-dimensional ellipsoid. The gradient G is responsible for shifting the centre of the ellipsoid from p0* to a new location pσζ*. This propels the new minimum to a different location in parameter space which depends on the noise realization ζ (Fig 1(b)). The vector components of G will in general remain finite due to the interplay of model nonlinearity with noise. The dominant contribution to the ∂V/∂p_k terms will come from jumps in membrane voltage (-100mV ↔ +45mV) near action potentials, which can be induced by minute changes in parameter values. Hence, the noise-weighted derivatives ∂V/∂p_k averaged across the assimilation window give finite gradient values G(ζ) which depend on the noise realization. Different noise realizations thus give different parameter offsets, δp*(ζ) (Fig 1(b)).

Before proceeding with the calculation of the parameter offset, note the superposition of noise and model error in H. The first term in H gives the curvature of the data misfit surface. This term determines how tightly constrained a parameter estimate is, also labelled parameter "sloppiness" by Gutenkunst et al. [7]. The second term in H gives the perturbation of this curvature by noise and model error. As noted above, the second derivative of V with respect to parameters p_k and p_l, weighted by error, does not cancel by summation across the assimilation window. As a result, noise and model error are expected to tilt the principal axes of the ellipsoid and change their semi-axes. Experimental and model error thus alter parameter correlations.

The F term represents the signal noise entropy supplemented by correlations between noise and model error. The dominant first term is the random energy TdS that relates to the noise entropy dS through the Johnson-Nyquist theorem [45, 46]:

F ≈ σ²/2 = 2 k_B T R Δf,
where k_B is Boltzmann's constant, R is the membrane resistance of the neuron, Δf is the bandwidth of the noise and T is the noise-equivalent temperature.

The noise-induced shift in δp is obtained through principal component analysis of the Hessian matrix. In the basis of its eigenvectors, the Hessian is a K × K diagonal matrix whose entries define the principal semi-axes λ_κ of the data misfit ellipsoid. P is the K × K orthonormal matrix of eigenvectors transforming δp into δp′ = Pδp in the new basis and G into G′ = PG. The data misfit may be written as:

δC = F − Σ_κ G′_κ²/(4λ_κ) + Σ_κ λ_κ (δp′_κ − δp′*_κ)²,   (7)

where

δp′*_κ = −G′_κ(ζ)/(2λ_κ),  κ = 1…K.

The noise-induced offset follows from Eq 7 as δp*(ζ) = Pᵀδp′*. Through gradient G, experimental error gives the first order contribution to the noise-induced parameter shift (Eq 7). Model error gives a second order contribution through its perturbation of H. The tilt of the principal axes of the ellipsoid is given by the eigenvectors in matrix P, and their semi-axes are the λ_κ eigenvalues.
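This construction can be illustrated with a small numerical sketch. The two-parameter quadratic misfit below is hypothetical (H and G are made-up numbers, not values from the RVLM model); it shows how the noise-induced offset follows from the eigendecomposition of the Hessian:

```python
import numpy as np

# Hypothetical quadratic model of the data misfit about the true minimum:
# dC(dp) = F + G . dp + 0.5 * dp^T H dp
H = np.array([[4.0, 1.2],
              [1.2, 1.0]])      # Hessian (curvature of the misfit surface)
G = np.array([0.3, -0.1])       # noise-induced gradient G(zeta)

# principal-axis decomposition: H = P diag(2*lambda_k) P^T
lam2, P = np.linalg.eigh(H)     # lam2[k] = 2*lambda_k in the quadratic form
Gp = P.T @ G                    # gradient rotated into the eigenbasis
dp_prime = -Gp / lam2           # offset of the minimum along each principal axis
dp_star = P @ dp_prime          # offset back in the original parameter basis

# sanity check: the gradient of dC vanishes at the shifted minimum
assert np.allclose(G + H @ dp_star, 0.0)
print(dp_star)
```

The offset along each principal axis scales inversely with the curvature along that axis, so "sloppy" directions (small eigenvalues) pick up the largest noise-induced shifts.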
Posterior distribution function of optimal parameters
To demonstrate the above results, we now compute the effect of noise amplitude on the PDF of optimal parameters. The next section will then evaluate the parameters arising from individual noise realizations, rather than a statistical ensemble, and calculate individual parameter offsets relative to the solution obtained when no noise is applied.

We choose the conductance model of a rostral ventrolateral medulla (RVLM) neuron located at the base of the brain [47, 48]. This neuron accelerates heart rate when blood pressure increases, for instance, and balances the bradycardia action of vagal motoneurons [47]. The RVLM neuron has a wide complement of ion channels (Table 1), and as such is an appropriate neuron to model. The somatic compartment of RVLM neurons includes the following ion channels [48]: transient sodium channels (NaT), depolarization-activated potassium channels (K), leak channels (Leak), hyperpolarization-activated cation channels (HCN), and low threshold calcium channels (CaT). The RVLM model has 7 state variables (L = 7) and 41 parameters (K = 41). The biological parameters are the vector components of p_true in Table 2. Model data, V(t), were then synthesized by using the RVLM model configured with p_true to forward-integrate the current protocol of Fig 1(c) (blue line). We then conducted a "twin experiment" to infer the model parameters back from the model data (Fig 1(c)) and validate the ability of nonlinear optimization to recover the true parameter solution. The parameters were estimated using an interior point line search algorithm [28] which was used earlier to build predictive neuron models [1, 2, 31]. The assimilation window had n = 10,000 mesh points. The mesh size was Δt = 20μs (T = 200ms). All 41 parameters of the optimal solution are listed in Table 2. Each parameter estimate was found to be within 0.2% of its true value.
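The twin-experiment workflow can be sketched on a deliberately small example. The passive-membrane model, step-current protocol and SciPy optimizer below are stand-ins for the 41-parameter RVLM assimilation and its interior-point search, chosen only to show the synthesize-then-recover loop:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy twin experiment on a passive membrane: C dV/dt = gL*(EL - V) + I(t).
# Two parameters only (gL, EL); the capacitance is held fixed.
C_m = 1.0                                   # membrane capacitance, held fixed
I = lambda t: 2.0 * (1.0 < t) * (t < 4.0)   # step-current protocol

def simulate(p, t_eval):
    gL, EL = p
    f = lambda t, V: (gL * (EL - V) + I(t)) / C_m
    return solve_ivp(f, (t_eval[0], t_eval[-1]), [EL], t_eval=t_eval,
                     max_step=0.01).y[0]

t = np.linspace(0.0, 6.0, 601)
p_true = np.array([0.465, -65.0])           # gL, EL (illustrative values)
v_data = simulate(p_true, t)                # synthesized "model data" V(t)

# recover the parameters from the data by least-squares search within bounds
fit = least_squares(lambda p: simulate(p, t) - v_data,
                    x0=[0.2, -80.0], bounds=([0.01, -110.0], [0.6, -40.0]))
print(fit.x)   # should land close to p_true
```

Because the data were generated by the same integrator, the discretized misfit has its minimum exactly at `p_true`, which is what makes the twin experiment a clean validation of the estimation procedure.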
Table 1
Ion channels of the RVLM neuron.
Current densities with maximal conductances g_α, α ∈ {NaT, K, HCN, L}; sodium and potassium reversal potentials, E_Na and E_K; hyperpolarization-activated cation reversal potential E_HCN = -43mV [69]; leakage potential E_L [70]. m and h are the state variables of the activation and inactivation gates of the NaT channel. n is the activation gate of potassium. z is the HCN activation gate. The calcium current is given by the Goldman-Hodgkin-Katz equation (Eq 13) [71].
| ID | Channel | Current density | Maximal conductance |
| --- | --- | --- | --- |
| NaT | Fast and transient Na+ current | J_NaT = g_NaT m³h(E_Na − V) | g_NaT = 110 mS.cm−2 |
| K | Transient depolarization-activated K+ current | J_K = g_K n⁴(E_K − V) | g_K = 5 mS.cm−2 |
| HCN | Hyperpolarization-activated cation current | J_HCN = g_HCN z(E_HCN − V) | g_HCN = 0.092 mS.cm−2 |
| CaT | Low threshold Ca2+ current | J_CaT = GHK (Eq 13) | - |
| L | Leakage channels | J_L = g_L(E_L − V) | g_L = 0.066 mS.cm−2 |
Table 2
Parameters of the RVLM neuron model.
From left column to right column: parameter search interval, [pL, pU]; true parameters used to synthesize the model data, ptrue; optimal parameters estimated at the true global minimum of the cost function, p0* (σ = 0); sub-optimal parameters estimated at the global minimum shifted by noise, pσζ* (σ = 0.5mV); sub-optimal parameters estimated at the local minimum, p0ℓ (σ = 0), nearest to the global minimum p0*.
| Ion | Parameter | Units | pL, pU | ptrue | p0* | pσζ* | p0ℓ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | C | μF.cm−2 | 1.0, 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| | ENa | mV | 42, 50 | 41 | 41.007 | 41.075 | 60.000 |
| | EK | mV | -90, -80 | -100 | -100.005 | -100.763 | -90.000 |
| | EH | mV | -30, -5 | -43 | -42.963 | -42.793 | -30.000 |
| | ELeak | mV | -110, -65 | -65 | -64.999 | -64.964 | -66.541 |
| | A | ×10⁴ μm² | 202—502 | 2.90 | 2.90 | 2.91 | 2.90 |
| NaT | gNaT | mS.cm−2 | 100, 120 | 69 | 68.912 | 69.924 | 100.000 |
| m | Vm | mV | -49, -27 | -39.92 | -39.921 | -39.965 | -30.931 |
| | δVm | mV | 5, 32 | 10 | 10.000 | 9.949 | 15.850 |
| | δVτm | mV | 5, 23.39 | 23.39 | 23.380 | 23.254 | 0.100 |
| | tm | ms | 0.02, 0.7 | 0.143 | 0.143 | 0.157 | 0.815 |
| | εm | ms | 0.012, 7 | 1.099 | 1.099 | 1.094 | 19.543 |
| h | Vh | mV | -79, -39 | -65.37 | -65.365 | -65.558 | -52.863 |
| | δVh | mV | -35, -5 | -17.65 | -17.652 | -17.629 | -13.752 |
| | δVτh | mV | 4, 43 | 27.22 | 27.218 | 27.670 | 14.107 |
| | th | ms | 0.02, 90 | 0.701 | 0.701 | 0.684 | 0.502 |
| | εh | ms | 1, 470 | 12.9 | 12.898 | 12.942 | 10.629 |
| K | gK | mS.cm−2 | 0 | 6.9 | 6.905 | 6.736 | 2.232 |
| n | Vn | mV | -69, -21 | -34.58 | -34.557 | -34.763 | -39.654 |
| | δVn | mV | 5, 34 | 22.17 | 22.178 | 21.932 | 13.118 |
| | δVτn | mV | 5, 34 | 23.58 | 23.588 | 23.851 | 21.556 |
| | tn | ms | 0.01, 5.4 | 1.291 | 1.291 | 1.273 | 0.434 |
| | εn | ms | 0.002, 23 | 4.314 | 4.311 | 4.248 | 6.416 |
| CaT | pT | ×10−4 cm.s−1 | 0, 80 | 1.035 | 1.035 | 0.210 | 0.130 |
| q | Vq | mV | -80, -35 | -65.5 | -65.491 | -64.483 | -67.767 |
| | δVq | mV | 5, 39 | 12.4 | 12.391 | 14.003 | 9.958 |
| | δVτq | mV | 10, 57 | 27 | 27.123 | 28.911 | 14.985 |
| | tq | ms | 0.02, 0.9 | 0.719 | 0.693 | 2.232 | 7.556 |
| | εq | ms | 0.5, 97 | 13.05 | 13.059 | 11.759 | 8.370 |
| r | Vr | mV | -90, -55 | -86 | -86.011 | -73.916 | -74.356 |
| | δVr | mV | -34, -5 | -8.06 | -8.065 | -4.547 | -3.962 |
| | δVτr | mV | 3, 55 | 16.71 | 16.760 | 9.829 | 0.100 |
| | tr | ms | 5, 190 | 28.17 | 28.120 | 27.435 | 55.095 |
| | εr | ms | 0.5, 7000 | 288.68 | 287.067 | 319.355 | 1000.000 |
| HCN | gH | mS.cm−2 | 0, 10 | 0.150 | 0.150 | 0.149 | 0.177 |
| z | Vz | mV | -90, -40 | -76 | -76.001 | -76.297 | -79.121 |
| | δVz | mV | -30, -5 | -5.5 | -5.517 | -5.430 | -11.876 |
| | δVτz | mV | 5, 40 | 20.27 | 20.273 | 21.861 | 100.000 |
| | tz | ms | 0.1, 500 | 6.31 | 6.348 | 0.100 | 10.000 |
| | εz | ms | 0.1, 5000 | 55.05 | 55.019 | 60.471 | 50.323 |
| Leak | gL | mS.cm−2 | 0.01, 0.6 | 0.465 | 0.465 | 0.463 | 0.482 |
We then synthesized experimental data by adding noise to the useful membrane voltage: Vexp(t_i) = V(t_i) + ϵ(t_i). We generated R = 1000 different time series with different noise realizations ζ to obtain a statistical distribution of estimated parameters. Convergence to the optimum solution was secured by initializing the parameter search at p_true.

Fig 2(a) shows the distribution of estimated parameters centred on their mean value (σ = 0.75mV). The sloppiest parameters are characteristically the recovery time constants, and more specifically those of the Na channel (t_h), the HCN channel (t_z), and the low threshold Ca2+ channel (t_r). The effect of increasing the noise amplitude from σ = 0 to 0.75mV is to broaden the distribution of estimated parameters. This is shown in Fig 2(b) and 2(c) for the HCN recovery time (t_z) and the maximum calcium permeability (p_T). As noise increases from σ = 0 to 0.75mV, the MLE of parameter t_z remains approximately constant and the standard deviation broadens symmetrically. In contrast, the MLE of parameter p_T increases monotonically as noise increases from σ = 0 to 0.75mV. The parameter distribution is asymmetrical even at low noise amplitude.
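The noise-induced broadening and drift of the estimator's distribution can be sketched with a one-parameter Monte Carlo experiment. The exponential model, noise amplitudes and sample sizes below are illustrative, not taken from the RVLM study:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Toy 1-parameter twin experiment: fit p in V(t) = exp(-p*t) to noisy data,
# repeated over many noise realizations to sample the estimator's PDF.
t = np.linspace(0.0, 2.0, 200)
p_true = 1.0
v_true = np.exp(-p_true * t)

def estimate(sigma):
    """Fit p to one noisy realization of the data."""
    v_exp = v_true + rng.normal(0.0, sigma, size=t.shape)
    res = least_squares(lambda p: np.exp(-p[0] * t) - v_exp, x0=[0.5])
    return res.x[0]

for sigma in (0.05, 0.2):
    p_hat = np.array([estimate(sigma) for _ in range(500)])
    # the spread of the estimates grows with sigma; the nonlinearity of
    # exp(-p*t) lets the mean drift away from p_true as noise increases
    print(sigma, p_hat.mean(), p_hat.std())
```

Because the model is nonlinear in p, the noise does not average out symmetrically, which is the same mechanism that skews the PDFs of p_T and δV_z in Fig 2.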
Fig 2
Probability distribution of estimated parameters.
(a) Scatter plot of parameters p_k, k = 1…41, estimated by assimilating the RVLM membrane voltage incorporating different realizations of Gaussian noise. Noise amplitude: σ = 0.75mV. The dependence of this distribution on noise amplitude is plotted for two parameters: (b) the recovery time t_z of the HCN gate and (c) the maximum permeability of the CaT ion channel, p_T. (d,e) Probability density functions (PDF) of parameters t_z and p_T calculated at increasing noise amplitudes σ = 0.25, 0.50 and 0.75mV. Statistical sample: 1000 parameter sets extracted for different noise realizations. The initial condition was p_true.
(f) Eigenvalue spectrum of the 41 × 41 covariance matrix of parameter estimates. The λ_κ, κ = 1…41, are the semi-axes of the data misfit ellipsoid δC = f(σ, ζ) and the λ_κ² are the eigenvalues of the covariance matrix Σ. Spectra are calculated at four noise amplitudes: σ = 0, 0.25, 0.50 and 0.75mV. (g) Relationship between the standard deviation of a parameter, σ_k, and the noise amplitude, σ.
We then used the 1000 parameter estimations to compute the PDFs and reveal the effects of model nonlinearity. The PDFs of the parameters representing the transition regions of the activation curves of K+ (δV_n) and HCN (δV_z) are plotted in Fig 2(d) and 2(e) respectively. These PDFs are compared to their Gaussian best fit (solid red line) at three noise amplitudes, σ = 0.25, 0.5, 0.75 mV. As observed for t_z, the MLE of parameter δV_n is independent of noise, the PDF remains approximately Gaussian at all noise amplitudes, and its standard deviation increases as the noise amplitude increases (Fig 2(d)). In contrast, δV_z, like p_T above, has a non-Gaussian PDF, and its MLE shifts to a lower voltage as σ increases (Fig 2(e)).

Lastly, we investigated the correlations between estimated parameters and the effect of increasing noise amplitude on these correlations. For this we calculated the covariance matrix:
Σ_kl = (1/R) Σ_{r=1…R} (p_k^(r) − ⟨p_k⟩)(p_l^(r) − ⟨p_l⟩),

which is related to the Hessian through Σ ∝ H⁻¹. R is the number of noise realizations and hence the size of the statistical sample of parameter sets used to calculate the covariance matrix. We calculated the eigenvalues of Σ, which are the squares of the principal half-lengths of the data misfit ellipsoid (Fig 2(f)). Clearly the RVLM model parameters exhibit correlations spanning several orders of magnitude. Most parameters are well-constrained. However, not all correlations vanish as σ → 0. The two leftmost points (black circles) indicate pairs of parameters which remain correlated irrespective of noise amplitude. These parameters are the recovery time constants t_h, t_z and t_r, already noted in Fig 2(a) to have a wider dispersion than the other parameters. Unsurprisingly, increasing the noise amplitude increases parameter correlations. We also calculated the dependence of the standard deviation σ_k of the PDF as a function of the noise amplitude σ (Fig 2(g)) for arbitrarily chosen parameters. Note the sub-linear dependence tending to saturation.
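The covariance analysis can be sketched as follows. The synthetic parameter ensemble below is a hypothetical stand-in for the 1000 estimated parameter sets (5 parameters instead of 41, with one deliberately correlated pair):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: R parameter sets estimated under different noise
# realizations, drawn here from a correlated Gaussian as a stand-in.
R, K = 1000, 5
L_mix = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0, 0.0],   # strongly correlated pair
                  [0.0, 0.0, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.2, 0.0],
                  [0.0, 0.0, 0.0, 0.0, 0.05]])
samples = rng.standard_normal((R, K)) @ L_mix.T

# covariance of the estimates; its eigenvalues give the squared
# principal half-lengths of the data-misfit ellipsoid
Sigma = np.cov(samples, rowvar=False)
eigvals = np.linalg.eigvalsh(Sigma)
print(np.sort(eigvals)[::-1])   # spectrum spanning several orders of magnitude
```

The widest axis of the ellipsoid (largest eigenvalue) is dominated by the correlated pair, mirroring the persistently correlated recovery time constants in Fig 2(f).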
Regularization of convergence by additive noise
Due to the nature of data assimilation, certain initial guesses of state variables and parameters may lead to sub-optimal solutions, which are local minima of the data misfit function. The local minimum nearest to the global minimum was identified by running parameter searches initialized at random points in parameter space. This local minimum in the absence of additive noise is given in Table 2 as p0ℓ. We now switch on noise and study the effect of the noise amplitude σ and noise realization ζ on the relative positions of p0ℓ and p0*.

Our regularization method is depicted schematically in Fig 3(a). It relies on the noise-induced shift in parameter solutions. We begin by choosing one realization of additive noise (ζ) before varying the noise amplitude in the range −0.5mV < σ < +0.5mV. A negative value of σ here implies a temporal realization of noise with negative amplitude but the same Gaussian probability distribution. (i) Starting from σ = 0, the local and global minima, pℓ (pink star) and p* (red star), are separated by a saddle point in the cost function surface (open dot). (ii) As σ increases, the local and global minima shift relative to one another, getting closer or further apart depending on the sign of σ. When pℓ and p* (blue dots) approach one another, there exists a critical noise amplitude σ_c (iii) where the saddle point and the local minimum merge, inducing a saddle-node bifurcation [49] towards the global minimum. (iv) The post-bifurcation solution is then set as the new initial guess of the parameter search, and σ is ramped down to zero from σ_c to obtain the optimal parameter solution p0*.
Fig 3
Regularization of parameter search.
(a) Profile of the data misfit function δC plotted along a straight line passing through the global minimum (red star) and the nearest local minimum (magenta star). (I) In the absence of noise (σ = 0), a saddle point (open dot) separates pℓ and p*. (II) Increasing the noise amplitude up to a value σ < σ_c shifts the local solution, pℓ, and the global solution, p* (blue dots). (III) At σ_c, the barrier at the saddle point vanishes. Hence, the local minimum merges with the saddle point. (IV) Parameter search initialized at pℓ(σ_c) converges smoothly to the optimal solution as noise vanishes. In this way, parameter search is regularized. (b) Trajectory of the local solution parametrized by noise as the noise amplitude varies from σ = −0.5mV to +0.5mV. The noise amplitude is colour coded in each dot. The noise realization remains the same (ζ1). The 41-dimensional trajectory is projected onto the 2D plane (E_Leak, ε_z). At σ_c = −40μV, pℓ merges with p* (step III). (c) Same as in (b) but for a trajectory calculated with a different noise realization, ζ2. Here σ_c = +50μV. (d) Various trajectories of the solution during step IV. The different starting points are the shifts induced by different realizations of noise, ζ3, …, ζ8. (e) Probability of convergence to the optimal solution with noise regularization (red) and without (blue). The success rate was calculated from a statistical sample of 150 parameter solutions computed from random parameter initializations.
Steps (i) to (iii) are demonstrated numerically in Fig 3(b) and 3(c). The parameter search was initialized at the local minimum, where the cost function was almost two orders of magnitude greater than at the global minimum. The state variables were initialized at the same values throughout. The data time series had n = 10,000 points and Δt = 20μs. Two different noise realizations, ζ1 and ζ2, were applied in Fig 3(b) and 3(c) respectively.
Initializing the estimation procedure at the local minimum, the parameter solution was calculated and projected onto the two-dimensional plane (ε, E) as σ varied from 0 to +0.5mV (red dots) and from 0 to −0.5mV (blue dots). ε is a parameter of the HCN activation gate which gives the difference in recovery times between the half-open and fully open states of the gate. E is the leak reversal potential. The same qualitative results are observed in other projection planes involving different pairs of parameters in Table 2. At σ = 0, the parameter solution remains the local minimum (Fig 3(b) and 3(c), magenta star). For σ > 0, the local and global minima move away from one another, causing the solution to shift monotonically away from the global minimum as σ increases (red dots). In contrast, when σ < 0, the distance between the local and global minima decreases. At σ = −40μV, the saddle point vanishes, followed by an abrupt transition from the local minimum to the global minimum. The effect of using a different noise realization, ζ2, in Fig 3(c) is to change the path of the solution in parameter space. The saddle-node bifurcation also occurs at a different noise amplitude of σ = +50μV.

Steps (iii) to (iv) are demonstrated in Fig 3(d). The optimal solution was recovered by ramping σ down from its critical value. The trajectories converge to the global minimum as σ is progressively decreased. Fig 3(d) shows the trajectories calculated for several different noise realizations, and thus demonstrates the dependence of the noise-induced parameter offset on the noise realization, as predicted by Eq 7.

Therefore, the two-step procedure we have described is useful to regularize convergence towards the global minimum. The algorithm of the regularization method may be summarized as follows: (i) Solve the inverse problem using smooth data. The solution may be optimal or sub-optimal. (ii) Apply additive noise to the data and vary its amplitude while keeping its realization constant until an abrupt step in both δ and δc is observed.
(iii) Progressively reduce the noise amplitude to zero to obtain the optimal parameter solution. Assimilations of the RVLM neuron model starting from 150 random initial guesses of parameters and state variables were found to converge to the optimal solution with a probability of 94.3% using noise regularization, and 67% without (Fig 3(e)). In the remaining 5.7% and 33% of cases, convergence terminated at local minima.
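The three-step algorithm above can be sketched on a toy misfit surface. The double-well cost below is a hypothetical stand-in for the 41-dimensional data misfit function, the tilt term `sigma*p` mimics the noise-induced shift of the minima, and `scipy.optimize.minimize` plays the role of the descent algorithm; none of this is the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy misfit: double well with a global minimum near p = -1 and a
# local minimum near p = +1. The sigma*p term mimics the
# noise-induced tilt of the real misfit surface.
def misfit(p, sigma=0.0):
    return (p**2 - 1.0)**2 + 0.3*p + sigma*p

def nearest_min(p0, sigma):
    """Descend from p0 to the nearest minimum of the tilted misfit."""
    return minimize(lambda p: misfit(p[0], sigma), [p0]).x[0]

# Step (i): a search started at p0 = 1 is stuck in the local minimum
p = nearest_min(1.0, 0.0)

# Steps (ii)-(iii): ramp sigma up until the local basin disappears;
# the abrupt jump in the solution is the saddle-node bifurcation
sigma = 0.0
for sigma in np.linspace(0.0, 2.0, 41):
    p_new = nearest_min(p, sigma)
    if abs(p_new - p) > 0.5:
        p = p_new
        break
    p = p_new

# Step (iv): ramp sigma back down to zero, tracking the global basin
for s in np.linspace(sigma, 0.0, 21):
    p = nearest_min(p, s)

print(round(p, 2))  # lands near the global minimum (p close to -1)
```

The same schedule applied without the ramp-down step would leave the solution offset by the tilt, which is why step (iv) is needed.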
Decorrelating parameters
Parameter uncertainty and correlations may arise from incomplete fulfilment of identifiability conditions if the stimulation protocol is ill-chosen. For conductance models, this means that the assimilation window must contain multiple action potentials, as most model parameters control the dynamics of depolarization. In addition, current protocols must include (i) current steps of different durations to probe the recovery of ionic gates with different kinetics, and (ii) current steps of different amplitudes to extract information from the depolarized, sub-threshold and hyperpolarized states of a neuron. These complex current protocols are required to decorrelate the model constraints (Eq 2) linearized at consecutive time points of the assimilation window. Increasing the window length also contributes to better constrained global parameter solutions. The drawback, however, is that as n increases beyond n ≈ 10^4 points, the cost function becomes highly irregular due to an increased number of local minima [50, 51]. In order to increase the length T of the assimilation window while keeping n below this threshold, we introduce a smart sampling method which samples sub-threshold oscillations with a larger step size than action potentials. For membrane voltages above −65mV, we apply a mesh size of Δt1 = 10μs, whereas sub-threshold oscillations are sampled with a mesh size Δt2 = mΔt1 (Fig 4(a)). The rationale for this is that sub-threshold oscillations are controlled by fewer parameters than the depolarized state. Since time intervals of membrane depolarization are few and far apart, this approach allows considerable increases in the duration of the assimilation window while keeping n constant (see Methods).
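A minimal sketch of this smart sampling scheme is given below. The helper `adaptive_mesh` is hypothetical (not the paper's implementation): it keeps every sample where the voltage exceeds the threshold and every m-th sample elsewhere.

```python
import numpy as np

def adaptive_mesh(v, m=4, v_thresh=-65.0):
    """Indices of an adaptive mesh: every sample where the membrane
    voltage exceeds v_thresh (spikes), every m-th sample elsewhere."""
    idx = [0]
    i = 0
    while i < len(v) - 1:
        step = 1 if v[i] > v_thresh else m
        i = min(i + step, len(v) - 1)
        idx.append(i)
    return np.array(idx)

# Synthetic trace: mostly sub-threshold with one brief "spike"
t = np.arange(0.0, 0.02, 10e-6)          # 20 ms sampled at 100 kHz
v = -70.0 + 40.0*np.exp(-((t - 0.01)/0.0005)**2)
idx = adaptive_mesh(v, m=4)

# Same window duration, far fewer mesh points than the uniform grid
print(len(t), len(idx))
```

Because the spike occupies only a small fraction of the trace, the mesh size shrinks by nearly the factor m while every point of the depolarized interval is retained.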
Fig 4
Increasing the duration of the assimilation window reduces parameter uncertainty.
(a) An adaptive step size was used to increase the duration of the assimilation window while keeping the size of the problem constant and equal to n = 10,000 samples. The step size was Δt1 = 0.01ms during the depolarization time intervals (Vexp > −63mV) and Δt2 = mΔt1, m = 1, 2, 4, elsewhere. (b) Dependence of the parameter correlations on the duration of the assimilation window as it increases from T = 200ms (m = 1) and 320ms (m = 2) to 382ms (m = 4). The eigenvalues of the covariance matrix were calculated from parameters estimated from randomly initialized parameters and state variables. Additive noise had amplitude σ = 0.25 mV. Posterior distribution function of two parameters chosen for controlling (c) action potentials via the sodium conductance and (d) sub-threshold oscillations via the calcium kinetics parameter ε. Statistical sample for histograms (c,d): 1000 assimilations started at the global minimum with a unique noise realization.
We first studied the effect of the length of the assimilation window on parameter correlations by computing the spectrum of eigenvalues of the covariance matrix (Fig 4(b)). The covariance matrix was generated by assimilating model membrane voltages with R = 1000 different realizations of additive noise of amplitude σ = 0.75mV. The assimilation window had 10,001 data points but their time intervals varied. The spectrum of eigenvalues is plotted for increasingly wide assimilation windows corresponding to Δt1 = 10μs (T = 200ms), 20μs (T = 320ms) and 40μs (T = 382ms). Fig 4(b) shows that increasing the duration of the assimilation window uniformly reduces correlations for all 41 parameters. Compare this with Fig 2(f), where
some parameters remain highly correlated even as σ → 0. Fig 4(c) and 4(d) show the progressive narrowing of the PDFs of the sodium conductance and ε parameters as T increases. Conductances such as the sodium conductance are already well constrained, hence their PDF becomes only marginally narrower as T increases. In contrast, the standard deviation of the loosely constrained recovery time constants in Fig 2(a) decreases by an order of magnitude as the duration of the assimilation window increases from T = 200ms to 382ms (Fig 4(d)). We have therefore shown that long assimilation windows increase parameter identifiability and considerably reduce sloppiness.
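The eigenvalue diagnostic used here can be reproduced on synthetic parameter estimates. The data below are fabricated for illustration only: 1000 hypothetical assimilation runs of a 41-parameter model, with one strongly correlated pair of coordinates injected by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter estimates from 1000 assimilation runs
n_runs, n_params = 1000, 41
p = rng.normal(size=(n_runs, n_params))
p[:, 1] = 0.99*p[:, 0] + 0.01*p[:, 1]   # inject a strong correlation

# Eigenvalue spectrum of the parameter covariance matrix:
# near-zero eigenvalues flag correlated (sloppy) directions
evals = np.linalg.eigvalsh(np.cov(p, rowvar=False))
print(evals.min() / evals.max())         # tiny ratio => strong correlation
```

A well-constrained parameter set yields eigenvalues of comparable magnitude; the wide spread produced by the injected correlation is the signature that Fig 4(b) shows shrinking as T grows.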
Comparing model predictions with local and global parameters
We finally compare the predictions of models configured with three sets of parameters: the global minimum, the nearest local minimum, and a vicinal location defined as the global minimum shifted by noise. These parameters are listed in Table 2. Fig 5(a) shows the locations of the local and vicinal solutions (purple and orange dots) on the data misfit surface relative to the global minimum (red dot). The Euclidean norm was used to evaluate the distance in parameter space to the optimum solution. We show here that predictions made with sub-optimal parameters are always discernible from those made with the optimal set.
Fig 5
Effect of optimal and sub-optimal parameters on model predictions.
(a) Value of the cost function at the sites of local minima (purple/orange/blue dots) in the vicinity of the global minimum (red dot), plotted as a function of the distance to the global minimum defined by the Euclidean metric. The blue dots are local minima situated further away from the global minimum. (b-d) Reference membrane voltage (black line) induced by the current protocol (dark blue line). The membrane voltage predicted by configuring the RVLM model with each of the three parameter sets in panels (b), (c) and (d) is shown as the red line. The difference between the predicted voltage and the reference voltage is the prediction error (cyan lines).
The predictions of the three RVLM neuron models configured with the optimal, local and vicinal parameter sets are shown in Fig 5(b), 5(c) and 5(d) respectively (red lines). These are compared to the model data synthesized using the original parameters (black line). The prediction error is the cyan line (Fig 5(b)–5(d)). Predictions obtained with the optimal set are identical to the model data. Interestingly, prediction accuracy is maintained in spite of residual numerical error in the estimated parameters. These computational errors do not diminish the predictive power of the model (Fig 5(b)). In contrast, predictions made by configuring the RVLM model with the local-minimum parameters show systematic discrepancies at the site of action potentials (Fig 5(c)). Spike bursts are completely missed and the height of action potentials is incorrect. The sub-threshold dynamics are, however, represented with great accuracy. Similarly, predictions made with the vicinal parameters show some missing spikes and some additional ones (Fig 5(d)). These results suggest that the original parameters form the one and only set capable of predicting the experimental time series. Hence, the injected current is sufficiently discriminating for the identifiability condition to be validated. The membrane voltage time series encodes the single-valued parameter solution, as prescribed by Takens' theorem. We have further verified in S1 Fig that a current protocol consisting of long rectangular steps fails to constrain all model parameters. This demonstrates the importance of selecting external stimuli that probe the full dynamic range of the nonlinear system for parameters to be identifiable.
Discussion
The significance of parameter estimation methods for extracting information from biological systems has recently been discussed [6, 9]. An increasingly prevalent view among biologists is that parameters estimated from biological models are universally sloppy [7] and that disparate sets of parameters can generate identical neuronal oscillations [37, 52]. The notion that biocircuits must incorporate functional overlap is consistent with the observation of brain remodelling and ageing. For example, the brains of the elderly lose between 2% and 4% of their peak number of neurons without significant decrease in cognitive abilities [53]. Therefore, if the function of a biological system is underpinned by redundant degrees of freedom, can one reasonably expect to infer its internal structure from observations of its dynamics?

The answer from nonlinear science is that the parameters and initial conditions that control neuronal oscillations can generally be inferred from the observation of the membrane voltage over a finite time interval [22, 23, 27]. However, there are conditions to satisfy. The condition of observability is satisfied by choosing a number of data points greater than L + K. This condition is easily met. Both Toth et al [31] and ourselves in Table 2 have demonstrated that the system is observable by recovering the original parameters in twin experiments. The second condition—identifiability—requires the system to be driven by an external stimulus with the appropriate range of dynamics and current amplitudes to constrain all parameters. For example, parameters extracted from data acquired under simpler current injection (S1 Fig) are not identifiable and are poorly constrained, in contrast to those listed in Table 2. A driving force with complex dynamics is therefore necessary to warrant identifiability.
In addition, increasing the duration of the assimilation window matters to reduce correlations between parameters and increase identifiability, as observed by others [37, 50]. We have achieved this in Fig 4 by introducing an adaptive step size within our gradient descent algorithm. A second advantage of using an adaptive step size is that it allows longer assimilation windows and longer current steps to be applied (500ms). This is essential to quantify the effect of slowly decaying currents on the long term potentiation of neurons [54]. When the conditions of observability and identifiability are met, we have shown in Fig 5 that sub-optimal parameters (at local minima) always give sub-optimal predictions which are easily distinguished from predictions by the optimal set of parameters. Therefore, under these conditions, single-valued parameter solutions may be obtained from time series observations of the neuron membrane voltage.

One more complication is the presence of local minima in the cost function. The global minimum becomes harder to distinguish from local minima when the noise-induced error in the cost function becomes comparable to the data misfit error at a local minimum. In Fig 3, we introduce a regularization method which makes constructive use of additive noise to bias the gradient descent algorithm towards the global minimum when it would otherwise remain stuck in a local minimum. This method is well suited to the assimilation of actual neuron data acquired by low noise amplifiers in well-controlled experimental preparations for which experimental error remains a perturbation of the useful signal [1, 2]. The assimilation of very noisy data may still be approached using statistical inference methods such as expectation maximization frameworks [37, 38] or path integral methods [55].
However, these methods rely on prior knowledge of parameter distribution functions, whereas the present variational approach does not.

Modern data assimilation [34, 44, 56] introduces experimental and model error in the form of covariance products which weight each measurement with the error of the measuring apparatus. These approaches are not suitable for highly nonlinear systems, where a Gaussian-shaped probability density on data does not translate into a Gaussian-shaped probability density on parameters. Moreover, the same electrophysiological apparatus is used to record all data points in the time series. Given that each measurement carries the same error, this approach in fact reduces to our least-squares cost function (Eq 2). The nonlinearity of the conductance model implies that Bayesian approaches are no longer applicable to estimating the MLE and standard deviation of parameter PDFs [4, 5, 26, 32, 57–61]. Our work has studied separately the effects of experimental and model error. We found that both errors shift the parameter solution on the data misfit surface. However, the primary cause of the parameter offset is experimental error, with a second order contribution from model error. Our results identify the interplay between model nonlinearity and the realization of noise across the assimilation window as the reason for the parameter offset and its dependence on noise realization. An important consequence of this noise-induced shift is that the parameter solution inferred in the presence of experimental error is invariably wrong.

Our results show that while biocircuits may exhibit functional overlap in their parameters, their underlying configuration can still be inferred provided an external driving force is applied. Parameter identifiability is always relative to the degree of sophistication of external stimulation.
Unsurprisingly, functional overlap between parameters is primarily observed in self-sustaining oscillators such as central pattern generators operating in the steady state without external input [9, 10, 40]. For such systems, parameter overlap [6, 9] may be useful to compensate for loss of functionality [11], and parameter sloppiness may be pervasive [7]. However, recent experiments have shown that among all network configurations with apparent overlap, only a small subset was able to explain the adaptation of rhythmic outputs to temperature changes [62] and changes in pH levels [63]. There is no doubt that subjecting central pattern generators to a wider range of entrainments would further reduce the set of parameters compatible with the observed outputs, up to the point where a unique parameter solution would remain that characterises all electrical properties. There is therefore no theoretical limitation to inferring the underlying structure of ion channels or the connectivity of small networks other than the ingenuity in designing stimulation protocols that fulfil identifiability conditions. Translated to the brain, redundancy may allow normal operation to continue with ageing, but our work suggests that flexibility to adapt to external stimulation will decrease together with the size of its parameter space.

In conclusion, parameter redundancy and compensation are relative to external stimulation. Long and dynamically complex stimulation protocols were shown to reduce correlations between estimated parameters. We also quantified the effects of noise and model error and made constructive use of the induced parameter offset to increase the probability of convergence to the optimal set of parameters.
Methods
Conductance model
We model the parasympathetic neuron of the rostral ventrolateral medulla (RVLM). The RVLM neurons play a key role in cardiac regulation by accelerating heart rate and increasing the force of contraction of the heart muscle. In this way, these neurons counterbalance the action of vagal tone, which reduces heart rate [47]. RVLM neurons have a greater complement of ion channels than the textbook Hodgkin-Huxley neuron [31]. This makes these neurons a good choice for evaluating the accuracy of the parameter estimation method when building models of actual neurons. The ion channels of RVLM neurons include transient sodium (NaT), potassium (K), low threshold calcium (CaT) and the hyperpolarization-activated cation current (HCN) [48]. The equation of motion for the membrane voltage is:
where C is the membrane capacitance, V is the membrane potential, I(t) is the injected current protocol, A is the neuron surface area, and the J terms are the voltage-dependent ionic current densities across the cell membrane. The equations of the individual ionic currents are given in Table 1. These currents depend on the maximum ionic conductances and reversal potentials of the sodium, potassium and HCN channels, and on the gate variables (m, h, n, p, q, s). The control term u(t)[Vexp(t) − V(t)] was added to the right hand side of Eq 10 to eliminate the occurrence of positive conditional Lyapunov exponents and to smooth convergence [64]. Ionic gates are assumed to recover from changes in membrane voltage according to a first order equation:
where x ∈ {m, h, n, s} represents the state of activation and inactivation of the NaT, K and HCN ionic gates (Table 1). The (in)activation curve of individual gates is modelled as:
where the first parameter is the (in)activation threshold voltage of the gate, the second is the width of the transition region from closed to open states, and the third is the half-width-at-half-maximum of the bell-shaped voltage dependence of the recovery time. The recovery time takes its baseline value plus ε at the opening threshold of the gate, and the baseline value alone in the depolarized and hyperpolarized states.

The transient low threshold calcium current is given by the Goldman-Hodgkin-Katz (GHK) equation:
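The gate model above can be sketched numerically. The tanh parameterization of the sigmoid and bell-shaped curves is a common form assumed here for illustration, and the names `Vt`, `dV`, `dVtau`, `t0` and `eps` stand in for the threshold, width and recovery-time parameters described in the text (their values below are arbitrary).

```python
import numpy as np

def x_inf(V, Vt, dV):
    """Steady-state (in)activation curve: sigmoid of width dV about Vt."""
    return 0.5*(1.0 + np.tanh((V - Vt)/dV))

def tau_x(V, Vt, dVtau, t0, eps):
    """Bell-shaped recovery time: t0 + eps at threshold, t0 far from it."""
    return t0 + eps*(1.0 - np.tanh((V - Vt)/dVtau)**2)

def gate_step(x, V, dt, Vt=-40.0, dV=10.0, dVtau=15.0, t0=1.0, eps=4.0):
    """One forward-Euler step of dx/dt = (x_inf(V) - x)/tau_x(V)."""
    return x + dt*(x_inf(V, Vt, dV) - x)/tau_x(V, Vt, dVtau, t0, eps)

# Relaxation of a closed gate after a voltage step into the open state
x, V, dt = 0.0, 0.0, 0.01          # dt in ms
for _ in range(2000):              # 20 ms of relaxation
    x = gate_step(x, V, dt)
print(round(x, 3))                 # approaches the steady state x_inf(V)
```

After 20 ms the gate has relaxed to its steady-state value, since the recovery time away from threshold is of order `t0`.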
where p and q are the activation and inactivation variables of the CaT channel, the prefactor is the maximal permeability, [Ca2+]in and [Ca2+]out are the intra- and extracellular calcium concentrations, z = 2 is the valence of Ca2+, F is Faraday's constant, R is the ideal gas constant, and T = 298.15K. The GHK equation was expanded about V = 0 into a Horner polynomial of order n = 25 to approximate Eq 12 over the relevant range of membrane voltages.
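The polynomial approximation can be sketched as follows. The least-squares fit on a scaled variable is a numerically convenient stand-in for the paper's Taylor expansion about V = 0, and the calcium concentrations are illustrative values, not those of the model.

```python
import numpy as np

F, R, T, z = 96485.0, 8.314, 298.15, 2        # SI units
ca_in, ca_out = 1e-7, 2e-3                    # molar; illustrative values

def ghk_factor(V):
    """Voltage-dependent factor of the GHK current (V in volts)."""
    u = z*F*V/(R*T)
    return z*F*u*(ca_in - ca_out*np.exp(-u))/(1.0 - np.exp(-u))

# Order-25 polynomial fit on a variable scaled to [-1, 1] for conditioning
V = np.linspace(-0.100, 0.050, 401)           # -100 mV to +50 mV
s = (2*V - (V.min() + V.max()))/(V.max() - V.min())
coeffs = np.polynomial.polynomial.polyfit(s, ghk_factor(V), 25)

def ghk_horner(si):
    """Horner evaluation of the order-25 approximation."""
    acc = 0.0
    for c in coeffs[::-1]:
        acc = acc*si + c
    return acc

rel_err = np.max(np.abs(ghk_horner(s) - ghk_factor(V)))/np.max(np.abs(ghk_factor(V)))
print(f"max relative error: {rel_err:.1e}")
```

Horner evaluation needs only 25 multiply-adds per voltage, which is what makes the approximation cheap enough to embed in the constraint equations.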
Current protocols and model data
A set of current protocols I(t) consisting of current steps of different amplitudes and durations was synthesized to provide stimulation to the neurons (Fig 5, dark blue line). Each protocol was calibrated to induce depolarization or hyperpolarization over different time scales covering the recovery times of the ion channels. Model data were synthesized by forward integration of these currents with the RVLM conductance model (Eqs 10–13) configured with the set of parameters listed in Table 2. The model equations were numerically integrated using the LSODA solver [65], which is able to resolve stiff and potentially unstable nonlinear systems [66]. Additive Gaussian noise ϵ was generated with a pseudo random number generator and added to the model membrane voltage. In this way, we obtained both the current and membrane voltage time series, I(t) and Vexp(t), used in data assimilation. The base sampling rate was 100kHz (Δt = 10μs).
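The role of the control term u(t)[Vexp(t) − V(t)] introduced in the model section can be illustrated on a toy linear membrane. All parameters below are illustrative, not those of the RVLM model: the point is only that the nudged copy locks onto the reference trace far faster than the free-running copy.

```python
import numpy as np

# Toy membrane: dV/dt = -(V - E)/tau + I(t)/C, optionally nudged
# toward the data by the control term u*(V_exp - V)
tau, E, C, dt = 10.0, -65.0, 1.0, 0.01        # ms, mV; illustrative

def step(V, I, V_exp=None, u=0.0):
    dV = -(V - E)/tau + I/C
    if V_exp is not None:
        dV += u*(V_exp - V)                   # coupling to the observation
    return V + dt*dV

I = lambda t: 5.0*np.sin(0.3*t)
t = np.arange(0.0, 100.0, dt)

# "Experimental" trace generated from the true initial condition
V_exp = np.empty_like(t)
V_exp[0] = -65.0
for k in range(len(t) - 1):
    V_exp[k+1] = step(V_exp[k], I(t[k]))

# Model started far from the data, with and without the control term
Va, Vb = -20.0, -20.0
for k in range(len(t) - 1):
    Va = step(Va, I(t[k]))                    # free-running
    Vb = step(Vb, I(t[k]), V_exp[k], u=1.0)   # nudged
print(abs(Vb - V_exp[-1]) < abs(Va - V_exp[-1]))
```

In the full nonlinear model the control additionally suppresses positive conditional Lyapunov exponents, which this linear toy cannot exhibit; here the coupling simply adds u to the contraction rate.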
Nonlinear cost function optimization
The least-squares objective function constrained by the model equations was minimized using an interior-point line-search method [28]. The Lagrangian of the problem was constructed from the cost function, the equality constraints and the inequality constraints [29]. The Lagrangian was minimized under the Karush-Kuhn-Tucker conditions [30]. Equality constraints were obtained by linearizing the RVLM conductance model:
at specific times across the assimilation window. The rate of change F of each state variable depends on all state variables, the parameters and time t. Inequality constraints were specified by the lower and upper bounds of the search intervals of the individual parameters, k = 1…K, which are listed in Table 2. The bounds of the parameter search are the only user-specified inputs of the minimization problem. The Jacobian and Hessian matrices of the constraints and cost function were computed using symbolic differentiation (https://pypi.org/project/pydsi). Interior point optimization reformulates inequality constraints as logarithmic barriers whose height is reduced iteratively as the parameter search approaches the global minimum of the optimization surface [29]. Minimization was implemented iteratively using a Newton-type algorithm until first-order optimality conditions on the Lagrangian function L(x) were met.

The equality constraints, Eqs 10 and 11, were then discretized to connect the state variables evaluated at mesh points across the assimilation window. For this purpose, mesh points were dynamically grouped according to the order of the interpolation formula and the variable step size, which we implemented to improve the accuracy of parameter solutions. We linearized Eqs 10 and 11 according to Boole's interpolation, which is accurate to higher order [67] than Simpson's rule [31]:
Data points were grouped in sets of five consecutive points. The state variable at the end point of each group was interpolated from evaluations of F at the five evenly spaced points separated by Δt. When the step size is constant, state variables are thus evaluated every 4Δt.

We introduce an adaptive step size that samples sub-threshold oscillations with a lower resolution than action potentials. We therefore consider sub-threshold step sizes of pΔt where p = 2, 4, …. A group of five points then spans a duration of 4pΔt within the adaptive step framework. The last point of one group is the same as the first point of the succeeding group. To warrant an integer number of groupings in the assimilation window, we chose n to be an integer multiple of 4p.

As Eq 16 constrains only one of the four points in the group, this condition alone does not force the solution to pass through the other three data points. The use of Eq 16 alone may support rapid oscillatory solutions which are undesirable [32]. In order to constrain the other three points of the group, one needs to introduce additional Hermite conditions [31, 68]:
In practice, we find it is sufficient to evaluate only two out of the three Hermite constraints to obtain smooth and accurate solutions. This reduces the computational effort without compromising the accuracy of solutions.

The control variable u and its time derivative du/dt were bounded by 0 ⩽ u ⩽ 1mV and −1mV.ms−1 < du/dt < +1mV.ms−1. The control u(t) was computed as an additional state variable across the assimilation window. To regularize convergence, we smoothed the fast oscillations of u by applying the above Hermite conditions (Eq 18).

The adaptive step size was implemented by automatically assigning step size Δt during action potentials, when Vexp > −65mV, and pΔt (p = 2 or 4) otherwise. Assuming G to be the number of data point groupings across the assimilation window, the problem overall had L × G constraints due to Boole's rule and 2(L + 1) × G constraints from the Hermite conditions.
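The accuracy gain of Boole's rule over Simpson's rule can be checked on a single group of five points. The `boole` and `simpson` helpers below are illustrative quadrature routines, not the paper's collocation code.

```python
import numpy as np

def boole(f, a, b):
    """Boole's rule on one group of 5 evenly spaced points."""
    t = np.linspace(a, b, 5)
    w = np.array([7.0, 32.0, 12.0, 32.0, 7.0])/90.0
    return (b - a)*np.dot(w, f(t))

def simpson(f, a, b):
    """Simpson's rule on 3 evenly spaced points."""
    t = np.linspace(a, b, 3)
    return (b - a)*np.dot([1.0, 4.0, 1.0], f(t))/6.0

# Integrate cos over one group of width 0.4; Boole's error is
# several orders of magnitude below Simpson's
exact = np.sin(0.4) - np.sin(0.0)
err_boole = abs(boole(np.cos, 0.0, 0.4) - exact)
err_simpson = abs(simpson(np.cos, 0.0, 0.4) - exact)
print(err_boole < err_simpson)  # prints True
```

For a group of width 4Δt the error of Boole's rule scales as Δt^7 against Δt^5 for Simpson's rule, which is why the higher-order formula pays off as the window lengthens.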
S1 Fig. Dependence of parameter identifiability on the complexity of the current injection protocol.
(a) Dispersion of extracted parameters of the RVLM neuron model in response to a complex current stimulation protocol (grey line). (b) Same as (a) for a simpler current protocol (blue line). (c) Complex (grey) and simple (blue) current protocols used to stimulate the neuron and to constrain the parameters obtained in (a) and (b). (d) Size-ranked eigenvalue spectra of the covariance matrices of parameters estimated using the two current protocols in (c).
If eligible, we will contact you to opt in or out.[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).Important additional instructions are given below your reviewer comments.Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.Sincerely,Hermann CuntzAssociate EditorPLOS Computational BiologyDaniele MarinazzoDeputy EditorPLOS Computational Biology***********************Reviewer's Responses to QuestionsComments to the Authors:Please note here if the review is uploaded as an attachment.Reviewer #1: This fairly theoretical paper describes a process to estimate parameters of neuron models. The method is applied using surrogate data obtained from running a model of a rostral ventrolaterial medulla neuron. It contains several techniques to find back the global optimal estimate of the parameters of the model in question.I think the paper is fairly well written, and rigorously describes the math behind the method.However, in general I have two problems with the paper in its current form.Firstly, although the method is described very rigorously, I find the results obtained rather less convincing. The method is only applied on a single model. For the results of that single model not many statistics are used. Quite some results are based on a single observation (as an example I'd give Fig 5). 
It would be good to show that the results are more robusted over more models and/or more trials.Secondly, a big drawback of this paper is that it is based on surrogate data. Over the years many theoretical papers have been published with methods to optimize neuron parameters, but not many of these are actually used in real situations. The reason is that there is a world of difference between surrogate data and real experimental data. Just adding some noise to the surrogate data doesn't really represent the real world problems. Therefore I'd strongly suggest that the authors consider at least to add an example where they apply their method to real experimental data. I do understand that of course in that case some analysis can't be performed, because one doesn't know the original parameters, but it still allows for an analysis of number of solutions found, etc.line 102: using a least-square error function has as big drawback that it is very sensitive to tiny shifts in timings. E.g. if an AP is shifted by a couple of ms, it's exact shape could still be the same, but the error could be huge. Could you discuss this in the paper, and how this could be solved.line 193: at the moment you're adding noise to the voltage. I'd also suggest adding noise to the current, which would reflect reality better and which could lead to some shift of e.g. the AP timings etc.line 279: it's not very clear to me how this percentage was obtained. As I mentioned above, I also think these numbers are too precise and for a particular case, I'd rather see mean/std of these over a couple of trials, use cases.line 331: to say 'systematic' it has to be quantified betterline 352: typo: missing function 'of' a biologicalline 415: I think, based on the fact that this is very theoretical work, it is far too much a stretch to start draw conclusion from this about brain functionline 515: about the adaptive time step approach in general. 
The NEURON simulator has an adaptive time step approach that does exactly this in a more advanced way (i.e., decreasing the time step at times when more data points are necessary). It would make sense to reference and discuss that method.

About the choice of parameters: you tune a lot of parameters at the same time. The kinetic parameters, especially, can in general be obtained from separate experiments or from the literature. It might make sense to discuss whether it is better to fit everything at the same time or to break things up into different components.

Reviewer #2: Comments to the authors attached as a pdf.

Reviewer #3: Comments are in the attached document.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose "no", your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Robert P Gowers
Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology, see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future.
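As an aside on Reviewer #1's line-102 comment above: the sensitivity of a least-squares cost to small spike-timing shifts can be seen in a short, purely illustrative sketch. The Gaussian stand-in for an action potential, and all amplitudes and time constants below, are hypothetical choices for demonstration, not values from the manuscript:

```python
import numpy as np

# Illustration of Reviewer #1's line-102 point: a least-squares error
# between voltage traces is dominated by small timing shifts, even when
# the waveform shape is identical. All values here are hypothetical.

t = np.arange(0.0, 50.0, 0.1)  # time axis in ms, 0.1 ms resolution

def spike(t, t_peak, width=1.0, amp=100.0):
    """A Gaussian stand-in for an action potential (hypothetical shape)."""
    return amp * np.exp(-((t - t_peak) ** 2) / (2.0 * width ** 2))

v_ref = spike(t, t_peak=25.0)        # "observed" trace
v_shifted = spike(t, t_peak=27.0)    # identical shape, peak shifted by 2 ms
v_scaled = 0.9 * v_ref               # 10% amplitude error, no time shift

mse_shift = np.mean((v_ref - v_shifted) ** 2)
mse_scale = np.mean((v_ref - v_scaled) ** 2)

# The 2 ms shift produces a far larger least-squares error than the 10%
# amplitude error, although the shifted spike's shape is unchanged.
print(mse_shift, mse_scale)
```

A common way to soften this, as the reviewer hints, is to align or penalize spike times separately (or to compare feature-based summaries) rather than comparing raw traces point by point.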
For instructions, please see protocols.io.

Submitted filename: paper-review-1.pdf
Submitted filename: main.pdf

25 May 2020

Submitted filename: Reply_reviewers.pdf

15 Jun 2020

Dear Prof Nogaret,

We are pleased to inform you that your manuscript 'Estimation of neuron parameters from imperfect observations' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article.
All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Hermann Cuntz
Associate Editor
PLOS Computational Biology

Daniele Marinazzo
Deputy Editor
PLOS Computational Biology

***********************************************************

Please make sure to address the final concerns of Reviewer #2.

Reviewer's Responses to Questions

Comments to the Authors:
Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors have addressed all the remarks.
Reviewer #2: Review uploaded as attachment.
Reviewer #3: Placeholder

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology
data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose "no", your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Robert Gowers
Reviewer #3: Yes: Alexander J Stasik

Submitted filename: reviewer2-second-review.pdf

9 Jul 2020

PCOMPBIOL-D-20-00043R1
Estimation of neuron parameters from imperfect observations

Dear Dr Nogaret,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing.
We are looking forward to publishing your work!

With kind regards,

J&J Graphics
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol