
Mean-return-time phase of a stochastic oscillator provides an approximate renewal description for the associated point process.

Konstantin Holzhausen1,2, Lukas Ramlow1,2, Shusen Pu3, Peter J Thomas4, Benjamin Lindner5,6.   

Abstract

Stochastic oscillations can be characterized by a corresponding point process; this is a common practice in computational neuroscience, where oscillations of the membrane voltage under the influence of noise are often analyzed in terms of the interspike interval statistics, specifically the distribution and correlation of intervals between subsequent threshold-crossing times. More generally, crossing times and the corresponding interval sequences can be introduced for different kinds of stochastic oscillators that have been used to model variability of rhythmic activity in biological systems. In this paper we show that if we use the so-called mean-return-time (MRT) phase isochrons (introduced by Schwabedal and Pikovsky) to count the cycles of a stochastic oscillator with Markovian dynamics, the interphase interval sequence does not show any linear correlations, i.e., the corresponding sequence of passage times forms approximately a renewal point process. We first outline the general mathematical argument for this finding and illustrate it numerically for three models of increasing complexity: (i) the isotropic Guckenheimer-Schwabedal-Pikovsky oscillator that displays positive interspike interval (ISI) correlations if rotations are counted by passing the spoke of a wheel; (ii) the adaptive leaky integrate-and-fire model with white Gaussian noise that shows negative interspike interval correlations when spikes are counted in the usual way by the passage of a voltage threshold; (iii) a Hodgkin-Huxley model with channel noise (in the diffusion approximation represented by Gaussian noise) that exhibits weak but statistically significant interspike interval correlations, again for spikes counted when passing a voltage threshold. For all these models, linear correlations between intervals vanish when we count rotations by the passage of an MRT isochron. 
Finally, we discuss why the removal of interval correlations does not change the long-term variability and its effect on information transmission, especially in the neural context.
© 2022. The Author(s).

Keywords:  Interspike interval statistics; Phase description; Stochastic neuron models; Stochastic oscillations

Year:  2022        PMID: 35166932      PMCID: PMC9068687          DOI: 10.1007/s00422-022-00920-1

Source DB:  PubMed          Journal:  Biol Cybern        ISSN: 0340-1200            Impact factor:   3.072


Introduction

A number of biological systems of rather different nature display stochastic oscillations. The calcium concentration within cells (Skupin et al. 2008), the deflection of mechanical organelles like the hair bundle (Martin et al. 2003), the position of molecular motors (Plaçais et al. 2009), the membrane potentials of neurons (Bryant et al. 1973; Walter et al. 2006), and even the number of individuals in biological populations (McKane and Newman 2005) can all show a quasi-rhythmic behavior that is shaped and in some cases even only enabled by randomness. Stochastic models for such oscillations are diverse as well, including harmonic oscillators with damping and fluctuations (Uhlenbeck and Ornstein 1930; Schimansky-Geier and Zülicke 1990), randomly perturbed limit-cycle systems (see Ebeling et al. 1986 for an early example), and noisy excitable (Lindner et al. 2004) or heteroclinic systems (e.g., Giner-Baldo et al. 2017). Most models are complicated (multidimensional, nonlinear, and stochastic), and even the calculation of such fundamental statistics as the stationary probability density or the mean rotation period is difficult. Hence, researchers have attempted reduced descriptions that capture the salient features of the system and enable, for instance, the analysis of coupled oscillator systems. In the deterministic case (without noise) the most successful simplification is a phase description: to every point in the multidimensional phase space we assign a phase, reducing in this way a multidimensional system to a one-dimensional description. A great success of this mapping is that weak interactions between nonlinear oscillators can be efficiently described in terms of the phase response curve (Hoppensteadt and Izhikevich 1997). To generalize the concept of a phase to the stochastic case is nontrivial, and different notions of phase have been suggested.
The mean-return-time (MRT) phase by Schwabedal and Pikovsky (2013) is a generalization of the stroboscopic definition of a deterministic phase, while the asymptotic phase introduced by Thomas and Lindner (2014) generalizes the long-term behavior of two phase points in the deterministic case. Here we focus on the first definition of phase, the MRT phase: Points in the phase space belong to the same phase (they are on the same isochron) if the mean time to return to the same curve after one rotation is equal to the mean period of the oscillator. Implementing this algorithmic definition is not as straightforward as it may sound. More recently, Cao et al. (2020) proposed an analytical definition for a special class of planar white-noise-driven oscillators, which is based on the well-known partial differential equation for the mean-first-passage time with an unusual jump condition. Another simplifying approach to oscillatory systems is to associate a point process with the repetitive features of the system: In neurons, for instance, upcrossings of a voltage threshold have been used to define a spike train or, equivalently, an ordered sequence of interspike intervals (ISIs); in heart dynamics, the intervals between heartbeats have been analyzed in a similar way. Besides the statistical distribution of the single intervals (their mean, variance, coefficient of variation, skewness, etc.), correlations among the intervals have attracted attention because they may betray interesting dynamical features of the system or of the driving stimuli. Most often, one focuses on the linear correlations as quantified by the serial correlation coefficient (SCC)

\rho_k = \frac{\langle (I_{i+k}-\langle I_i\rangle)(I_i-\langle I_i\rangle)\rangle}{\langle (I_i-\langle I_i\rangle)^2\rangle},    (1)

where the average can be taken over the sequence of intervals (i.e., over the index i) or, equivalently, over an ensemble of spike trains (then i would be fixed).
For a stationary sequence of intervals, the SCC compares the covariance between two intervals lagged by an integer k to the variance of a single interval (this yields a number between -1 and 1). If intervals are independent, as is the defining property of a renewal point process, $\rho_k = 0$ for $k \geq 1$. (We always have $\rho_0 = 1$ by definition.) Note that this conclusion cannot be reversed: A point process with vanishing SCC can still display nonlinear correlations, and there might be a statistical dependence among its intervals. Hence, strictly speaking, a process with $\rho_k = 0$ is not necessarily a renewal process. Still, because $\rho_k$ is the almost exclusively used measure of nonrenewal behavior, we may regard a spike train with vanishing linear correlations as being approximately renewal. In neurons, nonrenewal behavior, i.e., nonvanishing ISI correlations, may emerge because of slow (Lindner 2004; Schwalger and Schimansky-Geier 2008) or quasi-rhythmic (Bauermeister et al. 2013) stochastic stimuli, in networks due to refractoriness of presynaptic neurons and short-term synaptic depression (Schwalger et al. 2015), and, last but not least, because of spike-frequency adaptation (Liu and Wang 2001; Chacron et al. 2001); see Farkhooi et al. (2009); Avila-Akerberg and Chacron (2011) for reviews of experimental data on the SCC and its implications for signal transmission. Interbeat intervals in heart dynamics show correlations as well, due to the often highly nonlinear and complex dynamics; see, e.g., Kim et al. (2019); Goldberger et al. (2002). We note that the most studied stochastic model of spike generation, the one-dimensional integrate-and-fire model driven by white noise, generates a renewal process: the reset of the voltage after reaching the threshold eliminates any memory of past intervals, and the driving noise is uncorrelated by assumption and cannot carry any memory either.
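As a minimal numerical illustration (a Python/NumPy sketch of our own; none of the names below are from the paper), the SCC of Eq. (1) can be estimated from an interval sequence; an independent (renewal) sequence then yields vanishing correlations at all nonzero lags:

```python
import numpy as np

def serial_correlation(intervals, max_lag=5):
    """Estimate the serial correlation coefficient rho_k of Eq. (1)
    for lags k = 0 .. max_lag from a stationary interval sequence."""
    I = np.asarray(intervals, dtype=float)
    dI = I - I.mean()
    var = np.mean(dI * dI)
    return np.array([np.mean(dI[:len(I) - k or None] * dI[k:]) / var
                     for k in range(max_lag + 1)])

# A renewal sequence (independent exponential intervals) should give
# rho_0 = 1 (by construction) and rho_k ~ 0 for k >= 1.
rng = np.random.default_rng(1)
rho = serial_correlation(rng.exponential(1.0, 100_000))
print(rho)
```

For a correlated sequence (e.g., the ISIs of the adaptive model in Sect. 3.2), the same estimator returns clearly nonzero values at small lags.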
In contrast, multidimensional stochastic neuron models (which include, for instance, dynamical variables for spike-frequency adaptation and/or colored noise) can generate richer (nonrenewal) spike statistics. In this paper, we report the remarkable observation that counting rotations in terms of the MRT phase in planar white-noise-driven oscillators leads to a sequence of interphase intervals (IPIs), for which linear correlations vanish. Put differently, if we count spikes not with a standard threshold but rather by the passing of an MRT isochron, the associated point process will be (at least approximately) a renewal process. In the next section, we give the general rationale for this result. We then look at specific examples in Sect. 3. In Sect. 3.1 we analyze an isotropic noise-driven oscillator with two stable limit cycles that, counted with a conventional threshold, generates an ISI sequence with pronounced positive correlations. In Sect. 3.2 we test our idea for an integrate-and-fire model with spike-frequency adaptation that is well known for its negative ISI correlations (Liu and Wang 2001). Finally, in Sect. 3.3 we look at a conductance-based neuron model with channel noise that has weak positive ISI correlations if spikes are generated by upcrossings of a voltage threshold. In all cases, counting rotations as passings of the MRT isochron leads to an IPI sequence with vanishing correlation coefficients. We conclude the paper with a brief discussion of the implications of our result for modeling stochastic oscillations.

Model class and general result

Here we introduce the general model of a stochastic oscillator with white noise and recapitulate how phase lines, and the corresponding crossing times that form a point process, can be defined. We discuss the salient feature of the mean-return-time phase and argue why linear correlations among the corresponding interphase intervals should vanish.

The general oscillator model

We consider an n-dimensional nonlinear stochastic system, given in terms of a system of Langevin equations:

\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + \mathbf{g}(\mathbf{x})\,\boldsymbol{\xi}(t).    (2)

Here $\mathbf{f}(\mathbf{x})$ is the n-dimensional drift vector, $\mathbf{g}(\mathbf{x})$ is an $n \times k$ matrix (where k can be larger than n), and $\boldsymbol{\xi}(t)$ is a k-dimensional vector of white Gaussian noise processes with vanishing mean values and correlation functions

\langle \xi_i(t)\,\xi_j(t') \rangle = \delta_{ij}\,\delta(t-t').    (3)

For technical reasons, we require that the diffusion matrix $\mathbf{g}\mathbf{g}^\top$ be invertible everywhere (see Cao et al. (2020)). If the noise is multiplicative ($\mathbf{g}(\mathbf{x}) \neq$ const), it is always interpreted in the sense of Itô. Furthermore, for certain types of models (integrate-and-fire neurons), an additional reset rule for the trajectory applies if it reaches certain boundaries. Much of what we discuss here is illustrated in terms of two-dimensional (planar) systems with $n = 2$ but can be generalized to higher dimensions. At the risk of stating the obvious: the above system of Langevin equations describes a Markov process (irrespective of whether a reset rule applies or not). We assume that the system undergoes stochastic oscillations, i.e., it performs randomly timed rotations around a central core and remains within an annulus-like domain (cf. Fig. 1); for integrate-and-fire models this is a bit more complicated because the trajectory remains within a certain cutout of the annulus (the reset rule shortcuts a part of it), but for the moment we leave this complication aside. In the general case, as a helpful construction, we impose reflecting inner and outer boundaries. Both boundaries of the domain are chosen such that reflections are rare events and the main share of probability lies far from the boundaries. In many cases we may also perform the limit in which the inner and the outer boundary shrink to zero or go to infinity, respectively (see Holzhausen et al. (2022); Holzhausen (2021) for some examples).
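The paper integrates such Langevin dynamics with an explicit Euler–Maruyama scheme; the following is a generic sketch of that scheme (our own code; the Stuart–Landau-like drift used in the example is only an illustrative stand-in, not one of the models of Sect. 3):

```python
import numpy as np

def euler_maruyama(f, g, x0, dt, n_steps, rng):
    """Integrate dx = f(x) dt + g(x) dW (Ito) with the explicit
    Euler-Maruyama scheme; x is n-dimensional, dW is k-dimensional."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    sqdt = np.sqrt(dt)
    for i in range(n_steps):
        dW = sqdt * rng.standard_normal(g(x).shape[1])
        x = x + f(x) * dt + g(x) @ dW
        traj[i + 1] = x
    return traj

# Illustrative planar oscillator: noisy rotation around a stable circle r = 1.
omega, D = 1.0, 0.05
f = lambda x: np.array([x[0] * (1 - x[0]**2 - x[1]**2) - omega * x[1],
                        x[1] * (1 - x[0]**2 - x[1]**2) + omega * x[0]])
g = lambda x: np.sqrt(2 * D) * np.eye(2)   # additive isotropic noise
traj = euler_maruyama(f, g, [1.0, 0.0], 1e-3, 20_000,
                      np.random.default_rng(0))
print(traj.shape)
```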
Here we are not interested in the mathematical generality of the result but assume that the considered system is sufficiently non-pathological that the exact values of the boundaries are not important (except for the behavior close to those boundaries). We furthermore assume that for the stochastic oscillator, the sets of constant phase (such as the MRT phase or the asymptotic phase) are simple manifolds that can be parametrized in polar coordinates, e.g., as curves $\varphi = \theta(\rho)$ connecting the inner and outer boundaries; this has been the case for all examples studied by us and co-authors in the past (Thomas and Lindner 2014, 2015, 2019; Cao et al. 2020; Pérez-Cervera et al. 2021).
Fig. 1

Mapping from the stochastic oscillator dynamics in Cartesian coordinates (a) to effective angle-radius coordinates (b). For simplicity we consider a dynamics that is constrained to a ring-like domain by imposed reflecting boundaries on an inner and an outer circle (or topologically equivalent lines as in the sketch). Simple connections between these boundaries define a phase line $\ell$. The return-time problem in Cartesian variables poses some subtle problems that can be removed by considering the passage-time problem between copies of the phase line in angle-radius coordinates ($\ell_N$ and $\ell_{N+1}$ in (b)). We show two rotations (first in black, second in purple) in (a) and (b). Technically, the mapping from (a) to (b) may involve as an intermediate step the mapping to a true annulus and from the true annulus to polar coordinates in (b) (see Cao et al. (2020))

For a planar system we can define a simple connecting curve $\ell$ (or a connecting (n-1)-dimensional manifold for an n-dimensional system) between the inner and the outer boundaries. We count rotations by the crossings of $\ell$ or, put differently, by the return to this curve after the completion of a rotation. The latter condition is important: Crossings of a curve are a subtle issue in dynamical systems driven by white noise, because even if we restrict the crossings to be counted only when occurring in the direction of rotation, there will be infinitely many of them in a finite time if we count them naively, without the condition of a completed rotation (for the general problem of the number of crossings of a stochastic process, see Stratonovich (1967)). Helpful in this respect is a mapping of the trajectory in Cartesian coordinates to a first-passage-time problem in a transformed space with an angle variable and a number of radius variables. This is evident in the two-dimensional case, in which we just transform to effective angle-radius variables, as illustrated in Fig. 1.
The connecting curve in Cartesian coordinates can now be numbered according to the number of completed rotations, e.g., $\ell_0, \ell_1, \ell_2, \dots$. The return of the trajectory starting at $\ell_N$ to the very same curve in Cartesian coordinates is mapped, in polar coordinates, to a passage from the curve $\ell_N$ to a copy of the curve at $\ell_{N+1}$ (shifted by $2\pi$ in the angle variable). For certain obvious choices of the curve (the spoke of a wheel or, in the neural models, a voltage threshold), we call the crossing times $t_i$ of the curves the spike times and the intervals between adjacent spike times the interspike intervals $I_i = t_i - t_{i-1}$. These intervals form an ordered sequence of stochastic variables that will in general be correlated, i.e., the correlation coefficient defined in (1) displays nonvanishing values $\rho_k \neq 0$ for $k \geq 1$. We expect that correlations depend on the lag between two intervals and that these correlations vanish as the lag goes to infinity ($\rho_k \to 0$ for $k \to \infty$). The stationary mean value of the intervals, $\langle I_i \rangle = \bar{T}$, the mean rotation period, is independent of the specific choice of $\ell$. To see this, consider the relation of $\bar{T}$ to the winding number, i.e., the mean number of rotations per time unit obtained by time averaging over a long window (0, T):

\nu = \lim_{T\to\infty} \frac{N(T)}{T} = \lim_{N\to\infty} \frac{N}{\sum_{i=1}^{N} I_i} = \frac{1}{\langle I_i \rangle} = \frac{1}{\bar{T}}.    (4)

Here we have used that the expression in the denominator is essentially the definition of the mean rotation period. (The effect of the initial and final partial intervals becomes negligible for $T \to \infty$.) Because the winding number cannot depend on the specific way we count the rotations, its inverse $\bar{T}$ cannot depend on the shape of $\ell$ either (as long as $\ell$ faithfully counts every single rotation at some point).
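The completed-rotation condition can be implemented robustly by tracking the unwrapped polar angle rather than counting raw curve crossings. The sketch below (our own construction, not code from the paper) registers one event per completed rotation and also illustrates that the mean period is independent of the angular position of the counting line:

```python
import numpy as np

def rotation_times(traj, t, phi_line=0.0):
    """Times at which the unwrapped polar angle of a planar trajectory
    passes phi_line + 2*pi*N: one event per completed rotation, immune
    to rapid back-and-forth crossings of the counting line."""
    phi = np.unwrap(np.arctan2(traj[:, 1], traj[:, 0]))
    n = np.floor((phi - phi_line) / (2 * np.pi))   # winding count
    events = np.flatnonzero(np.diff(n) > 0) + 1
    return t[events]

# Toy check with a deterministic rotation of period 2*pi: the mean
# period is the same for three different angular positions of the line.
t = np.linspace(0.0, 100.0, 200_001)
traj = np.column_stack([np.cos(t), np.sin(t)])
mean_periods = [np.diff(rotation_times(traj, t, line)).mean()
                for line in (0.0, 1.0, 2.5)]
print([round(m, 3) for m in mean_periods])
```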

The MRT phase and the associated point process

The MRT phase is defined via a set of special phase lines $\ell_\theta$, such that for all points which start with a given phase, i.e., on the isochron $\ell_\theta$, the mean time to reach the very same isochron again after one rotation (or, in polar coordinates, the copy $\ell_{\theta+2\pi}$) is equal to the mean rotation period of the oscillator, irrespective of the starting point $\mathbf{x}_0$ on the isochron:

\langle T(\mathbf{x}_0 \to \ell_{\theta+2\pi}) \rangle = \bar{T} \quad \text{for all } \mathbf{x}_0 \in \ell_\theta.    (5)

Note that the target radius variable upon return can have an arbitrary value; the defining property of this special isochron is that there is no dependence of the average interval on the radius of the starting point on $\ell_\theta$. For planar oscillators, Cao et al. (2020) showed that this definition uniquely determines the phase mapping (apart from a trivial offset of the phase, of course). We also note that in the limit of vanishing noise, this corresponds to the classical phase of the oscillator (if it exists). Subsequent passings of the isochron at times $t_i$ can be used to define a sequence of special interspike intervals that we call in the following the interphase intervals (IPIs):

T_i = t_i - t_{i-1}.    (6)

On the notation: we reserve the letter I for a general interval (including the interspike interval), while $T_i$ is specifically the interval between MRT-phase-line crossings, i.e., the IPI. The main result of our paper is that for the sequence in (6) there are no linear correlations, i.e., $\rho_k = 0$ for $k \geq 1$. Why should this be the case?
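To make the defining condition (5) concrete, consider a deliberately simple toy sketch (entirely our construction): if the angular drift does not depend on the radius, every spoke already fulfills the MRT condition, and a Monte-Carlo estimate of the mean return time is the same for an inner and an outer starting radius, approximately $2\pi/\omega$:

```python
import numpy as np

def mean_return_time(r0, omega=1.0, D=0.1, dt=2e-3, trials=400, seed=0):
    """Monte-Carlo estimate of the mean time until the unwrapped phase
    has advanced by one full rotation (2*pi).  In this simple sketch the
    angular drift omega is independent of the radius r0, so every spoke
    already satisfies the MRT condition (5)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        phi, t = 0.0, 0.0
        while phi < 2.0 * np.pi:
            phi += omega * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
            t += dt
        total += t
    return total / trials

# Approximately the same mean return time for both starting radii:
m_inner = mean_return_time(0.8, seed=1)
m_outer = mean_return_time(1.2, seed=2)
print(round(m_inner, 2), round(m_outer, 2))
```

For oscillators whose angular speed does depend on the radius (as in Sect. 3.1), a straight spoke fails this test, and the isochron must be deformed until the radius dependence of the mean return time disappears.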

Why we can expect that IPI correlations vanish

We focus now on the passages from an arbitrary phase line $\ell_0$ (not necessarily an MRT isochron) to its shifted copy $\ell_1$ and from $\ell_1$ to $\ell_2$, i.e., on the two subsequent passage times $I_1$ and $I_2$. The covariance between these two intervals is the central piece of the correlation coefficient $\rho_1$. We can write the stationary average of the product of the intervals as follows:

\langle I_1 I_2 \rangle = \int d\rho_0 \int d\rho_1 \int d\rho_2 \int dI_1 \int dI_2 \; I_1 I_2 \, P(\rho_2, I_2; \rho_1, I_1; \rho_0).    (7)

Here the variable $\rho_j$ parametrizes the crossing point on the curve $\ell_j$, and we have expressed the average by means of the stationary probability density of the two intervals and of the initial and final points on the starting and the target lines, respectively. Obviously, $\rho_1$ describes the final point for $I_1$ but also the initial point for $I_2$, and for the Markov process considered it is exactly this value that can carry memory between the intervals $I_1$ and $I_2$ (and also between $I_1$ and any higher interval $I_k$ with $k > 2$). If the interspike interval sequence shows correlations, this is exclusively due to the fact that the final point of the first interval coincides with the initial point of the subsequent interval. If we choose the phase curve in such a way that the expected interval is always the same irrespective of the starting point, we eliminate this source of (linear) correlations; this is why we expect that correlations will vanish for a sequence of interphase intervals. In what follows we underpin this intuitive argument with a calculation. Because of the Markov property of the stochastic process, the pair $(\rho_2, I_2)$ depends only on the initial point $\rho_1$ but not on the first interval $I_1$ or the previous initial point $\rho_0$. Based on this property, we can simplify the probability density as follows:

P(\rho_2, I_2; \rho_1, I_1; \rho_0) = P(\rho_2, I_2 \,|\, \rho_1)\, P(\rho_1, I_1 \,|\, \rho_0)\, P(\rho_0).    (8)

Here we have systematically split up multivariate probability densities into conditional densities and lower-dimensional multivariate densities (according to the scheme $P(a,b) = P(a|b)P(b)$) and have then used the Markov property to reduce the number of conditions: the conditional probability density of the second interval and its target point, $P(\rho_2, I_2 | \rho_1, I_1, \rho_0)$, depends neither on the first interval nor on the initial point of the first interval, and thus we can replace it by $P(\rho_2, I_2 | \rho_1)$, which has fewer arguments. Inserting (8) into (7), we can write the averaged product as follows:

\langle I_1 I_2 \rangle = \int d\rho_0 \int d\rho_1 \, \langle I_2 | \rho_1 \rangle \left[ \int dI_1 \; I_1 \, P(\rho_1, I_1 | \rho_0) \right] P(\rho_0), \qquad \langle I_2 | \rho_1 \rangle = \int d\rho_2 \int dI_2 \; I_2 \, P(\rho_2, I_2 | \rho_1).    (9)

We emphasize that this holds true for any phase line. If we specifically use the MRT phase line (switching from the I notation to the T notation), the conditional mean value of the return time becomes independent of the initial point on the line (this was the defining feature of this line), and we obtain:

\langle T_2 | \rho_1 \rangle = \bar{T}.    (10)

If we use this relation above, we can furthermore simplify the second set of integrals,

\int d\rho_0 \int d\rho_1 \int dT_1 \; T_1 \, P(\rho_1, T_1 | \rho_0)\, P(\rho_0) = \langle T_1 \rangle = \bar{T},    (11)

such that the above relation reduces to

\langle T_1 T_2 \rangle = \bar{T}^2.    (12)

This, of course, corresponds to a vanishing covariance and, consequently, a vanishing first correlation coefficient, $\rho_1 = 0$. The above line of argument can be repeated for intervals $T_i$ and $T_{i+k}$ with $k > 1$, and thus we expect that linear correlations vanish at all lags, i.e., $\rho_k = 0$ for $k \geq 1$. Finally, we note that our argument does not exclude that nonlinear correlations among the intervals can still exist; our derivation applies only to the linear correlations and could not be extended, for instance, to the variances of the intervals, because these do depend on the choice of the phase line (as shown by Holzhausen et al. (2022) for one example).
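The mechanism behind this calculation can be mimicked with a toy Markov chain (entirely our own construction, not a model from the paper): a state variable r carries memory from one interval to the next, and the intervals are correlated exactly insofar as their conditional mean depends on the starting state. Setting that dependence to zero, the analogue of the MRT condition (10), removes the linear correlations even though the memory carrier is still present:

```python
import numpy as np

def interval_chain(c, n=100_000, a=0.8, seed=2):
    """Toy Markov chain of 'crossing points' r_i; the interval starting
    at r_{i-1} has conditional mean 1 + c * r_{i-1}.  c = 0 mimics the
    MRT isochron: the Markovian memory carrier r remains, but it no
    longer shifts the conditional mean of the interval."""
    rng = np.random.default_rng(seed)
    intervals = np.empty(n)
    r = 0.0
    for i in range(n):
        intervals[i] = 1.0 + c * r + 0.1 * rng.standard_normal()
        r = a * r + rng.standard_normal()   # Markovian crossing point
    return intervals

def rho1(I):
    """First serial correlation coefficient, cf. Eq. (1)."""
    dI = I - I.mean()
    return np.mean(dI[:-1] * dI[1:]) / np.mean(dI * dI)

print(round(rho1(interval_chain(0.3)), 2))   # clearly positive
print(round(rho1(interval_chain(0.0)), 2))   # ~ 0
```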

Examples of stochastic oscillators

A planar oscillator with two stable limit cycles

We start with an example of a white-noise-driven isotropic (rotationally symmetric) planar oscillator, the so-called Guckenheimer-Schwabedal-Pikovsky oscillator (Guckenheimer 1975; Schwabedal and Pikovsky 2013), defined by the planar Langevin dynamics (13). This system shows stochastic transitions between the two stable limit cycles of the deterministic system, which occur whenever the trajectory overcomes the unstable limit cycle separating them; cf. Fig. 2a for an example trajectory in the phase space.
Fig. 2

Guckenheimer–Schwabedal–Pikovsky oscillator. a A stochastic trajectory in Cartesian coordinates showing the switching between the two limit cycles (solid circles). b A spike sequence given by the crossing times of a spoke (top) and one of the components as a time series (bottom), revealing stochastic oscillations that are slower (for the inner limit cycle) and faster (for the outer limit cycle) leading to subsequences of shorter and longer intervals between spikes (see top panel)

If we count rotations in a simple manner by the first upcrossings of the (unwrapped) angle through $2\pi N$ (here N is the rotation count or winding number), we obtain a sequence of stochastic intervals that is clearly positively correlated (see Fig. 3). Why do we see positive correlations? In simple terms, the speed is different on the two limit cycles; the difference is determined by a parameter in (13). Consequently, the ISIs on one of the limit cycles will on average differ from those on the other limit cycle, and both will deviate from the mean ISI. If transitions between the two limit cycles are not too frequent, we will see a number of shorter ISIs belonging to the outer limit cycle followed by a subsequence of longer intervals belonging to the inner limit cycle. Put differently, adjacent intervals deviate in the same manner from the mean interval, which corresponds to positive interval correlations. The mechanism is also illustrated in Fig. 2b: only one Cartesian component of the oscillator is shown here, clearly elucidating the difference in oscillation frequency and the resulting subsequences of adjacent intervals that are all shorter or all longer than the mean ISI. Indeed, as becomes evident in Fig. 3b, counting intervals by the passages through the spoke of a wheel leads to pronounced positive ISI correlations.
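This verbal argument can be checked with a minimal two-state surrogate (our own sketch, not the theory of Schwalger et al. (2012)): intervals drawn around two different mean periods, with rare Markovian switches between them, show pronounced positive serial correlations:

```python
import numpy as np

def switching_isis(n=100_000, p_switch=0.05, seed=3):
    """ISIs drawn around two mean periods (mimicking a fast outer and a
    slow inner limit cycle); the active cycle is a two-state Markov
    chain with rare switches, producing runs of like intervals."""
    rng = np.random.default_rng(seed)
    means = (0.8, 1.2)          # hypothetical fast / slow mean ISIs
    state = 0
    isis = np.empty(n)
    for i in range(n):
        isis[i] = means[state] + 0.05 * rng.standard_normal()
        if rng.random() < p_switch:
            state = 1 - state
    return isis

isis = switching_isis()
dI = isis - isis.mean()
rho_1 = np.mean(dI[:-1] * dI[1:]) / np.mean(dI * dI)
print(round(rho_1, 2))   # pronounced positive correlation
```

Decreasing p_switch (rarer transitions) strengthens and lengthens the correlations, in line with the observation that the correlation range is set by the mean number of intervals between switches.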
Fig. 3

Event cross sections in phase space (a) and serial correlations of subsequent ISIs and IPIs (b) of the planar oscillator with two stable limit cycles (Guckenheimer-Schwabedal-Pikovsky oscillator). Stochastic periods are measured with respect to three event cross sections in phase space: a threshold line (spoke), the MRT isochron, and a twisted version of the MRT isochron. The black dotted line indicates the reflecting boundaries of the annulus domain. Numerical simulations of the model were performed using an explicit Euler-Maruyama scheme

The number of lags over which correlations extend is roughly given by the number of intervals it takes on average to switch between the limit cycles. The observed correlations can be described analytically by a theory that assumes Markovian switching of the firing between two rates and coefficients of variation (see Schwalger et al. (2012)). For the system at hand, some of us have recently derived an analytical expression for the MRT phase in the form of a parametrization of the isochron, Eq. (14), together with an analytical expression for the mean rotation frequency (or, equivalently, the inverse of the mean rotation period), Eq. (15) (Holzhausen et al. 2022). We can now use this isochron to count rotations and create a sequence of IPIs. If we measure their SCC, all linear correlations are gone (see green line in Fig. 3b): $\rho_k = 0$ for all $k \geq 1$, in line with what we argued in Sect. 2.3. The isochron in this case is not a straight line between the inner and the outer boundary but winds several times around the origin. This gives inner-lying points of the same phase (closer to the slower inner limit cycle) a head start compared to the points close to the outer limit cycle that move with higher mean speed. This is also the reason that the cause of the positive correlations is now absent: the different speeds close to the outer and inner limit cycles are compensated by the different starting points.
We also note that the sequence of IPIs is significantly more irregular than that of the ISIs. Below, in the discussion section, we uncover the general mechanism for why the CV should increase (decrease) when positive (negative) correlations are removed. Last but not least, we report an interesting finding for a counting curve that is five times as twisted as the MRT isochron (cf. Fig. 3a). In this case, correlations between the respective intervals become slightly negative. This illustrates that, for a Markov process, the geometry of the counting line for the spikes or events controls the correlations of the intervals. Choosing the MRT isochron as the counting line leads to vanishing correlations, but in principle both positive and negative correlations are possible.

An integrate-and-fire model with a spike-triggered adaptation current

We turn now to a simple yet very successful neuron model, the leaky integrate-and-fire (IF) model with an adaptation current (Treves 1993; Liu and Wang 2001; Chacron et al. 2001; Benda and Herz 2003), endowed with white Gaussian current noise (Schwalger and Lindner 2013). In nondimensional form, the equations of this system read

\dot{v} = -v + \mu - a + \sqrt{2D}\,\xi(t), \qquad \tau_a \dot{a} = -a + \tau_a \Delta \sum_i \delta(t - t_i),    (16)

with an additional fire-and-reset rule: Whenever the voltage variable reaches the threshold $v_T$, a spike is fired at time $t_i$, and at the same time v(t) is reset to the reset value $v_R$. In contrast, the adaptation variable a is increased by $\Delta$ whenever a spike is fired. This does not require an additional reset rule but is incorporated directly into the dynamics of the adaptation variable by the sum over delta functions. (The sum runs over the spike times $t_i$ of the IF model.) Further parameters are the mean input $\mu$ and the noise intensity D of the Gaussian white noise $\xi(t)$, which obeys the autocorrelation function $\langle \xi(t)\xi(t')\rangle = \delta(t-t')$. Here, we choose $\mu$ such that the deterministic neuron model (D = 0) is mean-driven, i.e., there is no stable fixed point between $v_R$ and $v_T$, and even in the absence of noise the neuron fires repetitively.
We can interpret this model as a two-dimensional oscillator, with the caveat that a certain part of the plane is cut out and the dynamics in this cutout part is replaced by the fire-and-reset condition: it is exactly the stereotypical shape of the action potential that is not modeled in an integrate-and-fire framework. We can still think of the deterministic dynamics of the model for D = 0 as governed by a limit cycle (Schwalger et al. 2010; Schwalger and Lindner 2013). This limit cycle is represented by a thick black line in Fig. 4a: It extends only from the reset line to the threshold line and includes two infinitely fast parts, the increase of the adaptation variable by $\Delta$ and the reset to the reset voltage $v_R$. Indeed, in the deterministic system, all initial values lead to a trajectory that approaches the limit cycle.
Fig. 4

The adaptive leaky integrate-and-fire model as a planar oscillator. Panel (a) shows the deterministic vector field according to (16). The limit cycle (including the adaptation increase and the voltage reset) is represented by the thick black line. The reset line ($v = v_R$) and the threshold line ($v = v_T$) are shown by dotted and dashed vertical lines, respectively. Panel (b) shows three stochastic trajectories with corresponding ISIs and spike times $t_i$. Dotted vertical lines in the upper panel indicate the times at which a spike would have been expected in the renewal case given that there has been a spike at $t_1$. Because the first ISI is shorter than the average interval, the following intervals are more likely to be prolonged, indicating negative interval correlations over several lags k.

The standard way of counting spikes and generating a sequence of ISIs is the passage of the voltage threshold; equivalently, we can think of the reset events as forming a point process. The ISIs are typically negatively correlated (see blue circles in Fig. 5b) as is well known from the theoretical literature (Liu and Wang 2001; Chacron et al. 2000; Schwalger et al. 2010; Schwalger and Lindner 2013; Shiau et al. 2015) and also from experimental recordings (see reviews by Farkhooi et al. (2009); Avila-Akerberg and Chacron (2011)).
Fig. 5

Isochron and interval correlations for the adaptive leaky integrate-and-fire model. a: Deterministic vector field including limit cycle, reset, and threshold (dark blue line), but over a larger domain compared to Fig. 4a, and including the deterministic isochron (green line) and horizontal (yellow) lines as another (rather arbitrary) example of how a spike sequence could be defined (with a fixed vertical spacing relative to the adaptation variable of the limit cycle at the threshold). Because of the chosen phase, the isochron passes the limit cycle right at the threshold and, accordingly, at the reset point. b: Interval correlations, where the intervals are defined as the times between successive crossings of the threshold, the isochron, or the horizontal lines shown in a. Serial correlations of the ISIs (blue circles) are negative; IPI correlations (green circles) defined by the crossings of the isochrons vanish as expected; intervals defined by subsequent crossings of the horizontal lines are positively correlated (yellow circles).

We consider here a case of weak adaptation, for which the SCC is negative at all lags (Schwalger and Lindner 2013). Why are correlations between adjacent intervals negative? In Fig. 4 we have depicted three successive interspike intervals (b, top) together with their stochastic trajectories (b, bottom). The first trajectory (dark blue) starts close to the limit cycle and reaches the threshold quickly. This can be seen either from the top of panel (b), where the first interval is much shorter than the mean interval, or from the bottom of panel (b), where the trajectory crosses the threshold above the limit cycle (even though it started close to the limit cycle). The latter is related to the length of the interval because the dynamics of the adaptation imply a simple exponential decay over the course of an ISI. Hence, if we find a larger value of the adaptation variable at the end of an ISI (compared to the limit cycle), that is because this ISI was shorter than the mean ISI. Now consider the second interval (green). The initial condition of the adaptation variable for this trajectory is determined by the length of the previous interval. In particular, since the first interval was shorter than the mean, the initial value of a for the second interval will be larger than on average (see Fig. 4b; the green trajectory starts above the limit cycle). From (16) it becomes evident that an increase in a will slow down the v dynamics. The second trajectory will thus, again on average, reach the threshold after a time that is larger than the mean ISI. This can be seen in the top part of panel (b), where the second interval is indeed prolonged. A weaker version of the same effect still applies to the third trajectory (yellow), i.e., the trajectory still starts slightly above the limit cycle.
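The mechanism described above can be reproduced with a minimal simulation sketch of an adaptive LIF neuron: the voltage is driven by a constant input minus an adaptation current, and each spike triggers a jump of the adaptation variable. The model form is the standard one, but all parameter values below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not the paper's values)
mu, tau_a, delta = 3.0, 2.0, 0.3    # input, adaptation time constant, adaptation jump
v_thr, v_res, D = 1.0, 0.0, 0.05    # threshold, reset, noise intensity
dt, n_spikes = 0.005, 4000

def simulate_alif():
    """Euler-Maruyama simulation of the adaptive LIF model; returns the ISI sequence."""
    v, a, t, t_last = 0.0, 0.0, 0.0, 0.0
    isis = []
    noise_scale = np.sqrt(2.0 * D * dt)
    while len(isis) < n_spikes:
        v += dt * (mu - v - a) + noise_scale * rng.standard_normal()
        a += dt * (-a / tau_a)       # exponential decay of adaptation between spikes
        t += dt
        if v >= v_thr:               # fire-and-reset: reset voltage, kick adaptation
            isis.append(t - t_last)
            t_last = t
            v = v_res
            a += delta
    return np.array(isis)

isis = simulate_alif()
d = isis - isis.mean()
rho1 = np.mean(d[:-1] * d[1:]) / np.mean(d * d)   # lag-one SCC, expected negative
```

A short interval leaves the adaptation variable elevated, which slows the next passage to threshold, so rho1 comes out negative, in line with the blue circles in Fig. 5b.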
To summarize, an initial, shortened interval leads to an increase of the adaptation variable that prolongs the subsequent intervals; this is the mechanism by which all subsequent intervals are negatively correlated with the first interval. In line with the above explanation, we expect these serial correlations to vanish for the IPIs, i.e., the times between successive crossings of the MRT isochron. To the best of our knowledge, even the deterministic definition of the phase has not been studied for this model. For simplicity, we restrict ourselves in the following to the deterministic isochron, which, for weak noise and in the mean-driven regime, we assume to be a good approximation of the MRT isochron; we have extracted the phase isochron for the deterministic system as outlined in the appendix, Sect. A.
The resulting isochron for one specific phase is shown in Fig. 5a; as can be seen, there are several branches that belong to the same phase. This is a consequence of the reset rule, which, in the simplest case, can be understood as follows: Consider a point on the isochron that lies directly on the threshold; due to the reset rule, a trajectory that starts at this point will immediately be reset to the reset line (see Fig. 8d). This reset does not take any time; therefore, the return time to the isochron is the same whether we start at the threshold point or at its reset image, and both points should belong to the same isochron. This argument is valid with one restriction: the deterministic flow must actually carry the trajectory across the threshold at this point.
Fig. 8

Numerical procedure to determine the deterministic isochron. For a detailed description see the main text of Sect. A.

If we now count rotations for a weakly stochastic adaptive integrate-and-fire model by the passage of a (deterministic) isochron, we can construct a sequence of IPIs for which the SCC vanishes at all lags k to a very good approximation (see Fig. 5b). The standard deviation of the IPIs is also significantly smaller than that of the ISIs, as can be seen from the distributions of the two types of intervals (see inset in Fig. 5b). Hence, for weak noise we again confirm our general result derived in Sect. 2.3. Interestingly, if we use for counting rotations an alternative phase definition that is very different from both the constant voltage and the deterministic isochron, namely, a set of horizontal lines (constant adaptation, yellow in Fig. 5a), the serial correlation coefficient becomes positive (cf. yellow circles in Fig. 5b; additional features of the intervals' probability density are discussed in the appendix, Sect. B). This is yet another example of how the geometry of the counting lines determines the correlations of the corresponding interval sequences.
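Whatever counting line is used (threshold, isochron, or a horizontal line), the passage times can be extracted from a sampled trajectory in the same way: evaluate a scalar section function that changes sign at the line and interpolate the sign changes. A minimal sketch (the function name and interface are our own):

```python
import numpy as np

def crossing_times(ts, g_vals):
    """Times at which a sampled scalar g(x(t)) crosses zero from below.

    `ts` are the sample times and `g_vals` the signed distance to the
    counting line (e.g. g = v - v_thr for a voltage threshold, or a
    signed distance to an isochron branch).  Crossing times are linearly
    interpolated between samples.
    """
    ts = np.asarray(ts, dtype=float)
    g = np.asarray(g_vals, dtype=float)
    idx = np.where((g[:-1] < 0) & (g[1:] >= 0))[0]   # brackets of upcrossings
    frac = -g[idx] / (g[idx + 1] - g[idx])           # linear interpolation weight
    return ts[idx] + frac * (ts[idx + 1] - ts[idx])
```

Successive differences of the returned times then give the ISI or IPI sequence for whichever section function was supplied.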

A Hodgkin–Huxley model with channel noise

As our last example, we consider the classical Hodgkin–Huxley model endowed with channel noise. Following Skaugen and Walløe (1979), at the molecular level we take the sodium channel to comprise three independent binary "m" gates and one independent binary "h" gate, leading to a channel state graph with eight vertices and 20 directed edges. Similarly, we take the potassium channel to comprise four independent binary "n" gates, leading to a channel state graph with five vertices and eight directed edges. See Fig. 10 in appendix C for illustration. Given a total population of sodium and potassium channels, we define the state vectors of occupancy fractions, each summing to unity. The net sodium conductance is the fraction of sodium channels in the open state multiplied by the maximal sodium conductance; the net potassium conductance is defined analogously. Each of the 28 directed edges in Fig. 10 represents a particular channel state transition, i.e., the opening or closing of a single gate. We take each such edge to be an independent source of fluctuations. In the large channel population limit, the resulting diffusion approximation (Fox and Lu 1994; Goldwyn and Shea-Brown 2011; Goldwyn et al. 2011) gives a system obeying a set of Langevin equations (Pu and Thomas 2020, 2021). Here, C is the capacitance, I_app the applied current, ḡ_i the maximal conductance and V_i the associated reversal potential (for i ∈ {Na, K}), and the Ohmic leak current is determined by the leak conductance and the leak reversal potential (see Table 1). The voltage-dependent drift matrices and the noise coefficient matrices are derived by Pu and Thomas (2020, 2021) and reproduced in Appendix C.
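The bookkeeping for the net conductances described above is simple: with occupancy-fraction vectors over the eight Na⁺ and five K⁺ channel states, only the fully open states conduct. A sketch using the maximal conductances from Table 1, under the assumed storage convention (not fixed by the source) that the open state (m3h1 resp. n4) is the last entry:

```python
import numpy as np

gbar_na, gbar_k = 120.0, 36.0   # maximal conductances (muS/cm^2, Table 1)

def net_conductances(x_na, x_k):
    """Net conductances from channel-state occupancy fractions.

    x_na has 8 entries (the m0h0..m3h1 states) and x_k has 5 entries
    (the n0..n4 states); each vector sums to one.  By the convention
    assumed here, the last entry is the open (conducting) state.
    """
    x_na, x_k = np.asarray(x_na), np.asarray(x_k)
    assert np.isclose(x_na.sum(), 1.0) and np.isclose(x_k.sum(), 1.0)
    return gbar_na * x_na[-1], gbar_k * x_k[-1]
```

Multiplying these conductances by the respective driving forces (V_i − V) yields the ionic currents entering the voltage equation.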
Fig. 10

Transition state diagram for the stochastic Hodgkin–Huxley model, redrawn with permission from Pu and Thomas (2021). A, B: states and transitions for the K⁺ and Na⁺ channels. States marked in green shading represent the conducting states. See App. C for the voltage-dependent per capita transition rates for each directed edge. Small blue numerals label the directed edges 1-8 (K⁺ channel) and 1-20 (Na⁺ channel). Transitions involving the Na⁺ channel fast activation (m) gates are marked with red arrows.

For this 14-dimensional HH system we extract a sequence of interspike intervals (ISIs) and interphase intervals (IPIs) as follows. In order to find the sequence of voltage spikes, we set a fixed threshold voltage; each spike time is determined as the upcrossing time of this threshold. Because all noise in the model is contained in the gating variables, rather than the voltage, the voltage is continuously differentiable and there is no ambiguity about the spike times. In order to find the interphase interval sequence, we track the times at which the simulated trajectory crosses the deterministic isochron that passes through the deterministic limit cycle at the spiking phase. Cao et al. provided a method for calculating the MRT isochrons for planar systems, but the method does not readily extend to a 14-dimensional phase space. However, in the small-noise regime, we assume that the MRT isochron is close to the classical deterministic limit cycle isochron, which we use as an approximation. Thus, we track the phase of the trajectory and mark one isophase crossing every time the phase advances by a full cycle. See Appendix C for details. If we simulate the system for a large number of channels (implying a weak noise intensity) and measure spike times and the corresponding ISIs by upcrossings of a voltage threshold, we observe a weak but statistically significant negative lag-one correlation (mean ± standard error of the mean, SEM). Here, we simulate the stochastic HH model using the same framework as Eq. 3 of Pu and Thomas (2021). The mean and standard deviation are calculated from 400 simulations, where each single simulation contains more than 10,200 ISIs. If we measure IPIs using the deterministic phase (which for weak noise should be rather close to the stochastic MRT phase), we get a lag-one correlation coefficient close to zero. We applied a one-sample t-test of the null hypothesis that the lag-one coefficient has mean zero, at the 5% significance level, separately for the ISIs and the IPIs.
The test rejects the null hypothesis for the ISIs and accepts it for the IPIs. Hence, as for the other two systems, we can confirm our general result.
Fig. 6

Negative ISI correlations in the stochastic Hodgkin–Huxley model and vanishing correlations for the associated IPI sequence. The inset shows the statistical distribution of the lag-one coefficient for the ISIs (in light blue) and IPIs (in green), based on an ensemble of 400 trials (each containing more than 10,200 oscillations). Error bars indicate the mean ± SEM

To further illustrate the significance or insignificance of the negative correlation coefficient at lag one, we compare the ISI and IPI statistics to those of the corresponding sequences of shuffled intervals. We recorded the lag-one coefficient for each permutation and plotted a histogram. Figure 7 presents an example of the distributions of the lag-one coefficient for ISIs and IPIs with 1000 random permutations. The red bar marks the mean over the 1000 permutations, which is almost zero in both cases. The lag-one coefficients of the original (unshuffled) spike trains are plotted as black bars. A one-sample t-test accepts the null hypothesis that the mean of the shuffled-sequence coefficients is zero at the 5% significance level, with p-values of 0.6357 for the ISIs and 0.1731 for the IPIs. Given the distribution of the coefficients over permutations, the magnitude of the z-score of the observed ISI coefficient is so large that its probability under the null hypothesis is almost zero. For the IPIs, the z-score of the observed coefficient is -1.0851, in favor of the null hypothesis that it is drawn from the distribution for the shuffled IPIs.
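The shuffling comparison above can be sketched in a few lines: permuting the interval sequence destroys the serial order while preserving the interval distribution, so the surrogate lag-one coefficients are centred near zero and the observed value can be expressed as a z-score relative to them (a minimal sketch, not the authors' analysis code):

```python
import numpy as np

def shuffle_test(intervals, n_perm=1000, seed=0):
    """z-score of the lag-one serial correlation against shuffled surrogates.

    Returns (observed rho_1, z-score of the observed value relative to
    the distribution of rho_1 over the shuffled surrogate sequences).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(intervals, dtype=float)

    def rho1(y):
        d = y - y.mean()
        return np.mean(d[:-1] * d[1:]) / np.mean(d * d)

    observed = rho1(x)
    surrogates = np.array([rho1(rng.permutation(x)) for _ in range(n_perm)])
    z = (observed - surrogates.mean()) / surrogates.std()
    return observed, z
```

For a genuinely renewal sequence, the observed coefficient is itself a typical draw from the surrogate distribution, so |z| stays of order one; strong serial correlations produce |z| far outside that range.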
Fig. 7

Validating the significance of the negative serial correlation coefficient. Each histogram plots the distribution of the lag-one coefficient for 1000 randomly shuffled sequences of ISIs (top) or IPIs (bottom). Black bar: coefficient of the unshuffled sequence. Red: mean over the shuffled sequences


Discussion and conclusions

We have found an interesting property of the recently introduced MRT phase in multidimensional oscillator models: the rotation times of these systems in general form a non-renewal point process if standard threshold criteria are used; however, if the isochron of the MRT phase is used, at least the linear correlations vanish. This finding has been mathematically derived above but can also be intuitively understood as follows. For a Markov process, correlations between adjacent passage intervals can arise only through their shared point in space (the final point of the first interval, which is the initial point of the second interval). The correlation between the intervals can be regarded as a conditional mean value of the second interval; if this mean value becomes independent of the initial point in space (the point on the MRT phase line), it becomes independent of the first interval. Counting according to the MRT phase thus gives us an (approximately) renewal process, which is a great simplification, because for these processes many formulas for their basic statistics and relationships between different statistics are known (Cox 1962). It might even be possible to use this mapping (from the model's phase space to the MRT phase) to find novel ways to calculate the serial correlations for the standard threshold counting, although we have to admit that we do not yet have an idea of how to do this in practice. One motivation for calculating the interspike intervals' correlation coefficients is the effect that these correlations have on the long-term variability of the spike train. In particular, the long-term asymptotics of the Fano factor of the counting process is given by
$$F = C_V^2 \left( 1 + 2 \sum_{k=1}^{\infty} \rho_k \right),$$
where $C_V$ is the coefficient of variation of the intervals and $\rho_k$ is the serial correlation coefficient at lag $k$. Hence, purely negative correlations over all lags, for instance, are known to reduce the Fano factor, while positive correlations will increase it.
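The long-term Fano factor F = C_V^2 (1 + 2 sum_k rho_k) can be estimated directly from an interval sequence; a sketch with a truncated lag sum (the cutoff `max_lag` is our own assumption and must exceed the correlation range of the data):

```python
import numpy as np

def fano_from_scc(intervals, max_lag=20):
    """Long-time Fano factor F = CV^2 * (1 + 2 * sum_{k=1..max_lag} rho_k).

    The infinite lag sum is truncated at `max_lag`, which should be large
    compared to the range over which the intervals are correlated.
    """
    x = np.asarray(intervals, dtype=float)
    d = x - x.mean()
    var = np.mean(d * d)
    rho_sum = sum(np.mean(d[:-k] * d[k:]) / var for k in range(1, max_lag + 1))
    cv2 = var / x.mean() ** 2          # squared coefficient of variation
    return cv2 * (1.0 + 2.0 * rho_sum)
```

For a renewal sequence the lag sum vanishes and F reduces to C_V^2; purely negative correlations pull F below C_V^2, consistent with the discussion above.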
The long-term Fano factor is also intimately related to the spike-train power spectrum via the relation (Cox and Lewis 1966)
$$\lim_{f \to 0} S(f) = r F,$$
where $r$ is the firing rate of the neuron (the inverse of the mean ISI). Negative correlations, for instance, can lead to a considerable drop of power at low frequencies, while positive correlations boost the spectrum in this range. These effects of correlations on the spontaneous power spectrum can be relevant for the transmission of weak time-dependent signals in the neural spike train (Chacron et al. 2004; Lindner et al. 2005; Blankenburg and Lindner 2016), because the spontaneous spectrum (the spectrum in the absence of a stimulus) serves as the background spectrum in the presence of a stimulus and may affect the signal-to-noise ratio. Does this mean that with the removal of negative correlations in the neuron model with adaptation, we have removed the potentially beneficial effect as well? We do not think that this is the case, for the following reason. The long-term statistics of the count will not depend on the exact way we count phase rotations or spikes, as long as we do not leave out events or introduce new ones. Hence, we expect that, irrespective of the way we count rotations, the long-term values of the count's mean and variance are always the same and, consequently, we have the same Fano factor in all cases, in particular
$$F_\text{ISI} = F_\text{IPI},$$
and thus, because the IPI correlations vanish, we have
$$C_{V,\text{ISI}}^2 \left( 1 + 2 \sum_{k=1}^{\infty} \rho_k^\text{ISI} \right) = C_{V,\text{IPI}}^2.$$
In all our examples, we have checked this relation numerically and confirmed it. It also concisely explains why a renewalization of the spike train of an adapting neuron comes along with a reduction of the CV, while in the case of a system with bistable behavior and positive ISI correlations, the CV becomes larger when going over to the IPI sequence. For the special case of an integrate-and-fire dynamics with an adaptation current, the non-renewal dynamics has been related by Nesse et al.
(2010, 2021) in another way to the variations of an independent (renewal-like) variable, the increments in the adaptation variable. The authors of these studies also speculate on how this independent variable might be read out by a postsynaptic readout neuron via matched synaptic kinetics. Whether the relation to the increments of the adaptation variable is somehow connected to our mapping to the MRT phase and the approximate renewalization is unclear at the moment, but it is certainly worth further exploration. Likewise, it would be interesting to see how the phase concept and the vanishing of the correlation coefficient apply to generalized models of adapting neurons, for instance, models with subthreshold adaptation components (Shiau et al. 2015) or correlated Gaussian noise (Ramlow and Lindner 2021).
Table 1

Parameters used for stochastic Hodgkin–Huxley simulations

Symbol  | Meaning                          | Value
C       | Membrane capacitance             | 1 μF/cm²
ḡ_Na    | Maximal sodium conductance       | 120 μS/cm²
ḡ_K     | Maximal potassium conductance    | 36 μS/cm²
g_leak  | Leak conductance                 | 0.3 μS/cm²
V_Na    | Reversal potential for Na⁺       | 50 mV
V_K     | Reversal potential for K⁺        | -77 mV
V_leak  | Leak reversal potential          | -54.4 mV
I_app   | Applied current to the membrane  | 10 nA/cm²
𝒜       | Membrane area                    | 100 μm²
M_tot   | Total number of Na⁺ channels     | 6,000
N_tot   | Total number of K⁺ channels      | 1,800

1. M J Chacron, A Longtin, M St-Hilaire, L Maler. Suprathreshold stochastic firing dynamics with memory in P-type electroreceptors. Phys Rev Lett, 2000.

2. Benjamin Lindner. Interspike interval statistics of neurons driven by colored noise. Phys Rev E, 2004.

3. William H Nesse, Leonard Maler, André Longtin. Biophysical information representation in temporally correlated spike trains. Proc Natl Acad Sci U S A, 2010.

4. R F Fox, Y N Lu. Emergent collective behavior in large numbers of globally coupled independently stochastic ion channels. Phys Rev E, 1994.

5. Oscar Avila-Akerberg, Maurice J Chacron. Nonrenewal spike train statistics: causes and functional consequences on neural coding (review). Exp Brain Res, 2011.

6. Tilo Schwalger, Felix Droste, Benjamin Lindner. Statistical structure of neural spiking under non-Poissonian or other non-white stimulation. J Comput Neurosci, 2015.

7. Joy T Walter, Karina Alviña, Mary D Womack, Carolyn Chevez, Kamran Khodakhah. Decreases in the precision of Purkinje cell pacemaking cause cerebellar dysfunction and ataxia. Nat Neurosci, 2006.

8. Tilo Schwalger, Karin Fisch, Jan Benda, Benjamin Lindner. How noisy adaptation of neurons shapes interspike interval histograms and correlations. PLoS Comput Biol, 2010.

9. Pascal Martin, D Bozovic, Y Choe, A J Hudspeth. Spontaneous oscillation by hair bundles of the bullfrog's sacculus. J Neurosci, 2003.
