Literature DB >> 33265798

Change-Point Detection Using the Conditional Entropy of Ordinal Patterns.

Anton M. Unakafov, Karsten Keller

Abstract

This paper is devoted to change-point detection using only the ordinal structure of a time series. A statistic based on the conditional entropy of ordinal patterns, which characterize the local up-and-down behaviour of a time series, is introduced and investigated. The statistic requires only minimal a priori information on the given data and shows good performance in numerical experiments. By the nature of ordinal patterns, the proposed method does not detect pure level changes but changes in the intrinsic pattern structure of a time series, so it could be interesting in combination with other methods.


Keywords:  change-point detection; conditional entropy; ordinal pattern

Year:  2018        PMID: 33265798      PMCID: PMC7513234          DOI: 10.3390/e20090709

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

Most real-world time series are non-stationary, that is, some of their properties change over time. A model for some non-stationary time series is provided by a piecewise stationary stochastic process: its properties are locally constant except for certain time points, called change-points, where some properties change abruptly [1]. Detecting change-points is a classical problem that is relevant in many applications, for instance in seismology [2], economics [3], marine biology [4], and many other fields of science. There are many methods for tackling the problem [1,5,6,7,8]. However, most of the existing methods have a common drawback: they require certain a priori information about the time series. It is necessary to know either a family of stochastic processes providing a model for the time series (see, for instance, [9], where autoregressive (AR) processes are considered) or at least which characteristics of the time series (mean, standard deviation, etc.) reflect the change (see [7,10]). In real-world applications, such information is often unavailable [11].

Here, we suggest a new method for change-point detection that requires minimal a priori knowledge: we only assume that the changes affect the evolution rule linking the past of the process with its future (a formal description of the considered processes is provided by Definition 4). A natural example of such a change is an alteration of the distribution of the increments. Our method is based on ordinal pattern analysis, a promising approach to real-valued time series analysis [12,13,14,15,16,17,18]. In ordinal pattern analysis, one considers order relations between the values of a time series instead of the values themselves. These order relations are coded by ordinal patterns; specifically, an ordinal pattern of order d describes the order relations between d+1 successive points of a time series. The main step of ordinal pattern analysis is the transformation of the original time series into a sequence of ordinal patterns, which can be considered as an effective kind of discretization extracting structural features from the data. A result of this transformation is demonstrated in Figure 1. Note that the distribution of ordinal patterns contains much information on the original time series, making ordinal patterns interesting for data analysis, especially for data from nonlinear systems (see [19,20]).
Figure 1

A part of a piecewise stationary time series with a change-point at t* (marked by a vertical line) and the corresponding ordinal patterns (below the plot).
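To make the transformation concrete, here is a minimal Python sketch (the paper itself provides Matlab scripts [25]; this is an independent illustration): every window of d+1 successive values is mapped to its rank permutation, and the permutations are coded by integers in an arbitrary but fixed way.

```python
import numpy as np
from itertools import permutations

def ordinal_patterns(x, d=2):
    """Integer code of the ordinal pattern of order d for each window of d+1 values."""
    # Fixed (arbitrary) coding: enumerate all (d+1)! permutations lexicographically.
    codes = {p: i for i, p in enumerate(permutations(range(d + 1)))}
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x) - d, dtype=int)
    for t in range(len(out)):
        # The rank vector of the window encodes the order relations between its values.
        ranks = tuple(int(r) for r in np.argsort(np.argsort(x[t:t + d + 1])))
        out[t] = codes[ranks]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(ordinal_patterns(rng.standard_normal(20), d=2))  # codes in {0, ..., 5}
```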

For detecting a change-point t* in a real-valued time series x, one generally considers x as a realization of a stochastic process X and computes for x a statistic that should reach its maximum at t*. Here, we suggest a statistic on the basis of the conditional entropy of ordinal patterns introduced in [21]. The latter is a complexity measure similar to the celebrated permutation entropy [12], which in particular shows better performance (see [20,21]).

Let us provide an "obvious" example only to motivate our approach and to illustrate its idea. In Example 1, we consider a time series in which only the two ordinal patterns of order 1 occur: the increasing one (coded by 0) and the decreasing one (coded by 1). Both ordinal patterns occur with the same frequency before and after the change-point; however, the transitions between successive ordinal patterns change at the change-point, and it is this change that our test statistic detects. For simplicity and in view of real applications, in Example 1 we define ordinal patterns and the statistic immediately for concrete time series. For theoretical considerations, however, it is clearly necessary to define the statistic for stochastic processes; for this, we refer to Section 2.2.

To illustrate the applicability of the statistic, we also discuss a real-world data example, in which multiple change-points are detected as described below. We consider electroencephalogram (EEG) recording 14 from the sleep EEG dataset kindly provided by Vasil Kolev.

The CEofOP statistic was first introduced in [18], where we employed it as a component of a method for sleep EEG discrimination; however, no theoretical details of the method for change-point detection were provided there. This paper aims to fill this gap and provides a justification for the statistic. Numerical experiments given in the paper show better performance of our method than of a similar one based on the Corrected Maximum Mean Discrepancy (CMMD) statistic developed by one of the authors and collaborators [23,24]. A numerical comparison with the classical Brodsky–Darkhovsky method [11] suggests good applicability of the method to nonlinear data, in particular if there is no level change. This is remarkable since our method is only based on the ordinal structure of a time series. Matlab 2016 (MathWorks, Natick, MA, USA) scripts implementing the suggested method are available at [25].

2. Methods

This section is organized as follows. In Section 2.1, we provide a brief introduction into ordinal pattern analysis; in particular, we define the conditional entropy of ordinal patterns and discuss its properties. In Section 2.2, we introduce the CEofOP statistic. In Section 2.3, we formulate an algorithm for detecting multiple change-points by means of this statistic.

2.1. Preliminaries

Central objects of the following are real-valued stochastic processes X = (X_t), t = 1, 2, ..., L, on a probability space; the length L of a process may be finite or infinite. We consider only univariate stochastic processes to keep the notation simple; however, with the appropriate adaptations, there are no principal restrictions on the dimension of a process. A process X is stationary if, for all n and t, the distributions of (X_1, X_2, ..., X_n) and (X_{1+t}, X_{2+t}, ..., X_{n+t}) coincide. Throughout this paper, we discuss detection of change-points in a piecewise stationary stochastic process. Simply speaking, a piecewise stationary stochastic process is obtained by "gluing" several pieces of stationary stochastic processes (for a formal definition of piecewise stationarity see, for instance, ([26], Section 3.1)). In this section, we recall the basic facts from ordinal pattern analysis (Section 2.1.1), present the idea of ordinal-patterns-based change-point detection (Section 2.1.2), and define the conditional entropy of ordinal patterns (Section 2.1.3).

2.1.1. Ordinal Patterns

Let us recall the definition of an ordinal pattern [14,17,18]. For d ∈ ℕ, the ordinal pattern of order d of d+1 successive values of a time series is the permutation describing the order relations between these values; consequently, there are (d+1)! different ordinal patterns of order d. Given a stochastic process X, the sequence obtained by computing, for each time t, the ordinal pattern of (X_t, X_{t+1}, ..., X_{t+d}) is called the random sequence of ordinal patterns of order d. A stochastic process is said to be ordinal-d-stationary if the probability of each ordinal pattern of order d occurring at time t does not depend on t; this probability is given by Equation (1).

The idea of ordinal pattern analysis is to consider the sequence of ordinal patterns and the ordinal pattern distribution obtained from it instead of the original time series. Though implying the loss of nearly all metric information, this often allows for extracting relevant information from a time series, in particular when it comes from a complex system. For example, ordinal pattern analysis provides estimators of the Kolmogorov–Sinai entropy [21,27,28] of dynamical systems, measures of time series complexity [12,18,29], measures of coupling between time series [16,30], and estimators of parameters of stochastic processes [13,31] (see also [15,32] for a review of applications to real-world time series). Methods of ordinal pattern analysis are invariant with respect to strictly monotone distortions of a time series [14], do not need information about the range of measurements, and are computationally simple [17]. This qualifies them for application in the case that not much is known about the system behind a time series, possibly as a first exploration step. For a discussion of the properties of ordinal pattern sequences, we refer to [13,31,33,34,35].

For the following, we need two results, the first of which (Corollary 2 from [33]) concerns the convergence of the relative frequencies of ordinal patterns to the corresponding probabilities. Probability distributions of ordinal patterns are known only for some special cases of stochastic processes [13,33,35]; in general, one estimates the probabilities of ordinal patterns by their empirical probabilities. Consider a sequence of ordinal patterns. The frequency of occurrence of an ordinal pattern among the first ordinal patterns of the sequence is given by Equation (2). Note that, in Equation (2), the last ordinal pattern of the considered part of the sequence is not counted, in order to be consistent with the conditional entropy introduced below, which considers two successive ordinal patterns. A natural estimator of the probability of an ordinal pattern i in the ordinal-d-stationary case is provided by its relative frequency in the sequence (Equation (3)).
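As an illustration of the empirical distribution in Equations (2) and (3), a short Python sketch (function names are chosen here, not taken from the paper): relative frequencies of the (d+1)! patterns estimated from an integer-coded pattern sequence, with the last pattern left uncounted as described above.

```python
import numpy as np
from math import factorial

def pattern_frequencies(patterns, d):
    """Relative frequencies of the (d+1)! ordinal patterns (last pattern not counted)."""
    counted = np.asarray(patterns[:-1], dtype=int)
    counts = np.bincount(counted, minlength=factorial(d + 1))
    return counts / counted.size
```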

2.1.2. Stochastic Processes with Ordinal Change-Points

Sequences of ordinal patterns are invariant to certain changes of the original stochastic process X, such as shifts (adding a constant to the process) ([15], Section 3.4.3) and scaling (multiplying the process by a positive constant) [14]. However, in many cases, changes in the original process X also affect the corresponding random sequences of ordinal patterns and the ordinal pattern distributions. On the one hand, this impedes the application of ordinal pattern analysis to non-stationary time series: most ordinal-patterns-based quantities require ordinal-d-stationarity of a time series [12,15,16] and may be unreliable when this condition fails. On the other hand, one can often detect change-points in the original process by detecting changes in the sequence of ordinal patterns.

Below, we consider piecewise stationary stochastic processes, that is, processes consisting of several stationary segments glued together. The time points where the segments are glued correspond to abrupt changes in the properties of the process and are called change-points. The first ideas of using ordinal patterns for detecting change-points were formulated in [23,24,34,36,37,38]. The advantage of the ordinal-patterns-based methods is that they require less information than most of the existing methods for change-point detection: it is neither assumed that the stochastic process belongs to a specific family nor that the change affects specific characteristics of the process. Instead, we consider in the following change-points at which the distribution of ordinal patterns changes; we call such change-points ordinal change-points (Definition 4). This approach seems to be natural for many stochastic processes and real-world time series. Note that a change-point where a change in mean occurs need not be ordinal, since the mean is irrelevant for the distribution of ordinal patterns ([15], Section 3.4.3). However, there are many methods that effectively detect changes in mean; the method proposed here is intended for the more complex case when there is no classical method or it is not clear which of them to apply.

We illustrate Definition 4 by two examples. Piecewise stationary autoregressive processes, considered in Example 3, are classical and provide models for linear time series. Since many real-world time series are nonlinear, we introduce in Example 4 a process originating from nonlinear dynamical systems. These two types of processes are used throughout the paper for the empirical investigation of change-point detection methods; a sketch of how their realizations can be generated is given after this paragraph. A first-order piecewise stationary autoregressive process AR((φ1, φ2), t*) is obtained by gluing at the change-point t* two stationary AR(1) processes that differ only in the autoregressive coefficient (φ1 before the change, φ2 after it). A classical example of a nonlinear system is provided by the logistic map f_r(x) = r·x·(1 − x) on the unit interval; let us include some observational noise by adding scaled standard white Gaussian noise ε to an orbit of the map. Orbits of logistic maps, particularly with observational noise, are often used as a studying and illustrating tool of nonlinear time series analysis. The resulting piecewise stationary noisy logistic process NL((r1, r2), (σ1, σ2), t*) is obtained by changing the parameter of the logistic map from r1 to r2 at the change-point t*, possibly together with the noise level (σ1 before, σ2 after; in our experiments σ1 = σ2). The NL and AR processes have rather different ordinal pattern distributions, which is the reason for using both of them for the empirical investigation of change-point detection methods in Section 3.
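A minimal Python sketch of such realizations under the model forms described above; since the exact formulations of Examples 3 and 4 are only partially reproduced here, the unit-variance innovations, the noise scaling, and the initial conditions below are assumptions made for illustration.

```python
import numpy as np

def ar_piecewise(phi1, phi2, t_star, length, rng):
    """AR(1) process whose coefficient switches from phi1 to phi2 at t_star."""
    x = np.zeros(length)
    eps = rng.standard_normal(length)          # assumed unit-variance innovations
    for t in range(1, length):
        phi = phi1 if t <= t_star else phi2
        x[t] = phi * x[t - 1] + eps[t]
    return x

def nl_piecewise(r1, r2, sigma, t_star, length, rng):
    """Noisy logistic (NL) process: the map parameter switches from r1 to r2 at t_star."""
    orbit = np.empty(length)
    orbit[0] = rng.uniform(0.05, 0.95)         # assumed generic initial point
    for t in range(1, length):
        r = r1 if t <= t_star else r2
        orbit[t] = r * orbit[t - 1] * (1.0 - orbit[t - 1])
    return orbit + sigma * rng.standard_normal(length)   # observational noise

rng = np.random.default_rng(1)
x_ar = ar_piecewise(0.1, 0.3, t_star=500, length=1000, rng=rng)
x_nl = nl_piecewise(3.95, 3.98, sigma=0.2, t_star=500, length=1000, rng=rng)
```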

2.1.3. Conditional Entropy of Ordinal Patterns

Here, we define the conditional entropy of ordinal patterns, which is the cornerstone of the suggested method for ordinal-change-point detection. Assume now that, for all pairs of ordinal patterns of order d, the probability of a given pair occurring at two successive times does not depend on t; this is a strengthening of the ordinal-d-stationarity of Definition 3 and obviously implies it. For such a process, consider the probability p(j | i) of an ordinal pattern j to occur immediately after an ordinal pattern i. Similarly to Equation (1), it is given by the probability of the pair (i, j) divided by the probability p(i) of the pattern i; if p(i) = 0, we let p(j | i) = 0. The conditional entropy of ordinal patterns of order d is then defined by

CE = − Σ_i p(i) Σ_j p(j | i) ln p(j | i),

with the convention 0 · ln 0 = 0. For brevity, we refer to CE as the "conditional entropy" when no confusion can arise. The conditional entropy characterizes the mean diversity of successors of a given ordinal pattern. This quantity often provides a good practical estimation of the Kolmogorov–Sinai entropy of dynamical systems; for a discussion of this and other theoretical properties of the conditional entropy, we refer to [21]. Here, we only note that the Kolmogorov–Sinai entropy quantifies the unpredictability of a dynamical system.

One can estimate the conditional entropy from a time series by using the empirical conditional entropy of ordinal patterns [18]. Consider a sequence of ordinal patterns of order d. Similarly to Equation (2), the frequency of occurrence of an ordinal pattern pair is given by the relative number of positions at which the pair occurs in the sequence (Equation (5)). The empirical conditional entropy of ordinal patterns (eCE) is then defined by Equation (6), which is obtained from the definition of CE by replacing the probabilities of ordinal patterns and of their pairs by the corresponding frequencies. As a direct consequence of Lemma 1, the empirical conditional entropy approaches the conditional entropy under certain assumptions, namely when the length of the sequence of ordinal patterns tends to infinity.
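A Python sketch of the empirical conditional entropy, following the standard conditional-entropy form written above (all names are chosen here; the input is an integer-coded sequence of ordinal patterns of order d):

```python
import numpy as np
from math import factorial

def empirical_conditional_entropy(patterns, d):
    """eCE = -sum_{i,j} q(i,j) * ln( q(i,j) / q(i) ), computed from pair frequencies."""
    m = factorial(d + 1)
    pairs = np.zeros((m, m))
    for a, b in zip(patterns[:-1], patterns[1:]):   # count pairs of successive patterns
        pairs[a, b] += 1.0
    pairs /= pairs.sum()                            # joint pair frequencies q(i, j)
    marg = pairs.sum(axis=1)                        # q(i), consistent with Equation (2)
    rows, cols = np.nonzero(pairs)                  # convention 0 * ln 0 = 0
    return float(-(pairs[rows, cols] * np.log(pairs[rows, cols] / marg[rows])).sum())
```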

2.2. A Statistic for Change-Point Detection Based on the Conditional Entropy of Ordinal Patterns

We now consider the classical problem of detecting a change-point on the basis of a realization x of a stochastic process X having at most one change-point (compare [6]). To solve this problem, one estimates a tentative change-point as the time point that maximizes a test statistic; then, the value of the statistic at this point is compared to a given threshold in order to decide whether it is a change-point.

The idea of ordinal change-point detection is to find change-points in a stochastic process X by detecting changes in the sequence of ordinal patterns of a realization of X. Given at most one ordinal change-point in X, one estimates its position by using the fact that the ordinal patterns well before the change characterize the process before the change, the ordinal patterns around the change correspond to a transitional state, and the ordinal patterns well after the change characterize the process after the change. Therefore, the position of a change-point can be estimated by an ordinal-patterns-based statistic that, roughly speaking, measures the dissimilarity between the distributions of ordinal patterns before and after a candidate time point; an estimate of the change-point is then given by the maximizer of this statistic. A method for detecting one change-point can be extended to an arbitrary number of change-points using binary segmentation [43]: one applies the single-change-point detection procedure to the realization x; if a change-point is detected, it splits x into two segments, in each of which one again looks for a change-point. This procedure is repeated iteratively for the obtained segments until all of them either contain no change-points or are too short.

The key problem is the selection of an appropriate test statistic for detecting changes on the basis of a sequence of ordinal patterns. We suggest the CEofOP ("conditional entropy of ordinal patterns") statistic given by Equation (8): for a candidate change-point t, it compares the empirical conditional entropy computed from the whole sequence of ordinal patterns with the length-weighted empirical conditional entropies computed from the subsequences before and after t. The intuition behind this statistic comes from the concavity of conditional entropy (not only for ordinal patterns but in general, see Section 2.1.3 in [44]): the empirical conditional entropy of the whole sequence essentially dominates the weighted average of the empirical conditional entropies of its two parts (Inequality (9)). Therefore, if the probabilities of ordinal patterns change at some point t*, but do not change before and after t*, then CEofOP tends to attain its maximum at t*. If the probabilities do not change at all, then, for L being sufficiently large, Inequality (9) tends to hold with equality. More rigorously, when the segments of a stochastic process before and after the change-point have infinite length, the consistency result of Corollary 2 takes place; Corollary 2 is a simple consequence of Theorem A1 (Appendix A.1). Another important property of the CEofOP statistic is its close connection with the classical likelihood-ratio statistic (see Appendix A.2 for details). Equation (8) can also be rewritten in a straightforward form directly in terms of the frequencies of occurrence of ordinal patterns and of ordinal pattern pairs (given by Equations (2) and (5), respectively), using Equation (6).

This statistic was first introduced and applied to the segmentation of sleep EEG time series in [18]. To demonstrate the "nonlinear" nature of the statistic, we provide Example 5, concerning the transition from a time series to its surrogate. Although being in a sense tailor-made, this example shows that CEofOP discerns changes that cannot be detected by conventional "linear" methods. The question whether a time series is linear or nonlinear often arises in data analysis; for instance, linearity should be verified before using such powerful methods as Fourier analysis. For this, one usually employs a procedure known as surrogate data testing. In Example 5, a time series is obtained by gluing a realization of a noisy logistic process NL to a surrogate of it, and the CEofOP statistic detects the gluing point, although the idea that the ordinal structure is a relevant indicator of time series linearity/nonlinearity is not new.
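Since the exact weighting of Equation (8) is not reproduced in this record, the following Python sketch uses one plausible pair-count weighting that is consistent with the concavity argument above; it should be read as an illustration of the idea, not as the authors' definition.

```python
import numpy as np
from math import factorial

def ece(patterns, d):
    """Empirical conditional entropy of an integer-coded ordinal pattern sequence."""
    m = factorial(d + 1)
    pairs = np.zeros((m, m))
    for a, b in zip(patterns[:-1], patterns[1:]):
        pairs[a, b] += 1.0
    total = pairs.sum()
    if total == 0:
        return 0.0
    pairs /= total
    marg = pairs.sum(axis=1)
    rows, cols = np.nonzero(pairs)
    return float(-(pairs[rows, cols] * np.log(pairs[rows, cols] / marg[rows])).sum())

def ceofop(patterns, t, d):
    """Whole-sequence eCE (pair-weighted) minus the weighted sum over the two segments at t."""
    n_all = len(patterns) - 1                 # number of pattern pairs in the whole sequence
    n_left, n_right = t - 1, len(patterns) - t - 1
    return (n_all * ece(patterns, d)
            - n_left * ece(patterns[:t], d)
            - n_right * ece(patterns[t:], d))
```

By concavity of the conditional entropy, this quantity is essentially non-negative and tends to peak where the ordinal pattern distribution changes.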

2.3. Algorithm for Change-Point Detection via the CEofOP Statistic

Consider a sequence of ordinal patterns of order d corresponding to a realization of some piecewise stationary stochastic process. To detect a single change-point via the CEofOP statistic, we first estimate its possible position as the maximizer of the statistic over all admissible time points, where a point is admissible if at least a minimal number of ordinal patterns lies on either side of it; this minimal length is required for a reliable estimation of the empirical conditional entropy. Indeed, from the frequency representation of the CEofOP statistic it follows that a reasonable computation of the statistic requires a reliable estimation of the eCE before and after the assumed change-point, and for this the stationary parts of the process should be sufficiently long (condition (12)). Note that this does not impose serious limitations on the suggested method.

In order to check whether the estimated position is an actual change-point, we test between the following hypotheses: H0, the parts of the sequence to the left and to the right of the estimated position come from the same distribution; H1, these parts come from different distributions. The test is performed by comparing the value of the CEofOP statistic to a threshold h: if the value is above the threshold, one rejects H0 in favour of H1. The choice of the threshold is a trade-off: the lower h, the higher the probability of a false rejection of H0 in favour of H1 (a false alarm, meaning that the test indicates a change of the distribution although there is no actual change); on the contrary, the higher h, the higher the probability of a false rejection of H1. As is usually done, we consider the threshold h as a function of the desired probability of false alarm. To compute h, we shuffle blocks of ordinal patterns from the original sequence in order to create new artificial sequences. Each such sequence has the same length as the original one, but the segments on the left and on the right of the assumed change-point should have roughly the same distribution of ordinal patterns, even if the original sequence is not stationary. This procedure uses the ideas described in [49,50] and is similar to block bootstrapping [51,52,53,54]. The scheme of detecting at most one change-point via the CEofOP statistic, including the computation of the threshold, is provided in Algorithm 1.

Algorithm 1 (detection of at most one change-point):
function DetectSingleCP(sequence of ordinal patterns, probability of false alarm)
    if the sequence is too short then
        return 0                                ▷ no change-point can be detected
    end if
    estimate the tentative change-point as the maximizer of the CEofOP statistic
    for each bootstrap sample do                ▷ computing the threshold by reshuffling
        compose an artificial sequence from randomly shuffled blocks of the original one
        compute the maximal value of the CEofOP statistic for this sequence
    end for
    sort the maximal values for the bootstrap samples in decreasing order
    h ← the value corresponding to the desired probability of false alarm
    if the maximal value of the CEofOP statistic for the original sequence exceeds h then
        return the tentative change-point
    else
        return 0
    end if
end function
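A Python sketch of this single-change-point procedure; the minimal segment length, block length, number of bootstrap samples, and all function names are placeholder assumptions, and the CEofOP variant is the pair-count weighting sketched in Section 2.2.

```python
import numpy as np
from math import factorial

def _ece(patterns, d):
    m = factorial(d + 1)
    pairs = np.zeros((m, m))
    for a, b in zip(patterns[:-1], patterns[1:]):
        pairs[a, b] += 1.0
    s = pairs.sum()
    if s == 0:
        return 0.0
    pairs /= s
    marg = pairs.sum(axis=1)
    rows, cols = np.nonzero(pairs)
    return float(-(pairs[rows, cols] * np.log(pairs[rows, cols] / marg[rows])).sum())

def _ceofop(patterns, t, d):
    n = len(patterns) - 1
    return (n * _ece(patterns, d)
            - (t - 1) * _ece(patterns[:t], d)
            - (len(patterns) - t - 1) * _ece(patterns[t:], d))

def detect_single_cp(patterns, d, min_len=200, n_boot=100, block_len=50,
                     alpha=0.05, rng=None):
    """Return an estimated change-point index, or 0 if no change is detected."""
    rng = np.random.default_rng() if rng is None else rng
    patterns = np.asarray(patterns, dtype=int)
    n = len(patterns)
    if n <= 2 * min_len:
        return 0                                            # sequence too short
    candidates = range(min_len, n - min_len)
    stats = [_ceofop(patterns, t, d) for t in candidates]
    t_hat = candidates[int(np.argmax(stats))]
    # Threshold from block-shuffled surrogate sequences (bootstrap-like).
    blocks = [patterns[i:i + block_len] for i in range(0, n, block_len)]
    maxima = []
    for _ in range(n_boot):
        shuffled = np.concatenate([blocks[k] for k in rng.permutation(len(blocks))])
        maxima.append(max(_ceofop(shuffled, t, d)
                          for t in range(min_len, len(shuffled) - min_len)))
    h = float(np.quantile(maxima, 1.0 - alpha))
    return t_hat if max(stats) > h else 0
```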
To detect multiple change-points, we use an algorithm that consists of two steps:
Step 1: preliminary estimation of the boundaries of the stationary segments, with a threshold computed for the doubled nominal probability of false alarm (that is, with a higher risk of detecting false change-points);
Step 2: verification of the boundaries and exclusion of false change-points: a change-point is searched for in the merging of every two adjacent intervals.
Details of these two steps are displayed in Algorithm 2. Step 1 is the usual binary segmentation procedure as suggested in [43]; since this procedure detects change-points sequentially, they may be estimated incorrectly. To improve localization and to eliminate false change-points, we introduce Step 2, following the idea suggested in [11].

Algorithm 2 (detection of multiple change-points):
function DetectAllCP(sequence of ordinal patterns, probability of false alarm)
                                                ▷ Step 1: binary segmentation
    repeat
        apply DetectSingleCP to the current segment, using the doubled probability of false alarm
        if a change-point is found then
            insert it into the list of change-points and renumber the change-points
        else
            proceed to the next segment
        end if
    until all segments have been processed
                                                ▷ Step 2: verification of the change-points
    repeat
        apply DetectSingleCP to the merging of the two segments adjacent to a change-point
        if a change-point is found then
            update its position and proceed to the next change-point
        else
            delete it from the change-points list and renumber the change-points
        end if
    until all change-points have been checked
    return the list of change-points
end function
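A Python sketch of the two-step structure of Algorithm 2, written against any single-change-point detector with the interface of detect_single_cp above (the detectors are passed in as functions, so the handling of the doubled false-alarm probability in Step 1 is left to the caller):

```python
import numpy as np

def detect_all_cp(patterns, detector_step1, detector_step2):
    """Step 1: binary segmentation; Step 2: re-check each point on merged neighbours."""
    patterns = np.asarray(patterns)
    n = len(patterns)
    change_points = []
    # Step 1: recursively split segments at detected change-points.
    segments = [(0, n)]
    while segments:
        lo, hi = segments.pop()
        t = detector_step1(patterns[lo:hi])
        if t > 0:
            change_points.append(lo + t)
            segments.append((lo, lo + t))
            segments.append((lo + t, hi))
    change_points.sort()
    # Step 2: verify every change-point within the union of its two adjacent segments.
    verified = []
    bounds = [0] + change_points + [n]
    for k in range(1, len(bounds) - 1):
        lo, hi = bounds[k - 1], bounds[k + 1]
        t = detector_step2(patterns[lo:hi])
        if t > 0:
            verified.append(lo + t)
    return verified

# Example wiring with the earlier sketch (doubled false-alarm probability in Step 1):
# step1 = lambda seg: detect_single_cp(seg, d=3, alpha=0.10)
# step2 = lambda seg: detect_single_cp(seg, d=3, alpha=0.05)
# cps = detect_all_cp(patterns, step1, step2)
```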

3. Numerical Simulations and Results

In this section, we empirically investigate the performance of the method for change-point detection via the CEofOP statistic. We apply it to the noisy logistic processes and to the autoregressive processes introduced in Section 2.1.2, and we compare the performance of change-point detection by the suggested method with that of the following existing methods.

The ordinal-patterns-based method for detecting change-points via the CMMD statistic [23,24]. A time series is split into windows of equal length W, and the empirical probabilities of ordinal patterns are estimated in every window. If there is an ordinal change-point in the time series, then the empirical probabilities of ordinal patterns should be approximately constant before and after the change-point, but they change at the window containing the change-point. To detect this change, the CMMD statistic was introduced. (Note that the definition of the CMMD statistic in [23] contains a mistake, which is corrected in [24]; the results of the numerical experiments reported in [23] also do not comply with the actual definition of the CMMD statistic (see Sections 4.2.1.1 and 4.5.1.1 in [22] for details).) In the original papers [23,24], the authors do not estimate change-points but only the corresponding window numbers; for the algorithm of change-point estimation by means of the CMMD statistic, we refer to Section 4.5.1 in [22].

Two versions of the classical Brodsky–Darkhovsky method [11]. The Brodsky–Darkhovsky method can be used for detecting changes in various characteristics of a time series, but the characteristic of interest should be selected in advance. In this paper, we consider detecting changes in the mean, which is just the basic characteristic, and in the correlation function, which reflects relations between the future and the past of a time series and seems to be a natural choice for detecting ordinal change-points. Changes in the mean are detected by a generalized version of the Kolmogorov–Smirnov statistic [11], depending on a parameter that regulates the properties of the statistic (see [11] for details); we denote the resulting statistic by BDexp. Changes in the correlation function are detected by an analogous statistic, denoted by BDcorr.

We use orders d = 2, 3, 4 of ordinal patterns for computing the CEofOP statistic (the order d = 1 provides worse results because of reduced sensitivity, while higher orders are applicable only to rather long time series due to condition (12)). For the CMMD statistic, we fix the order of ordinal patterns and the window size W; there is no special reason for this choice except that the window size is sufficient for estimating the probabilities of ordinal patterns of the chosen order inside the windows (Section 9.3 in [15]), and the results of the experiments remain almost the same for similar choices. The same nominal probability of false alarm has been taken for all methods (in the case of the CMMD statistic, we have used an equivalent value; see Section 4.3.2 in [22] for details).

In Section 3.1, we study how well the statistics for change-point detection estimate the position of a single change-point. Since we expect that the performance of the statistics may strongly depend on the length of the realization, we check this in Section 3.2. Finally, we investigate the performance of the various statistics for detecting multiple change-points in Section 3.3.

3.1. Estimation of the Position of a Single Change-Point

Consider N = 10,000 realizations of length L = 20,480 for each of the processes listed in Table 1. A single change occurs at a random time t* uniformly distributed over the admissible positions. For all processes, the sequences of ordinal patterns computed from the realizations are used as the input of the change-point detection procedures.
Table 1

Processes used for the investigation of change-point detection.

Short Name               Complete Designation
NL, 3.95→3.98, σ = 0.2   NL((3.95, 3.98), (0.2, 0.2), t*)
NL, 3.95→3.80, σ = 0.3   NL((3.95, 3.80), (0.3, 0.3), t*)
NL, 3.95→4.00, σ = 0.2   NL((3.95, 4.00), (0.2, 0.2), t*)
AR, 0.1→0.3              AR((0.1, 0.3), t*)
AR, 0.1→0.4              AR((0.1, 0.4), t*)
AR, 0.1→0.5              AR((0.1, 0.5), t*)
To measure the overall accuracy of change-point detection via some statistic S applied to the process X, we use three quantities. Let us first define the error of the change-point estimation provided by the statistic S for the j-th realization of a process X as e_j(S, X) = t̂*_j − t*_j, where t*_j is the actual position of the change-point and t̂*_j is its estimate obtained by using S. Then, the fraction of satisfactorily estimated change-points (averaged over the N realizations) is defined by sE(S, X) = (1/N) Σ_j 1{|e_j(S, X)| ≤ e_max}, where e_max is the maximal satisfactory error. The bias and the root mean squared error (RMSE) are respectively given by B(S, X) = (1/N) Σ_j e_j(S, X) and RMSE(S, X) = sqrt((1/N) Σ_j e_j(S, X)²). A large sE together with a bias B and an RMSE close to zero stand for a high accuracy of the estimation of a change-point. A sketch of these measures is given below; the results of the experiments are then presented in Table 2 and Table 3 for the NL and AR processes, respectively.
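Before the tables, a minimal Python sketch of these accuracy measures (the maximal satisfactory error below is a placeholder value):

```python
import numpy as np

def accuracy_measures(true_cp, estimated_cp, max_error=100):
    """Fraction of satisfactory estimates sE, bias B, and RMSE over N realizations."""
    e = np.asarray(estimated_cp, dtype=float) - np.asarray(true_cp, dtype=float)
    s_e = float(np.mean(np.abs(e) <= max_error))
    bias = float(e.mean())
    rmse = float(np.sqrt(np.mean(e ** 2)))
    return s_e, bias, rmse
```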
Table 2

Performance of different statistics for estimating the change-point in noisy logistic (NL) processes.

Statistic       NL, 3.95→3.98, σ=0.2      NL, 3.95→3.80, σ=0.3      NL, 3.95→4.00, σ=0.2
                sE     B      RMSE        sE     B      RMSE        sE     B      RMSE
CMMD            0.34   698    1653        0.50   −51    306         0.68   −13    206
CEofOP, d=2     0.46   147    1108        0.62   −3     267         0.81   33     147
CEofOP, d=3     0.61   53     397         0.65   1      256         0.88   20     99
CEofOP, d=4     0.47   −2     982         0.46   −41    1162        0.83   2      130
BDexp           0.62   78     351         0.78   −6     145         0.89   43     96
BDcorr          0.44   85     656         0.71   13     202         0.77   43     189
Table 3

Performance of different statistics for estimating the change-point in autoregressive (AR) processes.

Statistic       AR, 0.1→0.3               AR, 0.1→0.4               AR, 0.1→0.5
                sE     B      RMSE        sE     B      RMSE        sE     B      RMSE
CMMD            0.32   616    1626        0.54   −14    368         0.68   −48    184
CEofOP, d=2     0.42   74     1096        0.67   6      244         0.82   3      129
CEofOP, d=3     0.39   126    1838        0.68   0      234         0.86   0      110
CEofOP, d=4     0.08   1028   6623        0.46   −176   1678        0.74   −27    214
BDexp           0.00   >10³   >10⁴        0.00   >10⁴   >10⁴        0.00   >10⁴   >10⁴
BDcorr          0.79   31     151         0.92   21     73          0.97   21     50
Let us summarize: for the considered processes, the CEofOP statistic estimates the change-point more accurately than the CMMD statistic. For the NL processes, the CEofOP statistic has almost the same performance as the Brodsky–Darkhovsky method; for the AR processes, the performance of the classical method is better, though CEofOP has a lower bias. In contrast to the ordinal-patterns-based methods, the Brodsky–Darkhovsky method is unreliable when there is a lack of a priori information about the time series. For instance, changes in the NL processes only slightly influence the correlation function, so BDcorr does not provide a good indication of changes (cf. the performance of BDexp and BDcorr in Table 2); note also that level shifts before and after a time point do not change the distribution of ordinal patterns. Meanwhile, changes in the AR processes do not influence the expected value (see Example 3), which does not allow for detecting them using BDexp (see Table 3). Therefore, we do not consider the BDexp statistic in further experiments. Note that the performance of the CEofOP statistic is only slightly better for d = 3 than for d = 2, and for d = 4 it even decreases, although one could expect better change-point detection for higher d. As we show in the following section, this is due to the fact that the performance of the CEofOP statistic depends on the length L of the time series; in particular, L = 20,480 is not sufficient for applying the statistic with d = 4.

3.2. Estimating Position of a Single Change-Point for Different Lengths of Time Series

Here, we study how the accuracy of change-point estimation for the three considered statistics depends on the length L of a time series. We take 50,000 realizations of one NL process and one AR process from Table 1 for realization lengths L = 24W, 28W, …, 120W, where W is the window length used for the CMMD statistic. Again, we consider a single change at a random time t*. Results of the experiment are presented in Figure 6.
Figure 6

Measures of change-point detection performance for the NL (a,b) and AR (c,d) processes for different lengths of realizations, where L is the product of the window number given on the x-axis and the window length W.

In summary, the performance of the CEofOP statistic is generally better than that of the CMMD statistic, but it strongly depends on the length of the time series. This emphasizes the importance of condition (12); from the results of our experiments, we recommend choosing d in accordance with this condition. In comparison with the classical Brodsky–Darkhovsky method, CEofOP has better performance for the NL processes (see Figure 6a,b) and a lower bias for the AR processes (see Figure 6d).

3.3. Detecting Multiple Change-Points

Here, we investigate how well the considered statistics detect multiple change-points. The methods for change-point detection via the CEofOP and the CMMD statistics are implemented according to Section 2.3 and to Section 4.5.1 in [22], respectively. We consider CEofOP only for d = 3, since this order provided the best change-point detection in the previous experiments. The Brodsky–Darkhovsky method is implemented according to [11] with only one exception: to compute a threshold for it, we use the shuffling procedure (Algorithm 1), which in our case provided better results than the technique described in [11].

We consider two processes, an NL process and an AR process, each with three change-points; the change-points are independent and uniformly distributed within prescribed intervals, so that the stationary segments have unequal lengths. For both processes, we generate N realizations. We consider unequal lengths of the stationary segments in order to study the methods for change-point detection under more realistic conditions.

When change-point detection via a statistic S is applied to a realization, we obtain an estimate of the number of stationary segments and estimates of the change-point positions. Since the number of estimated change-points may differ from the actual number of changes, we suppose that the estimate of the k-th actual change-point is provided by the nearest estimated change-point, and the error of estimation of the k-th change-point provided by S is the signed distance between the two (a sketch of this matching follows after this paragraph). To assess the overall accuracy of change-point detection, we compute two quantities: the fraction of satisfactory estimates of the k-th change-point, defined analogously to sE in Section 3.1 with a maximal satisfactory error, and the average number of false change-points. Results of the experiment are presented in Table 4 and Table 5.
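A short Python sketch of this matching step (names are chosen here): each actual change-point is paired with the nearest estimated one, and the per-change-point fraction of satisfactory estimates is computed from the resulting errors over all realizations.

```python
import numpy as np

def nearest_estimate_errors(true_cps, est_cps):
    """Signed error of the nearest estimated change-point for each actual one."""
    est = np.asarray(est_cps, dtype=float)
    if est.size == 0:
        return [np.nan] * len(true_cps)
    return [float(est[np.argmin(np.abs(est - t))] - t) for t in true_cps]

def satisfactory_fraction(errors_per_realization, k, max_error=100):
    """Fraction of realizations with a satisfactory estimate of the k-th change-point."""
    errs = np.array([e[k] for e in errors_per_realization], dtype=float)
    return float(np.mean(np.abs(errs) <= max_error))
```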
Table 4

Performance of the change-point detection methods for the NL process with three change-points.

Statistic    Number of False Change-Points    Fraction sE_k of Satisfactory Estimates
                                              1st Change   2nd Change   3rd Change   Average
CMMD         1.17                             0.465        0.642        0.747        0.618
CEofOP       0.62                             0.753        0.882        0.930        0.855
BDcorr       1.34                             0.296        0.737        0.751        0.595
Table 5

Performance of the change-point detection methods for the AR process with three change-points.

Statistic    Number of False Change-Points    Fraction sE_k of Satisfactory Estimates
                                              1st Change   2nd Change   3rd Change   Average
CMMD         1.17                             0.340        0.640        0.334        0.438
CEofOP       1.12                             0.368        0.834        0.517        0.573
BDcorr       0.53                             0.783        0.970        0.931        0.895
In summary, since the distributions of ordinal patterns for the NL and AR processes have different properties, the results for them differ significantly. The CEofOP statistic provides good results for the NL processes. However, for the AR processes, its performance is much worse: only the most prominent change is detected rather well. The weak results for the two other change-points are caused by the fact that the CEofOP statistic is rather sensitive to the lengths of the stationary segments (we have already seen this in Section 3.2), and in this case they are not very long.

4. Conclusions and Open Points

In this paper, we have introduced a method for change-point detection via the CEofOP statistic and have tested it for time series coming from two classes of models with quite different behaviour, namely piecewise stationary noisy logistic and autoregressive processes. The empirical investigations suggest that the proposed method provides better detection of ordinal change-points than the ordinal-patterns-based method introduced in [23,24]. The performance of our method for the two model classes considered is partly comparable to that of the classical Brodsky–Darkhovsky method, but, in contrast to it, ordinal-patterns-based methods require less a priori knowledge about the time series. This can be especially useful when considering nonlinear models, where the autocorrelation function does not describe the distributions completely; the point is that, with the exception of the mean, much of the distribution is captured by its ordinal structure. Thus (together with methods finding changes in mean), the CEofOP statistic can be used at least for a first exploration step. It is remarkable that our method behaves well with respect to the bias of the estimation, possibly qualifying it for improving the localization of change-points found by other methods. Although numerical experiments and tests on real-world data cannot replace rigorous theoretical studies, the results of the current study show the potential of change-point detection via the CEofOP statistic. However, there are some open points, listed below.

A method for computing a threshold h for the CEofOP statistic without shuffling the original time series is of interest, since the shuffling procedure is rather time consuming. One possible solution is to utilize Theorem A1 (Appendix A.1) and to precompute thresholds using the relevant values; however, this approach requires further investigation.

The binary segmentation procedure [43] is not the only possible method for detecting multiple change-points. In [8,55], an alternative approach is suggested: the number of stationary segments is estimated by optimizing a contrast function, and then the positions of the change-points are adjusted. Likewise, one could consider a method for multiple change-point detection based on maximizing a generalization of the CEofOP statistic to several change-points, in which the number of stationary segments and the positions of the change-points are estimated simultaneously. Further investigation in this direction could be of interest.

As we have seen in Section 3.2, the CEofOP statistic requires rather large sample sizes to provide reliable change-point detection. This is due to the necessity of estimating the empirical conditional entropy (see Section 2.3). In order to reduce the required sample size, one may consider more effective estimates of the conditional entropy, for instance the Grassberger estimate (see [56] and also Section 3.4.1 in [22]). However, the elaboration of this idea is beyond the scope of this paper.

We did not use the full power of ordinal time series analysis, which often considers ordinal patterns taken from sequences of equidistant time points with some delay. This generalization of the case of successive points allows for addressing different time scales and so for extracting more information on the distribution of a time series [57], also being useful for change-point detection.

In this paper, only one-dimensional time series are considered, though there is no principal limitation for applying ordinal-patterns-based methods to multivariate data (see [28]). A discussion of using ordinal-patterns-based methods for detecting change-points in multivariate data (for instance, in multichannel EEG) is therefore of interest.

We have considered here only the "offline" detection of changes, which is used when the acquisition of a time series is completed. Meanwhile, in many applications, it is necessary to detect change-points "online", based on a small number of observations after the change [1]. The development of online versions of ordinal-patterns-based methods for change-point detection may be an interesting direction of future work.
Table A1

Values for an autoregressive process with coefficients ϕ1 (columns) and ϕ2 (rows); the factor 100 is applied here only for the sake of readability.

ϕ2 \ ϕ1   0.00   0.10   0.20   0.30   0.40   0.50   0.60   0.70   0.80   0.90   0.99
0.00      0      0.02   0.07   0.15   0.26   0.40   0.56   0.74   0.95   1.18   1.44
0.10      0.02   0      0.02   0.06   0.14   0.25   0.37   0.53   0.71   0.91   1.13
0.20      0.07   0.02   0      0.02   0.06   0.13   0.23   0.36   0.51   0.68   0.88
0.30      0.15   0.06   0.02   0      0.01   0.06   0.13   0.22   0.34   0.49   0.66
0.40      0.26   0.14   0.06   0.01   0      0.01   0.06   0.12   0.22   0.33   0.48
0.50      0.40   0.25   0.13   0.06   0.01   0      0.01   0.05   0.12   0.21   0.33
0.60      0.56   0.37   0.23   0.13   0.06   0.01   0      0.01   0.05   0.12   0.21
0.70      0.74   0.53   0.36   0.22   0.12   0.05   0.01   0      0.01   0.05   0.12
0.80      0.95   0.71   0.51   0.34   0.22   0.12   0.05   0.01   0      0.01   0.05
0.90      1.18   0.91   0.68   0.49   0.33   0.21   0.12   0.05   0.01   0      0.01
0.99      1.44   1.13   0.88   0.66   0.48   0.33   0.21   0.12   0.05   0.01   0
Table A2

Values for an autoregressive process with coefficients ϕ1 (columns) and ϕ2 (rows).

ϕ2 \ ϕ1   0.00   0.10   0.20   0.30   0.40   0.50   0.60   0.70   0.80   0.90   0.99
0.00      0      0.04   0.15   0.33   0.56   0.85   1.18   1.55   1.95   2.40   2.88
0.10      0.04   0      0.04   0.14   0.31   0.53   0.80   1.12   1.48   1.89   2.34
0.20      0.15   0.04   0      0.03   0.13   0.29   0.51   0.77   1.08   1.44   1.85
0.30      0.33   0.14   0.03   0      0.03   0.13   0.28   0.49   0.75   1.06   1.43
0.40      0.56   0.31   0.13   0.03   0      0.03   0.12   0.27   0.48   0.74   1.06
0.50      0.85   0.53   0.29   0.13   0.03   0      0.03   0.12   0.27   0.48   0.74
0.60      1.18   0.80   0.51   0.28   0.12   0.03   0      0.03   0.12   0.27   0.48
0.70      1.55   1.12   0.77   0.49   0.27   0.12   0.03   0      0.03   0.12   0.28
0.80      1.95   1.48   1.08   0.75   0.48   0.27   0.12   0.03   0      0.03   0.13
0.90      2.40   1.89   1.44   1.06   0.74   0.48   0.27   0.12   0.03   0      0.03
0.99      2.88   2.34   1.85   1.43   1.06   0.74   0.48   0.28   0.13   0.03   0
References

1. Brodsky BE, Darkhovsky BS, Kaplan AY, Shishkin SL. A nonparametric method for the segmentation of the EEG. Comput Methods Programs Biomed. 1999.

2. Improved surrogate data for nonlinearity tests. Phys Rev Lett. 1996.

3. Bandt C, Pompe B. Permutation entropy: a natural complexity measure for time series. Phys Rev Lett. 2002.

4. Effect of additive and multiplicative noise on the first bifurcations of the logistic model. Phys Rev A Gen Phys. 1986.

5. Pompe B, Runge J. Momentary information transfer as a coupling measure of time series. Phys Rev E Stat Nonlin Soft Matter Phys. 2011.

6. Cao Y, Tung WW, Gao JB, Protopopescu VA, Hively LM. Detecting dynamical changes in time series using the permutation entropy. Phys Rev E Stat Nonlin Soft Matter Phys. 2004.
