Jonathan M. Lilly, NorthWest Research Associates, Redmond, WA 98009, USA.
Abstract
A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized 'events'. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event's 'region of influence' within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry.
Keywords:
eddy detection; generalized Morse wavelets; non-stationary time series; satellite altimetry; signal detection in noise; wavelet transform
A common problem in time series analysis is the need to detect and describe signals that are non-sinusoidal in nature. In such cases, continuous wavelet analysis provides an attractive alternative to Fourier analysis. For signals that are close to being sinusoidal, a method known as ‘wavelet ridge analysis’ [1-5] provides a powerful tool for detection and quantitative analysis. At the other extreme, for signals that are nearly singular in nature, the ‘modulus maxima’ method [2,6] has proved useful. These popular methods represent the signal as being supported entirely on nearly horizontal, or nearly vertical, curves on the time/scale plane, respectively.

A third class of signals is neither nearly sinusoidal nor nearly singular, but is composed of self-similar events that are localized in time and that may be considered as barely oscillatory or even non-oscillatory. That is, the signal is considered to be composed of isolated events that themselves resemble wavelets. In contrast to the wavelet ridges and the modulus maxima curves, signals of this type are supported only at isolated points distributed, like stars or dust, sparsely throughout the time/scale plane. Because individual wavelets are good approximations for phenomena ranging from heartbeats recorded by an electrocardiogram to propagating wave packets to climate oscillations, one may expect signals of this type to be fairly widespread.

A particular example comes from oceanography and involves satellite observations of the so-called ‘coherent eddies’, swirling (10–100) km vortex structures that are ubiquitous features of the ocean circulation. Such features, which are frequently modelled as having sea surface height anomalies that are Gaussian in shape, are observed along the narrow ground tracks of satellite altimeter instruments.
This leads to time series in which nearly Gaussian bumps or depressions of varying scales are embedded, together with noise as well as other sources of sea surface height variability. While such ‘along-track’ observations are occasionally used to study eddies [7-10], a far more common approach, as in the watershed study of [11], is to rely on mapped data products. Because the altimeter records typically have about 5 km resolution in the along-track direction, but about 100 km resolution in the cross-track direction, the creation of mapped fields involves a horizontal smoothing that reduces the along-track resolution by an order of magnitude.

Inspired by this problem, yet imagining that its solution may be of general interest, the following model for a time series is proposed. The real-valued time series x(t) is represented as containing time-offset, phase-shifted and rescaled copies of some time-localized complex-valued function ψ(t), together with measurement noise that is assumed to be Gaussian and stationary,

x(t) = ℜ{∑_{n=1}^{N} c_n ψ((t−t_n)/ρ_n)} + x_ε(t),   (1.1)
where ℜ{⋅} denotes the real part and N is the total number of events, taken to be finite herein. The complex-valued parameter c_n=|c_n|e^{iϕ_n} sets the amplitude |c_n| and phase ϕ_n of the nth event, t_n is its temporal location, and ρ_n sets the event scale. Here x_ε(t) is a noise process understood to represent all variability not captured by the summation. The goal of the analysis is to estimate the four signal parameters |c_n|, ϕ_n, t_n and ρ_n for each n, to the extent possible given the noise and interference from other nearby events.

The representation (1.1) will be referred to as the element model, meaning that the signal is believed to be composed of manifestations of the particular function ψ(t), the element function, which is considered to be known. Note that this model contains a Fourier series plus noise as a special case. Choosing ψ(t)=e^{it}, the model becomes x(t)=ℜ{∑_{n=1}^{N} |c_n|e^{i(t/ρ_n+φ_n)}}+x_ε(t), where φ_n≡ϕ_n−t_n/ρ_n is a modified phase that renders the time shift parameter t_n redundant. Because a Fourier series is a very common and powerful representation of signal variability, and because the element model (1.1) generalizes this to permit the signal to be composed of non-sinusoidal elements, each characterized by four parameters rather than three, this model is likely to be useful for cases in which the Fourier representation is not appropriate.

The element model is directly inspired by continuous wavelet analysis. If ψ(t) is taken to be a wavelet or integral of a wavelet, (1.1) can be interpreted as limiting the signal reconstruction to isolated points on the time/scale plane.
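As a concrete illustration, the element model can be synthesized numerically. The following sketch (in Python/NumPy rather than the paper's jLab Matlab toolbox; the element parameters and noise level are arbitrary choices for illustration) builds each event spectrally, using the fact that the Fourier transform of ψ((t−t_n)/ρ_n) is ρ_n Ψ(ρ_n ω)e^{−iωt_n}:

```python
import numpy as np

def morse_freq(omega, beta, gamma):
    """Frequency-domain generalized Morse function a*omega^beta*exp(-omega^gamma),
    supported on omega > 0, with the peak value normalized to 2."""
    a = 2.0 if beta == 0 else 2.0 * (np.e * gamma / beta) ** (beta / gamma)
    om = np.abs(omega)
    return np.where(omega > 0, a * om ** beta * np.exp(-om ** gamma), 0.0)

def morse_event(t_n, rho, mu, gamma, n, dt=1.0):
    """Samples of psi_{mu,gamma}((t - t_n)/rho): the Fourier transform of the
    shifted, rescaled element is rho * Psi(rho*omega) * exp(-i*omega*t_n)."""
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    return np.fft.ifft(rho * morse_freq(rho * omega, mu, gamma)
                       * np.exp(-1j * omega * t_n)) / dt

# x(t) = Re{ sum_n c_n psi((t - t_n)/rho_n) } + noise, with hypothetical parameters
n, dt = 2048, 1.0
rng = np.random.default_rng(42)
events = [(1.5 * np.exp(0.3j), 500.0, 20.0),   # (c_n, t_n, rho_n)
          (0.8 * np.exp(-1.0j), 1400.0, 40.0)]
x = 0.02 * rng.standard_normal(n)              # stationary Gaussian noise term
for c_n, t_n, rho_n in events:
    x += np.real(c_n * morse_event(t_n, rho_n, 1, 2, n, dt))
```

Because the frequency-domain Morse form is non-negative, the modulus of each synthesized event peaks exactly at its temporal location t_n, a property exploited later when locating transform maxima.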
The general approach to analysing a time series that is believed to match the element model has three steps: (i) detecting wavelet transform maxima characterizing the individual events, (ii) determining the level of significance by examining the time/scale distribution of transform maxima arising due entirely to noise, and (iii) ensuring the appropriateness of this model through a criterion for verifying that each event is sufficiently isolated from the others. Thus, unlike the method of wavelet thresholding [12], one is not simply looking for statistically significant coefficients, but rather for significant features which are also a good match to the specified element function. An illustration that this method is able to extract a small number of isolated events from a real-world satellite altimetry dataset, leaving behind apparently unstructured noise, is presented in figure 1, and will be discussed in detail later.
Figure 1.
The results of applying the element analysis method to a small segment of along-track altimeter data. In (a), along-track sea surface height anomaly measurements are shown from the Labrador Sea, an area of known small-scale eddy activity. Thirty-seven repeated observations along a single ground-track during the year 2007 are shown, offset from one another by 15 cm with the earliest observations at the bottom. Reconstructed signals due only to a relatively small number (67) of statistically significant and isolated events based on the element model (3.1), determined as described in the text, are shown in (b). Here the black dots denote the centres of the detected events, while the grey lines repeat the information from (a). Finally, (c) plots the differences between the original signals and the reconstructions, which appear to be largely devoid of meaningful features. (Online version in colour.)
In order to be a suitable model for a variety of signals, it is essential that the element function ψ(t) be capable of taking on a broad range of forms. Here, the generalized Morse wavelets, or simply the Morse wavelets for brevity, are an attractive choice. These wavelets were introduced by Daubechies & Paul [13], then examined further in [14-18]. Their fundamental position within the wavelet pantheon is now clear. Recently, it has been shown by Lilly & Olhede [18] that the Morse wavelets effectively encompass all other types of commonly used analytic wavelets within a single unified family. Analytic delta-functions and complex exponentials are also included as limiting cases. Therefore, using the generalized Morse wavelets as signal elements provides more flexibility than using all these other types of functions put together. Furthermore, their simple frequency-domain form means that analytic expressions for key properties may readily be derived [17], and thus the wavelets’ dependence on controlling parameters is well understood.
While a Gaussian is the element function of greatest immediate interest to the eddy detection problem, it is not much more difficult to create a general method that can use any Morse wavelet as an element function, as is done here.

The proposed method joins a diverse set of methods already in use in the literature for structure detection and analysis in time series. A straightforward wavelet-based approach is to simply specify a sequence of filtration and/or reconstruction steps that tend to have the effect of isolating structures of interest for a particular problem [19,20]. The present method is distinguished from such approaches in that it begins by positing a model (1.1) for what the signal is actually like. This allows for the construction of a method for inferring event properties with a small number of adjustable parameters, making the element analysis method highly automatable and scalable. Another, non-wavelet-based approach applies statistical tests to sliding windows of a given length to determine whether they are likely to contain signal structures [21,22]; detected events can then be classified using objective methods. That approach assumes that the typical duration of an event is known, but its form is unknown; in element analysis, we treat the opposite case in which the form is considered to be known but the duration is unknown.

A sophisticated and powerful approach related to the one proposed here is basis pursuit [23]. In that method, one attempts to find the most compact representation of the signal by considering a variety of complete or overcomplete representations. Basis pursuit can be implemented as a denoising method by incorporating a penalty function into the optimization, see §5 of [23]. Basis pursuit is intended as a general-purpose tool, with the goal of obtaining a compact representation of any structures present in the signal, whatever they may be.
In element analysis, it is assumed that there is a physical motivation for believing that the signal consists of isolated events of a known form. The goal is not to reconstruct all signal structure, but rather to infer the properties of those events. For this specific problem, element analysis has the powerful features of being able to assess the significance of the detected events against the null hypothesis of white or power-law noise, and to reject unsuitable events. Thus, the assumptions and objectives of element analysis are different from those of basis pursuit and other existing structure-detection methods. The method developed here therefore complements those already in use.

The structure of the paper is as follows. Essential background on wavelet analysis and the Morse wavelets is presented in §2. The basic idea of element analysis is introduced in §3. The means for assessing statistical significance and the degree of isolation are created in §4. The application to the data shown in figure 1 is discussed in §5, and the paper concludes with a discussion. All software related to this paper is distributed as a part of a freely available toolbox of Matlab functions, called jLab, available at the author’s website, http://www.jmlilly.net. Descriptions of relevant routines from this toolbox are given in appendix A. Supplementary text and figures may be found online at https://doi.org/10.6084/m9.figshare.c.3741266.
Background
This section presents relevant background on wavelet analysis using the generalized Morse wavelets. This involves briefly reviewing key material from the literature, especially [15-18], together with additional details when necessary. The definition of the continuous wavelet transform is reviewed in §2a, while the essential properties of the generalized Morse wavelets are discussed in §2b. An interpretation of the meaning behind maxima of the wavelet transform with the inverse scale normalization employed here is given in §2c; this is important, as it identifies the optimization principle on which the element analysis is based.
The continuous wavelet transform
In this paper, we will consider a time series to be built up from members of a two-parameter family of functions termed generalized Morse wavelets [15,17,18]. The Morse wavelets, represented as ψ_{β,γ}(t), are defined in the frequency domain for β≥0 and γ>0 as

ψ_{β,γ}(t) ≡ (1/2π)∫_0^∞ Ψ_{β,γ}(ω) e^{iωt} dω,   Ψ_{β,γ}(ω) ≡ a_{β,γ} ω^β e^{−ω^γ},   (2.1)
where ω is angular or radian frequency, and a_{β,γ} is a real-valued normalizing constant chosen as

a_{β,γ} ≡ 2(eγ/β)^{β/γ},   (2.2)
in which the ‘e’ appearing in the numerator is Euler’s number, e≈2.71828. The parameter β, called the order, controls the low-frequency behaviour, while γ, called the family, controls the high-frequency decay. Differentiating Ψ_{β,γ}(ω) with respect to ω, one finds that the Morse wavelets obtain their maximum value at the frequency

ω_{β,γ} ≡ (β/γ)^{1/γ},   (2.3)
which is known as the peak frequency. The choice of a_{β,γ} in (2.2) sets the maximum value of the frequency-domain wavelet to Ψ_{β,γ}(ω_{β,γ})=2, for reasons to be seen subsequently.

Functions having no support on negative frequencies, such as the Morse wavelets, are said to be analytic. Analyticity implies that the time-domain wavelets ψ_{β,γ}(t) must be complex-valued, because the contribution from each complex-valued exponential e^{iωt} in (2.1) cannot be cancelled by those at other frequencies. This means the analytic wavelets are naturally grouped into even or cosine-like and odd or sine-like pairs, allowing them to capture phase variability.

The wavelet transform of a square-integrable signal x(t) with respect to the wavelet ψ(t) is defined in the time domain, or the frequency domain, respectively, as

w(τ,s) ≡ (1/s)∫_{−∞}^{∞} x(t) ψ*((t−τ)/s) dt = (1/2π)∫_{−∞}^{∞} X(ω) Ψ*(sω) e^{iωτ} dω,   (2.4)
where X(ω) is the Fourier transform of x(t), with X(ω)≡∫_{−∞}^{∞}x(t)e^{−iωt}dt, and where the asterisk denotes the complex conjugate; note that the conjugation in the last expression may be omitted since Ψ(ω) is real-valued. The time-domain expression is the inner product[1] between the signal x(t) and shifted, rescaled versions of the wavelet. The frequency-domain form is found by inserting the Fourier representations of x(t) and ψ(t), then using ∫_{−∞}^{∞}e^{i(ω−ν)t}dt=2πδ(ω−ν), where δ(ω) is the Dirac delta function, or from Plancherel’s formula. The scale variable s specifies a stretching or compression of the wavelet in time. The rescaled frequency-domain wavelet Ψ(sω) obtains a maximum at ω_s≡ω_{β,γ}/s, referred to here as the scale frequency.

Note that for convenience we write the time series of interest as x(t), as if it were observed in continuous time. In reality, this is not the case, and the time series x(t) is only available as the discrete sequence x_n≡x(Δn), where Δ is the sampling interval. We will discuss discrete effects only when necessary, e.g. when discussing numerical implementation. In practice, the discrete effects may be neglected provided we choose the scale s sufficiently large compared with Δ.

In the above, we have chosen to normalize the time-domain wavelets with 1/s as opposed to the more common 1/√s. The 1/√s normalization guarantees that the wavelet maintains constant energy, since ∫_{−∞}^{∞}|(1/√s)ψ((t−τ)/s)|² dt=∫_{−∞}^{∞}|ψ(t)|² dt. Thus, that normalization is appropriate if one wishes for the modulus-squared wavelet transform to reflect the energy of the analysed signal x(t). However, we find it is generally more useful to describe time-localized signals by their amplitude, and for this the 1/s normalization is more appropriate. To see this, we note that compressing or stretching the signal x(t) in time by some factor ρ as in x(t/ρ), but without modifying the signal amplitude, rescales the wavelet transform w(τ,s) into w̃(τ,s), given by

w̃(τ,s) = w(τ/ρ, s/ρ),   (2.5)
as one finds from a change of variables. Thus, rescaling time in the input signal as x(t/ρ) rescales both the time and the scale of the wavelet transform, but without changing its magnitude. The transform values of the amplitude-rescaled signal cx(t/ρ) then reflect the value of c, independent of the choice of temporal rescaling ρ, a desirable result that is not true with the 1/√s normalization. A special case of this result is that the peak magnitude of the wavelet transform of a sinusoid ℜ{ce^{iω_0t}} always takes on the same value regardless of the frequency ω_0. Because of the choice of a_{β,γ} in (2.2), the maximum magnitude of the wavelet transform of this sinusoid obtains a value of |c|, which occurs at scale frequency ω_s=ω_0 or scale s=ω_{β,γ}/ω_0.

The zeroth-order functions ψ_{0,γ}(t), with β=0, require special comment. These functions are well defined by (2.1), but are technically not wavelets because wavelets are zero mean by definition. The time-mean value of ψ_{β,γ}(t) is Ψ_{β,γ}(0), which from (2.1) is seen to vanish for β>0 but not for β=0. We will therefore refer to ψ_{β,γ}(t) defined by (2.1) for any β≥0 and positive γ as Morse functions rather than wavelets, whereas the Morse wavelets strictly occur for β>0. The amplitude coefficient a_{β,γ} given by (2.2) is of the form 0⁰ at β=0, which by mathematical convention is taken to equal unity. This gives a_{0,γ}=2, consistent with the limiting value of a_{β,γ} as β tends to zero, as is readily shown. The zeroth-order Morse functions are therefore seen to be one-sided bandpass filters of the form Ψ_{0,γ}(ω)=2e^{−ω^γ}. For these zeroth-order functions ψ_{0,γ}(t), we also need a different way of assigning a reference frequency, since the peak frequency ω_{β,γ}≡(β/γ)^{1/γ} vanishes in this case. Instead, we define ω_{0,γ} as the half-power point, i.e. the frequency at which Ψ_{0,γ}(ω)=2e^{−ω^γ} is equal to half of its maximum value of Ψ_{0,γ}(0)=2. Solving Ψ_{0,γ}(ω_{0,γ})=1 then leads to ω_{0,γ}=(ln 2)^{1/γ}.
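These normalization properties are easy to check numerically. The sketch below (Python/NumPy; the grid sizes and the choice (β,γ)=(3,3) are arbitrary) computes the 1/s-normalized transform through the frequency-domain form (2.4) and confirms that a unit-amplitude sinusoid yields a maximum transform modulus of one, attained at the scale s=ω_{β,γ}/ω_0:

```python
import numpy as np

def morse_freq(omega, beta, gamma):
    """Frequency-domain Morse function with peak value 2."""
    a = 2.0 if beta == 0 else 2.0 * (np.e * gamma / beta) ** (beta / gamma)
    om = np.abs(omega)
    return np.where(omega > 0, a * om ** beta * np.exp(-om ** gamma), 0.0)

def morse_cwt(x, scales, beta, gamma, dt=1.0):
    """1/s-normalized wavelet transform via the frequency-domain form:
    w(tau, s) = (1/2pi) int X(omega) Psi(s*omega) e^{i omega tau} d omega."""
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    xf = np.fft.fft(x)
    return np.array([np.fft.ifft(xf * morse_freq(s * omega, beta, gamma))
                     for s in scales])

beta, gamma = 3.0, 3.0
w_peak = (beta / gamma) ** (1.0 / gamma)   # peak frequency (2.3)
n = 1024
t = np.arange(n)
omega0 = 2.0 * np.pi * 32.0 / n            # sinusoid frequency (an exact DFT bin)
x = np.cos(omega0 * t)                     # unit amplitude, so |c| = 1
s_star = w_peak / omega0                   # scale at which the maximum should occur
scales = s_star * np.linspace(0.5, 1.5, 21)
w = morse_cwt(x, scales, beta, gamma)
best = np.unravel_index(np.argmax(np.abs(w)), w.shape)
```

Only the positive-frequency half of the cosine survives the analytic filter, and at the best scale the factor Ψ(ω_{β,γ})=2 exactly cancels the factor of one half, so the peak modulus equals the sinusoid amplitude.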
Properties of Morse wavelets
The Morse wavelets can present a wide range of time-domain forms, as shown in figure 2 for a variety of values of β and γ. The functions become more oscillatory as one moves across columns, as β increases, and also moving down rows as γ increases. As these parameters decrease, the functions become increasingly localized in the time domain, appearing more as isolated events or impulses rather than as oscillations. Increasing β with fixed γ appears to pack more oscillations into the same envelope, whereas increasing γ with fixed β additionally modifies the function shape, with the function modulus curves becoming less strongly concentrated about their centres. In fact, incrementing β by one is essentially equivalent to performing a time derivative, because

ψ_{β+1,γ}(t) = (a_{β+1,γ}/a_{β,γ}) (1/i) (d/dt) ψ_{β,γ}(t),   (2.6)
as can be seen directly from (2.1). Thus, all wavelets with β>0 in the same γ family can be generated by repeatedly differentiating (or fractionally differentiating) the zeroth-order generalized Morse functions ψ_{0,γ}(t), which are shown separated from the others in the left-hand column of figure 2. Varying γ, on the other hand, leads to qualitatively different families. The most familiar of these is γ=2, for which the zeroth-order function consists of the analytic part of a Gaussian, with derivatives of this analytic Gaussian occurring for higher-order β. For more details on the roles of β and γ in shaping the wavelet properties, see [17,18].
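This differentiation relation can be verified in the frequency domain, where d/dt corresponds to multiplication by iω, so that the ratio ωΨ_{β,γ}(ω)/Ψ_{β+1,γ}(ω) must equal the constant a_{β,γ}/a_{β+1,γ}. A quick numerical check (Python/NumPy; the values of β and γ are arbitrary):

```python
import numpy as np

def morse_a(beta, gamma):
    """Normalizing constant a_{beta,gamma} = 2 (e*gamma/beta)^(beta/gamma)."""
    return 2.0 if beta == 0 else 2.0 * (np.e * gamma / beta) ** (beta / gamma)

def morse_freq(omega, beta, gamma):
    """Frequency-domain Morse function, evaluated for omega > 0 only."""
    return morse_a(beta, gamma) * omega ** beta * np.exp(-omega ** gamma)

beta, gamma = 2.0, 3.0
omega = np.linspace(0.1, 3.0, 50)

# omega * Psi_{beta} is proportional to Psi_{beta+1}: the ratio is a constant
ratio = omega * morse_freq(omega, beta, gamma) / morse_freq(omega, beta + 1.0, gamma)
```

The exponential factors cancel and the powers of ω combine, leaving a frequency-independent ratio of normalizing constants.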
Figure 2.
Examples of time-localized signals of the Morse wavelet form. Each row corresponds to a particular value of γ, and each column to a particular value of β, as indicated. The real parts, imaginary parts and absolute values are shown as solid, dashed and heavy solid lines, respectively. The zero value is marked by the horizontal dotted lines, while the dotted vertical lines mark the wavelet footprint defined in (2.8). The signals in the β=0 class, to the left of the double solid line, are not classified as wavelets since they are not zero mean. For these β=0 functions, the footprint is not defined, so the vertical dotted lines mark a roughly comparable interval. For small β and γ, the signals have an impulsive rather than oscillatory character, becoming more oscillatory towards the lower right as β and γ increase.
In addition to the peak frequency ω_{β,γ}, a second fundamental quantity is a non-dimensional measure of the wavelet’s time-domain width, denoted P_{β,γ}. From the definition
one sees that P_{β,γ} is the square root of the second moment of the wavelet, after demodulation by its own peak frequency. The first equality follows from the Fourier transform of the demodulated wavelet, and the second may be verified directly, or see §III-B of [17]. Because it is the product of a time-domain width and the wavelet’s peak frequency ω_{β,γ}, P_{β,γ} could be called the time-bandcentre product. The third expression in (2.7) suggests that P_{β,γ} could also be interpreted as a non-dimensional inverse bandwidth, see [18].

A dimensional measure of the wavelet’s time-domain width will also be used. Note that P_{β,γ}/ω_{β,γ} is a measure of the time-domain half-width of the s=1 or ‘mother’ wavelet. Thus, we may introduce a measure of the duration of the scale s wavelet, termed the wavelet footprint, as
using ω_s≡ω_{β,γ}/s. Through numerical calculation, we find a window of this width typically captures ≈95% of the total wavelet energy. As discussed in appendix B, the wavelet footprint is closely related to a more familiar quantity, the wavelet’s time-domain standard deviation.

A more complete description of the wavelet properties is provided by the wavelet moments (§III-A of [17]), which will be used in several mathematical derivations herein. The relevant aspects of the wavelet moments are discussed in the electronic supplementary material, §S1.
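The quantities above admit a simple numerical cross-check. By the Fourier moment theorem, the second moment of the demodulated wavelet equals −Ψ''(ω_{β,γ}), and since Ψ'(ω_{β,γ})=0 at the peak, P_{β,γ} can be computed from the curvature of the frequency-domain wavelet at its peak; it should match the closed form P_{β,γ}=√(βγ) given in [17]. A sketch (Python/NumPy; the finite-difference step is an arbitrary choice):

```python
import numpy as np

def morse_freq(omega, beta, gamma):
    """Frequency-domain Morse function for omega > 0."""
    a = 2.0 * (np.e * gamma / beta) ** (beta / gamma)
    return a * omega ** beta * np.exp(-omega ** gamma)

beta, gamma = 3.0, 3.0
w_pk = (beta / gamma) ** (1.0 / gamma)   # peak frequency (2.3)

# second moment of the demodulated wavelet via the curvature of Psi at its peak:
# int t^2 psi(t) e^{-i w_pk t} dt = -Psi''(w_pk),  int psi(t) e^{-i w_pk t} dt = Psi(w_pk)
h = 1e-3
d2 = (morse_freq(w_pk + h, beta, gamma) - 2.0 * morse_freq(w_pk, beta, gamma)
      + morse_freq(w_pk - h, beta, gamma)) / h ** 2
P = w_pk * np.sqrt(-d2 / morse_freq(w_pk, beta, gamma))
```

For (β,γ)=(3,3), this gives P≈3, consistent with √(βγ).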
Optimization principle
In this section, we examine the optimization principle on which the 1/s or amplitude-normalized wavelet transform is based. To do so, we first examine the more familiar 1/√s or energy normalization. Let us say that we attempt to fit a rescaled and shifted version of a wavelet ψ(t) to the real-valued time series x(t) by minimizing the total error

ϵ(c,τ,s) ≡ ∫_{−∞}^{∞} [x(t) − ℜ{(c/√s) ψ((t−τ)/s)}]² dt,   (2.9)
and we therefore seek the values of c, τ and s that minimize this error. The choice of c that minimizes the error will be denoted c(τ,s). Setting the partial derivatives of ϵ(c,τ,s) with respect to the real and imaginary parts of c equal to zero, one finds

c(τ,s) = (2/E_ψ) (1/√s) ∫_{−∞}^{∞} x(t) ψ*((t−τ)/s) dt,   (2.10)
where E_ψ≡∫_{−∞}^{∞}|ψ(t)|² dt is the energy of the scale s=1 or ‘mother’ wavelet. This states that the best-fit coefficient at each time and each scale is proportional to the continuous wavelet transform with a 1/√s or energy normalization. Inserting this expression into (2.9) leads to

ϵ(c(τ,s),τ,s) = ∫_{−∞}^{∞} x²(t) dt − (E_ψ/2)|c(τ,s)|²,   (2.11)
and since the first term is constant, the error is minimized for that choice of time offset τ and scale parameter s that maximize the squared modulus of the energy-normalized wavelet transform, in other words, for (τ,s) being a maximum point of the wavelet transform modulus.

Thus, the maxima points of the energy-normalized wavelet transform give the local best fits—in the sense of minimizing the time-integrated error—between the observed signal x(t) and time-shifted, rescaled versions of the wavelet. While this might appear a compelling argument to use this normalization, when we carry out the analysis described herein using the energy normalization on real-world data, the results are poor. The reason is that the energy-normalized wavelet transform is overly influenced by variability at adjacent times. In fact, attempting to explain as much variability as possible using a single wavelet is not a suitable principle for analysing time series containing multiple, potentially interacting events. Because longer wavelets can capture more energy, the transform has a tendency to achieve a maximum when it is long enough to span several nearby events; but the objective here is to detect the events individually.

The quantity (E_ψ/2)|c(τ,s)|² is that portion of the total signal energy that can be explained by a single wavelet located at time τ and scale s; note that it has units of energy, like ∫x²(t) dt. The related quantity (E_ψ/2)|c(τ,s)|²/s
is therefore proportional to the energy density in a time interval of duration s, or the power captured by a wavelet located at a particular time/scale point. This is the same as the wavelet transform with an amplitude or 1/s normalization. Therefore, maxima points of the amplitude-normalized wavelet transform identify the time offsets τ, scaling factors s and complex-valued coefficients c that maximize the energy density over an interval proportional to their own duration. Thus, the 1/s-normalized wavelet transform is based on the principle of optimizing power.
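The best-fit coefficient c(τ,s) can be checked against a brute-force fit. In the sketch below (Python/NumPy; all parameter values are arbitrary choices), the wavelet is built spectrally so that it is exactly analytic, in which case the closed-form coefficient 2⟨x,ψ_{τ,s}⟩/E_ψ and a direct least-squares solution over the real and imaginary parts of c agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n, s, tau = 1024, 12.0, 512.0
beta, gamma = 3.0, 3.0

# energy-normalized wavelet (1/sqrt(s)) psi((t - tau)/s), built spectrally so that
# it is exactly analytic: its Fourier transform is sqrt(s) Psi(s*omega) e^{-i omega tau}
omega = 2.0 * np.pi * np.fft.fftfreq(n)
a = 2.0 * (np.e * gamma / beta) ** (beta / gamma)
Psi = np.where(omega > 0,
               a * np.abs(s * omega) ** beta * np.exp(-np.abs(s * omega) ** gamma), 0.0)
g = np.sqrt(s) * np.fft.ifft(Psi * np.exp(-1j * omega * tau))

c0 = 1.3 * np.exp(0.7j)                       # "true" coefficient (hypothetical)
x = np.real(c0 * g) + 0.01 * rng.standard_normal(n)

# closed-form minimizer of sum (x - Re{c g})^2, valid because sum(g^2) = 0 for analytic g
c_formula = 2.0 * np.sum(x * np.conj(g)) / np.sum(np.abs(g) ** 2)

# direct least squares over (Re c, Im c), using Re{c g} = Re(c) Re(g) - Im(c) Im(g)
A = np.column_stack([np.real(g), -np.imag(g)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
c_direct = coef[0] + 1j * coef[1]
```

Both estimates also recover c0 up to the noise-induced error, illustrating that the fitted coefficient reads off the event amplitude and phase directly.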
Element analysis
In this section, element analysis using the Morse wavelets is developed. It is shown that if the element function ψ(t) in (1.1) is chosen to be a Morse function, then analysing the signal x(t) with any Morse wavelet in the same γ family leads to a straightforward way of inferring the event properties. Firstly, in §3a, the use of the Morse functions as signal elements is introduced, and transform maxima points are defined. In §3b, it is shown that the wavelet transform of a Morse function with another Morse wavelet can itself be expressed as a modified Morse wavelet. This fact lets us derive, in §3c, a simple expression for the entire wavelet transform of a time series represented by the element model. In that section, we also find expressions for the time/scale points at which transform maxima should occur, and the values of those maxima, given the properties of the underlying signal elements. Thus, properties of observed transform maxima can be inverted to obtain estimates of the element properties, as shown in §3d. Finally, an illustration using a synthetic dataset is given in §3e, the examination of which motivates the development of statistical significance and degree of isolation criteria in the next section.
Generalized Morse functions as signal elements
Here we propose the use of the Morse functions ψ_{μ,γ}(t) as element functions, leading to a signal model of the form

x(t) = ℜ{∑_{n=1}^{N} c_n ψ_{μ,γ}((t−t_n)/ρ_n)} + x_ε(t),   (3.1)
where the properties of the element function are set by μ and γ, and where x_ε(t) is a noise process defined subsequently and which is assumed to be zero mean. The parameter μ plays the role of β in the element function, while ρ_n plays the role of the scale s; we reserve β and s to refer later to the analysing wavelet. Note that μ, unlike β, can be equal to zero. Taking the wavelet transform of x(t) using a (β,γ) Morse wavelet leads to

w(τ,s) = ∑_{n=1}^{N} (c_n/2) w_n(τ,s) + ε(τ,s),   (3.2)

in which w_n(τ,s) denotes the wavelet transform of ψ_{μ,γ}((t−t_n)/ρ_n),
where ε(τ,s) denotes the wavelet transform of the noise process x_ε(t). Here, we have written the real part in (3.1) using ℜ{z}=(z+z*)/2, then noted that the wavelet transform of the anti-analytic function ψ*_{μ,γ}((t−t_n)/ρ_n) with the analytic wavelet ψ_{β,γ}(t) vanishes identically, as can readily be seen from the frequency-domain form of the wavelet transform in (2.4).

We define transform maxima points as time/scale locations at which the wavelet transform modulus takes on a local maximum, that is, a point at which

∂|w(τ,s)|/∂τ = 0,   ∂|w(τ,s)|/∂s = 0,   (3.3)
The basic idea of element analysis is that the values of the wavelet transform at these points can be used to estimate the coefficients c, scales ρ and temporal locations t of the N events comprising the signal in the model (3.1). There are three aspects to this analysis. Firstly, we show how in the absence of noise, and assuming the N events are sufficiently well separated in time and in scale, the event properties t, ρ and c may be recovered from the maxima points of the wavelet transform. Secondly, we examine the wavelet transform of noise, and establish the rate at which ‘false positive’ maxima occur due to idealized noise processes. This leads to the establishment of a threshold cut-off associated with a particular density of spurious maxima points associated with the noise. Thirdly, we enforce the condition that the remaining, statistically significant maxima points are well separated using a region-of-influence condition.
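The first of these steps can be sketched as a search for grid points at which the transform modulus exceeds its eight neighbours and an amplitude cutoff. The example below (Python/NumPy; the element and wavelet parameters, scale grid and cutoff are hypothetical choices, and no significance or isolation testing is attempted) recovers the temporal location of a single noise-free synthetic event:

```python
import numpy as np

def morse_freq(omega, beta, gamma):
    """Frequency-domain Morse function with peak value 2, supported on omega > 0."""
    a = 2.0 if beta == 0 else 2.0 * (np.e * gamma / beta) ** (beta / gamma)
    om = np.abs(omega)
    return np.where(omega > 0, a * om ** beta * np.exp(-om ** gamma), 0.0)

def morse_cwt(x, scales, beta, gamma):
    """1/s-normalized transform computed via the frequency-domain form (2.4)."""
    omega = 2.0 * np.pi * np.fft.fftfreq(len(x))
    xf = np.fft.fft(x)
    return np.array([np.fft.ifft(xf * morse_freq(s * omega, beta, gamma))
                     for s in scales])

def transform_maxima(w, cutoff):
    """(scale, time) indices where |w| exceeds its 8 neighbours and the cutoff."""
    m = np.abs(w)
    core = m[1:-1, 1:-1]
    is_max = core > cutoff
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                is_max &= core > m[1 + di:m.shape[0] - 1 + di,
                                   1 + dj:m.shape[1] - 1 + dj]
    i, j = np.nonzero(is_max)
    return i + 1, j + 1

# one noise-free event of the form Re{c psi_{mu,gamma}((t - t_n)/rho_n)}
n, mu, gamma, t_n, rho = 2048, 1.0, 2.0, 500.0, 20.0
omega = 2.0 * np.pi * np.fft.fftfreq(n)
z = np.fft.ifft(rho * morse_freq(rho * omega, mu, gamma) * np.exp(-1j * omega * t_n))
x = np.real(1.5 * z)

scales = np.geomspace(5.0, 80.0, 30)
w = morse_cwt(x, scales, 2.0, gamma)        # analysing wavelet: beta = 2, same gamma
si, ti = transform_maxima(w, 0.5 * np.abs(w).max())
```

The strongest detected maximum sits at the event's temporal centre; relating its scale and complex value back to ρ_n and c_n is the subject of §3c,d.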
The Morse transform of another Morse function
The wavelet transform of the μth-order Morse function ψ_{μ,γ}(t/ρ) with a βth-order Morse wavelet in the same γ family has a simple expression, and is given by

w(τ,s) = ζ(τ/ρ, s/ρ),   (3.4)
where ζ(τ,s) is a modified wavelet function defined as
The derivation may be found in the electronic supplementary material, §S2. The wavelet transform of a Morse function with a Morse wavelet in the same γ family is therefore itself expressible as a modified Morse wavelet. Furthermore, (3.4) shows that taking the wavelet transform of the rescaled Morse function ψ(t/ρ) implies rescaling both the time and the scale of the wavelet transform of the original function ψ(t), but without changing the transform amplitude.The main feature in (3.5) is the appearance of a wavelet with order (β+μ). Both β and μ correspond to powers of ω in the frequency domain, which can be combined because the wavelet transform corresponds to a multiplication in the frequency domain. The scale dependence reveals two distinct effects: a more involved dependence of the amplitude on the scales s and ρ than the usual 1/s, and more significantly a rescaling of the wavelet’s time argument that is itself a function of the transform scale s. To understand the scale dependence of (3.5) in more detail, we examine the large-scale and small-scale limits to find
which has an illuminating interpretation. When s≫ρ, the analysing wavelet ψ(t/s) is much broader than the signal element ψ(t/ρ), and consequently the wavelet smooths the signal, spreading the transform out over the wavelet scale s. However, when s≪ρ, the wavelet is much narrower than the signal, and the transform scale remains fixed at the scale ρ of the analysed signal, simply decaying in magnitude as s decreases further.
Transform values at transform maxima
The ζ(τ,s) function allows us to determine the values of the wavelet transform at maxima points, and relate these to the properties of the signal elements. In terms of ζ(τ,s), we have
as a compact expression for the wavelet transform of the element model presented in (3.2). The expected value of the squared modulus of the wavelet transform is then approximately given by
if one neglects the interactions between different terms in the summation; here E{⋅} denotes the statistical expectation. The cross-terms between the noise and the wavelet transforms of the element functions vanish in expectation on account of the zero-mean assumption. We assume that the events are sufficiently well separated that (3.8) is a good approximation within a certain time/scale region surrounding each transform maximum, as discussed in detail in §4d.

Under the approximation (3.8), if the function |ζ(τ,s)| decays monotonically from its maximum value, then, neglecting noise, there will be exactly one transform maximum for each of the N events. There are two caveats to this. Firstly, low-level maxima may arise from the event–event interactions that are neglected by (3.8), as discussed in more detail later. Secondly, for some extreme parameter choices, with large values of γ and small values of β, the wavelet modulus may not decay monotonically in time from the wavelet centre. In such wavelets, one sometimes sees small sidelobe maxima; see, e.g., the wavelet in figure 2. If |ζ(τ,s)| does not decay monotonically, then one would expect to see minor maxima on the flanks of each primary maximum associated with the N signal elements. Both of these issues lead to weak spurious maxima. For well-separated signal elements, these will typically either be below the noise level, or may easily be rejected with an amplitude cut-off.

We now find the scale locations and transform values associated with the maximum points of the wavelet transform of a Morse function. The maximum value of |ζ(τ/ρ,s/ρ)| over all times and all scales is found to occur at time τ=0 and normalized scale , with a value of
To see this, we note that the maximum of |ζ(τ/ρ,s/ρ)| with respect to variations in time occurs at τ=0. At this time, ζ(0,s/ρ) takes on the real and positive value
introducing . This follows by combining (3.5) with the expression for ψ(0) given in the electronic supplementary material, §S1. Differentiating with respect to , one finds that this quantity attains a global maximum for any at the value given by (3.9).

The maximum value of the Morse wavelet transform of another Morse function is then found, by inserting (3.9) into (3.10), to be given by
where we have defined for future reference the scale weighting function
The maximum value is seen to be independent of the scale ρ of the transformed function.
Inferring element properties from maxima points
Under the assumption that the noise process x(t) vanishes, and subject to the caveats regarding spurious minor maxima discussed above, there will be one maximum point of the transform modulus associated with each of the N events. The nth maximum point will be located at time t_n and scale , and from (3.7) we find that the wavelet transform at this point is
Thus, one may work backwards from the set of observed time/scale maxima, at locations determined from the transform as in (3.3), to infer or estimate the element properties (t_n, ρ_n, c_n). Taking the value of the transform at the nth observed maximum, we have
where the hatted quantities indicate inferences for the values of the element properties based on the transform maximum points. Thus, the properties of the events can be read off directly from the maxima of the wavelet transform, provided the element function is considered as known.

Because the implementation used here refers to wavelets by their frequencies rather than their scales, the expression (3.14) mapping scales to maxima locations needs to be modified. The scale frequency characterizing scale s of the transform is ω_s = ω_{β,γ}/s, while ω_n = ω_{μ,γ}/ρ_n is the frequency at which the scale-ρ_n element function obtains its maximum value. Substituting s = ω_{β,γ}/ω_s and ρ_n = ω_{μ,γ}/ω_n into the expression from (3.14) for the scale location of a maximum point, one finds
as the relationship between the frequency band of the nth observed maximum of the wavelet transform, and the inferred frequency characterizing the corresponding element function.
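The scale-to-frequency mapping above can be made concrete: for a wavelet whose frequency-domain form is proportional to ω^β e^{−ω^γ}, setting the derivative of β ln ω − ω^γ to zero gives the peak frequency ω_{β,γ} = (β/γ)^{1/γ}. The following Python sketch is an illustrative fragment, not the author's implementation; the function names are our own:

```python
import numpy as np

def morse_peak_frequency(beta, gamma):
    """Radian frequency at which w**beta * exp(-w**gamma) is maximized,
    found by setting the derivative of beta*log(w) - w**gamma to zero."""
    return (beta / gamma) ** (1.0 / gamma)

def scale_frequency(s, beta, gamma):
    """Scale frequency w_s = w_{beta,gamma} / s characterizing transform scale s."""
    return morse_peak_frequency(beta, gamma) / s

def element_peak_frequency(rho, mu, gamma):
    """Frequency at which the rescaled element psi_{mu,gamma}(t/rho) peaks."""
    return morse_peak_frequency(mu, gamma) / rho

# for the psi_{2,2} wavelet used in the examples, the peak frequency is exactly 1
print(morse_peak_frequency(2, 2))  # -> 1.0
```

Converting the frequency of an observed maximum into an inferred element frequency is then a matter of rescaling by the ratio of the two wavelets' peak frequencies, as in the expression above.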
Examples of transform maxima
An illustration is presented in figure 3. Here, the original signal shown in the upper panel is of the form (3.1), with N=6 events using the first-order Gaussian-family wavelet ψ_{1,2}(t) as the element function, a 2000 point interval between successive events, and with amplitude and scale coefficients given shortly. Note that the sampling interval in this example is set to unity. We see from figure 3a that the events vary from left to right from an even, or cosine-like, form to an odd, or negative sine-like, form. The scale increases from left to right, while the maximum excursion decreases somewhat. To this signal, a realization of unit-variance Gaussian white noise has been added. The modulus of the wavelet transform of the resulting noisy signal with a ψ_{2,2}(t) wavelet is shown in the lower panel. It is seen that the element function scale appears to be increasing at a linear rate along the logarithmic scale axis, while the peak value of the transform modulus appears constant.
Figure 3.
An illustration of signal reconstruction using element analysis applied to a synthetic signal in noise. A 12 000 point signal (a) consisting of a series of six impulsive events, constructed from a ψ_{1,2}(t) wavelet as described in the text, is shown as the heavy black line. This signal is added to a realization of unit-variance Gaussian white noise, resulting in the grey line. The wavelet transform of this noisy signal with a ψ_{2,2}(t) wavelet is shown in (b), in which the y-axis shows the transform period 2π/ω on a logarithmic scale. Black circles mark the locations of six statistically significant and isolated maxima, identified as described in the text, while grey dots mark the locations of all other transform maxima. The locations of the six significant maxima in the absence of noise are shown with grey squares, but these are usually not visible because they are overlapped by the black circles. The heavy grey curves in (b) denote locations that are contaminated by edge effects, defined as time/scale locations within an interval from the beginning or the end of the time series. Black lines delineate the λ=1/2 region of influence around each maximum, as defined later in §4d. In (a), the red line shows the reconstruction based on the six statistically significant and isolated maxima, which is seen to be virtually identical to the original. The dotted line shows a reconstruction that does not take into account the isolation criterion for the maxima, resulting in a misfit near the second event; this line is elsewhere obscured by the red line.
The scale frequencies ω_n for the six events shown here are chosen to vary over a decade from ω_1=2π/100 to ω_6=2π/1000, with a logarithmic spacing such that the ratio ω_{n+1}/ω_n is the same for all n. The coefficient phase, defined as ϕ_n in c_n=|c_n|e^{iϕ_n}, is set to ϕ_n=(n−1)π/10, and varies from zero to π/2 as n varies from 1 to 6. The coefficient amplitude is chosen such that the maximum value of |c_n ψ_{1,2}(t/ρ_n)| is equal to 2 for all n, and is given by |c_n|=5.39; see the electronic supplementary material, §S1.
The apparent slight decrease in amplitude in figure 3a is actually a consequence of the changing phase.

The grey dots together with the black circles denote maxima points of the wavelet transform, determined using a numerical approximation to the conditions (3.3) described in appendix A. On account of the noise, there are many such maxima. However, six of these maxima, those denoted by the black circles, are found to be both highly statistically significant as well as isolated from one another and from the time series edges, using criteria to be developed in what follows. From the transform values at these points, we form estimates of the element properties using (3.14). Then, using these inferred properties, the original signal is reconstructed by inserting the hatted values into (3.1) with ψ_{1,2}(t) as the element function. The resulting reconstruction, shown as the red curve, is virtually identical to the original signal, despite the fact that the original signal is almost totally obscured by the noise. Similar results are obtained for the same signal added to a realization of unit-variance red noise, computed by cumulatively summing discrete Gaussian white noise, as presented in the electronic supplementary material, figure S1.
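A minimal numerical sketch of this kind of construction is given below, assuming only the frequency-domain form of the Morse function (proportional to ω^β e^{−ω^γ} on positive frequencies). The event times, scales and amplitudes are illustrative placeholders, not the exact values used in figure 3, and the normalization is arbitrary:

```python
import numpy as np

def morse_element(M, rho, beta=1, gamma=2):
    """Analytic time-domain Morse function psi_{beta,gamma}(t/rho), centred in a
    length-M array, built by inverse FFT of (rho*w)**beta * exp(-(rho*w)**gamma)
    evaluated over positive frequencies only (arbitrary normalization)."""
    w = 2 * np.pi * np.fft.fftfreq(M)
    W = np.zeros(M)
    W[w > 0] = (rho * w[w > 0]) ** beta * np.exp(-((rho * w[w > 0]) ** gamma))
    return np.roll(np.fft.ifft(W), M // 2)   # modulus peaks at index M // 2

def element_signal(M, times, rhos, coeffs, beta=1, gamma=2):
    """Real part of a superposition of rescaled, phase-rotated elements, cf. (3.1)."""
    x = np.zeros(M)
    for t0, rho, c in zip(times, rhos, coeffs):
        x += (c * np.roll(morse_element(M, rho, beta, gamma), t0 - M // 2)).real
    return x

# six events with log-spaced scales and phases stepping by pi/10 (illustrative values)
M = 12000
times = 1000 + 2000 * np.arange(6)
rhos = np.logspace(1, 2, 6)
coeffs = 5.0 * np.exp(1j * np.arange(6) * np.pi / 10)
noisy = element_signal(M, times, rhos, coeffs) + np.random.default_rng(0).standard_normal(M)
```

Stepping the phase ϕ_n while holding |c_n| fixed reproduces the qualitative progression from even to odd waveforms seen in figure 3a.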
Significance and isolation
In this section, we determine two criteria that must be applied to the transform maxima in order to identify meaningful events within the context of the element model. The first is a measure of statistical significance, and the second is a measure of isolation from other transform maxima. It will be assumed that the noise has a power-law spectrum, a form that encompasses both white noise and fractional Brownian motion. The expected value of the modulus-squared wavelet transform—or wavelet spectrum—of power-law noise is derived in §4a. The next step is to find the distribution of transform maxima due entirely to the presence of noise, as this will allow the significance of detected events to be determined. This is accomplished in §4b with the help of a Monte Carlo method that sidesteps the need to take the wavelet transform of noise realizations, and that instead allows the covariance properties of the wavelet spectrum to be simulated directly. Properties of transform maxima arising from noise are then examined in §4c and used to establish statistical significance. The final step in the algorithm is to determine whether the detected events are sufficiently isolated from one another such that the element model appears to be suitable. This is addressed in §4d with the identification of a new type of region associated with the Morse wavelet transform of another Morse function, referred to as the region of influence.
The wavelet transform of noise
Now we consider the wavelet transform of the noise x(t), which is assumed to be zero mean, stationary and Gaussian. Owing to the assumption of stationarity, the noise process has a Cramér spectral representation of the form
where X(ω) is an orthogonal increment process, i.e. one for which E{dX(ω) dX*(ν)} vanishes unless ω=ν. The spectrum of x(t) is defined in terms of its orthogonal increment process as
with δ(ω) again being the Dirac delta function. Using the spectral representation of x(t), its wavelet transform is given by
and the expected value of the squared modulus of this quantity is found to be
Using (4.2), and noting that Ψ(ω) is real-valued, this becomes
which is independent of time τ, and is found by projecting the Fourier spectrum onto rescaled versions of the squared Fourier-domain wavelet. For brevity, we will refer to this expected modulus-squared wavelet transform simply as the wavelet spectrum of the noise.

Herein we will consider both Gaussian white noise as well as Gaussian red noise having a power-law spectrum. The latter is important because many time series, geophysical time series especially, have signals embedded in red background noise (e.g. [24]). The red noise case will be considered first. Assume that a stationary process has the power-law spectrum
with A setting the spectral level and α controlling the spectral slope. The corresponding wavelet spectrum is found to be
provided , as shown in the electronic supplementary material, §S3. In the above, we have introduced the function
where the final expression follows from the definition of the gamma function; see also the electronic supplementary material, §S3. Thus, the wavelet spectrum depends on scale frequency as , which differs from the Fourier spectrum by a factor of ω. This difference can be traced to our choice of the 1/s normalization, which we have argued is more appropriate for interpreting the values of transform maxima. For this reason, the 'wavelet spectrum' with the 1/s normalization should be understood as not being strictly comparable to the Fourier spectrum.

The power-law spectrum (4.6) corresponds for to a random process x(t) consisting of fractional Brownian motion [25]; see also [26] and references therein. Although fractional Brownian motion is itself not stationary, as assumed here, a damped version of fractional Brownian motion known as the Matérn process is stationary [26]. The Matérn process has a spectrum that approximates the power-law form (4.6) for ω sufficiently greater than zero, as controlled by a damping parameter, and with the slope parameter in the range . Here, we will simply take the noise to be stationary with a spectrum that is equal to, or closely approximated by, (4.6) over the frequency range of interest, without specifying the type of the noise process.

Next we consider Gaussian white noise, which can be considered a special case of the power-law spectrum (4.6) with α=0. Whereas both the Matérn process and fractional Brownian motion are defined in continuous time, Gaussian white noise is a discrete process and its spectrum is therefore periodized. With a sampling interval of Δ=1, the noise variance is related to the physically realizable spectrum supported over plus or minus the Nyquist frequency as
Using this result, (4.7) becomes for α=0
which links the spectral amplitude A to the transform variance for the white noise case.

A comparison of the Fourier and wavelet spectra is shown in figure 4. Spectra of three signals are presented: the noisy and original signals from figure 3, and their difference, which is a time series of unit-variance white noise. In both plots, we see that signal dominates noise for periods greater than about 100 data points, whereas noise dominates at smaller scales. As mentioned earlier, the time average of the modulus-squared wavelet transform with the 1/s normalization is not an approximation to the Fourier spectrum. In both panels, the dashed line shows the prediction for unit-variance noise. In the one-sided presentation of the Fourier spectral levels employed here, spectral densities are doubled, so the unit-variance signal x(t) has a spectral level of two. The prediction for the wavelet transform of noise is given by evaluating (4.10) with the choices σ=1, β=2 and γ=2. The realized and predicted noise levels match closely for both the Fourier spectrum and the wavelet transform.
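The gamma-function reduction underlying the noise level in expressions such as (4.8) and (4.10) amounts to the identity ∫₀^∞ ω^p e^{−2ω^γ} dω = Γ((p+1)/γ)/(γ 2^{(p+1)/γ}), obtained by the substitution t = 2ω^γ. A quick numerical check of this step (an illustrative sketch, not the author's code; scipy is assumed available) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

def closed_form(p, g):
    """Closed form of the integral of w**p * exp(-2*w**g) over (0, inf),
    obtained by substituting t = 2*w**g and using the gamma function."""
    return gamma_fn((p + 1) / g) / (g * 2.0 ** ((p + 1) / g))

def by_quadrature(p, g):
    """Direct numerical evaluation of the same integral."""
    return quad(lambda w: w ** p * np.exp(-2.0 * w ** g), 0.0, np.inf)[0]

# beta = 2, gamma = 2, alpha = 0 (white noise) gives p = 2*beta - 2*alpha = 4
print(abs(closed_form(4, 2) - by_quadrature(4, 2)) < 1e-8)  # -> True
```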
Figure 4.
The estimated one-sided Fourier spectra (a) and wavelet spectra (b) for the clean and noisy versions of the time series shown in figure 3a, together with the spectra of the noise only. The Fourier spectra in (a) have been computed using the adaptive multitaper method of [27] with a time-bandwidth product set to 20. Panel (b) shows the time averages of the magnitude-squared wavelet transforms using a ψ2,2(t) Morse wavelet, with the heavy black curve being the time average of the square of the transform shown in figure 3b. The x-axis, which is the same for both (a) and (b), is presented in terms of period instead of frequency. The dotted lines show the predicted spectral levels for unit variance Gaussian white noise, given by a value of two for the one-sided Fourier spectra in (a) and (4.10) for the wavelet spectra in (b).
Distribution of transform maxima in noise
In order to assess the confidence of detected transform maxima, it is necessary to know the rate at which spurious maxima occur due entirely to the background noise. The distributions of transform maxima in noise can be determined using Monte Carlo simulations, in which one simulates a long time series of power-law noise x(t), takes its wavelet transform ε(τ,s), and then searches for transform maxima. This is computationally expensive, particularly because of the need to work with noise time series much longer than the time series of interest in order to obtain stable statistics. Fortunately, the desired statistics can be obtained in a more direct manner.

The task of determining the distribution of transform maxima due to noise may be simplified by recognizing that, apart from discretization effects, suitably normalized transform maxima are expected to exhibit a universal distribution across scales. If at each scale we normalize transform maxima by the expected root-mean-square magnitude of the wavelet transform of the noise
then one expects that the distribution of normalized transform maxima values over a time interval of the same duration as the scale s wavelet, e.g. as measured by the wavelet footprint L(s), should be independent of the scale s. This conjecture of an approximately universal distribution of transform maxima across scales for a particular choice of α, β and γ will be verified shortly. If it holds, one would only need to determine the temporal density and amplitude distribution of events at one scale, and then extrapolate to any other scale.

The covariance between the wavelet transform of the noise ε(τ,s) and itself at another time and another scale is given by the function
using the fact that ε(τ,s) is both zero mean and stationary. Here u is the time shift between the two versions of ε(τ,s), while r is the ratio of their scales. For power-law noise, this becomes
as shown in the electronic supplementary material, §S4. This expression contains three parts: an s-dependent coefficient, proportional to the wavelet spectrum of the noise; an r-dependent coefficient in square brackets; and a modified version of the (2β−2α,γ) wavelet containing all the u-dependence. As a check, it is shown in the electronic supplementary material, §S4, that if r=1 and u=0, one recovers the wavelet spectrum, as expected.

Now let x(τ,s) be a 5-vector consisting of the noise transform ε(τ,s) at time τ and scale s, as well as the noise transform at the four adjacent points on the time/scale plane,
where the superscript ‘T’ denotes the transpose. Here, Δ is the sampling interval, and we let the ratio between successive scales, r, take on the value of the scale discretization used in the wavelet transform, see appendix C. The covariance structure of the vector x(τ,s), normalized by the local transform variance , is given by the 5×5 matrix
which from stationarity is independent of time τ. For power-law noise, the entries of this matrix can be immediately written down in terms of the transform covariance function Ξ(u,s,r) as
in which subscripts have been omitted on Ξ(u,s,r) for clarity. In deriving the above, we have made use of the symmetry Ξ(u,s,r)=Ξ*(−u,rs,1/r), apparent from the definition (4.12), as well as the choice Δ=1.

The distribution of transform maxima due entirely to noise can now be determined as follows. A Cholesky decomposition of the normalized covariance matrix yields a lower triangular factor. With realizations of a 5-vector containing independent, unit-variance, complex-valued Gaussian white noise, we create y(s) at each scale as the product of this lower triangular factor with the noise vector, and note that y(s) then has the covariance matrix (4.16) by construction. In other words, y(s) has the same covariance structure as we would observe by grouping the wavelet transform of power-law noise at the point (τ,s) with its four neighbours into the vector x(τ,s). The probability that the first element of y(s), denoted y1, is greater in magnitude than the other four elements is the same as the probability of there being a transform maximum in ε(τ,s) at scale s. Similarly, the amplitude distribution of y1, given that it is the largest-magnitude element in y(s), will be the same as the amplitude distribution of normalized maxima values in ε(τ,s). Thus, rather than simulating noise and taking its wavelet transform, we can simulate y(s) directly by creating realizations of a 5-vector of noise and then performing a matrix multiplication: a considerable simplification.
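The simulation step can be sketched as follows, with a hypothetical 5×5 covariance matrix standing in for the matrix built from Ξ(u,s,r); only the Cholesky-colouring logic is meant to mirror the procedure in the text:

```python
import numpy as np

def simulate_maxima_amplitudes(Sigma, n_draws, seed=0):
    """Draw vectors y = z L^T with covariance Sigma (via Cholesky factor L), where
    z is unit-variance complex Gaussian white noise, and return |y1| for the draws
    in which the first element exceeds its four neighbours in modulus, i.e. the
    draws that mimic a local transform maximum."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    z = (rng.standard_normal((n_draws, 5)) +
         1j * rng.standard_normal((n_draws, 5))) / np.sqrt(2.0)
    y = z @ L.T                        # rows of y have covariance Sigma
    is_max = np.all(np.abs(y[:, :1]) > np.abs(y[:, 1:]), axis=1)
    return np.abs(y[is_max, 0])

# stand-in covariance: unit variance, equal correlation 0.3 with the neighbours
Sigma = 0.3 * np.ones((5, 5)) + 0.7 * np.eye(5)
amps = simulate_maxima_amplitudes(Sigma, 200_000)
maxima_rate = amps.size / 200_000     # probability that a point is a local maximum
```

For uncorrelated elements this rate would be 1/5 by symmetry; positive correlation between neighbours changes both the rate and the amplitude distribution of the retained draws.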
Simulations of noise distributions
An example of this approach to simulating the distribution of maxima in noise is shown in figure 5 for a white noise time series analysed with a ψ_{2,2}(t) wavelet, as in the example of figure 3. For each of the 59 scale bands shown in that transform, we compute the covariance matrix from (4.16) with β=2 and γ=2, and with α=0 corresponding to a white noise process x(t). We then simulate 12 000×10⁴ realizations of y(s) at each scale, following the steps in the previous paragraph. The histogram of the amplitudes of |y1|, when it is the largest element in the vector, simulates the histogram of the normalized amplitudes of transform maxima. These histograms are shown in figure 5a for 57 scale bands, excluding the first and the last, as transform maxima cannot be numerically detected there. The histogram is computed in 100 bins linearly spaced between zero and three. The y-value of each curve is normalized such that the curve sums to the total number of maxima points detected in a time series one wavelet footprint L(s) in duration. In other words, both the x- and y-axes have been normalized in accordance with the universal distribution proposed in the second paragraph of the previous section.
Figure 5.
Normalized histograms (a) and false detection rates (b) for transform maxima within unit-variance Gaussian white noise transformed with a ψ_{2,2}(t) wavelet, as was used in figure 3, computed by simulating the y(s) vector as described in the text. Panel (a) shows the distribution of the normalized event magnitude with a bin size of about 0.03, and with the curves for different scales normalized such that they sum to the total number of events expected in a time series of length L(s). In (b), the cumulatively summed distributions corresponding to these curves are shown, but summed from large values to small in order to indicate the rate of occurrence of an event larger than a particular value. As an example, the horizontal line indicates a rate of one event per 100L(s), which occurs at a value of about 1.7, marked by the vertical line. Fifty-seven curves are shown in each panel, corresponding to all 59 scale frequency bands used in figure 3, apart from the first and the last, where no maxima can be detected. These plots are created by simulating 10⁴M realizations of y(s), where M=12 000 is the length of the time series in figure 3. The dots are a comparison from explicitly taking the wavelet transform of noise at the three smallest scale bands and searching for transform maxima within the second band, using a time series of length 2000M; see text for details.
The curve for the second scale band, shown as the black line, is approximately Gaussian in shape, with a mean value of about 1.36, indicating that a typical transform maximum has a non-normalized value somewhat larger in magnitude than the expected transform amplitude σ(s), an intuitive result. However, it is rare to find values exceeding 2, with only about 10% of the transform maxima having larger magnitudes.
A slight tendency for positive skewness is apparent, as may be expected due to the fact that the magnitude is non-negative.

Examining the curves from all the scales, we see that normalizing the amplitudes by σ(s) and the densities by L(s) has indeed virtually collapsed all the curves together, in agreement with the proposed universal distribution. The most significant difference is that as one proceeds to larger scales, a higher degree of scatter is observed. This occurs because within y(s), the first component y1 becomes increasingly correlated with the other four components as s increases; thus, the effective sample size of a fixed-length simulation decreases, increasing the variance. Within an intermediate band of scales, from bands 3 to 25, there is a tendency for the central peak to decrease slightly as scale increases, although this tendency does not appear to continue indefinitely. The conjecture of a universal distribution therefore appears to be a close but not quite perfect approximation. The minor dependence of the maxima statistics on s is attributed to discretization effects, which are correctly captured by the simulations based on the covariance matrix.

For comparison, the distribution of transform maxima points for the second scale band is also computed by explicitly taking the wavelet transform of a noise time series. A real-valued Gaussian white noise time series x(t) of length 12 000×2000 is transformed with the ψ_{2,2}(t) wavelet using only the first three scale bands, or the three smallest-scale wavelets, used in figure 3b. Transform maxima may then be identified in the second band, and their normalized distributions are plotted in figure 5a as black dots. The agreement with the calculation based on simulating y(s) in the second scale band, shown as the black line, is excellent.
Normalized distribution curves for other choices of β and γ, which are not shown, are similar in form to those shown in figure 5a, and are generally roughly Gaussian in shape with a slight positive skewness.

In figure 5b, the cumulative distributions associated with these histogram curves are shown, but summed in the reverse direction, from large values to small values. This quantity is known in the literature as the survival function or complementary cumulative distribution function; in the context of this analysis, it will be shown to indicate a false detection rate. The curves in figure 5b give the rate at which transform maxima larger than a particular value occur. The highest value of all the curves, near zero amplitude, indicates that a transform maximum of any amplitude occurs at a rate of about 0.040–0.045 events over one wavelet footprint L(s), or one maximum every 22–25 footprints. The horizontal line marks a rate of 0.01 events per L(s), or one maximum every 100 footprints, and is found to be the rate at which events larger in magnitude than about 1.7σ(s) occur. Such curves can be used to set an amplitude cutoff for a tolerable false detection rate. Most of the differences between the rate curves occur for small-amplitude transform maxima; for amplitudes greater than about unity, the curves are all virtually indistinguishable.

The results of this section can be used to assess the statistical significance of transform maxima. This is illustrated in figure 6 for the example presented earlier in figure 3. Here, we plot the scale locations and magnitudes of all the transform maxima detected in the example, shown with their non-normalized magnitudes in (a) and normalized magnitudes in (b); these are the grey dots together with the black circles. For consistency with the spectra shown in figure 4, we plot the effective period rather than the scale itself.
The distributions and associated false detection rates appropriate for this length M=12 000 time series are then determined by simulating 1000×12 000 y(s)-vectors for each scale s. The curves show the resulting expected detection rates for a time series of length M=12 000. The rates here are expressed as events per time series of length M, such that 1/1000 means that an event of the indicated magnitude or larger is expected at a particular scale only once per 1000M, or 1.2×10⁷, data points. Because there are 57 scale bands being analysed, events of a larger magnitude are expected to occur at any scale at a rate of one per 1000/57 M, or roughly one per 20M.
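Turning a tolerable false detection rate into an amplitude threshold amounts to inverting the empirical survival function of the simulated maxima. A sketch of this step is given below; the simulated amplitudes are a synthetic stand-in for the normalized maxima produced by the y(s) simulation, and the function name is our own:

```python
import numpy as np

def detection_cutoff(maxima_amps, n_points, target_rate):
    """Amplitude above which noise-only maxima occur at roughly target_rate events
    per analysed data point, given simulated maxima amplitudes.

    maxima_amps : amplitudes of maxima found in a simulation of n_points points
    target_rate : tolerable false detections per data point
    """
    # fraction of the simulated maxima we are willing to admit as false detections
    frac = min(max(target_rate * n_points / maxima_amps.size, 0.0), 1.0)
    return float(np.quantile(maxima_amps, 1.0 - frac))

# synthetic stand-in: roughly Gaussian maxima amplitudes centred near 1.36
rng = np.random.default_rng(0)
amps = rng.normal(1.36, 0.3, size=5000)     # maxima from a 100 000-point simulation
cutoff = detection_cutoff(amps, 100_000, target_rate=1e-4)  # admits ~10 events
```

Tightening target_rate raises the cutoff, trading missed weak events for fewer false detections, exactly the trade-off expressed by the rate contours in figure 6.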
Figure 6.
The distribution of transform maxima for the example shown in figure 3, shown in two different ways. In (a), grey dots mark the periods of all detected maxima plotted against their magnitudes. In (b), the y-axis instead shows the transform maxima values normalized by the expected noise level at each scale, as defined in (4.11). Note that the x-axis in both panels is the scale frequency expressed as a period, 2π/ω. In both panels, the black dots show the scale locations and magnitudes of the six significant and isolated maxima of the noisy wavelet transform, as shown earlier in figure 3, while the grey squares show the actual scale locations and magnitudes of the six events in the noise-free signal. The coloured shading shows the density of transform maxima observed in a large noise simulation using the y(s)-vector method, as described in the text. The individual contours show curves of false detection rates inferred from the coloured shading. For example, the heavy black curve labelled 'rate: 1/1000' means that at each scale level, a transform maximum of this amplitude or larger is expected to occur only once per 1000 realizations of a time series of this length (M=12 000). The coloured shading corresponds to the density seen in figure 5a, while the contours correspond to the detection rates seen in figure 5b, both scaled appropriately for each scale level in this plot as described in the text.
Choosing the 1/1000 rate as our cut-off, we find seven events exceeding this level of significance, corresponding to the maxima associated with the six events of the noise-free signal, plus one duplicate maximum associated with the second event. This duplicate arises because the numerical algorithm has located two transform maxima very closely spaced together, a not uncommon occurrence.
From these seven statistically significant transform maxima, we estimate the properties of the underlying events using (3.14), and reconstruct the signal by inserting these into the element model (3.1). The result, shown as the black dotted curve in figure 3a, is very close to the original signal for most of the record and is therefore not visible. However, in the vicinity of the second event, it overshoots the original signal on account of the duplicate maximum. This difficulty is one of the reasons an isolation criterion is required, as developed in the next section.

In this section, an example of assessing the statistical significance of events in a white noise background has been presented. As another example, the case of α=1 red noise is addressed in the electronic supplementary material, figures S1, S2 and S3, which are the red noise analogues of figures 3–6, respectively. See the captions of those figures for further discussion.
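The reconstruction step can be illustrated with a short sketch, assuming the event properties (times, scales and complex coefficients) have already been estimated. This is not the paper's implementation: the element is synthesized by an inverse FFT of a peak-normalized frequency-domain Morse form, events are superposed by circular shifts, and the event list and parameter values are hypothetical.

```python
import numpy as np

def morse_element(N, scale, mu, gamma):
    """Time-domain (mu, gamma) Morse element of length N, centred at N // 2,
    built by inverse FFT of its frequency-domain form (peak-normalized)."""
    om = 2.0 * np.pi * np.fft.fftfreq(N)
    psi = np.zeros(N)
    pos = om > 0
    wp = (mu / gamma) ** (1.0 / gamma) if mu > 0 else 1.0
    psi[pos] = ((scale * om[pos] / wp) ** mu
                * np.exp(mu / gamma - (scale * om[pos]) ** gamma))
    return np.fft.fftshift(np.fft.ifft(psi))

def reconstruct(N, events, mu, gamma):
    """Superpose events, each a (time, scale, complex coefficient) triple;
    the coefficient's phase supplies the phase rotation of each element.
    Shifts are circular, so events should lie away from the edges."""
    x = np.zeros(N)
    for t0, rho, c in events:
        psi = morse_element(N, rho, mu, gamma)
        x += np.real(c * np.roll(psi, int(t0) - N // 2))
    return x
```

With mu=0 and gamma=2 this uses the analytic Gaussian element, the same shape employed in the altimetry application later in the paper.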
Regions of influence
The final step is to introduce conditions for guaranteeing that the transform maxima are sufficiently isolated from one another, as well as from any regions of missing data. There are several reasons for doing so. Firstly, the element method depends upon the assumption that the events in the signal model (3.1) are sufficiently isolated that, in the vicinity of transform maxima, the events may be regarded independently from one another. In real-world applications, there may be sources of variability for which this is not the case, and the properties of such events are not expected to be accurately recoverable. Such events should therefore be rejected from the event detection results on account of being insufficiently isolated. Secondly, discretization effects and/or noise can often lead to multiple closely spaced transform maxima associated with the same event, which should not be taken to represent independent events. In such cases, it is desirable to have a means of determining the primary transform maximum and rejecting the others. Finally, domain edges or missing data can also contribute to creating spurious maxima.

In this section, we will make use of an expansion of the time-domain wavelets as approximately consisting of a modulated Gaussian,
which has been examined in detail by Lilly & Olhede [18]. The K quantities, termed the wavelet cumulants, are terms in a Taylor series of the natural logarithm of the wavelet, which here has been truncated after the second-order term. These cumulants are given by
see appendix B and §III-A of [17], or the electronic supplementary material, §S1. K1 plays the role of a frequency, the second-order cumulant determines the Gaussian’s standard deviation, and ϵ3 is an error term that is implicitly defined as a residual. Because the wavelet magnitude |ψ(t)| has a roughly Gaussian profile, this second-order expansion of the logarithm of the wavelet is a much better approximation than would be obtained from the second-order Taylor series of the wavelet itself.

A solution to determining whether the events are well isolated from one another is based on the expected region of influence associated with a transform maximum. We will identify the curve at which the wavelet transform modulus has fallen off to some fraction λ of its peak value. Assuming an event with scale ρ located at time τ=0, we are interested in the (τ,s) curve satisfying
where again the quantities are as defined previously. Knowledge of the Morse wavelets allows us to readily obtain a closed-form expression for an approximation to this curve. Inserting the cumulant expansion (4.17) into the expression for ζ(τ,s,ρ) as a wavelet given earlier in (3.5), (4.19) becomes
after also making use of (3.11). Here, we have chosen to ignore the error term ϵ3 in the cumulant expansion, which arises from terms higher than second order. This rearranges to give
as an approximation to the region of influence for a (μ,γ) Morse function analysed with a (β,γ) wavelet, using the 1/s scale normalization in the wavelet transform.

This expression for the region of influence can readily be evaluated numerically. The right-hand side of (4.21) is real-valued over the range of scales for which the numerator in the natural logarithm exceeds the denominator. While the exact locations of the crossover points of these two curves do not have convenient analytic expressions, analysing their behaviours shows that the range of scales over which (4.21) is real-valued lies within the somewhat broader range
as shown in appendix D. Therefore, to compute the curves, we determine the two endpoint scales in (4.22), form an array of normalized scales over this range, compute (4.21), and then omit any end regions in which the computed curve takes on imaginary values.

The regions of influence are employed in the element analysis as follows. After identifying a set of transform maxima, and excluding those falling below a certain significance level based on the noise model, we then exclude those that are not sufficiently isolated. To do so, we choose a particular λ level and compute the approximate region of influence for each transform maximum by appropriately shifting and rescaling (4.21). The transform maxima are sorted in order of decreasing amplitude, and a maximum is rejected if any larger-amplitude maximum is found to occur within its region of influence, as this would indicate that it is not well isolated. The remaining maxima are said to be isolated at the particular λ level.

Finally, to deal with edge effects and the influence of missing data, the following approach is adopted. All gaps are first linearly interpolated over, and the locations of missing data are recorded. When transform maxima are detected, the fraction of data that is missing or lies outside the time-series boundaries is determined over a time period one wavelet footprint in duration centred on each maximum. Transform maxima containing more than some percentage, say 10%, of missing data are then rejected. This approach allows missing data segments of any length to be dealt with, while at the same time using as much information as possible from the data. The missing data condition should be applied before the isolation criterion, in order to prevent spurious maxima arising from missing data effects from interfering with physically meaningful maxima.

In the example of figure 3, as described earlier, seven statistically significant maxima are detected.
However, computing the regions of influence using (4.21) based on the known transform maxima locations together with β, μ and γ, we find that one of the two maxima in the vicinity of the second event is not well isolated at the chosen λ level. Rejecting this maximum, we are left with the six events shown as black circles in figures 3 and 6. In the former figure, the regions of influence around the six significant and isolated transform maxima are also shown. In real-world applications, this isolation criterion is found to be crucial for obtaining good performance.

Limiting the reconstruction using (3.1) to these six points, we obtain the red curve shown in figure 3. Despite the very noisy appearance of the analysed signal in figure 3, the original events are detected with a very high degree of statistical significance, and the reconstruction is virtually identical to the original signal. This illustrates that element analysis can accurately extract signals of the form (3.1) even in the presence of relatively large noise levels.

A natural question is the extent to which events may be obscured by other nearby events. This is explored in the electronic supplementary material, figure S4, for the noise-free signal shown in figure 3, plus a set of closely spaced smaller-magnitude events. It is seen that the smaller-amplitude events may be detected provided they are not too close to the larger-amplitude events, with the region of influence of the larger-amplitude events providing some guidance as to the shielding region. A more complete investigation of such effects is beyond the scope of this paper.
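The isolation and missing-data screens described above can be sketched as follows. The elliptical region of influence used here is a hypothetical stand-in for the closed-form expression (4.21), and all parameter values are illustrative.

```python
import numpy as np

def ellipse_region(dt_per_scale=3.0, ds_octaves=0.5):
    """Hypothetical region of influence: an ellipse in (time, log2 scale),
    standing in for the closed-form curve (4.21)."""
    def contains(t0, s0, t, s):
        return ((t - t0) / (dt_per_scale * s0)) ** 2 \
             + (np.log2(s / s0) / ds_octaves) ** 2 <= 1.0
    return contains

def isolated_maxima(maxima, region):
    """Keep a maximum only if no larger-amplitude maximum falls inside its
    own region of influence. `maxima` is a list of (time, scale, magnitude)."""
    order = sorted(maxima, key=lambda m: -m[2])
    keep = []
    for i, (t, s, mag) in enumerate(order):
        if not any(region(t, s, t2, s2) for t2, s2, _ in order[:i]):
            keep.append((t, s, mag))
    return keep

def missing_fraction(missing, t, footprint):
    """Fraction of missing-or-out-of-bounds samples within a window one
    wavelet footprint long centred on time t. `missing` is a boolean mask;
    points beyond the series boundaries count as missing."""
    half = footprint // 2
    idx = np.arange(t - half, t + half + 1)
    inside = (idx >= 0) & (idx < len(missing))
    n_bad = np.sum(~inside) + np.sum(missing[idx[inside]])
    return n_bad / len(idx)
```

As in the text, `missing_fraction` would be applied first (rejecting maxima above, say, 10%), and `isolated_maxima` applied to the survivors.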
Application
An application to real-world data is presented in figure 1. The dataset analysed here is a small segment of along-track data from the TOPEX/Poseidon/Jason/Ocean Surface Topography Mission satellite altimeters, and consists of 5216 valid data points. We use the Integrated Multi-Mission Ocean Altimeter Data for Climate Research, Version 3 dataset [28]. The ground tracks in this dataset are repeated exactly every 9.92 days, and have an along-track resolution of about Δ=5.7 km at the latitude considered here. The quantity measured is sea level anomaly relative to an unknown temporal mean.

One year’s worth of data are shown from a particular track in the Labrador Sea, a small marginal sea located between Greenland and Canada. The Labrador Sea is a well-known area of energetic coherent eddies [29–32], which were the subject of a study using along-track data in an early ad hoc prototype of the method developed here [9]. The particular track chosen crosses the Labrador Sea from southwest to northeast, passing within about 12 km of the site of the historical ‘Bravo’ mooring; see fig. 24 of [9]. Southwestern locations are at the left, and northeastern locations are at the right. The gap in the lower left of this figure is due to the seasonal advance of sea ice from the coastal Labrador Current, which interferes with the altimetric measurements. The upward bumps seen in the central part of the track are the signatures of long-lived coherent eddies, and are the structures we wish to detect and quantify.

In keeping with the use of a Gaussian as a model for eddies, as is standard in the literature, the element function used for this dataset is the analytic Gaussian ψ0,2(t). The time series are analysed using a ψ1,2(t) wavelet, with additional parameter settings as given in appendix C. The noise is taken to be Gaussian white noise.
From the variance within the highest frequency band, which corresponds to a period 2π/ω of 5.5 data points, we infer from (4.10) a noise standard deviation of σ=3.2 cm. This is used to assess statistical significance levels using the simulation method described in §4d. A very high level of significance is chosen, such that in each frequency band, events with a false detection rate greater than one event per 1000 realizations of this dataset are rejected. Using the region of influence condition, maxima are rejected if they are not isolated at the chosen λ level, and also if they contain more than 10% missing data.

The above steps lead to a small number of detected events, 67 altogether or fewer than two per track, that are determined to be highly statistically significant as well as isolated from one another and from missing data segments. Reconstructions based on the element analysis method are shown in the central panel. These are seen to explain virtually all of the meaningful structure. A persistent eddy feature of about 20 km in radius is clearly observed in the upper half of the central panel. The residuals (originals minus reconstructions) are shown at the right, and appear virtually devoid of meaningful structure, showing that the model is indeed a good fit to the observations. Moreover, as the detected events reconstruct the data using only 5% (4×67/5216≈0.05) as many coefficients as there are data points, the information within the data is represented with a high degree of compression.

In an earlier study using a prototype version of this method, the detected events in along-track altimetry were analysed in detail to understand the physical properties of coherent eddies in this region; see §5–6 of [9]. In particular, validation against in situ velocity measurements was carried out in order to ensure that the detected events were physically meaningful; see fig. 33 in that paper.
As the events here appear qualitatively similar to the previously identified events, they are also likely to be physically meaningful. The earlier study used data from 1992 until about mid-2000; a follow-up study using a longer data record would be valuable in order to assess interannual variability in eddy statistics. However, this would require considerably more work and is outside the scope of the present paper, which is limited to the development of the method. Further analysis of the detected events is left to a sequel.

This application shows that coherent eddies as small as O(10) km can indeed be extracted from the along-track dataset. The data segment analysed here represents only one-thousandth of 1% of the over 400 million data points within the entire along-track altimeter dataset. Considering the vast size of the complete dataset underscores the need for a fully automated method, and justifies the effort that has been required in its development. The element method makes possible a global eddy census along the lines of [11], but with the ability to resolve features an order of magnitude smaller than has previously been possible.
Conclusion and discussion
A method has been developed for analysing time series that consist of rescaled, phase-shifted, isolated replicates of a specified time-localized function. Time series of this type could be described as being ‘impulsive’ in nature, as opposed to singular or oscillatory. The method, termed element analysis, is inspired by the continuous wavelet transform, and uses the generalized Morse wavelet family as both a basis and an analysis tool. The element model is intended as a third major category of wavelet-based signal model, complementing the wavelet ridge and modulus maxima methods by allowing signals to be supported only at isolated points on the time/scale plane. Particular innovations are the creation of a simplified framework for efficient simulation of maxima statistics, as well as the identification of an approximate form for the regions of influence on the time/scale plane. While the method was formulated for real-valued time series, the extension to complex-valued time series is straightforward.

The method was applied to the detection of coherent eddy events in along-track satellite altimetry, with encouraging results. Furthermore, there is good reason to believe that this method may be useful in a wide range of problems. In addition to being suitable for eddy detection, the method is also appropriate for strongly modulated wave packets or other impulse-like events, common features in many physical systems. Moreover, the method can be seen in some respects as a generalization of a Fourier series representation, with the beneficial aspect of allowing for time localization in signal features. In comparison with the wavelet thresholding method, element analysis is more specific because it requires the support points to be isolated from one another, thus filtering out events for which the proposed signal form is not a good match.

There are a number of ways that the method presented here could be extended.
Firstly, while the shape of the detection rate was found to collapse with a suitable normalization, as shown in figure 5b, the total detection rate—the y-intercept of these curves—takes on a range of values. When examined over the β and γ plane (not shown), there emerges what appears to be a meaningful pattern in the total detection rate, but the reasons for this variation are not clear. Thus, better understanding the distribution of noise events from first principles is a direction for future work. Secondly, it would be straightforward to work out expressions for the bias and variance associated with the estimated event properties in the presence of noise, if desired. Thirdly, there is the question of how to choose an element wavelet, i.e. determining the values of μ and γ that best capture signal structure, and the sensitivity of the analysis to this choice. Finally, the model proposed here, while fairly flexible, could be generalized still further by allowing each event to be composed of a superposition of the higher-order orthogonal versions of the Morse wavelets that emerge from the localization region formalism [13,15,16]. This may involve augmenting the element method with the polarization analysis of [16,33]. Alternatively, higher-order wavelets could be used as the basis for a more refined metric for the quality of a fit, by excluding events which project too strongly onto the next few orthogonal wavelets in the vicinity of a transform maximum. This could represent an additional means of classifying the properties of the detected events, which would aid in their interpretation.