State and Parameter Estimation from Observed Signal Increments.

Nikolas Nüsken, Sebastian Reich, Paul J. Rozdeba.

Abstract

The success of the ensemble Kalman filter has triggered a strong interest in expanding its scope beyond classical state estimation problems. In this paper, we focus on continuous-time data assimilation where the model and measurement errors are correlated and both states and parameters need to be identified. Such scenarios arise from noisy and partial observations of Lagrangian particles which move under a stochastic velocity field involving unknown parameters. We take an appropriate class of McKean-Vlasov equations as the starting point to derive ensemble Kalman-Bucy filter algorithms for combined state and parameter estimation. We demonstrate their performance through a series of increasingly complex multi-scale model systems.

Keywords:  continuous-time data assimilation; correlated noise; ensemble Kalman filter; multi-scale diffusion processes; parameter estimation

Year:  2019        PMID: 33267219      PMCID: PMC7514994          DOI: 10.3390/e21050505

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

The research presented in this paper has been motivated by the state and parameter estimation problem for particles moving under a stochastic velocity field, with the measurements given by partial and noisy observations of their position increments. If the deterministic contributions to the velocity field are stationary, and the position increments of the moving particle are exactly observed, then one is led to a standard parameter estimation problem for stochastic differential equations (SDEs) [1,2]. In [3], this setting was extended to the case where the deterministic contributions to the velocity field themselves undergo a stochastic time evolution. Furthermore, while continuous-time observations of position increments are the focus of the present study, the assimilation of discrete-time observations of particle positions has been investigated in [4,5] in a so-called Lagrangian data assimilation setting for atmospheric fluid dynamics. The assumption of exactly and fully observed position increments is not always realistic, and the case of partial and noisy observations is at the centre of the present study. Access to only partial and noisy observations of position increments leads to correlations between the measurement and model errors. The theoretical impact of such correlations on state and parameter estimation problems has been discussed, for example, in [6] in the context of linear systems, and in [7] for nonlinear systems. In particular, one finds that the appropriately adjusted data likelihood involves the gradient of log-densities, which is nontrivial from a computational perspective, and which prevents a straightforward application of standard Markov chain Monte Carlo (MCMC) or sequential Monte Carlo (SMC) methods [8].
In this paper, we instead follow an alternative Monte Carlo approach based on appropriately adjusted McKean–Vlasov filtering equations, an approach pioneered in [9] in the context of the standard state estimation problem for diffusion processes. McKean–Vlasov equations, first studied in [10], are a class of SDEs in which the right-hand side depends on the law of the process itself. We rely on a particular formulation of McKean–Vlasov filtering equations, the so-called feedback particle filters [11], utilising stochastic innovation processes [12]. Our proposed Monte Carlo formulation avoids the need for estimating log-densities, and can be implemented in a numerically robust manner relying on a generalised ensemble Kalman–Bucy filter approximation applied to an extended state space formulation [13]. The ensemble Kalman–Bucy filter [14,15] has been introduced previously as an extension of the popular ensemble Kalman filter [13,16,17] to continuous-time data assimilation under the assumption of uncorrelated measurement and model errors. While the McKean-Vlasov formulation is essentially mathematically equivalent to the more conventional one based on the Kushner-Stratonovitch equation [7], these two approaches differ significantly in structure, suggesting different tools for their analysis as well as numerical approximations. More broadly speaking, the McKean–Vlasov approach to filtering is appealing since its Monte Carlo implementations completely avoid the need for resampling characteristic of standard SMC methods. Furthermore, a wide range of approximations are possible within the McKean–Vlasov framework with some of them, such as the ensemble Kalman–Bucy filter, applicable to high-dimensional problems. The McKean–Vlasov approach also arises naturally when analysing sequential Monte Carlo methods [18]. In Section 6, we apply the proposed algorithms to a series of state and parameter estimation problems of increasing complexity. 
First, we study the state and parameter estimation problem for an Ornstein–Uhlenbeck process [2]. Two further experiments investigate the behaviour of the filters for reduced model equations, with the data being collected from underlying multi-scale models. There we distinguish between the averaging and homogenisation scenarios [19]. Finally, we look at examples of nonparametric drift estimation [3] and parameter estimation for the stochastic heat equation [20]. We finally mention that SMC methods for correlated noise terms in discrete-time have been discussed, for example, in [21] and in the context of the ensemble Kalman filter in [22]. Similar ideas have also been pursued in a more applied context in [23].

2. Mathematical Problem Formulation

We consider the time evolution of a random state variable in -dimensional state space, , as prescribed by an SDE of the form for time , with the drift function depending on unknown parameters . Model errors are represented through standard -dimensional Brownian motion , , and a matrix . We also introduce the associated model error covariance matrix . We will generally assume that the initial condition is fixed, that is, a.s. for given . In terms of a more specific example, one can think of denoting the position of a particle at time moving in -dimensional space under the influence of a stochastic velocity field, with deterministic contributions given by f and stochastic perturbations by . In the case , the SDE (1) reduces to an ordinary differential equation with given initial condition . We assume throughout this paper that (1) possesses unique, strong solutions for all parameter values a. See, for example, [2] (Section 3.3) for sufficient conditions on the drift function f. The distribution of is denoted by , which we also abbreviate by . We use the same notation for measures and their Lebesgue densities, provided they exist. A wide class of drift functions can be written in the form (2). Data and an observation model are required in order to perform state and parameter estimation for SDEs of the form (1). In this paper, we assume that we observe partial and noisy increments of the signal , given by for t in the observation interval , , where is a given linear operator, denotes standard -dimensional Brownian motion with , and is a covariance matrix. We introduce the observation map for later use. Unless , it is clear that the model error in (1) and the total observation error in (3) are correlated. The impact of correlations between the model and measurement errors on the state estimation problem has been discussed by [6,7]. Furthermore, such correlations require adjustments to sequential estimation methods [16,17,25], which are the main focus of this paper.
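The signal and observation models (1) and (3) can be made concrete with a small simulation. The following sketch generates a scalar trajectory and its noisy increment observations with the Euler–Maruyama method; the drift f(x; a) = -a x, the noise levels, and all other values are illustrative stand-ins rather than quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative scalar instance of (1) and (3):
#   dX = f(X; a) dt + G dW,   dY = H dX + R^{1/2} dV
def f(x, a):
    return -a * x            # example drift, linear in the parameter a

a_true = 1.0
G, H, R_half = 0.5, 1.0, 0.1  # model noise, observation operator, obs noise
dt, n_steps = 0.01, 1000

x, dY = 1.0, []
for _ in range(n_steps):
    dx = f(x, a_true) * dt + G * np.sqrt(dt) * rng.normal()
    # observed increment: partial/noisy version of the state increment
    dY.append(H * dx + R_half * np.sqrt(dt) * rng.normal())
    x += dx

print(len(dY), round(float(np.std(dY)), 4))
```

Because the same Brownian increment enters both dx and dY, the model and observation errors of this data stream are correlated, which is exactly the situation analysed in Section 4.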
We assume throughout this paper that the covariance matrix of the observation error (5) is invertible. The special case and leads to a pure parameter estimation problem which has been extensively studied in the literature in the settings of maximum likelihood and Bayesian estimators [1,2]. In Section 3, we provide a reformulation of the Bayesian approach as McKean–Vlasov equations for the parameters, based on the results in [9,11]. If , then (1) and (3) lead to a combined state and parameter estimation problem with correlated noise terms. We will first discuss the impact of this correlation on the pure state estimation problem in Section 4 assuming that the parameters of the problem are known. Again, we will derive appropriate McKean–Vlasov equations in the state variables. Our key contribution is a formulation that avoids the need for log-density estimates, and can be put into an appropriately generalised ensemble Kalman–Bucy filter approximation framework [14,15]. We also formally demonstrate that the McKean–Vlasov filter equation reduces to in the limit and , a property that is less straightforward to demonstrate for filter formulations involving log-densities. These McKean–Vlasov equations are generalised to the combined state and parameter estimation problem via an augmentation of state space [13] in Section 5. Given the results from Section 4, such an extension is rather straightforward. The numerical experiments in Section 6 rely exclusively on the generalised ensemble Kalman–Bucy filter approximation to the McKean–Vlasov equations, which are easy to implement and yield robust and accurate numerical results.

3. Parameter Estimation from Noiseless Data

In this section, we treat the simpler Bayesian parameter estimation problem which arises from setting and in (3), i.e., . This leads to and, furthermore, for all , provided , which we assume throughout this paper. The requirement that be invertible implies that G has rank ; that is, in (1). The data likelihood thus follows from the observation model with additive Brownian noise in (3). Given a prior distribution for the parameters, the resulting posterior distribution at any time is given by (8) according to Bayes’ theorem [7]. Here, we have introduced the shorthand for the expectation of with respect to . It is well-known that the posterior distributions satisfy the stochastic partial differential Equation (10) with the time-dependent observation map , where is a compactly supported smooth test function, and again denoting the expectation of with respect to . See [7] for a detailed discussion. Equation (10) is a special instance of the well-known Kushner–Stratonovitch equation from time-continuous filtering [7].

3.1. Feedback Particle Filter

We now state a McKean–Vlasov reformulation of the Kushner–Stratonovitch Equation (10) as a special instance of the feedback particle filter of [11,12]. The key idea is to formulate a stochastic differential equation in the parameters in which they are treated as time-dependent random variables. We introduce the notation for these, and require that the law of coincide with (8) for , i.e., with the solution to (10). Consider the McKean–Vlasov Equations (12), where the matrix-valued Kalman gain satisfies the PDE (13), and the innovation process is given by either (14) or (15). Then, the distribution of this process coincides with the solution of (10). Throughout this paper, we write (12) in the more compact Stratonovitch form (17), where the Stratonovitch interpretation is to be applied only to in , while the explicit time-dependence of remains in its Itô interpretation. It should be noted that the matrix-valued function is not uniquely defined by the PDE (13). Indeed, provided solves (13), is also a solution whenever . As discussed in [15], the minimiser over all suitable with respect to a kinetic energy-type functional is of the form for a vector of potential functions , . Inserting (18) into (13) leads to elliptic partial differential equations (often referred to as Poisson equations), understood componentwise, where the centring condition makes the solution unique under mild assumptions on (see [26]). The numerical approximation of (19) in the context of the feedback particle filter has been discussed in [27]. Finally, (15) yields a particularly appealing formulation, since it is based on a direct comparison of with a random realisation of the right-hand side of the SDE (1), given a parameter value and a realisation of the noise term . This fact will be explored further in Section 4.

3.2. Ensemble Kalman–Bucy Filter

Let us now assume that the initial distribution is Gaussian, and that f is linear in the unknown parameters such as in (2). Then, the distributions remain Gaussian for all times with mean and covariance matrix . The elliptic PDE (13) is solved by the parameter-independent Kalman gain matrix (22), and one obtains the McKean–Vlasov formulation (23) of the Kalman–Bucy filter, with the innovation process defined by either (24) or (25). Please note that the Stratonovitch formulation (17) reduces to the standard Itô interpretation, since no longer depends explicitly on . The McKean–Vlasov Equation (23) can be extended to nonlinear, non-Gaussian parameter estimation problems by generalising the parameter-independent Kalman gain matrix (22) to (26). Clearly, the gain (26) provides only an approximation to the solution of (13). However, such approximations have become popular in nonlinear state estimation in the form of the ensemble Kalman filter [16,17], and we will test its suitability for parameter estimation in Section 6. Numerical implementations of the proposed McKean–Vlasov approaches rely on Monte Carlo approximations. More specifically, given M samples , , from the initial distribution , we introduce the interacting particle system where the innovation processes are defined by either or, alternatively, and , , denote independent -dimensional Brownian motions. For , we will use the parameter-independent empirical Kalman gain approximation in our numerical experiments, which leads to the so-called ensemble Kalman–Bucy filter [14,15]. Please note that provides an unbiased estimator of . Finally, a robust and efficient time-stepping procedure for approximating , , is provided in [28,29,30]. Denoting the approximations at time by , , we obtain with step size , empirical covariance matrices and innovation increments given by either or . Here we have used the abbreviations , , and .
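To make the interacting particle system above concrete, the following sketch runs the constant-gain (ensemble Kalman–Bucy) approximation for pure parameter estimation on a scalar Ornstein–Uhlenbeck model with exactly observed increments, i.e., the setting of this section with f(x; a) = -a x. The empirical gain is the parameter–drift covariance scaled by the inverse model error variance, and each particle receives its own stochastic innovation; every numerical value is an illustrative choice, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure parameter estimation (R = 0): increments dY = -a* x dt + sigma dW
# are observed exactly; an ensemble of parameter values is updated with
# stochastic innovations and an empirical (constant-gain) Kalman gain.
a_true, sigma = 1.0, 0.5
dt, n_steps, M = 0.01, 20000, 100

x = 1.0                                   # state is known exactly here
a_ens = rng.normal(2.0, 1.0, size=M)      # prior ensemble for the parameter

for _ in range(n_steps):
    dW_true = np.sqrt(dt) * rng.normal()
    dY = -a_true * x * dt + sigma * dW_true       # observed increment
    f_ens = -a_ens * x                            # drift per ensemble member
    # empirical gain: Cov(a, f) scaled by the inverse model error variance
    K = np.cov(a_ens, f_ens)[0, 1] / sigma**2
    dW_ens = np.sqrt(dt) * rng.normal(size=M)     # independent noise per member
    a_ens = a_ens + K * (dY - f_ens * dt - sigma * dW_ens)
    x += dY                                       # increments observed exactly

print(round(float(a_ens.mean()), 2))
```

The ensemble mean contracts towards the true parameter while the ensemble spread shrinks, mirroring the behaviour reported for the experiments in Section 6.1.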
While the feedback particle formulation (17) and its ensemble Kalman–Bucy filter approximation (31) are special cases of already available formulations, they provide the starting point for our novel McKean–Vlasov equations and their numerical approximation of the combined state and parameter estimation problem with correlated measurement and model errors, which we develop in the following two sections.

4. State Estimation for Noisy Data

We return to the observation Model (3) with and general H. The pure state estimation problem is considered first; that is, in (1). Using , given by (5), and defined by with the total measurement error covariance matrix C given by (6), we find that and the covariations [2] satisfy . These errors naturally suggest linear combinations of and in (1) and (3) that shift the correlation between measurement and model errors to the signal dynamics, yielding where and denote mutually independent standard Brownian motions of dimension and , respectively. These equations correspond exactly to the correlated noise example from [7] (Section 3.8). Furthermore, and lead to , , and, hence, . A straightforward application of the results from [7] (Section 3.8) yields the following statement: the conditional expectations satisfy the Kushner–Stratonovitch Equation (39), where denotes the generator (40) of the signal process. For the convenience of the reader, we present an independent derivation in Appendix A. We note that (39) also arises as the Kushner–Stratonovitch equations for an SDE Model (1) with observations satisfying the observation Model (41), where denotes -dimensional Brownian motion independent of the Brownian motion in (1). Here we have used that . This reinterpretation of our state estimation problem in terms of uncorrelated model and observation errors and modified observation map allows one to apply available MCMC and SMC methods for continuous-time filtering and smoothing problems. See, for example, [16]. However, there are two major limitations of such an approach. First, it requires approximating the gradient of the log-density. Second, the modified observation Model (41) is not well-defined in the limit and , since the density collapses to a Dirac delta function under the given initial condition a.s. In order to circumvent these complications, we develop an alternative approach based on an appropriately modified feedback particle filter formulation in the following subsection.

4.1. Generalised Feedback Particle Filter Formulation

While it is clearly possible to apply the standard feedback particle filter formulations using (41), the following alternative formulation avoids the need for approximating the gradient of the log-density. Consider the McKean–Vlasov Equation (43), where the gain is defined by (44) with observation map (4), the innovation process is given by (46), and the correction term by (45). It should be stressed that in (43) and (46) denote the same Brownian motion, resulting in correlations between the innovation process and model noise. In the following proof, the Einstein summation convention over repeated indices is employed, noting that (44) takes the form . We begin by writing (43) in its Itô form (48), where . Here, we have used that the covariation between and satisfies . Furthermore, and . For a smooth compactly supported test function , Itô’s formula implies (51), where the covariation process is given by (52). Our aim is to show that coincides with as defined by the Kushner–Stratonovitch Equation (39). To this end, we insert (48) and (52) into (51) and take the conditional expectation, arriving at (53), recalling that the generator has been defined in (40). Under the assumption that satisfies (44), the two Equations (39) and (53) coincide. Indeed, implies and the -contributions agree. To verify the same for the -contributions, we use (44) to obtain (56). Finally, collecting terms in (53) and (56), and applying (55) to the remaining -contribution, i.e., , leads to the desired result. □ We note that the correlation between the innovation process and the model error leads to a correction term in (43) which cannot be subsumed into a Stratonovitch correction, in contrast to the standard feedback particle filter formulation (17). Assuming that there exist potential functions , the gain takes a form generalising (18). If we set , since . We develop a simplified version of the feedback particle filter formulation (43) for linear SDEs and Gaussian distributions in the following subsection, which will form the basis of the generalised ensemble Kalman–Bucy filter put forward in the follow-up Section 4.3.

4.2. Generalised Kalman–Bucy Filter

Let us assume that with , i.e., Equations (1) and (3) take the form with initial conditions drawn from a Gaussian distribution. In this case stays Gaussian for all , i.e., with , . Equation (19) can be solved uniquely by , and thus the McKean–Vlasov equations for the feedback particle filter (43) reduce to (60), with the innovation process (46) leading to (61). We take the expectation in (60) and (61) and end up with . Defining , we see that . Next we use and to obtain, after some calculations, . Hence we have shown that our McKean–Vlasov formulation (60) agrees with the standard Kalman–Bucy filter equations for the mean and the covariance matrix in the correlated noise case [6].

4.3. Ensemble Kalman–Bucy Filter

The McKean–Vlasov Equation (60) for linear systems, along with Gaussian prior and posterior distributions, suggests approximating the feedback particle filter formulation (43) for nonlinear systems by , where the innovation process is given by (46) as before. In other words, we approximate the gain matrix in (43) by the state-independent term with the covariance matrix defined by , where denotes the law of . We can now generalise the ensemble Kalman–Bucy filter formulation (31) for the pure parameter estimation problem to the state estimation problem with correlated noise. We assume that M initial state values have been sampled from an initial distribution or, alternatively, for all in case the initial condition is known exactly. These state values are then propagated under the time-stepping procedure with , step size , empirical covariance matrices and innovation increments given by . The McKean–Vlasov equations of this section form the basis for the methods proposed for the combined state and parameter estimation problem to be considered next.
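A minimal sketch of the time-stepping procedure above for a scalar model: dX = -a X dt + g dW observed through dY = dX + r dV, so that model and measurement errors are correlated and the empirical gain picks up the additional term g^2 (the scalar analogue of the Q H^T correction). All concrete values are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# State estimation from noisy increments with correlated noise:
#   dX = -a X dt + g dW,   dY = dX + r dV
a, g, r = 1.0, 0.5, 0.5
dt, n_steps, M = 0.01, 5000, 100
C = g**2 + r**2                 # total observation-error covariance

x_true = 1.0
ens = rng.normal(1.0, 1.0, size=M)
err2 = []

for k in range(n_steps):
    dW = np.sqrt(dt) * rng.normal()
    dx = -a * x_true * dt + g * dW
    dY = dx + r * np.sqrt(dt) * rng.normal()      # observed noisy increment

    f_ens = -a * ens
    # constant-gain approximation: covariance term plus the correlated-noise
    # correction g^2, both scaled by the inverse total covariance C
    K = (np.cov(ens, f_ens)[0, 1] + g**2) / C
    dW_ens = np.sqrt(dt) * rng.normal(size=M)     # same noise drives the model
    dV_ens = np.sqrt(dt) * rng.normal(size=M)     # and the innovation below
    innov = dY - f_ens * dt - g * dW_ens - r * dV_ens
    ens += f_ens * dt + g * dW_ens + K * innov

    x_true += dx
    if k > n_steps // 2:
        err2.append((float(ens.mean()) - x_true) ** 2)

print(round(float(np.mean(err2)), 3))
```

The key structural point, as in (46), is that the same Brownian increment `dW_ens` appears in both the model propagation and the innovation of each member.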

5. Combined State and Parameter Estimation

We now return to the combined state and parameter estimation problem, and consider the augmented dynamics (71) with observations (3) as before. The initial conditions satisfy a.s., and . Let us introduce the extended state space variable . In terms of , the Equations (3) and (71) take the form , with . Thus we end up with an augmented state estimation problem of the general structure considered in detail in Section 4 already. Below we provide details on some of the necessary modifications.

5.1. Feedback Particle Filter Formulation

The appropriately extended feedback particle filter Equation (43) leads to where (46) takes the form with observation map (4) and correction given by (45), with Q replaced by and H by . In the Poisson equation(s) (19), is replaced by denoting the joint density of . We also stress that becomes a function of x and a, and we distinguish between gradients with respect to x and a using the notation and , respectively. Numerical implementations of the extended feedback particle filter are demanding due to the need for solving the Poisson equation(s) (19). Instead, we again rely on the ensemble Kalman–Bucy filter approximation, which we describe next.

5.2. Ensemble Kalman–Bucy Filter

We approximate the joint density of by an ensemble of particles ; that is, , where denotes the Dirac delta function centred at . The initial ensemble satisfies for all , and the initial parameter values are independent draws from the prior distribution . At the same time, we make the approximation when dealing with the Kalman gain of the feedback particle filter. Here the empirical mean has components , and the joint empirical covariance matrix is given by . As in Section 4.3, the solution to (19) can be approximated by , where . Finally, the covariance matrices and are estimated by their empirical counterparts , with defined by . Summing everything up, we obtain the generalised ensemble Kalman–Bucy filter Equations (83), where the innovations are given by (84), and and denote independent -dimensional and -dimensional Brownian motions, respectively, for . The interacting particle Equation (83) can be time-stepped along the lines discussed in Section 4.3 for the pure state estimation formulation of the ensemble Kalman–Bucy filter.
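The augmented formulation can be sketched for the scalar Ornstein–Uhlenbeck model: each particle carries a state and a parameter, both updated with the same stochastic innovation, with the correlated-noise correction entering only the state gain (the parameter carries no model noise). The drift, noise levels, prior, and ensemble size below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Combined state and parameter estimation for the augmented variable (x, a):
#   dX = -a X dt + g dW,   dY = dX + r dV
a_true, g, r = 1.0, 0.5, 0.1
dt, n_steps, M = 0.01, 10000, 100
C = g**2 + r**2

x_true = 1.0
x_ens = np.full(M, 1.0)                      # initial state known exactly
a_ens = rng.normal(1.5, 0.5, size=M)         # prior ensemble for a

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.normal()
    dx = -a_true * x_true * dt + g * dW
    dY = dx + r * np.sqrt(dt) * rng.normal()

    f_ens = -a_ens * x_ens                   # drift per member, nonlinear in (x, a)
    # empirical gains for the augmented variable; only the state block
    # carries the correlated-noise correction g^2
    K_x = (np.cov(x_ens, f_ens)[0, 1] + g**2) / C
    K_a = np.cov(a_ens, f_ens)[0, 1] / C
    dW_ens = np.sqrt(dt) * rng.normal(size=M)
    dV_ens = np.sqrt(dt) * rng.normal(size=M)
    innov = dY - f_ens * dt - g * dW_ens - r * dV_ens
    x_ens += f_ens * dt + g * dW_ens + K_x * innov
    a_ens += K_a * innov                     # parameters have no model noise

    x_true += dx

print(round(float(a_ens.mean()), 2))
```

Both state and parameter are driven by one shared innovation per member, which is what couples the two blocks of the augmented covariance.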

6. Numerical Results

We now apply the generalised ensemble Kalman–Bucy filter formulation (83) with innovation (84) to five different model scenarios.

6.1. Parameter Estimation for the Ornstein–Uhlenbeck Process

Our first example is provided by the Ornstein–Uhlenbeck process with unknown parameter , and known initial condition . We assume an observation model of the form (3) with , and a measurement error taking values , , and . The model error variance is set to either or . Except for the case , a combined state and parameter estimation problem is to be solved. We implement the ensemble Kalman–Bucy filter (Section 5.2) with innovation (84), step size , and ensemble size . The data is generated using the Euler–Maruyama method applied to (85), with and integrated over a time-interval with the same step size. The prior distribution for the parameter is Gaussian with mean and variance . The results can be found in Figure 1. We find that the ensemble Kalman–Bucy filter is able to successfully identify the unknown parameter under all tested experimental settings, except for the largest measurement error case where . There, a small systematic offset of the estimated parameter value can be observed. One can also see that the variance in the parameter estimate monotonically decreases in time in all cases, while the variance in the state estimates approximately reaches a steady state.
Figure 1

Results for the Ornstein–Uhlenbeck state and parameter estimation problem under different experimental settings: (a) , ; (b) , ; (c) , (pure parameter estimation); (d) , . The ensemble size is set to in all cases. Displayed are the ensemble mean and the ensemble variance in and . The variance of is zero when in case (b).

6.2. Averaging

Consider the equations from [19] for , and initial condition , . The reduced equations in the limit are given by (85), with parameter value and initial condition . The reduced dynamics corresponds to a (stable) Ornstein–Uhlenbeck process for . We wish to estimate the parameter a from observed increments , where the sequence of is obtained by time-stepping (86) using the Euler–Maruyama method with a step size . We set , (so that ), , and in our experiments. The measurement noise is set to or (pure parameter estimation). We implement the ensemble Kalman–Bucy filter (83) with innovation (84), step size , and ensemble size for the reduced Equation (87). The data is generated from an Euler–Maruyama discretisation of (86) with the same step size. We also investigate the effect of subsampling the observations for by solving (86) with step size and storing only every tenth solution , while the reduced equations and the ensemble Kalman–Bucy filter equations are integrated with . The results are shown in Figure 2. Figure 3 shows the results for the same experiments repeated with a smaller ensemble size of . We find that the smaller ensemble size leads to more noisy estimates for the variance in and a faster decay of the variance in , but the estimated parameter values are equally well converged. Subsampling does not lead to significant changes in the estimated parameter values. This is in contrast to the example considered next.
Figure 2

Results for the averaged Ornstein–Uhlenbeck state and parameter estimation problem under different experimental settings: (a) , , ; (b) , , (pure parameter estimation); (c) , , ; (d) , , and subsampling by a factor of ten. The ensemble size is set to in all cases. Displayed are the ensemble mean and the ensemble variance in and . The variance of is zero when in case (b).

Figure 3

Results for the averaged Ornstein–Uhlenbeck process, now with a smaller ensemble size M = 10. Otherwise, panels (a–d) correspond to the same experimental settings as in Figure 2.

We finally mention [31] for alternative approaches to sequential estimation in the context of averaging using however different assumptions on the data.

6.3. Homogenisation

In this example, the data is produced by integrating the multi-scale SDE with parameter values , , , and initial condition , . Here, denotes standard Brownian motion. The equations are discretised with step size , and the resulting increments (88) are stored over a time interval . See [32] for more details. According to homogenisation theory, the reduced model is given by (85) with , and we wish to estimate the parameter a from the data produced according to (88). It is known that a standard maximum likelihood estimator (MLE) given by leads to in the limit and the observation interval . This MLE corresponds to and in our extended state space formulation of the problem. Subsampling can be achieved by choosing an appropriate time-step in the ensemble Kalman–Bucy filter equations and a corresponding subsampling of the data points in (88). We used and , respectively. The results can be found in Figure 4. It can be seen that only the larger subsampling leads to a correct estimate of the parameter a. This is in line with known results for the maximum likelihood estimator (90). See [32] and references therein.
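As a sanity check on the estimator (90), the following sketch computes the maximum likelihood estimate a_hat = -sum(X dX) / sum(X^2 dt) from Euler–Maruyama data generated by the reduced model itself (not the multi-scale system), in which case no subsampling issue arises and the true parameter is recovered. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# MLE for the drift parameter of dX = -a X dt + sigma dW:
#   a_hat = -∫ X dX / ∫ X^2 dt
# Here the data comes from the reduced model itself, so the estimator
# is consistent without subsampling; multi-scale data behaves differently.
a_true, sigma = 1.0, np.sqrt(0.5)
dt, n_steps = 1e-3, 500_000

x = np.empty(n_steps + 1)
x[0] = 1.0
noise = sigma * np.sqrt(dt) * rng.normal(size=n_steps)
for k in range(n_steps):
    x[k + 1] = x[k] - a_true * x[k] * dt + noise[k]

dx = np.diff(x)
a_hat = -np.sum(x[:-1] * dx) / np.sum(x[:-1] ** 2 * dt)
print(round(float(a_hat), 2))
```

Applying the same formula to unsubsampled multi-scale data would reproduce the bias described above; subsampling restores consistency, which is what Figure 4 illustrates for the ensemble Kalman–Bucy filter.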
Figure 4

Results for the homogenisation Ornstein–Uhlenbeck state and parameter estimation problem under different experimental settings: (a) , , ; (b) , , (pure parameter estimation); (c) , , and subsampling by a factor of fifty; (d) , , and subsampling by a factor of five hundred. The ensemble size is set to in all cases. Displayed are the ensemble mean and the ensemble variance in and . The variance of is zero under (c).

6.4. Nonparametric Drift and State Estimation

We consider nonparametric drift estimation for one-dimensional SDEs over a periodic domain in the setting considered from a theoretical perspective in [33]. There, a zero-mean Gaussian process prior is placed on the unknown drift function, with inverse covariance operator . The integer parameter p sets the regularity of the process, whereas control its characteristic correlation length and stationary variance. Spatial discretisation of the problem is carried out by first defining a grid of evenly spaced points on the domain, at locations , . The drift function is projected onto compactly supported functions centred at these points, which are piecewise linear with , and linear interpolation is used to define a drift function for all ; that is, it is of the form (2) with . In this example, we set . Sample realisations, as well as the reference drift , can be found in Figure 5a.
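The piecewise-linear parametrisation of the drift can be sketched as follows: hat functions centred at the grid points reproduce any coefficient vector at the nodes, interpolate linearly in between, and sum to one everywhere. The domain size and grid resolution below are illustrative, not the values used in the experiments.

```python
import numpy as np

# Hat-function basis on a periodic grid: f(x) = sum_k a_k * phi_k(x),
# with phi_k equal to one at grid point x_k and zero at all other nodes.
L, n = 2 * np.pi, 8                      # periodic domain [0, L), n grid points
xg = np.arange(n) * (L / n)
h = L / n

def phi(k, x):
    """Piecewise-linear hat function centred at grid point k, periodic on [0, L)."""
    d = np.abs((x - xg[k] + L / 2) % L - L / 2)   # periodic distance to node k
    return np.maximum(0.0, 1.0 - d / h)

def drift(a_coef, x):
    """f(x) = sum_k a_k phi_k(x): linear interpolation of the nodal values a_k."""
    return sum(a_coef[k] * phi(k, x) for k in range(n))

a_coef = np.sin(xg)                      # example coefficient vector
x_test = np.array([0.0, 1.0, 3.0])
print(drift(a_coef, x_test))
```

The linear dependence on the coefficients is what puts this parametrisation into the form (2), so the ensemble Kalman–Bucy machinery applies unchanged with the coefficient vector as the unknown parameter.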
Figure 5

Results for the nonparametric drift and state estimation problem: (a) reference drift function (thick line) and ensemble of drift functions drawn from the prior distribution; (b) histogram of samples from the reference trajectory; (c) reference drift function and its estimate (top) and ensemble of drift functions (bottom) at final time; (d) ensemble of states and the true value at final time.

Data is generated by integrating the SDE (1) with drift forward in time from initial condition and with noise level , using the Euler–Maruyama discretisation with step size over one million time-steps. The spatial distribution of the solutions is plotted in Figure 5b. The data is then given by with . Data assimilation is performed using the time-discretised ensemble Kalman–Bucy filter Equation (83) with innovation (84), ensemble size , and step size . The final estimate of the drift function (ensemble mean) and the ensemble of drift functions can be found in Figure 5c. Figure 5d displays the ensemble of state estimates and the value of the reference solution at the final time. We find that the ensemble Kalman–Bucy filter is able to successfully estimate the drift function and the model states. Further experiments reveal that the drift function can only be identified for sufficiently small measurement errors.

6.5. SPDE Parameter Estimation

Consider the stochastic heat equation on the periodic domain , given in conservative form by the stochastic partial differential equation (SPDE) , where is space-time white noise. With constant , this SPDE reduces to (95). In this example, we examine the estimation of from incremental measurements of a locally averaged quantity that arises naturally in a standard finite volume discretisation of (95). To discretise the system, one first defines locally averaged quantities around grid points on a regular grid, separated by distances , as . The conservative (drift) term in (94) reduces to , where , etc. The following standard finite difference approximations yield the -dimensional SDE for constant , where are independent one-dimensional Brownian motions in time. Following recent results from [20], we consider the case of estimation of a constant value from measurements at a fixed location/index . The data trajectory is thus given by (100), where is a scalar and is a standard Brownian motion in one dimension. We perform numerical experiments in which the initial state is set to zero for all indices i and the prior on the unknown parameter is uniform over the interval . The increment data is generated by first integrating (95) forward in time from the known initial condition for all i. The equation is discretised in time using the Euler–Maruyama method. It is known that is required for stability of the Euler–Maruyama discretisation; we use the much smaller time step . The solution is sampled with this same time step, and increment measurements are approximated at time by setting the measurement noise level R to zero in (100), resulting in . Please note that the associated model error in (1) is given by , and the matrix H in (3) projects the vector of state increments onto a single component with index . Simulations are performed over the time-interval . The results can be found in Figure 6a.
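The finite-difference discretisation described above can be sketched as follows: each grid value receives an independent Brownian increment scaled by dx^{-1/2}, the usual discretisation of space-time white noise. The noise amplitude is switched off here so that the known deterministic decay of the first Fourier mode can be checked against exp(-theta t); the grid, theta, and time step are illustrative, not the experiment's values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Explicit finite-difference scheme for du = theta * u_xx dt + dW(x, t)
# on a periodic grid; dW is discretised space-time white noise.
theta = 0.25
n, L = 64, 2 * np.pi
dx = L / n
dt = 0.1 * dx**2 / theta          # well inside the explicit stability limit
noise_amp = 0.0                   # set to 1.0 for the stochastic version

u = np.sin(np.arange(n) * dx)     # initial condition: first Fourier mode
n_steps = int(round(1.0 / dt))    # integrate up to (approximately) t = 1
for _ in range(n_steps):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    dW = np.sqrt(dt / dx) * rng.normal(size=n)    # space-time white noise
    u = u + theta * lap * dt + noise_amp * dW

# the first Fourier mode of the heat equation decays like exp(-theta * t)
amp = 2 * np.abs(np.fft.rfft(u)[1]) / n
print(round(float(amp), 3))
```

With `noise_amp = 1.0` this produces trajectories of the type used to generate the increment data, from which a single component would then be extracted as the observation.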
We also compute the model evidence for a sequence of parameter values based on a standard Kalman–Bucy filter [6] for the associated linear state estimation problem. See Figure 6b. Both approaches agree with the reference value .
Figure 6

Results for SPDE parameter estimation: (a) estimate of as a function of time as obtained by the ensemble Kalman–Bucy filter; (b) evidence based on a Kalman–Bucy filter for state estimation applied to a sequence of parameter values .

6.6. Discussion

The results presented here demonstrate that the proposed methodology can be applied to a broad range of continuous-time state and parameter estimation problems with correlated measurement and model errors. Alternatively, one could have employed standard SMC or MCMC methods utilising the modified observation Model (41) as implied by the Kushner–Stratonovitch formulation (39) of the filtering problem. However, such implementations require the approximation of the additional term which is nontrivial if only samples from are available. Furthermore, the limiting behaviour of such implementations in the limit and (pure parameter estimation problem) is unclear since degenerates into a Dirac delta distribution, potentially leading to numerical difficulties in this singular regime. The proposed generalised feedback particle filter formulation avoids these issues through the use of stochastic innovations which are correlated with the model noise. In other words, the distribution does not appear explicitly in the innovation process (46), and the correlated noise terms cancel each other out as discussed in Remark 3 for and . The main computational challenge of the feedback particle filter approach is given by the need for finding the Kalman gain matrix (57). However, the constant gain ensemble Kalman–Bucy approximation is easy to implement. In fact, the only differences with the standard ensemble Kalman–Bucy filter formulation of [14] are in the additional term in the Kalman gain, and a correlation between the stochastic innovation process and the model error. While the ensemble Kalman–Bucy filter gave rather satisfactory results for the numerical experiments displayed in Section 6, strongly non-Gaussian distributions might require more accurate approximations to the Kalman gain matrix (57). In that case, one could rely on the particle-based diffusion map approximation considered in [27].

7. Conclusions

In this paper, we have derived McKean–Vlasov equations for combined state and parameter estimation from continuously observed state increments. An approximate and robust implementation of these McKean–Vlasov equations in the form of a generalised ensemble Kalman–Bucy filter has been provided and applied to a range of increasingly complex model systems. Future work will address the treatment of temporally-correlated measurement and model errors, as well as a rigorous analysis of these McKean–Vlasov equations in the contexts of multi-scale dynamics and nonparametric drift estimation.
1.  A class of Markov processes associated with nonlinear parabolic equations.

Authors:  H P McKean
Journal:  Proc Natl Acad Sci U S A       Date:  1966-12       Impact factor: 11.205
