
Optimal Shadowing Filter for a Positioning and Tracking Methodology with Limited Information.

Ayham Zaitouny 1,2, Thomas Stemler 3, Shannon Dee Algar 4

Abstract

Positioning and tracking a moving target from limited positional information is a frequently-encountered problem. For given noisy observations of the target's position, one wants to estimate the true trajectory and reconstruct the full phase space including velocity and acceleration. The shadowing filter offers a robust methodology to achieve such an estimation and reconstruction. Here, we highlight and validate important merits of this methodology for real-life applications. In particular, we explore the filter's performance when dealing with correlated or uncorrelated noise, irregular sampling in time and how it can be optimised even when the true dynamics of the system are not known.


Keywords:  correlated noise; irregular sampling; positioning; shadowing filter; singularities; tracking

Year:  2019        PMID: 30813314      PMCID: PMC6412521          DOI: 10.3390/s19040931

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


1. Introduction

A wide range of filters has been developed for tracking moving objects and approximating their trajectories. Currently, tracking is typically achieved using sequential statistical filters, such as Kalman or particle filters, which take only the current and past states into account [1,2,3,4,5,6,7]. The classic Kalman filter has been extended to handle converted measurements and nonlinear problems [8]. Reviews of the particle filter and its capability to handle general models and applications can be found in [9,10]. In [11,12,13], we developed and introduced a new tracking methodology based on shadowing filters. We proved the concept and demonstrated the method's superiority over other tracking methods, as well as its applicability to real-life problems.

In this paper, we conduct further investigations to demonstrate and verify additional advantages of the shadowing filter tracking method that are important for real applications. Namely, we introduce the two-dimensional solution when the observational error's correlation is taken into account and compare the method's performance when this correlation is considered or ignored. We additionally test the filter's performance when dealing with non-uniform sampling in time and demonstrate the filter's capability to overcome singularities.

As motivation, consider the frequently-occurring tracking problem that arises when observations are recorded in a non-Cartesian coordinate system. Assume that the bearing and range of an object are observed. These observations are independently determined from some reference point. However, when these measurements are transformed into Cartesian data, they will have correlated errors resulting from this transformation. To make matters worse, we often need to combine multiple measurements. In practice, such problems arise in: (a) active radar location; (b) satellite interferometry; (c) GPS; and (d) passive sonar location.
The challenges in these applications vary. Applications (a), (b) and (d) all use bearing and range data, but in (b), the range is measured very accurately, while the bearing has large errors. In (d), the situation is the opposite, with observations of the bearing being of high quality. Application (c), on the other hand, uses only range information, but from several different satellites. All of these cases can be dealt with by the shadowing filter by taking into account the correlation of the observational error.

Section 3 of this paper introduces the methodology for approximating the trajectory of an object tracked with noisy position data and potential correlations in the observational error. The solution in two dimensions is derived for both uncorrelated and correlated error scenarios. Section 4 uses a simple, non-trivial example system for a numerical analysis that assesses whether error correlation must be considered to ensure a good approximation of the object's path, and whether the extra computational effort required is worthwhile. Other common challenges one may face in practice are singularities and non-uniform sampling of data caused by the failure of recording devices. Section 5 demonstrates the robustness of the shadowing filter tracking methodology in avoiding the impact of singularities, as well as in dealing successfully with irregular time resolution. We also apply the tracking technique to a noisy chaotic trajectory, specifically that of the Lorenz model, and analyse the results from the perspective of noise reduction. In the final section, we draw conclusions and make some suggestions for future directions of this work.

2. Shadowing Filters

The shadowing filter method is unlike sequential filter methods in that it estimates the full trajectory based on all observations simultaneously. In this respect, our approach is somewhat similar to variational filters [14,15]; however, it does not fall into the trap of local minima [16,17]. This is due to the very simple principle on which these filters are based: if the model is good enough, state estimates must be close to the observations and consistent with the model's equations. In practice, this principle imposes a quadratic norm on the filter, which has only one minimum. Regarding the tracking approach specifically, previous investigations showed and verified several merits and advantages of the shadowing filter tracking methodology. In [12,13], we showed the capability of the shadowing filter to track single and multiple moving objects using GPS data from flying birds; in Figure 1, we show an example of how we successfully used this method to track the trajectories of eight pigeons and extracted the corresponding acceleration profiles from positional GPS data alone. To track rigid bodies, the method was extended [11] to include rotational motion and moments of inertia. Our method is able to reconstruct the full dynamical phase space from positional information [11,12,13].
Figure 1

Shadowing filter applied to GPS data from a flock of pigeons. The shadowing filter successfully tracks the eight pigeons. In this demonstration, we refer to our previous works [12,13]. In (a), we track the trajectories of the eight pigeons (the coloured solid lines) from the GPS signals (red dots). In (b), we use the tracking algorithm to estimate the corresponding acceleration profiles for each pigeon (colours refer to different pigeons).

In general, varieties of the shadowing filter have been shown to perform better than Kalman filters [18], particle filters [19], sliding-average filters [12] and variational filters [17]. They have been successfully implemented on everything from simple low-dimensional maps and flows [17] up to operational weather models [20]. In [11], we benchmarked our approach against state-of-the-art tracking methods, conducting direct, detailed comparisons with the Kalman filter [21], extended Kalman filter [22] and particle filter [23,24]. Our computational comparisons confirm the superiority of the shadowing filter tracking methodology over these sequential filter tracking methods with respect to both accuracy and complexity (computational time). In this manuscript, we will not redo these comprehensive numerical comparisons. However, as motivation, we confirm the performance relative to existing methods in Figure 2, using systems investigated later in this manuscript: correlated observational errors are taken into account in Figure 2a,b, and singularities arising from recording failure in Figure 2c,d. More detailed explanations and investigations of these scenarios and others are given in the following sections.
Figure 2

The superiority of the shadowing filter tracking methodology over a sequential filter methodology (Kalman filter). Error correlation has been taken into account in (a) and (b), while the singularity scenario is presented in (c) and (d). The shadowing filter shows obvious superiority, especially where error correlation increases the nonlinearity of the system far from the reference point at the origin in the first scenario and around the singular records in the second scenario. Note that the large error of the shadowing filter estimates at the beginning of the trajectory is expected, as explained in previous investigations [11,12].

3. Summary of the Tracking Methodology

Typically, tracking data will contain only noisy position observations, and so, we are faced with the challenge of having no explicit information about the object's acceleration or velocity. Using the acceleration as a free parameter, we are able to implement the shadowing filter to find a good approximation of the full phase space, including acceleration and velocity, based on the position observations alone. This free parameter can then be tuned in order to optimize the quality of the approximated states. We aim to track a point object that is moving in a d-dimensional Euclidean space with Cartesian coordinates, given a sequence of noisy observations. Let x_t be the true states and s_t be the noisy observations of the position at time t for t = 0, 1, …, n, where the observational errors have a covariance matrix R_t and corresponding information matrix W_t = R_t^{-1}, which is used to account for the correlations of the observational error. The object's dynamics are modelled based on its observed position z_t, velocity v_t and acceleration a_t for t = 0, …, n. Our aim is to approximate z_t close to the true state x_t. To do this, we minimize the total squared error Σ_{t=0}^{n} (z_t − s_t)^T W_t (z_t − s_t). We assume that the acceleration is constant over one time interval h and that its magnitude is bounded over the entire trajectory by the relation Σ_{t=0}^{n} a_t^T a_t ≤ A. Using Newton's laws and a Galilean transformation, we can solve this optimization problem with the Lagrange multipliers method [25]. The appropriate Lagrange function is:

L(z, v, a, λ, μ, η) = Σ_{t=0}^{n} (z_t − s_t)^T W_t (z_t − s_t)                    (1a)
                      + Σ_{t=0}^{n−1} λ_t^T (z_{t+1} − z_t − h v_t − (h²/2) a_t)    (1b)
                      + Σ_{t=0}^{n−1} μ_t^T (v_{t+1} − v_t − h a_t)                 (1c)
                      + η (Σ_{t=0}^{n} a_t^T a_t − A),                              (1d)

where the Lagrange multipliers λ_t and μ_t are d-dimensional column vectors, and η ∈ ℝ. Equation (1b) represents Newton's first law, Equation (1c) Newton's second law, while Equation (1d) bounds the magnitude of the acceleration through the smoothing parameter η. While using the Lagrange multiplier method seems only natural in the context of a Newtonian model, the optimization could also be implemented using, for example, gradient descent [16,17] or penalty methods [26]. The solution to the scalar case and to rigid-body problems can be found in [11,27].
In this paper, using similar linear algebra methods and appropriately-constructed vector and matrix equations, we extend the solution to higher dimensions, specifically two-dimensional problems, taking into account the correlation of the observational errors. Note that this solution can be generalised for the d-dimensional problem where all vectors and matrices increase in dimension by a factor of d.

3.1. Solution of the Two-Dimensional Problem

3.1.1. Correlated Observational Errors

For higher dimensions, even just two, the complexity increases, as we must account for the possibility that the observational errors are not independent, but correlated. Note that the Weighted Least Squares (WLS) method and its extension, the constrained WLS, are related to our shadowing filter [28,29,30,31]. For the tracking problem, they are able to deal with correlated/uncorrelated noise and irregular sampling in time. However, unlike the shadowing filter, they do not explicitly consider the velocity and acceleration [32,33]. Consider the situation of an object moving in the (x, y)-plane. Given a time resolution h of measurements for t = 0, …, n, the true position is (X_t, Y_t) and the noisy observed position is (P_t, Q_t). The error in the x-direction has variance σ_{x,t}², and similarly, in the y-direction, the error has variance σ_{y,t}², with covariance ρ_t between the two components. Therefore, the observational error has the following covariance matrix:

R_t = [ σ_{x,t}²   ρ_t
        ρ_t        σ_{y,t}² ],

with the corresponding information matrix:

W_t = R_t^{-1} = [ α_t   γ_t
                   γ_t   β_t ],

where:

α_t = σ_{y,t}² / (σ_{x,t}² σ_{y,t}² − ρ_t²),   β_t = σ_{x,t}² / (σ_{x,t}² σ_{y,t}² − ρ_t²),   γ_t = −ρ_t / (σ_{x,t}² σ_{y,t}² − ρ_t²).

Note that, generally, the variances and the covariance can vary with time. Defining the unknown variables, namely the position, velocity, acceleration and Lagrange-multiplier components in each of the two coordinates, the Lagrangian of Equation (1) can be stated componentwise as Equation (2). The optimal solution occurs when all the partial derivatives of L are zero. The partial derivative equations and the complete solution for this situation are detailed in Appendix A. For the purpose of approximating the trajectory, we require only the final equation of the solution, Equation (A24), which expresses the approximated states vector in terms of numerical matrices used to construct the solution, the information matrix, the observational states vector and the smoothing parameter η of the filter.

3.1.2. Uncorrelated Observational Errors

If the observational errors of the x-component are independent of those in the y-component, then they are uncorrelated, and at each time t, the covariance between the two components equals zero, i.e., ρ_t = 0. Therefore, γ_t = 0, α_t = 1/σ_{x,t}² and β_t = 1/σ_{y,t}². Consequently, Equations (A1) and (A2) decouple, giving two independent systems of equations: the x-component system, Equations (A3), (A5), (A7), (A9) and (3), and the y-component system, Equations (A4), (A6), (A8), (A10) and (4). For each component, the system is identical to the scalar case, which has been solved previously [11,27], and the optimization problem can be solved independently for x and y.

4. Numerical Investigations: Correlated vs. Uncorrelated Observational Errors

Given a dataset with correlated observational errors, we can either take the correlation into account or ignore it. There are practical reasons why one might choose to ignore the correlation. For instance, the algorithms are more complex with correlation and much more computationally intensive, as a coupled system of equations in all components must be solved (see Equations (A1)–(A10)); if we ignore the correlation, the problem reduces to d independent scalar problems. We refer herein to these two situations, of taking the Correlation into Account and Ignoring it, as CA and CI, respectively. In the following investigations, a target is tracked in the (x, y)-plane, with its true position given by the simple, non-trivial observation model of Equation (5). Note that the choice of this particular path is motivated by the range and bearing problem set up below, with the non-linearity in y simplifying the description of the dynamics in polar coordinates. We introduce correlated observational errors by transforming the position into range and bearing coordinates with the inverse of the standard transformation f:

f(r, θ) = (r cos θ, r sin θ),    (6)

where r is the range from a reference point and θ is the bearing in radians measured anti-clockwise from the x-axis. Thus, the transformed observations are (r_t, θ_t) = f^{-1}(X_t, Y_t), with the origin as the reference point. We then add white noise to each measurement:

r̃_t = r_t + ε_t,   θ̃_t = θ_t + δ_t,    (7)

where ε_t is the observational noise of the range component, with variance σ_r², and δ_t is the observational noise of the bearing component, with variance σ_θ². Finally, we obtain the noisy observations, (P_t, Q_t), in Cartesian coordinates by applying the transformation f defined in Equation (6) to (r̃_t, θ̃_t). Under the assumption that r is not close to zero and that the variances of r and θ are small, the information matrix of (P_t, Q_t) can be approximated as [27]:

W_t ≈ (1/σ_r²) [ cos²θ_t         sin θ_t cos θ_t
                 sin θ_t cos θ_t  sin²θ_t ]
    + (1/(r_t² σ_θ²)) [ sin²θ_t          −sin θ_t cos θ_t
                        −sin θ_t cos θ_t  cos²θ_t ].    (8)

Given that this information matrix is estimated in advance, we are now able to apply the shadowing technique via Equation (A24) to approximate the trajectory, which we illustrate in Figure 3.
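The error-propagation step behind this approximation can be sketched numerically. The snippet below (function and variable names are our own) implements the standard small-noise propagation W ≈ K^T diag(1/σ_r², 1/σ_θ²) K, with K the Jacobian of the Cartesian-to-polar map, rather than quoting the paper's Equation (8) verbatim:

```python
import numpy as np

def polar_info_matrix(x, y, sig_r, sig_th):
    """Approximate information matrix of a Cartesian observation derived
    from noisy range/bearing data (small-noise error propagation:
    W = K^T diag(1/sig_r^2, 1/sig_th^2) K, K = Jacobian of (x,y)->(r,theta))."""
    r = np.hypot(x, y)
    c, s = x / r, y / r                  # cos(theta), sin(theta)
    K = np.array([[c, s],
                  [-s / r, c / r]])      # Jacobian of the inverse transformation
    return K.T @ np.diag([1.0 / sig_r**2, 1.0 / sig_th**2]) @ K

# one noisy observation of a target at range 5, bearing arctan(4/3)
rng = np.random.default_rng(1)
sig_r, sig_th = 0.05, 0.2                # accurate range, noisy bearing
r_obs = 5.0 + rng.normal(0.0, sig_r)
th_obs = np.arctan2(4.0, 3.0) + rng.normal(0.0, sig_th)
x_obs, y_obs = r_obs * np.cos(th_obs), r_obs * np.sin(th_obs)
W = polar_info_matrix(x_obs, y_obs, sig_r, sig_th)
```

By construction, this W is exactly the inverse of the linearised Cartesian error covariance J diag(σ_r², σ_θ²) J^T, where J is the Jacobian of f.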
Figure 3

Taking error correlation into account when using range and bearing observations of different accuracies. The same smoothing parameter η is used in both situations.

In Figure 3a, the range measurement is four-times more accurate than that of the bearing, i.e., for this experiment, the standard deviation of the bearing error was chosen to be four-times that of the range error. Conversely, in Figure 3b, the bearing measurement is four-times more accurate than the range. Whilst the filter works well in either case, note that the bearing noise has a significant effect on the observations; this is not the case for the range noise.

4.1. Quality Measures

In order to assess the quality of the filter, we must specify the aim of the tracking. Our aim is to find an approximate trajectory, from noisy observations of a moving object's positions, that lies optimally close to the true trajectory of the object. In this context, there are two questions that one might hope to answer: (1) Where was the object over a given time interval? (2) What is its current position? The difference between these questions is subtle, but profound, with each lending itself to a different class of possible applications. For Question (1), we need to find an approximate trajectory that is close to the true trajectory over the entire interval and therefore shadows the true dynamics. Such a scenario would be required, for example, in order to identify the ship responsible for leaking oil into the ocean based on its trajectory. Question (2) does not need an approximated trajectory that is close to all states, but only close to the last state, for example when searching for a missing flight. A filter capable of solving both questions will likely need to be optimized differently for each case: either "tracking the path that the object travelled" or "knowing the recent position of the object". Consequently, we require two measures. The first measure quantifies how close the approximated states z_t are to the true states x_t along the whole trajectory, for t = 0, …, n, where n + 1 is the length of the time series. For this, we use the root mean squared error defined by:

E = sqrt( (1/(n+1)) Σ_{t=0}^{n} ||z_t − x_t||² ).    (9)

The second measure quantifies how close the approximated final state, z_n, is to the true final state, x_n, and can be quantified by the end-point error:

e = ||z_n − x_n||.    (10)

For more robust results, we use the ensemble averages of these quantities as measures of the filter's performance. We consider N time series, giving N true trajectories, N sequences of observations and therefore N sequences of approximates, and compute:

Ē = (1/N) Σ_{i=1}^{N} E_i,   ē = (1/N) Σ_{i=1}^{N} e_i.

In the experiments that follow, we assess the quality of the filter for both tracking objectives and optimize parameters such that the respective average errors are minimized.
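The two quality measures and their ensemble averages can be sketched as follows (function names are ours; the conventional RMSE normalisation is assumed):

```python
import numpy as np

def rmse(z_hat, x_true):
    """Root mean squared error over the whole trajectory, as in Equation (9);
    states are rows, coordinates are columns."""
    return np.sqrt(np.mean(np.sum((z_hat - x_true) ** 2, axis=-1)))

def endpoint_error(z_hat, x_true):
    """Distance between the approximated and true final states, Equation (10)."""
    return np.linalg.norm(z_hat[-1] - x_true[-1])

def ensemble_averages(trials):
    """Average both measures over N (z_hat, x_true) pairs."""
    E = [rmse(z, x) for z, x in trials]
    e = [endpoint_error(z, x) for z, x in trials]
    return np.mean(E), np.mean(e)
```

For a trajectory offset from the truth by a constant vector, both measures reduce to the norm of that offset, which makes the functions easy to sanity-check.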

4.2. Optimization of the Filter Performance

The quality of the solution, i.e., of the approximated states, can be optimized by tuning the free parameter of our filter. Generally, the dataset will contain only noisy position observations, and so, the acceleration can be used as the free parameter. Thus, the smoothing parameter η of the acceleration Lagrangian term in Equation (2d) approximates a suitable value of the object's inertia. The optimal value of η depends on the time resolution of the time series, the system's dynamics, the noise intensity, etc. Therefore, the following investigation should be understood as a proof of concept only, whereby an almost optimal value is determined for one specific application. Furthermore, as we only want to show that η does in fact have some optimal value, we can use a rather simple numerical scheme. The algorithm implemented starts with a broad parameter sweep, η ∈ {0.1, 1, 10, 100, 1000}, and calculates the average errors for each tracking objective (Ē and ē) from N time series. It then identifies the sub-interval in which the minimum error occurs and divides this η-interval into five new subintervals. Repeating this procedure six times, with new time series for each narrowing of the parameter range, we obtain an approximation of the optimal η. This approximation is sufficient to show that an optimal value exists. Table A1 and Table A2 show the results when optimizing η subject to minimizing Ē for the CA and CI cases, respectively. The minimum in each interval is indicated in bold. We note that the optimal η values for the two types of correlation consideration lie close together, and there does not seem to be a large difference between the resulting errors.
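This coarse-to-fine search can be sketched as follows; `refine_eta` and its bracketing scheme are an illustrative stand-in for the subdivision procedure described above, not the authors' exact code (in particular, the error function is assumed to be deterministic here, whereas the experiments draw new time series each round):

```python
import numpy as np

def refine_eta(error_for, etas, rounds=6, k=5):
    """Coarse-to-fine search for the smoothing parameter eta.

    error_for(eta) returns the average tracking error for a given eta.
    Each round brackets the current best candidate by its neighbours and
    places k new candidates inside the bracket.
    """
    etas = sorted(etas)
    for _ in range(rounds):
        errs = [error_for(e) for e in etas]
        i = int(np.argmin(errs))
        lo = etas[max(i - 1, 0)]               # left neighbour (or edge)
        hi = etas[min(i + 1, len(etas) - 1)]   # right neighbour (or edge)
        etas = list(np.linspace(lo, hi, k))
    errs = [error_for(e) for e in etas]
    return etas[int(np.argmin(errs))]

# toy error surface with a known minimum at eta = 20
best = refine_eta(lambda e: (e - 20.0) ** 2, [0.1, 1.0, 10.0, 100.0, 1000.0])
```

As in the tables below, the scheme narrows a four-orders-of-magnitude starting sweep down to a small neighbourhood of the minimum.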
Table A1

Results of the experiments to identify the almost optimal value of η giving the smallest Ē when Correlation is Accounted for (CA). The minimum in each experiment is marked in bold (**).

Experiment 1   η   0.1       1          **10**      100        1000
               Ē   0.34      0.28       **0.24**    0.26       0.46
Experiment 2   η   1         5          **10**      50         100
               Ē   0.27      0.24       **0.23**    0.24       0.25
Experiment 3   η   5         7          **10**      30         50
               Ē   0.235     0.230      **0.227**   0.228      0.237
Experiment 4   η   7         10         15          **20**     30
               Ē   0.234     0.229      0.227       **0.226**  0.228
Experiment 5   η   15        17         **20**      25         30
               Ē   0.2361    0.2358     **0.2357**  0.2364     0.2374
Experiment 6   η   17        **18**     20          22         25
               Ē   0.22406   **0.22400** 0.22402    0.22420    0.22470
Table A2

Results of the experiments to identify the almost optimal value of η giving the smallest Ē when Correlation is Ignored (CI). The minimum in each experiment is marked in bold (**).

Experiment 1   η   0.1       1          **10**      100        1000
               Ē   0.37      0.28       **0.23**    0.24       0.43
Experiment 2   η   1         5          10          **50**     100
               Ē   0.291     0.246      0.233       **0.231**  0.249
Experiment 3   η   10        **30**     50          70         100
               Ē   0.23      **0.22**   0.23        0.23       0.25
Experiment 4   η   10        20         **30**      40         50
               Ē   0.2361    0.2299     **0.2298**  0.2317     0.2345
Experiment 5   η   20        **25**     30          35         40
               Ē   0.227     **0.226**  0.227       0.228      0.229
Experiment 6   η   20        22         **25**      27         30
               Ē   0.22772   0.22737    **0.22716** 0.22718    0.22741
Table A3 and Table A4 show the optimization data when our aim is to minimize ē for the CA and CI cases, respectively. Again, the minimum values in each interval are indicated in bold. Here, we note a huge difference between the approximated optimal η values for the two types of correlation consideration. When the correlation was ignored, we were required to expand the starting interval, as the best value in Experiment 1 (the zeroth iteration) was the interval's endpoint, η = 1000. Consequently, the almost optimal value for this method was much higher than the value found when the correlation was considered. On the other hand, the average end-point error was 1.4-times larger when correlation was accounted for.
Table A3

Results of the experiments to identify the almost optimal value of η giving the smallest ē when Correlation is Accounted for (CA). The minimum in each experiment is marked in bold (**).

Experiment 1   η   0.1       1           10          **100**    1000
               ē   0.86      0.77        0.59        **0.47**   0.91
Experiment 2   η   10        **100**     200         500        1000
               ē   0.55      **0.47**    0.51        0.70       0.95
Experiment 3   η   10        **75**      100         150        200
               ē   0.55      **0.49**    0.50        0.52       0.54
Experiment 4   η   10        50          75          **90**     100
               ē   0.5777    0.4976      0.4863      **0.4861** 0.4851
Experiment 5   η   75        80          **90**      **95**     100
               ē   0.4690    0.4678      **0.4664**  **0.4664** 0.4669
Experiment 6   η   90        **91**      92          93         95
               ē   0.47355   **0.47354** 0.47356     0.47358    0.47368
Table A4

Results of the experiments to identify the almost optimal value of η giving the smallest ē when Correlation is Ignored (CI). The minimum in each experiment is marked in bold (**).

Experiment 1   η   1         10          100         **1000**   10,000
               ē   0.71      0.59        0.45        **0.36**   0.47
Experiment 2   η   100       500         **1000**    5000       10,000
               ē   0.40      0.36        **0.35**    0.40       0.50
Experiment 3   η   500       750         1000        **2500**   5000
               ē   0.349     0.336       0.329       **0.327**  0.365
Experiment 4   η   1000      **2000**    2500        3500       5000
               ē   0.364     **0.353**   0.356       0.367      0.392
Experiment 5   η   1000      1500        **2000**    2250       2500
               ē   0.354     0.344       **0.340**   0.341      0.341
Experiment 6   η   1500      **1750**    2000        2100       2250
               ē   0.36438   **0.36315** 0.36319     0.36348    0.36416
Experiment 7   η   1500      **1600**    1750        1850       2000
               ē   0.3394    **0.3391**  0.3392      0.3395     0.3402
The results of this investigation are summarized in Figure 4. Blue lines show the results when correlation is accounted for, and red lines show the results when it is ignored. The general shape of the graphs, with large errors for both small and large values of η, supports our initial assumption that an optimal value of η exists. Since our primitive iterative algorithm can only find an approximation of the optimal η, we cannot say whether the best value found is a local or a global minimum. However, as pointed out at the beginning, the purpose of this investigation was only to show that the value of η can be optimized for a specific application.
Figure 4

Optimisation of the smoothing parameter η: average root mean squared error Ē and average end-point error ē as functions of η. We show both scenarios, where the error's correlation is taken into account (blue) or completely ignored (red). The minima illustrate the existence of an optimal value of η that minimises the error and show that incorporating the error's correlation does not significantly improve the filter's performance.

4.3. Comparison of the CA and CI Cases

Recall that CA refers to the situation where the correlation is taken into account, and CI to where it is ignored. Surprisingly, the CI algorithm seems to lead to similar or even better estimates. To get a clearer picture, we also need to examine the error distributions. Figure 5 and Figure 6 show histograms of the two error measures, E and e, resulting from each algorithm. We used data from 100 time series and chose the approximately optimal η values found previously.
Figure 5

Histograms of the root mean squared error E from 100 observation sequences at the optimal value of η in both scenarios, i.e., when the error's correlation is taken into account or ignored.

Figure 6

Histograms of the end-point error e from 100 observation sequences at the optimal value of η in both scenarios, i.e., when the error's correlation is taken into account or ignored.

We are not surprised that there was little difference between the CA and CI algorithms when minimizing E, as seen in Figure 5. Both average errors were about 0.21, and both distributions peaked around this value. If there was any difference, it was that the percentage of errors contained in the interval around this peak was about 10% higher for the CA algorithm. It is not immediately clear whether this difference is statistically significant, but given that both methods showed similar results, we do not think that investigating its statistical significance is important. Recall that for the case of minimizing e, unlike for E, our optimization gave quite different optimal η values depending on which algorithm was used. This situation is depicted in Figure 6. Surprisingly, the resulting distributions have much in common. For the CA algorithm, the e-distribution decayed monotonically, with all errors contained in a bounded interval and about 60% of the errors in its lower part. For the CI algorithm, the errors were contained within a smaller interval, and the distribution was less broad, with about 70% of the errors in the corresponding lower subinterval. In addition, there was a peak around the average value in both cases. We conclude that the shadowing filter's tracking methodology is robust enough to either consider or ignore the correlations of the observational errors. That is, there is no marked improvement in the shadowing filter's performance if we take the correlations into account. This is further illustrated in Figure 7, which shows estimates for both tasks ((a) minimizing the average error and (b) minimizing the end-point error).
Figure 7

The filter's performance in estimating (a) the path and (b) the current position, when error correlation is taken into account. Note that the true dynamics are given by Equation (5).

5. Applications

In this section, we demonstrate some applications featuring common challenges one frequently faces in practice. Specifically, we emphasise four challenges relevant to tracking: irregular/non-uniform time resolution, records with large singularities, tracking chaotic trajectories and a real-world multiple-object application.

5.1. Non-Uniform Sampling

Here, we highlight the fact that our shadowing filter is flexible enough to be used even for non-uniformly-sampled data. Often, data points are not recorded at a constant sampling rate, and so, a fixed time step h does not hold across the dataset. Such a situation is quite common, for example, in GPS tracking, as the GPS device often fails to log one or more data points [34,35]. Our filter can be used with arbitrary time gaps h_t between any two data points; however, we focus here on the sub-problem in which some data points of a uniformly-sampled sequence are not recorded, so that each gap h_t is a multiple of the basic sampling time h. It has been shown above that our tracking methodology is sufficiently robust to ignore the error correlation without affecting performance. Consequently, any d-dimensional problem can be treated as d independent scalar problems. Hence, for our numerical investigation, we use data generated from the following scalar model, which gives true states Y and noisy observations P:

Y_t = Y_{t−1} + ω_t,   P_t = Y_t + ξ_t,    (11)

where ξ_t is the observational white noise with variance σ_ξ² and ω_t is the increment of an independent cumulative white noise process with variance σ_ω². From the generated dataset, we randomly deleted data points using a uniform distribution (cf. Figure 8a). Again, we used our usual measures, Ē and ē, estimated from ensembles of datasets for each tracking objective. To incorporate the non-constant sampling time, we included the individual sampling times h_t in the matrix of Equation (A12). Our numerical investigation shows that, for increasing levels of deletion, the errors increase relative to the uniformly-sampled case. Overall, our filter was still able to find a good approximation, as can be seen in Figure 8b.
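Generating such an irregularly-sampled dataset is straightforward. The sketch below (all parameter values are illustrative, not those used in the experiment) builds a random-walk truth with additive observational noise and then deletes records uniformly at random, so that every remaining time gap is a multiple of the basic sampling time h:

```python
import numpy as np

rng = np.random.default_rng(7)
n, h = 500, 1.0

# truth: cumulative (integrated) white noise; observations add independent noise
Y = np.cumsum(rng.normal(0.0, 0.5, n))       # sigma_omega = 0.5 (illustrative)
P = Y + rng.normal(0.0, 1.0, n)              # sigma_xi = 1.0 (illustrative)

# randomly delete ~30% of the records to mimic logging failures
keep = np.sort(rng.choice(n, size=int(0.7 * n), replace=False))
t_kept = keep * h                            # irregular observation times
P_kept = P[keep]
gaps = np.diff(t_kept)                       # each gap h_t is a multiple of h
```

The arrays `t_kept` and `P_kept` are exactly the kind of input the filter receives when the per-interval sampling times h_t are carried into the matrix of Equation (A12).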
Figure 8

Performance of the filter when using non-uniformly-sampled data. (a) shows the histogram of sampling times in the sequence, and (b) directly compares the observed and estimated trajectories, as well as the observed and non-uniformly down-sampled time series.

5.2. Multiple Bearing Observations: Singularities

In this section, we study another situation in which non-Cartesian observations are used. Consider the case where the target is tracked in the (x, y)-plane, by its position (X_t, Y_t), using bearing observations θ_t¹ and θ_t² from two distinct reference points, (a_1, b_1) and (a_2, b_2). The transformation f between the different coordinates is defined implicitly by:

tan θ_t^i = (Y_t − b_i) / (X_t − a_i),   i = 1, 2,

where, under most circumstances, a unique position (X_t, Y_t) exists. Hence, if we have bearing observations, then we need to find the raw position estimates, and that requires solving for X_t and Y_t. These can be estimated by solving the linear equations:

Y_t − b_i = tan θ_t^i (X_t − a_i),   i = 1, 2,

which can be expressed in matrix form as follows:

[ tan θ_t¹  −1 ] [ X_t ]   [ a_1 tan θ_t¹ − b_1 ]
[ tan θ_t²  −1 ] [ Y_t ] = [ a_2 tan θ_t² − b_2 ].    (14)

Solving these linear equations enables us to find the raw position estimates from the bearing observations via the transformation f [27]. In this situation, it is difficult to compute the Jacobian matrix directly from the transformation f. However, the inverse of the transformation f is given by:

θ_t¹ = arctan( (Y_t − b_1) / (X_t − a_1) ),    (15)
θ_t² = arctan( (Y_t − b_2) / (X_t − a_2) ).    (16)

Consequently, the matrix K, the Jacobian matrix of f^{-1}, is easily computed as follows:

K = [ −(Y_t − b_1)/r_1²   (X_t − a_1)/r_1²
      −(Y_t − b_2)/r_2²   (X_t − a_2)/r_2² ],

where r_i² = (X_t − a_i)² + (Y_t − b_i)² for i = 1, 2, which requires only simple computations once an estimate of (X_t, Y_t) is obtained. Therefore, under the assumption that both r_1 and r_2 are not close to zero, the information matrix can be approximated using the matrix K as follows:

W_t ≈ K^T [ 1/σ_1²   0
            0        1/σ_2² ] K,

where σ_1² is the variance of the observational error in the first bearing component and σ_2² is the variance in the second bearing component. Given this information matrix, we can apply the shadowing filter to find an approximated trajectory. A problem arises in situations when the sensors and the target are collinear, i.e., when the two bearing lines coincide. In such situations, the linear Equations (14) are singular or badly conditioned. This can lead to raw position estimates far from the true positions. To solve this problem, we have three options: we can drop the bad observation, replace it with a forecasted position or take into account that the information in this observation is less reliable than that in other observations.
Since the shadowing filter estimates a trajectory from a sequence of observations, generally, there is no harm in any of these solutions, although dropping bad observations leads to non-uniform time gaps between observations, for which the shadowing filter still works successfully, as verified in the above investigations. We have chosen here to give our observations different weights, according to their reliability. To do this, we scaled the information matrix using the rcond command in MATLAB. This command calculates the one-norm estimate of the reciprocal condition number, as returned by LAPACK, and was applied to the coefficient matrix of Equation (14), which carries the information about the angles. The output of rcond varies between zero and one. Angles that lead to an ill-conditioned Equation (14) receive small weights, while good observations receive the maximum weighting. In the example shown in Figure 9 and in [27], we tracked an object using bearing observations provided by two moving sensors. The target moved on a circular path, in a clockwise direction, starting from the 12 o'clock position. The two sensors moved along straight paths, both with constant speed. To generate the bearing observations, we transformed the true states of the object to bearing coordinates using Equations (15) and (16) and added white noise with equal variances to both components to obtain the desired noisy multiple-bearing observations. After that, we transformed these noisy observations to Cartesian coordinates using the transformation f, after computing the raw position estimates from Equation (14). We applied the shadowing filter to these observations using the scaled information matrices to deal with the bad observations. In Figure 9, we can also see the effect of the different weighting. Note that when the target was between the four and five o'clock positions, it was directly between the sensors. This led to a very bad observation, as seen by the large error.
Similarly, at the end of the trajectory, the object was almost directly behind both sensors, which also led to poorly-conditioned observations. However, using the one-norm estimates as described above enabled us to obtain good approximations.
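The weighting scheme can be mimicked outside MATLAB. Below is a minimal sketch, assuming the weight assigned to an observation is the reciprocal 1-norm condition number of the 2×2 triangulation matrix from Equation (14) — the quantity MATLAB's rcond estimates:

```python
import numpy as np

def bearing_weight(theta1, theta2):
    """Reliability weight for a pair of bearing observations: the
    reciprocal 1-norm condition number of [[m1, -1], [m2, -1]].
    Near-collinear geometries (theta1 ~ theta2 mod pi) give weights
    near zero; well-posed geometries give weights close to one."""
    m1, m2 = np.tan(theta1), np.tan(theta2)
    A = np.array([[m1, -1.0],
                  [m2, -1.0]])
    if abs(np.linalg.det(A)) < 1e-12:   # exactly singular geometry
        return 0.0
    return 1.0 / np.linalg.cond(A, 1)
```

The information matrix of each observation would then be scaled by this weight, so that ill-conditioned observations contribute little to the shadowing filter's cost function.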
Figure 9

Performance of the shadowing filter in tracking an object using multiple bearing observations. Note how the filter is able to overcome the singularities in the observations and estimate the true dynamics quite well.

5.3. Chaotic Trajectory: Lorenz Model

In the previous sections and examples, we investigated the performance of the tracking filter for deterministic trajectories in one- and two-dimensional spaces. The natural step from here is to consider the realistic case of tracking chaotic motion, such as that seen in nature in insects, fish or birds. We now apply the tracking technique to the Lorenz model [36]:

dx/dt = σ (y − x),
dy/dt = x (ρ − z) − y,    (17)
dz/dt = x y − β z.

Using the standard Lorenz model and typical parameters σ = 10, ρ = 28 and β = 8/3, we initialized the system and generated a trajectory containing 2501 data points for t ∈ [0, 250]. The data evolved as per Equation (17) and were measured with some uncertainty following the incorporation of noise in each component as per Equation (11). The dynamics were tracked using the shadowing filter with the smoothing parameter set to the optimal value for the chosen noise variance, as found in Section 4, and a sampling rate of 0.1. We considered uncorrelated errors and treated the system as per the scalar case outlined in Section 3. Figure 10a–c shows the successful tracking of the target in each of the variables, while Figure 11 shows these details in a projection of the three-dimensional state space. We note that, as the shadowing filter did not contain any information about the vector field, the noisy observations were projected down onto the attracting manifold of the Lorenz model. Therefore, our filter acted similarly to other shadowing filters, which have information regarding the vector field encoded in them [17]. Although there was some estimation error, visible at the outer loops in Figure 11, our filter estimated the true time series well most of the time.
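Generating such test data is straightforward. The sketch below integrates the Lorenz equations with a classical fourth-order Runge-Kutta scheme and adds Gaussian observational noise per component; the initial condition, noise scale and sub-stepping are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

def lorenz_trajectory(u0, n_samples=2501, sample_dt=0.1, inner_steps=10,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with RK4, taking inner_steps
    sub-steps per sample for accuracy, and return n_samples states
    spaced sample_dt time units apart."""
    def field(u):
        x, y, z = u
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt = sample_dt / inner_steps
    traj = np.empty((n_samples, 3))
    u = np.asarray(u0, dtype=float)
    for i in range(n_samples):
        traj[i] = u
        for _ in range(inner_steps):
            k1 = field(u)
            k2 = field(u + 0.5 * dt * k1)
            k3 = field(u + 0.5 * dt * k2)
            k4 = field(u + dt * k3)
            u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

# Noisy observations as in Equation (11): clean states plus Gaussian noise.
rng = np.random.default_rng(0)
clean = lorenz_trajectory([1.0, 1.0, 1.0])
noisy = clean + rng.normal(scale=1.0, size=clean.shape)
```

Any tracking filter can then be benchmarked by comparing its output against the withheld clean trajectory.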
Figure 10

Projections onto each direction for the chaotic Lorenz model.

Figure 11

Chaotic Lorenz attractor (blue line), noisy observations (red line) and the tracking technique results (green triangles).

Up until this point, we have been concerned with forecasting. However, this is closely related to noise reduction, and the results of the previous section can be viewed in such a context [37,38]. We assumed that the clean time series of each variable is observed in the presence of measurement noise and aimed to separate the two from the observed noisy data in order to restore the structure of the system and improve the quality of predictions [38]; i.e., the observations are the clean states plus a noise term, as defined in Equation (11). For the measurement noise applied to Equation (17), the approximated trajectory (green triangles) of Figure 10 and Figure 11 is the noise-reduced trajectory. Figure 11 illustrates how the noise (red line) has smeared the chaotic attractor and obscured its structure, and how the chaotic dynamics of the clean (blue line) trajectory were successfully captured.
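When the clean trajectory is known, as in a simulation study, the noise-reduction view can be quantified by the ratio of RMS errors before and after filtering. The sketch below uses a simple moving average as a stand-in for the shadowing filter (which is not reproduced here), purely to illustrate the measure:

```python
import numpy as np

def noise_reduction_factor(clean, noisy, filtered):
    """Ratio of RMS error before and after filtering; values greater
    than one mean the filter moved the observations closer to the
    clean trajectory."""
    e0 = np.sqrt(np.mean((noisy - clean) ** 2))
    e1 = np.sqrt(np.mean((filtered - clean) ** 2))
    return e0 / e1

# Illustration on a synthetic signal, with a moving average standing
# in for the actual filter:
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1000)
clean = np.sin(t)
noisy = clean + rng.normal(scale=0.3, size=t.size)
kernel = np.ones(15) / 15.0
filtered = np.convolve(noisy, kernel, mode="same")
factor = noise_reduction_factor(clean, noisy, filtered)
```

A factor well above one indicates genuine noise reduction rather than mere smoothing of the observations.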

5.4. Real-World Multiple Object Application

As a final application, we present how the shadowing filter can be used to track multiple objects. We used two-dimensional data produced from images of a real flock of ducks moving on a water surface. The dataset was provided by the authors of [39]. We focussed on one of the data files in which a turning event occurred, containing 25 individuals observed at 30 time points. We tracked the trajectories using the positional observations and extracted the corresponding acceleration for each bird. The difference between this application and the tracking of pigeons illustrated in Figure 1 is the size of the flock: here, we have a larger number of individuals, but a relatively short trajectory. The results are shown in Figure 12, where the circles are the observations and the estimated paths are given as solid lines. The filter accurately tracked the position of each individual. The acceleration of each bird is shown in the lower panels of the figure. Note that this additional information about each individual's state can be used to forecast its future dynamics and to infer the forces that dominate such collective behaviour [13].
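Extracting acceleration from position-only data is the key step here. The shadowing filter reconstructs the full state space internally; the snippet below shows the idea with central second differences on a position track as a minimal stand-in (an assumption for illustration, not the paper's exact scheme):

```python
import numpy as np

def acceleration_estimate(positions, dt):
    """Central second-difference acceleration estimate from a (T, 2)
    array of planar positions sampled every dt seconds.
    Returns a (T - 2, 2) array of accelerations at the interior times."""
    p = np.asarray(positions, dtype=float)
    return (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dt ** 2
```

For a uniformly accelerating track the estimate is exact; on noisy observations one would first smooth the positions (as the shadowing filter does), since second differencing amplifies measurement noise by a factor of order 1/dt^2.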
Figure 12

Shadowing filter applied to positional data from a group of ducks moving on the water surface. The shadowing filter successfully tracked the 25 individuals. In (a), we tracked the trajectories of the 25 ducks (the coloured solid lines) from the positional observations (red circles). In (b), we used the tracking algorithm to estimate the corresponding acceleration profiles for each duck (colours refer to different individuals).

6. Conclusions

In this paper and our previous ones [11,12,13], a new class of shadowing filters, applicable to observational data of objects moving in more than one dimension, was introduced and investigated. Our investigations demonstrate that this approach is highly versatile. First of all, it may be easily adapted to different settings. For instance, it can be implemented to track a single moving particle, a system of multiple objects [13] or be extended to consider rigid bodies instead [11]. Secondly, it is computationally inexpensive in comparison to alternative methods [11]. Finally, it outperformed the established and widely-used Kalman and particle filters in situations where nonlinearities were present [11]. Our tracking technique is applicable to real data and enables us to minimise measurement errors and overcome device failures, and it allows for a reconstruction of the full dynamical state space from limited position-only data [12,13]. In this paper, specifically, optimization of the filter quality was achieved for two tracking objectives, namely “tracking the path that the object travelled” and “knowing the recent position of the object”, by tuning the smoothing parameter such that the (task-specific) error was minimized. Numerical investigations indicated that our tracking method was sufficiently robust that the benefit of taking error correlation into account was negligible, allowing many systems to be treated as per the scalar case. Furthermore, we demonstrated that this tracking method can be applied to irregularly-sampled real-world data without additional effort, a situation often encountered when measurement devices fail to log some measurement points. We also highlighted the ability of the shadowing filter tracking method to overcome singularities in the data, another challenge encountered frequently in practice.
The shadowing filter was successfully applied to noisy data from the Lorenz model, and we briefly commented on the relation of this approach to noise reduction algorithms. Finally, a real-world application of multiple object tracking was provided, as we tracked a flock of 25 ducks moving on the water surface. Lightweight and inexpensive tracking devices are now available and have opened up a new area of tracking possibilities [34,35,40,41,42,43,44], resulting in new observational data that enable the study of movement (of animals, for example) with previously unattainable precision. Given these advantages and the resulting big data, it is clear that the new generation of filters needs to be fast and efficient. The outlined benefits of our filter, together with its clear optimization possibility, make it, in our opinion, a very strong contender for such applications, and we look forward to seeing the progress in this area based on our filter.