
Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight.

Kyuman Lee, Eric N. Johnson.

Abstract

In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for flight vehicles, while camera vision extracts information about the surrounding environment and determines features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states in VIO is the extended Kalman filter (EKF). The design of the standard EKF does not inherently allow for time offsets between the timestamps of the IMU and vision data. In fact, sensor-related delays that arise in various realistic conditions are at least partially unknown parameters. A lack of compensation for these unknown parameters often seriously degrades the accuracy of VIO systems and systems like them. To compensate for the uncertainties of the unknown time delays, this study incorporates parameter estimation into feature initialization and state estimation. Moreover, computing the cross-covariance and estimating the delays through online temporal calibration correct the residual, the Jacobian, and the covariance. Results from flight dataset testing validate the improved accuracy of VIO employing the latency compensated filtering framework. The insights and methods proposed here are ultimately useful in any estimation problem (e.g., multi-sensor fusion scenarios) where compensation for partially unknown time delays can enhance performance.


Keywords:  EKF; IMU; UAV; VIO; camera vision; latency compensation; navigation; online temporal calibration; sensor fusion; time delay

Year:  2020        PMID: 32295132      PMCID: PMC7218848          DOI: 10.3390/s20082209

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


1. Introduction

The most widely used algorithms for estimating the states of a dynamic system are the Kalman filter [1,2] and its nonlinear versions (e.g., the extended Kalman filter (EKF) [3,4] and the unscented Kalman filter (UKF) [5]). The design of the standard Kalman filter does not inherently allow for significant sensor-related delays in computation. Figure 2 shows that the delay is the time difference between the instant when a measurement is taken by a sensor and the instant when the measurement is available in the filter. As an example of key delay sources, some complex sensors, such as vision processors for navigation, often require extensive computation to obtain higher-level information from raw sensor data. Furthermore, a closed-loop system including control logic may place an overall computational burden on a single processor. Delays resulting from heavy computation may distort the quality of state estimation, since a current measurement is compared to past states of a system model. In other words, unless delays are compensated in Kalman filtering, large estimation errors may accumulate over time or even cause the filter to diverge. The delay value is typically at least partially unknown and at least partially variable in many real applications. As an example of delay uncertainty contributors, even though a local clock is initially forced to synchronize with a centralized clock, deviations between clocks occur because of clock drift, skew, or bias. In sensor fusion systems, where the timestamps of each sensor are typically recorded by triggered signals, non-deterministic or non-quantized transmission delays lead to unknown time offsets in the sensor streams. Moreover, if low-cost sensors, such as rolling-shutter cameras or software-triggered devices, are mounted on a vehicle, the variance of the timestamp uncertainty might be larger.
In particular, in visual-inertial odometry (VIO), we do not know the exact time instant when a camera opens and captures images for any particular pixel location. Often, exposure time depends on the surrounding illumination conditions. The timestamp of the latest image from some cameras corresponds to some event, such as when the shutter was triggered to start or when the entire image was available in memory. In practice, these uncertainties may be small compared to those of traditional sensors used for feedback in aerospace applications, but they can be a major contributor to errors in emerging estimation problems such as VIO. Indeed, when estimating faster motion, such as that of a highly agile unmanned aerial vehicle (UAV), or when using progressive-scan cameras, the unknown time delays may be a major driver of navigation quality and achievable controller bandwidth. We have experimentally observed that time delay compensation must be more accurate than typically demanded, specifically for a UAV flying closed-loop on a vision-based navigation solution. With poor time delay compensation, we have observed oscillations and even divergence of the estimates and of the closed-loop tracking error, as expected; but even when we use fixed/known time delay compensation, we find that time delay is still a limiting factor in accuracy and achievable control bandwidth. To illustrate, consider a UAV with a body-fixed camera that maneuvers with a more rapid rotation than the reference design assumes. Any time error produces a larger potential position estimation error under this faster rotation. Thus, even after the best practical effort has eliminated as much deterministic time delay error as possible, we find that adapting to the non-deterministic error can enhance performance. In particular, we find it beneficial to deal with unknown time delays in VIO systems used in closed-loop flight control and in other systems like these.

1.1. Related Work

1.1.1. Visual-Inertial Odometry

In recent years, an increasing demand for research on UAVs has prompted substantial interest in VIO systems [6,7,8,9]. Delmerico and Scaramuzza [10] provide a benchmark comparison of monocular VIO algorithms for flying robots. Similar to their comparison, Table 1 illustrates state-of-the-art VIO techniques, including stereo VIO. Let us explain some relevant terms for clarity. Tightly-coupled VIO jointly optimizes over all sensor measurements (i.e., the visual and inertial cost terms in VIO) within a single process, which results in higher accuracy; the opposite is referred to as loosely-coupled. Indeed, loosely-coupled VIO does not handle the correlation of the visual and inertial motion constraints, resulting in a loss of information. Moreover, at the back-end of VIO, optimization-based VIO solves a nonlinear least-squares problem (e.g., pose-graph optimization or bundle adjustment [11]) to update a window of states, which allows for reducing errors by re-linearization [12], but at a high computational cost and with the possibility of getting stuck in local minima. In contrast, filtering-based VIO updates only the most recent state within the Kalman filter or EKF framework, which is computationally faster and more efficient, but its one-time linearization can introduce linearization errors into the estimator. For more details on the terminology, see references [13,14].
Table 1

State-of-the-art Visual-Inertial Odometry.

Name               | ROVIO [15] | VINS-MONO [16] | SVO+MSF [17] | Alternating Stereo VINS [18] | S-MSCKF [19] | OKVIS [20]
Monocular          |     ×      |       ×        |      ×       |                              |              |
Stereo             |            |                |              |              ×               |      ×       |     ×
Indirect           |            |       ×        |              |              ×               |      ×       |     ×
Semi-direct        |            |                |      ×       |                              |              |
Direct             |     ×      |                |              |                              |              |
Loosely-Coupled    |            |                |      ×       |                              |              |
Tightly-Coupled    |     ×      |       ×        |              |              ×               |      ×       |     ×
Optimization-based |            |       ×        |              |                              |              |     ×
Filtering-based    |     ×      |                |      ×       |              ×               |      ×       |
Open-source        |     ×      |       ×        |      ×       |                              |      ×       |     ×
VINS-MONO [16,21] is an optimization-based visual simultaneous localization and mapping (SLAM) system that includes loop closure. Some processes in this approach are inefficient for the following reasons. VINS-MONO duplicates integration with the same IMU data at different timestamps for prediction and optimization purposes. That is, for publishing odometry at the IMU rate, it integrates whenever IMU data arrive, whereas IMU data are also accumulated in a buffer for batch integration at the time of the image measurement update steps. Mourikis first introduced the multi-state constraint Kalman filter (MSCKF) [22,23], and Sun et al. [19] recently provided its stereo version. Although real-time high-frequency VIO outputs might be crucial for UAV attitude control, MSCKF does not publish the odometry at the IMU rate but at the image rate. Furthermore, batch processing for IMU data integration in MSCKF may add redundant time delays to the filter when vision measurements are available. VINS-MONO and MSCKF are applicable to IMU and vision fusion; however, if we fuse other sensors such as the global positioning system (GPS) and altimeters in navigation systems, those approaches may not be operable, since measurements from the other sensors arrive for update between images. Another limitation is that the assumptions for IMU pre-integration between keyframes and backward propagation with loop closure in their approaches do not always hold. Hence, EKF-based VIO frameworks cover a greater scope of sensor fusion problems. Faessler et al. [17] combined semi-direct visual odometry (SVO) [24,25] with modular multi-sensor fusion (MSF) [26]. Even though this approach uses IMU data for fusing, since it is loosely-coupled, its results are sub-optimal. Paul et al. [18,27] recently proposed alternating stereo VINS, which requires computation comparable to monocular VIO yet provides scale information from the visual observations.
However, this method may not be sufficient for tracking fast motion in low-latency-demanding applications. Since its implementation is not open-source, it is not used for comparison in this paper. Leutenegger et al. [20] introduced a consistent keyframe-based stereo SLAM algorithm that performs nonlinear optimization over both visual and inertial cost terms. To maintain the sparsity of the system, their approach employs an approximation that renders it sub-optimal. Since OKVIS requires considerable computational resources or a certain grade of sensors, such as industrial-grade IMUs, operating it in real time is more challenging. Among the six algorithms in Table 1, only S-MSCKF and SVO+MSF handle an unknown time delay, so we will use their estimation results for comparison in this study.

1.1.2. State Estimation Using Time-Delayed Measurements

In a number of applications, a vital problem in combining data from various sensors is the fusion of delayed observations, and if the computational delay is significant, fusing the data in a Kalman filter is challenging. During the last 20 years, the sensor time-delay problem has been addressed by a number of methods, most of which modify the Kalman filter so that it handles the delay in the sensor update step. Alexander [28] derived a method of calculating a correction term that is added to the filter estimates when lagged measurements arrive. However, because the uncertainty of a measurement is often an unknown quantity until the data are processed, applying the method in time-varying systems is not addressed. To overcome this shortcoming, Larsen et al. [29] extrapolated a measurement to the current time using the past and present estimates of the Kalman filter and calculated an optimal gain for this extrapolated measurement. However, Larsen's approach is exact for linear systems only; if the system dynamics and measurement equations are significantly nonlinear, it can be highly inaccurate. For optimally fusing lagged sensor data in a general nonlinear system, Van Der Merwe et al. [30,31] introduced a technique called "sample-state augmentation," based on the Schmidt–Kalman filter [32] or stochastic cloning [33]. Appendix C provides detailed background information about this technique. Lastly, Gopalakrishnan et al. [34] provided a survey of all previously noted methods. All of the above methods assume that the amount of time delay is known; in effect, they work only with synchronized sensors. However, hardware synchronization of most low-cost or customized sensors is not always available. Moreover, situations in which a current, accurate time delay is not known can arise in real applications. To deal with the unknown time delays, Julier and Uhlmann [35] introduced the covariance union algorithm, and Sinopoli et al.
[36] modeled the arrival of intermittent observations as a random variable with an arrival probability. In addition, Choi et al. [37] and Yoon et al. [38] augmented the state vector with as many past states as the maximum number of delayed steps. The size of this augmented state vector is extremely large, and calculations with such a large vector might require extensive additional computational effort. Recently, for the uncertainty of time delays in state estimation, Lee and Johnson [39] also suggested an approach combined with multiple-model adaptive estimation. However, because of imperfect information on the admissible range of the delay value, this method might not be suitable if the uncertainty of the time delay is high. Instead, we directly estimate the time delay as an additional state, since augmentation is a straightforward means of handling the unknown delay. Nilsson et al. [40] investigated this idea using a Taylor series expansion for small delays. However, delay values are typically larger than a time step, and the linearization in their approach does not hold for large delays. Li and Mourikis [41] also examined state augmentation for estimating an unknown time offset between the timestamps of two sensors. However, their approach is not optimal, since it performs the measurement update of delayed sensor data without the covariance correction that uses the cross-covariance term computed during the delay period. Furthermore, in the recent optimization-based method proposed by Qin and Shen [42], if cameras move at non-constant speed during a short time period, as with progressive-scan cameras, then their assumption does not hold: despite the short time period, the camera coordinate frame is still changing and moving. Their assumption of a constant time offset is also not general, since the unknown delay may be varying.
To overcome all previously noted limitations, this paper proposes a novel approach, "latency compensated filtering," based on the combined parameter-state estimator [43,44].

1.2. Summary of Contributions

To fuse visual measurements with unknown time delays in VIO systems and systems like them (e.g., multi-sensor fusion), the approach in this paper incorporates three correction techniques into state estimation. First, we directly estimate the unknown part of the actual delays in an online fashion by augmenting the vehicle-feature states. With the estimated unknown part and the approximately known part of the delays, we find the most precise measurement times based on the definition of total delays introduced in this paper. Next, at the calibrated measurement time, we evaluate the Jacobian and the residual for the EKF using interpolated states. Third, at the measurement update of the EKF, we formulate a modified Kalman gain using the cross-covariance term computed during the delay period. The testing results of this study on flight datasets show that the proposed latency compensated VIO is a more reliable and accurate navigation solution than the existing VIO systems.

1.3. A Guide to This Document

The remainder of this document is organized as follows. Section 2 introduces the background for this study. To estimate the unknown time delays and the states of VIO, Section 3 and Section 4 present the theory and implementation, respectively, of a novel combination of the parameter estimation technique with a modified EKF that compensates for delayed measurements. Section 5 presents the testing results of this study on a benchmark flight dataset. The last section concludes and outlines future work.

2. Preliminaries

2.1. Sequential Measurement Update

When multiple measurements are observed at one discrete time, sequential Kalman filtering, shown in Figure 1, is useful [45]. In this scheme, we obtain N measurements at time k; that is, we first process the first measurement, then the second, and so on through the N-th.
Figure 1

A schematic of the sequential measurement update.

We first initialize the a posteriori estimate and covariance after zero measurements have been processed; that is, they are equal to the a priori estimate and covariance. For i = 1, ..., N, we perform the general measurement update using the i-th measurement. We lastly assign the final result as the a posteriori estimate and covariance. To clarify, the hat "ˆ" denotes an estimate, and the superscripts − and + denote a priori and a posteriori estimates, respectively. Based on Simon [45]'s proof that sequential Kalman filtering is an equivalent formulation of the standard EKF, the order of updates does not affect the overall performance of estimation.
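As a concrete sketch, the sequential update above can be written as follows. This is a minimal linear-Kalman illustration in Python with illustrative variable names; the paper's EKF applies the same loop with its nonlinear models.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Single measurement update (linear case, for illustration only)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected estimate
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P

def sequential_update(x_prior, P_prior, measurements):
    """Process N measurements one at a time; with mutually uncorrelated
    measurement noises this matches a single batch update exactly."""
    x, P = x_prior.copy(), P_prior.copy()
    for z, H, R in measurements:
        x, P = kalman_update(x, P, z, H, R)
    return x, P
```

Because the sequential form is algebraically equivalent to the batch form, the order of the per-measurement updates does not change the final posterior.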

2.2. Vehicle Model

The nonlinear dynamics of the vehicle are driven by IMU sensor data, namely specific force and angular velocity. The estimated vehicle state comprises the position and velocity of the vehicle with respect to the inertial frame, the error quaternion of the vehicle attitude (explained in more detail in references [46,47,48]), and the accelerometer and gyroscope biases of the IMU. Left superscript i denotes a vector expressed in the inertial frame. The EKF propagates the vehicle state vector by dead reckoning with data from the IMU. The raw accelerometer and gyroscope measurements are corrupted by noise and bias: the true acceleration and angular rate are offset by the respective biases and by zero-mean, white, Gaussian measurement noise, where g is the gravitational acceleration in the inertial frame, the biases evolve as random walks, and the rotation matrix maps vectors from the inertial frame to the body frame. In the vehicle dynamics, a skew-symmetric matrix appears, and a function maps the 3-by-1 angular velocity vector into a 4-by-4 matrix [44]. The use of the 4-by-1 quaternion representation in state estimation causes the covariance matrix to become singular, so it requires considerable accounting for the quaternion constraints. To avoid these difficulties, engineers developed the error-state Kalman filter, in which a 3-by-1 infinitesimal error quaternion is used instead of the 4-by-1 quaternion q in the state vector. In other words, we use the attitude error quaternion to express the incremental difference between the tracked reference body frame and the actual body frame b of the vehicle, where ⊗ is the quaternion product defined in reference [47]. The resulting rotation matrices with the error quaternion, expressed with respect to the nominal reference body frame, and the corresponding Jacobian matrices are computed in Appendix A.
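A minimal sketch of the IMU dead-reckoning step described above, assuming simple Euler integration and the full-quaternion (rather than the error-state) form; function and variable names are illustrative, not the paper's.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_to_rot(q):
    """Rotation matrix (body -> inertial) from a unit quaternion."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def imu_propagate(p, v, q, b_a, b_g, a_m, w_m, dt, g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of IMU dead reckoning: bias-corrected specific force
    and angular rate drive position, velocity, and attitude."""
    a = a_m - b_a                 # bias-corrected specific force (body frame)
    w = w_m - b_g                 # bias-corrected angular rate (body frame)
    R = quat_to_rot(q)            # body-to-inertial rotation
    acc_i = R @ a + g             # inertial-frame acceleration
    p = p + v*dt + 0.5*acc_i*dt**2
    v = v + acc_i*dt
    dq = np.concatenate([[1.0], 0.5*w*dt])   # small-angle quaternion increment
    q = quat_mul(q, dq)
    return p, v, q / np.linalg.norm(q)
```

In practice the paper's error-state formulation tracks a 3-by-1 attitude error instead of the full quaternion, which avoids the singular covariance noted above.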

2.3. Camera Model

An intrinsically calibrated pinhole camera model [49,50] relates the state vector x, which includes the vehicle and feature states, to the measurement, the j-th feature's 2D location on the image plane, through the horizontal and vertical focal lengths and additive, zero-mean, white, Gaussian measurement noise. The j-th feature's 3D position is expressed with respect to the camera frame and the inertial frame, respectively. The extrinsic parameters are known and constant, as is the associated rotation matrix. The Jacobian matrix is computed in Appendix A.
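The pinhole projection and its Jacobian with respect to the feature position can be sketched as below. The principal point (cx, cy) is an added illustrative parameter, and the exact measurement Jacobian used in the paper is the one derived in Appendix A.

```python
import numpy as np

def project(p_c, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in the camera frame to pixel
    coordinates; p_c = [X, Y, Z] with Z > 0 in front of the camera."""
    X, Y, Z = p_c
    return np.array([fx * X / Z + cx, fy * Y / Z + cy])

def projection_jacobian(p_c, fx, fy):
    """Jacobian of the projection with respect to the camera-frame point."""
    X, Y, Z = p_c
    return np.array([
        [fx / Z, 0.0,    -fx * X / Z**2],
        [0.0,    fy / Z, -fy * Y / Z**2],
    ])
```

Chaining this 2-by-3 block with the derivative of the camera-frame point with respect to the vehicle and feature states yields the full measurement Jacobian.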

2.4. Feature Initialization

From Equation (16), if the j-th measurement on an image is a new feature, then its position is unknown, so it needs to be initialized. In the first step of the measurement update, we employ Gauss–Newton least-squares minimization [22,51] to estimate the feature's 3D position. To avoid local minima, we apply the inverse depth parameterization of the feature position [52], which is numerically more stable than the Cartesian parameterization. In other words, by the derivation explained in Appendix B, we obtain the j-th feature's 3D position with respect to the left camera frame of a stereo camera, and from it the position with respect to the inertial frame. The new feature is initialized using only the one image in which the feature is first observed. Although the new feature is initialized, since its position still entails uncertainty, the EKF recursively estimates and updates it by augmenting the feature into the state vector alongside the estimated vehicle state vector defined in Equation (1). The overall initialization includes the initial value of the feature state and the assignment of its error covariance. The error covariance of the new feature is initialized using state augmentation with a Jacobian J, which depends on the number of all features and on the initial uncertainty of the newly initialized feature. The initial error pertains to measurement noise and the error of the least-squares minimization. In fact, since Montiel et al. [52] validate that the initial uncertainty is a Gaussian distribution, the EKF including the feature initialization still holds optimality. Equations (18)–(20) arise from "consider covariance analysis" [53,54]. Once initialized, the EKF processes the feature state in the prediction–update loop. In the time update of the EKF, we propagate the covariance P with the state transition matrix; the full covariance comprises the vehicle error covariance, the error covariance of all features, and the vehicle–feature correlations. In addition, we assume that the surroundings are static, so the features have no dynamics.
In the measurement update of the EKF, only tracked features are used for the update. For the efficient management of the map database, if the size of the state vector exceeds the maximum limit, then the feature with the least number of observations is pruned and marginalized.
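The covariance growth in the feature initialization (Equations (18)–(20)) follows the standard state-augmentation pattern; the sketch below shows that pattern with illustrative names, not the paper's exact Jacobian.

```python
import numpy as np

def augment_feature(x, P, p_f, J_x, R0):
    """Append a newly initialized feature to the state and grow the
    covariance via the augmentation Jacobian.

    x   : current state estimate, shape (n,)
    P   : current error covariance, shape (n, n)
    p_f : initialized feature position, shape (3,)
    J_x : Jacobian of the feature position w.r.t. the state, shape (3, n)
    R0  : initial uncertainty of the new feature, shape (3, 3)
    """
    x_aug = np.concatenate([x, p_f])
    # Block structure of the augmented covariance:
    # [ P          P J_x^T            ]
    # [ J_x P      J_x P J_x^T + R0   ]
    top = np.hstack([P, P @ J_x.T])
    bot = np.hstack([J_x @ P, J_x @ P @ J_x.T + R0])
    P_aug = np.vstack([top, bot])
    return x_aug, P_aug
```

The off-diagonal blocks carry the vehicle-feature correlations that the filter needs for consistent later updates.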

3. Theory

3.1. Definition of Time Delays

Based on dead reckoning, the EKF propagates the state x and its error covariance P at time t, when the IMU specific force and angular velocity are measured. Since an IMU is a discrete-time sensor, the time update of the EKF is processed in discrete time steps obtained by converting continuous time to integer steps at the sampling rate of the IMU. The IMU sampling period is generally almost constant since a microcontroller such as an Arduino or a Pixhawk calculates precise timestamps in milliseconds for each IMU measurement. Next, whenever new vision data from an image arrive at the filter, the EKF performs the measurement update for correcting the state estimate and its error covariance. As introduced in Section 1, various causes such as image processing produce time delays that the timestamps of vision data contain. For clarity, this section defines the time delay in detail. Latency is the time difference between when an image was grabbed and when the vision data from the image are updated in the filter, as shown in Figure 2.
Figure 2

Data streams of the IMU and the delayed vision data.

That is, the true delay is the difference between the current IMU time t and the time when the current image was captured. In essence, we treat IMU time as our common time reference, and we do not necessarily know the exact time when images are grabbed. The timestamp of each image is encoded in indirect ways, such as triggers. In other words, the true image time comprises a readable timestamp and an unknown offset. Let us therefore split the true delay, defined by the time differences between the time readouts of the sensors, into an approximately known part and an unknown part.

3.2. Approximately Known Part of Time Delays

The approximately known part is either a fixed value determined beforehand by offline tuning or the readable difference between the timestamps of the image and of the IMU data. Indeed, regardless of whether it is a constant value or a readable varying delay, the approximate delay is a known value. Let the discrete steps of the approximately known part be d, where (integer) means type conversion to integer from other types; that is, d is the quotient of dividing the known delay by the IMU sampling period.
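The delay bookkeeping above reduces to simple arithmetic. The following sketch, with illustrative names and values, splits the total delay into its readable and estimated unknown parts and converts both to discrete IMU steps, including the ceiling-based step count used later in Section 3.3.

```python
import math

def delay_steps(t_imu, t_image_stamp, td_hat_unknown, dt_imu):
    """Split the total delay into a readable part and an estimated unknown
    part, then convert to discrete IMU steps.

    t_imu          : current filter (IMU) time [s]
    t_image_stamp  : readable timestamp of the image [s]
    td_hat_unknown : current estimate of the unknown delay part [s]
    dt_imu         : IMU sampling period [s]
    """
    td_known = t_imu - t_image_stamp        # approximately known part
    td_total = td_known + td_hat_unknown    # estimate of the true delay
    d = int(td_known // dt_imu)             # quotient: known-part steps
    s = math.ceil(td_total / dt_imu)        # nearest integer step >= total
    return td_total, d, s
```

Because d is a quotient rather than a rounded value, the fractional remainder of the delay is what drives the interpolation described in Section 3.2.1.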

3.2.1. Jacobian and Residual—“Baseline Correction”

Since the unknown part is not yet available, we first consider only the approximately known part as the delay of the system. From the system models given in Section 2.2 and Section 2.3, only measurements from the camera model depend on the time delays. To correct the Jacobian and residual with approximately known delays, interpolation and quaternion slerp are required. We define a new time notation for the delayed measurement time; when time is expressed in a subscript, we will use a shorthand notation without it. Although the delay d in discrete-time systems is the number of delayed samples, the delayed measurement time, read from the timestamps of each sensor, is not required to fall on an integer step. Since it is not an integer, we cannot directly access the values of the state estimate or its corresponding error covariance at that time, so relevant interpolation is required instead. Mathematically, linear interpolation constructs a new data point within the range of two known adjacent data points by the common slope of the line through them [55]. Let us take the nearest integer time step that is greater than or equal to the delayed measurement time, as shown in Figure 3a.
Figure 3

Examples of interpolation and slerp. (a) Linear interpolation, (b) Quaternion slerp.

With the two adjacent data points, the interpolants of the state estimate and error covariance at the delayed measurement time are given by linear interpolation. Although we compute these interpolants using linear interpolation, because of the constraints and special structure of quaternions, another form of interpolation is required for attitude. Slerp is shorthand for spherical linear interpolation, introduced by Ken Shoemake [56] in the context of quaternion interpolation for the purpose of animating 3D rotation. Interpolants refer to constant-speed motion along a unit-radius circular arc, as shown in Figure 3b. The geometric formula [56,57] is based on the fact that any point on the arc is a linear combination of the given end points. Since only unit quaternions are valid rotations, normalization of each quaternion before applying slerp is a prerequisite. The interpolation angle is the smaller angle between the two end quaternions. If the dot product in Equation (28) is negative, slerp does not represent the shortest path. To prevent long paths, we negate one of the end quaternions, since q and −q are equivalent rotations when the negation is applied to all four components. If the two input quaternions are too close, then interpolants by linear interpolation are acceptable. Otherwise, Equation (28) is safe to compute because the dot product lies within the threshold range. With suitable interpolants at the delayed measurement time, a baseline approach modifies the feature initialization in Appendix B and the measurement update. At time k, the vision data of an image grabbed earlier arrive at the filter for either the feature initializations or the sequential measurement updates. If the j-th measurement on the last image is a new feature, then, from Equations (18) and (19), the state and covariance P at the current time k are augmented accordingly, with the feature position initialized by the Gauss–Newton least-squares minimization derived in Appendix B.
Although we assume static features, since the feature initialization is related to the estimated camera pose at the time when the delays begin, a corrected Jacobian is required in the initialization steps. If the j-th measurement on the image is a tracked feature, then we correct only the residual r and the Jacobian C in the following measurement update, where R is the measurement noise covariance and the sub-optimal Kalman gain is computed from the current covariance. As in the sequential Kalman filtering introduced in Section 2.1, if j is the first feature on the current image (i.e., j = 0), then the update is initialized with the a priori estimate and covariance, and if j is the last feature on the current image, then the result is assigned as the a posteriori estimate and covariance. Before the measurement updates (32)–(34), a chi-squared gating test rejects outliers of each measurement. For this test purpose only, in the case of the baseline correction, we add uncertainty due to the time delay. The procedures in Equations (30)–(34) are referred to as the "baseline correction."
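The interpolation and slerp procedure described in this subsection can be sketched directly from Shoemake's formula; the shortest-path negation and the near-parallel lerp fallback follow the text above (the threshold value is illustrative).

```python
import numpy as np

def lerp_state(x0, x1, alpha):
    """Linear interpolation between two stored states or covariances."""
    return (1.0 - alpha) * x0 + alpha * x1

def slerp(q0, q1, alpha, eps=1e-6):
    """Spherical linear interpolation between unit quaternions, taking the
    shortest path and falling back to lerp when the ends are nearly equal."""
    q0 = q0 / np.linalg.norm(q0)   # normalize: only unit quaternions are rotations
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                  # negate one end to take the shortest arc
        q1, dot = -q1, -dot
    if dot > 1.0 - eps:            # nearly parallel: lerp is numerically safer
        q = lerp_state(q0, q1, alpha)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))   # angle between the ends
    s = np.sin(theta)
    return (np.sin((1.0 - alpha) * theta) / s) * q0 + \
           (np.sin(alpha * theta) / s) * q1
```

The same interpolation factor alpha (the fractional part of the delayed time step) is used for both the lerp of the state/covariance and the slerp of the attitude quaternion.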

3.2.2. Cross-Covariance—“Covariance Correction”

During the delay period, even though an image was already captured in the past, since the vision data from the image have not yet arrived at the filter, the EKF is not able to perform the measurement update. Indeed, the filter processes only the time update. When a vision data packet from the image finally arrives and is ready to update the filter, we could simply execute the Jacobian and residual correction in Equations (32)–(34) using the delayed measurements. However, unlike the baseline correction, if the filter could update as if the measurements arrived immediately without delays (like the red lines in Figure 4), then the filter could achieve a more accurate estimate. In fact, the covariance correction presented in this section (like the blue lines in Figure 4) acts as if the filter had accomplished the general measurement update at the time instant when the image was captured. In other words, the red lines in Figure 4 are ideal but unrealistic, and the blue lines in the figure are practical. The red lines process the measurement update first and then the time update; the blue lines process them in the opposite order. Only the order of the processes has changed.
Figure 4

A schematic of a modified measurement update using covariance correction.

Among the variety of fusing techniques for time-delayed observations discussed in Section 1.1.2, the stochastic cloning [33]-based method (i.e., the Schmidt EKF [30,31]) is applicable to varying delays and to nonlinear functions such as the vehicle and camera models described in Section 2.2 and Section 2.3, respectively. Thus, this study modifies the method for finding the optimal navigation solution of vision-aided inertial navigation systems. Let us introduce new notation for the covariance matrix P at the time when the true delays begin; in the scope of this section, that time is the delayed measurement time. In addition, this section uses shorthand notation for the corrected residual and Jacobians; that is, each residual and Jacobian is corrected based on Section 3.2.1. In addition to the baseline correction, we correct the error covariance in both the feature initialization and the measurement update when delayed vision data are available in the filter. If the j-th feature measurement on the recent image is a new feature, the augmentation of the state in the feature initialization is similar to Equation (31). On the other hand, since the Jacobian is computed at the time when the delays begin, the augmentation of the covariance matrix at the current time differs as follows, and the state estimate is augmented by Equation (30). When the j-th delayed vision data are ready to update at time k, we modify the measurement update steps of the sequential Kalman filtering using the relevant cross-covariance term computed during the delay period. This term, which fuses a current prediction of the state with an observation related to the lagged state of the system, is used for formulating the modified Kalman gain matrix. Equation (39) still follows Joseph's form [58], which preserves the symmetry of the updated covariance and ensures its positive definiteness.
By the sequential update provisions, the state estimate and covariance at the capture time are also updated. At the capture time, when the camera opens to capture the image, the cross-covariance matrix is initialized with the covariance at that time. During the delay period, from the capture time to the current time k, if no other measurements are fused into the filter, the cross-covariance is simply propagated by the state transition matrix, following the Schmidt–Kalman filter [30,32,33]. In the sequential measurement update, updating the cross-covariance based on the result of Equation (42) is then straightforward. If other measurements from other sensors, such as an altimeter and GPS, are fused during the delay period, then the state and cross-covariance are also recursively updated using the Kalman gain of the other measurements; in that case, Equations (43)–(46) no longer hold. For more details, see Appendix C. All modifications in this section are referred to as the "covariance correction." Furthermore, the optimality of this covariance correction is guaranteed: Appendix C proves that the covariance correction is identical to the standard EKF, which is an optimal filter, so the proposed correction retains optimality. Section 4 will describe its practical implementation.
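Under the stated assumption that no other measurements arrive during the delay period, the cross-covariance propagation and the modified-gain update can be sketched as follows. The matrix forms mirror the standard stochastic-cloning/Schmidt-EKF result; variable names are illustrative rather than the paper's Equations (36)–(46).

```python
import numpy as np

def propagate_cross_cov(Pi, Phi):
    """Propagate the cross-covariance between the current state and the
    cloned capture-time state through one IMU step (no intermediate
    updates assumed): Pi <- Phi @ Pi."""
    return Phi @ Pi

def delayed_update(x, P, Pi, P_cap, r, C, R):
    """Measurement update with a residual evaluated at capture time.

    x, P   : current state estimate and covariance
    Pi     : cross-covariance between current and capture-time states
    P_cap  : covariance cloned at capture time
    r, C   : corrected residual and Jacobian at capture time
    R      : measurement noise covariance
    """
    S = C @ P_cap @ C.T + R            # innovation covariance at capture time
    K = Pi @ C.T @ np.linalg.inv(S)    # modified gain uses the cross-covariance
    x = x + K @ r
    P = P - K @ S @ K.T                # symmetric downdate of current covariance
    Pi = Pi - K @ C @ P_cap            # keep cross term consistent for next feature
    return x, P, Pi
```

The update is algebraically the upper block of a joint update over the stacked current and capture-time states, which is why it reproduces the standard EKF result when the cross term is tracked exactly.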

3.3. Unknown Part of Time Delays—“Online Calibration”

Although the residual, Jacobians, and covariance are corrected for measurements with time delays, if the approximately known part of the delay consists of uncertain readouts, or if the unknown part forms the larger portion of the true delays, we cannot guarantee the reliability of the correction algorithm (Figure 5). For the robustness of vision-aided navigation systems, we therefore additionally investigate the unknown part of the true delays.
Figure 5

Three corrections in the latency compensated VIO.

Figure 5 shows the three corrections in the latency compensated VIO presented in this study. In the standard Kalman filter, if one does not account for time delay, the propagation and measurement update look like the grey lines in Figure 5. For the last correction, we estimate the unknown part of the time delays to obtain a more precise time instant at which the delays begin. As discussed in Section 1, unknown phenomena such as clock bias, drift, skew, and asynchronization cause this unknown part, so it may be a positive or a negative value. State estimation theory can be used to estimate not only the states but also the unknown parameters of a system [59]. Numerous researchers [60,61,62] have shown that state augmentation is easy to use with state observers, so we design a state observer by state augmentation to estimate the unknown part of the time delays. To estimate the unknown delay value, we first augment the state estimates and covariance of the vehicle. Like the modeling of the IMU biases in Equations (2) and (3), we model the dynamics of the unknown delay using a small artificial noise term, a random walk rate that allows the EKF to change its delay estimate; that is, the power spectral density of this noise represents the variability of the delay. In fact, this is a conventional random walk model for an unknown parameter that may vary, commonly used for quantities such as gyro bias, as done here. If additional information about the way the time delays are expected to vary were available, it could be captured here with a more complex model. Let us restate the definition of the time delays and, for clarity, define new time notation in which time now denotes the most precise instant at which the image was captured. To apply the relevant interpolation techniques to the state estimates and covariance at that time, we access their values at the nearest integer time step. In other words, s, the number of discrete delayed samples including the estimated latency, is greater than or equal to the delay expressed in samples, as shown in Figure 2.
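A sketch of this augmentation follows; the dimensions, the initial uncertainty, and the noise level are illustrative choices, not the paper's values. The unknown delay part is appended to the state as a random-walk parameter:

```python
import numpy as np

n = 4                         # nominal vehicle-state dimension (illustrative)
x = np.zeros(n)
P = np.eye(n)
dt = 0.01                     # IMU propagation step [s]

# Append the unknown delay part delta_td as an extra random-walk state
x_aug = np.append(x, 0.0)                 # initial guess: zero unknown delay
P_aug = np.zeros((n + 1, n + 1))
P_aug[:n, :n] = P
P_aug[n, n] = 0.05 ** 2                   # large initial delay uncertainty [s^2]

# Time update: delta_td is constant plus white noise, so the transition matrix
# gains a trailing 1 and the process noise a trailing sigma^2 * dt entry
sigma_td = 1e-3                           # random-walk rate std for delta_td
Phi = np.eye(n)                           # stand-in vehicle transition matrix
Phi_aug = np.block([[Phi, np.zeros((n, 1))],
                    [np.zeros((1, n)), np.ones((1, 1))]])
Q_aug = np.zeros((n + 1, n + 1))
Q_aug[n, n] = sigma_td ** 2 * dt

P_aug = Phi_aug @ P_aug @ Phi_aug.T + Q_aug
```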
To operate the augmented system, we match dimensions by augmenting the other matrices. In the time update, the state transition matrix and the process noise covariance matrix are augmented with an identity entry for the delay state and with the variance of its Gaussian white noise, respectively. Under the assumption of static features, since the estimated latency pertains only to the vision measurements, we compute the augmented elements of the Jacobian matrices J and C [41,43]. From Equation (20), the Jacobian in the feature initialization is augmented accordingly, and, from Equation (A2), so is the Jacobian in the measurement update. Here, let us call the combination of the estimation of the unknown latency in this section with the baseline correction "online calibration." Therefore, to reliably estimate the state variables and effectively compensate for the total delays, we incorporate all three corrections, together called "latency-adaptive filtering."

4. Implementation

This section summarizes and describes an implementation of the proposed method. Figure 6 illustrates a flow chart of the overall process.
Figure 6

A flow chart of the overall process of the latency compensated VIO.

4.1. Forward Computation of Cross-Covariance

Even though the delay begins some time prior, its estimated value is accessible only after the delay has elapsed. That is, during the delay period, the delay value is still unknown; it is estimated only at the current time k. Since the estimated delay value is unknown until time k, we cannot be sure when the covariance correction should begin computing the cross-covariance. Theoretically, once the delay is estimated at time k, we could compute the cross-covariance by working backward from time k to the capture time, using Jacobians and covariances saved during the delay period. This is an ideal computation, but it is not realistic. The backward computation used in [43] is impractical for real-time operation, since storing large matrices such as sequences of Jacobian and covariance matrices requires a huge amount of memory. Furthermore, backward computation is inefficient because it iterates backward at time k like batch processing. Instead, for a real-time framework, an approximate forward computation of the cross-covariance is introduced. We assume that the time delay does not change during state propagation over the delay period, so the a posteriori delay estimate from the last measurement update serves as the a priori estimate of the delay at the current time. Under this assumption, we predict when the time delay of the next image begins. At that predicted time instant, we store the covariance matrix once and then recursively calculate the cross-covariance forward in time.
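The memory saving of the forward recursion can be illustrated with a toy example (random matrices, made-up sizes): instead of storing every transition matrix for a backward pass at time k, we keep one snapshot of the covariance at the predicted capture instant and a single running cross-covariance. The forward result is identical to the batch backward product:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
n = 3
P_snapshot = np.eye(n)        # stored once, at the predicted capture instant

# Forward recursion: one n-by-n matrix of extra storage, updated per IMU step
P_xs = P_snapshot.copy()
history = []                  # what a backward scheme like [43] would store
for _ in range(18):           # e.g., 18 IMU steps of delay
    Phi_k = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    history.append(Phi_k)     # kept only to verify equivalence below
    P_xs = Phi_k @ P_xs

# Batch computation at time k: (Phi_17 @ ... @ Phi_0) @ P_snapshot
P_batch = reduce(lambda A, B: B @ A, history, np.eye(n)) @ P_snapshot
```

By associativity of matrix multiplication the two results coincide, so the forward recursion trades the O(n^2) per-step history buffer for a single matrix.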

4.2. Summarized Algorithm

When the size of the state after augmentation in the feature initialization steps exceeds a maximum threshold, we prune the number of features in the database. The system in this study finds an index for the best place to insert a new point in the database; the feature with the fewest observations or the most frequent outliers is marginalized. Unlike Lee et al. [43], this study does not estimate the total time delay, so the latency compensated filter does not entail specific constraints. That is, this study estimates only the unknown part, which may be a positive or a negative value, so constrained Kalman filtering is not necessary, which saves computation. Instead, the interpolation and quaternion Slerp explained in Section 3.2.1 are tractable. From the definition of time delays presented in Section 3.1, the total time delay is never estimated as negative. For example, if the estimated total delay were negative (i.e., an out-of-range index), estimation would be impossible, since this case corresponds to forecasting states or obtaining measurements from the future; hence, the total delay has to be bounded below by zero. Moreover, in the sequential measurement update, if the estimated time delay is larger than the sampling time of the IMU, then we index another slot in the delay buffer. Algorithm 1 summarizes the overall process of the latency compensated VIO.
Algorithm 1 Latency compensated VIO (reconstructed layout)
for each time step do
    if new IMU packet arrival then
        Time Update                                     ▹ static features
        Numerically integrate the state estimate
        Store the state estimates into the delay buffer
        if during delay period then
            Propagate the cross-covariance              ▹ recursive
        end if
    end if
    if new vision data packet arrival then
        Compute index of delay
        Interpolate using the state estimates from the buffer
        for each observed feature do
            if new feature then
                Feature Initialization                  ▹ least-squares minimization
                Augment state if feature is valid       ▹ if positive depth
                Prune state vector if exceeding maximum
            else                                        ▹ tracked feature
                Measurement Update
                Update if gating test is passed         ▹ sequentially update the buffer
            end if
        end for
        Store index of the posterior estimated delay
        Erase used slots in the delay buffer
    end if
end for
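The "compute index of delay" and "interpolate" steps can be sketched as follows. The buffer layout, the fractional-index convention, and all names are our own illustration; the quaternion Slerp matches the standard spherical linear interpolation referenced in Section 3.2.1:

```python
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: linear fallback
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - alpha) * theta) * q0
            + np.sin(alpha * theta) * q1) / np.sin(theta)

def delayed_state(buffer, t_d, dt_imu):
    """Interpolate buffered states at (now - t_d).

    buffer: list of (position, quaternion) pairs saved once per IMU step,
    newest last; t_d need not be an integer multiple of dt_imu.
    """
    s = t_d / dt_imu                    # delay in IMU samples (non-integer)
    i = int(np.ceil(s))                 # buffer slot at or before the capture
    alpha = i - s                       # fraction toward the newer slot
    p_old, q_old = buffer[-1 - i]
    p_new, q_new = buffer[-i]
    return (1.0 - alpha) * p_old + alpha * p_new, slerp(q_old, q_new, alpha)
```

For example, with states stored every IMU step and a delay of 2.3 samples, the position is blended 30/70 between the states 3 and 2 steps back, and the attitude is Slerped with the same fraction.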

5. Results

This section provides results from Monte Carlo trials and flight dataset testing. First, in a simulation environment, since we can set true time-delay values on the measurements, we test whether the proposed framework estimates the actual time delays accurately. The next subsection presents the performance of the proposed approach on benchmark flight datasets to validate that it solves real-world problems.

5.1. Monte Carlo Simulations of a Simple Example

To show that actual time delays are estimated accurately, this section simulates a simple example problem with 100 Monte Carlo trials. The vehicle and measurement models of this simulation are taken directly from Lee and Johnson's previous work [43]. The models comprise a second-order dynamic system with a non-delayed speed measurement and two delayed bearing angles measured from the locations of two stations. A variance value for this simulation is set from Equation (49). The actual time delay of the delayed measurements corresponds to 18 delayed samples at the propagation rate of the simulation. The Monte Carlo simulations estimate the values of the time delays, as shown in Figure 7.
Figure 7

Estimation of total delays in simulation. (a) A static delay, (b) Varying delays.

Figure 7a shows that the estimated delay rapidly converges to the true delay value. That is, the estimation error of the delayed samples gradually decreases toward zero. Moreover, we may wonder whether the latency compensated filtering algorithm works when the delay is not static. See Figure 7b, which illustrates a response to a change in time delay. Although the values of unknown delays vary over time, estimation resulting from the online calibration method converges to true delay values.

5.2. Flight Datasets’ Test Results

To validate the reliability of the proposed approach for estimating states and unknown delay values, we test one of the benchmark datasets, the so-called "EuRoC MAV datasets" [63]. The visual-inertial sequences of the datasets were recorded onboard a micro aerial vehicle while a pilot manually flew around indoor Vicon environments. For more details, see reference [63]. Although the datasets include noise model parameters from the IMU at rest, we need to tune each variance of the process noise covariance Q for the best performance. Likewise, to estimate the unknown part of the time delays, we set the standard deviation of the random walk in Equation (48) to the same order as the smallest value among the provided noise parameters. Given that the datasets provide sequences at various levels of difficulty, such as faster motion and poor illumination in each environment, to articulate the significance of the time delays defined in Section 3.1, we select two datasets: a slow-motion one, called "EuRoC V1 Easy," and a fast-motion one, called "EuRoC V1 Medium." Since the vehicle in the medium dataset maneuvers twice as fast, we hypothesize that the time delays have a greater impact on the navigation solution of the medium dataset. The image processing and filtering algorithms are developed under the Robot Operating System (ROS) [64], and the IMU data and images from the stereo camera are subscribed to under ROS, as shown in Figure 8.
Figure 8

ROS rqt graph of the latency compensated VIO linked to the EuRoC dataset.

The simplest solution to the estimation problem on the given datasets is to run the baseline of Section 3.2.1, which corrects only the Jacobians and the residual. However, the novel latency compensated filter described in Algorithm 1 compensates for delayed measurements at the time when the vision data are fused into the filter and estimates the refined states and the delay values. This compensated filtering follows the processes of all three corrections, as shown in Figure 6. The EKF estimates relative location from a starting point. Since we do not know the exact absolute location of the origin of the given datasets, to compare with the ground truth data given in the datasets, evaluation error metrics such as the so-called "absolute trajectory error" [65] are required. After applying the absolute trajectory error, Figure 9 illustrates the top-down view of the estimated flight trajectory of the medium dataset. Figure 10 exhibits the estimated x, y, z positions and their estimation errors.
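A minimal sketch of such a metric follows. This version performs translation-only alignment before computing the RMS error; the full absolute trajectory error of [65] also solves for an aligning rotation (e.g., via Umeyama's method):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """RMS absolute trajectory error after translation-only alignment.

    Both inputs are (N, 3) position arrays sampled at matched timestamps.
    """
    est = np.asarray(estimated, float)
    gt = np.asarray(ground_truth, float)
    # remove the unknown absolute origin by matching trajectory centroids
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    errors = np.linalg.norm(est_aligned - gt, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```

A pure offset between the estimate and ground truth, such as an unknown starting point, thus contributes nothing to the error, while shape differences along the trajectory do.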
Figure 9

Top-down view of flight trajectory of the EuRoC V1 medium dataset by the latency compensated VIO.

Figure 10

Position and estimation error of the EuRoC V1 medium dataset by the latency compensated VIO. (a) Position x, (b) Position y, (c) Position z.

All estimation errors are bounded within their respective standard deviation bounds. We should expect significant time correlation in the error plots and a generally growing error covariance for vision-aided inertial navigation problems like this one. Conceptually, position error gets "locked in," and, to the extent that new features are being mapped, the position error will tend to grow with the length of the trajectory. Starting from the noise model parameters reported for the datasets, the compensated filter is a well-tuned estimator: runs with ×3 or ×10 (and /3 or /10) multipliers on the R term used in the filter all perform worse, as shown in Table 2. In other words, the fact that those multipliers yield larger root mean square (RMS) estimation errors indicates that our filter is well tuned.
Table 2

Indication that the latency compensated VIO is well-tuned for EuRoC V1 medium dataset.

Multiplier on R    /10      /3       1        ×3       ×10
RMS error [m]      1.5096   0.1969   0.1619   0.2636   0.2850
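This tuning check can be illustrated with a toy scalar filter standing in for the VIO filter (all values here are made up): scaling the filter's R away from the true measurement noise in either direction raises the RMS error, mirroring the pattern of Table 2.

```python
import numpy as np

rng = np.random.default_rng(1)
q_true, r_true, T = 0.01, 0.25, 4000
x_true = np.cumsum(np.sqrt(q_true) * rng.standard_normal(T))   # random walk
z = x_true + np.sqrt(r_true) * rng.standard_normal(T)          # measurements

def rms_error(r_mult):
    """Run a scalar Kalman filter with R scaled by r_mult; return RMS error."""
    xh, p, sq = 0.0, 1.0, 0.0
    for k in range(T):
        p += q_true                                  # time update
        gain = p / (p + r_true * r_mult)             # gain with scaled R
        xh += gain * (z[k] - xh)                     # measurement update
        p *= (1.0 - gain)
        sq += (xh - x_true[k]) ** 2
    return float(np.sqrt(sq / T))

rms = {m: rms_error(m) for m in (0.1, 1 / 3, 1.0, 3.0, 10.0)}
# the correctly scaled filter (multiplier 1) should do best
```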
Figure 11 shows the advantages of each correction in the latency compensated VIO by comparing it with the baseline and the covariance correction. The baseline discards the cross-covariance and the unknown part of the delays; although the latency compensated filter may increase the computational effort of the entire system, it significantly improves the accuracy of estimation.
Figure 11

Box plot of absolute estimation error of position of the EuRoC V1 medium dataset by the latency compensated VIO.

Unlike either the baseline or the covariance correction, the latency compensated filter calibrates the unknown part of time delays. Figure 12 shows that estimation resulting from the compensated filter converges to a certain, final delay value, and its variance rapidly decreases although initial uncertainty is high.
Figure 12

Estimation of unknown part of time delays of the EuRoC V1 medium dataset.

As shown in Figure 13, the average of the estimated total delays is large enough to generate appreciable drift and offset during the delay period when the vehicle flies at its average speed.
Figure 13

Estimation of time delays of the EuRoC V1 medium dataset. (a) Readable delays, (b) Estimated total delays.

When readable delay values are negative, the timestamps of the images may indicate wrong pairs or packets. Table 3 lists the RMS position errors of the cases considered for sensitivity analysis. The approximately known part of the time delays introduced in Section 3.2 is either fixed by tuning or taken from readouts, i.e., the difference between the readable timestamps of the current IMU and image data. In addition, we can directly estimate the entire time delay without information on the approximately known part. For another case, using the final value of the estimated unknown part of the time delays, we add a fixed offset to the total delays at every time step. However, this case may not work when the delay is varying, and the final value is known only after running the proposed filter; in other words, before applying the compensation filtering, this fixed offset is still unknown. The estimation results from the latency compensated VIO approach depict the influence of the delays and the effectiveness of the corrections in the sensor fusion of the lagged measurements.
Table 3

Sensitivity analysis in RMS position error [m] of latency compensated VIO.

Dataset                          EuRoC V1 Easy                        EuRoC V1 Medium
                                 (Slow Motion 0.41 m/s, 16.0 deg/s)   (Fast Motion 0.91 m/s, 32.1 deg/s)
Method                           Cross-Cov OFF    Cross-Cov ON        Cross-Cov OFF    Cross-Cov ON
Fixed t¯d,const                  0.3376           0.2677              0.4644           0.3135
Entirely estimated t^d           0.2282           0.2406              0.4734           0.3538
Readouts t¯d (+ N/A)             0.2558           0.2032              0.4163           0.3121
Readouts t¯d + fixed δt¯d        0.2869           0.2285              0.3281           0.2218
Readouts t¯d + estimated δt^d    0.2019           0.1461              0.3353           0.1619
Fast-motion datasets are more sensitive to time delays, since the improvement is larger when the method is applied to them. Although numerous researchers have explored VIO on the EuRoC datasets, few have thoroughly considered measurements with unknown time delays. Table 4 reveals that the proposed estimator, the latency compensated VIO, outperforms the existing state-of-the-art methods, "S-MSCKF" and "SVO+MSF," for which stereo is available.
Table 4

Comparison with other methods in RMS position error [m] of latency compensated VIO

Dataset                       EuRoC V1 Easy                        EuRoC V1 Medium
Method                        (Slow Motion 0.41 m/s, 0.28 rad/s)   (Fast Motion 0.91 m/s, 0.56 rad/s)
Latency Compensated VIO       0.1461                               0.1619
S-MSCKF (stereo-filter)       0.34                                 0.20
SVO+MSF (loosely-coupled)     0.40                                 0.63

6. Discussion

This study develops a practical extended Kalman filter (EKF)-based visual-inertial odometry (VIO) that accounts for vehicle-feature correlations; that is, we develop tightly-coupled VIO for the autonomous flight of unmanned aerial vehicles (UAVs). In particular, this paper has presented the development of a reliable and accurate filtering scheme for measurements with unknown time delays. We define the time delays of vision data measurements in VIO. To compensate for delayed measurements and estimate unknown delay values, this paper presents latency compensated filtering that includes state augmentation, interpolation, and residual, Jacobian, and covariance corrections. The optimality of the three corrections and the observability of the state augmentation are validated; in other words, the appendix shows that the proposed latency compensated filter yields optimal estimates as if there were no delays in the data. We test the performance of VIO employing the latency compensated filtering algorithms on benchmark flight datasets for comparison with other state-of-the-art VIO algorithms. Results from flight dataset testing show that the novel navigation approach in this paper improves the accuracy and reliability of state estimation with unknown time delays in VIO. With latency compensated filtering, the root mean square (RMS) errors of estimation decrease. In particular, we show the improved accuracy of our method over previous approaches for state estimation in the fast-motion datasets. The overall approach in this document can easily be employed in other filter-based, sensor-aided inertial navigation frameworks and is suitable for monocular VIO, although this study uses a stereo camera to showcase the methods. Although the reliability and robustness of this study are validated by testing benchmark flight datasets, validation with other datasets remains of interest.