| Literature DB >> 31382700 |
Junxiang Jiang, Xiaoji Niu, Ruonan Guo, Jingnan Liu.
Abstract
The fusion of visual and inertial measurements for motion tracking has become prevalent in the robotics community due to its complementary sensing characteristics, low cost, and small space requirements. This fusion task is known as the vision-aided inertial navigation system (VINS) problem. We present a novel hybrid sliding window optimizer (HSWO) to achieve information fusion for a tightly-coupled VINS; it possesses the advantages of both the conditioning-based method and the prior-based method. We also designed a novel distributed marginalization method based on the multi-state constraints (MSC) method, which yields a significant efficiency improvement over the traditional method. The performance of the proposed algorithm was evaluated on the publicly available EuRoC datasets and showed competitive results compared with existing algorithms.
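At the core of the prior-based half of such a sliding window optimizer is the marginalization of old states, which for Gauss-Newton style solvers reduces to a Schur complement on the normal equations. The sketch below shows that standard step, not the paper's actual implementation; the dense matrices `H` and `b` and the leading-block split `m` are illustrative assumptions.

```python
import numpy as np

def marginalize(H, b, m):
    """Fold the first m state dimensions of the normal equations
    H @ dx = b into a dense prior on the remaining dimensions,
    using the Schur complement of the marginalized block."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    # Solve against Hmm rather than forming its explicit inverse.
    H_prior = Hrr - Hrm @ np.linalg.solve(Hmm, Hmr)
    b_prior = br - Hrm @ np.linalg.solve(Hmm, bm)
    return H_prior, b_prior

# Toy usage: marginalize the first 3 of 9 state dimensions.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 9))
H = A.T @ A                  # symmetric positive-definite information matrix
b = rng.standard_normal(9)
H_prior, b_prior = marginalize(H, b, 3)
```

The conditioning-based alternative instead fixes old states at their current estimates, which is cheaper but discards their uncertainty; that trade-off is what the hybrid scheme in the abstract aims to balance.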
Keywords: MSC; VINS; conditioning; marginalization; optimization; sliding window
Year: 2019 PMID: 31382700 PMCID: PMC6696157 DOI: 10.3390/s19153418
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The structure of the hybrid sliding window optimizer.
The RMSEs of the keyframe trajectory error with changing N, M, and F.

| Parameter Setting | Median of RMSEs (m) |
|---|---|
| | 0.092 |
| | 0.125 |
| | 0.071 |
| | 0.068 |
| | 0.063 |

“Nx-My-Fz” means N = x, M = y, and F = z.
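The RMSE values in this table (and in the EuRoC comparison below) are absolute trajectory errors against ground truth. A minimal sketch of how such a value is typically computed, assuming the usual rigid alignment of the estimate to the ground-truth frame first; the Kabsch-style alignment and all names are ours, not the paper's:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (position RMSE, metres) between an
    estimated trajectory and ground truth, both (N, 3) arrays, after
    rigid (rotation + translation) alignment via the Kabsch method."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    U, _, Vt = np.linalg.svd((est - mu_e).T @ (gt - mu_g))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ D @ Vt).T               # rotation taking est into the gt frame
    t = mu_g - R @ mu_e
    err = (est @ R.T + t) - gt       # per-pose position residuals
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))
```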
RMSEs of absolute metric position errors (m) on the EuRoC datasets.

| Dataset | MSCKF | OKVIS | ROVIO | VINSMONO | VI-ORB-SLAM [ ] | VI-DSO [ ] | HSWO, KF Traj. | HSWO, Full Traj. | HSWO (Limited), KF Traj. | HSWO (Limited), Full Traj. |
|---|---|---|---|---|---|---|---|---|---|---|
| MH01 | 0.42 | 0.16 | 0.21 | 0.27 | 0.075 | | 0.048 | 0.078 | 0.075 | |
| MH02 | 0.45 | 0.22 | 0.25 | 0.12 | 0.084 | | 0.036 | 0.072 | 0.057 | |
| MH03 | 0.23 | 0.24 | 0.25 | 0.13 | 0.087 | 0.117 | 0.063 | 0.055 | | |
| MH04 | 0.37 | 0.34 | 0.49 | 0.23 | 0.217 | | 0.099 | 0.169 | 0.216 | |
| MH05 | 0.48 | 0.47 | 0.52 | 0.35 | 0.082 | 0.121 | 0.063 | 0.139 | | |
| V101 | 0.34 | 0.09 | 0.10 | 0.07 | | 0.059 | 0.039 | 0.035 | 0.034 | |
| V102 | 0.20 | 0.20 | 0.10 | 0.10 | | 0.067 | 0.032 | 0.036 | 0.042 | |
| V103 | 0.67 | 0.24 | 0.14 | 0.13 | - | 0.096 | 0.057 | 0.055 | | |
| V201 | 0.10 | 0.13 | 0.12 | 0.08 | | 0.033 | 0.027 | 0.036 | 0.042 | |
| V202 | 0.16 | 0.16 | 0.14 | 0.08 | 0.041 | 0.062 | 0.027 | 0.043 | | |
| V203 | 1.13 | 0.29 | 0.14 | 0.21 | | 0.174 | 0.083 | 0.072 | - | - |

The MSCKF, OKVIS, ROVIO, and VINSMONO results are cited from [ ]. The keyframe trajectory with the highest accuracy is highlighted in bold; the best performance among the VINS pipelines, except for VI-ORB-SLAM and HSWO, is also highlighted in bold.
Figure 2. Results for the MH_03_medium image sequence after visual-inertial initialization: (a) Full trajectory after visual-inertial initialization; (b) Translation deviation with respect to the ground truth; (c) Rotation deviation of the camera-to-IMU transformation; (d) Translation deviation of the camera-to-IMU transformation.
Figure 3. Results for the V1_03_difficult image sequence after visual-inertial initialization: (a) Full trajectory after visual-inertial initialization; (b) Translation deviation with respect to the ground truth; (c) Rotation deviation of the camera-to-IMU transformation; (d) Translation deviation of the camera-to-IMU transformation.
Figure 4. The yaw error of our implementation on the Machine Hall datasets.
Figure 5. The yaw error of our implementation on the Vicon Room datasets.
Time consumption of the HSWO and the LBA method.
| Method | Median (ms) | Mean (ms) | Max (ms) | Std (ms) |
|---|---|---|---|---|
| HSWO | 189 | 219 | 507 | 82 |
| LBA | 163 | 216 | 1202 | 189 |
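The statistics above are per-update wall-clock times. For reference, a minimal sketch of how such figures can be gathered; `step_fn` stands in for one optimizer iteration and is a hypothetical placeholder, not a function from the paper's code:

```python
import time
import statistics

def profile(step_fn, n_runs=100):
    """Time repeated calls of step_fn and report the same summary
    statistics as the table above: median, mean, max, std (all ms)."""
    samples_ms = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        step_fn()                    # e.g. one sliding-window update
        samples_ms.append((time.perf_counter() - t0) * 1e3)
    return {
        "median": statistics.median(samples_ms),
        "mean": statistics.fmean(samples_ms),
        "max": max(samples_ms),
        "std": statistics.stdev(samples_ms),
    }
```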
Figure 6. The comparison of the time consumption of our method and the traditional method.