| Literature DB >> 35459050 |
Lin Yang, Hongwei Ma, Yan Wang, Jing Xia, Chuanwei Wang.
Abstract
Realizing robust six-degrees-of-freedom (6DOF) state estimation and high-performance simultaneous localization and mapping (SLAM) in perceptually degraded scenes (such as underground tunnels, corridors, and roadways) is a challenge in robotics. To address this, we propose a SLAM algorithm based on tightly coupled LiDAR-IMU fusion, which consists of two parts: front-end iterative Kalman filtering and back-end pose graph optimization. First, on the front end, an iterative Kalman filter is established to construct a tightly coupled LiDAR-Inertial Odometry (LIO). The state propagation process combines predictions with observations to estimate the robot's a priori position and attitude, which increases attitude accuracy and enhances system robustness. Second, on the back end, we deploy a keyframe selection strategy to meet the real-time requirements of large-scale scenes. Moreover, loop detection and ground constraints are added to the tightly coupled framework, further improving the overall accuracy of the 6DOF state estimation. Finally, the performance of the algorithm is verified on a public dataset and a dataset we collected. The experimental results show that, in perceptually degraded scenes, our algorithm achieves higher accuracy, better real-time performance, and greater robustness than existing LiDAR-SLAM algorithms, effectively reducing the cumulative error of the system and ensuring the global consistency of the constructed maps.
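The front end described above is built around an iterated Kalman filter: the IMU propagates a prior state, and LiDAR scan-matching residuals are folded in as observations through repeated relinearization. The following is a minimal generic sketch of such an iterated measurement update, with a user-supplied residual and Jacobian; all names and the flat-vector state are illustrative assumptions, not the paper's implementation (which operates on a manifold state with IMU bias terms).

```python
import numpy as np

def iterated_kf_update(x_prior, P_prior, residual_fn, jacobian_fn, R,
                       n_iters=5, tol=1e-6):
    """Iterated (extended) Kalman measurement update.

    x_prior     : prior state mean from IMU propagation, shape (n,)
    P_prior     : prior covariance, shape (n, n)
    residual_fn : x -> measurement residual z - h(x), shape (m,)
    jacobian_fn : x -> Jacobian of h evaluated at x, shape (m, n)
    R           : measurement noise covariance, shape (m, m)
    """
    x = x_prior.copy()
    for _ in range(n_iters):
        H = jacobian_fn(x)
        r = residual_fn(x)
        S = H @ P_prior @ H.T + R              # innovation covariance
        K = P_prior @ H.T @ np.linalg.inv(S)   # Kalman gain
        # Iterated update: relinearize about the current estimate x
        # while keeping the prior mean in the correction term.
        dx = K @ (r + H @ (x - x_prior)) - (x - x_prior)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    P = (np.eye(len(x)) - K @ H) @ P_prior     # posterior covariance
    return x, P
```

For a linear measurement model the loop converges after one relinearization and reduces to the standard Kalman update; the iteration matters when the scan-matching residual is strongly nonlinear in the pose.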
Keywords: IMU; LiDAR; SLAM; perceptually degraded scenes; state estimation
Year: 2022 PMID: 35459050 PMCID: PMC9024896 DOI: 10.3390/s22083063
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1 System framework.
Meaning of the variables involved in this paper.

| Symbol | Meaning | Symbol | Meaning |
|---|---|---|---|
| | Position | | Translation of |
| | Velocity | | Rotation of |
| | Euler angular pose | | Rotation of |
| | Quaternion pose | | Translation of |
| | Transformation matrix | | Transformation matrix from |
| | Angular velocity | | World coordinates |
| | Acceleration | | LiDAR coordinates |
| | Gravity | | Robot (IMU) coordinates |
| | Acceleration and angular velocity bias | | kth frame of LiDAR under |
| | Acceleration and angular velocity noise | | State of the |
| | 3D point cloud | | State at moment |
| | Map | | Priori state |
Figure 2 Wheeled inspection robot.
Rotation and translation errors relative to the true values on six datasets ranging from fast to slow motion.
| Error | Dataset | LOAM | LIO-Raw | LIO-No-Ext | LIO | Ours |
|---|---|---|---|---|---|---|
| Root mean square error of translation (m) | Fast_1 | 0.4469 | 0.2464 | 0.0957 | 0.0949 | |
| | Fast_2 | 0.2023 | 0.4346 | 0.1210 | 0.0755 | |
| | Med_1 | 0.1740 | 0.1413 | 0.1677 | 0.1002 | |
| | Med_2 | 0.1010 | 0.2460 | 0.3032 | 0.1308 | |
| | Slow_1 | 0.0606 | 0.1014 | 0.0838 | 0.0725 | |
| | Slow_2 | 0.0666 | 0.1016 | 0.0868 | 0.1024 | |
| Root mean square error of rotation (rad) | Fast_1 | 0.1104 | 0.1123 | 0.0547 | 0.0545 | |
| | Fast_2 | 0.0763 | 0.1063 | 0.0784 | 0.0581 | |
| | Med_1 | 0.0724 | 0.0620 | 0.0596 | 0.0570 | |
| | Med_2 | 0.0617 | 0.0886 | 0.0900 | 0.0557 | |
| | Slow_1 | 0.0558 | 0.0672 | 0.0572 | 0.0581 | |
| | Slow_2 | 0.0614 | 0.0584 | 0.0571 | 0.0533 | |
Figure 3 Comparison between the motion trajectories generated by the four different algorithms and the true values.
Figure 4 Comparison between the trajectory errors of the four different algorithms and the true value: (a) translation error; (b) rotation error.
Error statistics of relative translation and rotation between the four algorithms and the true values.

Translation error (m):

| Error | A-LOAM | LeGO-LOAM | LIO-SAM | Ours |
|---|---|---|---|---|
| Max | 1.2899 | 2.5120 | 4.7334 | |
| Mean | 0.1015 | 1.1715 | 1.5112 | |
| Min | 0.0032 | 0.0092 | 0.0066 | |
| RMSE | 0.1175 | 1.2414 | 1.7136 | |
| Std | 0.0591 | 0.4108 | 0.8079 | |

Rotation error (deg):

| Error | A-LOAM | LeGO-LOAM | LIO-SAM | Ours |
|---|---|---|---|---|
| Max | 2.9888 | 6.3016 | 3.2062 | |
| Mean | 0.2889 | 0.9832 | 0.2777 | |
| Min | 0.0027 | 0.0073 | 0.0039 | |
| RMSE | 0.3678 | 1.5164 | 0.3667 | |
| Std | 0.2277 | 1.1545 | 0.2395 | |
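The Max / Mean / Min / RMSE / Std rows above are standard summaries of a per-frame error sequence. Assuming an array of per-frame absolute errors (translation in metres or rotation in degrees), they could be computed as in this generic sketch, which is not the authors' evaluation code:

```python
import numpy as np

def error_stats(errors):
    """Summarize a sequence of per-frame absolute errors
    (e.g. translation error in metres or rotation error in degrees)."""
    e = np.asarray(errors, dtype=float)
    return {
        "Max":  e.max(),
        "Mean": e.mean(),
        "Min":  e.min(),
        "RMSE": np.sqrt(np.mean(e ** 2)),  # root mean square error
        "Std":  e.std(),                   # population standard deviation
    }
```

Note that RMSE weights large errors more heavily than the mean, so it is the row most sensitive to occasional tracking failures.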
Figure 5 Point cloud maps constructed by the four SLAM methods in the experimental environments: (a) corridor scene on the 3rd floor of the mechanical building; (b) corridor scene on the 21st floor of the innovation building; (c) corridor scene on the 11th floor of the science and technology building.
Figure 6 Effect of ground constraints on localization and map building: (a) without ground constraint; (b) with ground constraint.
Figure 7 Effect of loop detection on localization and map building: (a) without loop detection; (b) with loop detection.
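Loop detection, as in Figure 7, adds a relative-pose constraint between the current keyframe and a revisited one; re-optimizing the pose graph then redistributes the accumulated drift along the whole trajectory. A toy 1D illustration of this effect (positions on a line, solved as a linear least-squares problem; entirely illustrative, not the paper's back end):

```python
import numpy as np

def optimize_chain(odom, loops, n):
    """Least-squares pose-graph optimization on a 1D pose chain.

    odom  : list of (i, j, d) odometry constraints x_j - x_i = d
    loops : list of (i, j, d) loop-closure constraints x_j - x_i = d
    n     : number of poses; x_0 is anchored at 0
    Returns the optimized positions x_0 .. x_{n-1}.
    """
    constraints = odom + loops
    A = np.zeros((len(constraints) + 1, n))
    b = np.zeros(len(constraints) + 1)
    for row, (i, j, d) in enumerate(constraints):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, d
    A[-1, 0] = 1.0  # anchor: x_0 = 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Drifting odometry: each step is measured as 1.05 m, but a loop
# closure reports that pose 4 coincides with pose 0.
odom = [(i, i + 1, 1.05) for i in range(4)]
x_odom = optimize_chain(odom, [], 5)             # dead reckoning: ends at 4.2
x_loop = optimize_chain(odom, [(0, 4, 0.0)], 5)  # drift spread over all edges
```

With the loop-closure edge, the final pose is pulled back toward the start and the residual error is shared evenly among the odometry edges, which is the 1D analogue of the global consistency the back end enforces.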
Average running time statistics for each major module.
| Stage | Submodule | Self-Collected Datasets (ms) | KITTI Dataset (ms) |
|---|---|---|---|
| Front end | Down-sampling | 4.14 | 14.13 |
| Front end | Pre-processing | 2.89 | 8.68 |
| Front end | Scan matching | 3.45 | 16.21 |
| Back end | Map construction | 2.19 | 4.74 |
| Back end | Loop detection | 34.58 | 125.35 |
| Back end | Pose graph optimization | 13.41 | 118.62 |