Liang Wang, Zhiqiu Wu.
Abstract
In existing sparse Red Green Blue-Depth (RGB-D) Simultaneous Localization and Mapping (SLAM) algorithms, the pairwise spatial transformation is computed by matching features extracted at detected key points; image noise, image blur, and inconsistency between the depth data and the color image make this computation inaccurate and fragile. Considering that most indoor environments follow the Manhattan World assumption, so that the Manhattan Frame can serve as a reference for computing the pairwise spatial transformation, a new RGB-D SLAM algorithm is proposed. It first performs Manhattan Frame estimation using the introduced concept of orientation relevance. The pairwise spatial transformation between two RGB-D frames is then computed from the Manhattan Frame estimates. Finally, Manhattan Frame estimation with orientation relevance is incorporated into RGB-D SLAM to improve its performance. Experimental results show that the proposed RGB-D SLAM algorithm yields definite improvements in accuracy, robustness, and runtime.
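Once a per-frame Manhattan Frame has been estimated, the pairwise rotation between two cameras can be recovered by composing the two alignments. A minimal NumPy sketch of this idea, assuming `R_i` and `R_j` are rotations mapping the shared Manhattan-world axes into camera i and camera j respectively (the function names are illustrative, not from the paper, and the 90° axis-labeling ambiguity of the Manhattan Frame is assumed already resolved):

```python
import numpy as np

def relative_rotation(R_i: np.ndarray, R_j: np.ndarray) -> np.ndarray:
    """Rotation taking camera-i coordinates to camera-j coordinates.

    R_i, R_j: 3x3 rotations mapping Manhattan-world axes into each
    camera frame, e.g. as produced by a Manhattan Frame estimator.
    For a world point p: p_i = R_i @ p and p_j = R_j @ p, hence
    p_j = (R_j @ R_i.T) @ p_i.
    """
    return R_j @ R_i.T

def rot_z(theta: float) -> np.ndarray:
    """Rotation by theta radians about the z axis (demo helper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```

Because each rotation is measured against the same global reference, the pairwise estimate does not depend on feature matches between the two frames, which is the motivation for using the Manhattan Frame as a reference.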
Keywords: Manhattan frame estimation; RGB-D; SLAM; indoor environment; orientation relevance; spatial transformation
Year: 2019 PMID: 30832227 PMCID: PMC6427174 DOI: 10.3390/s19051050
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Schematic overview of (a) the original Red Green Blue-Depth (RGB-D) Simultaneous Localization and Mapping (SLAM) and (b) the proposed RGB-D SLAM.
Details of sequences from the Red Green Blue-Depth (RGB-D) Simultaneous Localization and Mapping (SLAM) dataset [23].
| Sequence | Frames | Duration (s) | Length (m) | Avg. Trans. (m/s) | Avg. Rot. (°/s) | Range (m) |
|---|---|---|---|---|---|---|
| fr1/360 | 745 | 28.69 | 5.82 | 0.21 | 41.60 | 0.54 × 0.46 × 0.47 |
| fr3/long_office | 2585 | 87.09 | 21.45 | 0.25 | 10.19 | 5.12 × 4.89 × 0.54 |
| fr1/floor | 1214 | 49.87 | 12.57 | 0.258 | 15.07 | 2.30 × 1.31 × 1.58 |
Figure 2. Experimental results of the proposed RGB-D SLAM with sequence fr1/360. (a) Mapping results in the form of a volumetric 3D model. (b) Estimated trajectories.
Figure 3. Experimental results of the proposed RGB-D SLAM with sequence fr3/long_office_household. (a) Mapping results in the form of a volumetric 3D model. (b) Estimated trajectories.
Figure 4. Experimental results of the proposed RGB-D SLAM with sequence fr1/floor. (a) Mapping results in the form of a volumetric 3D model. (b) Estimated trajectories.
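The trajectory accuracy figures reported in the tables below are root-mean-square errors. As a hedged sketch of how such a number is typically obtained (the paper's exact evaluation protocol may differ, e.g. it may rigidly align the trajectories first), the absolute-trajectory-error RMSE over time-associated positions is:

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Absolute-trajectory-error RMSE between estimated and
    ground-truth camera positions, both of shape (N, 3), assumed to
    be time-associated and expressed in the same reference frame."""
    diffs = est - gt                        # per-pose position error
    sq_norms = np.sum(diffs ** 2, axis=1)   # squared Euclidean errors
    return float(np.sqrt(np.mean(sq_norms)))
```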
Trajectory results of RGB-D SLAM with the fr1/360 sequence.
| Method | Translation RMSE (m) | RI | Rotation RMSE (°) | RI | Runtime (s) | RI |
|---|---|---|---|---|---|---|
| original method […] | 0.103 | − | 3.41 | − | 145 | − |
| method with RMFE […] | 0.107 | | 3.37 | | 112 | |
| proposed method | 0.082 | | 3.10 | | 100 | |
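The RI columns are blank in this extraction. If RI denotes the relative improvement over the original method (an assumption about this excerpt, not a statement from the paper), it can be derived directly from the tabulated values:

```python
def relative_improvement(baseline: float, value: float) -> float:
    """Relative improvement of `value` over `baseline`, in percent.
    Positive means `value` is better, i.e. a smaller error/runtime."""
    return 100.0 * (baseline - value) / baseline

# Translation RMSE on fr1/360: 0.103 m (original) vs 0.082 m (proposed)
print(round(relative_improvement(0.103, 0.082), 1))  # about 20.4 % improvement
```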
Trajectory results of RGB-D SLAM with the fr3/long_office_household sequence.
| Method | Translation RMSE (m) | RI | Rotation RMSE (°) | RI | Runtime (s) | RI |
|---|---|---|---|---|---|---|
| original method […] | 0.082 | − | 1.63 | − | 722 | − |
| proposed method | 0.052 | | 1.52 | | 511 | |
Trajectory results of RGB-D SLAM with the fr1/floor sequence.
| Method | Translation RMSE (m) | RI | Rotation RMSE (°) | RI | Runtime (s) | RI |
|---|---|---|---|---|---|---|
| original method […] | 0.061 | − | 2.72 | − | 488 | − |
| proposed method | 0.054 | | 2.69 | | 402 | |