| Literature DB >> 27690028 |
Shengjun Tang, Qing Zhu, Wu Chen, Walid Darwish, Bo Wu, Han Hu, Min Chen.
Abstract
RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including a limited measurement range (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, a precise calibration method for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the accuracy of the RGB image poses, a refined method for rejecting false feature matches is introduced that combines the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated on publicly available benchmark datasets collected with a Kinect. The proposed method is then examined on two datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
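The registration of the RGB scene to the depth scene described in the abstract amounts to estimating a similarity transformation (scale, rotation, translation) between corresponding 3D points, which resolves the scale ambiguity of monocular pose estimation. Below is a minimal sketch using the closed-form Umeyama least-squares solution; this is an illustrative stand-in under that assumption, not the paper's exact recovery method.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst_i ≈ s * R @ src_i + t.

    Closed-form least-squares solution (Umeyama, 1991) for two N x 3 point sets
    of corresponding 3D points, e.g. an RGB scene registered to a depth scene.
    """
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    x, y = src - mu_src, dst - mu_dst            # centered point sets
    cov = y.T @ x / len(src)                     # 3x3 cross-covariance
    U, d, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0: # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (x ** 2).sum() / len(src)          # variance of the source set
    s = np.trace(np.diag(d) @ S) / var_src       # optimal uniform scale
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Given at least three non-collinear correspondences, the recovered scale fixes the monocular scale ambiguity and (R, t) provides the rigid transformation between the two scenes.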
Keywords: RGB-D camera; camera pose; depth; image; indoor modeling; registration
Year: 2016 PMID: 27690028 PMCID: PMC5087378 DOI: 10.3390/s16101589
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. (Top) The hardware scheme of the RGB-D sensor (sensor with RGB camera and depth camera); (bottom left) the acquired depth image; and (bottom right) the acquired RGB image.
Figure 2. Flowchart of the enhanced RGB-D mapping approach.
Figure 3. Methodology for RGB-D camera calibration.
Figure 4. Relationship between the camera and the sensor coordinate systems.
Figure 5. (Left) Feature matches from an RGB image; and (right) feature matches on the corresponding depth image.
Figure 6. Estimated trajectories compared against ground-truth trajectories.
Comparison of the median (maximum) absolute trajectory error in mm for joint optimization on RGB-D sequences of the Freiburg benchmark dataset; best results in bold.
| Dataset | Ours (Median) | Ours (Max) | 3D-NDT (Median) | 3D-NDT (Max) | Warp (Median) | Warp (Max) | Fovis (Median) | Fovis (Max) |
|---|---|---|---|---|---|---|---|---|
| | 47.8 | 26.6 | 6.2 | 147 | 6.3 | 34.2 | | |
| | 9.1 | 14 | 18 | 2 | 1.9 | 9.9 | | |
| | 7 | 18.6 | 74.6 | 19.2 | 246 | 20.8 | 101.5 | |
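The absolute trajectory error (ATE) reported above is the per-frame Euclidean distance between estimated and ground-truth camera positions after the trajectories have been aligned. A minimal sketch of the median/maximum statistics, assuming the two trajectories are already aligned and given as N x 3 position arrays:

```python
import numpy as np

def ate_stats(gt, est):
    """Median and maximum absolute trajectory error between two aligned
    trajectories, given as N x 3 arrays of camera positions (here in mm)."""
    err = np.linalg.norm(gt - est, axis=1)  # per-frame position error
    return float(np.median(err)), float(err.max())
```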
Calibration results of the IR camera and RGB camera.
| Sensor | Parameter | Symbol | Value |
|---|---|---|---|
| IR | Focal length (pixels) | fxD | 580 ± 3.49 |
| IR | | fyD | 581 ± 3.27 |
| IR | Principal point (pixels) | cxD | 331.59 ± 1.57 |
| IR | | cyD | 236.59 ± 1.98 |
| IR | Distortion | K1D | −0.0075 ± 0.0188 |
| IR | | K2D | 1.7812 ± 0.3383 |
| IR | | P1D | −0.0047 ± 0.0009 |
| IR | | P2D | 0.0017 ± 0.0013 |
| IR | | K3D | −8.7810 ± 1.95 |
| RGB | Focal length (pixels) | fxC | 570.63 ± 3.43 |
| RGB | | fyC | 570.96 ± 3.20 |
| RGB | Principal point (pixels) | cxC | 319.84 ± 1.55 |
| RGB | | cyC | 244.96 ± 2.01 |
| RGB | Distortion | K1C | −0.0378 ± 0.0209 |
| RGB | | K2C | −0.5221 ± 0.3959 |
| RGB | | P1C | −0.0025 ± 0.0007 |
| RGB | | P2C | −0.0014 ± 0.0010 |
| RGB | | K3C | 3.9233 ± 2.3220 |
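With the intrinsic parameters in the table and the calibrated relative pose between the IR and RGB cameras, each depth pixel can be back-projected to a 3D point and re-projected into the RGB image to associate colour with depth. The sketch below assumes an undistorted pinhole model (the tabulated distortion coefficients are omitted for brevity), and the identity-pose defaults are placeholders for the calibrated extrinsics:

```python
import numpy as np

# Calibrated intrinsics from the table above (pixels).
FX_D, FY_D, CX_D, CY_D = 580.0, 581.0, 331.59, 236.59    # IR/depth camera
FX_C, FY_C, CX_C, CY_C = 570.63, 570.96, 319.84, 244.96  # RGB camera

def depth_pixel_to_rgb(u, v, z, R=np.eye(3), t=np.zeros(3)):
    """Back-project depth pixel (u, v) at depth z (metres) into 3D, then
    re-project into the RGB image. R, t are the IR-to-RGB relative pose;
    the identity/zero defaults stand in for the stereo-calibrated values."""
    # 3D point in the depth-camera frame
    p = np.array([(u - CX_D) * z / FX_D, (v - CY_D) * z / FY_D, z])
    # Transform into the RGB-camera frame and project with the RGB intrinsics
    q = R @ p + t
    return FX_C * q[0] / q[2] + CX_C, FY_C * q[1] / q[2] + CY_C
```

With an identity relative pose, a ray through the depth principal point lands on the RGB principal point, which makes for a quick sanity check of the intrinsics.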
Figure 7(a) Dataset captured along a corridor; and (b) dataset captured in the outside environment.
Sequential alignment comparison with different methods.
| Method | Avg. Translational Error, Corridor (m) | Avg. Translational Error, Chair (m) | Avg. Angular Error, Corridor (deg) | Avg. Angular Error, Chair (deg) | Avg. Distance Error, Corridor (m) | Avg. Distance Error, Chair (m) |
|---|---|---|---|---|---|---|
| ICP | 0.236 | 0.143 | 3.563 | 1.724 | 0.265 | |
| ICP + Global Optimization | 0.068 | 0.032 | 2.153 | 0.983 | 0.081 | |
Figure 8. False match rejection for the corridor model.
Statistics on discrepancies in object space between the models from depth images and RGB images.
| Dataset | Scale Factor | Rotation Angles | Translation Components | σx (m) | σy (m) | σz (m) |
|---|---|---|---|---|---|---|
| Corridor Model | 2.796 | 174.997°, 4.657°, 41.335° | 2.694, 1.546, −6.329 | 0.026 | 0.019 | 0.023 |
| Chair Model | 1.075 | 174.915°, 6.536°, −21.312° | −0.955, −0.332, −3.304 | 0.015 | 0.014 | 0.012 |
Figure 9. Results of geometric registration for the corridor model.
Figure 10. Results of geometric registration for the chair model.