Xinzheng Zhang, Ahmad B. Rad, Yiu-Kwong Wong.
Abstract
This paper presents a sensor fusion strategy for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach has two features: (i) the first is a fusion module that synthesizes line segments obtained from a laser rangefinder with line features extracted from a monocular camera. This policy eliminates pseudo segments that appear in the laser data when dynamic objects pause momentarily. (ii) The second is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular SLAM and laser SLAM. The localization error of the fused SLAM is reduced compared with that of each individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method reduces redundant computation. The experimental results validate the performance of the proposed sensor fusion and data association methods.
Keywords: SLAM; feature fusion; homography transform matrix; multi-sensor point estimation fusion (MPEF)
Year: 2012 PMID: 22368478 PMCID: PMC3279222 DOI: 10.3390/s120100429
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The world and robot/camera coordinate references. Red and subscript W: world reference; black and subscript R: camera/robot reference; a is the height above the ground plane.
Figure 2. (a) The region of interest (ROI); (b) Detected horizontal edges in the ROI without the morphological operation; some edges related to dynamic objects are not removed; (c) Detected horizontal edges in the ROI after the shrink and clean morphological operations with thresholding; (d) Selected line features in the ROI after the thicken operation.
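A minimal sketch of this ROI edge-extraction step is given below, using OpenCV as an assumed toolchain; the threshold value, kernel size, and the substitution of morphological opening and dilation for the paper's exact "shrink/clean" and "thicken" operators are illustrative choices, not the authors' implementation.

import cv2
import numpy as np

def horizontal_edges_in_roi(gray, roi):
    """Binary mask of near-horizontal edges inside a region of interest.

    gray : uint8 grayscale image
    roi  : (row0, row1, col0, col1) bounds of the region of interest
    """
    r0, r1, c0, c1 = roi
    patch = gray[r0:r1, c0:c1]

    # The vertical intensity gradient responds to horizontal edges.
    gy = cv2.Sobel(patch, cv2.CV_32F, dx=0, dy=1, ksize=3)
    mag = cv2.convertScaleAbs(gy)

    # Threshold the gradient magnitude (threshold value is illustrative).
    _, edges = cv2.threshold(mag, 40, 255, cv2.THRESH_BINARY)

    # Morphological opening plays the role of the shrink-and-clean step:
    # it removes small isolated responses such as those left by moving objects.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 1))
    edges = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)

    # Dilation approximates the thicken operation before line selection.
    edges = cv2.dilate(edges, kernel)
    return edges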
Figure 3. Matched points determined by SIFT descriptors.
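Keypoint matching of the kind shown in Figure 3 can be reproduced with OpenCV's SIFT implementation; the sketch below is an assumption about tooling (file names are hypothetical, and the paper's own matching criteria may differ from Lowe's ratio test used here).

import cv2

# Hypothetical file names; the two images are consecutive frames as in Figure 3.
img_current = cv2.imread("frame_current.png", cv2.IMREAD_GRAYSCALE)
img_stored = cv2.imread("frame_stored.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_cur, des_cur = sift.detectAndCompute(img_current, None)
kp_sto, des_sto = sift.detectAndCompute(img_stored, None)

# Keep only matches that pass Lowe's ratio test on the two nearest neighbours.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_cur, des_sto, k=2)
        if m.distance < 0.75 * n.distance]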
HTMDA algorithm.
| // Input: current captured image, stored image Img(k), map line endpoints EndsMap, observed line endpoints EndsObs |
| // Output: data association indicator DA |
| [desCur, locCur] = sift(CurrentImg); // Find SIFT keypoints for the current captured image. The outputs are |
| // desCur: descriptors of the keypoints; locCur: keypoint locations |
| [desK, locK] = sift(Img(k)); // Find SIFT keypoints for the kth stored image |
| // Estimate the homography matrix M(k) and its covariance ΣM(k) from the matched keypoints |
| [M(k), ΣM(k)] = HomographyEstimation(locCur, locK, desCur, desK); |
| // Observation prediction |
| EndsPred(j) = M(k) · EndsMap(j); // Project the endpoints of the jth map line into the current image |
| sm = (EndsObs(i) − EndsPred(j))ᵀ · (ΣM(k))⁻¹ · (EndsObs(i) − EndsPred(j)); // Mahalanobis distance |
| if sm satisfies condition (7) for both endpoints |
|     DA(i, j, k) = 1; |
| elseif the two predicted line points lie on the same observed line, or one Mahalanobis distance value meets condition (7) and one predicted line point lies on the observed line |
|     DA(i, j, k) = 1; |
| else |
|     DA(i, j, k) = 0; |
| end |
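A compact sketch of the homography-based gating used by HTMDA follows; this is illustrative Python/NumPy, not the paper's code. The covariance passed in is assumed to be the 2x2 endpoint covariance derived from the homography uncertainty, and the chi-square value of 9.21 stands in for the threshold of condition (7).

import numpy as np

def inside_gate(ends_obs, ends_pred, cov, gate):
    """Return True if the squared Mahalanobis distance is inside the gate."""
    d = ends_obs - ends_pred
    return float(d.T @ np.linalg.inv(cov) @ d) < gate

def associate(ends_obs, ends_map, H, cov_end, gate=9.21):
    """Decide whether an observed line matches a map line under homography H.

    ends_obs, ends_map : 3x2 arrays of homogeneous endpoint coordinates (columns)
    H                  : 3x3 homography estimated from SIFT matches
    cov_end            : assumed 2x2 covariance of a predicted endpoint
    gate               : chi-square threshold standing in for condition (7)
    """
    # Project both map endpoints into the current image through H.
    pred = H @ ends_map
    pred /= pred[2, :]                      # normalize homogeneous coordinates

    hits = sum(
        inside_gate(ends_obs[:2, i], pred[:2, i], cov_end, gate)
        for i in range(2)
    )
    # Both endpoints inside the gate -> accept the association (DA = 1).
    return hits == 2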
Modified MPEF-SLAM algorithm.
| // Robot pose initialization |
| [ … ]; |
| [ … ]; |
| [ … ]; |
| Q = createQ( … ); // Process noise covariance |
| // Line feature initialization |
| SegC = HorizontalEdge(image); // Horizontal line extraction from the 1st captured image |
| [ … ]; // Intrinsic parameters of the camera |
| SegL = LineExtraction(laserdata); // Segment extraction from the 1st frame of laser data |
| [ … ]; |
| // State variable and related covariance initialization |
| // Fused robot pose initialization |
| // Main loop |
| k = 1; |
| while mapping continues |
|     uk = getControl(k); // Obtain control variables |
|     [ … ]; // EKF prediction and update of the monocular SLAM |
|     [ … ]; // EKF prediction and update of the laser SLAM |
|     // Do MPEF procedure: fuse the two individual pose estimates |
|     // Propagate backward the MPEF results to each individual SLAM |
|     // Update individual covariances |
|     k = k + 1; |
| end |
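The MPEF step combines the two EKF pose estimates and then propagates the fused result back to each filter. The sketch below uses standard inverse-covariance (information-weighted) fusion as a minimal stand-in; the paper's MPEF equations may differ in detail.

import numpy as np

def fuse_estimates(x_laser, P_laser, x_cam, P_cam):
    """Fuse two EKF state estimates by inverse-covariance weighting.

    Returns the fused state and covariance; in the listing above the fused
    result is then propagated back to both individual SLAM filters.
    """
    I_laser = np.linalg.inv(P_laser)
    I_cam = np.linalg.inv(P_cam)

    P_fused = np.linalg.inv(I_laser + I_cam)
    x_fused = P_fused @ (I_laser @ x_laser + I_cam @ x_cam)
    return x_fused, P_fused

# Example with 3-DOF robot poses [x, y, theta] (illustrative values only)
x_f, P_f = fuse_estimates(
    np.array([1.02, 0.48, 0.31]), np.diag([0.04, 0.05, 0.01]),
    np.array([0.98, 0.52, 0.29]), np.diag([0.09, 0.08, 0.02]),
)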
Intrinsic Parameters.
| Focal length | fc = [365.12674 365.02905] (pixels) |
| Principal point | cc = [145.79917 114.50956] (pixels) |
| Skew factor | alpha_c = 0.000 |
| Distortion coefficients | kc = [−0.22776, 0.36413, −0.00545, −0.00192, 0.000] |
| Pixel error (std) | err = [0.10083 0.10936] (pixels) |
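For reference, the calibration values above assemble into the usual pinhole intrinsic matrix; the short sketch below shows this, and assumes OpenCV's distortion model (with kc ordered k1, k2, p1, p2, k3) when undistorting pixel coordinates, which is an assumption about how the coefficients are applied.

import numpy as np
import cv2

fc = np.array([365.12674, 365.02905])    # focal lengths (pixels)
cc = np.array([145.79917, 114.50956])    # principal point (pixels)
alpha_c = 0.0                            # skew factor
kc = np.array([-0.22776, 0.36413, -0.00545, -0.00192, 0.0])  # distortion

K = np.array([
    [fc[0], alpha_c * fc[0], cc[0]],
    [0.0,   fc[1],           cc[1]],
    [0.0,   0.0,             1.0 ],
])

# Undistort pixel coordinates before using them for line-feature extraction.
pts = np.array([[[160.0, 120.0]]], dtype=np.float32)   # illustrative pixel
undistorted = cv2.undistortPoints(pts, K, kc, P=K)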
Figure 4. Local mapping result at the 33rd sample time. (a) The original captured image; a person stood in front of the robot for a moment. (b) The extracted image lines and their endpoints in the ROI; the cyan line is the first extracted one and is numbered 1. (c) The extracted laser segments within the HFOV and a pseudo segment (segment 4) caused by the standing person. (d) After integrating the line information extracted from the images, the incorrect segment was removed.
The hypothesis test of feature fusion (rows 1-5; the table's cell values were lost in extraction). … means no fusion process is implemented; H0 means the laser segments are assumed to belong to the noise, and H1 implies the laser segments are probably related to image line features.
Figure 5. The laser SLAM results with feature fusion. (a) The robot trajectory and the grid map plotted by ActivMedia's software using raw laser data; (b) The final map built using feature fusion; the part circled by the ellipse is enlarged to show the details. Orange: the segment map after fusion; light gray: the grid map for comparison.
Figure 6. The estimated covariance of the fused and individual robot poses. Red line: estimated covariance in laser SLAM; green line: estimated covariance in monocular SLAM; blue lines: estimated covariance in MPEF-SLAM.
Figure 7. The estimated covariance of an endpoint of one line feature. Red line: estimated covariance in laser SLAM; green line: estimated covariance in monocular SLAM; blue lines: estimated covariance in MPEF-SLAM.
Figure 8. The endpoints and the projected intersection point of the lines in the stored and captured images. Image 1 (left) was captured at the 57th sample time and image 2 (right) at the 58th sample time.
Figure 9. Errors between the actual and estimated locations of endpoint 2 from the 40th to the 60th sample time. The 99% confidence limit is shown as a red dash-dot line.
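A 99% confidence limit of the kind plotted in Figure 9 can be derived from the filter's estimated endpoint covariance. The sketch below uses a simple per-axis 2.576-sigma Gaussian bound with illustrative numbers; the paper may instead use a chi-square bound on the joint endpoint error.

import numpy as np

def confidence_limit_99(P_endpoint):
    """Per-axis 99% confidence bound from a 2x2 endpoint covariance matrix."""
    sigma = np.sqrt(np.diag(P_endpoint))
    return 2.576 * sigma   # 99% two-sided bound for a Gaussian error

# Example: covariance of one endpoint at one sample time (illustrative values)
P = np.array([[0.0025, 0.0004],
              [0.0004, 0.0031]])
print(confidence_limit_99(P))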