André Silva Aguiar1,2, Filipe Neves Dos Santos1, Héber Sobreira1, José Boaventura-Cunha1,2, Armando Jorge Sousa1,3.
Abstract
Developing ground robots for agriculture is a demanding task. Robots should be capable of performing tasks like spraying, harvesting, or monitoring. However, the absence of structure in agricultural scenes challenges the implementation of localization and mapping algorithms. Thus, the research and development of localization techniques are essential to boost agricultural robotics. To address this issue, we propose an algorithm called VineSLAM, suitable for localization and mapping in agriculture. This approach uses both point and semiplane features extracted from 3D LiDAR data to map the environment and localize the robot using a novel Particle Filter that considers both feature modalities. The numerical stability of the algorithm was tested using simulated data. The proposed methodology proved suitable for localizing a robot using only three orthogonal semiplanes. Moreover, the entire VineSLAM pipeline was compared against a state-of-the-art approach in three real-world experiments in a woody-crop vineyard. Results show that our approach can localize the robot precisely even in long and symmetric vineyard corridors, outperforming the state-of-the-art algorithm in this context.
Keywords: 3D LIDAR; agriculture; localization; mapping; semiplanes
Year: 2022 PMID: 35155589 PMCID: PMC8831384 DOI: 10.3389/frobt.2022.832165
Source DB: PubMed Journal: Front Robot AI ISSN: 2296-9144
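The abstract describes a Particle Filter that weights particles by both point and semiplane observations. A minimal, hypothetical sketch of that weight-update step (the likelihood functions and 1-D poses below are toy assumptions, not the paper's implementation):

```python
import math

def update_particle_weights(particles, point_lik, plane_lik):
    """Weight each particle by the product of its point-based and
    semiplane-based likelihoods, then normalize to sum to one."""
    weights = [point_lik(p) * plane_lik(p) for p in particles]
    total = sum(weights)
    if total == 0:
        # Degenerate case: fall back to a uniform distribution.
        return [1.0 / len(particles)] * len(particles)
    return [w / total for w in weights]

# Toy usage: 1-D particle poses; both likelihoods peak at pose 0.
particles = [-1.0, 0.0, 1.0]
point_lik = lambda x: math.exp(-x * x)
plane_lik = lambda x: math.exp(-abs(x))
weights = update_particle_weights(particles, point_lik, plane_lik)
```

Combining the two modalities as a product means a particle must agree with both the point map and the semiplane map to keep a high weight.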
Summary of the current state-of-the-art on plane-based localization and mapping.
| References | Application | Feature extraction | Mapping |
|---|---|---|---|
| | 3D reconstruction of indoor spaces using hand-held sensors. | Points: image feature detector; planes: RANSAC algorithm. | RANSAC-based registration algorithm. |
| | Localization and mapping of low-texture indoor environments. | CNN-based plane detection. | Point and semiplane registration. |
| | Egomotion estimation in indoor and outdoor semi-structured environments. | Plane extraction using a PCA technique. | Iterative Closest Point (ICP) algorithm used for plane registration. |
| | Velodyne point-plane SLAM in challenging indoor and outdoor environments. | Groups of points arising from planar surfaces found on a scan-line basis. | Registration with a developed algorithm: Iterative Closest Point Plus Plane Optimization (IC3PO). |
| | SLAM in indoor environments. | Plane segmentation using a connected-component-based approach. | Points added by triangulation; observed planes added if no correspondence is found. |
| | Mapping of indoor environments using hand-held sensors. | Plane segmentation from point cloud data. | Infinite plane representation and mapping. |
| | Localization and mapping of indoor environments. | Divide-and-conquer approach: best-fitting planes from small regions. | 3D map built using an Extended Kalman Filter (EKF). |
| | SLAM in indoor environments using hand-held sensors. | Planar surfaces extracted using RANSAC. | Registration based on a similarity test. |
| | 3D reconstruction of indoor environments. | Planes extracted using RANSAC. | Planes registered and fused using a weight function. |
| | Planar representation of indoor and outdoor environments. | Plane segments extracted by a 2D Delaunay triangulation. | Registration using the overlap between planes. |
| | Outdoor SLAM. | Planes extracted using RANSAC. | Planes matched and registered using orientation, translation, and closeness. |
| Our approach | Autonomous navigation in outdoor agricultural environments. | Point-wise and three-stage semiplane-wise feature extraction. | Point registration and semiplane matching and merging algorithm for registering and mapping. |
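The "semiplane matching and merging" step in the last table row can be illustrated, in simplified form, as matching an observed plane to a mapped one by normal alignment and centroid proximity; the thresholds and the (normal, centroid) representation below are illustrative assumptions, not the paper's exact criteria:

```python
import math

def match_plane(candidate, mapped_planes, ang_thresh=0.1, dist_thresh=0.2):
    """Return the index of the first mapped plane whose normal is within
    ang_thresh radians and whose centroid is within dist_thresh meters of
    the candidate; -1 if no correspondence is found (in which case the
    candidate would be added to the map as a new semiplane)."""
    n1, c1 = candidate
    for i, (n2, c2) in enumerate(mapped_planes):
        # Clamp the dot product to avoid acos domain errors from rounding.
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
        if math.acos(dot) < ang_thresh and math.dist(c1, c2) < dist_thresh:
            return i
    return -1

# Toy map: a ground plane and a vertical wall, each as (normal, centroid).
ground = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0))
wall = ((1.0, 0.0, 0.0), (2.0, 0.0, 1.0))
obs = ((0.0, 0.02, 0.999), (0.05, 0.0, 0.0))  # noisy ground observation
```

Here `match_plane(obs, [ground, wall])` associates the noisy observation with the ground plane, while a wall observation with a perpendicular normal would return -1.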
FIGURE 1 System architecture partitioned into three main layers: perception, localization, and mapping.
FIGURE 2 Corner (yellow) and planar (red) feature extraction example in a woody-crop vineyard.
FIGURE 3 Vertical angle definition. Green dots are estimated ground points, and red dots are non-ground points. The vertical angle is estimated between two consecutive points of the same column in the range image.
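The vertical-angle test of Figure 3 can be sketched as follows: for two consecutive points in the same range-image column, a near-horizontal segment suggests ground. The 10-degree threshold and coordinate conventions are assumptions for illustration:

```python
import math

def vertical_angle(p_lower, p_upper):
    """Angle (degrees) of the segment between two consecutive points in
    the same range-image column, measured from the horizontal plane."""
    dx = p_upper[0] - p_lower[0]
    dy = p_upper[1] - p_lower[1]
    dz = p_upper[2] - p_lower[2]
    return math.degrees(math.atan2(dz, math.hypot(dx, dy)))

def is_ground(p_lower, p_upper, max_angle_deg=10.0):
    """Classify the pair as ground if the segment is nearly horizontal."""
    return abs(vertical_angle(p_lower, p_upper)) <= max_angle_deg

flat = is_ground((1.0, 0.0, -0.5), (2.0, 0.0, -0.48))   # ~1.1 degrees
trunk = is_ground((3.0, 0.0, -0.5), (3.05, 0.0, 0.5))   # ~87 degrees
```

A flat stretch of soil yields a small angle and is labeled ground, while a vine trunk produces a near-vertical segment and is rejected.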
FIGURE 4 Semiplane feature extraction example in a woody-crop vineyard. The blue lines represent the polygons' edges, the red dots their extrema, and the dark dots the semiplane inlier points.
FIGURE 5 Point feature nearest-neighbor local search. In a discrete 3D space, the nearest neighbor of a point feature can lie in the grid-map layer where the feature is located or in the adjacent top and bottom layers. A local search over these three layers is performed to find the nearest feature, as described in the figure. If a feature is found when searching along the blue path, the search ends; otherwise, the search continues along the yellow path. The user can tune the stop criteria.
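The three-layer local search of Figure 5 could be sketched like this: check the feature's own grid layer first, then the layers directly above and below, stopping as soon as any candidates are found. The dictionary-of-layers map and the stop criterion are simplified assumptions:

```python
def nearest_in_layers(feature, grid, layer):
    """Search the feature's own layer, then the adjacent top and bottom
    layers, returning the closest stored feature from the first
    non-empty layer encountered (None if all three are empty)."""
    for dz in (0, 1, -1):                  # own layer, then top, then bottom
        candidates = grid.get(layer + dz, [])
        if candidates:
            # Squared Euclidean distance is enough for an argmin.
            return min(candidates,
                       key=lambda f: sum((a - b) ** 2
                                         for a, b in zip(f, feature)))
    return None

# Toy map: layer index -> list of 3D point features stored in that layer.
grid = {4: [], 5: [(1.0, 0.0, 2.5), (4.0, 0.0, 2.6)], 6: [(0.5, 0.0, 3.1)]}
hit = nearest_in_layers((0.9, 0.0, 2.55), grid, layer=5)
```

Because the query's own layer is non-empty, the search never reaches the adjacent layers, mirroring the early-exit behavior described in the caption.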
FIGURE 6 Likelihood of the semiplane-based weight calculation represented as a multivariable function. The likelihood decreases exponentially as the difference between normal vectors and the centroid-to-plane distance increase. The corresponding standard deviations control the impact of each variable on the final likelihood.
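The likelihood surface of Figure 6 can be written as a product of two Gaussian-shaped terms, one per variable; the standard deviations below are illustrative placeholders, not the paper's tuned values:

```python
import math

def semiplane_likelihood(d_normal, d_centroid, sigma_n=0.1, sigma_d=0.2):
    """Likelihood that decays exponentially with both the angular
    difference between normals (rad) and the centroid-to-plane
    distance (m); each sigma controls one variable's influence."""
    return (math.exp(-(d_normal ** 2) / (2 * sigma_n ** 2)) *
            math.exp(-(d_centroid ** 2) / (2 * sigma_d ** 2)))

perfect = semiplane_likelihood(0.0, 0.0)   # exact match -> likelihood 1
off = semiplane_likelihood(0.1, 0.2)       # penalized in both terms
```

Shrinking `sigma_n` makes the weight more sensitive to orientation error, while shrinking `sigma_d` emphasizes positional error, matching the caption's description of how the standard deviations trade off the two variables.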
FIGURE 7 Simulation environment containing three perpendicular planes: the ground and two walls.
FIGURE 8 Semiplane feature extraction of the three simulated semiplanes.
FIGURE 9 Simulation results using three perpendicular planes in the localization and mapping procedures for (A) a translation-only trajectory and (B) a rotation-only trajectory.
FIGURE 10 AgRob V16 robotic platform used to test the proposed approach, placed in a woody-crop vineyard.
Summary of the experiments performed in Aveleda’s vineyard.
| Experiment | Distance (m) | Foliage stage | Season |
|---|---|---|---|
| Sequence 1 | 69.73 | With | Summer |
| Sequence 2 | 23.52 | Without | Winter |
| Sequence 3 | 81.72 | With | Summer |
FIGURE 11 Satellite image of Aveleda’s vineyard. The sub-figures represent the sequences (1, 2, and 3) traveled by the robot.
Absolute pose error metrics for VineSLAM and LeGO-LOAM under the three test sequences.
| Experiment | Method | Max (m) | Mean (m) | RMS (m) |
|---|---|---|---|---|
| Sequence 1 | VineSLAM | 2.65 | 1.41 | 1.58 |
| | LeGO-LOAM | 2.26 | 0.81 | 0.93 |
| Sequence 2 | VineSLAM | 0.84 | 0.38 | 0.44 |
| | LeGO-LOAM | 20.81 | 10.49 | 11.87 |
| Sequence 3 | VineSLAM | 1.17 | 0.64 | 0.69 |
| | LeGO-LOAM | 29.57 | 21.39 | 22.48 |
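The Max, Mean, and RMS columns above are standard absolute-pose-error statistics over per-timestamp translational errors. A minimal computation (the error values below are made-up toy data, not the paper's measurements):

```python
import math

def ape_stats(errors):
    """Max, mean, and RMS of a sequence of absolute pose errors (m)."""
    n = len(errors)
    mean = sum(errors) / n
    rms = math.sqrt(sum(e * e for e in errors) / n)
    return max(errors), mean, rms

errors = [0.2, 0.4, 0.4, 0.6]             # toy per-pose errors (m)
max_e, mean_e, rms_e = ape_stats(errors)  # 0.6, 0.4, ~0.424
```

RMS is always at least the mean and penalizes large deviations more heavily, which is why the symmetric-corridor failures of LeGO-LOAM in Sequences 2 and 3 inflate its RMS well above VineSLAM's.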
FIGURE 12 Average pose error (m) and its corresponding Root Mean Squared Error (RMSE), median, mean, and standard deviation for (A–C) VineSLAM and (D–F) LeGO-LOAM under the three experiments performed.
FIGURE 13 Absolute pose error (m) mapped onto the trajectory for (A–C) VineSLAM and (D–F) LeGO-LOAM with reference to the ground truth.
FIGURE 14 VineSLAM’s and LeGO-LOAM’s localization estimates with reference to the ground truth for sequences (A) 1, (B) 2, and (C) 3.