Yan Li, Qingwu Hu, Meng Wu, Yang Gao.
Abstract
Vision navigation, which determines position and attitude through real-time processing of imagery from imaging sensors, offers a navigation capability that does not depend on a high-performance global positioning system (GPS) or inertial measurement unit (IMU). It is widely used in indoor navigation, deep space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach that is aided by imaging sensors and uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed using sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established, based on a linear index of road segments, for fast image search and retrieval. Third, a robust image matching algorithm is presented to search the GRID and match it against the real-time image. The image matched to the real-time scene is then used to calculate the 3D navigation parameters of the multiple sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height during GPS outages of up to 5 min and 1500 m.
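The storage idea the abstract describes — sequence images keyed by their linear-reference mileage along a road segment, packed into one large file and located through an in-memory index rather than per-image files — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation; all names (`Grid`, `add_image`, `query`) are hypothetical.

```python
# Sketch of a GRID-style store: images are appended to a single large blob
# (stand-in for the paper's large-file model) and indexed per road segment
# by mileage, so a position query becomes a binary search plus one read.
import bisect
from dataclasses import dataclass, field

@dataclass
class SegmentIndex:
    mileages: list = field(default_factory=list)  # sorted mileage keys (km)
    offsets: list = field(default_factory=list)   # (byte offset, length) pairs

class Grid:
    def __init__(self):
        self.blob = bytearray()   # stand-in for the single large image file
        self.index = {}           # road segment id -> SegmentIndex

    def add_image(self, segment_id, mileage, image_bytes):
        seg = self.index.setdefault(segment_id, SegmentIndex())
        i = bisect.bisect(seg.mileages, mileage)
        seg.mileages.insert(i, mileage)
        seg.offsets.insert(i, (len(self.blob), len(image_bytes)))
        self.blob += image_bytes

    def query(self, segment_id, mileage):
        """Return the stored image nearest to the given mileage."""
        seg = self.index[segment_id]
        i = bisect.bisect_left(seg.mileages, mileage)
        # choose the closer of the two neighbouring candidates
        cands = [j for j in (i - 1, i) if 0 <= j < len(seg.mileages)]
        j = min(cands, key=lambda k: abs(seg.mileages[k] - mileage))
        off, n = seg.offsets[j]
        return bytes(self.blob[off:off + n])

grid = Grid()
grid.add_image("R1", 0.10, b"img-a")
grid.add_image("R1", 0.35, b"img-b")
print(grid.query("R1", 0.30))   # nearest stored frame -> b"img-b"
```

The returned frame would then be the candidate passed to the image matching step; the real system additionally partitions the index dynamically (the DS tree of Figure 3) rather than keeping one flat list per segment.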
Keywords: geo-referenced; image database; image matching; image retrieval; imaging sensor; multiple sensor-integrated mobile mapping; vision navigation
Year: 2016 PMID: 26828496 PMCID: PMC4801544 DOI: 10.3390/s16020166
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Framework of the proposed imaging sensor-aided vision navigation approach that uses a geo-referenced database.
Figure 2. L-MMS for data collection of the geo-referenced database.
Figure 3. Dynamic segment indexing of GRID based on the road network LRS: (a) division of the DS index space; (b) structure of the DS index tree.
Figure 4. Large file-based data storage model for GRID.
Figure 5. Fast search and large file location based on spatial query.
Figure 6. Fast search and large file location based on spatial query.
Figure 7. Definition of the coordinate system.
Experimental GRID data set.
| Data Set | Data Volume (GB) | Mileage (km) | Data Level |
|---|---|---|---|
| I | 1.9 | 1.8 | road |
| II | 120.0 | 115.3 | street (town) |
| III | 1035.0 | 1027.7 | county |
Comparison of image spatial query times of the DS tree and the quad-tree.
| Data Set | Index Model | Query Time |
|---|---|---|
| I | Quad-tree | 0.50 |
| II | Quad-tree | 0.52 |
| III | Quad-tree | 0.55 |
Comparison of image retrieval times in the large file model and the single-image file model.
| Data Set | Index Model | Image Retrieval Time |
|---|---|---|
| I | Single-image file | 1.25 |
| II | Single-image file | 4.86 |
| III | Single-image file | 18.24 |
Figure 8Accuracy of the proposed vision navigation approach: (a) x direction; (b) y direction; (c) H.