Prasanna Kolar, Patrick Benavidez, Mo Jamshidi.
Abstract
This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and senior citizens, and they depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse the data with each other to output the best data for the task at hand, which in this case is autonomous navigation. Obtaining such accurate data requires optimal technology to read the sensor data, process the data, eliminate or at least reduce noise, and then use the data for the required tasks. We present a survey of current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology, and stereo/depth, monocular Red Green Blue (RGB), and Time-of-Flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey provides sensor information to researchers who intend to accomplish the task of motion control of a robot, and it details the use of LiDAR and cameras to accomplish robot navigation.
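As a concrete illustration of the abstract's central claim, that fused data from multiple sensors outperforms a single sensor, the sketch below applies standard inverse-variance weighting to two redundant range readings. This is a minimal illustration, not code from the paper; the sensor pairing and noise figures are assumed for the example.

```python
# A minimal sketch of redundant-data fusion: two sensors measure the same
# range with different noise levels, and an inverse-variance weighted
# average yields a fused estimate with lower variance than either input.
# Sensor values and noise figures are illustrative assumptions.

def fuse_redundant(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two measurements of the same quantity via inverse-variance weighting."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)          # always <= min(var1, var2)
    return fused, fused_var

# e.g., a LiDAR range (low noise) fused with an ultrasonic range (high noise)
dist, var = fuse_redundant(z1=4.02, var1=0.01, z2=4.35, var2=0.25)
print(f"fused range: {dist:.3f} m, variance: {var:.4f}")
```

The fused variance is never larger than the better sensor's variance, which is the quantitative sense in which fusing redundant data helps.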
Keywords: LiDAR; RGB; SLAM; autonomous systems; data alignment; data fusion; data integration; deep learning; fusion; information fusion; localization; mapping; mobile robot; multimodal; navigation; neural networks; obstacle avoidance; obstacle detection; optical; review; robot; stereo vision; survey; vision
Year: 2020 PMID: 32290582 PMCID: PMC7218742 DOI: 10.3390/s20082180
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. High-level perception architecture.
Figure 2. Concepts of perception.
Data fusion techniques and their classifications.
| Classification Type | Types of Fusion for the Given Classification | | | | |
|---|---|---|---|---|---|
| Classification based on Data Relationship | Complementary relationship | Redundant relationship | Cooperative relationship | | |
| Classification based on Abstraction Level | Signal level | Pixel level | Characteristic (feature) level | Symbol level | |
| Dasarathy classification | Data In-Data Out | Data In-Feature Out | Feature In-Feature Out | Feature In-Decision Out | Decision In-Decision Out |
| JDL classification | Source pre-processing | Object refinement | Situation assessment | Impact assessment | Process refinement |
| Classification based on Architecture | Centralized | Decentralized | Distributed | Hierarchical | |
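To make the Dasarathy levels in the table concrete, the sketch below expresses three of them as input/output type transformations. The point, feature, and decision types are illustrative assumptions chosen for the example, not definitions from the survey.

```python
# A minimal sketch of three Dasarathy fusion levels as typed functions.
# The LiDAR-flavored types here are assumptions for illustration.
from dataclasses import dataclass

Point = tuple[float, float]          # raw datum (e.g., a 2D LiDAR return)

@dataclass
class Feature:                       # an extracted feature (e.g., a line segment)
    start: Point
    end: Point

def data_in_data_out(scan: list[Point]) -> list[Point]:
    """DAI-DAO: raw data in, cleaned raw data out (here: drop spurious zero returns)."""
    return [p for p in scan if p != (0.0, 0.0)]

def data_in_feature_out(scan: list[Point]) -> list[Feature]:
    """DAI-FEO: raw data in, features out (here: one segment spanning the scan)."""
    return [Feature(scan[0], scan[-1])] if len(scan) >= 2 else []

def feature_in_decision_out(features: list[Feature]) -> str:
    """FEI-DEO: features in, decision out (here: a trivial obstacle flag)."""
    return "obstacle" if features else "clear"

scan = [(0.0, 0.0), (1.0, 2.0), (1.1, 2.1), (1.2, 2.2)]
features = data_in_feature_out(data_in_data_out(scan))
print(feature_in_decision_out(features))   # -> "obstacle"
```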
Figure 3. Depth calibration [127].
Figure 4. RealSense phone calibration tool [127].
Figure 5. RealSense iPhone speckle pattern for calibration [128].
Figure 6. High-level perception task.
Figure 7. High-level block diagram of LiDAR and camera data fusion [259].
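A typical first step in a LiDAR and camera fusion pipeline like the one in Figure 7 is projecting the LiDAR points into the image plane so the two modalities share a coordinate frame. Below is a minimal pinhole-model sketch; the intrinsic matrix and extrinsic transform are assumed values, not calibration data from the paper.

```python
# A minimal sketch of LiDAR-to-image projection with a pinhole camera model.
# K, R, and t are illustrative assumptions, not calibration values.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],     # fx, 0, cx  (assumed intrinsics)
              [0.0, 700.0, 240.0],     # 0, fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # assumed LiDAR-to-camera rotation
t = np.array([0.0, -0.08, 0.0])        # assumed LiDAR-to-camera translation (m)

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to pixel coordinates, keeping points in front of the camera."""
    pts_cam = points_lidar @ R.T + t               # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]         # discard points behind the image plane
    uvw = pts_cam @ K.T                            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                # perspective divide -> (u, v)

points = np.array([[2.0, 0.1, 5.0], [1.0, -0.2, 3.0], [0.5, 0.0, -1.0]])
print(project_lidar_to_image(points))
```

Once LiDAR returns land on pixels, each image region can be annotated with range, which is what enables the pixel-level fusion discussed in the classification table above.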
Characteristics and properties of acoustic and vision-based sensors.
| Sensor | Data Density | Low Light Operation | Position Information | Velocity Information | Class Information | Size Information | Color Availability |
|---|---|---|---|---|---|---|---|
| LiDAR | | Yes | Yes | Yes | | Yes | |
| Ultrasonic | | Yes | Yes | Yes | | Yes | |
| Radar | | Yes | Yes | Yes | | Yes | |
| Thermal Camera | | Yes | | | Yes | Yes | |
| Vision Camera | | | | | Yes | Yes | Yes |
Figure 8. Architecture of a fusion system.
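Read as code, a fusion architecture like the one in Figure 8 wires sensor sources into a fusion stage whose output drives a navigation consumer. The sketch below shows that wiring with stubbed sensors and a placeholder averaging rule; all class names and values are illustrative assumptions, not the paper's design.

```python
# A minimal sketch of a fusion-system architecture: sensors -> fusion -> navigation.
# Class names, stub readings, and the averaging rule are assumptions.
from typing import Protocol

class Sensor(Protocol):
    def read(self) -> float: ...

class Lidar:
    def read(self) -> float:
        return 4.02        # stubbed range reading (m)

class Ultrasonic:
    def read(self) -> float:
        return 4.35        # stubbed range reading (m)

class FusionNode:
    def __init__(self, sensors: list[Sensor]) -> None:
        self.sensors = sensors

    def fused_range(self) -> float:
        readings = [s.read() for s in self.sensors]
        return sum(readings) / len(readings)    # placeholder for a real filter

class Navigator:
    def step(self, range_m: float) -> str:
        return "brake" if range_m < 5.0 else "cruise"

nav = Navigator()
fusion = FusionNode([Lidar(), Ultrasonic()])
print(nav.step(fusion.fused_range()))   # -> "brake"
```

In a real system the averaging placeholder would be replaced by a proper filter (e.g., the inverse-variance rule sketched earlier, or a Kalman-style estimator).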
Summary of the usage of data fusion techniques in autonomous navigation.
| Topic | Usage of Data Fusion |
|---|---|
| Mapping | We discuss the usage of data fusion in mapping applications. |
| | Thrun's survey of robotic mapping discusses combining posterior estimation with incremental map building using maximum likelihood estimators. |
| | Akthar developed a data fusion system used to create a 3D model with a depth map and 3D object reconstruction. |
| | Jin proposed an approach to SLAM using a 2D LiDAR and a stereo camera. |
| | Andersen et al. used LiDAR and camera fusion for fast and accurate mapping in autonomous racing. |
| Localization | We briefly discuss which sensors are used in localization and the challenges of using these sensors in data fusion. |
| | Dasarathy et al. proposed techniques for localization and navigation in general. |
| | Zhang et al. proposed a robust model that uses the MM-estimate technique for segment-based SLAM in dynamic environments with a 2D LiDAR. |
| | Wei et al. fused LiDAR and camera data using fuzzy-logic techniques. |
| Path planning | Wang et al. developed a camera-based sensor fusion platform for a mobile robot that can be used for path planning. |
| | Ali et al. developed a fusion algorithm for online navigation with a complete planner. |
| | Gwon et al. designed sweeper robots used for path estimation in the game of curling. |
| | Xi et al. proposed a swarm technique with mapping and path planning to improve navigation. |
| | Sabe et al. used occupancy grids for path planning, finding paths from the robot's source to a target location (see the occupancy-grid sketch after this table). |
| Obstacle detection and avoidance | Several references are provided that introduce various research areas, with systems such as autonomous, assistive, haptic, and motor and optical assistive systems. |
| | We discuss several algorithms by Danescu, Wu, and Redmon and their teams that are used in object detection. |
| | We discuss the pros and cons of each sensor for object detection. |
| | Banerjee et al. developed a LiDAR and camera fusion system using a gradient-free optimizer, yielding a low-footprint, lightweight, and robust system. |
| | Huber et al. studied the same sensors and performed data fusion; they state that sparse LiDAR information is not useful for complex applications. |
| | Asvadi et al. researched multimodal fusion and applied it to vehicle detection, identifying obstacles around an autonomous vehicle. |
| | Manghat et al. developed a real-time tracking fusion system for ADAS. |
| | Luo et al. published a manuscript documenting data fusion techniques, approaches, applications, and future research. |
| | Dynamic obstacle detection and avoidance is discussed, as proposed by Fox et al., along with the Gaussian obstacle avoidance system by Cho. |
| | We present the flow of a navigation system, with data fusion feeding into an object detection system (see the late-fusion sketch after this table). |
| | We summarize the usage of AI and neural networks in object detection; techniques such as YOLO, SSD, CNN, and RNN are discussed. |
| | An architecture of a data fusion system that can be used in autonomous navigation is presented. |
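The mapping and path-planning rows above (Thrun's survey; Sabe et al.'s grid-based planning) rest on the occupancy-grid representation. Below is a minimal sketch of the standard Bayesian log-odds cell update; the grid size and inverse-sensor-model values are assumptions for illustration, not parameters from any surveyed paper.

```python
# A minimal occupancy-grid sketch: each cell stores log-odds of occupancy,
# updated additively per observation. Model values are illustrative assumptions.
import math

L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)   # assumed inverse sensor model

grid = [[0.0] * 10 for _ in range(10)]    # log-odds 0.0 == probability 0.5 (unknown)

def update_cell(x: int, y: int, hit: bool) -> None:
    """Bayesian log-odds update for one observed cell."""
    grid[y][x] += L_OCC if hit else L_FREE

def occupancy(x: int, y: int) -> float:
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid[y][x]))

for _ in range(3):
    update_cell(4, 2, hit=True)           # repeated hits drive the cell toward 1.0
update_cell(1, 1, hit=False)              # a miss drives a cell toward 0.0
print(f"cell (4,2): {occupancy(4, 2):.2f}, cell (1,1): {occupancy(1, 1):.2f}")
```

A planner such as the one in Sabe et al.'s work then searches over the low-occupancy cells of this grid to find a path from the robot's pose to the target.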
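The obstacle-detection rows pair a camera detector (e.g., a YOLO- or SSD-style network) with LiDAR geometry, as in the Figure 7 pipeline. One common late-fusion step is matching camera bounding boxes against LiDAR clusters projected into the image, attaching a range to each match. The sketch below assumes axis-aligned pixel boxes and an illustrative IoU threshold; it is a generic example, not the method of any single surveyed paper.

```python
# A minimal late-fusion sketch: associate camera 2D detections with
# image-projected LiDAR clusters by intersection-over-union (IoU).
# Box format and the 0.3 threshold are illustrative assumptions.

Box = tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixels

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0.0 else 0.0

def fuse_detections(camera_boxes: list[Box],
                    lidar_boxes: list[tuple[Box, float]],
                    thresh: float = 0.3) -> list[tuple[Box, float]]:
    """Attach the range of the best-overlapping LiDAR cluster to each camera box."""
    fused = []
    for cam in camera_boxes:
        best = max(lidar_boxes, key=lambda lb: iou(cam, lb[0]), default=None)
        if best and iou(cam, best[0]) >= thresh:
            fused.append((cam, best[1]))      # (camera box, LiDAR range in m)
    return fused

cams = [(100.0, 80.0, 200.0, 220.0)]
lids = [((110.0, 90.0, 210.0, 230.0), 7.4), ((400.0, 50.0, 450.0, 100.0), 12.1)]
print(fuse_detections(cams, lids))   # -> [((100.0, 80.0, 200.0, 220.0), 7.4)]
```

The output pairs each visual detection with a metric range, which is exactly the complementary combination, camera class/color plus LiDAR position, highlighted in the sensor-characteristics table above.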