Nikola Lopac, Irena Jurdana, Adrian Brnelić, Tomislav Krljan.
Abstract
The development of light detection and ranging (lidar) technology began in the 1960s, following the invention of the laser, which represents the central component of this system, integrating laser scanning with an inertial measurement unit (IMU) and Global Positioning System (GPS). Lidar technology is spreading to many different areas of application, from those in autonomous vehicles for road detection and object recognition, to those in the maritime sector, including object detection for autonomous navigation, monitoring ocean ecosystems, mapping coastal areas, and other diverse applications. This paper presents lidar system technology and reviews its application in the modern road transportation and maritime sector. Some of the better-known lidar systems for practical applications, on which current commercial models are based, are presented, and their advantages and disadvantages are described and analyzed. Moreover, current challenges and future trends of application are discussed. This paper also provides a systematic review of recent scientific research on the application of lidar system technology and the corresponding computational algorithms for data analysis, mainly focusing on deep learning algorithms, in the modern road transportation and maritime sector, based on an extensive analysis of the available scientific literature.
Keywords: autonomous vehicles; data analysis; deep learning; laser sensors; lidar; maritime sector; object detection; remote sensing; road transportation; transportation
Year: 2022 PMID: 36015703 PMCID: PMC9415075 DOI: 10.3390/s22165946
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Conceptual representation of the lidar system operating principle.
Figure 2. Example of 3D point cloud representation obtained by the lidar system.
Figure 3. Application of the lidar system in modern road transportation.
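The operating principle in Figure 1 and the point cloud in Figure 2 connect through two simple steps: a time-of-flight (ToF) lidar infers range from an echo's round-trip time, and each return, combined with the beam's azimuth and elevation angles, yields one 3D point. A minimal Python sketch of this geometry (function names are illustrative, not taken from the paper):

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def tof_range(round_trip_time_s: float) -> float:
    """Range from a time-of-flight echo: the pulse travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * round_trip_time_s / 2.0

def to_cartesian(r: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return (range plus beam angles) to an (x, y, z) point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A 100 ns round trip corresponds to roughly 15 m of range.
d = tof_range(100e-9)
```

Sweeping such beams across azimuth and elevation, one return at a time, is what produces the dense 3D point cloud representation shown in Figure 2.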
Review of scientific papers on the application and analysis of the lidar system in modern road transportation.
| Reference | Description of Application | Conclusion |
|---|---|---|
| [ | A review of the state-of-the-art lidar technologies and the associated perception algorithms for application in autonomous driving | The limitations and challenges of the lidar technology are presented, as well as the impressive results of the analyzed algorithms |
| [ | Discussion of the lidar systems’ role in autonomous driving applications | The vital role of monitoring fixed and moving objects in traffic |
| [ | A review of lidar applications in automated extraction of road features and a discussion on challenges and future research | Use of lidar for various transportation applications, including on-road (road surface, lane, and road edge), roadside (traffic signs, objects), and geometric (road cross, vertical alignment, pavement condition, sight distance, vertical clearance) information extraction |
| [ | Simultaneous localization and mapping (SLAM)-based indoor navigation for autonomous vehicles directly based on the three-dimensional (3D) spatial information from the lidar point cloud data | A comparative analysis of different navigation methods is conducted based on extensive experiments in real environments |
| [ | Extensive analysis of automotive lidar performance in adverse weather conditions, such as dense fog and heavy rain | Poor perception and detection of objects during rain and fog; the proposed rain and fog classification method provides satisfactory results |
| [ | Testing the lidar system for outdoor unmanned ground vehicles in adverse weather conditions, including rain, dust, and smoke | Signal attenuation due to scattering, reflection, and absorption of light and the reduction of detection distance are identified |
| [ | Analysis of the effects of fog conditions on the lidar system for visibility distance estimation for autonomous vehicles on roads | The visibility distances obtained by lidar systems are in the same range as those obtained by human observers; a correlation between the decrease in optical power and the decrease in visual acuity in fog conditions is established |
| [ | Analysis of the performance of a time-of-flight (ToF) lidar in a fog environment for different fog densities | The relations between the ranging performance and different types of fog are investigated, and a machine learning-based model is developed to predict the minimum fog visibility that allows successful ranging |
| [ | Application of Kalman filter and nearby point cloud denoising to reconstruct lidar measurements from autonomous vehicles in adverse weather conditions, including rain, thick smoke, and their combination | The experiments in a 2 × 2 × 0.6 m space show a 10–30% improvement in reconstructing normal-weather 3D signals from lidar data acquired in adverse weather conditions |
| [ | Analysis of the influence of adverse environmental factors on the ToF lidar detection range, considering the 905 nm and 1550 nm laser wavelengths | A significant difference in the performance of the two laser types is identified; a 905 nm laser is recommended for poor environmental conditions |
| [ | Deep learning road detection based on the simple and fast fully convolutional neural networks (FCNs) using only lidar data, where a top-view representation of point cloud data is considered, thus reducing road detection to a single-scale problem | High accuracy of road segmentation in all lighting conditions accompanied by fast inference suitable for real-time applications |
| [ | Automatic traffic lane detection method based on the roadside lidar data of the vehicle trajectories, where the proposed method consists of background filtering and road boundary identification | Two case studies confirm the method’s ability to detect the boundaries of lanes for curvy roads while not being affected by pedestrians’ presence |
| [ | Deep learning road detection based on the FCNs using camera and lidar data fusion | High system accuracy is achieved by the multimodal approach, in contrast to the poor detection results obtained by using only a camera |
| [ | Road detection based on the lidar data as input to the system integrating the building information modeling (BIM) and geographic information system (GIS) | Accurate road detection is achieved by lidar data classification, but additional manual adjustments are still required |
| [ | Lidar-histogram method for detecting roads and obstacles based on the linear classification of the obstacle projections with respect to the line representing the road | Promising results in urban and off-road environments, with the proposed method being suitable for real-time applications |
| [ | Road-segmentation-based pavement edge detection for autonomous vehicles using 3D lidar sensors | The accuracy, robustness, and fast processing time of the proposed method are demonstrated on the experimental data acquired by a self-driving car |
| [ | An automated algorithm based on the parametric active contour model for detecting road edges from terrestrial mobile lidar data | Tests on various road types show satisfactory results, with dependence on the algorithm parameter settings |
| [ | Visual localization of an autonomous vehicle in the urban environment based on a 3D lidar map and a monocular camera | The possibility of using a single monocular camera for the needs of visual localization on a 3D lidar map is confirmed, achieving performance close to the state-of-the-art lidar-only vehicle localization while using a much cheaper sensor |
| [ | Probabilistic localization of an autonomous vehicle combining lidar data with Kalman-filtered Global Navigation Satellite System (GNSS) data | Improved localization with smooth transitions from using GNSS data to using lidar and map data |
| [ | Generating high-definition 3D maps based on the autonomous vehicle sensor data integration, including GNSS, inertial measurement unit (IMU), and lidar | Existing autonomous vehicle sensor systems can be successfully utilized to generate high-resolution maps with centimeter-level accuracy |
| [ | Vehicle localization consisting of curb detection based on ring compression analysis and least trimmed squares, road marking detection based on road segmentation, and Monte Carlo localization | Experimental tests in urban environments show high detection accuracy with lateral and longitudinal errors of less than 0.3 m |
| [ | Vehicle localization based on the free-resolution probability distributions map (FRPDM) using lidar data | Efficient object representation with reduced map size and good position accuracy in urban areas are achieved |
| [ | Optimal vehicle pose estimation based on the ensemble learning network utilizing spatial tightness and time series obtained from the lidar data | Improved pose estimation accuracy is obtained, even on curved roads |
| [ | Autonomous vehicle localization based on the IMU, wheel encoder, and lidar odometry | Accurate and high-frequency localization results in a diverse environment |
| [ | Automatic recognition of road markings from mobile lidar point clouds | Good performance in recognizing road markings; further research is needed for more complex markings and intersections |
| [ | Development and implementation of a strategy for automatic extraction of road markings from the mobile lidar data based on the two-dimensional (2D) georeferenced feature images, modified inverse distance weighted (IDW) interpolation, weighted neighboring difference histogram (WNDH)-based dynamic thresholding, and multiscale tensor voting (MSTV) | Experimental tests in a subtropical urban environment show more accurate and complete recognition of road markings with fewer errors |
| [ | Automatic detection of traffic signs, road markings, and pole-shaped objects | The experimental tests on the two-kilometer-long road in an urban area show that the proposed method is suitable for detecting individual signs, while there are difficulties in distinguishing multiple signs on the same construction |
| [ | Recognition of traffic signs for lidar-equipped vehicles based on the latent structural support vector machine (SVM)-based weakly supervised metric learning (WSMLR) method | Experiments indicate the effectiveness and efficiency of the proposed method, both for the single-view and multi-view sign recognition |
| [ | Automatic highway sign extraction based on the multiple filtering and clustering of the mobile lidar point cloud data | The tests conducted on three different highways show that the proposed straightforward method can achieve high accuracy values and can be efficiently used to create an accurate inventory of traffic signs |
| [ | Pedestrian and vehicle detection and tracking at intersections using roadside lidar data, the density-based spatial clustering of applications with noise (DBSCAN), backpropagation artificial neural network (BP-ANN), and Kalman filter | The experimental tests with a 16-laser lidar show the proposed method’s accuracy above 95% and detection range of about 30 m |
| [ | Vehicle tracking using roadside lidar data and a method consisting of background filtering, lane identification, and vehicle position and speed tracking | Satisfactory vehicle detection and speed tracking in experimental case studies, with a detection range of about 30 m; difficulties in the vehicle type identification |
| [ | Vehicle detection from the Velodyne 64E 3D lidar data using 2D FCN, where the data are transformed to the 2D point maps | An end-to-end (E2E) detection method with excellent performance and a possibility for additional improvements by including more training data and designing deeper networks |
| [ | Convolutional neural network (CNN)-based multimodal vehicle detection using three data modalities from the color camera and 3D lidar (dense-depth map, reflectance map, and red-green-blue (RGB) image) | The proposed data fusion approach provides higher accuracy than the individual modalities for the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset |
| [ | Camera and lidar data fusion for pedestrian detection using CNNs, where lidar data features (horizontal disparity, height above ground, and angle) are fused with RGB images | The tests on the KITTI pedestrian detection dataset show that the proposed approach outperforms the one using only camera imagery |
| [ | CNN-based classification of objects using camera and lidar data from autonomous vehicles, where point cloud lidar data are upsampled and converted into the pixel-level depth feature map, which is then fused with the RGB images and fed to the deep CNN | Results obtained on the public dataset support the effectiveness and efficiency of the data fusion and object classification strategies, where the proposed approach outperforms the approach using only RGB or depth data |
| [ | Real-time detection of non-stationary (moving) objects based on the CNN using intensity data in automotive lidar SLAM | It is demonstrated that non-stationary objects can be detected using CNNs trained with the 2D intensity grayscale images in a supervised or unsupervised manner while achieving improved map consistency and localization results |
| [ | Target detection for autonomous vehicles in complex environments based on the dual-modal instance segmentation deep neural network (DM-ISDNN) using camera and lidar data fusion | The experimental results show the robustness and effectiveness of the proposed approach, which outperforms the competitive methods |
| [ | Road segmentation, obstacle detection, and vehicle tracking based on an encoder-decoder-based FCN, an extended Kalman filter, and camera, lidar, and radar sensor fusion for autonomous vehicles | Experimental results indicate that the proposed affordable, compact, and robust fusion system outperforms benchmark models and can be efficiently used in real-time for the vehicle’s environment perception |
| [ | CNN-based real-time semantic segmentation of 3D lidar data for autonomous vehicle perception based on the projection method and the adaptive break point detector method | Practical implementation and satisfactory speed and accuracy of the proposed method |
| [ | E2E self-driving algorithm using a CNN that predicts the vehicles’ longitudinal and lateral control values based on the input camera images and 2D lidar point cloud data | Experimental tests in the real-world complex urban environments show promising results |
| [ | Pedestrian recognition and tracking for autonomous vehicles using an SVM classifier and Velodyne 64 lidar data, generating alarms when pedestrians are detected on the road or close to curbs | The validity of the method was confirmed on the autonomous vehicle platform in two scenarios: when the vehicle is stationary and while driving |
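Several deep learning entries in the table above (e.g., the FCN road-detection work and the 2D-point-map vehicle detector) first project the 3D point cloud onto a top-view 2D image before feeding it to the network. A minimal sketch of such a bird's-eye-view projection; the region-of-interest and cell-size parameters here are illustrative defaults, not values from the reviewed papers:

```python
import numpy as np

def birds_eye_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.2):
    """Project an (N, 3) lidar point cloud onto a top-view height grid.

    Each cell stores the maximum z (height) of the points falling into it,
    one common channel of the 2D image fed to a fully convolutional network.
    """
    pts = np.asarray(points, dtype=float)
    # Keep only points inside the region of interest.
    mask = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1]) &
            (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1]))
    pts = pts[mask]
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    grid = np.full((h, w), -np.inf)
    # Map each point to its cell and keep the highest z per cell.
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    np.maximum.at(grid, (ix, iy), pts[:, 2])
    grid[np.isinf(grid)] = 0.0  # empty cells get a neutral height
    return grid
```

The same pattern extends to additional channels (point density, mean intensity) stacked into a multi-channel image, which is what makes single-scale 2D convolutions applicable to 3D lidar data.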
Figure 4. Application of the lidar system in the maritime sector.
Review of scientific papers on the application and analysis of the lidar system in the maritime sector.
| Reference | Description of Application | Conclusion |
|---|---|---|
| [ | Lidar as a part of the sensor system (absolute positioning, visual, audio, and remote sensing sensors) combined with artificial intelligence (AI) techniques for situational awareness in autonomous vessels | Several drawbacks of the current lidar technology are detected for application on autonomous vessels, including limited laser power due to eye-safety issues, lower operational ranges, expensive optics, and unsuitability for the harsh working environment |
| [ | Ship berthing information extraction based on the 3D lidar data using principal component analysis | The effectiveness of the proposed method in dynamic target recognition and safe ship berthing is confirmed by experimental validation on the ro-ro ship berthing |
| [ | Berthing perception framework for maritime autonomous surface ships based on the estimation of the vessel’s berthing speed, angle, distance, and other parameters from the 3D shipborne lidar data | The proposed method allows accurate berthing in real-time, as confirmed by experiments |
| [ | Low-cost lidar-based ship berthing and docking system, with a novel method of fusing lidar and GNSS positioning data | The usefulness of the proposed system in safe ship berthing is proven experimentally during several berthing maneuvers and compared to the GNSS-based navigational aid system |
| [ | Computer-aided method for bollard segmentation and position estimation from the 3D lidar point cloud data for autonomous mooring based on the 3D feature matching and mixed feature-correspondence matching algorithms | The proposed approach is validated on experimental mooring scenes with a robotic arm equipped with lidar |
| [ | Use of the dual-channel lidar for rotorcraft searching, positioning, tracking, and landing on a ship at sea based on the estimation of the azimuth angle, the distance of the ship relative to the rotorcraft, and the ship’s course | The simulation and experimental tests confirm the effectiveness of the developed method and associated models |
| [ | Algorithm for detecting objects on seas and oceans using lidar data for application on maritime vessels in different environmental conditions | The DBSCAN clustering algorithm is used to group the data points, providing accurate object detection |
| [ | Detection, monitoring, and classification of objects on seas and oceans based on the SVM classifier and the fusion of lidar and camera data | The proposed method is proven to be highly effective, with an overall accuracy of 98.7% for six classes |
| [ | Detection, classification, and mapping of objects on seas and oceans using an unmanned surface vehicle with four multi-beam lidar sensors and polygon representation methods | The ability to create a map of the environment with detected objects that are not in motion, with polygons being accurate to 20 cm using a 10 cm occupancy grid |
| [ | A review of the development of profiling oceanographic lidars | The possibility of sea and ocean analysis and monitoring of animal species using lidar is described; these lidars can provide quantitative profiles of the optical properties of the water column to depths of 20–30 m in coastal waters and up to 100 m for a blue lidar in the open ocean |
| [ | Application of lidar for monitoring and mapping the marine coral reef ecosystems | Successful monitoring of fish, plankton, and coral reef distribution using 3D lidar data |
| [ | Spaceborne lidar for ocean observations | The usefulness of satellite lidar for observations of ocean ecosystems, particularly in combination with ocean color observations |
| [ | A review of lidar application in creating shoreline and bathymetric maps | Lidar, combined with Global Positioning System (GPS), provides accurate topographical and bathymetric coastal maps, with 10–15 cm vertical accuracy, where best water penetration is achieved by using a blue-green laser with a wavelength of 530 nm |
| [ | Classification of large bodies of water using airborne laser scanning (ALS) | Automatic and efficient classification of water surfaces with an SVM classifier, with an accuracy of over 95% for most cases of coastal areas |
| [ | Mapping coastal terrains using unmanned aerial vehicle (UAV) lidar | High resolution and quality of topographic data (5–10 cm accuracy) of UAV lidar that outperforms UAV imagery in terms of ground coverage, point density, and the ability to penetrate through the vegetation |
| [ | Semi-automatic coastal waste detection and recognition using 3D lidar data | Possible classification of waste into plastic, paper, fabric, and metal |
| [ | Monitoring the dynamics of the upper part of the ocean by ship-lidar with the analysis of motion impact on lidar measurements | Measurement of waves, turbulence, and the impact of wind farms on the seas |
| [ | Doppler lidar-based data collection for offshore wind farms | High-resolution measuring of wind speed and direction at various altitudes for proper realization of offshore wind farms |
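Both review tables repeatedly rely on DBSCAN to cluster raw lidar returns into object candidates (roadside pedestrian and vehicle tracking on roads; object detection on seas and oceans). A compact, dependency-free sketch of the algorithm; the `eps` and `min_pts` values are illustrative and not taken from the reviewed papers:

```python
from math import dist

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN: returns one cluster id per point, or -1 for noise."""
    n = len(points)
    labels = [None] * n          # None = not yet visited
    cluster = -1

    def neighbors(i):
        # Brute-force neighborhood query; production systems use a k-d tree.
        return [j for j in range(n) if dist(points[i], points[j]) <= eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:  # not a core point: provisionally noise
            labels[i] = -1
            continue
        cluster += 1              # start a new cluster from this core point
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:   # noise reached from a core point: border
                labels[j] = cluster
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_seeds = neighbors(j)
            if len(j_seeds) >= min_pts:  # j is also core: keep expanding
                queue.extend(j_seeds)
    return labels
```

Because cluster count is not fixed in advance and sparse returns are discarded as noise, this density-based grouping suits lidar scenes where the number of objects is unknown and clutter (rain, spray, sea surface returns) must be rejected.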
Figure 5. Challenges in lidar system application in modern transportation.