Md Al-Masrur Khan, Md Foysal Haque, Kazi Rakib Hasan, Samah H. Alajmani, Mohammed Baz, Mehedi Masud, Abdullah-Al Nahid.
Abstract
Lane detection plays a vital role in making autonomous cars a reality. Traditional lane detection methods rely on extensive hand-crafted features and post-processing, which makes them feature-specific and unstable under variations in road scenes. In recent years, Deep Learning (DL) models, especially Convolutional Neural Network (CNN) models, have been proposed and used to perform pixel-level lane segmentation. However, most of these methods focus on achieving high accuracy on structured roads in good weather and are not tested on damaged roads, especially those with blurry or missing lane lines and cracked pavement, which are common in the real world. Moreover, many of these CNN-based models have complex structures and require high-end hardware, making them unsuitable for embedded devices. To address these shortcomings, this paper introduces LLDNet, a novel lightweight CNN based on an encoder-decoder architecture, tested under adverse weather and road conditions. A channel attention module and a spatial attention module are integrated into the architecture to refine the feature maps, achieving strong results with fewer parameters. We trained the model on a hybrid dataset created by combining two separate datasets and compared it with several state-of-the-art encoder-decoder architectures. Numerical results on this dataset show that our model surpasses the compared methods in dice coefficient, IoU, and model size. Moreover, we carried out extensive experiments on videos of different roads in Bangladesh. The visualization results show that our model detects lanes accurately on both structured and damaged roads and in adverse weather conditions.
Experimental results indicate that the designed method detects lanes accurately and is ready for practical implementation.
Keywords: autonomous cars; convolutional neural network; deep learning; lane detection; semantic segmentation
Year: 2022 PMID: 35898103 PMCID: PMC9332112 DOI: 10.3390/s22155595
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Structure of the proposed LLDNet architecture. Our code is available at https://github.com/Masrur02/LLDNet (accessed on 21 June 2022).
Figure 2. Structure of the residual block.
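The record gives only the caption for the residual block, so its exact layer composition is unknown; residual blocks in general compute y = F(x) + x with an identity shortcut. A minimal NumPy sketch (the two-layer linear transform, the weights `w1`/`w2`, and the ReLU placement are illustrative assumptions, not the paper's layers, which would use convolutions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Generic residual block: y = ReLU(F(x) + x).

    F is modeled here as two linear layers with a ReLU in between;
    the actual LLDNet block likely uses convolutional layers (the
    caption alone does not specify them).
    """
    f = relu(x @ w1) @ w2  # the learned residual F(x)
    return relu(f + x)     # identity shortcut, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)  # same shape as x
```

The identity shortcut lets gradients flow around F, which is why residual blocks are common in lightweight encoder-decoder designs.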
Figure 3. Structure of the CBAM module.
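CBAM (Convolutional Block Attention Module) is a published design that applies channel attention followed by spatial attention; how LLDNet wires it in is not detailed in this record. A NumPy sketch of the two stages (the shared-MLP weights `w1`/`w2` are illustrative, and CBAM's 7×7 convolution in the spatial stage is replaced by a fixed weighted sum of the pooled maps for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Channel attention: a shared MLP over global avg- and
    max-pooled descriptors, combined and squashed by a sigmoid.
    x has shape (C, H, W)."""
    avg = x.mean(axis=(1, 2))                 # (C,) per-channel avg pool
    mx = x.max(axis=(1, 2))                   # (C,) per-channel max pool
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2
    scale = sigmoid(mlp(avg) + mlp(mx))       # (C,) attention weights
    return x * scale[:, None, None]

def spatial_attention(x, alpha=0.5):
    """Spatial attention: pool across channels, then form a per-pixel
    gate. CBAM uses a 7x7 conv over the concatenated pooled maps; a
    fixed weighted sum stands in for that conv here (a simplification,
    not the paper's layer)."""
    avg = x.mean(axis=0)                      # (H, W)
    mx = x.max(axis=0)                        # (H, W)
    scale = sigmoid(alpha * avg + (1.0 - alpha) * mx)
    return x * scale[None, :, :]

def cbam(x, w1, w2):
    # Channel attention first, spatial attention second, as in CBAM.
    return spatial_attention(channel_attention(x, w1, w2))

rng = np.random.default_rng(1)
x = rng.standard_normal((16, 8, 8))
w1 = rng.standard_normal((16, 4)) * 0.1       # reduction ratio 4 (assumed)
w2 = rng.standard_normal((4, 16)) * 0.1
y = cbam(x, w1, w2)                           # same shape as x
```

Because both attention stages multiply the features by sigmoid gates in (0, 1), the module can only rescale feature responses, never amplify them, which keeps refinement cheap in parameters.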
Figure 4. Curves of loss, dice coefficient, and IoU during training and testing of the models over 100 epochs.
Model performance comparison with other state-of-the-art models.
| Model | Accuracy (%) | Dice Coefficient (%) | IoU (%) | Dice Loss (%) | Number of Parameters (Million) | File Size (MB) |
|---|---|---|---|---|---|---|
| PSPNet | 95.89 | 95.46 | 94.82 | 5.01 | 0.33 | 4.08 |
| U-net | 96.27 | 98.02 | 96.98 | 1.98 | 1.94 | 22.97 |
| FCN | 96.30 | 98.13 | 97.19 | 1.87 | 1.37 | 16.35 |
| Ours | 96.31 | 98.18 | 97.33 | 1.82 | 0.26 | 1.88 |
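The dice coefficient, IoU, and dice loss reported above are standard overlap metrics for binary segmentation masks (dice loss is simply 1 − dice). A minimal NumPy sketch of how such scores are computed (the smoothing constant `eps` is a common convention for avoiding division by zero, not a detail taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU (Jaccard index) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 2x3 lane masks: intersection has 2 pixels, union has 4.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, target))  # → ~0.6667
print(iou(pred, target))               # → ~0.5
```

Dice weights the overlap twice relative to the mask sizes, so it is always at least as large as IoU for the same prediction, which matches the ordering of the two columns in the table.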
Figure 5. Visualization of lane detection in perfect road and weather conditions.
Figure 6. Visualization of lane detection on curvy roads.
Figure 7. Visualization of lane detection in rainy weather conditions.
Figure 8. Visualization of lane detection at night.
Figure 9. Visualization of lane detection on damaged roads.