Xiang Zhang, Wei Yang, Xiaolin Tang, Jie Liu.
Abstract
To improve the accuracy of lane detection in complex scenarios, an adaptive lane feature learning algorithm that can automatically learn lane features in various scenarios is proposed. First, a two-stage learning network based on YOLO v3 (You Only Look Once, v3) is constructed, and the structural parameters of the YOLO v3 algorithm are modified to make it more suitable for lane detection. To improve training efficiency, a method for automatically generating lane label images in a simple scenario is proposed, which provides label data for training the first-stage network. Then, an adaptive edge detection algorithm based on the Canny operator is used to relocate the lanes detected by the first-stage model, and unrecognized lanes are masked to avoid interference in subsequent model training. The images processed in this way are then used as label data for training the second-stage model. Experiments on the KITTI and Caltech datasets show that the second-stage model achieves both high accuracy and high speed.
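The abstract does not specify how the Canny thresholds are adapted; a common heuristic (used here purely for illustration, not as the authors' method) is to derive the hysteresis thresholds from the median intensity of the image, so that edge detection tracks overall scene brightness:

```python
import numpy as np

def adaptive_canny_thresholds(gray, sigma=0.33):
    """Derive Canny hysteresis thresholds from the median intensity.

    A standard median-based heuristic (an assumption for illustration,
    not necessarily the paper's adaptation rule): brighter or darker
    scenes shift the thresholds, so edges are found consistently
    across lighting conditions.
    """
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper
```

The resulting `(lower, upper)` pair would then be passed to a Canny implementation (e.g. `cv2.Canny(gray, lower, upper)`), letting the edge-based relocation step run without per-scenario manual tuning.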
Keywords: YOLO v3; adaptive learning; label image generation; lane detection
Year: 2018 PMID: 30563274 PMCID: PMC6308794 DOI: 10.3390/s18124308
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Structure of the two-stage lane detection model based on YOLO v3.
Figure 2. Lane distribution in the bird-view image.
Figure 3. Training flowchart.
Figure 4. Label images.
Figure 5. Detection results of the first-stage KITTI model on the KITTI dataset.
Figure 6. Detection results of the second-stage KITTI model on the KITTI dataset.
Figure 7. Lane detection results on the Caltech dataset.
Figure 8. PR curves of all algorithms on the two datasets. (a,b) are the PR curves on the KITTI and Caltech datasets, respectively.
Detection accuracy and speed of all lane detection algorithms on the KITTI and Caltech datasets.

| Algorithm | KITTI mAP | KITTI speed | Caltech mAP | Caltech speed |
|---|---|---|---|---|
| Fast RCNN | 49.87 | 2271 | 53.13 | 2140 |
| Faster RCNN | 58.78 | 122 | 61.73 | 149 |
| Sliding window & CNN | 68.98 | 79,000 | 71.26 | 42,000 |
| SSD | 75.73 | 29.3 | 77.39 | 25.6 |
| Context & RCNN | 79.26 | 197 | 81.75 | 136 |
| YOLO v1 | 72.21 | 44.7 | 73.92 | 45.2 |
| T-S YOLO v1 | 74.67 | 45.1 | 75.69 | 45.4 |
| YOLO v2 | 81.64 | 59.1 | 82.81 | 58.5 |
| T-S YOLO v2 | 83.16 | 59.6 | 84.07 | 59.2 |
| YOLO v3 | 87.42 | 24.8 | 88.44 | 24.3 |
| T-S YOLO v3 | 88.39 | 25.2 | 89.32 | 24.7 |
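The mAP figures in the table summarize the PR curves of Figure 8. A standard way to reduce a precision–recall curve to a single average-precision number is all-point interpolation (a VOC-style computation; the paper's exact evaluation protocol is assumed, not confirmed):

```python
import numpy as np

def average_precision(recall, precision):
    """All-point interpolated AP from a precision-recall curve.

    A standard (VOC-style) summary of a PR curve; shown as a sketch
    of how per-class AP values behind an mAP table are computed.
    """
    # Pad so the step curve spans recall 0..1.
    r = np.concatenate(([0.0], np.asarray(recall, float), [1.0]))
    p = np.concatenate(([0.0], np.asarray(precision, float), [0.0]))
    # Make precision monotonically non-increasing (right to left).
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the area of the steps where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

mAP is then the mean of `average_precision` over all classes (here, lane classes/datasets).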
Figure 9. The lane fitting process.
Figure 10. Fitting results after lane detection (the odd columns show the lane detection result under the bird-view perspective, and the even columns show the lane fitting result mapped to the original image).
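The fitting step in Figures 9 and 10 can be sketched as fitting a polynomial to the detected lane points in the bird-view image and then mapping the curve back to the original perspective. A minimal sketch, assuming a low-order polynomial of the form x = f(y) (fitting x as a function of y handles near-vertical lanes; the degree and parametrization are assumptions, not taken from the paper):

```python
import numpy as np

def fit_lane(points, degree=2):
    """Fit x = f(y) to detected lane points in the bird-view image.

    `points` is an iterable of (x, y) pixel coordinates, e.g. box
    centers from the detector. Returns a callable polynomial that
    gives the lane's x position for any y (row), which can then be
    sampled and warped back to the original image.
    """
    pts = np.asarray(points, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    coeffs = np.polyfit(ys, xs, degree)   # least-squares fit
    return np.poly1d(coeffs)
```

Sampling the returned polynomial at evenly spaced rows yields the smooth curve overlaid in the even columns of Figure 10.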