| Literature DB >> 35062390 |
Linhui Li, Xin Sui, Jing Lian, Fengning Yu, Yafu Zhou.
Abstract
Structured roads are scenes of intense interaction between vehicles, but because driving behavior is highly uncertain, predicting vehicle interaction behavior remains a challenge. Such prediction is significant for controlling the ego-vehicle. We propose an interaction behavior prediction model based on a vehicle cluster (VC) with self-attention (VC-Attention) to improve prediction performance. First, a five-vehicle cluster structure is designed to extract interactive features between the ego-vehicle and the target vehicle, such as the Deceleration Rate to Avoid a Crash (DRAC) and the lane gap. In addition, the proposed model uses a sliding window algorithm to extract VC behavior information. The temporal characteristics of the three interactive features above are then captured by two layers of a self-attention encoder with six heads each. Finally, the target vehicle's future behavior is predicted by a sub-network consisting of a fully connected layer and a SoftMax module. Experimental results show that this method achieves accuracy, precision, recall, and F1 score above 92% and a time to event of 2.9 s on the Next Generation Simulation (NGSIM) dataset. It accurately predicts interactive behaviors under class imbalance and adapts to various driving scenarios.
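The DRAC feature named in the abstract is a standard surrogate safety measure: the deceleration the following vehicle must apply to avoid colliding with its leader. A minimal sketch of that standard definition (variable names are illustrative, not the authors'):

```python
def drac(v_follow, v_lead, gap):
    """Deceleration Rate to Avoid a Crash (m/s^2).

    v_follow, v_lead: speeds of the following and leading vehicles (m/s)
    gap: bumper-to-bumper distance between them (m)
    """
    dv = v_follow - v_lead
    if dv <= 0 or gap <= 0:
        return 0.0  # follower is not closing in: no braking required
    return dv ** 2 / (2.0 * gap)

# Example: closing at 5 m/s over a 25 m gap requires 0.5 m/s^2 of braking.
print(drac(20.0, 15.0, 25.0))  # 0.5
```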
Keywords: class imbalance; end-to-end prediction; self-attention; vehicle cluster; vehicle interaction behavior prediction
Year: 2022 PMID: 35062390 PMCID: PMC8779130 DOI: 10.3390/s22020429
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Framework of the proposed prediction approach.
Figure 2. Diagram of case division.
Figure 3. The structure of the VC-Attention.
Figure 4. The structure of the multi-head attention.
Figure 5. The structure of the self-attention encoder.
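Figures 4 and 5 depict the multi-head attention and self-attention encoder blocks. A rough NumPy sketch of six-head scaled dot-product self-attention, the mechanism these figures describe (random matrices stand in for the learned Q/K/V projections; dimensions are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, n_heads=6, seed=0):
    """x: (seq_len, d_model) feature sequence. Each head attends over the
    sequence within its own d_model // n_heads slice; head outputs are
    concatenated back to d_model."""
    seq_len, d_model = x.shape
    assert d_model % n_heads == 0
    d_head = d_model // n_heads
    rng = np.random.default_rng(seed)
    # Random weights stand in for the learned projection parameters.
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)  # scaled dot product
        heads.append(softmax(scores) @ v[:, s])
    return np.concatenate(heads, axis=1)  # (seq_len, d_model)

out = multi_head_self_attention(
    np.random.default_rng(1).standard_normal((8, 12)))
print(out.shape)  # (8, 12)
```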
Figure 6. The five-vehicle cluster.
Interactive feature parameters.
| Feature Type | Related Parameters |
|---|---|
| Between the target vehicle and the ego-vehicle | Relative speed (m/s) |
| | Relative acceleration (m/s²) |
| | Relative position (m) |
| Between surrounding vehicles and the target vehicle | Relative speed (m/s) |
| | Relative acceleration (m/s²) |
| | Lateral distance (m) |
| | Deceleration rate to avoid a crash (m/s²) |
| Vehicle cluster | Relative position (m) |
| | Gap in the lane (m) |
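The abstract states that VC behavior information is obtained by running a sliding window over per-frame features such as those in the table above. A minimal sketch of that windowing step (window length and stride are illustrative; the paper's exact values are not given here):

```python
def sliding_windows(frames, window, stride=1):
    """Slice a chronological list of per-frame feature vectors into
    fixed-length, overlapping windows for the encoder to consume."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]

# Five frames with a window of 3 yield three overlapping windows.
print(sliding_windows([0, 1, 2, 3, 4], window=3))
# [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```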
Figure 7. The learning rate changes with the number of iterations.
Figure 8. The extent of the NGSIM dataset study area. (a) The extent of the US101 dataset study area. (b) The extent of the I80 dataset study area.
Figure 9. Loss during training.
Figure 10. Prediction accuracy with different numbers of self-attention heads.
Figure 11. Prediction accuracy with different numbers of self-attention encoder layers.
Prediction results of VC-Attention.
| Behavior | Precision | Recall | F1 Score |
|---|---|---|---|
| Cut-in | 0.907 | 0.932 | 0.921 |
| Cut-out | 0.912 | 0.916 | 0.910 |
| No-cut | 0.938 | 0.925 | 0.931 |
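The per-class scores above follow the usual one-vs-rest definitions of precision, recall, and F1. A small sketch of how such scores are computed from predicted and true labels (the class names are the paper's; the data here is illustrative):

```python
def per_class_metrics(y_true, y_pred, label):
    """One-vs-rest precision, recall, and F1 for a single behavior class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == label and p == label for t, p in pairs)
    fp = sum(t != label and p == label for t, p in pairs)
    fn = sum(t == label and p != label for t, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = ["cut-in", "no-cut", "cut-in", "cut-out"]
y_pred = ["cut-in", "cut-in", "no-cut", "cut-out"]
print(per_class_metrics(y_true, y_pred, "cut-in"))  # (0.5, 0.5, 0.5)
```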
Figure 12. Comparison of prediction accuracy of different methods.
Comparison of prediction and evaluation indicators by different methods.
| Indicator | VC-Attention | | | | |
|---|---|---|---|---|---|
| F1 Score | 0.924 | 0.795 | 0.868 | 0.802 | 0.61 |
| Precision | 0.925 | 0.705 | 0.884 | 0.790 | 0.53 |
| Recall | 0.924 | 0.94 | 0.862 | 0.814 | 0.72 |
| Accuracy | 0.924 | 0.675 | 0.965 | 0.799 | 0.57 |
| Time to event (s) | 2.9 | 3.75 | 1.5 | 1.198 | 1.03 |
Figure 13. The interweaving area.