Bin Zou 1,2,3, Wenbo Li 1,2,3, Xianjun Hou 1,2,3, Luqi Tang 1,2,3, Quan Yuan 1,2,3.
Abstract
Preceding vehicles have a significant impact on driving safety, whether or not they travel in the same direction as the ego-vehicle (EV). Reliable trajectory prediction of preceding vehicles is therefore crucial for safer planning. In this paper, we propose a framework for trajectory prediction of preceding target vehicles (PTVs) in an urban scenario using multi-sensor fusion. First, the historical trajectory of the PTV is acquired in the dynamic scene by fusing LIDAR, camera, and combined inertial navigation system data. Next, a Savitzky-Golay filter is applied to smooth the vehicle trajectory. Then, two transformer-based networks are built to predict the PTV's future trajectory: a traditional transformer (TF) and a cluster-based transformer (C-TF). In the traditional transformer, PTV trajectories are predicted from velocities along the X-axis and Y-axis. In the cluster-based transformer, the k-means algorithm and the transformer are combined to predict the trajectory in a high-dimensional space as a classification problem. Driving data collected in a real-world environment in Wuhan, China, are used to train and validate the proposed PTV trajectory prediction algorithms. The performance analysis confirms that both proposed transformer-based methods can effectively predict trajectories using multi-sensor fusion, and that the cluster-based transformer achieves better performance than the traditional transformer.
Keywords: cluster; detection and tracking; different driving direction; multi-sensor fusion; trajectory prediction; transformer
Year: 2022 PMID: 35808302 PMCID: PMC9268907 DOI: 10.3390/s22134808
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
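As a rough illustration of the preprocessing described in the abstract, the sketch below smooths a fused PTV trajectory with a Savitzky-Golay filter and discretizes the resulting X/Y velocities with k-means, the step that turns the C-TF's prediction task into classification. The window length, polynomial order, sampling rate, and number of clusters are illustrative assumptions, not parameters reported by the authors.

```python
# Minimal sketch (illustrative parameters, not the authors' implementation):
# 1) smooth the fused PTV trajectory with a Savitzky-Golay filter;
# 2) discretize X/Y velocities with k-means so a cluster-based transformer
#    can predict cluster labels instead of continuous velocities.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cluster import KMeans

def smooth_trajectory(xy, window=11, polyorder=3):
    """Column-wise Savitzky-Golay smoothing of an (N, 2) X/Y trajectory."""
    return savgol_filter(xy, window_length=window, polyorder=polyorder, axis=0)

def velocity_classes(vxvy, n_clusters=8, random_state=0):
    """Map (N, 2) X/Y velocities to discrete k-means cluster labels."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit_predict(vxvy), km.cluster_centers_

# Toy data standing in for a trajectory recovered by LIDAR/camera/INS fusion.
t = np.linspace(0.0, 6.0, 60)                         # 6 s of history at 10 Hz
xy_true = np.stack([8.0 * t, 0.3 * np.sin(t)], axis=1)
xy_noisy = xy_true + np.random.normal(scale=0.2, size=xy_true.shape)

xy_smooth = smooth_trajectory(xy_noisy)
vxvy = np.gradient(xy_smooth, t, axis=0)              # finite-difference velocities
labels, centers = velocity_classes(vxvy)              # classification targets for C-TF
```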
Figure 1. Relative motion of the PTV.
Figure 2. The effect of LIDAR–camera fusion.
Figure 3. EV coordinate system unification method.
Figure 4. Relative distance between the PTV and the EV.
Figure 5. Historical trajectory generation.
Figure 6. The TF network.
Figure 7. The C-TF network.
Figure 8. Data collection routes.
Figure 9. Test vehicle.
Figure 10. Velocity distribution. (a) Velocity distribution of the X-axis. (b) Velocity distribution of the Y-axis.
The ADE and FDE of LSTM, TF, and C-TF on the real-world preceding vehicles dataset.

| Horizon | LSTM ADE (m) | LSTM FDE (m) | TF ADE (m) | TF FDE (m) | C-TF ADE (m) | C-TF FDE (m) |
|---|---|---|---|---|---|---|
| 1 s | 4.244 | 8.090 | 1.042 | 1.495 | 0.737 | 1.056 |
| 2 s | 8.596 | 16.960 | 1.816 | 3.642 | 1.204 | 2.244 |
| 3 s | 12.159 | 24.300 | 2.708 | 6.046 | 1.699 | 3.519 |
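For reference, the ADE and FDE reported in these tables are the standard average and final displacement errors over the prediction horizon. A minimal sketch of how they can be computed (not the authors' evaluation code), assuming the predicted and ground-truth trajectories are (T, 2) arrays of X/Y positions in metres:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement error for (T, 2) X/Y trajectories in metres."""
    dists = np.linalg.norm(pred - gt, axis=1)  # per-step Euclidean error
    return dists.mean(), dists[-1]             # ADE over the horizon, FDE at the last step
```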
Figure 11. The ADE and FDE in the X- and Y-axis directions. (a) The ADE of the X-axis. (b) The ADE of the Y-axis. (c) The FDE of the X-axis. (d) The FDE of the Y-axis.
Figure 12. The prediction results of C-TF and TF for the PTV on the real-world preceding vehicles dataset.
Figure 13. The 3rd-second accuracy of C-TF for different numbers of clusters.
The ADE and FDE of LSTM, TF, and C-TF on the Argoverse dataset.

| Horizon | LSTM ADE (m) | LSTM FDE (m) | TF ADE (m) | TF FDE (m) | C-TF ADE (m) | C-TF FDE (m) |
|---|---|---|---|---|---|---|
| 1 s | 2.76 | 5.52 | 0.72 | 0.88 | 0.60 | 0.67 |
| 2 s | 5.86 | 11.73 | 1.00 | 1.68 | 0.79 | 1.35 |
| 3 s | 8.32 | 16.59 | 1.30 | 2.49 | 1.11 | 2.38 |
Figure 14. The prediction results of C-TF and TF for the PTV on the Argoverse dataset.