Junxiao Ren, Weidong Jin, Liang Li, Yunpu Wu, Zhang Sun.
Abstract
High-speed train bogies are essential to the safety and comfort of train operation. Because bogie performance usually degrades before failure, it is necessary to detect the performance degradation of a high-speed train bogie in advance. In this paper, taking two key dampers on the bogie (the lateral damper and the yaw damper) as experimental objects, a novel 1D-ConvLSTM time-distributed convolutional neural network (CLTD-CNN) is proposed to estimate the performance degradation of a high-speed train bogie. The proposed CLTD-CNN has an encoder-decoder structure. Specifically, the encoder consists of a time-distributed 1D-CNN module and a 1D-ConvLSTM, while the decoder consists of a 1D-ConvLSTM and a simple time-CNN with residual connections. In addition, an auxiliary training part is introduced to help CLTD-CNN learn the degradation trend characteristic, and a special input format is designed for this structure. The whole structure is end-to-end and requires no expert knowledge or engineering experience. The effectiveness of the proposed CLTD-CNN is tested on the high-speed train CRH380A under different performance states. The experimental results demonstrate the superiority of CLTD-CNN: compared with the other methods, its estimation error is the smallest.
Year: 2022 PMID: 35256877 PMCID: PMC8898146 DOI: 10.1155/2022/5030175
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Structure of a high-speed train bogie and its key components (lateral damper and yaw damper).
Summary of recent works on high-speed train bogie fault diagnosis and performance degradation estimation.
| Content | Methods and references |
|---|---|
| Overview | [ ], [ ], [ ] |
| Model-based methods | ALT method [ ]; DCLS calibration method [ ] |
| Deep learning methods on fault diagnosis | Deep neural network [ ]; residual-squeeze CNN [ ]; multiscale CNN [ ]; CapsNet-based model [ ]; Bayesian deep learning [ ]; 1D-CNN [ ]; deep neural network [ ]; LSTM [ ]; 1D-CNN [ ] |
| Deep learning methods on performance degradation estimation | SDS-CNN [ ]; M-CRNN [ ] |
Summary of recent works on deep learning.
| Domain | References |
|---|---|
| Deep learning on image and video | [ ] |
| Deep learning on signal | [ ] |
Figure 2. Overall structure of the proposed CLTD-CNN.
Figure 3. The format of the input X.
Figure 4. Detailed structure of the encoder part.
Hyperparameters of 1D-CNN in a time-distributed 1D-CNN module.
| Layers | Parameters |
|---|---|
| 1D-convolution layer | Filters: 64; kernel size: 3; stride: 2; activation function: ReLU |
| 1D-convolution layer | Filters: 128; kernel size: 3; stride: 1; activation function: ReLU |
| 1D-convolution layer | Filters: 128; kernel size: 3; stride: 1; activation function: ReLU |
| MaxPooling layer | Pool size: 2; stride: 2 |
| 1D-convolution layer (Res) | Filters: 128; kernel size: 3; stride: 2; activation function: ReLU |
| 1D-convolution layer | Filters: 256; kernel size: 3; stride: 1; activation function: ReLU |
| 1D-convolution layer | Filters: 256; kernel size: 3; stride: 1; activation function: ReLU |
| MaxPooling layer | Pool size: 2; stride: 2 |
| 1D-convolution layer (Res) | Filters: 256; kernel size: 3; stride: 2; activation function: ReLU |
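Assuming "same" padding (the padding scheme is not stated in this record), the output length of this stack can be checked with a few lines. Note how each stride-2 residual convolution halves the length exactly as the main-path block it bridges (two stride-1 convolutions plus one pooling layer) does, so the residual shapes line up. The per-step signal length below is hypothetical.

```python
import math

def conv_out_len(length, stride):
    # With 'same' padding, the output length depends only on the stride.
    return math.ceil(length / stride)

def stack_out_len(length, strides):
    for s in strides:
        length = conv_out_len(length, s)
    return length

# Strides along the main path of the table above:
# conv s2 | conv s1, conv s1, pool s2 | conv s1, conv s1, pool s2
main_strides = [2, 1, 1, 2, 1, 1, 2]

L = 2048  # hypothetical per-time-step signal length (not given in the source)
print(stack_out_len(L, main_strides))  # 256
```

Each residual convolution maps its block's input to the block's output length (e.g. 1024 to 512 for the first block), which is why both (Res) rows use stride 2.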
Figure 5. Detailed structure of the LSTM.
Figure 6. Detailed structure of the 1D-ConvLSTM.
Hyperparameters of 1D-ConvLSTM in encoder part.
| Layers | Parameters |
|---|---|
| 1D-ConvLSTM | Filters: 512; kernel size: 3; stride: 2; activation function: ReLU; return sequences: true |
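A 1D-ConvLSTM replaces the dense matrix products in a standard LSTM's gate updates with 1D convolutions, so the hidden and cell states keep a spatial axis. Below is a minimal single-step sketch in NumPy with toy sizes, stride 1, and "same" padding for brevity (the encoder's actual layer uses 512 filters, kernel size 3, stride 2); all names are illustrative.

```python
import numpy as np

def conv1d_same(x, w):
    # x: (length, in_ch); w: (k, in_ch, out_ch); stride 1, 'same' padding
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        out[t] = np.einsum('ki,kio->o', xp[t:t + k], w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm1d_step(x, h, c, Wx, Wh, b):
    # One 1D-ConvLSTM step: the input/forget/cell/output gates are computed
    # by convolutions over the input and the previous hidden state.
    z = conv1d_same(x, Wx) + conv1d_same(h, Wh) + b   # (length, 4 * filters)
    f_ = z.shape[1] // 4
    i, f, g, o = z[:, :f_], z[:, f_:2*f_], z[:, 2*f_:3*f_], z[:, 3*f_:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
length, in_ch, filters, k = 8, 4, 6, 3   # toy sizes
Wx = rng.normal(0, 0.1, (k, in_ch, 4 * filters))
Wh = rng.normal(0, 0.1, (k, filters, 4 * filters))
b = np.zeros(4 * filters)
h = np.zeros((length, filters))
c = np.zeros((length, filters))
x = rng.normal(size=(length, in_ch))
h, c = convlstm1d_step(x, h, c, Wx, Wh, b)
print(h.shape)  # (8, 6)
```

With `return sequences: true`, the layer would emit the hidden state `h` at every time step rather than only the last one.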
Figure 7. Detailed structure of the auxiliary training part.
Hyperparameters of 1D-CNN in auxiliary training part.
| Layers | Parameters |
|---|---|
| 1D-convolution layer | Filters: 512; kernel size: 3; stride: 1; activation function: ReLU |
| MaxPooling layer | Pool size: 2; stride: 2 |
| 1D-convolution layer | Filters: 1024; kernel size: 3; stride: 1; activation function: ReLU |
| MaxPooling layer | Pool size: 2; stride: 2 |
| GlobalAveragePooling layer | — |
| Fully connected layer | Units: 1024; dropout rate: 0.5 |
| Fully connected layer | Units: 1; dropout rate: 0.5 |
Figure 8. Detailed structure of the decoder part.
Hyperparameters of decoder part.
| Layers | Parameters |
|---|---|
| 1D-ConvLSTM | Filters: 1024; kernel size: 3; stride: 2; activation function: ReLU; return sequences: false |
| 1D-convolution layer | Filters: 1024; kernel size: 3; stride: 1; activation function: ReLU |
| 1D-convolution layer | Filters: 1024; kernel size: 3; stride: 2; activation function: ReLU |
| 1D-convolution layer (Res) | Filters: 1024; kernel size: 3; stride: 2; activation function: ReLU |
| 1D-convolution layer | Filters: 1536; kernel size: 3; stride: 2; activation function: ReLU |
| 1D-convolution layer | Filters: 2048; kernel size: 3; stride: 2; activation function: ReLU |
| GlobalAveragePooling layer | — |
| Fully connected layer | Units: 1024; dropout rate: 0.5 |
| Fully connected layer | Units: 1; dropout rate: 0.5 |
Figure 9. Simulation model of the CRH380A.
Figure 10. Actual rolling and vibration test rig of the vehicle in the Key Laboratory of Rail Transportation of Southwest Jiaotong University.
Details of high-speed train signal channels.
| Index | Description |
|---|---|
| 1 | Lat. acc. of the vehicle front part |
| 2 | Lat. acc. of the vehicle middle part |
| 3 | Lat. acc. of the vehicle rear part |
| 4 | Ver. acc. of the vehicle middle part |
| 5 | Ver. acc. of the vehicle front part |
| 6 | Ver. acc. of the vehicle rear part |
| 7 | Lat. acc. of bogie 1 in pos. 1 |
| 8 | Ver. acc. of bogie 1 in pos. 1 |
| 9 | Lat. acc. of bogie 1 in pos. 4 |
| 10 | Ver. acc. of bogie 1 in pos. 4 |
| 11 | Lat. acc. of bogie 1 in the middle |
| 12 | Ver. acc. of bogie 1 in the middle |
| 13 | Lat. acc. of bogie 2 in pos. 5 |
| 14 | Ver. acc. of bogie 2 in pos. 5 |
| 15 | Lat. acc. of bogie 2 in pos. 8 |
| 16 | Ver. acc. of bogie 2 in pos. 8 |
| 17 | Lat. acc. of bogie 2 in the middle |
| 18 | Ver. acc. of bogie 2 in the middle |
| 19 | Lon. acc. of axle box 1 |
| 20 | Lat. acc. of axle box 1 |
| 21 | Ver. acc. of axle box 1 |
| 22 | Lon. acc. of axle box 2 |
| 23 | Lat. acc. of axle box 2 |
| 24 | Ver. acc. of axle box 2 |
| 25 | Lon. acc. of axle box 3 |
| 26 | Lat. acc. of axle box 3 |
| 27 | Ver. acc. of axle box 3 |
| 28 | Lon. acc. of axle box 4 |
| 29 | Lat. acc. of axle box 4 |
| 30 | Ver. acc. of axle box 4 |
| 31 | Lat. dis. of the vehicle front part |
| 32 | Ver. dis. of the vehicle front part |
| 33 | Lat. dis. of the vehicle middle part |
| 34 | Ver. dis. of the vehicle middle part |
| 35 | Lat. dis. of the vehicle rear part |
| 36 | Ver. dis. of the vehicle rear part |
| 37 | Lat. dis. of bogie 1 in pos. 1 |
| 38 | Ver. dis. of bogie 1 in pos. 1 |
| 39 | Lat. dis. of bogie 1 in pos. 4 |
| 40 | Ver. dis. of bogie 1 in pos. 4 |
| 41 | Lat. dis. of bogie 1 in the middle |
| 42 | Ver. dis. of bogie 1 in the middle |
| 43 | Lat. dis. of bogie 2 in pos. 5 |
| 44 | Ver. dis. of bogie 2 in pos. 5 |
| 45 | Lat. dis. of bogie 2 in pos. 8 |
| 46 | Ver. dis. of bogie 2 in pos. 8 |
| 47 | Lat. dis. of bogie 2 in the middle |
| 48 | Ver. dis. of bogie 2 in the middle |
| 49 | Lat. dis. of wheel-set 1 |
| 50 | Lat. dis. of wheel-set 2 |
| 51 | Lat. dis. of wheel-set 3 |
| 52 | Lat. dis. of wheel-set 4 |
| 53 | Relative dis. of primary suspension in pos. 1 |
| 54 | Relative dis. of primary suspension in pos. 8 |
| 55 | Relative dis. of secondary suspension in pos. 1 |
| 56 | Relative dis. of secondary suspension in pos. 8 |
| 57 | Relative dis. of yaw damper in pos. 1 |
| 58 | Relative dis. of yaw damper in pos. 8 |
Note: lat. = lateral, ver. = vertical, lon. = longitudinal, acc. = acceleration, dis. = displacement, and pos. = position.
Figure 11. Location of sensors.
Figure 12. Portions of an acceleration signal sample and of a displacement signal sample (each sample contains 58 channels; two randomly chosen channels per sample are shown).
Details of the lateral damper data.
| Training set |  |  |  | Test set |  |  |
|---|---|---|---|---|---|---|
| Performance state (%) | Label (sequence) | Label | Number | Performance state (%) | Label | Number |
| 100, 95, 90, 85 | [100, 95, 90, 85] | 85 | 20000 | 95, 90, 85, 80 | 80 | 2000 |
| 95, 90, 85, 80 | [95, 90, 85, 80] | 80 | 20000 | 90, 85, 80, 75 | 75 | 2000 |
| 90, 85, 80, 75 | [90, 85, 80, 75] | 75 | 20000 | 85, 80, 75, 70 | 70 | 2000 |
| 85, 80, 75, 70 | [85, 80, 75, 70] | 70 | 20000 | 80, 75, 70, 65 | 65 | 2000 |
| 80, 75, 70, 65 | [80, 75, 70, 65] | 65 | 20000 | 75, 70, 65, 60 | 60 | 2000 |
| 75, 70, 65, 60 | [75, 70, 65, 60] | 60 | 20000 | 70, 65, 60, 55 | 55 | 2000 |
| 70, 65, 60, 55 | [70, 65, 60, 55] | 55 | 20000 | 65, 60, 55, 50 | 50 | 2000 |
| 65, 60, 55, 50 | [65, 60, 55, 50] | 50 | 20000 | 60, 55, 50, 45 | 45 | 2000 |
| 60, 55, 50, 45 | [60, 55, 50, 45] | 45 | 20000 | 55, 50, 45, 40 | 40 | 2000 |
Details of the yaw damper data.
| Training set |  |  |  | Test set |  |  |
|---|---|---|---|---|---|---|
| Performance state (%) | Label (sequence) | Label | Number | Performance state (%) | Label | Number |
| 100, 95, 90, 85 | [100, 95, 90, 85] | 85 | 20000 | 95, 90, 85, 80 | 80 | 2000 |
| 95, 90, 85, 80 | [95, 90, 85, 80] | 80 | 20000 | 90, 85, 80, 75 | 75 | 2000 |
| 90, 85, 80, 75 | [90, 85, 80, 75] | 75 | 20000 | 85, 80, 75, 70 | 70 | 2000 |
| 85, 80, 75, 70 | [85, 80, 75, 70] | 70 | 20000 | 80, 75, 70, 65 | 65 | 2000 |
| 80, 75, 70, 65 | [80, 75, 70, 65] | 65 | 20000 | 75, 70, 65, 60 | 60 | 2000 |
| 75, 70, 65, 60 | [75, 70, 65, 60] | 60 | 20000 | 70, 65, 60, 55 | 55 | 2000 |
| 70, 65, 60, 55 | [70, 65, 60, 55] | 55 | 20000 | 65, 60, 55, 50 | 50 | 2000 |
| 65, 60, 55, 50 | [65, 60, 55, 50] | 50 | 20000 | 60, 55, 50, 45 | 45 | 2000 |
| 60, 55, 50, 45 | [60, 55, 50, 45] | 45 | 20000 | 55, 50, 45, 40 | 40 | 2000 |
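Both data tables follow the same pattern: performance degrades in 5% steps, each sample covers four consecutive states, the sequence label repeats the window, and the scalar label is the last (most degraded) state. This layout can be generated programmatically; the sketch below mirrors the tables, with the function name being illustrative.

```python
def degradation_windows(start=100, stop=45, step=5, n=4):
    # Enumerate performance states from `start` down to `stop` in `step`%
    # decrements, then slide a window of n consecutive states over them.
    states = list(range(start, stop - 1, -step))  # e.g. 100, 95, ..., 45
    rows = []
    for i in range(len(states) - n + 1):
        window = states[i:i + n]
        rows.append({"states": window,
                     "label_seq": window,       # sequence label
                     "label": window[-1]})      # scalar label: worst state
    return rows

train = degradation_windows(100, 45)  # rows of the training-set columns
test = degradation_windows(95, 40)    # rows of the test-set columns
print(train[0]["states"], train[0]["label"])  # [100, 95, 90, 85] 85
print(len(train), len(test))                  # 9 9
```

The test windows start one 5% step below the training windows, so the model is always evaluated on a degradation level one step beyond those it was trained on.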
Figure 13. Validation loss of experiments on step length n. (a) Lateral damper. (b) Yaw damper.
Results of experiments on step length n.
| Step length | Lateral damper |  | Yaw damper |  |
|---|---|---|---|---|
|  | MAE | RMSE | MAE | RMSE |
| 2 | 3.96 | 4.35 | 4.37 | 4.88 |
| 3 | 1.98 | 2.50 | 2.17 | 2.68 |
| 4 | — | — | — | 1.46 |
| 5 | 0.93 | 1.14 | 1.22 | — |
| 6 | 0.89 | 1.17 | 1.25 | 1.60 |
| 7 | 1.19 | 1.43 | 1.38 | 1.72 |

— marks a value missing from this record; the original table shows the minimum error of each column in bold.
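MAE and RMSE throughout these tables are the usual regression errors between the estimated and true performance states. For reference, with toy numbers (not the paper's data):

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the estimation error
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large deviations more heavily
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

y_true = [80, 75, 70, 65]   # toy performance states (%)
y_pred = [81, 74, 71, 63]   # toy estimates
print(round(mae(y_true, y_pred), 2), round(rmse(y_true, y_pred), 2))  # 1.25 1.32
```

RMSE is always at least as large as MAE, which is why each RMSE column dominates its MAE column in the tables.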
Figure 14. Validation loss of experiments on the time-distributed 1D-CNN module. (a) Lateral damper. (b) Yaw damper.
Results of experiments on time-distributed 1D-CNN module.
| Different cases | Lateral damper |  | Yaw damper |  |
|---|---|---|---|---|
|  | MAE | RMSE | MAE | RMSE |
| Without time-distributed 1D-CNN module | 4.12 | 5.87 | 4.40 | 5.24 |
| With time-distributed 1D-CNN module | — | — | — | — |

— marks a value missing from this record; the original table shows the minimum error of each column in bold.
Figure 15. Validation loss of experiments on different RNN structures. (a) Lateral damper. (b) Yaw damper.
Results of experiments on different RNN structures.
| CLTD-CNN with different RNN structures | Lateral damper |  | Yaw damper |  |
|---|---|---|---|---|
|  | MAE | RMSE | MAE | RMSE |
| CLTD-CNN (with RNN) | 3.56 | 4.01 | 5.02 | 5.79 |
| CLTD-CNN (with LSTM) | 3.24 | 3.51 | 3.15 | 3.47 |
| CLTD-CNN (with GRU) | 3.31 | 3.79 | 3.21 | 3.39 |
| CLTD-CNN (with 1D-ConvGRU) | 1.21 | 1.37 | 1.26 | 1.68 |
| CLTD-CNN (with 1D-ConvLSTM) | — | — | — | — |

— marks a value missing from this record; the original table shows the minimum error of each column in bold.
Figure 16. Validation loss of experiments on λ. (a) Lateral damper. (b) Yaw damper.
Results of experiments on λ.
| λ | Lateral damper |  | Yaw damper |  |
|---|---|---|---|---|
|  | MAE | RMSE | MAE | RMSE |
| 0.0 | 2.16 | 2.31 | 3.16 | 3.32 |
| 0.1 | 1.04 | 1.16 | 1.39 | 1.77 |
| 0.2 | — | — | 1.21 | — |
| 0.3 | 0.94 | 1.12 | — | 1.50 |
| 0.4 | 1.18 | 1.35 | 1.48 | 1.76 |
| 0.5 | 1.83 | 2.15 | 1.91 | 2.35 |

— marks a value missing from this record; the original table shows the minimum error of each column in bold.
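λ weights the auxiliary training part against the main estimation objective. This record does not show the loss equation, so the following is only a plausible sketch of such a weighted sum (the form L = L_main + λ·L_aux and the names are assumptions, not the paper's stated formula):

```python
def combined_loss(main_loss, aux_loss, lam=0.3):
    # lam = λ; hypothetical weighted sum L = L_main + λ * L_aux.
    # The experiments above suggest λ around 0.2-0.3 gives the smallest
    # errors, with λ = 0 (no auxiliary signal) clearly worse.
    return main_loss + lam * aux_loss

print(combined_loss(1.0, 0.5, lam=0.3))  # 1.15
```

The λ = 0.0 row, where the auxiliary part contributes nothing, has the largest errors in every column, which is consistent with the auxiliary part helping the network learn the degradation trend.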
Figure 17. Validation loss of comparison experiments. (a) Lateral damper. (b) Yaw damper.
Results of comparison experiments.
| Method | Lateral damper |  | Yaw damper |  | GFLOPs | Average training time per epoch (s) | Inference time (s, 18000 test samples) |
|---|---|---|---|---|---|---|---|
|  | MAE | RMSE | MAE | RMSE |  |  |  |
| TCNN [ ] | 10.25 | 13.48 | 12.54 | 14.31 | — | — | — |
| LSTM-AON [ ] | 8.01 | 10.40 | 13.67 | 15.39 | 1.044 | 180.7 | 92.5 |
| BiGRU [ ] | 6.34 | 7.74 | 10.95 | 12.88 | 0.191 | 29.1 | 11.1 |
| MDDNN [ ] | 3.87 | 4.46 | 4.33 | 5.17 | 0.518 | 88.9 | 40.4 |
| SAE-LSTM [ ] | 2.44 | 2.96 | 3.35 | 4.04 | 0.807 | 151.4 | 72.1 |
| M-CRNN [ ] | 2.41 | 3.52 | 2.74 | 3.34 | 0.380 | 74.3 | 33.4 |
| SDS-CNN [ ] | 2.19 | 3.26 | 2.81 | 3.53 | 0.292 | 42.7 | 19.6 |
| Proposed CLTD-CNN | — | — | — | — | 0.322 | 57.1 | 26.2 |

— marks a value missing from this record; the original table shows the minimum error of each column in bold.