Xianzhong Jian, Wenlong Li, Xuguang Guo, Ruzhi Wang.
Abstract
Deep learning has become an important topic in fault diagnosis of motor bearings, as it avoids the need for extensive domain expertise and cumbersome manual feature extraction. However, existing neural networks suffer from low fault-recognition rates and poor adaptability under variable load conditions. To address these problems, we propose a one-dimensional fusion neural network (OFNN), which combines Adaptive one-dimensional Convolution Neural Networks with Wide Kernel (ACNN-W) and Dempster-Shafer (D-S) evidence theory. Firstly, the original vibration time-domain signals of a motor bearing acquired by two sensors are resampled. Then, four ACNN-W frameworks optimized by RMSprop are used to learn features adaptively and pre-classify them with Softmax classifiers. Finally, D-S evidence theory is used to comprehensively combine the class vectors output by the Softmax classifiers and reach a fault decision for the bearing. The proposed method adapts to different load conditions by fusing complementary or conflicting evidence from different sensors. Experiments on the Case Western Reserve University (CWRU) motor bearing database show that the proposed method effectively enhances the cross-domain adaptive ability of the model and achieves better diagnostic accuracy than other existing methods.
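The final fusion step combines the class vectors from the two sensors with Dempster's rule of combination. A minimal sketch, assuming each of the ten bearing states is a singleton hypothesis whose mass is the Softmax probability (the function name and three-class example are illustrative, not from the paper):

```python
def ds_fuse(p1, p2):
    """Combine two Softmax class vectors with Dempster's rule.

    With singleton hypotheses only, the combined mass of class k is
    proportional to p1[k] * p2[k]; normalising by the total agreement
    discards the conflicting mass the two sensors assign to different
    classes.
    """
    joint = [a * b for a, b in zip(p1, p2)]
    total = sum(joint)            # = 1 - K, where K is the conflict
    if total == 0:
        raise ValueError("totally conflicting evidence")
    return [m / total for m in joint]

# Two classifiers mildly disagree; fusion sharpens the shared top class.
drive = [0.6, 0.3, 0.1]           # e.g. drive-end prediction
fan   = [0.5, 0.2, 0.3]           # e.g. fan-end prediction
fused = ds_fuse(drive, fan)       # class 0 dominates after fusion
```

Because the rule multiplies the two belief assignments, a class must be supported by both sensors to keep high mass, which is how conflicting evidence is suppressed.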
Keywords: D-S evidence theory; deep learning; fault diagnosis; motor bearings; one-dimensional fusion neural network
Year: 2019 PMID: 30609699 PMCID: PMC6339238 DOI: 10.3390/s19010122
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1 The framework of OFNN.
Figure 2 Schematic diagram of ACNN-W.
Structure and parameters of ACNN-W.
| Number | Network Layer | Core Size/Step Size | Number of Cores | Output Size (Width × Depth) | Zero-Padding |
|---|---|---|---|---|---|
| 1 | Conv1 | 32 × 1/8 × 1 | 16 | 256 × 16 | YES |
| 2 | Pooling1 | 2 × 1/2 × 1 | 16 | 128 × 16 | NO |
| 3 | Conv2 | 3 × 1/2 × 1 | 32 | 64 × 32 | YES |
| 4 | Pooling2 | 2 × 1/2 × 1 | 32 | 32 × 32 | NO |
| 5 | Conv3 | 3 × 1/2 × 1 | 64 | 16 × 64 | YES |
| 6 | Pooling3 | 2 × 1/2 × 1 | 64 | 8 × 64 | NO |
| 7 | Conv4 | 3 × 1/2 × 1 | 64 | 4 × 64 | YES |
| 8 | Pooling4 | 2 × 1/2 × 1 | 64 | 2 × 64 | NO |
| 9 | Fully connected layer | 100 | 1 | 100 × 1 | — |
| 10 | Softmax layer | 10 | 1 | 10 × 1 | — |
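The output widths in the table follow from the usual 1-D convolution arithmetic: with zero-padding ("same"), the width shrinks only by the stride; without padding, it is `(in - kernel) // stride + 1`. A small sketch that walks the table, assuming an input segment length of 2048 samples (a common choice for CWRU segments, not stated in this record):

```python
import math

def conv_out(width, kernel, stride, padded):
    """Output width of a 1-D conv/pool layer."""
    if padded:                        # 'same' zero-padding
        return math.ceil(width / stride)
    return (width - kernel) // stride + 1

# (kernel, stride, padded) per layer, taken from the table above
layers = [(32, 8, True), (2, 2, False),   # Conv1, Pooling1
          (3, 2, True),  (2, 2, False),   # Conv2, Pooling2
          (3, 2, True),  (2, 2, False),   # Conv3, Pooling3
          (3, 2, True),  (2, 2, False)]   # Conv4, Pooling4

width = 2048                              # assumed input segment length
for k, s, p in layers:
    width = conv_out(width, k, s, p)
# width is now 2, matching the 2 × 64 output of Pooling4
```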
Figure 3 The motor bearing data sampling system used by CWRU.
Figure 4 Schematic diagram of resampling sample extraction.
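Resampling of the kind Figure 4 depicts is typically done by cutting overlapping fixed-length windows from each vibration record, so one record yields the thousands of training samples listed below. A sketch under that assumption (window length, sample count, and record length here are illustrative):

```python
def extract_samples(signal, window=2048, n_samples=100):
    """Cut overlapping fixed-length windows from one vibration record.

    A stride smaller than the window overlaps consecutive samples, so a
    single record can supply many training examples.
    """
    if len(signal) < window:
        raise ValueError("record shorter than one window")
    stride = max(1, (len(signal) - window) // (n_samples - 1))
    starts = range(0, len(signal) - window + 1, stride)
    return [signal[s:s + window] for s in list(starts)[:n_samples]]

record = list(range(120_000))     # stand-in for a raw vibration record
samples = extract_samples(record, window=2048, n_samples=100)
```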
Description of experimental data set (the numbers 1 to 10 of Fault Type Label represent the 10 states of the bearing, including nine fault states and one normal state).
| Fault Location | | Ball | Ball | Ball | Inner Race | Inner Race | Inner Race | Outer Race | Outer Race | Outer Race | None |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Fault Type Label | | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| Fault Diameter (mm) | | 0.1778 | 0.3556 | 0.5334 | 0.1778 | 0.3556 | 0.5334 | 0.1778 | 0.3556 | 0.5334 | 0 |
| Dataset A (0.75 kW) | Train | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 |
| | Test | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
| Dataset B (1.49 kW) | Train | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 |
| | Test | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
| Dataset C (2.24 kW) | Train | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 | 4500 |
| | Test | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
Figure 5 Accuracy with and without batch normalization (BN).
Figure 6 Loss function curves with and without BN.
Experiments with different optimizers and learning rates.
| Learning Rate | 0.0001 | 0.0001 | 0.001 | 0.001 | 0.01 | 0.01 | 0.1 | 0.1 | 1 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|
| Optimizer (Accuracy) | Train | Test | Train | Test | Train | Test | Train | Test | Train | Test |
| Adadelta | 10.00% | 10.00% | 10.00% | 10.00% | 10.00% | 10.00% | 85.14% | 74.40% | 99.96% | 91.34% |
| Adam | 98.70% | 92.23% | 99.99% | 92.62% | 99.92% | 91.87% | 97.80% | 85.52% | 10.00% | 10.00% |
| RMSprop | 98.68% | 92.11% | 99.94% | 92.21% | 99.99% | 93.07% | 30.12% | 29.88% | 10.00% | 10.00% |
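RMSprop, the optimizer the table favors, scales each gradient by a running root-mean-square of its history, which keeps the per-parameter step size stable. A minimal sketch of one update (hyperparameter values here are illustrative defaults, not the paper's settings):

```python
def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop update on a flat parameter list.

    cache holds the exponential moving average of squared gradients;
    dividing by its square root normalizes the effective step size
    per parameter.
    """
    new_cache = [decay * c + (1 - decay) * g * g
                 for c, g in zip(cache, grad)]
    new_w = [wi - lr * g / (c ** 0.5 + eps)
             for wi, g, c in zip(w, grad, new_cache)]
    return new_w, new_cache

w, cache = [1.0, -2.0], [0.0, 0.0]
grad = [0.5, -0.5]
w, cache = rmsprop_step(w, grad, cache)
```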
Figure 7 Comparison of min-max normalization over 20 experiments.
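Min-max normalization rescales each signal segment so its values span a fixed range, which keeps segments from different sensors and loads comparable. A minimal sketch mapping a segment to [0, 1] (the paper may use a different target range):

```python
def min_max_normalize(x):
    """Rescale a signal segment to [0, 1] via min-max normalization."""
    lo, hi = min(x), max(x)
    if hi == lo:                      # constant segment: no spread to rescale
        return [0.0 for _ in x]
    return [(v - lo) / (hi - lo) for v in x]

seg = [-0.3, 0.1, 0.7, -0.1]
norm = min_max_normalize(seg)         # min maps to 0.0, max maps to 1.0
```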
Figure 8 Visualizations of the sixteen convolution kernels of the first layer.
Figure 9 Feature visualization via t-SNE: feature representations of all test signals extracted from the raw signal, the four convolutional layers, and the fully connected layer, respectively.
Figure 10 Drive-end data test.
Figure 11 Fan-end data test.
Comparison of different methods over 20 experiments.
| Method | A-A | A-B | A-C | B-A | B-B | B-C | C-A | C-B | C-C | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| ACNN-W-AVG-DA | 99.97% | 99.87% | 98.01% | 98.19% | 100.00% | 99.88% | 92.98% | 99.11% | 100.00% | 98.66% |
| ACNN-W-AVG-FA | 100.00% | 99.37% | 85.75% | 90.65% | 100.00% | 93.18% | 76.35% | 85.56% | 100.00% | 92.31% |
| OFNN-DE | 100.00% | 100.00% | 99.62% | 98.50% | 100.00% | 99.90% | 94.37% | 99.99% | 100.00% | 99.15% |
| OFNN-FA | 100.00% | 100.00% | 90.60% | 91.61% | 100.00% | 93.48% | 77.84% | 85.16% | 100.00% | 93.18% |
| OFNN | 100.00% | 100.00% | 99.76% | 99.01% | 100.00% | 99.99% | 97.02% | 100.00% | 100.00% | 99.53% |
Figure 12 Information fusion between drive-end and fan-end predictions: (a) C-A fan-end confusion matrix; (b) C-A drive-end confusion matrix; (c) C-A confusion matrix after D-S evidence fusion.
Comparison of different methods.
| Method | A-B | A-C | B-A | B-C | C-A | C-B | AVG |
|---|---|---|---|---|---|---|---|
| FFT-SVM | 68.6% | 60.0% | 73.2% | 67.6% | 68.4% | 62.0% | 66.6% |
| FFT-DNN | 82.2% | 82.6% | 72.3% | 77.0% | 76.9% | 77.3% | 78.1% |
| WDCNN | 99.2% | 91.0% | 95.1% | 91.5% | 78.1% | 85.1% | 90.0% |
| TICNN | 99.1% | 90.7% | 97.4% | 98.8% | 89.2% | 97.6% | 95.5% |
| Ensemble TICNN | 99.5% | 91.1% | 97.6% | 99.4% | 90.2% | 98.7% | 96.1% |
| IDSCNN | 100.0% | 97.7% | 99.4% | 99.6% | 93.8% | 99.9% | 98.4% |
| ACNN-W | 99.8% | 98.0% | 98.1% | 99.8% | 92.9% | 99.1% | 97.9% |
| OFNN | 100.0% | 99.7% | 99.0% | 100.0% | 97.0% | 100.0% | 99.3% |