Maryam Ahang, Masoud Jalayer, Ardeshir Shojaeinasab, Oluwaseyi Ogunfowora, Todd Charter, Homayoun Najjaran.
Abstract
Bearings are vital components of rotating machines and are prone to unexpected faults. Bearing fault diagnosis and condition monitoring are therefore essential for reducing operational costs and downtime in numerous industries. Under various production conditions, bearings operate across a range of loads and speeds, producing different vibration patterns for each fault type. Normal data are plentiful, since systems usually operate in the desired conditions; fault data, by contrast, are rare, and for many conditions no fault data have been recorded at all. Access to fault data is crucial for developing data-driven fault diagnosis tools that can improve both the performance and safety of operations. To this end, a novel algorithm based on conditional generative adversarial networks (CGANs) is introduced. Trained on normal and fault data from conditions where faults actually occurred, the algorithm generates fault data from the normal data of target conditions. The proposed method was validated on a real-world bearing dataset, and fault data were generated for different operating conditions. Several state-of-the-art classifiers and visualization models were implemented to evaluate the quality of the synthesized data, and the results demonstrate the efficacy of the proposed algorithm.
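The core idea of the abstract, conditioning a generator on both a normal-condition signal and a fault label so that it outputs a synthetic fault signal, can be sketched as a toy forward pass. This is a minimal NumPy illustration of CGAN-style label conditioning, not the paper's actual N2FGAN architecture; the window length, layer sizes, and residual connection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

WINDOW = 64    # vibration window length (illustrative)
N_FAULTS = 4   # number of fault classes (illustrative)
HIDDEN = 32

# Randomly initialised weights stand in for a trained generator.
W1 = rng.normal(0, 0.1, (WINDOW + N_FAULTS, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, WINDOW))

def generate_fault(normal_window: np.ndarray, fault_id: int) -> np.ndarray:
    """Map a normal-condition window plus a fault label to a synthetic fault window."""
    label = np.eye(N_FAULTS)[fault_id]          # one-hot condition vector
    x = np.concatenate([normal_window, label])  # CGAN-style conditioning: concatenate input and label
    h = np.tanh(x @ W1)                         # hidden layer
    return normal_window + h @ W2               # residual: perturb the normal signal toward a fault pattern

normal = np.sin(np.linspace(0, 8 * np.pi, WINDOW))  # toy healthy vibration signal
fake_fault = generate_fault(normal, fault_id=2)
```

In a real CGAN the same label is also fed to the discriminator, so the generator is pushed to produce signals that match the requested fault class rather than an arbitrary fault.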
Keywords: bearing fault detection; condition monitoring; fault detection and diagnosis; generative adversarial networks; signal processing
Year: 2022 PMID: 35891092 PMCID: PMC9320677 DOI: 10.3390/s22145413
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. A flow diagram using N2FGAN for fault data generation.
Figure 2. A basic structure of an LSTM network.
Figure 3. A schematic representation of the major CNN layers.
Figure 4. A comparison between GAN, CGAN, and the proposed method (N2FGAN).
Figure 5. The proposed data generation framework (N2FGAN).
Figure 6. CWRU bearing data collection test bed.
Figure 7. Samples of generated fault data at 1797 RPM.
Figure 8. Samples of generated fault data for different RPMs.
Selected features for analysis of the generated fault data.
| Time Domain Feature | Frequency Domain Feature |
|---|---|
| Mean | Mean |
| Standard Deviation | Standard Deviation |
| Skewness | Skewness |
| Crest Factor | Crest Factor |
| Kurtosis | Shannon Entropy |
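The time-domain features named in the table have standard definitions, computed below with NumPy; the paper's exact formulas may differ in normalization conventions (e.g., biased vs. unbiased moments), so treat these as the conventional forms rather than the authors' precise ones.

```python
import numpy as np

def time_domain_features(x: np.ndarray) -> dict:
    """Conventional definitions of the table's time-domain features."""
    mean = x.mean()
    std = x.std()                                     # biased (population) standard deviation
    skewness = ((x - mean) ** 3).mean() / std ** 3    # third standardized moment
    kurtosis = ((x - mean) ** 4).mean() / std ** 4    # fourth standardized moment (Pearson, no -3 offset)
    rms = np.sqrt((x ** 2).mean())
    crest_factor = np.abs(x).max() / rms              # peak amplitude relative to RMS
    return {"mean": mean, "std": std, "skewness": skewness,
            "kurtosis": kurtosis, "crest_factor": crest_factor}

# A pure sine has kurtosis 1.5 and crest factor sqrt(2), a quick sanity check.
x = np.sin(np.linspace(0, 4 * np.pi, 1024))
feats = time_domain_features(x)
```

Crest factor and kurtosis are the classic bearing-fault indicators here: impulsive defect signatures raise both well above their sinusoidal baselines.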
Figure 9. t-SNE visualization of the generated data.
Classifier descriptions.
| Framework | Description |
|---|---|
| ConvLSTM | The architecture consists of two CNN blocks (each containing a 1D-convolutional layer, batch normalization, ReLU, and max pooling), an LSTM block, a dense layer with a sigmoid activation function, a dropout layer, and a SoftMax layer. |
| CNN | It consists of four CNN blocks (each containing a 1D-convolutional layer, batch normalization, ReLU, and a max pooling layer), a flatten layer, a fully connected layer, and a SoftMax classification layer. |
| ConvAE | It is a multi-layer network consisting of an encoder and a decoder, each comprising three CNN blocks (containing 1D-convolutional layers, ReLU, and max pooling or upsampling), followed by a flatten layer, a fully connected layer, and a SoftMax classification layer. |
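All three classifiers are built from the same CNN block (1D convolution, ReLU, max pooling). A minimal NumPy forward pass through one such block shows how the window length shrinks at each stage; the kernel size, filter count, and window length here are illustrative assumptions, not the paper's hyperparameters.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1-D convolution: x is (length, in_ch), kernels is (k, in_ch, out_ch)."""
    k = kernels.shape[0]
    out_len = (x.shape[0] - k) // stride + 1
    out = np.empty((out_len, kernels.shape[2]))
    for i in range(out_len):
        window = x[i * stride : i * stride + k]  # (k, in_ch) slice of the signal
        out[i] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling along the time axis."""
    trimmed = x[: (x.shape[0] // size) * size]
    return trimmed.reshape(-1, size, x.shape[1]).max(axis=1)

rng = np.random.default_rng(0)
signal = rng.normal(size=(256, 1))     # one vibration window, 1 channel (illustrative)
kernels = rng.normal(size=(9, 1, 16))  # kernel size 9, 16 filters (illustrative)
features = max_pool(relu(conv1d(signal, kernels)))
# (256 - 9 + 1) = 248 valid conv outputs, halved to 124 by pooling -> (124, 16)
```

Stacking several such blocks, as the table describes, repeatedly halves the temporal resolution while widening the channel dimension before the flatten and SoftMax layers.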
Classifier accuracy, F1 score, precision, and recall for test data under different conditions, with the training condition at 1797 RPM.
| Condition (RPM) | ConvLSTM Accuracy | ConvLSTM F1 | ConvLSTM Precision | ConvLSTM Recall | CNN Accuracy | CNN F1 | CNN Precision | CNN Recall | ConvAE Accuracy | ConvAE F1 | ConvAE Precision | ConvAE Recall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1797 | 98.89% | 98.89% | 98.9% | 98.89% | 99.34% | 99.34% | 99.35% | 99.34% | 99.38% | 99.37% | 99.38% | 99.37% |
| 1772 | 98.78% | 98.78% | 98.81% | 98.78% | 98.85% | 98.85% | 98.89% | 98.85% | 99.27% | 99.70% | 99.28% | 99.27% |
| 1750 | 99.24% | 99.24% | 99.24% | 99.24% | 98.47% | 98.47% | 98.6% | 98.47% | 98.65% | 98.65% | 98.66% | 98.65% |
| 1730 | 98.72% | 98.71% | 98.74% | 98.72% | 98.61% | 98.61% | 98.63% | 98.61% | 97.57% | 97.57% | 97.65% | 97.57% |
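The four metrics reported per classifier can be reproduced from predicted and true labels. The sketch below uses macro averaging over classes, a common choice for multi-class fault diagnosis; the extract does not state the paper's exact averaging scheme, so that is an assumption.

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision, recall, and F1 score."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = (y_true == y_pred).mean()
    precisions, recalls, f1s = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0   # precision for class c
        r = tp / (tp + fn) if tp + fn else 0.0   # recall for class c
        precisions.append(p)
        recalls.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return accuracy, np.mean(precisions), np.mean(recalls), np.mean(f1s)

# Tiny worked example with 3 classes and one misclassified sample.
acc, prec, rec, f1 = macro_metrics([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2], 3)
```

Note that with balanced classes, macro recall equals accuracy, which matches the identical accuracy and recall columns in the table above.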
Comparison between different architectures of the N2FGAN tested for 1772 RPM.
| Time (s) | Encoder Layers | Decoder Layers | Input Length | ConvLSTM Accuracy | ConvLSTM F1 | ConvLSTM Precision | ConvLSTM Recall | CNN Accuracy | CNN F1 | CNN Precision | CNN Recall | ConvAE Accuracy | ConvAE F1 | ConvAE Precision | ConvAE Recall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 535.71 | 3(Input length-256-64) | 2(64-256) | 256 | 90.94% | 90.37% | 91.81% | 90.94% | 92.57% | 92.45% | 92.92% | 92.6% | 92.50% | 92.48% | 92.79% | 92.50% |
| 682.18 | 3(Input length-256-64) | 2(64-256) | 512 | 98.54% | 98.53% | 98.60% | 98.40% | 97.67% | 97.66% | 97.75% | 97.67% | 91.87% | 91.57% | 93.79% | 91.87% |
| 1282.18 | 3(Input length-256-64) | 2(64-256) | 1024 | 99.2% | 99.20% | 99.23% | 99.20% | 99.72% | 99.72% | 99.73% | 99.72% | 99.34% | 99.34% | 99.36% | 99.34% |
| 674.56 | 4(Input length-256-128-64) | 3(64-128-256) | 256 | 94.44% | 94.35% | 94.81% | 94.44% | 92.20% | 92.01% | 93.00% | 92.19% | 88.89% | 88.74% | 90.16% | 88.89% |
| 1346.31 | 4(Input length-256-128-64) | 3(64-128-256) | 512 | 98.78% | 98.78% | 98.81% | 98.78% | 98.85% | 98.85% | 98.89% | 98.85% | 99.27% | 99.70% | 99.28% | 99.27% |
| 1381.87 | 4(Input length-256-128-64) | 3(64-128-256) | 1024 | 98.10% | 98.05% | 98.21% | 98.06% | 99.83% | 99.83% | 99.83% | 99.83% | 86.60% | 84.12% | 92.16% | 86.60% |
| 775.10 | 5(Input length-512-256-128-64) | 4(64-128-256-512) | 256 | 81.11% | 74.74% | 71.00% | 81.11% | 81.11% | 74.96% | 71.36% | 81.11% | 78.37% | 72.21% | 69.11% | 78.37% |
| 1102.08 | 5(Input length-512-256-128-64) | 4(64-128-256-512) | 512 | 99.24% | 99.23% | 99.25% | 99.24% | 98.26% | 98.26% | 98.35% | 98.26% | 96.15% | 96.11% | 96.74% | 96.15% |
| 1812.25 | 5(Input length-512-256-128-64) | 4(64-128-256-512) | 1024 | 99.72% | 99.72% | 99.72% | 99.72% | 98.04% | 98.38% | 98.48% | 98.4% | 88.02% | 86.10% | 92.20% | 88.02% |
Training and test set configuration for comparing N2FGAN, CGAN, WGAN, and classical augmentation.
| Class | Training RPM | Training #Real Samples | Training #Synthetic Samples | Test RPM | Test #Real Samples |
|---|---|---|---|---|---|
| health | 1797 and 1772 | 3000 | 0 | 1772 | 150 |
| inner | 1797 | 150 | 100 | 1772 | 150 |
| ball | 1797 | 150 | 0 | 1772 | 150 |
| outer1 | 1797 | 150 | 0 | 1772 | 150 |
| outer2 | 1797 | 150 | 0 | 1772 | 150 |
| outer3 | 1797 | 150 | 0 | 1772 | 150 |
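The configuration in the table can be encoded directly, which makes the experimental design explicit: only the inner class receives synthetic samples, and every fault class is scarce relative to the healthy class. The counts below are taken verbatim from the table; the dictionary structure itself is just an illustrative encoding.

```python
# Training-set plan per class: real sample counts, synthetic augmentation,
# and the RPM conditions the real samples come from (values from the table).
training_plan = {
    "health": {"rpm": [1797, 1772], "real": 3000, "synthetic": 0},
    "inner":  {"rpm": [1797],       "real": 150,  "synthetic": 100},
    "ball":   {"rpm": [1797],       "real": 150,  "synthetic": 0},
    "outer1": {"rpm": [1797],       "real": 150,  "synthetic": 0},
    "outer2": {"rpm": [1797],       "real": 150,  "synthetic": 0},
    "outer3": {"rpm": [1797],       "real": 150,  "synthetic": 0},
}

# Total training-set size: 3000 healthy + 5 * 150 real fault + 100 synthetic inner.
total_train = sum(v["real"] + v["synthetic"] for v in training_plan.values())
```

Because the test set is drawn entirely from 1772 RPM while the fault training data come from 1797 RPM, the comparison isolates how well each augmentation framework transfers fault signatures across operating speeds.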
Figure 10. Comparison of the effect of each augmentation framework on classifier performance when the inner class is augmented; "t" stands for true-labeled classes.