| Literature DB >> 35746252 |
Weiyun Mao1, Bengang Wei2, Xiangyi Xu1, Lu Chen1, Tianyi Wu1, Zhengrui Peng1, Chen Ren1.
Abstract
The fault diagnosis of power transformers is a challenging problem. The massive, multisource fault data are heterogeneous, the fault type is sometimes undetermined, and an individual device has typically encountered only a few kinds of faults in the past. We propose a fault diagnosis method based on deep neural networks and a semi-supervised transfer learning framework called adaptive reinforcement (AR) to address these limitations. The key innovation of this framework is its enhancement of the consistency regularization algorithm. The experiments were conducted on real-world 110 kV power transformers' three-phase fault grounding currents of the iron cores, collected from various devices with four types of faults: Phases A, B, C and ABC to ground. We trained the model on the source domain and then transferred it to the target domain, which contains unbalanced and undefined fault datasets. The results show that our proposed model reaches over 95% accuracy in classifying the fault type and outperforms other popular networks. Our AR framework fits the target devices' fault data in dozens fewer epochs than other novel semi-supervised techniques. Combining the deep neural network and the AR framework helps diagnose power transformers that lack diagnosis knowledge, with much less training time and reliable accuracy.
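The abstract names consistency regularization as the core idea that AR enhances. As a minimal, generic sketch of that idea (not the paper's enhanced version, whose details are not given here): the model is penalized when its class distribution for an unlabelled sample disagrees with its prediction for a perturbed copy of the same sample. The softmax/mean-squared-error form below is one common choice, assumed for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_clean, logits_perturbed):
    """Mean squared error between the class distributions predicted for a
    clean sample and for its perturbed (augmented) copy: the basic form of
    the consistency-regularization penalty on unlabelled data."""
    p = softmax(logits_clean)
    q = softmax(logits_perturbed)
    return float(np.mean((p - q) ** 2))

# Identical predictions for both views incur no penalty.
logits = np.array([[2.0, 0.5, 0.1, -1.0]])
assert consistency_loss(logits, logits) == 0.0

# A perturbed view that flips the predicted class is penalized.
shifted = np.array([[0.1, 2.0, 0.5, -1.0]])
print(consistency_loss(logits, shifted))  # a positive penalty
```

In a training loop this term would be added, with some weight, to the supervised loss on the labelled source-domain samples.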
Keywords: deep neural network; fault type diagnosis of power transformers; semi-supervised transfer learning; three-phase grounding current of the iron core
Year: 2022 PMID: 35746252 PMCID: PMC9231397 DOI: 10.3390/s22124470
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1Methodology of the whole framework.
Figure 2One sample of the three-phase grounding current of iron core data with the fault of Phase A to ground.
Figure 3Distribution of the fault types in the source domain.
Fault distribution of the target domain.
| Power Transformer | Type 1 | Type 2 | Type 3 | Type 4 | Type 5 |
|---|---|---|---|---|---|
| No. 1 | 164 | None | 1107 | 57 | 512 |
| No. 2 | None | 176 | 113 | 1319 | 413 |
| No. 3 | None | 872 | None | 491 | 634 |
Figure 4Model Architecture of FCNN–Attention–BiLSTM.
Figure 5The framework of adaptive reinforcement (AR).
Figure 6Training history of the validation set on the source domain.
Evaluation results of different deep learning models’ performances on the source domain.
| Evaluation/Methods | FCNN–Att–BiLSTM | FCNN–Att | FCNN | FCNN–BiLSTM–Att |
|---|---|---|---|---|
| Accuracy | 95.37% | 94.18% | 93.33% | 93.63% |
| Recall | 95.38% | 94.19% | 93.33% | 93.63% |
| Precision | 95.37% | 94.18% | 93.33% | 93.63% |
| F1 Score | 95.37% | 94.18% | 93.33% | 93.63% |
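The four rows of the table above are nearly identical, which is typical when recall, precision and F1 are macro-averaged over classes on a roughly balanced test set. As an illustration, all four metrics can be read off a confusion matrix; the 4-class matrix below is hypothetical, not taken from the paper.

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy and macro-averaged recall, precision and F1 from a square
    confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                     # correctly classified per class
    recall = tp / cm.sum(axis=1)         # per-class recall
    precision = tp / cm.sum(axis=0)      # per-class precision
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, recall.mean(), precision.mean(), f1.mean()

# Hypothetical confusion matrix for the four fault types (100 samples each).
cm = [[95,  2,  2,  1],
      [ 3, 94,  2,  1],
      [ 1,  2, 96,  1],
      [ 2,  1,  1, 96]]
acc, rec, prec, f1 = macro_metrics(cm)
print(f"Accuracy={acc:.2%} Recall={rec:.2%} Precision={prec:.2%} F1={f1:.2%}")
```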
Epochs required to exceed the accuracy threshold three times for each transfer learning method.
| Methods/Transformer | No. 1 Power Transformer | No. 2 Power Transformer | No. 3 Power Transformer |
|---|---|---|---|
| Fine-tuning | 24 | 25 | 29 |
| Pseudolabel | 26 | 27 | 32 |
| Freeze | 35 | 37 | 35 |
| ACR | 13 | 15 | 15 |
| EM | 13 | 11 | 13 |
| LR | 14 | 15 | 14 |
| AR without experience replay | 14 | 13 | 14 |
| AR | 9 | 9 | 10 |
Sample distribution of each power transformer.
| Methods/Labels | No. 1: Type 1 | No. 1: Type 2 | No. 1: Type 3 | No. 1: Type 4 | No. 2: Type 1 | No. 2: Type 2 | No. 2: Type 3 | No. 2: Type 4 | No. 3: Type 1 | No. 3: Type 2 | No. 3: Type 3 | No. 3: Type 4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pseudolabel | 287 | 113 | 1107 | 333 | 189 | 259 | 175 | 1398 | 347 | 913 | 122 | 615 |
| ACR | 357 | 97 | 1188 | 198 | 341 | 212 | 143 | 1325 | 280 | 954 | 165 | 598 |
| EM | 291 | 143 | 1217 | 189 | 303 | 271 | 183 | 1264 | 263 | 931 | 186 | 617 |
| LR | 350 | 103 | 1174 | 213 | 339 | 225 | 140 | 1317 | 291 | 944 | 169 | 593 |
| AR | 1347 | 1347 | 1347 | 1347 | 1366 | 1366 | 1366 | 1366 | 993 | 993 | 993 | 993 |
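In the table above, AR ends with an identical sample count for every fault type (e.g. 1347 per type on the No. 1 transformer), while the other methods remain unbalanced. One plausible way to reach such equal counts, assumed here purely for illustration rather than taken from the paper, is to oversample the minority classes up to the size of the largest class:

```python
import random
from collections import Counter

def balance_by_oversampling(labels, seed=0):
    """Return sample indices where minority classes are duplicated at random
    until every class has as many samples as the largest class.
    (Hypothetical balancing step, not the paper's AR procedure.)"""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    target = max(len(idxs) for idxs in by_class.values())
    balanced = []
    for idxs in by_class.values():
        extra = [rng.choice(idxs) for _ in range(target - len(idxs))]
        balanced.extend(idxs + extra)
    return balanced

# Raw label counts of the No. 1 transformer from the target-domain table.
labels = ["T1"] * 164 + ["T3"] * 1107 + ["T4"] * 57 + ["T5"] * 512
counts = Counter(labels[i] for i in balance_by_oversampling(labels))
print(counts)  # every type now matches the majority count of 1107
```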
Evaluation results of our proposed semi-supervised transfer learning models after training on the raw target datasets without unlabelled samples.
| Evaluation/Methods | No. 1 Power Transformer | No. 2 Power Transformer | No. 3 Power Transformer |
|---|---|---|---|
| Accuracy | 93.13% | 93.27% | 93.07% |
| Recall | 93.13% | 93.27% | 93.07% |
| Precision | 93.14% | 93.27% | 93.07% |
| F1 Score | 93.13% | 93.27% | 93.07% |