Lukun Wang, Xiaoying Zhao, Jiangnan Pei, Gongyou Tang.
Abstract
This paper proposes a novel continuous sparse autoencoder (CSAE) for unsupervised feature learning. The CSAE adds a Gaussian stochastic unit to the activation function so that features of nonlinear data can be extracted. In this paper, the CSAE is applied to the problem of transformer fault recognition. First, based on the dissolved gas analysis method, the IEC three ratios are calculated from the concentrations of dissolved gases. The ratio data is then normalized to reduce data singularity and improve training speed. Second, a deep belief network is built from two layers of CSAE and one layer of back propagation (BP) network. Third, the CSAE is trained without supervision to extract features, after which the BP network is trained with supervision to identify the transformer fault. Finally, experimental data from the IEC TC 10 dataset is used to illustrate the effectiveness of the presented approach. Comparative experiments clearly show that the CSAE can extract features from the original data and achieves a superior correct differentiation rate in transformer fault diagnosis.
Keywords: Continuous sparse autoencoder; Deep belief network; Deep learning; Dissolved gas analysis; Transformer fault
Year: 2016 PMID: 27119052 PMCID: PMC4830783 DOI: 10.1186/s40064-016-2107-7
Source DB: PubMed Journal: Springerplus ISSN: 2193-1801
Fault classification
| Symbol | Transformer fault |
|---|---|
| PD | Partial discharges |
| LED | Low energy discharge |
| HED | High energy discharge |
| TF1 | Thermal faults <700 °C |
| TF2 | Thermal faults >700 °C |
Gas importance by faults
| Cause of gas generation | H2 | CH4 | C2H6 | C2H4 | C2H2 |
|---|---|---|---|---|---|
| Electrical fault | | | | | |
| PD | ● | ○ | | | |
| LED | ● | | | | ● |
| HED | ● | | | ○ | ● |
| Thermal fault | | | | | |
| TF1 | ○ | ● | ● | ● | |
| TF2 | ○ | ○ | | ● | ○ |
●: high importance, ○: medium importance
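The pipeline described in the abstract starts from dissolved-gas concentrations, forms the IEC three ratios, and normalizes them before training. A minimal sketch, assuming min-max normalization to [0, 1] (the paper states only that the ratio data is normalized) and hypothetical gas concentrations in ppm:

```python
# Feature preparation for DGA-based diagnosis: compute the IEC three
# ratios, then min-max normalize each ratio column across the dataset.
# The normalization scheme and the sample values are assumptions.

def iec_three_ratios(h2, ch4, c2h6, c2h4, c2h2):
    """Return the IEC three ratios (CH4/H2, C2H2/C2H4, C2H4/C2H6)."""
    return (ch4 / h2, c2h2 / c2h4, c2h4 / c2h6)

def min_max_normalize(samples):
    """Scale each ratio column to [0, 1] over all samples."""
    cols = list(zip(*samples))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        tuple((v - l) / (h - l) if h > l else 0.0
              for v, l, h in zip(row, lo, hi))
        for row in samples
    ]

# Hypothetical gas concentrations (ppm) for two samples:
ratios = [iec_three_ratios(100, 6, 20, 27, 0.1),
          iec_three_ratios(50, 50, 10, 25, 0.2)]
normalized = min_max_normalize(ratios)
```

Normalizing per column keeps the three ratios on a comparable scale, which is what "reduce data singularity and improve training speed" refers to in the abstract.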
Fig. 1 DBN model
Fig. 2 Model of AE
Fig. 3 Reconstruction of the swiss-roll manifold: a raw swiss-roll manifold, b reconstruction by autoencoder, c reconstruction by CSAE
Fig. 4 Network structure
Fig. 5 Flowchart of the proposed method
Fig. 6 CSAE and BP error curves
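The core CSAE idea, a sigmoid activation fed with an additive Gaussian stochastic unit plus a sparsity constraint, can be sketched as below. The weight shapes, noise scale, sparsity target, and the KL-divergence form of the penalty are illustrative assumptions (KL sparsity is the standard sparse-autoencoder choice), not values taken from the paper:

```python
# Sketch of a CSAE hidden layer: sigmoid of (Wx + b + Gaussian noise),
# with a KL-divergence sparsity penalty on the activations.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def csae_encode(x, W, b, noise_std=0.1, rng=random):
    """Hidden activation with the Gaussian stochastic unit added
    inside the activation function (noise_std is an assumption)."""
    return [
        sigmoid(sum(wij * xj for wij, xj in zip(row, x)) + bi
                + rng.gauss(0.0, noise_std))
        for row, bi in zip(W, b)
    ]

def kl_sparsity_penalty(activations, rho=0.05):
    """KL divergence between target sparsity rho and each unit's
    activation; zero when the activation equals rho."""
    penalty = 0.0
    for a in activations:
        a = min(max(a, 1e-6), 1 - 1e-6)  # numerical safety
        penalty += (rho * math.log(rho / a)
                    + (1 - rho) * math.log((1 - rho) / (1 - a)))
    return penalty
```

In the paper's architecture, two such CSAE layers are stacked and trained without supervision; a BP layer on top is then trained with supervision to map the learned features to the five fault classes.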
Classification accuracy of K-NN
| K | 10 | 15 | 20 | 60 |
|---|---|---|---|---|
| Accuracy (%) | 88.9 | 90.0 | 83.9 | 77.8 |
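The K-NN baseline above amounts to a majority vote over the k nearest ratio vectors. A minimal sketch, assuming Euclidean distance (the paper does not state its metric) and hypothetical feature/label pairs:

```python
# K-nearest-neighbors classification of three-ratio feature vectors.
from collections import Counter
import math

def knn_predict(train, query, k):
    """train: list of (features, label) pairs; returns the majority
    label among the k training points nearest to query."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

The table shows why k matters: accuracy peaks at k = 15 and degrades at k = 60, where distant samples from other fault classes dilute the vote.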
Classification accuracy of SVM
| Kernel function | SVM_RBF | SVM_SIG | SVM_POLY |
|---|---|---|---|
| Accuracy (%) | 79.9 | 59.5 | 68.8 |
Classification accuracy of BP and CSAE
| Fault class | CSAE (%) | BP (%) |
|---|---|---|
| TF1 | 100 | 86.6 |
| TF2 | 93.7 | 81.2 |
| PD | 83.3 | 83.3 |
| LED | 95.6 | 82.6 |
| HED | 95.5 | 86.6 |
Results of Wilcoxon rank sum test
| Statistic | CSAE | BP |
|---|---|---|
| Standard deviation (%) | 6.22 | 2.44 |
| Average accuracy (%) | 93.6 | 84.1 |
| p-value | 0.0195 | |
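The reported p = 0.0195 comes from a Wilcoxon rank-sum test on accuracies over repeated runs. A rank-sum test with the normal approximation can be sketched as follows (average ranks for ties, no tie or continuity correction; the accuracy samples one would feed in are the per-run results, which are not listed in this record):

```python
# Two-sided Wilcoxon rank-sum test via the normal approximation.
import math

def wilcoxon_rank_sum(x, y):
    """Return (rank-sum statistic of x, two-sided p-value)."""
    pooled = sorted(x + y)
    # Average rank for each distinct value (handles ties).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0  # ranks are 1-based
        i = j
    w = sum(ranks[v] for v in x)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return w, p
```

With p = 0.0195 < 0.05, the accuracy difference between CSAE and BP is statistically significant at the 5 % level, which is the point of this table.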
A sample of the training results
| No | CH4/H2 | C2H2/C2H4 | C2H4/C2H6 | Actual fault | Forecast fault |
|---|---|---|---|---|---|
| 1 | 0.06 | 0 | 1.35 | LED | LED |
| 2 | 1 | 0.007 | 2.52 | TF1 | TF1 |
| 3 | 0.96 | 0.025 | 8.12 | TF2 | TF2 |
| 4 | 2.3 | 0 | 3.83 | TF2 | TF2 |
| 5 | 7.19 | 0.005 | 8.63 | TF2 | TF2 |
| 6 | 0.235 | 1.1 | 7.67 | PD | PD |
| 7 | 1.3 | 0 | 1.22 | TF1 | TF1 |
| 8 | 1.23 | 0.05 | 9.22 | TF2 | TF2 |
| 9 | 0.17 | 1 | 9.615 | PD | PD |