Yunzhao Jia, Minqiang Xu, Rixin Wang.
Abstract
The hydraulic pump is the driving device of a hydraulic system and often works under harsh operating conditions, so its fault diagnosis is essential to the smooth running of the hydraulic system. However, it is difficult to collect sufficient status information during practical operation. To achieve fault diagnosis with poor information, a novel fault diagnosis method based on the Symbolic Perceptually Important Point (SPIP) and the Hidden Markov Model (HMM) is proposed. Perceptually Important Point technology is introduced into rotating-machine fault diagnosis for the first time; it compresses the original time series into a PIP series that captures the overall movement shape of the original time series. The PIP series is then transformed into a symbolic series that serves as the feature series for the HMM, and a Genetic Algorithm is used to optimize the symbolic-space partition scheme. The Hidden Markov Model is then employed for fault classification. An experiment involving four operating conditions is used to validate the proposed method. The results show that the fault classification accuracy of the proposed method reaches 99.625% when each testing sample contains only 250 points and the signal duration is 0.025 s. The proposed method can thus achieve good performance under poor-information conditions.
Keywords: Hidden Markov Model; Perceptually Important Point; fault diagnosis; hydraulic pump
Year: 2018 PMID: 30562920 PMCID: PMC6308457 DOI: 10.3390/s18124460
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1The vertical distance from the point to the line connecting adjacent important points.
Perceptually Important Point algorithm.
| Input: original data series; target length of PIP series |
| Output: PIP series P |
| Begin; |
| Set the first and last points of the original series as the initial PIPs; |
| Calculate the vertical distance from each remaining point to the line connecting its adjacent PIPs; |
| Select the point with the maximum vertical distance as a new PIP; |
| Repeat until P is filled to the target length; |
| Arrange the PIPs in time order; |
| Return PIP series P; |
| End; |
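The selection loop above can be sketched in plain Python. This is a minimal reading of the box, not the authors' code: endpoints seed the PIP set, and each iteration adds the point farthest (by vertical distance) from the line joining its two adjacent PIPs.

```python
def vertical_distance(x, y, x1, y1, x2, y2):
    """Vertical distance from point (x, y) to the line through (x1, y1)-(x2, y2)."""
    slope = (y2 - y1) / (x2 - x1)
    return abs(y - (y1 + slope * (x - x1)))

def pip_series(series, target_len):
    """Select Perceptually Important Points (assumes 2 <= target_len <= len(series)).

    Starts with the two endpoints; each step adds the interior point with the
    largest vertical distance to the line joining its adjacent PIPs.
    """
    n = len(series)
    pips = [0, n - 1]                      # endpoints are always PIPs
    while len(pips) < target_len:
        best_idx, best_d = None, -1.0
        for left, right in zip(pips, pips[1:]):
            for i in range(left + 1, right):
                d = vertical_distance(i, series[i],
                                      left, series[left],
                                      right, series[right])
                if d > best_d:
                    best_idx, best_d = i, d
        pips.append(best_idx)
        pips.sort()                        # keep PIPs in time order
    return pips
```

Because every iteration scans all non-PIP points, the sketch runs in O(n · target_len); this matches the algorithm's shape, not necessarily the paper's implementation.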
Figure 2(a) Waveform of the simulated signal; (b) Waveform of the PIP series.
Figure 3(a) Schematic diagram of the reconstruction error calculation; (b) Enlarged view of the marked region in (a).
Figure 4The relation between reconstruction error and the length of PIP series.
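Figures 3 and 4 relate reconstruction error to the PIP series length. A minimal sketch of that calculation, assuming (the paper's exact metric may differ) the error is the RMS deviation between the original series and its piecewise-linear reconstruction through the PIPs:

```python
def reconstruct(series_len, pip_idx, pip_val):
    """Piecewise-linear reconstruction of a series from its PIPs."""
    recon = [0.0] * series_len
    for (i1, v1), (i2, v2) in zip(zip(pip_idx, pip_val),
                                  zip(pip_idx[1:], pip_val[1:])):
        for i in range(i1, i2 + 1):
            # linear interpolation between consecutive PIPs
            recon[i] = v1 + (v2 - v1) * (i - i1) / (i2 - i1)
    return recon

def reconstruction_error(series, pip_idx):
    """RMS deviation between the original series and its PIP reconstruction."""
    pip_val = [series[i] for i in pip_idx]
    recon = reconstruct(len(series), pip_idx, pip_val)
    return (sum((a - b) ** 2 for a, b in zip(series, recon)) / len(series)) ** 0.5
```

A longer PIP series interpolates more of the original points exactly, so this error decreases monotonically with PIP length, consistent with the trend in Figure 4.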
Figure 5Symbolization results of the simulated signal based on the 3σ criterion.
Figure 6Probability density distribution of important points.
Symbolization algorithm.
| Input: original data series; PIP series; symbol set |
| Output: symbolic series |
| Begin; |
| Calculate the mean value of the original series; |
| Determine the number of regions from the size of the symbol set; |
| Optimize the partition scheme with the Genetic Algorithm; |
| Compute the fractile series (partition nodes); |
| Partition the phase space with the partition nodes; |
| For each point in the PIP series, encode it according to its location in the symbolic space; |
| Repeat until all the points in the PIP series are encoded; |
| Return symbolic series; |
| End; |
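The encoding step above can be sketched as follows. Note the simplification: the paper optimizes the partition nodes with a Genetic Algorithm, while this sketch uses equal-frequency quantiles as a simple stand-in partition scheme.

```python
def symbolize(pip_values, symbols):
    """Map PIP values to symbols by partitioning the value range into regions.

    Stand-in partition: interior nodes at equal-frequency quantiles
    (the paper instead optimizes the nodes with a Genetic Algorithm).
    """
    k = len(symbols)
    ordered = sorted(pip_values)
    # interior partition nodes at the j/k-th sample quantiles
    nodes = [ordered[int(j * len(ordered) / k)] for j in range(1, k)]
    out = []
    for v in pip_values:
        region = sum(v >= node for node in nodes)  # index of region containing v
        out.append(symbols[region])
    return out
```

With k symbols there are k − 1 interior nodes; each PIP value is encoded by counting how many nodes lie at or below it, which is exactly its region index in the partitioned phase space.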
Figure 7Topological structure of the Hidden Markov Model.
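For classification, each fault mode gets its own HMM, and a symbolic series is scored by each model's log-likelihood. A minimal sketch of that scoring for a discrete-emission HMM, using the standard forward algorithm with per-step scaling to avoid underflow (the parameter layout here is an illustrative assumption, not the paper's implementation):

```python
import math

def hmm_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    pi[s]   : initial probability of state s
    A[p][s] : transition probability from state p to state s
    B[s][o] : emission probability of symbol o in state s
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    log_lik = math.log(scale)
    alpha = [a / scale for a in alpha]          # normalize to prevent underflow
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(n)) * B[s][o]
                 for s in range(n)]
        scale = sum(alpha)
        log_lik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return log_lik
```

The scaling factors accumulate in log space, so long symbolic series can be scored without the forward probabilities vanishing numerically.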
Figure 8Hydraulic pump fault diagnosis framework.
Figure 9(a) Experiment rig; (b) Sensor location; (c) Fault screw; and, (d) Fault rolling bearing.
Figure 10Time domain waveform; (a) Condition 1; (b) Condition 2; (c) Condition 3; and, (d) Condition 4.
Figure 11Frequency spectrum of each operating condition; (a) Condition 1; (b) Condition 2; (c) Condition 3; and, (d) Condition 4.
Fault modes and sample partition.
| Operating Condition | Fault Mode | Training Samples | Testing Samples | Signal Duration |
|---|---|---|---|---|
| Operating condition 1 | Normal | 200 | 400 | 0.025 s |
| Operating condition 2 | Screw wearing 0.2 mm | 200 | 400 | 0.025 s |
| Operating condition 3 | Screw wearing 0.4 mm | 200 | 400 | 0.025 s |
| Operating condition 4 | Rolling bearing crack in inner race | 200 | 400 | 0.025 s |
Figure 12Waveform of the PIP series: (a) PIP series with 25 points; (b) PIP series with 50 points.
Figure 13Symbol distribution of each operating condition: (a) Condition 1; (b) Condition 2; (c) Condition 3; and, (d) Condition 4.
Figure 14Classification results of the method: (a) Model 1; (b) Model 2; (c) Model 3; and, (d) Model 4.
Average log-likelihood output by each model.
| | Model 1 | Model 2 | Model 3 | Model 4 |
|---|---|---|---|---|
| Condition 1 | −50.91 | −60.35 | −68.21 | −73.34 |
| Condition 2 | −68.02 | −57.29 | −78.47 | −82.83 |
| Condition 3 | −60.34 | −61.97 | −44.39 | −57.21 |
| Condition 4 | −64.08 | −73.32 | −65.86 | −45.98 |
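The decision rule this table illustrates is a simple argmax: a sample is assigned to whichever fault model scores it with the highest log-likelihood. Using the averaged values from the table:

```python
# Average log-likelihoods from the table (rows: conditions 1-4, cols: models 1-4)
loglik = [
    [-50.91, -60.35, -68.21, -73.34],
    [-68.02, -57.29, -78.47, -82.83],
    [-60.34, -61.97, -44.39, -57.21],
    [-64.08, -73.32, -65.86, -45.98],
]

def classify(row):
    """Assign the sample to the model with the highest log-likelihood."""
    return max(range(len(row)), key=lambda m: row[m])

# On average, each condition's samples are best explained by the matching model,
# i.e. the maxima lie on the diagonal
assert [classify(row) for row in loglik] == [0, 1, 2, 3]
```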
Confusion matrix of the method.
| True Class \ Estimated Class | Condition 1 | Condition 2 | Condition 3 | Condition 4 |
|---|---|---|---|---|
| Condition 1 | 398 | 2 | 0 | 0 |
| Condition 2 | 2 | 398 | 0 | 0 |
| Condition 3 | 0 | 0 | 399 | 1 |
| Condition 4 | 0 | 0 | 1 | 399 |
Model performance with different numbers of symbols.
| Number of Symbols | Training Time (25 PIPs) | Testing Time (25 PIPs) | Accuracy (25 PIPs) | Training Time (50 PIPs) | Testing Time (50 PIPs) | Accuracy (50 PIPs) |
|---|---|---|---|---|---|---|
| 4 | 26.93 | 1.5529 | 97.875% | 50.364 | 2.6857 | 98% |
| 5 | 30.8365 | 2.2310 | 99.5% | 55.6846 | 2.4327 | 99.875% |
| 6 | 25.2357 | 1.2569 | 96.375% | 60.5648 | 2.9680 | 98.625% |
| 7 | 39.1613 | 1.4089 | 99.625% | 55.6987 | 2.2786 | 99.25% |
| 8 | 41.8503 | 1.3914 | 99.625% | 68.9321 | 2.5137 | 99.625% |
| 9 | 47.6847 | 1.7234 | 98.625% | 82.3438 | 2.6525 | 99% |
Model performance with different lengths of PIP series (seven symbols).
| Length of PIP Series | Training Time | Testing Time | Reconstruction Error (Cond. 1) | Reconstruction Error (Cond. 2) | Reconstruction Error (Cond. 3) | Reconstruction Error (Cond. 4) | Accuracy |
|---|---|---|---|---|---|---|---|
| 10 | 16.8218 | 1.3258 | 0.9935 | 0.8056 | 1.2394 | 1.5299 | 91.125% |
| 15 | 24.7076 | 1.5122 | 0.7119 | 0.6203 | 1.1112 | 1.2891 | 96.375% |
| 20 | 37.2211 | 1.6450 | 0.6040 | 0.5168 | 1.0264 | 1.1272 | 98.625% |
| 25 | 55.274 | 1.4089 | 0.5515 | 0.4491 | 0.9534 | 0.9699 | 99.625% |
| 30 | 61.1820 | 2.0051 | 0.5175 | 0.4061 | 0.9039 | 0.8502 | 99.5% |
| 35 | 75.2692 | 1.1486 | 0.4891 | 0.3736 | 0.8554 | 0.7608 | 99.375% |
| 40 | 73.7563 | 1.2493 | 0.4633 | 0.3459 | 0.8101 | 0.6502 | 99.5% |
| 45 | 83.1325 | 1.3547 | 0.4395 | 0.3223 | 0.7639 | 0.6116 | 99.5% |
| 50 | 104.8866 | 1.6880 | 0.4176 | 0.3008 | 0.7198 | 0.5501 | 99% |
Figure 15(a) The relation between classification accuracy and the length of PIP series; (b) The relation between reconstruction error and the length of PIP series.
Performance comparison with similar feature extraction methods.
| Classifier | Number of Testing Samples | Training Time | Testing Time | Signal Duration | Accuracy |
|---|---|---|---|---|---|
| SPIP + HMM (25 points) | 300 | 39.2659 | 1.40 | 0.025 s | 99.67% |
| SAX + HMM (25 points) | 300 | 36.2714 | 2.43 | 0.025 s | 97.33% |
| SAX + HMM (50 points) | 300 | 62.1353 | 4.07 | 0.025 s | 99% |
| ZC + HMM (25 intervals) | 300 | 157.19 | 0.79 | 0.025 s | 92.67% |
Performance of machine learning classifiers.
| Classifier | Number of Testing Samples | Training Time | Testing Time | Signal Duration | Accuracy |
|---|---|---|---|---|---|
| SPIP + HMM | 300 | 39.2659 | 1.40 | 0.025 s | 99.67% |
| SVM | 180 | 1.86 | 0.23 | 0.5 s | 100% |
| RBFNN | 180 | - | 0.2424 | 0.5 s | 97.78% |
| BPNN | 180 | 1.67 | 0.0922 | 0.5 s | 88.88% |