Yi Shi, Yuanye Wang, Lei Zhao, Zhun Fan.
Abstract
Phase-sensitive optical time domain reflectometry (Φ-OTDR) based distributed optical fiber sensing systems have been widely used in many fields, such as long-range pipeline pre-warning, perimeter security and structural health monitoring. However, the lack of event recognition ability has always been the bottleneck of Φ-OTDR in field applications. An event recognition method based on deep learning is proposed in this paper. The method directly uses the temporal-spatial data matrix from Φ-OTDR as the input to a convolutional neural network (CNN). Only simple bandpass filtering and a gray-scale transformation are needed as pre-processing, which enables real-time operation. In addition, an optimized network structure with small size, high training speed and high classification accuracy is built. Experimental results based on 5644 event samples show that this network achieves 96.67% classification accuracy in recognizing 5 kinds of events, and the retraining time is only 7 min for a new sensing setup.
Keywords: convolutional neural network; deep learning; event recognition; Φ-OTDR
Year: 2019 PMID: 31382706 PMCID: PMC6695721 DOI: 10.3390/s19153421
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
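The abstract describes a two-step pre-processing pipeline: a bandpass filter applied to each channel of the temporal-spatial data matrix, followed by a gray-scale transformation before the matrix is fed to the CNN. A minimal stdlib-only sketch of these two steps is shown below; the moving-average band-pass is an illustrative stand-in (the paper does not specify its filter implementation), and all function names and window sizes here are hypothetical.

```python
def moving_average(x, w):
    """Simple moving average with window w (edges use partial windows)."""
    out = []
    for i in range(len(x)):
        lo = max(0, i - w // 2)
        hi = min(len(x), i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def bandpass(x, short_w=3, long_w=15):
    """Crude band-pass: the difference of a short and a long moving
    average removes both the slow trend and the fastest fluctuations."""
    short = moving_average(x, short_w)
    long_ = moving_average(x, long_w)
    return [s - l for s, l in zip(short, long_)]

def to_gray(matrix):
    """Min-max scale a 2-D matrix to 8-bit gray levels (0-255)."""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[int(round(255 * (v - lo) / span)) for v in row]
            for row in matrix]

# Toy temporal-spatial matrix: each row is the time trace of one
# spatial position along the fiber.
matrix = [[(i * j) % 7 - 3.0 for j in range(64)] for i in range(1, 9)]
filtered = [bandpass(row) for row in matrix]  # filter along time axis
gray = to_gray(filtered)                      # gray image for the CNN
```

The resulting 8-bit image corresponds to the gray images of Figure 3, one per event sample.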
Figure 1. The distributed optical fiber sensing system.
The number of samples of each event type.
| Event Type | I | II | III | IV | V |
|---|---|---|---|---|---|
| Training Set | 307 | 1122 | 1101 | 1237 | 748 |
| Validation Set | 77 | 280 | 275 | 310 | 187 |
| Total Number | 384 | 1402 | 1376 | 1547 | 935 |
Figure 2. The data matrix after bandpass filtering.
Figure 3. The typical gray image of each event type.
The performance results of common CNNs.
| Model Name | Model Size (MB) | Training Speed (steps/s) | Classification Accuracy (%) | Top 2 (%) |
|---|---|---|---|---|
| LeNet | 39.3 | 90.9 | 60 | 86.5 |
| AlexNet | 554.7 | 19.6 | 94.25 | 99.08 |
| VGGNet | 1638.4 | 2.53 | 95.25 | 100 |
| GoogLeNet | 292.2 | 4.1 | 97.08 | 99.25 |
| ResNet | 282.4 | 7.35 | 91.9 | 97.75 |
Figure 4. The relationship between model size and classification accuracy.
Figure 5. The optimized network structure (red cubes denote convolution operations; blue cubes denote pooling operations).
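The network in Figure 5 is built from alternating convolution and pooling blocks. The stdlib-only sketch below illustrates these two building blocks and how they shrink the feature map; it is not the paper's exact architecture, and the kernel, input size and strides are hypothetical, chosen only for demonstration.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += image[i + a][j + b] * kernel[a][b]
            row.append(s)
        out.append(row)
    return out

def maxpool2d(image, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h = len(image) // size
    w = len(image[0]) // size
    return [[max(image[i * size + a][j * size + b]
                 for a in range(size) for b in range(size))
             for j in range(w)] for i in range(h)]

# Toy 28x28 gray image standing in for one pre-processed sample.
gray = [[float((i * 7 + j * 3) % 11) for j in range(28)] for i in range(28)]
edge = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]  # simple vertical-edge kernel
feat = maxpool2d(conv2d(gray, edge))         # 28x28 -> 26x26 -> 13x13
```

Stacking several such conv/pool pairs and ending with a small fully connected classifier is what keeps a network of this kind compact, which is consistent with the 20 MB model size reported below.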
Figure 6. The learning curve of training.
Figure 7. Loss curve (a) and classification accuracy curve (b) of training.
The classification accuracy of five events.
| Type of Accuracy | I | II | III | IV | V |
|---|---|---|---|---|---|
| Accuracy (%) | 98.02 | 98.67 | 100 | 92.1 | 95.5 |
| Top 2 accuracy (%) | 100 | 100 | 100 | 99 | 100 |
Figure 8. Confusion matrix of the five events' classification.
Figure 9. Accuracy curves of the optimized network (green) and Inception-v3 (red).
Performance comparison with Inception-v3.
| Network | Accuracy (%) | Top 2 Accuracy (%) | Training Speed (steps/s) | Model Size (MB) |
|---|---|---|---|---|
| The optimized network | 96.67 | 99.75 | 35.61 | 20 |
| Inception-v3 | 97.08 | 99.25 | 4.35 | 292.2 |