Hengyang Fang, Changhua Lu, Feng Hong, Weiwei Jiang, Tao Wang.
Abstract
Traditional convolutional neural networks cannot effectively extract signal features in complex application scenarios. To address this, a sleep apnea (SA) detection method based on a multi-scale residual network is proposed. First, motivated by the physiological mechanism of SA, the RR-interval signals and R-peak signals derived from the ECG are used as input. Then, a multi-scale residual network extracts features of the original signals so that sensitive characteristics are captured at several scales. Because the model uses a residual structure, the problem of model degradation is avoided. Finally, a fully connected layer performs the SA detection. To overcome the impact of class imbalance, a focal loss function replaces the traditional cross-entropy loss function, making the model pay more attention to hard samples during training. Experimental results on the Apnea-ECG dataset show that the accuracy, sensitivity, and specificity of the proposed multi-scale residual network are 86.0%, 84.1%, and 87.1%, respectively. These results indicate that the proposed method not only achieves higher recognition accuracy than comparable methods but also effectively mitigates the low sensitivity caused by class imbalance.
Keywords: ECG signals; focal loss; multi-scale; residual network; sleep apnea
Year: 2022 PMID: 35054512 PMCID: PMC8781811 DOI: 10.3390/life12010119
Source DB: PubMed Journal: Life (Basel) ISSN: 2075-1729
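The focal loss mentioned in the abstract down-weights easy examples by the modulating factor (1 − p_t)^γ, so hard samples dominate training. Below is a minimal binary sketch; the γ and α values and the example probabilities are illustrative defaults, not the paper's settings.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive (apnea) class.
    y: true label, 1 for apnea, 0 for normal.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confidently correct prediction contributes far less loss than a hard one,
# which is how the loss keeps the majority (easy) class from dominating.
easy = focal_loss(0.9, 1)  # well-classified apnea segment
hard = focal_loss(0.3, 1)  # misclassified apnea segment
print(easy, hard)
```

With γ = 0 and α = 0.5 this reduces to (half) the ordinary cross-entropy, which is the baseline the paper replaces.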
Figure 1. The process of the proposed method.
Figure 2Positioned R peak and extracted derived RR interval signal and R peak signal.
Figure 3. Residual block.
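The residual block of Figure 3 follows the standard identity-shortcut pattern, y = F(x) + x, which is what avoids the degradation problem mentioned in the abstract. A minimal 1-D NumPy sketch; the two-convolution F and the identity kernels are illustrative, not the paper's layer configuration.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1-D cross-correlation of signal x with kernel w."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def residual_block(x, w1, w2):
    """y = F(x) + x, where F is two convolutions with a ReLU in between."""
    h = np.maximum(conv1d(x, w1), 0.0)  # conv + ReLU
    return conv1d(h, w2) + x            # conv + identity shortcut

# With identity kernels, F(x) = x for non-negative x, so y = 2x.
x = np.ones(8)
y = residual_block(x, w1=np.array([0.0, 1.0, 0.0]), w2=np.array([0.0, 1.0, 0.0]))
print(y)
```

The shortcut means the block only has to learn the residual F(x) = y − x, which keeps gradients flowing through deep stacks.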
Figure 4. Multi-scale residual network. (a) Network structure; (b) multi-scale block.
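The multi-scale block of Figure 4b applies parallel convolutions with different kernel sizes, so the same signal is seen at several receptive fields before the branch outputs are combined. A minimal sketch, using moving-average filters as stand-ins for learned kernels; the kernel sizes are illustrative.

```python
import numpy as np

def conv1d_same(x, w):
    """'Same'-padded 1-D cross-correlation of signal x with kernel w."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

def multi_scale_block(x, kernel_sizes=(3, 5, 7)):
    """Run parallel branches with different receptive fields and stack
    the resulting feature maps along a channel axis."""
    branches = [conv1d_same(x, np.ones(k) / k) for k in kernel_sizes]
    return np.stack(branches)  # shape: (n_scales, len(x))

feats = multi_scale_block(np.arange(8, dtype=float))
print(feats.shape)  # one feature map per kernel size
```

Small kernels capture local beat-to-beat detail while larger ones capture slower trends, which is the sense in which the network extracts characteristics "from various angles."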
Multi-scale residual network structure parameters.

| Layer | Output Size | Network Architecture |
|---|---|---|
| conv1 | 100 × 1 | Convolutional layer: 7 × 1, 64, Stride: 3 |
| conv2_ | 50 × 1 | Pooling layer: 3 × 1, Stride: 2 |
| conv3_ | 25 × 1 | |
| conv4_ | 13 × 1 | |
| conv5_ | 7 × 1 | |
| | 1 × 1 | Dropout: 0.5 |
| Computing power | 0.144 × 10⁹ | |
The performance of the proposed method on the test set.

| True Label | Predicted N | Predicted AH | Total | Accuracy/% | Sensitivity/% | Specificity/% |
|---|---|---|---|---|---|---|
| N | 9158 | 1353 | 10,511 | 86.0 | 84.1 | 87.1 |
| AH | 1036 | 5462 | 6498 | | | |
| Total | 10,194 | 6815 | 17,009 | | | |
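The three metrics in the table follow directly from the per-segment confusion-matrix counts, with AH (apnea-hypopnea) treated as the positive class. A quick arithmetic check:

```python
# Confusion-matrix entries from the test-set table above.
tn, fp = 9158, 1353   # true N predicted as N / as AH
fn, tp = 1036, 5462   # true AH predicted as N / as AH

total = tn + fp + fn + tp
accuracy = (tp + tn) / total     # fraction of all segments classified correctly
sensitivity = tp / (tp + fn)     # recall on the apnea (AH) class
specificity = tn / (tn + fp)     # recall on the normal (N) class

print(round(100 * accuracy, 1),
      round(100 * sensitivity, 1),
      round(100 * specificity, 1))  # → 86.0 84.1 87.1
```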
Performance comparison before and after using multi-scale convolution topology.
| Method | Accuracy/% | Sensitivity/% | Specificity/% | AUC | F1-Score/% |
|---|---|---|---|---|---|
| ResNet | 84.6 | 82.2 | 86.1 | 0.918 | 80.3 |
| ResNet + Multiscale | 86.0 | 84.1 | 87.1 | 0.931 | 82.1 |
Figure 5. Performance comparison of the focal loss function, class weighting, and no class-imbalance technique. (a) The proposed method (focal loss); (b) no class-imbalance technique; (c) class-weight method.
Performance of ResNet + Multiscale and ResNet in per-recording classification.
| Method | Accuracy/% | Sensitivity/% | Specificity/% | AUC | Corr |
|---|---|---|---|---|---|
| ResNet | 91.2 | 100 | 75 | 0.985 | 0.945 |
| ResNet + Multiscale | 97.1 | 100 | 91.7 | 1 | 0.956 |
The performance of our proposed model on the UCD dataset.
| Method | Accuracy/% | Sensitivity/% | Specificity/% |
|---|---|---|---|
| ResNet | 67.1 | 35.5 | 72.2 |
| ResNet + Multiscale | 72.4 | 36.5 | 83.6 |
The performance comparison between the proposed method and similar research.
| Work | Method | Accuracy/% | Sensitivity/% | Specificity/% |
|---|---|---|---|---|
| Sharma and Sharma | LS-SVM | 83.4 | 79.5 | 88.4 |
| Pinho et al. | ANN/SVM | 82.1 | 88.4 | 72.3 |
| Viswabhargav et al. | SVM | 78.1 | 78.0 | 78.1 |
| Surrel et al. | LS-SVM | 82.2 | 73.3 | 87.6 |
| Li et al. | DNN + HMM | 84.7 | 88.9 | 82.1 |
| Feng et al. | TDCS | 85.1 | 86.2 | 84.4 |
| Martin-Gonzalez et al. | LDA + QDA + LR | 84.8 | 81.5 | 86.8 |
| Chang et al. | 1D CNN | 87.9 | 81.1 | 92.0 |
| Singh et al. | CNN + Decision Fusion | 86.2 | 90.0 | 83.8 |
| Our method | ResNet + Multiscale | 86.0 | 84.1 | 87.1 |