Hongpo Zhang1,2, Renke He2,3, Honghua Dai2,4, Mingliang Xu3, Zongmin Wang2.
Abstract
Atrial fibrillation is the most common arrhythmia and is associated with high morbidity and mortality from stroke, heart failure, myocardial infarction, and cerebral thrombosis. Effective and rapid detection of atrial fibrillation is therefore critical to reducing morbidity and mortality in patients, yet screening for it quickly and efficiently remains a challenging task. In this paper, we propose SS-SWT and SI-CNN, an atrial fibrillation detection framework for the time-frequency ECG signal. First, a specific-scale stationary wavelet transform (SS-SWT) decomposes a 5-s ECG signal into 8 scales. We select the coefficients at specific scales as valid time-frequency features and discard the remaining coefficients. The selected coefficients are fed to a scale-independent convolutional neural network (SI-CNN) as a two-dimensional (2D) matrix. In the SI-CNN, a convolution kernel tailored to the time-frequency characteristics of ECG signals is designed: during convolution, the independence between the coefficients at each scale is preserved, the time-domain and frequency-domain characteristics of the ECG signal are effectively extracted, and the atrial fibrillation signal is quickly and accurately identified. In this study, experiments are performed on the MIT-BIH AFDB data in 5-s segments. We achieve 99.03% sensitivity, 99.35% specificity, and 99.23% overall accuracy. The proposed SS-SWT and SI-CNN simplify the feature extraction step, effectively extract the features of the ECG, and reduce the feature redundancy that the wavelet transform may introduce. The results show that the method can effectively detect atrial fibrillation signals and has potential for clinical application.
Year: 2020 PMID: 32509259 PMCID: PMC7251457 DOI: 10.1155/2020/7526825
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
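The decomposition step described in the abstract can be sketched as follows. This is a minimal à trous (undecimated) Haar SWT in NumPy, not the authors' code; the sampling rate, the wavelet, and the particular scales kept (D4–D8 here) are illustrative assumptions, since the abstract only states that 8 scales are computed and a specific subset is retained.

```python
import numpy as np

def swt_haar(x, levels):
    """Minimal undecimated (a trous) Haar SWT.

    Returns the detail coefficient arrays D1..D{levels} plus the final
    approximation; every scale keeps the input length (no downsampling),
    so selected scales stack into a 2D scales-by-time matrix.
    """
    h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass (approximation)
    g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass (detail)
    approx = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        dilation = 2 ** j
        # "a trous": upsample the filters by inserting zeros between taps
        hj = np.zeros(dilation * (len(h) - 1) + 1); hj[::dilation] = h
        gj = np.zeros(dilation * (len(g) - 1) + 1); gj[::dilation] = g
        # circular convolution keeps every scale at the input length
        n = len(approx)
        pad = np.concatenate([approx, approx[:len(hj) - 1]])
        details.append(np.convolve(pad, gj, mode='valid')[:n])
        approx = np.convolve(pad, hj, mode='valid')[:n]
    return details, approx

# 5-s ECG-like segment at an assumed 256 Hz (synthetic stand-in)
rng = np.random.default_rng(0)
t = np.arange(5 * 256) / 256.0
sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)

details, approx = swt_haar(sig, levels=8)
# keep only the scales deemed informative (a hypothetical choice: D4-D8)
selected = np.stack([details[i] for i in (3, 4, 5, 6, 7)])
print(selected.shape)  # (5, 1280): 2D scales-by-time input for the CNN
```

A production pipeline would more likely use PyWavelets (`pywt.swt`) with the paper's actual mother wavelet; the hand-rolled version above only illustrates why all 8 scales share the signal's time axis.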
Figure 1Flowchart of the proposed framework.
Figure 2Flowchart of SS-SWT.
Figure 3Comparison of signals obtained by reconstructing information at different scales of coefficients. (a) Original ECG signal. (b) Reconstructed signal by coefficients of D1–D8. (c) Reconstructed signal by coefficients of D1–D8.
The detailed architecture of the proposed SI-CNN.
| CNN parameter | Value |
|---|---|
| 1st convolutional layer kernel size | 1 × 3 × 32 |
| 2nd convolutional layer kernel size | 1 × 3 × 32 |
| 1st max-pooling layer kernel size | 1 × 3 × 32 |
| 1st batch normalization layer | — |
| 1st dropout layer rate | 0.25 |
| 3rd convolutional layer kernel size | 1 × 3 × 32 |
| 4th convolutional layer kernel size | 1 × 3 × 32 |
| 2nd max-pooling layer kernel size | 1 × 3 × 32 |
| 2nd batch normalization layer | — |
| 2nd dropout layer rate | 0.25 |
| 5th convolutional layer kernel size | 1 × 3 × 64 |
| 6th convolutional layer kernel size | 1 × 3 × 64 |
| 3rd max-pooling layer kernel size | 1 × 3 × 64 |
| 3rd batch normalization layer | — |
| 3rd dropout layer rate | 0.25 |
| 7th convolutional layer kernel size | 1 × 3 × 64 |
| 8th convolutional layer kernel size | 1 × 3 × 64 |
| 4th max-pooling layer kernel size | 1 × 3 × 64 |
| 4th batch normalization layer | — |
| 4th dropout layer rate | 0.25 |
| 9th convolutional layer kernel size | 1 × 3 × 128 |
| 10th convolutional layer kernel size | 1 × 3 × 128 |
| 5th max-pooling layer kernel size | 1 × 3 × 128 |
| 5th batch normalization layer | — |
| 5th dropout layer rate | 0.25 |
| Global average pooling layer | — |
| 6th dropout layer rate | 0.25 |
| The number of neurons in the fully connected layer | 128 |
| 7th dropout layer rate | 0.25 |
| The number of neurons in the softmax layer | 2 |
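A minimal NumPy sketch (not the authors' implementation) of why the table's 1 × 3 kernels preserve scale independence: the kernel slides only along the time axis, so filtering one wavelet scale never mixes in coefficients from another.

```python
import numpy as np

def conv1x3(x, kernel):
    """Slide a 1x3 kernel along the time axis of a (scales, time) matrix.

    The kernel height is 1, so each wavelet scale (row) is filtered on
    its own; no information crosses between scales.
    """
    k = np.asarray(kernel, dtype=float)
    return np.stack([np.convolve(row, k[::-1], mode='valid') for row in x])

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 32))       # 8 scales, 32 time steps
b = a.copy()
b[2] += 1.0                            # perturb only scale D3
ya = conv1x3(a, [0.25, 0.5, 0.25])
yb = conv1x3(b, [0.25, 0.5, 0.25])
changed = np.any(ya != yb, axis=1)
print(changed.nonzero()[0])            # [2]: only the perturbed scale changed
```

A conventional 3 × 3 kernel would instead blend adjacent scales, which is exactly what the SI-CNN design avoids.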
Figure 4Receptive field of continuous conv layers.
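The receptive-field growth illustrated in Figure 4 follows a standard recurrence: each layer adds (k − 1) × jump input samples, and the stride multiplies the jump. The stride-2 pooling in the second example is an assumption, since the architecture table does not state pooling strides.

```python
def receptive_field(layers):
    """Receptive field (in input samples) of stacked 1-D layers.

    layers: (kernel_size, stride) tuples, listed input to output.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1)*jump
        jump *= s              # striding spaces out subsequent taps
    return rf

# two stacked 1x3 convs (stride 1): the field grows 1 -> 3 -> 5 samples
print(receptive_field([(3, 1), (3, 1)]))          # 5
# a conv-conv-pool block, assuming stride-2 pooling
print(receptive_field([(3, 1), (3, 1), (3, 2)]))  # 7
```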
Figure 5Partitioning of training, validation, and test datasets in this article.
Figure 6Structural optimization. (a) Train and validation accuracy on different dropout rates. (b) Train and validation loss on different dropout rates.
SI-CNN training results under different batch sizes.
| Batch size | Se (%) | Sp (%) | Acc (%) |
|---|---|---|---|
| 64 | 98.76 | 99.40 | 99.15 |
| 256 | 99.38 | 99.57 | 99.08 |
| 512 | 98.92 | 99.00 | 98.97 |
| 1024 | 97.99 | 99.54 | 98.92 |
Optimal CNN parameter set for AF detection.
| CNN optimization parameter | Value |
|---|---|
| Batch size | 128 |
| Epochs | 50 |
| Optimizer | Adam |
| β1 | 0.9 |
| β2 | 0.99 |
| Initial learning rate | 0.001 |
| Dropout | 0.25 |
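For reference, one Adam update with the table's hyperparameters (note β2 = 0.99 here rather than the more common default of 0.999) looks like the following. This is a NumPy sketch of the standard Adam rule, not the authors' training code.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.99, eps=1e-8):
    """One Adam update with the table's hyperparameters."""
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction (step t >= 1)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.zeros(3)
g = np.array([0.5, -2.0, 1.0])
w, m, v = adam_step(w, g, m=np.zeros(3), v=np.zeros(3), t=1)
print(np.round(w, 4))  # first step is ~lr in magnitude per weight,
                       # regardless of the raw gradient scale
```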
Figure 7Results obtained in this study. (a) Train and validation accuracy over iterations. (b) Train and validation loss over iterations.
Comparison of the performances of AF classification algorithms.
| Algorithm | Signal lengths | Methodology | Se (%) | Sp (%) | Acc (%) |
|---|---|---|---|---|---|
| Tateno and Glass | 50 s | RR interval irregularity | 94.4 | 97.2 | — |
| Dash et al. | 128 beats | RR interval irregularity | 94.4 | 95.1 | — |
| Babaeizadeh et al. | >60 s | RR interval irregularity | 92 | 95.5 | — |
| Huang et al. | 101 beats | RR interval irregularity | 96.1 | 98.1 | — |
| Asgari et al. | 9.8 s | SWT + SVM | 97 | 97.1 | 97.1 |
| Ladavich and Ghoraani | 7 beats | P-wave absence (PWA) | 98.09 | 91.66 | 93.12 |
| García et al. | 7 beats | SWT | 91.21 | 94.63 | 93.32 |
| He et al. | 5 beats | SWT | 99.41 | 98.91 | 99.23 |
| Xia et al. | 5 s | STFT | 98.34 | 98.24 | 98.29 |
| Xia et al. | 5 s | SWT | 98.6 | 97.17 | 97.74 |
| Proposed framework | 5 s | SWT | 99.03 | 99.35 | 99.23 |
Figure 8Confusion matrix in this study.
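The reported Se/Sp/Acc figures follow directly from a 2 × 2 confusion matrix such as the one in Figure 8. The counts below are illustrative only, not the paper's:

```python
def se_sp_acc(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy for binary AF detection."""
    se = tp / (tp + fn)                    # recall on AF segments
    sp = tn / (tn + fp)                    # recall on non-AF segments
    acc = (tp + tn) / (tp + fn + tn + fp)  # overall fraction correct
    return se, sp, acc

# illustrative counts only (not taken from Figure 8)
se, sp, acc = se_sp_acc(tp=991, fn=9, tn=993, fp=7)
print(f"Se={se:.2%} Sp={sp:.2%} Acc={acc:.2%}")  # Se=99.10% Sp=99.30% Acc=99.20%
```

Note that accuracy always lies between sensitivity and specificity for a binary classifier, weighted by the class counts in the test set.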