Yao Zhang1, Yineng Zheng2, Menglu Wang1, Xingming Guo3.
Abstract
Keywords: Deep learning; Exercise sudden death; Exhaustive swimming experiment; Heart sounds
Year: 2021 PMID: 34461905 PMCID: PMC8404258 DOI: 10.1186/s12938-021-00925-0
Source DB: PubMed Journal: Biomed Eng Online ISSN: 1475-925X Impact factor: 2.819
Studies on HS feature extraction and classification using machine learning
| Year | Author | Dataset | Feature extraction methods | Classifier | Sens (%) | Spec (%) | Acc (%) |
|---|---|---|---|---|---|---|---|
| 2015 | Zheng et al. [ | 88 normal heart sounds, 64 abnormal heart sounds | MF-DFA, MESE, EMD | HMM | 82.95 | 79.68 | 81.58 |
|  |  |  |  | BP-ANN | 85.23 | 82.81 | 84.21 |
|  |  |  |  | LS-SVM | 96.59 | 93.75 | 95.39 |
| 2016 | Thomae et al. [ | PhysioNet | 1D CNN | Bidirectional GRU | 96 | 83 | – |
| 2016 | Potes et al. [ | PhysioNet | LR-HSMM, MFCC | AdaBoost | 70 | 88 | – |
|  |  |  | Frequency bands decomposition | CNN | 79 | 86 | – |
| 2019 | Li et al. [ | 2532 recordings from healthy subjects, 664 recordings from patients | DAE | 1D CNN | – | – | 97.85 |
|  |  |  | MFCC | 1D CNN | – | – | 91.02 |
| 2020 | Li et al. [ | PhysioNet | Eight domains | CNN | 87 | 86.6 | 86.8 |
| 2020 | Gao et al. [ | 1286 normal recordings from PhysioNet, 108 abnormal heart sounds from patients | – | SVM | – | – | 87.62 |
|  |  |  |  | FCN | – | – | 94.65 |
|  |  |  |  | LSTM | – | – | 96.29 |
|  |  |  |  | GRU | – | – | 98.82 |
| 2020 | Deng et al. [ | PhysioNet | MFCC | CRNN | 98.66 | 98.01 | 98.34 |
|  |  |  |  | PRCNN | 97.33 | 97.33 | 97.34 |
MF-DFA multifractal detrended fluctuation analysis, MESE maximum entropy spectra estimation, EMD empirical mode decomposition, CNN convolutional neural network, 1D CNN one-dimensional convolutional neural network, DAE denoising autoencoder, MFCC Mel-frequency cepstrum coefficient, HMM hidden Markov model, LR-HSMM logistic regression-based hidden semi-Markov model, BP-ANN back-propagation artificial neural network, LS-SVM least square support vector machine, GRU gated recurrent unit, FCN fully convolutional network, LSTM long short-term memory network, CRNN convolutional recurrent neural network, PRCNN paralleling recurrent convolutional neural network
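Several of the listed methods build on MFCC features. The mel scale underlying them is a standard frequency mapping; this is a minimal sketch of that mapping only, independent of any particular MFCC implementation:

```python
import math

def hz_to_mel(f_hz):
    # standard HTK-style mel scale
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # inverse mapping
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# mel filterbanks place their center frequencies evenly on this scale,
# so low frequencies get finer resolution than high ones
print(round(hz_to_mel(1000.0), 1))  # ≈ 1000 by construction of the scale
```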
Fig. 1 The illustration of the workflow in this paper. The CNN–GRU is the proposed network; the others are the networks used for comparison
Fig. 2 The training and validation performance of the CNN–GRU network at 50 epochs: a accuracy; b loss
The performance comparison of different networks
| Networks | Acc (%) | Sens (%) | Spec (%) |
|---|---|---|---|
| CNN | 86.65 | 83.84 | 89.50 |
| GRU | 73.55 | 74.41 | 72.35 |
| Proposed network | 89.57 | 89.38 | 92.20 |
Acc accuracy, Sens sensitivity, Spec specificity
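The Acc, Sens, and Spec values throughout these tables follow the usual confusion-matrix definitions. A minimal sketch with illustrative counts (not the paper's data):

```python
def metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    sens = tp / (tp + fn)                  # true positive rate
    spec = tn / (tn + fp)                  # true negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)  # overall correct fraction
    return acc, sens, spec

# illustrative counts only, not taken from the paper
acc, sens, spec = metrics(tp=89, fn=11, tn=92, fp=8)
print(acc, sens, spec)  # 0.905 0.89 0.92
```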
The performance of the proposed network on datasets from four different time nodes
| Dataset | Acc (%) | Sens (%) | Spec (%) |
|---|---|---|---|
| Dataset A | 50.98 | 60.59 | 42.40 |
| Dataset B | 64.34 | 74.97 | 64.36 |
| Dataset C | 85.41 | 84.18 | 79.34 |
| Dataset D | 89.57 | 89.38 | 92.20 |
Dataset A to Dataset D represent the HS signals recorded at different time points in the experiment
Acc accuracy, Sens sensitivity, Spec specificity
Fig. 3 The classification results of HS signals at different time points by the CNN–GRU network. When Dataset D is used as input, the CNN–GRU achieves the best performance
Results for different convolution kernel sizes and different numbers of convolution layers
| Different layers | Convolution kernel size | Acc (%) | Sens (%) | Spec (%) |
|---|---|---|---|---|
| 4 layers | 10 | 72.71 | 80.71 | 74.25 |
|  | 20 | 75.33 | 78.16 | 70.72 |
|  | 30 | 65.77 | 72.15 | 51.38 |
| 6 layers | 10 | 73.45 | 77.02 | 79.90 |
|  | 20 | 77.48 | 79.58 | 77.35 |
|  | 30 | 75.03 | 86.97 | 73.05 |
| 8 layers | 10 | 85.35 | 87.91 | 84.75 |
|  | 20 | – | – | – |
|  | 30 | **85.72** | **88.29** | **87.42** |
The best result is highlighted in bold
Acc accuracy, Sens sensitivity, Spec specificity
Fig. 4 The accuracy comparison for different numbers of GRU units in the CNN–GRU. When the number of units is set to 128, the CNN–GRU achieves the best accuracy
Fig. 5 The accuracy comparison for different learning rates and different dropout rates: a learning rate; b dropout rate
Fig. 6 The variation of HR and D/S between the survival and exercise sudden death groups in different datasets: a shows that the HR values of Dataset C and Dataset D differ between the two groups; b shows that the D/S between the two groups of rabbits differs in Dataset D. *P < 0.05
Fig. 7 The experimental procedures of repeated weight-bearing exhaustive swimming: the left panel shows the overall experimental process, and the right panel shows the exhaustive swimming experiment
The HS datasets collected at different time nodes
| Dataset | Time node | Description |
|---|---|---|
| Dataset A | Pre-test signal | 2482 recordings from 10 survival samples and 1955 recordings from 11 exercise sudden death samples |
| Dataset B | 24 h after the first exhaustive swimming | 2245 recordings from 10 survival samples and 2049 recordings from 10 exercise sudden death samples |
| Dataset C | 24 h after the second exhaustive swimming | 2246 recordings from 10 survival samples and 1317 recordings from 5 exercise sudden death samples |
| Dataset D | 96 h after the third exhaustive swimming and exercise sudden death during the experiment | 2037 recordings from 10 survival samples and 1251 recordings from 11 exercise sudden death samples |
Fig. 8 The time–frequency information of a resting New Zealand rabbit: a HS of a resting New Zealand rabbit; b fast Fourier transform of HS; c short-time Fourier transform of HS
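Fig. 8 characterizes HS in the time–frequency domain via FFT and STFT. A minimal numpy sketch of a magnitude STFT follows; the window length, hop size, and sampling rate here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def stft_mag(x, fs, win_len=256, hop=128):
    # Hann-windowed short-time Fourier transform, magnitude only
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * win
                       for i in range(n_frames)])
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return freqs, np.abs(np.fft.rfft(frames, axis=1))

# synthetic 50 Hz tone standing in for a low-frequency HS component
fs = 2000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t)
freqs, spec = stft_mag(x, fs)
peak = freqs[np.argmax(spec[0])]  # dominant frequency in the first frame
```

The peak bin recovers the tone's frequency up to the FFT bin spacing (fs / win_len ≈ 7.8 Hz here), which is the usual time–frequency resolution trade-off the STFT panels in Fig. 8 illustrate.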
Fig. 9 The location and segmentation of HS in a rabbit at four different time points: a before the experiment; b 24 h after the first exhaustive swimming; c 24 h after the second exhaustive swimming; d 96 h after the third exhaustive swimming. The blue and magenta dashed lines indicate the start and end of segmentation, respectively
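The segmentation in Fig. 9 depends on locating HS components in time. One common envelope used for this in the HS literature is the average Shannon energy; this sketch is illustrative only and not necessarily the authors' segmentation method:

```python
import numpy as np

def shannon_envelope(x, frame=64):
    # average Shannon energy per frame; emphasizes medium-intensity
    # components over both noise and isolated large peaks
    x = x / (np.max(np.abs(x)) + 1e-12)
    e = -x**2 * np.log(x**2 + 1e-12)
    n = len(e) // frame
    return e[: n * frame].reshape(n, frame).mean(axis=1)

# synthetic signal: silence with one sound burst, standing in for S1/S2
sig = np.zeros(1024)
t = np.arange(64)
sig[480:544] = np.sin(2 * np.pi * t / 16)
env = shannon_envelope(sig)
# the envelope peaks in the frames covering the burst (frames 7-8)
print(int(np.argmax(env)))
```

Thresholding such an envelope gives candidate start and end points, analogous to the dashed segmentation lines in Fig. 9.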
Fig. 10 The structure of the proposed network
The detailed information of the proposed network
| Layers | Layers types | Output size | Kernel/pool size | Filter numbers | Stride | Activation function |
|---|---|---|---|---|---|---|
| 0 | Input | 1001 × 1 | – | – | – | – |
| 1 | 1D conv | 982 × 9 | 20 | 9 | 1 | ReLU |
| 2 | 1D max pooling | 245 × 9 | 4 | – | 4 | – |
| 3 | 1D conv | 226 × 9 | 20 | 9 | 1 | ReLU |
| 4 | 1D max pooling | 56 × 9 | 4 | – | 4 | – |
| 5 | 1D conv | 37 × 9 | 20 | 9 | 1 | ReLU |
| 6 | 1D max pooling | 9 × 9 | 4 | – | 4 | – |
| 7 | GRU | 128 | – | – | – | dropout = 0.5 |
| 8 | dense | 2 | – | – | – | softmax |
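The output sizes in the table are consistent with unpadded ("valid") convolution and pooling arithmetic. A quick check, using the kernel sizes and strides the table specifies:

```python
def conv1d_out(n, kernel, stride=1):
    # 'valid' (no padding) convolution: floor((n - kernel) / stride) + 1
    return (n - kernel) // stride + 1

def pool1d_out(n, pool, stride):
    # same formula applies to max pooling
    return (n - pool) // stride + 1

n = 1001          # input length from layer 0
shapes = []
for _ in range(3):            # three conv + pool blocks (layers 1-6)
    n = conv1d_out(n, 20)     # 1D conv, kernel 20, stride 1
    shapes.append(n)
    n = pool1d_out(n, 4, 4)   # 1D max pooling, size 4, stride 4
    shapes.append(n)
print(shapes)  # [982, 245, 226, 56, 37, 9]
```

These match the table's time dimensions (982, 245, 226, 56, 37, 9); the channel dimension stays at 9 because each conv layer uses 9 filters.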