| Literature DB >> 35709101 |
Changyuan Liu, Yunfu Yin, Yuhan Sun, Okan K Ersoy.
Abstract
Sleep staging is the basis of sleep evaluation and a key step in the diagnosis of sleep-related diseases. Despite being useful, existing sleep staging methods have several disadvantages, such as relying on manual feature extraction, failing to recognize temporal patterns in long-term associated data, and having reached an upper limit on staging accuracy. Hence, this paper proposes an automatic electroencephalogram (EEG) sleep signal staging model based on a Multi-scale Attention Residual Net (MAResnet) and a Bidirectional Gated Recurrent Unit (BiGRU). The proposed model builds on the residual neural network in deep learning. Compared with the traditional residual learning module, it additionally uses improved channel and spatial feature attention units and applies convolution kernels of different sizes in parallel at the same position. Multi-scale feature extraction of the EEG sleep signals and residual learning of the neural network are thus performed while avoiding network degradation. Finally, BiGRU is used to model the dependence between sleep stages and to realize automatic learning of staging features and sleep cycle extraction. In the experiments, the classification accuracy and kappa coefficient of the proposed method on the Sleep-EDF dataset are 84.24% and 0.78, which are 0.24% and 0.21 higher, respectively, than those of the traditional residual net. The proposed method was also verified on the UCD and SHHS datasets, where the classification accuracies are 79.34% and 81.6%, respectively. Compared with related existing studies, the recognition accuracy is significantly improved, which validates the effectiveness and generalization performance of the proposed method.
Year: 2022 PMID: 35709101 PMCID: PMC9202858 DOI: 10.1371/journal.pone.0269500
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.752
Fig 1. GRU structure.
Fig 2. Bidirectional GRU structure.
Fig 3. CAU mechanism.
Fig 4. SAU mechanism.
Fig 5. RSCAM overall module.
Fig 6. Overall structure of the model.
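Figs 1 and 2 depict the GRU and bidirectional GRU structures used in the model. As a rough illustration only (not the authors' implementation; weight names, shapes, and the omission of bias terms are assumptions), a single GRU step and a bidirectional pass can be sketched in NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_tilde           # interpolate old and new state

def bigru(seq, params_f, params_b, hidden):
    """Bidirectional pass: run the sequence forward and backward with
    separate parameter sets, then concatenate the two hidden states."""
    hf = np.zeros(hidden)
    hb = np.zeros(hidden)
    fwd, bwd = [], []
    for x in seq:
        hf = gru_step(x, hf, *params_f)
        fwd.append(hf)
    for x in reversed(seq):
        hb = gru_step(x, hb, *params_b)
        bwd.append(hb)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```

Each output vector has twice the hidden size, combining past and future context for every epoch in the sequence, which is what lets the model exploit dependence between neighboring sleep stages.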
Sleep-EDF sleep staging.
| Stage | Quantity | Proportion (%) |
|---|---|---|
| W | 7927 | 18.90 |
| S1 | 2804 | 6.68 |
| S2 | 17799 | 42.43 |
| S3 | 5703 | 13.59 |
| REM | 7717 | 18.40 |
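The proportions in the staging table can be reproduced from the raw counts; a quick check, assuming the five rows follow the stage order used in the later tables (W, S1, S2, S3, REM):

```python
# Class counts from the Sleep-EDF staging table (assumed order W, S1, S2, S3, REM)
counts = {"W": 7927, "S1": 2804, "S2": 17799, "S3": 5703, "REM": 7717}
total = sum(counts.values())  # 41950 epochs in total
proportions = {k: round(100 * v / total, 2) for k, v in counts.items()}
```

This recovers the tabulated percentages (e.g. S2 at 42.43%), confirming the strong class imbalance toward stage S2 and away from S1.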
Recall rate of each sleep stage under different algorithms with label smoothing on the Sleep-EDF dataset.
| Net model | W | S1 | S2 | S3 | REM | Total recognition rate |
|---|---|---|---|---|---|---|
| MAResnet-BiGRU (proposed) | 88.24% | 67.2% | 83.53% | 92.28% | 88.95% | 84.24% |
| — | 83.37% | 59.48% | 79.89% | 88.34% | 82.77% | 78.77% |
| — | 80.12% | 58.27% | 77.45% | 85.12% | 81.34% | 76.46% |
| — | 80.34% | 59.34% | 76.56% | 86.67% | 78.39% | 76.26% |
| — | 76.20% | 53.88% | 71.30% | 79.04% | 77.12% | 74.70% |
| — | 69.20% | 49.32% | 60.26% | 67.38% | 64.84% | 62.20% |
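The per-stage recall rates and total recognition rates reported in these tables can be computed from a confusion matrix such as the one in Fig 7. A minimal sketch, assuming rows index the true stages and columns the predicted stages:

```python
import numpy as np

def per_class_recall(cm):
    """Per-class recall from a confusion matrix whose rows are true labels:
    correct predictions on the diagonal divided by each row's total."""
    cm = np.asarray(cm, dtype=float)
    return cm.diagonal() / cm.sum(axis=1)

def overall_accuracy(cm):
    """Total recognition rate: sum of the diagonal over the grand total."""
    cm = np.asarray(cm, dtype=float)
    return cm.trace() / cm.sum()
```

On a toy 2-class matrix `[[8, 2], [1, 9]]` this yields recalls of 0.8 and 0.9 and an overall accuracy of 0.85; the 5-class sleep-stage case works identically.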
Comparison of experimental results on the Sleep-EDF dataset with existing studies.
| Method | Author | Accuracy |
|---|---|---|
| XGBoost | Guo Yanping | 79.7% |
| K-means | Yu Ying et al. | 72% |
| CNNs | Phan H et al. | 82.6% |
| Residual CNNs | Humayun et al. | 79.2% |
| Residual CNNs + BiLSTM | Seo H et al. | 80.6% |
| MAResnet-BiGRU | This paper | 84.24% |
Recall rate of each sleep stage under different algorithms with label smoothing on the UCD dataset.
| Net model | W | S1 | S2 | S3 | REM | Total recognition rate |
|---|---|---|---|---|---|---|
| MAResnet-BiGRU (proposed) | 84.50% | 63.10% | 79.50% | 86.30% | 83.30% | 79.34% |
| — | 80.28% | 57.31% | 75.84% | 83.74% | 81.60% | 75.75% |
| — | 78.20% | 55.76% | 74.34% | 80.47% | 79.26% | 73.60% |
| — | 77.83% | 56.34% | 74.56% | 81.28% | 78.54% | 73.71% |
| — | 72.24% | 48.69% | 69.30% | 73.26% | 71.38% | 67.39% |
| — | 64.37% | 43.10% | 61.20% | 66.31% | 63.52% | 59.70% |
Recall rate of each sleep stage under different algorithms with label smoothing on the SHHS dataset.
| Net model | W | S1 | S2 | S3 | REM | Total recognition rate |
|---|---|---|---|---|---|---|
| MAResnet-BiGRU (proposed) | 85.24% | 64.37% | 82.90% | 89.29% | 86.2% | 81.60% |
| — | 83.36% | 56.48% | 78.41% | 87.49% | 81.73% | 77.49% |
| — | 79.12% | 55.21% | 74.62% | 83.12% | 79.32% | 74.27% |
| — | 78.64% | 56.14% | 75.16% | 84.76% | 76.30% | 74.20% |
| — | 74.22% | 50.26% | 69.83% | 79.04% | 70.42% | 68.75% |
| — | 65.10% | 45.32% | 63.74% | 65.28% | 61.8% | 60.24% |
Kappa coefficient under different algorithms.
| Method | Kappa coefficient |
|---|---|
| — | 0.79±0.03 |
| — | 0.68±0.05 |
| — | 0.68±0.04 |
| — | 0.68±0.06 |
| — | 0.57±0.08 |
| — | 0.70±0.06 |
| — | 0.63±0.06 |
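Cohen's kappa, the agreement measure tabulated above, can be computed from the same confusion matrix as the recall rates; a minimal sketch:

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix: (po - pe) / (1 - pe),
    where po is observed agreement and pe is chance agreement from
    the row (true) and column (predicted) marginals."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = cm.trace() / n                              # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)
```

For a balanced 2-class matrix `[[45, 5], [5, 45]]`, observed agreement is 0.9 and chance agreement is 0.5, giving kappa = 0.8; unlike raw accuracy, kappa discounts agreement expected under the class imbalance seen in the staging table.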
Fig 7. Confusion matrix on the Sleep-EDF dataset.
Fig 8. Training accuracy and loss values under different algorithms on the Sleep-EDF dataset.
Fig 9. Training loss values of the six models on the Sleep-EDF dataset.
Fig 10. Classification recall rates for each sleep stage under different algorithms on the Sleep-EDF dataset.