| Literature DB >> 31063501 |
Sajad Mousavi, Fatemeh Afghah, U Rajendra Acharya.
Abstract
Electroencephalogram (EEG) is a common base signal used to monitor brain activities and diagnose sleep disorders. Manual sleep stage scoring is a time-consuming task for sleep experts and is limited by inter-rater reliability. In this paper, we propose an automatic sleep stage annotation method called SleepEEGNet using a single-channel EEG signal. The SleepEEGNet is composed of deep convolutional neural networks (CNNs) to extract time-invariant features, frequency information, and a sequence to sequence model to capture the complex and long short-term context dependencies between sleep epochs and scores. In addition, to reduce the effect of the class imbalance problem presented in the available sleep datasets, we applied novel loss functions to have an equal misclassified error for each sleep stage while training the network. We evaluated the performance of the proposed method on different single-EEG channels (i.e., Fpz-Cz and Pz-Oz EEG channels) from the Physionet Sleep-EDF datasets published in 2013 and 2018. The evaluation results demonstrate that the proposed method achieved the best annotation performance compared to current literature, with an overall accuracy of 84.26%, a macro F1-score of 79.66% and κ = 0.79. Our developed model can be applied to other sleep EEG signals and aid the sleep specialists to arrive at an accurate diagnosis. The source code is available at https://github.com/SajadMo/SleepEEGNet.Entities:
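The abstract's class-imbalance remedy — a loss in which each sleep stage incurs a comparable misclassification cost — can be illustrated with a class-weighted cross-entropy. This is a minimal numpy sketch of that general idea, not the paper's exact loss formulation; the function name and the example weights are illustrative assumptions.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Cross-entropy where each sample's error is scaled by the weight of
    its true class (e.g. inverse class frequency), so rare stages such as
    N1 are not drowned out by majority stages like N2.
    Hypothetical sketch -- not the paper's exact loss."""
    n = len(labels)
    w = class_weights[labels]                          # per-sample weight
    nll = -np.log(probs[np.arange(n), labels] + 1e-12) # negative log-likelihood
    return float(np.sum(w * nll) / np.sum(w))          # weighted mean

# Two samples over three classes: one well-classified, one less so.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.10, 0.80, 0.10]])
labels = np.array([0, 1])
loss_uniform = weighted_cross_entropy(probs, labels, np.ones(3))
loss_skewed = weighted_cross_entropy(probs, labels, np.array([1.0, 5.0, 1.0]))
```

With uniform weights the loss reduces to the plain mean negative log-likelihood; up-weighting class 1 raises the loss because its sample is the harder one, which is exactly the pressure a rare stage needs during training.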
Year: 2019 PMID: 31063501 PMCID: PMC6504038 DOI: 10.1371/journal.pone.0216456
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Illustration of the proposed sequence-to-sequence deep learning network architecture for automated sleep stage scoring.
The input signal is a sequence of 30-s EEG epochs and the outputs are their corresponding stages (or classes) generated by our proposed method.
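The input described above — a sequence of 30-s EEG epochs — can be produced by simple slicing. This sketch assumes the 100 Hz EEG sampling rate of the Sleep-EDF recordings; the function name is illustrative.

```python
import numpy as np

FS = 100         # Sleep-EDF EEG sampling rate (Hz)
EPOCH_SEC = 30   # standard scoring epoch length (s)

def segment_epochs(signal):
    """Split a 1-D single-channel EEG recording into consecutive,
    non-overlapping 30-s epochs (3000 samples each at 100 Hz),
    dropping any incomplete trailing epoch."""
    n = FS * EPOCH_SEC
    k = len(signal) // n
    return signal[: k * n].reshape(k, n)

# A 75-s toy recording yields two complete epochs; the rest is dropped.
epochs = segment_epochs(np.arange(7500.0))
```

Each row of `epochs` then corresponds to one scoring epoch fed to the CNN feature extractor.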
Fig 2. Detailed sketch of the CNN model used in the proposed work.
Fig 3. A schematic diagram of the bidirectional recurrent neural network.
Number of epochs per sleep stage in each version of the Sleep-EDF dataset.
| Dataset | W | N1 | N2 | N3-N4 | REM | Total |
|---|---|---|---|---|---|---|
| Sleep-EDF-13 | 8,285 | 2,804 | 17,799 | 5,703 | 7,717 | 42,308 |
| Sleep-EDF-18 | 65,951 | 21,522 | 96,132 | 13,039 | 25,835 | 222,479 |
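The table above shows the imbalance the paper's loss functions target: in Sleep-EDF-13, N2 has more than six times as many epochs as N1. One common way to derive per-stage weights from such counts is inverse-frequency weighting, sketched below with the Sleep-EDF-13 numbers; the paper does not necessarily use this exact scheme.

```python
# Epoch counts per stage, Sleep-EDF-13 (from the table above).
counts = {"W": 8285, "N1": 2804, "N2": 17799, "N3-N4": 5703, "REM": 7717}
total = sum(counts.values())  # 42,308 epochs in total

# Inverse-frequency weights, normalised so a balanced dataset
# would give every class weight 1.0.
weights = {stage: total / (len(counts) * c) for stage, c in counts.items()}
```

The rarest stage (N1) ends up with the largest weight, so its misclassifications count the most, while the majority stage N2 is down-weighted below 1.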
Confusion matrix and per-class performance achieved by the proposed method using the Fpz-Cz EEG channel of the Sleep-EDF-2013 database (rows: expert-scored stage; columns: predicted stage).
| True | W | N1 | N2 | N3 | REM | Pre (%) | Rec (%) | Spe (%) | F1 (%) |
|---|---|---|---|---|---|---|---|---|---|
| W | 7161 | 432 | 67 | 27 | 219 | 87.84 | 90.58 | 96.97 | 89.19 |
| N1 | 442 | 1486 | 364 | 25 | 409 | 50.05 | 54.51 | 96.08 | 52.19 |
| N2 | 359 | 735 | 14187 | 1035 | 837 | 91.26 | 82.71 | 94.20 | 86.77 |
| N3 | 37 | 9 | 560 | 4857 | 2 | 81.69 | 88.87 | 96.90 | 85.13 |
| REM | 153 | 307 | 368 | 2 | 6520 | 81.63 | 88.71 | 95.59 | 85.02 |
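The per-class columns of this table follow directly from the confusion matrix: precision divides each diagonal entry by its column sum, recall by its row sum, and F1 is their harmonic mean. The snippet below recomputes them from the Fpz-Cz matrix and recovers the abstract's headline numbers (84.26% accuracy, 79.66% macro F1).

```python
import numpy as np

# Confusion matrix from the Fpz-Cz / Sleep-EDF-13 table above
# (rows = true stage, columns = predicted stage; order W, N1, N2, N3, REM).
cm = np.array([
    [7161,  432,    67,   27,  219],
    [ 442, 1486,   364,   25,  409],
    [ 359,  735, 14187, 1035,  837],
    [  37,    9,   560, 4857,    2],
    [ 153,  307,   368,    2, 6520],
])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)              # TP / (TP + FP), per column
recall = tp / cm.sum(axis=1)                 # TP / (TP + FN), per row
f1 = 2 * precision * recall / (precision + recall)
accuracy = tp.sum() / cm.sum()               # overall accuracy
macro_f1 = f1.mean()                         # unweighted mean over stages
```

Note the macro F1 averages the five stages equally, so the weak N1 score (52.19%) pulls it well below the overall accuracy.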
Confusion matrix and per-class performance achieved by the proposed method using the Pz-Oz EEG channel of the Sleep-EDF-2013 database (rows: expert-scored stage; columns: predicted stage).
| True | W | N1 | N2 | N3 | REM | Pre (%) | Rec (%) | Spe (%) | F1 (%) |
|---|---|---|---|---|---|---|---|---|---|
| W | 7094 | 398 | 82 | 41 | 238 | 90.20 | 90.33 | 97.65 | 90.27 |
| N1 | 539 | 1167 | 455 | 29 | 492 | 45.84 | 43.51 | 96.36 | 44.64 |
| N2 | 114 | 655 | 14220 | 1157 | 971 | 88.58 | 83.07 | 92.19 | 85.74 |
| N3 | 17 | 12 | 791 | 4658 | 10 | 78.48 | 84.88 | 96.36 | 81.55 |
| REM | 100 | 314 | 506 | 50 | 6489 | 79.13 | 87.00 | 94.84 | 82.88 |
Comparison of performance obtained by our approach with other state-of-the-art algorithms.
| Method | Dataset | CV | EEG Channel | ACC | MF1 | κ | W | N1 | N2 | N3 | REM |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SleepEEGNet (proposed) | Sleep-EDF-13 | 20-fold CV | Fpz-Cz | 84.26 | 79.66 | 0.79 | 89.19 | 52.19 | 86.77 | 85.13 | 85.02 |
| Supratak et al. [ | Sleep-EDF-13 | 20-fold CV | Fpz-Cz | 82.0 | 76.9 | 0.76 | 84.7 | 46.6 | 85.9 | 84.8 | 82.4 |
| Tsinalis et al. [ | Sleep-EDF-13 | 20-fold CV | Fpz-Cz | 78.9 | 73.7 | - | 71.6 | 47.0 | 84.6 | 84.0 | 81.4 |
| Tsinalis et al. [ | Sleep-EDF-13 | 20-fold CV | Fpz-Cz | 74.8 | 69.8 | - | 65.4 | 43.7 | 80.6 | 84.9 | 74.5 |
| SleepEEGNet (proposed) | Sleep-EDF-13 | 20-fold CV | Pz-Oz | 82.83 | 77.02 | - | 90.27 | 44.64 | 85.74 | 81.55 | 82.88 |
| Supratak et al. [ | Sleep-EDF-13 | 20-fold CV | Pz-Oz | 79.8 | 73.1 | 0.72 | 88.1 | 37.0 | 82.7 | 77.3 | 80.3 |
| SleepEEGNet (proposed) | Sleep-EDF-18 | 10-fold CV | Fpz-Cz | - | - | - | - | - | - | - | - |
| SleepEEGNet (proposed) | Sleep-EDF-18 | 10-fold CV | Pz-Oz | - | - | - | - | - | - | - | - |
Sleep-EDF-13: Sleep-EDF 2013; Sleep-EDF-18: Sleep-EDF 2018; CV: Cross Validation; ACC: overall accuracy (%); MF1: macro F1-score (%); the W-REM columns give per-class F1 (%)
Fig 4. Accuracy (a) and loss (b) of the proposed model at each training epoch for a randomly selected fold (fold 4).
Fig 5. An example of hypnograms generated by the machine (i.e., the proposed method) and by a sleep expert for a subject from the Sleep-EDF-13 dataset; approximately 85% coverage.
Fig 6. Attention maps of two input sequences (EEG epochs) and their corresponding sleep stage scores provided by our proposed method.
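Attention maps like those in Fig 6 are rows of softmax-normalised alignment weights between each decoder step and all encoder outputs. The sketch below uses simple dot-product scoring; the paper's exact scoring function may differ, and the function name is an illustrative assumption.

```python
import numpy as np

def attention_weights(decoder_state, encoder_outputs):
    """Softmax-normalised alignment between one decoder state and all
    encoder outputs (dot-product scoring). Stacking these weight vectors
    over output steps yields an attention map over input EEG epochs."""
    scores = encoder_outputs @ decoder_state  # one score per encoder step
    scores = scores - scores.max()            # shift for numerical stability
    w = np.exp(scores)
    return w / w.sum()                        # weights sum to 1

# Toy example: three encoder steps; the decoder state aligns with step 1.
enc = np.eye(3)
w = attention_weights(np.array([0.0, 2.0, 0.0]), enc)
```

A bright cell in the map thus marks the input epoch the model attended to most when emitting that stage label.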