Amelia A Casciola1, Sebastiano K Carlucci1, Brianne A Kent2,3, Amanda M Punch1, Michael A Muszynski1, Daniel Zhou1, Alireza Kazemi4, Maryam S Mirian2, Jason Valerio2, Martin J McKeown2, Haakon B Nygaard2.
Abstract
Sleep disturbances are common in Alzheimer's disease and other neurodegenerative disorders, and together represent a potential therapeutic target for disease modification. A major barrier to studying sleep in patients with dementia is the requirement for overnight polysomnography (PSG) to achieve formal sleep staging. PSG is not only costly, but spending a night in a hospital setting is also not always advisable for this patient group. As an alternative to PSG, portable electroencephalography (EEG) headbands (HB) have been developed, which reduce cost, increase patient comfort, and allow sleep recordings in a person's home environment. However, naïve applications of current automated sleep staging systems tend to perform inadequately on HB data, owing to its relatively lower quality. Here we present a deep learning (DL) model for automated sleep staging of HB EEG data to overcome these critical limitations. The solution includes a simple band-pass filtering step, a data augmentation step, and a model using convolutional (CNN) and long short-term memory (LSTM) layers. With this model, we achieved 74% (±10%) validation accuracy on low-quality two-channel EEG headband data and 77% (±10%) on gold-standard PSG. Our results suggest that DL approaches achieve robust sleep staging of both portable and in-hospital EEG recordings, and may allow for more widespread use of ambulatory sleep assessments across clinical conditions, including neurodegenerative disorders.
Keywords: EEG headband; deep learning; machine learning; neurodegenerative disease; sleep; sleep staging
Year: 2021 PMID: 34064694 PMCID: PMC8151443 DOI: 10.3390/s21103316
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1Distribution of sleep stages assigned by the Senior Polysomnographic Technologist (Wake: 22.45%, N1: 6.95%, N2: 47.86%, N3: 11.07%, rapid eye movement (REM): 11.66%).
Headband data characteristics.
| Bandwidth | Sample Rate | Amplifier Gain | Resolution | Noise |
|---|---|---|---|---|
| 0–131 Hz | 500 samples/sec | 6 | 24 bits/sample | 0.7 μV |
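The pipeline's first step is a simple band-pass filter applied to the raw 500 samples/sec headband signal. A minimal sketch using SciPy, assuming an illustrative 0.5–40 Hz pass band (a common sleep-EEG choice; the exact cut-offs are not given in this excerpt):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # headband sample rate (samples/sec, from the table above)

def bandpass(eeg, low=0.5, high=40.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for a 1-D EEG channel.

    The 0.5-40 Hz band is an assumed, typical sleep-EEG choice; the
    paper only says "simple band-pass filtering" in this excerpt.
    """
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, eeg)  # filtfilt avoids phase distortion

# Example: filter one 30-s synthetic two-channel epoch
rng = np.random.default_rng(0)
epoch = rng.standard_normal((2, 30 * FS))           # 2 channels x 15000 samples
filtered = np.stack([bandpass(ch) for ch in epoch])
print(filtered.shape)  # (2, 15000)
```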
Figure 2Valid electroencephalogram (EEG) signal shown on the left, corrupted data on the right.
Figure 3Overlapping windows applied over the EEG signal (black line). The dotted vertical lines delimit two 30-s epochs of the same class. The red, green and yellow rectangles correspond to the newly generated epochs after applying a specified overlap (purple).
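The augmentation in Figure 3 can be sketched as slicing additional 30-s epochs at a fixed overlap within a run of consecutive same-class epochs. The 50% overlap below is an illustrative assumption; the paper's exact overlap value is not stated in this excerpt:

```python
import numpy as np

FS = 500          # samples per second
EPOCH_S = 30      # epoch length in seconds

def augment_overlap(signal, overlap=0.5, fs=FS, epoch_s=EPOCH_S):
    """Slide a 30-s window over a same-class signal run, stepping by
    (1 - overlap) * epoch length, and return the resulting epochs.

    overlap=0.5 is an illustrative choice, not a value from the paper.
    """
    win = epoch_s * fs
    step = int(win * (1.0 - overlap))
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# Two consecutive 30-s epochs of one class yield three epochs at 50% overlap
run = np.arange(2 * EPOCH_S * FS, dtype=float)
epochs = augment_overlap(run)
print(epochs.shape)  # (3, 15000)
```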
Figure 4Convolutional and long short-term memory (CNN + LSTM) model architecture inspired by [37].
Convolutional and long short-term memory (CNN + LSTM) model summary.
| Layer Type | Output Shape | Param # |
|---|---|---|
| Conv1D | (None, 2993, 8) | 136 |
| Activation (ReLU) | (None, 2993, 8) | 0 |
| MaxPooling1D | (None, 997, 8) | 0 |
| Conv1D | (None, 990, 16) | 1040 |
| Activation (ReLU) | (None, 990, 16) | 0 |
| MaxPooling1D | (None, 330, 16) | 0 |
| Conv1D | (None, 323, 32) | 4128 |
| Activation (ReLU) | (None, 323, 32) | 0 |
| MaxPooling1D | (None, 107, 32) | 0 |
| LSTM | (None, 107, 64) | 24,832 |
| LSTM | (None, 64) | 33,024 |
| Dense | (None, 5) | 325 |
| Total Params | 63,485 | |
| Trainable Params | 63,485 | |
| Non-Trainable Params | 0 | |
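The shapes and parameter counts in the table are mutually consistent with a two-channel input of 3000 samples per epoch (suggesting a 30-s epoch resampled to 100 Hz), 'valid' convolutions with kernel size 8, and pooling size 3; these values are inferred from the table rather than stated explicitly in this excerpt. A pure-Python check:

```python
def conv1d(length, in_ch, filters, kernel):
    """'Valid' Conv1D: output length and parameter count."""
    return length - kernel + 1, filters * (kernel * in_ch + 1)

def maxpool(length, pool=3):
    return length // pool

def lstm_params(in_dim, units):
    """Standard LSTM: 4 gates, each with input, recurrent, and bias weights."""
    return 4 * (units * (in_dim + units) + units)

length, ch = 3000, 2                     # inferred input: 3000 samples x 2 channels
length, p1 = conv1d(length, ch, 8, 8)    # -> 2993, 136 params
length = maxpool(length)                 # -> 997
length, p2 = conv1d(length, 8, 16, 8)    # -> 990, 1040 params
length = maxpool(length)                 # -> 330
length, p3 = conv1d(length, 16, 32, 8)   # -> 323, 4128 params
length = maxpool(length)                 # -> 107
p4 = lstm_params(32, 64)                 # -> 24,832 params
p5 = lstm_params(64, 64)                 # -> 33,024 params
p6 = 64 * 5 + 5                          # Dense(5) -> 325 params
total = p1 + p2 + p3 + p4 + p5 + p6
print(length, total)  # 107 63485, matching the table
```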
Type and number of features extracted from each EEG channel.
| Feature Category | Feature Group | Feature Size |
|---|---|---|
| Frequency Domain | RSP | 11 |
| | HP | 15 |
| | SWI | 3 |
| Time Domain | Hjorth | 3 |
| | Skewness | 1 |
| | Kurtosis | 1 |
| HOSA | Bi-Spectrum | 20 |
| Wavelet | Relative Power | 8 |
| Total Features | | 62 |
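The time-domain group in the table (3 Hjorth parameters plus skewness and kurtosis, i.e., 5 features per channel) follows standard definitions and can be sketched as:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def time_domain_features(x):
    """The 5 time-domain features per channel listed in the table:
    3 Hjorth parameters + skewness + kurtosis."""
    return np.array([*hjorth(x), skew(x), kurtosis(x)])

rng = np.random.default_rng(0)
feats = time_domain_features(rng.standard_normal(15000))  # one 30-s channel
print(feats.shape)  # (5,)
```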
Figure 5Per-patient validation accuracy during leave-one-out cross validation for the convolutional and long short-term memory (CNN+LSTM) model with headband (HB) and polysomnography (PSG).
Confusion matrices for deep learning model predictions on each subject.
(Per-subject confusion-matrix images for Subjects 1–12, for both headband (HB) and polysomnography (PSG) data; images not reproduced here.)
Average stage-wise performance over all folds for the deep learning model.
| Data | N1 | N2 | N3 | REM | Wake | Accuracy | Balanced Accuracy |
|---|---|---|---|---|---|---|---|
| HB | 29.80% | 74.87% | 84.02% | 73.96% | 80.60% | 74.01% | 68.65% |
| PSG | 31.08% | 77.82% | 85.27% | 75.38% | 84.64% | 77.00% | 70.84% |
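Balanced accuracy here is the unweighted mean of the per-stage recalls. For the headband results (the 74.01%-accuracy row), the mean of the five stage values reproduces the reported 68.65%, and likewise 70.84% for PSG:

```python
# Balanced accuracy = unweighted mean of per-stage recall (N1, N2, N3, REM, Wake)
hb_recalls = [29.80, 74.87, 84.02, 73.96, 80.60]
psg_recalls = [31.08, 77.82, 85.27, 75.38, 84.64]

hb_balanced = sum(hb_recalls) / len(hb_recalls)
psg_balanced = sum(psg_recalls) / len(psg_recalls)
print(round(hb_balanced, 2), round(psg_balanced, 2))  # 68.65 70.84
```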
Figure 6Confusion matrices for inputs passed through the following bandpass filters: (a) delta; (b) theta; (c) alpha and beta.
Figure 7Per-patient validation accuracy during leave-one-out cross validation for the ensemble bagged trees model with headband (HB) and polysomnography (PSG).
Average stage-wise performance over all folds for the ensemble bagged trees method.
| Data | N1 | N2 | N3 | REM | Wake | Accuracy | Balanced Accuracy |
|---|---|---|---|---|---|---|---|
| HB | 4.11% | 82.32% | 49.65% | 28.26% | 80.03% | 67.53% | 48.88% |
| PSG | 13.13% | 88.36% | 35.90% | 55.17% | 83.49% | 73.06% | 55.21% |