Meili Zhu, Qingqing Wang, Jianglin Luo.
Abstract
Among electroencephalogram (EEG) emotion recognition methods based on deep learning, most have difficulty training a high-quality model because of the low resolution and small sample size of EEG images. To solve this problem, this study proposes a deep network model based on dynamic energy features. First, to reduce the noise superposition caused by feature analysis and extraction, the concept of an energy sequence is proposed. Second, to obtain a feature set reflecting the temporal persistence and multicomponent complexity of EEG signals, a construction method for the dynamic energy feature set is given. Finally, to make the network model suitable for small datasets, fully connected layers and bidirectional long short-term memory (Bi-LSTM) networks are used. To verify the effectiveness of the proposed method, experiments were carried out on the SEED and DEAP datasets using leave-one-subject-out (LOSO) and 10-fold cross-validation (CV) strategies. The experimental results show that the accuracy of the proposed method reaches 89.42% (SEED) and 77.34% (DEAP).
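The abstract introduces the energy sequence concept without defining it here. A minimal sketch of one common formulation, assuming window energy is the sum of squared amplitudes over sliding windows; the function name, parameters, and definition are illustrative, not taken from the paper:

```python
import numpy as np

def energy_sequence(signal, fs, win_s=4.0, step_s=2.0):
    """Energy of each sliding window: E_k = sum of x[n]^2 over window k.

    A hedged sketch -- the paper's exact energy definition may differ.
    """
    win, step = int(win_s * fs), int(step_s * fs)
    n_windows = (len(signal) - win) // step + 1
    return np.array([np.sum(signal[k * step : k * step + win] ** 2)
                     for k in range(n_windows)])

# 60 s of synthetic single-channel data at 200 Hz (SEED sampling rate)
x = np.random.randn(60 * 200)
E = energy_sequence(x, fs=200)
print(E.shape)  # (29,)
```

With a 4 s window and 2 s step over a 60 s trial, the sequence length matches the 29 time windows listed in the data-parameter table below.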
Keywords: Bi-LSTM; EEG; dynamic energy feature; emotion recognition; energy sequence
Year: 2022 PMID: 35264939 PMCID: PMC8900638 DOI: 10.3389/fncom.2021.741086
Source DB: PubMed Journal: Front Comput Neurosci ISSN: 1662-5188 Impact factor: 2.380
FIGURE 1. Data preprocessing.
Regional division and coefficient setting.
| Region | SEED | DEAP |
| 1 | FP1, AF3, F7, F5, F3, F1, FT7, FC5, FC3, FC1, T7, C5, C3, C1 | FP1, AF3, F7, F3, FC5, FC1, T7, C3 |
| 2 | FP2, AF4, F2, F4, F6, F8, FC2, FC4, FC6, FT8, FPZ, FZ, FCZ, CZ | FP2, AF4, F4, F8, FC2, FC6, Fz, Cz |
| 3 | TP7, CP5, CP3, CP1, P7, P5, P3, P1, PO7, PO5, PO3, CB1, O1, CPZ, PZ, POZ, OZ | CP5, CP1, P7, P3, PO3, O1, Pz, Oz |
| 4 | CP2, CP4, CP6, TP8, P2, P4, P6, P8, PO4, PO6, PO8, CB2, O2, C2, C4, C6, T8 | CP2, CP6, P4, P8, PO4, O2, C4, T8 |
Data parameter setting.
| Parameter | SEED | DEAP |
| Number of channels | 62 | 32 |
| Length of time (s) | 60 | 60 |
| Time window width (s) | 4 | 4 |
| Time window move step (s) | 2 | 2 |
| Number of time windows | 29 | 29 |
| Sample points per time window | 800 | 512 |
| Label type (no. of classes) | 2/3 | 2 |
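The derived values in the table can be checked against the datasets' standard sampling rates (200 Hz for SEED, 128 Hz for DEAP): a 60 s trial with a 4 s window and a 2 s step yields (60 − 4)/2 + 1 = 29 windows, and each window holds sampling-rate × 4 samples:

```python
# Verify the table's derived window counts and sample counts.
def n_windows(total_s, win_s, step_s):
    """Number of sliding windows: (T - w) // step + 1."""
    return (total_s - win_s) // step_s + 1

def samples_per_window(fs_hz, win_s):
    """Sample points in one window at sampling rate fs_hz."""
    return fs_hz * win_s

print(n_windows(60, 4, 2))         # 29 (both datasets)
print(samples_per_window(200, 4))  # 800 (SEED, 200 Hz)
print(samples_per_window(128, 4))  # 512 (DEAP, 128 Hz)
```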
FIGURE 2. Pseudocode for computing the dynamic energy feature set.
FIGURE 3. Schematic diagram of the network structure.
Model parameter setting.
| Parameter | SEED | DEAP |
| Input size | 17 | 19 |
| Bi-LSTM units | 128 | 128 |
| Dropout | 0.3 | 0.3 |
| LSTM units | 64 | 64 |
| Dense 1 units | 128 | 128 |
| Dense 2 units | 3/2 | 3 |
| Optimizer | RMSprop | RMSprop |
| Learning rate | 0.001 | 0.001 |
| Epochs | 46 | 55 |
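A minimal PyTorch sketch of the architecture implied by the table (SEED column, 3-class output), assuming the layers stack in the listed order Bi-LSTM(128) → Dropout(0.3) → LSTM(64) → Dense(128) → Dense(classes); the layer ordering and the use of the last time step are assumptions, as the record does not spell them out:

```python
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Sketch of the table's architecture; hyperparameters from the
    SEED column, layer ordering inferred from the table's row order."""

    def __init__(self, input_size=17, n_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(input_size, 128, batch_first=True,
                              bidirectional=True)   # Bi-LSTM, 128 units
        self.drop = nn.Dropout(0.3)                 # Dropout 0.3
        self.lstm = nn.LSTM(2 * 128, 64, batch_first=True)  # LSTM, 64 units
        self.fc1 = nn.Linear(64, 128)               # Dense 1
        self.fc2 = nn.Linear(128, n_classes)        # Dense 2

    def forward(self, x):            # x: (batch, time, features)
        h, _ = self.bilstm(x)
        h, _ = self.lstm(self.drop(h))
        return self.fc2(torch.relu(self.fc1(h[:, -1])))  # last time step

model = EmotionNet()
opt = torch.optim.RMSprop(model.parameters(), lr=0.001)  # table: RMSprop, 0.001
out = model(torch.randn(8, 29, 17))  # batch of 8 trials, 29 time windows
print(out.shape)  # torch.Size([8, 3])
```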
FIGURE 4. Comparison of recognition rates between the proposed and traditional methods.
FIGURE 5. Cumulative contribution rate of principal components.
Top features contributing to the prediction of emotion using the MIPCA method for SEED.
| Feature type | SEED |
| Time domain features | 1. The first-order differential, 2. The normalized first-order differential, 3. The second-order differential, 4. The normalized second-order differential, 5. Instability index, 6. Fractal dimension, 7. Hjorth-mobility, and 8. Hjorth-complexity |
| Frequency domain features | 1. Hjorth-mobility (beta), 2. Hjorth-mobility (all), 3. Hjorth-complexity (beta), 4. Hjorth-complexity (all), 5. Maximum power spectrum (beta) frequency, and 6. Maximum power spectrum (all) frequency |
| Dynamical features | 1. De-gamma, 2. De-all, and 3. Wavelet entropy (all) |
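Hjorth mobility and complexity appear in both the time- and frequency-domain feature lists above. Their standard definitions can be sketched as follows; the test signal is illustrative only:

```python
import numpy as np

def hjorth(x):
    """Standard Hjorth parameters.

    mobility   = sqrt(var(x') / var(x))
    complexity = mobility(x') / mobility(x)
    """
    dx = np.diff(x)
    ddx = np.diff(dx)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return mobility, complexity

# One 4 s SEED-sized window (800 samples); a pure sinusoid has
# complexity close to 1 by construction.
x = np.sin(np.linspace(0, 8 * np.pi, 800))
m, c = hjorth(x)
print(round(m, 4), round(c, 4))
```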
FIGURE 6. Results on the SEED dataset: (A) leave-one-subject-out (LOSO) for the 2-classification; (B) LOSO for the 3-classification; (C) 10-fold CV for the 2-classification; (D) 10-fold CV for the 3-classification.
FIGURE 7. Results on the DEAP dataset: (A) LOSO; (B) 10-fold CV.
Comparison results.
| Dataset | Method | No. of classes | Validation strategy | Accuracy |
| SEED | ApEn, PerEn, ShEn, etc. + SVM ( | 2 | LOSO | 83.33% |
| | ST-SBSSVM ( | 2 | LOSO | 89% |
| | Holo-FM/CNN + SVM ( | 2 | 10-fold CV | 88.45% |
| | Differential entropy ( | 3 | LOSO | 60.93% |
| | Our method | 2 | 10-fold CV | 89.42% |
| | | 3 | 10-fold CV | 81.23% |
| | | 2 | LOSO | 83.87% |
| | | 3 | LOSO | 69.76% |
| DEAP | Holo-FM/CNN + SVM ( | 2 | 10-fold CV | 76.61% (valence) |
| | RGB heat-map/CNN + ELM (S. | 2 | 10-fold CV | 71.09% (valence) |
| | ApEn, PerEn, ShEn, etc. + SVM ( | 2 | LOSO | 59.06% |
| | Our method | 2 | 10-fold CV | 75.78% (valence) |
| | | 2 | LOSO | 61.39% (valence) |