Yucel Cimtay, Erhan Ekmekcioglu.
Abstract
The electroencephalogram (EEG) has attracted considerable interest in emotion recognition studies because of its resistance to deceptive human behavior; this is one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions across different people, as well as for the same person at different time instances. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band power features from the EEG readings, our study uses raw EEG data after applying windowing, pre-adjustments and normalization. Removing manual feature extraction from the training pipeline avoids the risk of discarding hidden features in the raw data and helps the deep network uncover unknown features on its own. To improve the classification accuracy further, a median filter is applied to eliminate false detections along a prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on the SEED dataset was tested on the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes.
Results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity because it eliminates the need for feature extraction.
Keywords: EEG; convolutional neural network; dataset independency; dense layer; emotion recognition; filtering on output; pretrained models; raw data; subject independency
Year: 2020 PMID: 32260445 PMCID: PMC7181114 DOI: 10.3390/s20072034
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Windowing with overlapping on raw electroencephalogram (EEG) data.
Figure 2. Windowing, reshaping and normalization on EEG data.
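The windowing and normalization stages shown in Figures 1 and 2 can be sketched as follows; this is a minimal illustration, in which the channel count, window length and overlap ratio are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def window_eeg(eeg, win_len, overlap):
    """Split a (channels, samples) EEG recording into overlapping windows.

    win_len: window length in samples; overlap: fraction of each window
    shared with the next one (both illustrative values below).
    """
    step = int(win_len * (1.0 - overlap))
    n_ch, n_samp = eeg.shape
    windows = [eeg[:, s:s + win_len]
               for s in range(0, n_samp - win_len + 1, step)]
    return np.stack(windows)              # (n_windows, channels, win_len)

def normalize(windows):
    """Min-max scale each window so raw amplitudes are comparable."""
    mins = windows.min(axis=(1, 2), keepdims=True)
    maxs = windows.max(axis=(1, 2), keepdims=True)
    return (windows - mins) / (maxs - mins + 1e-8)

# Toy run: 32 channels, 1000 samples, 200-sample windows, 50% overlap.
rng = np.random.default_rng(0)
raw = rng.standard_normal((32, 1000))
wins = normalize(window_eeg(raw, win_len=200, overlap=0.5))
```

Each normalized window would then be reshaped to match the CNN input, as depicted in Figure 2.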
Figure 3. The structure of the proposed network model.
The properties of the layers following the InceptionResNetV2 base model.
| Layer (Type) | Output Shape | Connected to | Activation Function |
|---|---|---|---|
| Global_Average_Pooling | (None, 1536) | convolution | - |
| Dense 1 | (None, 1024) | Global_Average_Pooling | ReLU |
| Dense 2 | (None, 1024) | Dense 1 | ReLU |
| Dense 3 | (None, 1024) | Dense 2 | ReLU |
| Dense 4 | (None, 512) | Dense 3 | ReLU |
| Dense 5 | (None, z ¹) | Dense 4 | Softmax |
¹ z is set according to the number of output classes.
The training parameters of the network.
| Property | Value |
|---|---|
| Base model | InceptionResNetV2 |
| Additional layers | Global Average Pooling, 5 Dense Layers |
| Regularization | L2 |
| Optimizer | Adam |
| Loss | Categorical cross entropy |
| Max. # Epochs | 100 |
| Shuffle | True |
| Batch size | 64 |
| Environment | Windows 10, 2 parallel GPUs, TensorFlow |
| # Output classes | 2 (Pos-Neg) or 3 (Pos-Neu-Neg) |
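Under the settings in the two tables above, the network head and training configuration might be assembled roughly as follows. This is a hedged sketch, not the authors' code: the input shape, L2 strength and Adam hyperparameters are illustrative assumptions (pass `weights="imagenet"` to reproduce the pretrained setup; `None` skips the weight download):

```python
from tensorflow.keras import applications, layers, models, optimizers, regularizers

def build_model(z, input_shape=(128, 128, 3), weights=None):
    """InceptionResNetV2 base + Global Average Pooling + 5 dense layers."""
    base = applications.InceptionResNetV2(
        include_top=False, weights=weights, input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)        # -> (None, 1536)
    for units in (1024, 1024, 1024, 512):                   # Dense 1-4, ReLU
        x = layers.Dense(units, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-4))(x)
    out = layers.Dense(z, activation="softmax")(x)          # Dense 5: z classes
    model = models.Model(base.input, out)
    model.compile(optimizer=optimizers.Adam(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_model(z=3)   # 3 output classes: Pos-Neu-Neg
# Training would then follow the table: up to 100 epochs, batch size 64, shuffle.
# model.fit(x_train, y_train, epochs=100, batch_size=64, shuffle=True)
```

The windowed raw EEG segments are reshaped to fit the base model's expected input before being fed through this network.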
Figure 4. Filtering on output.
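The output filtering of Figure 4 applies a median filter along the sequence of per-window predictions, suppressing isolated false detections inside an otherwise stable emotion interval. A minimal sketch, with an assumed (not the paper's) filter width:

```python
import numpy as np

def median_filter_labels(labels, k=5):
    """Smooth a sequence of per-window predicted labels with a sliding
    median of odd width k; isolated spurious labels are voted out by
    their neighbors."""
    assert k % 2 == 1
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")   # repeat edge values
    return np.array([int(np.median(padded[i:i + k]))
                     for i in range(len(labels))])

# A single-window flip inside a stable interval is removed:
pred = np.array([0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1])
smooth = median_filter_labels(pred, k=5)
```

For two classes the median is simply a majority vote over the window; the filter width trades responsiveness against robustness to misclassified windows.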
Figure 5. Overall training and testing process of the EEG-based emotion recognition model.
Figure 6. An example of network training.
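The "one-subject-out" protocol used throughout the result tables trains on all subjects but one and tests on the held-out subject, repeating for every subject. A simple sketch of the split generation (variable names are illustrative):

```python
import numpy as np

def loso_splits(subject_ids):
    """Yield (subject, train_idx, test_idx) triples for one-subject-out
    evaluation: each subject's windows are held out once while windows
    from all remaining subjects form the training set."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.where(subject_ids == s)[0]
        train = np.where(subject_ids != s)[0]
        yield int(s), train, test

# Toy run: 3 subjects with 4 windows each.
ids = np.repeat([1, 2, 3], 4)
splits = [(s, len(tr), len(te)) for s, tr, te in loso_splits(ids)]
```

Because no windows of the test subject appear in training, the reported accuracies measure genuine subject independence.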
“One-subject-out” classification accuracies for the SEED dataset.
| Users | Accuracy (Pos–Neg ¹) | Accuracy (Pos–Neg ¹) | Accuracy, Filtered (Pos–Neg ¹) | Accuracy (Pos–Neu–Neg ²) | Accuracy (Pos–Neu–Neg ²) | Accuracy, Filtered (Pos–Neu–Neg ²) |
|---|---|---|---|---|---|---|
| User 1 | 85.7 | 74.2 | 88.5 | 73.3 | 55.2 | 78.2 |
| User 2 | 83.6 | 76.3 | 86.7 | 72.8 | 57.4 | 78.5 |
| User 3 | 69.2 | 56.7 | 74.3 | 61.6 | 53.9 | 67.7 |
| User 4 | 95.9 | 69.4 | 96.1 | 83.4 | 72.4 | 88.3 |
| User 5 | 78.4 | 70.1 | 83.2 | 74.1 | 54.3 | 76.5 |
| User 6 | 95.8 | 81.8 | 96.4 | 85.3 | 67.7 | 89.1 |
| User 7 | 72.9 | 56.2 | 77.7 | 64.4 | 53.5 | 70.3 |
| User 8 | 69.2 | 49.3 | 75.2 | 62.9 | 51.3 | 69.2 |
| User 9 | 88.6 | 61.5 | 90.5 | 79.2 | 64.8 | 82.7 |
| User 10 | 77.8 | 70.1 | 82.7 | 69.3 | 56.0 | 74.5 |
| User 11 | 78.6 | 65.7 | 83.1 | 73.0 | 59.9 | 78.1 |
| User 12 | 81.6 | 72.0 | 85.7 | 75.6 | 63.2 | 78.4 |
| User 13 | 91.2 | 80.2 | 94.2 | 81.3 | 73.1 | 84.9 |
| User 14 | 86.4 | 72.3 | 91.8 | 73.9 | 61.1 | 78.5 |
| User 15 | 89.2 | 73.9 | 92.3 | 75.4 | 60.3 | 80.3 |
¹ Pos–Neg: Positive–Negative; ² Pos–Neu–Neg: Positive–Neutral–Negative.
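The cross-subject figures quoted in the abstract are means of the per-user accuracies above; for instance, averaging the third accuracy column reproduces the 86.56% two-class result:

```python
import numpy as np

# Per-user values from the third accuracy column of the SEED table above.
acc = [88.5, 86.7, 74.3, 96.1, 83.2, 96.4, 77.7, 75.2,
       90.5, 82.7, 83.1, 85.7, 94.2, 91.8, 92.3]
mean_acc = round(float(np.mean(acc)), 2)   # mean cross-subject accuracy
```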
“One-subject-out” prediction accuracies of reference studies using the SEED dataset.
| Work | Accuracy (Pos–Neg) | Accuracy (Pos–Neu–Neg) |
|---|---|---|
| ST-SBSSVM | 89.0 | - |
| RGNN | - | 85.3 |
| Proposed | 86.5 | 78.3 |
| CNN-DDC | - | 82.1 |
| ASFM | - | 80.4 |
| SAAE | - | 77.8 |
| TCA | - | 71.6 |
| GFK | 67.5 | - |
| KPCA | 62.4 | - |
| MIDA | 72.4 | - |
“One-subject-out” prediction accuracies for the DEAP dataset using two classes (Pos–Neg).
| Users | Accuracy (Without Filtering) | Accuracy (With Filtering) |
|---|---|---|
| User 1 | 65.1 | 69.2 |
| User 2 | 71.2 | 73.4 |
| User 3 | 67.8 | 69.1 |
| User 4 | 61.7 | 65.3 |
| User 5 | 73.1 | 75.9 |
| User 6 | 82.5 | 85.4 |
| User 7 | 75.5 | 77.2 |
| User 8 | 67.6 | 71.3 |
| User 9 | 62.8 | 67.9 |
| User 10 | 61.9 | 66.6 |
| User 11 | 68.8 | 72.5 |
| User 12 | 64.3 | 69.8 |
| User 13 | 69.1 | 74.9 |
| User 14 | 64.3 | 68.8 |
| User 15 | 65.6 | 70.2 |
| User 16 | 68.7 | 72.1 |
| User 17 | 65.6 | 70.7 |
| User 18 | 75.8 | 78.3 |
| User 19 | 66.9 | 72.1 |
| User 20 | 70.4 | 73.2 |
| User 21 | 64.5 | 68.8 |
| User 22 | 61.6 | 68.3 |
| User 23 | 80.7 | 83.6 |
| User 24 | 62.5 | 69.4 |
| User 25 | 64.9 | 70.1 |
| User 26 | 69.7 | 72.9 |
| User 27 | 82.7 | 85.3 |
| User 28 | 68.9 | 73.8 |
| User 29 | 61.7 | 69.9 |
| User 30 | 72.9 | 77.7 |
| User 31 | 73.1 | 78.4 |
| User 32 | 63.6 | 68.1 |
One-subject-out accuracy comparison of several studies for the DEAP dataset (Pos-Neg).
| Work | Accuracy |
|---|---|
| FAWT | 79.9 |
| T-RFE | 78.7 |
| Proposed | 72.8 |
| ST-SBSSVM | 72.0 |
| VMD-DNN | 62.5 |
| MIDA | 48.9 |
| TCA | 47.2 |
| SA | 38.7 |
| ITL | 40.5 |
| GFK | 46.5 |
| KPCA | 39.8 |
One-subject-out prediction accuracies for the LUMED dataset.
| Users | Accuracy (Without Filtering) | Accuracy (With Filtering) |
|---|---|---|
| User 1 | 85.8 | 87.1 |
| User 2 | 56.3 | 62.7 |
| User 3 | 82.2 | 86.4 |
| User 4 | 73.8 | 78.5 |
| User 5 | 92.1 | 95.3 |
| User 6 | 67.8 | 74.1 |
| User 7 | 66.3 | 71.4 |
| User 8 | 89.7 | 93.5 |
| User 9 | 86.3 | 89.9 |
| User 10 | 89.1 | 93.4 |
| User 11 | 58.9 | 67.6 |
Cross-dataset prediction accuracy results (trained on SEED and tested on DEAP).
| Users | Accuracy (Without Filtering) | Accuracy (With Filtering) |
|---|---|---|
| User 1 | 50.5 | 54.9 |
| User 2 | 61.7 | 63.7 |
| User 3 | 43.3 | 47.3 |
| User 4 | 46.0 | 51.5 |
| User 5 | 68.9 | 71.9 |
| User 6 | 45.3 | 49.4 |
| User 7 | 73.4 | 77.2 |
| User 8 | 51.9 | 56.3 |
| User 9 | 62.3 | 67.9 |
| User 10 | 63.8 | 68.6 |
| User 11 | 48.6 | 53.6 |
| User 12 | 46.4 | 51.3 |
| User 13 | 50.1 | 57.1 |
| User 14 | 70.4 | 76.9 |
| User 15 | 58.8 | 62.8 |
| User 16 | 59.7 | 66.3 |
| User 17 | 46.6 | 53.1 |
| User 18 | 64.7 | 68.5 |
| User 19 | 47.9 | 53.3 |
| User 20 | 39.1 | 44.6 |
| User 21 | 62.1 | 68.8 |
| User 22 | 45.6 | 51.3 |
| User 23 | 61.4 | 69.9 |
| User 24 | 54.0 | 59.2 |
| User 25 | 50.8 | 56.3 |
| User 26 | 40.8 | 44.7 |
| User 27 | 39.2 | 45.3 |
| User 28 | 42.4 | 48.4 |
| User 29 | 46.2 | 50.3 |
| User 30 | 41.7 | 46.2 |
| User 31 | 61.4 | 65.7 |
| User 32 | 53.8 | 57.1 |
One-subject-out cross-dataset prediction accuracy and standard deviation comparison of several studies (trained on SEED and tested on DEAP).
| Work | Accuracy (Pos-Neg) | Standard Deviation |
|---|---|---|
| Proposed | 58.10 | 9.51 |
| MIDA | 47.1 | 10.60 |
| TCA | 42.6 | 14.69 |
| SA | 37.3 | 7.90 |
| ITL | 34.5 | 13.17 |
| GFK | 41.9 | 11.33 |
| KPCA | 35.6 | 6.97 |
Cross-dataset prediction accuracy results (trained on SEED or DEAP and tested on LUMED).
| Users | Trained on SEED: Accuracy (Without Filtering) | Trained on SEED: Accuracy (With Filtering) | Trained on DEAP: Accuracy (Without Filtering) | Trained on DEAP: Accuracy (With Filtering) |
|---|---|---|---|---|
| User 1 | 68.2 | 72.3 | 42.7 | 48.3 |
| User 2 | 54.5 | 61.7 | 50.4 | 56.8 |
| User 3 | 54.3 | 59.6 | 49.7 | 54.1 |
| User 4 | 59.6 | 64.1 | 51.3 | 57.9 |
| User 5 | 44.8 | 53.7 | 87.1 | 89.7 |
| User 6 | 67.1 | 73.5 | 52.6 | 58.4 |
| User 7 | 53.2 | 60.8 | 53.8 | 59.2 |
| User 8 | 64.5 | 71.2 | 46.0 | 49.3 |
| User 9 | 48.6 | 50.9 | 84.7 | 85.6 |
| User 10 | 64.9 | 76.3 | 51.5 | 58.8 |
| User 11 | 57.1 | 64.8 | 63.8 | 67.1 |