Ante Topic, Mladen Russo, Maja Stella, Matko Saric.
Abstract
An important step in the construction of a Brain-Computer Interface (BCI) device is the development of a model that can recognize emotions from electroencephalogram (EEG) signals. Research in this area is challenging because the EEG signal is non-stationary, non-linear, and contains a lot of noise due to artifacts caused by muscle activity and poor electrode contact. EEG signals are recorded with non-invasive wearable devices using a large number of electrodes, which increases the dimensionality and, thereby, the computational complexity of the EEG data; it also reduces the comfort of the subjects. This paper implements our holographic features, investigates electrode selection, and uses the most relevant channels to maximize model accuracy. The ReliefF and Neighborhood Component Analysis (NCA) methods were used to select the optimal electrodes. Verification was performed on four publicly available datasets. Our holographic feature maps were constructed using computer-generated holography (CGH) based on the values of signal characteristics displayed in space. The resulting 2D maps are the input to a Convolutional Neural Network (CNN), which serves as a feature extraction method. This methodology uses a reduced set of electrodes, which differ between men and women, and obtains state-of-the-art results in the three-dimensional emotional space. The experimental results show that the channel selection methods significantly improve emotion recognition rates, with an accuracy of 90.76% for valence, 92.92% for arousal, and 92.97% for dominance.
Keywords: Brain-Computer Interface; Neighborhood Component Analysis; ReliefF; computer-generated holography; deep learning; electroencephalogram; gender specific emotion recognition; valence-arousal-dominance model
Year: 2022 PMID: 35590938 PMCID: PMC9101362 DOI: 10.3390/s22093248
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Comparison between datasets.
| Dataset | DEAP | DREAMER | AMIGOS | SEED |
|---|---|---|---|---|
| Participants | 32 | 23 | 40 | 15 |
| Trials | 40 | 18 | 16 | 10 |
| Channels | 32 | 14 | 14 | 62 |
| Affective states | Valence, arousal, dominance | Valence, arousal, dominance | Valence, arousal, dominance | Valence |
| Rating scale range | 1–9 | 1–5 | 1–9 | N/A |
Figure 1. 3D emotional space: valence–arousal–dominance.
Figure 2. Decomposition of the EEG signal into sub-bands with the Discrete Wavelet Transform using the 'db5' mother wavelet.
Figure 3. The construction of holographic feature maps using Computer-Generated Holography.
Figure 4. Example of holographic feature maps for each dataset.
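The sub-band decomposition shown in Figure 2 can be sketched as follows. The paper uses a Discrete Wavelet Transform with the 'db5' mother wavelet; as a dependency-free stand-in, this sketch splits the signal into the classical EEG bands with an FFT mask (the band edges are the usual textbook values, an assumption, not taken from the paper):

```python
import numpy as np

# Classical EEG sub-bands in Hz (assumed edges). The paper obtains these
# components with a 'db5' DWT; an FFT band mask is substituted here.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def split_bands(signal, fs):
    """Return one time-domain component per sub-band by zeroing all
    FFT bins that fall outside the band's frequency range."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return {name: np.fft.irfft(spec * ((freqs >= lo) & (freqs < hi)),
                               n=len(signal))
            for name, (lo, hi) in BANDS.items()}
```

A quick check: a pure 10 Hz tone should land almost entirely in the alpha component.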
Percentage of use of each EEG channel.
| Percentage | Channels |
|---|---|
| >75% | F4, F3 |
| 60–75% | T7, FP1, FP2, T8, F7, F8 |
| 45–60% | O1, P7, P8, O2 |
| 30–45% | FC5, FC6, C4, C3, AF3, AF4 |
| <30% | P3, P4, Pz |
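A channel ranking like the one summarized above can be produced with ReliefF. The following minimal two-class NumPy sketch (the name `relieff_weights` and the L1 distance are illustrative choices, not the paper's implementation) weights each feature by how much it separates each sample's nearest same-class neighbour ("hit") from its nearest opposite-class neighbour ("miss"):

```python
import numpy as np

def relieff_weights(X, y):
    """Minimal two-class ReliefF sketch: for every sample, reward features
    that differ on the nearest miss and penalize features that differ on
    the nearest hit. X: (n_samples, n_features); y: binary labels."""
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span            # scale features to [0, 1]
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)  # L1 distance to every sample
        dist[i] = np.inf                       # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dist, np.inf))
        miss = np.argmin(np.where(y != y[i], dist, np.inf))
        w += np.abs(Xs[i] - Xs[miss]) - np.abs(Xs[i] - Xs[hit])
    return w / n
```

Channels would then be ranked by descending weight and the top 10 kept, mirroring the selection reported in the tables.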
Figure 5. Head map with the top 10 channels (highlighted in yellow) selected with (a) ReliefF and (b) NCA for each dataset.
Figure 6. Head map with the top 10 channels selected with ReliefF for (a) male (highlighted in blue) and (b) female (highlighted in red) participants for each dataset.
Figure 7. Flow chart of feature extraction, fusion, and classification.
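The selection-and-fusion step of the flow chart can be sketched as below. `select_and_fuse` is a hypothetical helper (the paper extracts features with a CNN before SVM classification); it only illustrates keeping the top-k ranked channels and concatenating their per-channel features into one descriptor per trial:

```python
import numpy as np

def select_and_fuse(features, weights, k=10):
    """Keep the k highest-weighted channels and concatenate their feature
    vectors into one fused descriptor per trial.
    features: (n_trials, n_channels, n_feats); weights: (n_channels,)."""
    top = np.sort(np.argsort(weights)[::-1][:k])   # indices of the k best channels
    return features[:, top, :].reshape(features.shape[0], -1), top
```

The fused matrix (n_trials, k * n_feats) would then be passed to the classifier.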
Accuracy and F1-score (%) for the proposed and baseline methods in the 3D emotional space, per dataset.
| Method | Affective state | Metric | DEAP | DREAMER | AMIGOS | SEED |
|---|---|---|---|---|---|---|
| R-HOLO-FM | Valence | Accuracy | 83.26 | — | — | 88.19 |
| | | F1-score | — | — | — | 88.51 |
| | Arousal | Accuracy | — | — | — | N/A |
| | | F1-score | — | — | — | N/A |
| | Dominance | Accuracy | — | — | — | N/A |
| | | F1-score | 86.56 | 87.83 | — | N/A |
| N-HOLO-FM | Valence | Accuracy | 81.88 | 86.12 | 88.53 | 88.31 |
| | | F1-score | 86.16 | 88.48 | 87.90 | — |
| | Arousal | Accuracy | 82.45 | 89.07 | 91.32 | N/A |
| | | F1-score | 85.76 | 88.58 | 87.58 | N/A |
| | Dominance | Accuracy | 88.35 | 89.82 | 86.10 | N/A |
| | | F1-score | — | 89.04 | — | N/A |
| Random | Valence | Accuracy | 51.33 | 48.79 | 49.57 | 43.33 |
| | | F1-score | 50.42 | 48.21 | 49.43 | 43.21 |
| | Arousal | Accuracy | 49.06 | 51.45 | 49.78 | N/A |
| | | F1-score | 48.13 | 49.21 | 46.96 | N/A |
| | Dominance | Accuracy | 50.70 | 51.21 | 57.33 | N/A |
| | | F1-score | 49.35 | 45.87 | 57.33 | N/A |
| Majority | Valence | Accuracy | 63.13 | 61.11 | 56.47 | 50.00 |
| | | F1-score | 38.70 | 37.93 | 36.09 | 33.33 |
| | Arousal | Accuracy | 63.75 | 72.43 | 65.95 | N/A |
| | | F1-score | 38.93 | 42.02 | 39.74 | N/A |
| | Dominance | Accuracy | 66.72 | 77.05 | 54.74 | N/A |
| | | F1-score | 40.02 | 43.52 | 35.38 | N/A |
| Class ratio | Valence | Accuracy | 45.94 | 48.79 | 51.72 | 50.67 |
| | | F1-score | 47.66 | 48.79 | 49.14 | 49.33 |
| | Arousal | Accuracy | 45.94 | 39.61 | 45.69 | N/A |
| | | F1-score | 42.03 | 40.10 | 43.97 | N/A |
| | Dominance | Accuracy | 42.03 | 35.75 | 51.72 | N/A |
| | | F1-score | 38.59 | 30.92 | 51.72 | N/A |
Comparison with other studies using a reduced channel set on the DEAP dataset.
| Study | Used Feature(s) | Classification Method(s) | Number of Channels | Best Accuracy (%) |
|---|---|---|---|---|
| Koelstra et al. [ ] | PSD | NB | 32 | V: 57.60 |
| Li et al. [ ] | Entropy and energy | kNN | 18 | V: 85.74 |
| Bazgir et al. [ ] | Entropy and energy | SVM | 10 | — |
| Mohammadi et al. [ ] | Entropy and energy | kNN | 10 | V: 86.75 |
| Özerdem et al. [ ] | Various time- and frequency-domain features | MLPNN | 5 | V: 77.14 |
| Wang et al. [ ] | Band energy | SVM | 8 for V | V: 74.41 |
| Msonda et al. [ ] | EMD IMFs | LSVC | 8 | V: 67.00 |
| Menon et al. [ ] | Various time- and frequency-domain features | HDC | Feature channel vector set | V: 76.70 |
| Gupta et al. [ ] | IP | RF | 6 | V: 79.99 |
| Mert et al. [ ] | Various time- and frequency-domain features | MEMD + ANN | 18 | V: 72.87 |
| Zhang et al. [ ] | Band power | PNN | 9 for V | V: 81.21 |
| Our method | R-HOLO-FM | CNN + SVM | 10 | V: 83.26 |
| Our method | N-HOLO-FM | CNN + SVM | 10 | V: 81.88 |
V: Valence, A: Arousal, D: Dominance, PSD: Power Spectral Density, NB: Naïve Bayes, kNN: k-Nearest Neighbor, SVM: Support Vector Machine, MLPNN: MultiLayer Perceptron Neural Network, EMD: Empirical Mode Decomposition, IMF: Intrinsic Mode Function, LSVC: Linear Support Vector Classifier, HDC: Hyper-Dimensional Computing, IP: Information potential, RF: Random Forest, MEMD: Multivariate Empirical Mode Decomposition, ANN: Artificial Neural Network, PNN: Probabilistic Neural Network, CNN: Convolutional Neural Network. * The average classification accuracy on Beta band. ** GSR, ECG and EEG signals were used.
Comparison with other studies using a reduced channel set on the DREAMER dataset.
| Study | Used Feature(s) | Classification Method(s) | Number of Channels | Best Accuracy (%) |
|---|---|---|---|---|
| Katsigiannis et al. [ ] | PSD | SVM | 14 | V: 62.49 |
| Msonda et al. [ ] | EMD IMF | LSVC | 8 | V: 80.00 |
| Our method | R-HOLO-FM | CNN + SVM | 10 | — |
| Our method | N-HOLO-FM | CNN + SVM | 10 | V: 86.12 |
V: Valence, A: Arousal, D: Dominance, PSD: Power Spectral Density, SVM: Support Vector Machine, EMD: Empirical Mode Decomposition, IMF: Intrinsic Mode Function, LSVC: Linear Support Vector Classifier, CNN: Convolutional Neural Network.
Comparison with other studies using a reduced channel set on the AMIGOS dataset.
| Study | Used Feature(s) | Classification Method(s) | Number of Channels | Best Accuracy (%) |
|---|---|---|---|---|
| Miranda et al. [ ] | PSD, SPA | SVM | 14 | V: 57.60 |
| Msonda et al. [ ] | EMD IMF | LR | 8 | V: 78.00 |
| Menon et al. [ ] | Various time- and frequency-domain features | HDC | Feature channel vector set | V: 87.10 |
| Our method | R-HOLO-FM | CNN + SVM | 10 | — |
| Our method | N-HOLO-FM | CNN + SVM | 10 | V: 88.53 |
V: Valence, A: Arousal, D: Dominance, PSD: Power Spectral Density, SPA: Spectral Power Asymmetry, SVM: Support Vector Machine, EMD: Empirical Mode Decomposition, IMF: Intrinsic Mode Function, LR: Linear Regression, HDC: Hyper-Dimensional Computing, CNN: Convolutional Neural Network. * GSR, ECG and EEG signals were used.
Comparison with other studies using a reduced channel set on the SEED dataset.
| Study | Used Feature(s) | Classification Method(s) | Number of Channels | Best Accuracy (%) |
|---|---|---|---|---|
| Zheng et al. [ ] | Feature map from DE | DBN + SVM | 12 | V: 86.65 |
| Gupta et al. [ ] | IP | RF | 12 | V: 90.48 |
| Pane et al. [ ] | DE | SDA + LDA | 15 | — |
| Cheah et al. [ ] | Extracted with VGG14 | VGG14 1D | 10 | V: 91.67 |
| Zheng [ ] | Raw EEG features | GSCCA | 12 | V: 83.72 |
| Our method | R-HOLO-FM | CNN + SVM | 10 | V: 88.19 |
| Our method | N-HOLO-FM | CNN + SVM | 10 | V: 88.31 |
V: Valence, A: Arousal, D: Dominance, DE: Differential Entropy, DBN: Deep Belief Networks, SVM: Support Vector Machine, IP: Information Potential, RF: Random Forest, SDA: Stepwise Discriminant Analysis, LDA: Linear Discriminant Analysis classifier, VGG: Visual Geometry Group, GSCCA: Group Sparse Canonical Correlation Analysis, CNN: Convolutional Neural Network.
Accuracy (%) for male subjects with the channels selected with ReliefF.
| Dataset | Valence | Arousal | Dominance |
|---|---|---|---|
| DEAP | 82.55 | 82.27 | 88.81 |
| DREAMER | 90.26 | 91.87 | 93.24 |
| AMIGOS | 83.63 | 87.84 | 90.40 |
| SEED | 82.07 | N/A | N/A |
Accuracy (%) for female subjects with the channels selected with ReliefF.
| Dataset | Valence | Arousal | Dominance |
|---|---|---|---|
| DEAP | 87.82 | 89.26 | 82.12 |
| DREAMER | 89.58 | 93.27 | 89.42 |
| AMIGOS | 88.92 | 92.08 | 85.44 |
| SEED | 87.70 | N/A | N/A |