Turker Tuncer, Sengul Dogan, Abdulhamit Subasi.
Abstract
Electroencephalography (EEG) signals collected from the human brain are commonly used to diagnose diseases. EEG signals can also be used in several other areas, such as emotion recognition and driving-fatigue detection. This work presents a new emotion recognition model that uses EEG signals. The primary aim of this model is to provide a highly accurate emotion recognition framework by combining hand-crafted feature generation with a deep classifier. The presented framework uses a multilevel fused feature generation network. This network has three primary phases: tunable Q-factor wavelet transform (TQWT), statistical feature generation, and nonlinear textural feature generation. TQWT is applied to the EEG data to decompose the signals into sub-bands and create a multilevel feature generation network. In the nonlinear feature generation phase, an S-box of the LED block cipher is utilized to create a pattern, named the Led-Pattern. Statistical features are extracted using the widely used statistical moments. The proposed Led-Pattern and statistical feature extraction functions are applied to 18 TQWT sub-bands and the original EEG signal; the resulting hand-crafted learning model is therefore named LEDPatNet19. To select the most informative features, the ReliefF and iterative Chi2 (RFIChi2) feature selector is deployed. The proposed model was developed on two EEG emotion datasets, GAMEEMO and DREAMER. Our hand-crafted learning network achieved 94.58%, 92.86%, and 94.44% classification accuracy for the arousal, dominance, and valence cases of the DREAMER dataset, respectively. Furthermore, the best classification accuracy of the proposed model on the GAMEEMO dataset is 99.29%. These results clearly illustrate the success of the proposed LEDPatNet19.
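The abstract does not list the exact statistical features, only that "widely used statistical moments" are extracted per band. As a minimal illustrative sketch (not the paper's exact feature set), the four classical moments of a 1-D signal can be computed in pure Python:

```python
import statistics

def statistical_moments(signal):
    """Illustrative statistical-moment features for one sub-band.

    The paper extracts statistical features from each TQWT sub-band and
    from the raw EEG signal; the exact feature list is not given in the
    abstract, so the four standard moments are used here as a sketch.
    """
    n = len(signal)
    mean = statistics.fmean(signal)
    std = statistics.pstdev(signal)
    # Standardized 3rd and 4th central moments (skewness, kurtosis).
    if std == 0:
        skew, kurt = 0.0, 0.0
    else:
        skew = sum((x - mean) ** 3 for x in signal) / (n * std ** 3)
        kurt = sum((x - mean) ** 4 for x in signal) / (n * std ** 4)
    return [mean, std, skew, kurt]
```

In the paper's pipeline, such a function would be applied to each of the 18 TQWT sub-bands and to the raw EEG signal, and the results concatenated.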
Keywords: Artificial intelligence; Emotion recognition; Led-pattern; Machine learning; RFIChi2; S-Box based feature generation; TQWT
Year: 2021 PMID: 35847545 PMCID: PMC9279545 DOI: 10.1007/s11571-021-09748-0
Source DB: PubMed Journal: Cogn Neurodyn ISSN: 1871-4080 Impact factor: 3.473
Fig. 1 S-box of the LED cipher
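The LED lightweight block cipher uses a 4-bit S-box (it reuses the S-box of the PRESENT cipher). The lookup table below follows the LED specification rather than the figure, which is not reproduced in this record:

```python
# 4-bit S-box of the LED block cipher (same S-box as PRESENT),
# per the LED specification; indices and outputs are nibbles 0..15.
LED_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def substitute_nibble(value):
    """Map a 4-bit value (0..15) through the LED S-box."""
    return LED_SBOX[value & 0xF]
```

Because the S-box is a bijection on 4-bit values, it redistributes local bit patterns nonlinearly, which is what the Led-Pattern exploits for textural feature generation.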
Fig. 2 The coding pattern of the Led-Pattern. Herein, v1, v2, …, v16 denote the values of the used overlapping block of length 16
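A textural extractor in the spirit of Fig. 2 can be sketched as follows. The binarization rule (thresholding each sample against the window mean) and the histogram layout are assumptions for illustration; the paper's exact Led-Pattern coding is defined in its Methods section:

```python
# LED S-box (shared with the PRESENT cipher), per the LED specification.
LED_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def led_pattern_features(signal):
    """Hypothetical sketch of an S-box-based textural pattern extractor.

    Slides an overlapping window of 16 samples over the signal (as in
    Fig. 2), binarizes each sample against the window mean (an assumed
    rule), packs the 16 bits into four nibbles, maps each nibble through
    the LED S-box, and returns a 16-bin histogram of the outputs.
    """
    hist = [0] * 16
    for i in range(len(signal) - 15):
        block = signal[i:i + 16]
        mean = sum(block) / 16
        bits = [1 if v >= mean else 0 for v in block]
        for j in range(0, 16, 4):  # four 4-bit groups per window
            nibble = (bits[j] << 3) | (bits[j + 1] << 2) | \
                     (bits[j + 2] << 1) | bits[j + 3]
            hist[LED_SBOX[nibble]] += 1
    return hist
```

The histogram then serves as a fixed-length textural feature vector per channel or sub-band, regardless of signal length.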
Fig. 3 Schematic explanation of the proposed LEDPatNet19: a graphical overview of the proposed model, b the proposed fused feature extractor
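The fused feature extractor of Fig. 3b can be sketched as a skeleton in which the decomposition (TQWT in the paper) and the two extractors are passed in as functions; the TQWT itself and the number of sub-bands are abstracted away here:

```python
def multilevel_features(signal, decompose, textural, statistical):
    """Skeleton of a multilevel fused feature extractor (cf. Fig. 3b).

    `decompose` stands in for TQWT (returning the sub-band signals),
    `textural` for a Led-Pattern-style extractor, and `statistical` for
    a moment-based extractor. Features from the raw signal and every
    sub-band are concatenated into one vector, as the abstract describes
    for the 18 TQWT sub-bands plus the original EEG signal.
    """
    features = []
    for band in [signal] + list(decompose(signal)):
        features.extend(textural(band))
        features.extend(statistical(band))
    return features
```

This separation keeps the decomposition, textural coding, and statistical feature choices independently swappable, which mirrors the three-phase structure named in the abstract.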
The obtained performance rates for GAMEEMO and the DREAMER arousal case
| Channel | GAMEEMO | | | | DREAMER/arousal | | | |
|---|---|---|---|---|---|---|---|---|
| | Acc | Rec | Pre | F1 | Acc | Rec | Pre | F1 |
| AF3 | 98.75 | 98.75 | 98.75 | 98.75 | 91.19 | 89.57 | 91.87 | 90.71 |
| AF4 | 98.57 | 98.57 | 98.58 | 98.58 | ||||
| F3 | 99.11 | 99.11 | 99.11 | 99.11 | 90.51 | 88.21 | 92.21 | 90.16 |
| F4 | 98.39 | 98.39 | 98.41 | 98.40 | 91.86 | 89.80 | 93.46 | 91.59 |
| F7 | 98.21 | 98.21 | 98.24 | 98.23 | 89.83 | 87.65 | 91.12 | 89.35 |
| F8 | 98.75 | 98.75 | 98.76 | 98.75 | 91.86 | 89.96 | 93.16 | 91.53 |
| FC5 | 98.57 | 98.57 | 98.59 | 98.58 | 88.14 | 86.27 | 88.57 | 87.41 |
| FC6 | | | | | 88.47 | 85.74 | 90.48 | 88.05 |
| O1 | 99.11 | 99.11 | 99.11 | 99.11 | 88.14 | 85.30 | 90.25 | 87.70 |
| O2 | 98.39 | 98.39 | 98.41 | 98.40 | 89.15 | 87.10 | 90.07 | 88.56 |
| P7 | 98.57 | 98.57 | 98.58 | 98.57 | 89.49 | 87.22 | 90.88 | 89.01 |
| P8 | 98.57 | 98.57 | 98.59 | 98.58 | 89.83 | 87.49 | 91.42 | 89.41 |
| T7 | 98.04 | 98.04 | 98.05 | 98.04 | 89.49 | 86.73 | 91.88 | 89.23 |
| T8 | 98.57 | 98.57 | 98.58 | 98.57 | 90.17 | 87.61 | 92.32 | 89.90 |
Acc: Accuracy, Rec: Recall, Pre: Precision, F1: F1-score
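The four reported column metrics can be computed from predictions as follows. Macro-averaging over classes is an assumption for this sketch, since the averaging scheme is not stated in this excerpt:

```python
def macro_metrics(y_true, y_pred):
    """Accuracy and macro-averaged recall, precision, and F1
    (the Acc, Rec, Pre, F1 columns of the tables above)."""
    labels = sorted(set(y_true) | set(y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    recs, pres = [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
        pres.append(tp / (tp + fp) if tp + fp else 0.0)
    rec = sum(recs) / len(labels)
    pre = sum(pres) / len(labels)
    # Harmonic mean of the macro precision and macro recall.
    f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
    return acc, rec, pre, f1
```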
Accuracy rates (%) for multiclass classification with Alakus et al.’s method and our presented Led-Pattern and RFIChi2 method
| Method | AF3 | AF4 | F3 | F4 | F7 | F8 | FC5 | FC6 | O1 | O2 | P7 | P8 | T7 | T8 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Alakus et al.’s method + kNN | 42 | 55 | 35 | 43 | 43 | 54 | 47 | 36 | 43 | 38 | 41 | 40 | 38 | 45 |
| Alakus et al.’s method + SVM | 54 | 50 | 40 | 54 | 70 | 69 | 34 | 34 | 55 | 54 | 66 | 70 | 47 | 79 |
| Alakus et al.’s method + MLPNN | 80 | 75 | 75 | 82 | 71 | 71 | 75 | 74 | 71 | 65 | 70 | 72 | 65 | 79 |
| LEDPatNet19 | 98.75 | 98.57 | 99.11 | 98.39 | 98.21 | 98.75 | 98.57 | | 99.11 | 98.39 | 98.57 | 98.57 | 98.04 | 98.57 |
The number of selected features for each channel using RFIChi2
| Channel | DREAMER | | | GAMEEMO |
|---|---|---|---|---|
| | Arousal | Dominance | Valence | |
| AF3 | 879 | 731 | 806 | 974 |
| AF4 | 451 | 602 | 745 | 890 |
| F3 | 988 | 422 | 890 | 780 |
| F4 | 566 | 351 | 631 | 969 |
| F7 | 580 | 448 | 842 | 937 |
| F8 | 707 | 805 | 848 | 898 |
| FC5 | 402 | 546 | 547 | 856 |
| FC6 | 536 | 616 | 702 | 862 |
| O1 | 548 | 463 | 834 | 932 |
| O2 | 710 | 313 | 960 | 835 |
| P7 | 684 | 373 | 648 | 931 |
| P8 | 717 | 407 | 991 | 824 |
| T7 | 748 | 242 | 976 | 677 |
| T8 | 710 | 882 | 625 | 989 |
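The per-channel feature counts above come out of the RFIChi2 selector. Its iterative part can be illustrated with a simplified sketch: rank features by a relevance score, then keep the best-scoring top-k prefix under some loss. The actual RFIChi2 combines ReliefF and Chi2 ranking as described in the paper; `evaluate` below is a hypothetical stand-in for the classifier error used during selection:

```python
def iterative_selection(scores, evaluate, k_min=1):
    """Simplified sketch of iterative top-k feature selection.

    `scores` holds a relevance score per feature (e.g. from ReliefF or
    Chi2), and `evaluate(subset)` returns a loss for a candidate index
    subset (in the paper, a classifier's error rate). Every top-k prefix
    of the score-ranked features is tried, and the lowest-loss prefix is
    returned -- which is why each channel ends up with a different
    number of selected features.
    """
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    best_k, best_loss = None, float("inf")
    for k in range(k_min, len(ranked) + 1):
        loss = evaluate(ranked[:k])
        if loss < best_loss:
            best_k, best_loss = k, loss
    return ranked[:best_k]
```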
The obtained performance rates for the DREAMER dominance and valence cases
| Channel | DREAMER/dominance | | | | DREAMER/valence | | | |
|---|---|---|---|---|---|---|---|---|
| | Acc | Rec | Pre | F1 | Acc | Rec | Pre | F1 |
| AF3 | 89.46 | 84.78 | 91.14 | 87.85 | 91.98 | 91.97 | 91.98 | 91.98 |
| AF4 | | | | | 92.90 | 92.90 | 92.90 | 92.90 |
| F3 | 89.80 | 85.31 | 91.38 | 88.24 | 91.67 | 91.66 | 91.68 | 91.67 |
| F4 | 89.12 | 84.53 | 90.47 | 87.40 | 90.12 | 90.12 | 90.12 | 90.12 |
| F7 | 87.76 | 81.88 | 90.49 | 85.97 | ||||
| F8 | 92.52 | 88.97 | 94.09 | 91.46 | 93.83 | 93.83 | 93.83 | 93.83 |
| FC5 | 87.41 | 81.90 | 89.24 | 85.42 | 92.28 | 92.27 | 92.33 | 92.30 |
| FC6 | 87.07 | 81.93 | 88.12 | 84.91 | 91.98 | 91.97 | 92.01 | 91.99 |
| O1 | 85.71 | 80.65 | 85.91 | 83.19 | 87.35 | 87.35 | 87.35 | 87.35 |
| O2 | 89.46 | 85.06 | 90.71 | 87.80 | 92.59 | 92.60 | 92.61 | 92.61 |
| P7 | 89.80 | 85.86 | 90.57 | 88.15 | 91.05 | 91.05 | 91.05 | 91.05 |
| P8 | 90.82 | 86.89 | 92.09 | 89.42 | 90.74 | 90.73 | 90.77 | 90.75 |
| T7 | 88.78 | 84.56 | 89.43 | 86.93 | 92.90 | 92.89 | 92.95 | 92.92 |
| T8 | 90.82 | 86.34 | 92.98 | 89.54 | 91.98 | 91.96 | 92.15 | 92.05 |
Fig. 4 Channel-wise classification accuracies of the proposed LEDPatNet19 for each dataset
Comparative results for the DREAMER dataset
| Study | Method | Accuracy (%) | | |
|---|---|---|---|---|
| | | Arousal | Dominance | Valence |
| Cheng et al. | Deep neural networks | 90.41 | 89.89 | 89.03 |
| Bhattacharyya et al. | Fourier–Bessel series expansion based empirical wavelet transform | 85.40 | 84.50 | 86.20 |
| Li et al. | 3-D feature representation and dilated fully convolutional networks | 79.91 | 80.23 | 81.30 |
| Liu et al. | Deep canonical correlation analysis | 89.00 | 90.70 | 90.60 |
| Wang et al. | Frame-level distilling neural network | 87.67 | 90.28 | 89.91 |
| Wang et al. | Domain adaptation symmetric and positive definite matrix network | 76.57 | 81.77 | 67.99 |
| Zhang et al. | Generative adversarial networks | 94.21 | – | 93.52 |
| Galvão et al. | Wavelet energy and entropy | 93.79 | – | 93.65 |
| Our method | LEDPatNet19 | 94.58 | 92.86 | 94.44 |