Cross-Subject EEG Emotion Recognition With Self-Organized Graph Neural Network
Jingcong Li, Shuqi Li, Jiahui Pan, Fei Wang.
Abstract
As a physiological process and a high-level cognitive behavior, emotion is an important subarea of neuroscience research. Cross-subject emotion recognition based on brain signals has attracted much attention. Due to individual differences across subjects and the low signal-to-noise ratio of EEG signals, the performance of conventional emotion recognition methods is relatively poor. In this paper, we propose a self-organized graph neural network (SOGNN) for cross-subject EEG emotion recognition. Unlike previous studies based on pre-constructed, fixed graph structures, the graph structure of the SOGNN is dynamically constructed by a self-organized module for each input signal. To evaluate the cross-subject EEG emotion recognition performance of our model, leave-one-subject-out experiments were conducted on two public emotion recognition datasets, SEED and SEED-IV. The SOGNN achieves state-of-the-art emotion recognition performance. Moreover, we investigated how performance varies with different graph construction techniques and with features from different frequency bands. Furthermore, we visualized the graph structure learned by the proposed model and found that part of the structure coincided with previous neuroscience findings. The experiments demonstrate the effectiveness of the proposed model for cross-subject EEG emotion recognition.
Keywords: SEED dataset; cross-subject; emotion recognition; graph construction; graph neural network
Year: 2021 PMID: 34177441 PMCID: PMC8221183 DOI: 10.3389/fnins.2021.611653
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. General brain graph construction function.
Figure 2. Self-organized graph construction module.
Figure 3. Self-organized graph neural network for EEG emotion prediction.
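Figures 1-3 outline the self-organized graph construction at the core of the model. The paper's exact formulation is not reproduced in this record, so the following is a minimal PyTorch sketch of the general idea only: pairwise affinity scores are computed from the input features themselves (here via learned query/key projections, an assumption on our part) and sparsified with a top-k mask, so every sample gets its own adjacency matrix instead of a pre-constructed, fixed graph. All names (SelfOrganizedGraph, SimpleGraphLayer, top_k) are illustrative, not the paper's API.

```python
# Minimal sketch of per-sample ("self-organized") graph construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfOrganizedGraph(nn.Module):
    """Builds an adjacency matrix from the input features themselves."""

    def __init__(self, in_dim: int, hidden_dim: int = 32, top_k: int = 10):
        super().__init__()
        self.query = nn.Linear(in_dim, hidden_dim)  # learned node projections
        self.key = nn.Linear(in_dim, hidden_dim)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, in_dim) -- one node per EEG electrode
        q, k = self.query(x), self.key(x)
        scores = torch.bmm(q, k.transpose(1, 2))    # pairwise affinities
        adj = F.softmax(scores, dim=-1)             # row-normalized dense graph
        # Keep only the top-k strongest connections per node (ties may keep extras).
        topk_vals, _ = adj.topk(self.top_k, dim=-1)
        mask = adj >= topk_vals[..., -1:]           # threshold at the k-th value
        return adj * mask

class SimpleGraphLayer(nn.Module):
    """One graph-convolution step: aggregate neighbors, then transform."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return F.relu(self.lin(torch.bmm(adj, x)))  # A @ X @ W

# Toy usage: 62 electrodes (SEED montage), 5 band features per electrode.
x = torch.randn(8, 62, 5)
graph = SelfOrganizedGraph(in_dim=5, top_k=10)
adj = graph(x)                                      # (8, 62, 62), one graph per sample
out = SimpleGraphLayer(5, 16)(x, adj)               # (8, 62, 16)
```

Because the graph is recomputed from each input, no fixed electrode connectivity prior is required; the top_k value controls graph sparsity, the quantity varied in Figure 5 below.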
Leave-one-subject-out emotion recognition accuracy (mean/standard deviation, %) on SEED and SEED-IV. Columns Delta through Gamma give SEED accuracy using features from a single frequency band.
| Method | Delta | Theta | Alpha | Beta | Gamma | SEED (all bands) | SEED-IV |
| SVM (Zhong et al.) | 43.06/8.27 | 40.07/6.50 | 43.97/10.89 | 48.64/10.29 | 51.59/11.83 | 56.73/16.29 | 37.99/12.52 |
| TCA (Pan et al.) | 44.10/8.22 | 41.26/9.21 | 42.93/14.33 | 43.93/10.06 | 48.43/9.73 | 63.64/14.88 | 56.56/13.77 |
| SA (Fernando et al.) | 54.23/7.47 | 50.60/8.31 | 55.06/10.60 | 56.72/10.78 | 64.47/14.96 | 69.00/10.89 | 64.44/9.46 |
| T-SVM (Collobert et al.) | - | - | - | - | - | 72.53/14.00 | - |
| TPT (Sangineto et al.) | - | - | - | - | - | 76.31/15.89 | - |
| DGCNN (Song et al.) | 49.79/10.94 | 46.36/12.06 | 48.29/12.28 | 56.15/14.01 | 54.87/17.53 | 79.95/9.02 | 52.82/9.23 |
| A-LSTM (Song et al.) | - | - | - | - | - | - | 55.03/9.28 |
| DAN (Li et al.) | - | - | - | - | - | 83.81/8.56 | 58.87/8.13 |
| BiDANN-S (Li et al.) | 63.01/7.49 | 63.22/7.52 | 63.50/9.50 | 73.59/9.12 | 73.72/8.67 | 84.14/6.87 | 65.59/10.39 |
| BiHDM (Li et al.) | - | - | - | - | - | 85.40/7.53 | 69.03/8.66 |
| RGNN (Zhong et al.) | 64.88/6.87 | 60.69/5.79 | 60.84/7.57 | | | 85.30/6.72 | 73.84/8.02 |
| SOGNN (Ours) | 72.54/8.97 | 71.70/8.03 | | | | | |
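The table above follows the leave-one-subject-out (LOSO) protocol: each fold holds out all trials of one subject for testing and trains on the remaining subjects, so the reported means and standard deviations are taken across subjects. Below is a minimal sketch of this split using scikit-learn's LeaveOneGroupOut; the load_features helper, the data shapes, and the linear-SVM baseline are placeholders, not the paper's pipeline.

```python
# Minimal leave-one-subject-out (LOSO) evaluation sketch.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def load_features():
    # Placeholder: per-trial feature vectors, labels, and subject IDs.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 310))          # e.g., 62 channels x 5 bands of DE
    y = rng.integers(0, 3, size=150)         # 3 emotion classes, as in SEED
    subjects = np.repeat(np.arange(15), 10)  # 15 subjects, 10 trials each
    return X, y, subjects

X, y, subjects = load_features()
accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = SVC(kernel="linear").fit(X[train_idx], y[train_idx])  # train on 14 subjects
    accs.append(clf.score(X[test_idx], y[test_idx]))            # test on the held-out one

print(f"LOSO accuracy: {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```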
Figure 4. Emotion recognition performance of SOGNN with DE, PSD, ASM, DASM, DCAU, and RASM features.
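Of the features compared in Figure 4, differential entropy (DE) is the most widely used with SEED. For a band-filtered signal modeled as Gaussian with variance sigma^2, DE has the closed form 0.5 * log(2 * pi * e * sigma^2), which the hedged sketch below computes per channel and band; the band boundaries and the 200 Hz sampling rate are conventional assumptions, not taken from this record, and de_features is an illustrative helper.

```python
# Sketch: differential-entropy (DE) features from raw EEG, per channel and band.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg: np.ndarray, fs: float = 200.0) -> np.ndarray:
    """eeg: (n_channels, n_samples) -> (n_channels, n_bands) DE features."""
    feats = []
    for lo, hi in BANDS.values():
        # 4th-order Butterworth band-pass, cutoffs normalized to Nyquist.
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, eeg, axis=-1)
        # Gaussian closed form: DE = 0.5 * log(2 * pi * e * variance).
        feats.append(0.5 * np.log(2 * np.pi * np.e * band.var(axis=-1)))
    return np.stack(feats, axis=-1)

print(de_features(np.random.randn(62, 800)).shape)  # (62, 5)
```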
Figure 5. Performance changes of SOGNN as the k of the top-k sparse graph varies.
Figure 6. Emotion recognition performance based on different graphs. Wilcoxon signed-rank test: ~ not significant, †p < 0.05, ††p < 0.01.
Figure 7. Adjacency matrices (A) and topographic maps (B) learned by SOGNN.