Shuang Liang, Mingbo Yin, Yecheng Huang, Xiubin Dai, Qiong Wang.
Abstract
Electroencephalography (EEG) based emotion recognition enables machines to perceive users' affective states and has attracted increasing attention. However, most current emotion recognition methods neglect the structural information among different brain regions, which can lead to incorrect learning of high-level EEG feature representations. To mitigate the resulting performance degradation, we propose a novel nuclear norm regularized deep neural network framework (NRDNN) that captures the structural information among different brain regions in EEG decoding. NRDNN first uses deep neural networks to learn a high-level feature representation for each brain region. Then, a region-attention layer automatically learns a set of weights indicating the contribution of each brain region. Subsequently, the weighted feature representations of the brain regions are stacked into a feature matrix, and nuclear norm regularization is adopted to learn the structural information within that matrix. In this way, NRDNN learns high-level representations of EEG signals within multiple brain regions, automatically adjusts their contributions through the learned weights, and captures the structural information among the regions during training. Moreover, NRDNN operates in an efficient end-to-end manner. We conducted extensive experiments on a publicly available emotion EEG dataset to evaluate the effectiveness of NRDNN. The experimental results demonstrate that NRDNN achieves state-of-the-art performance by leveraging the structural information.
Keywords: affective brain-computer interface (aBCI); electroencephalography (EEG); emotion recognition; nuclear norm regularization; structural information
Year: 2022 PMID: 35846606 PMCID: PMC9278805 DOI: 10.3389/fpsyg.2022.924793
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Figure 1. A general EEG-based aBCI.
Figure 2. Framework of the proposed NRDNN for EEG-based emotion recognition. NRDNN first utilizes deep neural networks to learn high-level feature representations of multiple brain regions. Then, a region-attention layer automatically learns a set of weights indicating the contribution of each brain region. Subsequently, the weighted feature representations of the brain regions are stacked into a feature matrix, and nuclear norm regularization is adopted to capture the structural information within the feature matrix.
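Read as a training objective, the pipeline above amounts to a classification loss plus a nuclear-norm term on the per-sample matrix of stacked region features. The sketch below is a minimal PyTorch illustration of that reading, not the authors' released code; the penalty weight `lam` and the additive-penalty form are assumptions.

```python
import torch
import torch.nn.functional as F

def nrdnn_loss(logits, labels, region_feats, lam=0.01):
    """Cross-entropy plus a nuclear-norm term on the stacked region-feature
    matrix. `lam` and the additive-penalty form are assumptions; the record
    only states that nuclear norm regularization captures the structural
    information within the feature matrix."""
    ce = F.cross_entropy(logits, labels)
    # region_feats: (batch, n_regions, feat_dim); the nuclear norm is the
    # sum of singular values of each sample's region-by-feature matrix.
    nuc = torch.linalg.matrix_norm(region_feats, ord='nuc').mean()
    return ce + lam * nuc
```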
Network architecture of AConvNet.

| Stage | Layer | Type | Kernel size, #filters / units | Output shape |
|---|---|---|---|---|
| Input | | | | … |
| Feature extractor | Reshape | / | / | 1 × … |
| | Convolution | Conv2D | 1 × 25, 40 | 40 × … |
| | Convolution | Conv2D | … | 40 × 1 × (…) |
| | Normalization | BatchNorm | / | 40 × 1 × (…) |
| | Activation | Square | / | 40 × 1 × (…) |
| | Pooling | AveragePool | 1 × 75, 15 | 40 × 1 × [(…)] |
| | Activation | Log | / | 40 × 1 × [(…)] |
| | Flatten | / | / | 40[(…)] |
| Classification | Fully connected | Dense | 40[(…)] → 300 | 300 |
| | Fully connected | Dense | 300 → … | … |
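Interpreting the table as a shallow temporal-spatial CNN (temporal convolution, spatial convolution over electrodes, squaring, average pooling, log), a hedged PyTorch sketch follows. Cells that the table leaves truncated are filled with assumptions: `n_channels=32` matches the 32 electrodes listed in the region table below, while `n_samples` and `n_classes` are placeholders.

```python
import torch
import torch.nn as nn

class AConvNet(nn.Module):
    """Sketch of the AConvNet backbone described in the table above.
    n_channels, n_samples, n_classes, and the spatial kernel spanning
    the electrode axis are assumptions filling the truncated cells."""
    def __init__(self, n_channels=32, n_samples=384, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 25))          # 1 x 25, 40 filters
        self.spatial = nn.Conv2d(40, 40, kernel_size=(n_channels, 1))  # collapse electrode axis
        self.bn = nn.BatchNorm2d(40)
        self.pool = nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15))  # 1 x 75, stride 15
        t_out = (n_samples - 25 + 1 - 75) // 15 + 1                    # pooled time length
        self.fc1 = nn.Linear(40 * t_out, 300)
        self.fc2 = nn.Linear(300, n_classes)

    def forward(self, x):                         # x: (batch, n_channels, n_samples)
        x = x.unsqueeze(1)                        # reshape -> (batch, 1, C, T)
        x = self.spatial(self.temporal(x))
        x = self.bn(x)
        x = x ** 2                                # Square activation
        x = self.pool(x)
        x = torch.log(torch.clamp(x, min=1e-6))   # Log activation (clamped for stability)
        x = x.flatten(1)
        return self.fc2(self.fc1(x))
```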
Figure 3. Schema of the region-attention network.
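A minimal sketch of the region-attention idea in Figures 2 and 3: score each region's feature vector, normalize the scores with a softmax, and reweight the regions. The single-linear-layer scoring function is an assumption; the record does not specify the attention parameterization.

```python
import torch
import torch.nn as nn

class RegionAttention(nn.Module):
    """Scores each region's feature vector and softmax-normalizes the
    scores into contribution weights. The linear scorer is an assumption."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, region_feats):
        # region_feats: (batch, n_regions, feat_dim)
        scores = self.score(region_feats).squeeze(-1)    # (batch, n_regions)
        weights = torch.softmax(scores, dim=-1)          # contributions sum to 1
        weighted = region_feats * weights.unsqueeze(-1)  # reweight each region
        return weighted, weights
```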
The EEG electrodes associated with each brain region.

| Brain region | Electrodes |
|---|---|
| Frontal | Fp1, Fp2, AF3, AF4, F7, F3, Fz, F4, F8 |
| Temporal | T7, T8 |
| Central | FC5, FC1, FC2, FC6, C3, Cz, C4 |
| Parietal | CP1, CP2, CP5, CP6, P7, P3, Pz, P4, P8, PO3, PO4 |
| Occipital | O1, Oz, O2 |
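For illustration, the grouping above can be expressed as a channel-index lookup. `channel_order` is a hypothetical montage list and must match the actual recording order.

```python
# Region-to-electrode mapping taken from the table above.
REGIONS = {
    "frontal":   ["Fp1", "Fp2", "AF3", "AF4", "F7", "F3", "Fz", "F4", "F8"],
    "temporal":  ["T7", "T8"],
    "central":   ["FC5", "FC1", "FC2", "FC6", "C3", "Cz", "C4"],
    "parietal":  ["CP1", "CP2", "CP5", "CP6", "P7", "P3", "Pz", "P4", "P8", "PO3", "PO4"],
    "occipital": ["O1", "Oz", "O2"],
}

def split_by_region(eeg, channel_order):
    """eeg: (batch, n_channels, n_samples) tensor; channel_order is a
    hypothetical list of electrode names in recording order.
    Returns one tensor per region, sliced along the channel axis."""
    index = {name: [channel_order.index(ch) for ch in chans]
             for name, chans in REGIONS.items()}
    return {name: eeg[:, idx, :] for name, idx in index.items()}
```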
Classification performances (ACC) of our NRDNN against the comparison methods.

| Subject | … | … | … | … | … | … | NRDNN |
|---|---|---|---|---|---|---|---|
| S1 | 0.4750 | 0.5250 | 0.6250 | 0.6250 | 0.7250 | 0.7280 | … |
| S2 | 0.5500 | 0.5750 | 0.6000 | 0.6250 | 0.6500 | 0.7000 | 0.7000 |
| S3 | 0.5750 | 0.6000 | 0.5500 | 0.6000 | 0.6250 | 0.6375 | … |
| S4 | 0.5500 | 0.4750 | 0.6750 | 0.6750 | 0.7000 | 0.7050 | 0.7250 |
| S5 | 0.5500 | 0.7000 | 0.6500 | 0.7000 | 0.7250 | 0.7250 | … |
| S6 | 0.5500 | 0.7500 | 0.7000 | 0.7250 | 0.7000 | 0.7500 | … |
| S7 | 0.5750 | 0.7000 | 0.6250 | 0.7000 | 0.7250 | 0.7250 | … |
| S8 | 0.5250 | 0.6250 | 0.6000 | 0.6250 | 0.6250 | 0.6000 | 0.6500 |
| S9 | 0.5250 | 0.6000 | 0.5750 | 0.6500 | 0.6750 | 0.6500 | … |
| S10 | 0.5250 | 0.6250 | 0.6250 | 0.6250 | 0.6500 | 0.6750 | 0.7000 |
| S11 | 0.4000 | 0.6500 | 0.6500 | 0.6750 | 0.6500 | 0.6750 | 0.7500 |
| S12 | 0.4750 | 0.5500 | 0.6250 | 0.7000 | 0.7000 | 0.7000 | 0.7000 |
| S13 | 0.5500 | 0.5500 | 0.6500 | 0.6500 | 0.7000 | 0.7000 | 0.7250 |
| S14 | 0.5750 | 0.6000 | 0.5750 | 0.5750 | 0.6000 | 0.6250 | 0.6500 |
| S15 | 0.5000 | 0.5250 | 0.6250 | 0.6500 | 0.6750 | 0.6750 | … |
| S16 | 0.5250 | 0.4000 | 0.6000 | 0.6000 | 0.6250 | 0.6250 | … |
| Avg. | 0.5375 | 0.5859 | 0.6234 | 0.6453 | 0.6703 | 0.6825 | 0.7109 |

The best classification results are boldfaced.
Classification performances (AUC) of our NRDNN against the comparison methods.

| Subject | … | … | … | … | … | … | NRDNN |
|---|---|---|---|---|---|---|---|
| S1 | 0.3885 | 0.3985 | 0.4862 | 0.5238 | 0.6241 | 0.6479 | 0.6717 |
| S2 | 0.2146 | 0.2652 | 0.3409 | 0.5126 | 0.5000 | 0.5379 | 0.5732 |
| S3 | 0.4520 | 0.3409 | 0.3972 | 0.4596 | 0.4975 | 0.4950 | 0.5177 |
| S4 | 0.4167 | 0.2839 | 0.5208 | 0.5651 | 0.5339 | 0.5776 | 0.6042 |
| S5 | 0.4714 | 0.5417 | 0.5859 | 0.6120 | 0.6380 | 0.6484 | 0.6250 |
| S6 | 0.3400 | 0.2667 | 0.4233 | 0.4533 | 0.5300 | 0.5400 | 0.5467 |
| S7 | 0.4524 | 0.3512 | 0.6012 | 0.4970 | 0.6250 | 0.6190 | … |
| S8 | 0.4116 | 0.3965 | 0.4722 | 0.4722 | 0.4874 | 0.4091 | 0.4899 |
| S9 | 0.5600 | 0.3425 | 0.4725 | 0.4275 | 0.4950 | 0.5025 | 0.5500 |
| S10 | 0.3500 | 0.3775 | 0.4650 | 0.4675 | 0.5000 | 0.5225 | 0.5800 |
| S11 | 0.1563 | 0.2630 | 0.4661 | 0.4479 | 0.4583 | 0.4974 | 0.5677 |
| S12 | 0.3058 | 0.2281 | 0.4236 | 0.5639 | 0.5564 | 0.5539 | 0.5664 |
| S13 | 0.2864 | 0.4271 | 0.4348 | 0.5703 | 0.5217 | 0.5703 | 0.5985 |
| S14 | 0.3500 | 0.3600 | 0.3900 | 0.4225 | 0.4375 | 0.4175 | 0.5300 |
| S15 | 0.2725 | 0.1350 | 0.4200 | 0.4600 | 0.4550 | 0.4700 | 0.5800 |
| S16 | 0.4747 | 0.4027 | 0.5120 | 0.5387 | 0.3947 | 0.4400 | 0.6480 |
| Avg. | 0.3689 | 0.3363 | 0.4632 | 0.4996 | 0.5183 | 0.5287 | 0.5813 |

The best classification results are boldfaced.
Statistical significance comparisons (p-values) of NRDNN against the comparison methods, in terms of ACC, F1, and AUC.

| Metric | … | … | … | … | … | … | … |
|---|---|---|---|---|---|---|---|
| ACC | 1.71E-07 | 3.57E-06 | 8.18E-13 | 1.09E-08 | 5.63E-07 | 5.75E-06 | 1.50E-04 |
| F1 | 2.84E-07 | 1.06E-08 | 1.65E-11 | 2.305E-08 | 3.11E-05 | 8.00E-05 | 2.20E-05 |
| AUC | 4.59E-07 | 1.93E-09 | 5.75E-08 | 3.97E-08 | 1.16E-05 | 1.31E-05 | 4.14E-04 |
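The record does not name the paired test behind these p-values; the sketch below assumes a paired t-test (`scipy.stats.ttest_rel`) over the 16 per-subject scores. A Wilcoxon signed-rank test would be a common alternative.

```python
from scipy import stats

def paired_p_value(nrdnn_scores, baseline_scores):
    """nrdnn_scores, baseline_scores: per-subject metrics (e.g., the 16
    per-subject ACC values in the tables above), paired by subject.
    Returns the two-sided p-value of a paired t-test."""
    _, p = stats.ttest_rel(nrdnn_scores, baseline_scores)
    return p
```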
Figure 4. Classification results of both EEGNet and the proposed NRDNN framework using EEGNet as its network backbone. (A) Classification performance (ACC); (B) classification performance (F1).
Classification performances (F1) of our NRDNN against the comparison methods.

| Subject | … | … | … | … | … | … | NRDNN |
|---|---|---|---|---|---|---|---|
| S1 | 0.4747 | 0.4861 | 0.5807 | 0.6248 | 0.7206 | 0.7255 | 0.7469 |
| S2 | 0.4000 | 0.4133 | 0.5657 | 0.6389 | 0.6616 | 0.7222 | 0.7020 |
| S3 | 0.5726 | 0.4984 | 0.5343 | 0.5990 | 0.6248 | 0.6375 | … |
| S4 | 0.5396 | 0.4130 | 0.6484 | 0.6698 | 0.6703 | 0.6918 | 0.7163 |
| S5 | 0.5500 | 0.6581 | 0.6465 | 0.6931 | 0.7163 | 0.7206 | 0.7382 |
| S6 | 0.4872 | 0.5098 | 0.6000 | 0.6204 | 0.6429 | 0.6667 | 0.6257 |
| S7 | 0.5402 | 0.4805 | 0.6190 | 0.6238 | 0.6925 | 0.6865 | 0.7024 |
| S8 | 0.5223 | 0.5636 | 0.6000 | 0.6132 | 0.6190 | 0.5733 | 0.6267 |
| S9 | 0.6970 | 0.4473 | 0.5908 | 0.5248 | 0.6354 | 0.6698 | 0.6419 |
| S10 | 0.5247 | 0.5807 | 0.6229 | 0.6229 | 0.6491 | 0.6647 | 0.6970 |
| S11 | 0.3600 | 0.5333 | 0.6491 | 0.6577 | 0.6419 | 0.6698 | 0.7396 |
| S12 | 0.4667 | 0.4357 | 0.6132 | 0.6992 | 0.7000 | 0.6992 | 0.7000 |
| S13 | 0.4643 | 0.5396 | 0.6011 | 0.6491 | 0.6703 | 0.6970 | 0.7163 |
| S14 | 0.5248 | 0.5442 | 0.5616 | 0.5726 | 0.5908 | 0.5943 | 0.6465 |
| S15 | 0.4885 | 0.3866 | 0.6229 | 0.6491 | 0.6577 | 0.6647 | 0.7494 |
| S16 | 0.5175 | 0.3750 | 0.5833 | 0.5833 | 0.3846 | 0.4398 | … |
| Avg. | 0.5081 | 0.4916 | 0.6025 | 0.6276 | 0.6430 | 0.6581 | 0.6905 |

The best classification results are boldfaced.