Jiangsheng Cao, Xueqin He, Chenhui Yang, Sifang Chen, Zhangyu Li, Zhanxiang Wang.
Abstract
Due to the non-invasiveness and high precision of electroencephalography (EEG), the combination of EEG and artificial intelligence (AI) is often used for emotion recognition. However, internal differences in EEG data have become an obstacle to classification accuracy. When labeled data of a similar nature but from different domains are available, domain adaptation usually provides an attractive way to address this problem. Most existing studies aggregate the EEG data from different subjects and sessions into a single source domain, which ignores the assumption that a source domain should follow a single marginal distribution. Moreover, existing methods often align only the representation distributions extracted from a single structure, which may capture only partial information. We therefore propose multi-source and multi-representation adaptation (MSMRA) for cross-domain EEG emotion recognition, which divides the EEG data from different subjects and sessions into multiple domains and aligns the distributions of multiple representations extracted from a hybrid structure. Two datasets, SEED and SEED IV, are used to validate the proposed method in cross-session and cross-subject transfer scenarios; the experimental results demonstrate that our model outperforms state-of-the-art models in most settings.
Keywords: EEG; SEED; affective computing; deep learning; domain adaptation; emotion recognition
Year: 2022 PMID: 35095696 PMCID: PMC8792438 DOI: 10.3389/fpsyg.2021.809459
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
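The abstract describes aligning the distributions of learned representations across domains. As a rough illustration of the kind of alignment criterion commonly used in such EEG domain-adaptation work (this record does not reproduce the paper's loss, so this is an assumed sketch, not the authors' implementation), the following computes a radial-basis-function maximum mean discrepancy (MMD) between source- and target-domain feature batches:

```python
# Assumed sketch: RBF-kernel squared MMD between two feature batches.
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between sample sets x and y."""
    def gram(a, b):
        # Pairwise squared Euclidean distances -> Gaussian kernel matrix.
        d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
        return np.exp(-d2 / (2.0 * sigma**2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(128, 64))  # one source domain's features
target = rng.normal(0.5, 1.0, size=(128, 64))  # target-domain features
print(rbf_mmd2(source, target))  # larger values indicate greater distribution mismatch
```

Minimizing such a term per source domain and per representation would drive the aligned distributions together, which is the general mechanism the abstract points to.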
FIGURE 1. Process flow for brain-computer interfaces.
Information about the SEED dataset.
| Attribute | Details |
| Source | BCMI laboratory |
| Sessions | Three |
| Subjects | Fifteen |
| Trials | Fifteen |
| Emotions | Positive, Neutral, Negative |
| Recorded channels | 62 EEG channels |
Information about the SEED-IV dataset.
| Attribute | Details |
| Source | BCMI laboratory |
| Sessions | Three |
| Subjects | Fifteen |
| Trials | Twenty-four |
| Emotions | Neutral, Sad, Fear, Happy |
| Recorded channels | 62 EEG channels |
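Both datasets share the same multi-subject, multi-session structure, which is what motivates the multi-source split named in the abstract. A hypothetical sketch (illustrative names, not the authors' code) of treating each (subject, session) pair as its own source domain rather than pooling everything:

```python
# Hypothetical data layout: one source domain per (subject, session) pair.
from collections import defaultdict

def split_into_domains(records):
    """records: iterable of (subject_id, session_id, features, label) tuples."""
    domains = defaultdict(list)
    for subject, session, features, label in records:
        domains[(subject, session)].append((features, label))
    return domains

# SEED: 15 subjects x 3 sessions -> up to 45 candidate source domains,
# instead of a single pooled source domain.
example = [(1, 1, [0.1] * 62, "positive"), (1, 2, [0.2] * 62, "neutral")]
print(len(split_into_domains(example)))  # 2 domains
```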
FIGURE 2. EEG electrode placement.
FIGURE 3. Architecture of the proposed Multi-Source and Multi-Representation Adaptation (MSMRA) method.
Overview of the MSMRA module.
FIGURE 4. Extraction of 48-, 32-, and 16-dimensional high-level feature representations from 64-dimensional low-level features.
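To make Figure 4 concrete, here is a minimal sketch, assuming simple linear-plus-ReLU projection heads (the paper's actual hybrid extractor is not reproduced in this record), of mapping 64-dimensional low-level features to 48-, 32-, and 16-dimensional representations whose distributions can each be aligned:

```python
# Assumed sketch of Figure 4: three heads project 64-d features to 48/32/16 d.
import numpy as np

rng = np.random.default_rng(0)
# One randomly initialized linear head per output width; a real model learns these.
heads = {dim: rng.normal(size=(64, dim)) for dim in (48, 32, 16)}

def multi_representation(x):
    """x: (batch, 64) low-level features -> three high-level representations."""
    return {dim: np.maximum(x @ w, 0.0) for dim, w in heads.items()}  # linear + ReLU

features = rng.normal(size=(8, 64))
reps = multi_representation(features)
print({dim: r.shape for dim, r in reps.items()})
# {48: (8, 48), 32: (8, 32), 16: (8, 16)}
```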
Comparisons of the average accuracies and standard deviations of cross-session and cross-subject scenarios on the SEED database among the various methods.
| Dataset | Method | Cross-session | Cross-subject |
| SEED | DGCNN | – | 79.95 ± 9.02 |
| | DDC | 81.53 ± 6.83 | 68.99 ± 3.23 |
| | DAN | 79.93 ± 7.06 | 65.84 ± 2.25 |
| | DCORAL | 76.86 ± 7.61 | 66.29 ± 4.53 |
| | DANN | – | 79.19 ± 13.14 |
| | PPDA | – | 86.70 ± 7.10 |
| | MS-MDA | 88.56 ± 7.80 | |
| | MSMRA (Ours) | 87.62 ± 7.53 | |
Bold indicates the maximum average accuracies of cross-session and cross-subject scenarios among the various methods.
Comparisons of the average accuracies and standard deviations of cross-session and cross-subject scenarios on the SEED-IV database among the various methods.
| Dataset | Method | Cross-session | Cross-subject |
| SEED-IV | DDC | 57.63 ± 11.28 | 37.71 ± 6.36 |
| | DAN | 55.14 ± 12.79 | 32.44 ± 9.02 |
| | DCORAL | 44.63 ± 11.38 | 37.43 ± 3.08 |
| | MS-MDA | 61.43 ± 15.71 | 59.34 ± 5.48 |
| | MSMRA (Ours) | | |
Bold indicates the maximum average accuracies of cross-session and cross-subject scenarios among the various methods.
Ablation study of MSMRA on SEED and SEED-IV.
| Dataset | Method | Cross-session | Cross-subject |
| SEED | Ours (full) | | |
| | w/o normalization | 80.21 ± 9.95 | 84.12 ± 6.17 |
| | w/o MDSFE | 89.55 ± 5.17 | 87.17 ± 5.41 |
| | w/o normalization + MDSFE | 77.02 ± 11.11 | 80.88 ± 7.22 |
| SEED-IV | Ours (full) | | |
| | w/o normalization | 43.39 ± 6.97 | 52.27 ± 5.18 |
| | w/o MDSFE | 71.97 ± 12.53 | 60.19 ± 9.60 |
| | w/o normalization + MDSFE | 43.33 ± 4.79 | 50.94 ± 3.24 |
Bold indicates the maximum average accuracies of cross-session and cross-subject scenarios among the various methods.