Jianwen Tao1, Yufang Dan1, Di Zhou2, Songsong He1.
Abstract
In practical electroencephalogram (EEG)-based machine learning, different subjects can exhibit many different EEG patterns, which can, to some extent, degrade the performance of existing subject-independent classifiers trained on cross-subject datasets. To this end, in this paper we present a robust Latent Multi-source Adaptation (LMA) framework for cross-subject/dataset emotion recognition with EEG signals that uncovers multiple domain-invariant latent subspaces. Specifically, by jointly aligning the statistical and semantic distribution discrepancies between each source–target pair, multiple domain-invariant classifiers can be trained collaboratively in a unified framework. The framework fully exploits the correlated knowledge among multiple sources via a novel low-rank regularization term. Comprehensive experiments on the DEAP and SEED datasets demonstrate that LMA performs better than or comparably to the state of the art in EEG-based emotion recognition.
Keywords: co-adaptation; emotion recognition; encephalogram; latent space; maximum mean discrepancy
Year: 2022 PMID: 35573289 PMCID: PMC9091911 DOI: 10.3389/fnins.2022.850906
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
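The abstract's "statistical distribution discrepancy" alignment is, per the keywords, based on the maximum mean discrepancy (MMD). The paper's exact estimator is not reproduced in this record; the following is a minimal sketch of the standard empirical squared MMD with an RBF kernel, where the function name and the `gamma` bandwidth parameter are illustrative choices, not the authors' code.

```python
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Empirical squared Maximum Mean Discrepancy between source samples
    Xs (ns x d) and target samples Xt (nt x d), using the RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        # pairwise squared Euclidean distances, then the RBF kernel
        sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * np.maximum(sq, 0.0))
    # MMD^2 = mean k(source, source) + mean k(target, target)
    #         - 2 * mean k(source, target)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()
```

Minimizing this quantity over a learned latent projection is the usual way such a term drives the source and target feature distributions together.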
Notations and descriptions (symbols rendered as images in the source record are shown as "—" where they could not be recovered).
| Notations | Descriptions |
| — | Sample number of each source–target pair |
| — | Feature dimensionality |
| χ | Sample/feature space |
| Γ | Label/prediction space |
| — | Vector |
| — | Matrix |
| — | The (i, j)-th entry of a matrix |
| — | The i-th column (or row) of a matrix |
| (⋅)^⊤ | Transpose operator |
| tr(⋅) | Trace operator |
| ⟨⋅, ⋅⟩ | The inner product of two matrices |
| ‖⋅‖_* | The trace norm of a matrix (sum of its singular values) |
| I | Identity matrix |
| 1 | Column vector of all ones |
| 0 | Column vector (or matrix) of all zeros |
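The trace norm listed above is the standard convex surrogate for matrix rank, which is how the abstract's "low-rank regularization term" is typically realized. As a hedged illustration (not the authors' solver), the sketch below computes the trace norm and the singular-value thresholding step that serves as its proximal operator in proximal-gradient optimization; the function names are my own.

```python
import numpy as np

def trace_norm(W):
    """Trace (nuclear) norm ||W||_* = sum of the singular values of W,
    a convex surrogate for rank(W) used as a low-rank regularizer."""
    return np.linalg.svd(W, compute_uv=False).sum()

def svt(W, tau):
    """Singular-value thresholding: the proximal operator of tau*||.||_*.
    Shrinks each singular value by tau (flooring at zero), which is the
    standard update when minimizing a smooth loss plus a trace-norm
    penalty."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Stacking the per-source classifier parameters as rows of `W` and penalizing `trace_norm(W)` encourages the sources to share a common low-dimensional structure, which matches the abstract's claim of exploiting correlated knowledge among sources.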
FIGURE 1. Flowchart of LMA on EEG-based emotion recognition.
| Algorithm 1: Multi-source adaptation learning. |
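The body of Algorithm 1 is not reproduced in this record. As a purely hypothetical sketch of the final prediction stage it implies (per-source domain-invariant classifiers whose outputs are fused on the target), the snippet below combines per-source class scores with nonnegative weights; the joint learning of classifiers and weights described in the abstract is omitted.

```python
import numpy as np

def combine_sources(preds, weights):
    """Weighted fusion of per-source classifier scores on target samples.

    preds:   list of (n_samples, n_classes) score arrays, one per source
             domain's classifier.
    weights: nonnegative per-source weights (hypothetical; LMA learns its
             combination jointly, which this sketch does not reproduce).
    Returns the fused class index for each target sample.
    """
    fused = sum(w * p for w, p in zip(weights, preds))
    return fused.argmax(axis=1)
```

For example, with two sources voting on two samples, the fused argmax picks the class favored by the weighted average of the two score matrices.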
Multi-source adaptation emotion recognition accuracies (%) of the derived variants and of LMA.
| Method | {DEAP,SII,SIII} | {DEAP,SI,SIII} | {DEAP,SI,SII} | {SI,SII,SIII} | {SI,SII} | {SI,SIII} |
| LMA_NF | 73.52 | 69.10 | 69.58 | 55.43 | 52.01 | 56.22 |
| LMA_NL | 69.13 | 65.36 | 66.11 | 52.32 | 53.16 | 52.71 |
| LMA_NS | 72.68 | 68.23 | 68.38 | 54.69 | 51.08 | 54.20 |
| LMA | — | — | — | — | — | — |
Bold denotes the best recognition rates (SI: Session I, SII: Session II, SIII: Session III). The LMA row's values were rendered as images in the source record and could not be recovered.
FIGURE 2. Domain adaptation emotion recognition on within-dataset (SI: Session I, SII: Session II, SIII: Session III).
FIGURE 3. Domain adaptation emotion recognition on within-dataset with multi-kernel learning (SI: Session I, SII: Session II, SIII: Session III).
FIGURE 4. Domain adaptation emotion recognition on cross-dataset (SI: Session I, SII: Session II, SIII: Session III).
FIGURE 5. Multi-source adaptation emotion recognition accuracy (SI: Session I, SII: Session II, SIII: Session III).
FIGURE 6. Emotion recognition accuracies of different methods using deeply extracted features (SI: Session I, SII: Session II, SIII: Session III).
FIGURE 7. Emotion recognition accuracies with different values of δ (SI: Session I, SII: Session II, SIII: Session III).