| Literature DB >> 35600616 |
Yufang Dan1,2, Jianwen Tao1, Di Zhou3.
Abstract
In the machine learning community, graph-based semi-supervised learning (GSSL) approaches have attracted extensive research interest due to their elegant mathematical formulation and good performance. However, one factor limiting the performance of GSSL methods is the assumption that training data and test data are independent and identically distributed (IID); since an individual user may produce completely different electroencephalogram (EEG) signals in the same situation, EEG data may be non-IID. In addition, GSSL approaches remain sensitive to noise and outliers. To these ends, we propose in this paper a novel clustering method based on the structural risk minimization model, called multi-model adaptation learning with possibilistic clustering assumption for EEG-based emotion recognition (MA-PCA). It can effectively minimize the influence of noise/outlier samples across different EEG data distributions in a reproducing kernel Hilbert space. Our main ideas are as follows: (1) reducing the negative impact of noise/outlier patterns through fuzzy entropy regularization, (2) handling both IID and non-IID training and test data through multi-model adaptation learning to obtain better performance, and (3) providing the algorithm implementation and a convergence theorem. Extensive experiments and in-depth analysis were carried out on the real DEAP and SEED datasets. The results show that the MA-PCA method has superior or comparable robustness and generalization performance for EEG-based emotion recognition.
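The abstract's first idea is down-weighting noise/outlier patterns via fuzzy entropy regularization. The sketch below illustrates one common (Shannon-style) form of fuzzy entropy on membership values; the exact regularizer used by MA-PCA is defined by the paper's equations, so this is an illustrative assumption only.

```python
import numpy as np

def fuzzy_entropy(v, eps=1e-12):
    """Fuzzy (Shannon-style) entropy of membership values v in [0, 1].

    Ambiguous memberships (near 0.5) contribute the most entropy, so
    penalizing this quantity is one common way to discourage unreliable,
    possibly noisy assignments. Illustrative sketch; not the paper's
    exact regularizer.
    """
    v = np.clip(np.asarray(v, dtype=float), eps, 1.0 - eps)
    return float(-np.sum(v * np.log(v) + (1.0 - v) * np.log(1.0 - v)))

# Confident memberships (near 0 or 1) yield low entropy;
# ambiguous memberships (near 0.5) yield high entropy.
confident = fuzzy_entropy([0.99, 0.01, 0.98])
ambiguous = fuzzy_entropy([0.5, 0.5, 0.5])
```

A single membership of exactly 0.5 gives entropy ln 2, the per-sample maximum, which is why such a term pushes memberships toward confident values.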
Keywords: clustering assumption; emotion recognition; encephalogram; fuzzy entropy; multi-model adaptation; semi-supervised learning
Year: 2022 PMID: 35600616 PMCID: PMC9114636 DOI: 10.3389/fnins.2022.855421
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
Algorithm description of MA-PCA.
1. Initialize the label memberships of unlabeled data;
2. Obtain the initial W0 by Eq. (7);
3. Obtain the initial v0 by Eq. (10);
4. Calculate the …
5. Repeat until convergence:
   5.1 Fix the current v, …
   5.2 Fix the current W, …
   5.3 Fix the current W, …
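The steps above describe an alternating optimization: fix the memberships v to update the model W, then fix W to update v, until convergence. Since the closed-form updates of Eqs. (7) and (10) are not reproduced in this extract, `update_W` and `update_v` below are placeholder stand-ins (a weighted kernel ridge solve and a residual-based possibilistic weighting); the sketch shows only the loop structure, not the paper's actual solver.

```python
import numpy as np

def ma_pca_skeleton(K, y, n_unlabeled, lam=1.0, max_iter=50, tol=1e-4):
    """Alternating-optimization skeleton mirroring the MA-PCA steps.

    K: (n, n) kernel matrix; y: targets with 0 for unlabeled samples.
    The two inner updates are illustrative assumptions, not Eqs. (7)/(10).
    """
    n = K.shape[0]
    v = np.ones(n)
    v[n - n_unlabeled:] = 0.5          # Step 1: init unlabeled memberships

    def update_W(v):
        # Stand-in for Eq. (7): membership-weighted kernel ridge solve.
        D = np.diag(v)
        return np.linalg.solve(K @ D @ K + lam * np.eye(n), K @ D @ y)

    def update_v(W):
        # Stand-in for Eq. (10): shrink memberships of high-residual
        # (noise/outlier) samples, as a possibilistic weighting would.
        r = (K @ W - y) ** 2
        return np.exp(-r / (r.mean() + 1e-12))

    W = update_W(v)                     # Steps 2-3: initial W0, v0
    v = update_v(W)
    for _ in range(max_iter):           # Step 5: alternate until converged
        W_new = update_W(v)             # fix v, update W
        v = update_v(W_new)             # fix W, update v
        if np.linalg.norm(W_new - W) < tol:
            break
        W = W_new
    return W, v

# Hypothetical toy usage: 20 samples, linear kernel, last 5 unlabeled.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
K = X @ X.T
y = np.sign(X[:, 0])
y[15:] = 0.0
W, v = ma_pca_skeleton(K, y, n_unlabeled=5)
```

The convergence check on ||W_new - W|| corresponds to the convergence theorem mentioned in the abstract, in the usual sense that alternating updates of a bounded objective stabilize.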
FIGURE 1 Domain adaptation emotion recognition on within-dataset. SI, session I; SII, session II; SIII, session III.
FIGURE 2 Emotion recognition on within-dataset with multiple kernel learning. SI, session I; SII, session II; SIII, session III (similarly hereinafter).
FIGURE 3 Domain adaptation emotion recognition on cross-dataset.
FIGURE 4 Emotion recognition with multi-source adaptation settings.
FIGURE 5 Adaptive emotion recognition using deeply extracted features.
Multi-source adaptation emotion recognition accuracies (%) of derived methods as well as MA-PCA.
| Method | {DEAP,SII,SIII} →SI | {DEAP,SI,SIII} →SII | {DEAP,SI,SII} →SIII | {SI,SII,SIII} →DEAP | {SI,SII} →DEAP | {SI,SIII} →DEAP |
| MA-PCA_NTS | 72.81 | 70.52 | 68.57 | 55.90 | 54.20 | 55.81 |
| MA-PCA_NSS | 71.30 | 70.05 | 65.87 | 53.17 | 53.77 | 55.43 |
| MA-PCA_NOS | 71.61 | 69.86 | 66.28 | 53.49 | 54.23 | 55.66 |
| MA-PCA | | | | | | |
Values in bold denote the best recognition rates. SI, session I; SII, session II; SIII, session III.