Yang Ruan, Mengyun Du, Tongguang Ni.
Abstract
Electroencephalogram (EEG) signals are difficult to camouflage, portable to acquire, and noninvasive, and they are therefore widely used in emotion recognition. However, because of individual differences, the data distributions of EEG signals recorded from different subjects in the same emotional state differ to some extent. To obtain a model that classifies new subjects well, traditional emotion recognition approaches need to collect a large amount of labeled data from the new subjects, which is often unrealistic. In this study, a transfer discriminative dictionary pair learning (TDDPL) approach is proposed for across-subject EEG emotion classification. TDDPL projects data from different subjects into a domain-invariant subspace and builds transfer dictionary pair learning on the maximum mean discrepancy (MMD) strategy. In the subspace, TDDPL learns shared synthesis and analysis dictionaries that form a bridge of discriminative knowledge from the source domain (SD) to the target domain (TD). By minimizing the reconstruction error and an inter-class separation term for each sub-dictionary, the learned synthesis dictionary is discriminative and the learned low-rank coding is sparse. Finally, a discriminative classifier in the TD is constructed from the classifier parameter, the analysis dictionary, and the projection matrix, without computing coding coefficients. The effectiveness of TDDPL is verified on the SEED and SEED IV datasets.
Keywords: across-subject; dictionary pair learning; electroencephalogram signals; emotion classification; transfer learning
Year: 2022 PMID: 35619785 PMCID: PMC9128594 DOI: 10.3389/fpsyg.2022.899983
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
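The abstract's MMD strategy measures the distance between the source- and target-domain feature distributions in the shared subspace. A minimal sketch with a linear kernel, where MMD² reduces to the squared distance between domain means (the function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    """Squared MMD with a linear kernel.

    Xs: (n_source, d) source-domain features
    Xt: (n_target, d) target-domain features
    Returns ||mean(Xs) - mean(Xt)||^2, which is 0 iff the domain means match.
    """
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))
```

In transfer approaches of this kind, a term like this (evaluated on projected features) is added to the objective so that the learned projection pulls the two domains' distributions together.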
Specification comparison between the SEED and SEED-IV datasets.
|  | SEED | SEED-IV |
|---|---|---|
| Number of leads | 62 | 62 |
| Original sampling rate | 1,000 Hz | 1,000 Hz |
| Downsampling | 200 Hz | 200 Hz |
| Number of subjects | 15 | 15 |
| Emotional stimulation | Chinese movie clips | Chinese movie clips |
| Emotional types | Positive, neutral, negative | Positive, fear, neutral, negative |
| Number of sessions | 3 | 3 |
| Number of trials | 15 | 24 |
| Trial length | about 4 min | about 2 min |
Transfer discriminative dictionary pair learning approach.
| Input: |
| Output: parameters {Ω, |
| Initialize: initialize |
| While not converged do |
| Fixing Ω, |
| Fixing Ω, |
| Fixing Ω, |
| Fixing Ω, |
| Fixing Ω, |
| Fixing |
| Fixing |
| end while |
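The listing above alternates updates in which all but one block of variables is fixed and that block is solved in closed form; the exact update rules are truncated in this extraction. The general pattern can be sketched as block coordinate descent on a simplified dictionary-pair objective (the objective, symbols, and hyperparameters here are illustrative, not the paper's actual formulation):

```python
import numpy as np

def dictionary_pair_sketch(X, n_atoms=8, n_iter=15, tau=1.0, lam=1e-2, seed=0):
    """Alternating closed-form updates for a synthesis/analysis dictionary pair.

    Minimizes (one block at a time) the illustrative objective
        J = ||X - D A||_F^2 + tau ||A - P X||_F^2 + lam (||D||_F^2 + ||P||_F^2)
    over the coding A, synthesis dictionary D, and analysis dictionary P.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    P = rng.standard_normal((n_atoms, d))
    history = []
    for _ in range(n_iter):
        # coding update: (D^T D + tau I) A = D^T X + tau P X
        A = np.linalg.solve(D.T @ D + tau * np.eye(n_atoms),
                            D.T @ X + tau * P @ X)
        # synthesis dictionary update: D = X A^T (A A^T + lam I)^{-1}
        D = X @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(n_atoms))
        # analysis dictionary update: P = A X^T (X X^T + lam I)^{-1}
        P = A @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
        J = (np.linalg.norm(X - D @ A) ** 2
             + tau * np.linalg.norm(A - P @ X) ** 2
             + lam * (np.linalg.norm(D) ** 2 + np.linalg.norm(P) ** 2))
        history.append(J)
    return D, P, A, history
```

Because each block update is an exact minimizer with the other blocks fixed, the objective is nonincreasing across iterations, which is the usual convergence argument for this style of alternating optimization.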
Average classification accuracies on session 1 of SEED dataset in the o → o scenario.
| Subject | – | – | – | – | – | – |
|---|---|---|---|---|---|---|
| 1 | 52.16 | 54.65 | 55.66 | 52.83 | 60.68 | 62.68 |
| 2 | 52.05 | 52.08 | 55.09 | 57.65 | 60.93 | 64.94 |
| 3 | 54.38 | 54.53 | 57.61 | 60.87 | 61.21 | 63.21 |
| 4 | 48.58 | 55.73 | 59.33 | 62.54 | 69.12 | 70.12 |
| 5 | 56.29 | 55.05 | 56.30 | 58.73 | 58.27 | 60.27 |
| 6 | 54.19 | 60.04 | 56.35 | 59.27 | 64.28 | 66.28 |
| 7 | 50.40 | 49.32 | 55.25 | 58.00 | 56.57 | 57.44 |
| 8 | 56.34 | 49.47 | 53.60 | 55.43 | 54.94 | 56.93 |
| 9 | 55.68 | 62.64 | 58.52 | 60.84 | 65.78 | 66.84 |
| 10 | 53.62 | 48.35 | 55.61 | 58.75 | 58.80 | 60.80 |
| 11 | 53.93 | 52.00 | 59.75 | 61.75 | 62.46 | 64.45 |
| 12 | 42.74 | 59.67 | 63.35 | 64.64 | 64.88 | 66.89 |
| 13 | 52.71 | 61.53 | 59.16 | 55.51 | 66.67 | 68.67 |
| 14 | 56.14 | 57.64 | 56.05 | 54.25 | 59.11 | 60.11 |
| 15 | 56.21 | 61.11 | 66.62 | 68.42 | 69.07 | 72.08 |
| Average | 53.03 | 55.59 | 57.88 | 59.30 | 62.18 | 64.25 |
The bold values in the original table indicate the best results.
Average classification accuracies on session 1 of SEED dataset in the m → o scenario.
| Subject | – | – | – | – | – | – |
|---|---|---|---|---|---|---|
| 1 | 59.42 | 61.25 | 70.94 | 66.96 | 74.59 | 75.70 |
| 2 | 58.58 | 58.46 | 70.26 | 73.55 | 75.52 | 77.15 |
| 3 | 59.89 | 61.27 | 73.27 | 74.83 | 74.43 | 77.32 |
| 4 | 54.21 | 61.91 | 75.15 | 76.15 | 83.12 | 83.78 |
| 5 | 62.15 | 61.47 | 71.41 | 73.49 | 72.46 | 74.19 |
| 6 | 60.41 | 66.48 | 71.13 | 73.27 | 78.55 | 79.72 |
| 7 | 57.50 | 55.79 | 70.83 | 71.10 | 70.86 | 71.50 |
| 8 | 63.50 | 55.03 | 68.52 | 69.73 | 68.41 | 71.06 |
| 9 | 61.36 | 67.37 | 73.07 | 75.10 | 80.13 | 80.40 |
| 10 | 60.37 | 54.03 | 72.01 | 73.02 | 72.60 | 73.83 |
| 11 | 60.32 | 56.54 | 73.95 | 75.90 | 76.57 | 78.02 |
| 12 | 48.67 | 65.70 | 77.51 | 78.28 | 78.34 | 79.16 |
| 13 | 58.35 | 66.38 | 71.63 | 69.53 | 80.03 | 82.27 |
| 14 | 62.06 | 62.85 | 66.46 | 68.40 | 73.27 | 73.49 |
| 15 | 63.11 | 66.31 | 80.56 | 81.18 | 83.23 | 84.62 |
| Average | 59.33 | 61.39 | 72.45 | 73.37 | 76.14 | 77.62 |
Figure 1. Average classification accuracies on session 2 of the SEED dataset.
Figure 2. Average classification accuracies on session 3 of the SEED dataset.
Average classification accuracies on session 1 of SEED IV dataset in the o → o scenario.
| Subject | – | – | – | – | – | – |
|---|---|---|---|---|---|---|
| 1 | 38.78 | 40.41 | 50.47 | 48.63 | 54.56 | 54.36 |
| 2 | 43.81 | 42.88 | 56.74 | 55.78 | 56.95 | 63.72 |
| 3 | 42.77 | 43.61 | 50.74 | 48.55 | 53.34 | 56.82 |
| 4 | 31.91 | 33.00 | 51.41 | 54.16 | 58.97 | 60.47 |
| 5 | 39.61 | 40.53 | 41.38 | 42.04 | 43.18 | 50.32 |
| 6 | 34.96 | 35.34 | 47.89 | 47.56 | 55.08 | 58.42 |
| 7 | 42.24 | 43.39 | 54.67 | 56.50 | 59.32 | 61.48 |
| 8 | 38.36 | 39.37 | 48.70 | 48.55 | 56.02 | 58.66 |
| 9 | 41.97 | 43.05 | 48.06 | 46.42 | 49.18 | 47.37 |
| 10 | 38.73 | 39.78 | 49.32 | 48.27 | 52.89 | 52.95 |
| 11 | 33.63 | 33.00 | 39.28 | 39.32 | 40.66 | 43.96 |
| 12 | 30.29 | 33.27 | 36.02 | 38.24 | 37.08 | 39.85 |
| 13 | 37.85 | 36.79 | 48.96 | 48.11 | 46.00 | 49.31 |
| 14 | 37.23 | 40.18 | 45.25 | 44.63 | 50.26 | 50.21 |
| 15 | 41.26 | 43.10 | 58.45 | 57.84 | 62.66 | 63.64 |
| Average | 38.23 | 39.18 | 48.49 | 48.31 | 51.74 | 54.16 |
Average classification accuracies on session 1 of SEED IV dataset in the m → o scenario.
| Subject | – | – | – | – | – | – |
|---|---|---|---|---|---|---|
| 1 | 48.06 | 49.01 | 57.35 | 57.44 | 62.36 | 63.03 |
| 2 | 51.16 | 52.70 | 62.23 | 64.98 | 63.72 | 74.63 |
| 3 | 50.08 | 52.23 | 57.62 | 56.28 | 60.45 | 64.99 |
| 4 | 39.21 | 40.67 | 61.26 | 60.73 | 67.62 | 75.28 |
| 5 | 45.77 | 46.21 | 49.96 | 49.12 | 52.01 | 58.09 |
| 6 | 42.00 | 43.16 | 55.96 | 56.59 | 62.82 | 70.67 |
| 7 | 53.09 | 54.86 | 61.69 | 65.43 | 66.39 | 71.53 |
| 8 | 45.18 | 46.82 | 57.87 | 57.87 | 63.97 | 65.50 |
| 9 | 49.67 | 51.14 | 58.12 | 55.75 | 57.65 | 55.66 |
| 10 | 45.04 | 45.74 | 57.85 | 57.42 | 60.12 | 61.75 |
| 11 | 41.03 | 41.66 | 46.69 | 49.38 | 48.48 | 51.68 |
| 12 | 39.49 | 40.17 | 45.41 | 44.95 | 44.74 | 48.71 |
| 13 | 44.78 | 44.91 | 56.93 | 57.54 | 51.78 | 63.54 |
| 14 | 47.22 | 49.26 | 51.64 | 53.32 | 56.50 | 59.99 |
| 15 | 49.71 | 50.78 | 66.81 | 67.76 | 74.04 | 75.43 |
| Average | 46.10 | 47.29 | 56.49 | 56.97 | 59.51 | 64.03 |
Figure 3. Average classification accuracies on session 2 of the SEED IV dataset.
Figure 4. Average classification accuracies on session 3 of the SEED IV dataset.
Figure 5. The average classification accuracy of the TDDPL approach under varying m and r on the SEED dataset. (A) o → o scenario, (B) m → o scenario.
Figure 6. The average classification accuracy of the TDDPL approach under varying m and r on the SEED IV dataset. (A) o → o scenario, (B) m → o scenario.
Figure 7. The average classification accuracy of the TDDPL approach under varying N in the TD on (A) SEED dataset, (B) SEED IV dataset.