| Literature DB >> 35585922 |
Jun Liu2, Lechan Sun1, Min Huang2, Yichen Xu2, Rihui Li3.
Abstract
Recognizing the emotional states of humans through EEG signals is of great significance to the progress of human-computer interaction. The present study aimed to automatically recognize music-evoked emotions using region-specific information and dynamic functional connectivity of EEG signals together with a deep learning neural network. EEG signals of 15 healthy volunteers were collected while different emotions (high-valence-arousal vs. low-valence-arousal) were induced by a musical experimental paradigm. A sequential backward selection algorithm combined with a deep neural network, Xception, was then proposed to evaluate the effect of different channel combinations on emotion recognition. In addition, we assessed whether the dynamic functional network of the frontal cortex, constructed from different numbers of trials, affected the performance of emotion recognition. Results showed that the binary classification accuracy based on all 30 channels was 70.19%, the accuracy based on all channels located in the frontal region was 71.05%, and the accuracy based on the best channel combination in the frontal region was 76.84%. In addition, we found that classification performance increased as a longer temporal functional network of the frontal cortex was constructed as the input feature. In sum, emotions induced by different musical stimuli can be recognized by our proposed approach through region-specific EEG signals and the time-varying functional network of the frontal cortex. Our findings could provide a new perspective for the development of EEG-based emotion recognition systems and advance our understanding of the neural mechanisms underlying emotion processing.
Keywords: EEG channel selection; Xception architecture; dynamic functional connectivity; emotion recognition; sequential backward feature selection
Year: 2022 PMID: 35585922 PMCID: PMC9108496 DOI: 10.3389/fnins.2022.884475
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
FIGURE 1. The position of the electrodes according to the International 10–20 system. Note that M1 and M2 were used as reference electrodes.
Detailed information of all music clips.
| Classification | Performer | Name of music clip | Excerpt (mm:ss) |
| LVA | Richard Clayderman | A comme amour (L for love) | 0:00–0:20 |
| LVA | Yiruma | Kiss the rain | 0:50–1:10 |
| LVA | Kevin Kern | In the enchanted garden (Sundial dreams) | 0:10–0:30 |
| LVA | The Daydream | Dreaming (Tears) | 0:00–0:20 |
| LVA | Kevin Kern | In the enchanted garden (Through the arbor) | 0:05–0:25 |
| LVA | Jin Shi | Melody of the night (Five) | 0:13–0:33 |
| HVA | Hiphop | My view (A little story) | 0:18–0:38 |
| HVA | Fryderyk Franciszek Chopin | Chopin’s revolutionary etude in c minor | 0:22–0:42 |
| HVA | Richard Clayderman | Ballade pour Adeline (Mariage d’amour) | 1:42–2:02 |
| HVA | July | My soul | 0:25–0:45 |
| HVA | Wolfgang Amadeus Mozart | Alla turca | 1:33–1:53 |
| HVA | Richard Clayderman | Lyphard melodie | 1:07–1:27 |
FIGURE 2. The experimental protocol. (A) Power spectrum of the music clips. (B) The experimental paradigm of music-evoked emotions.
FIGURE 3. The analysis pipeline of the proposed Xception network-based emotion recognition.
Statistical analyses of the emotional behavior.
| | Valence | Arousal |
| LVA | 2.75 ± 1.17 | 2.98 ± 1.08 |
| HVA | 6.93 ± 1.25 | 7.01 ± 1.21 |
| p-value | <0.0001 | <0.0001 |
Classification performance using whole-head EEG signals (30 channels) and frontal EEG signals (8 channels).
| Channels | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 (%) |
| 30 Channels | 71.16 ± 3.67 | 70.84 ± 5.87 | 70.54 ± 2.39 | 71.29 ± 3.98 |
| 8 Channels | 71.59 ± 2.49 | 71.19 ± 2.48 | 72.11 ± 3.26 | 73.82 ± 3.01 |
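For reference, the four metrics reported in the table above all follow from a binary (HVA vs. LVA) confusion matrix. A minimal sketch, with illustrative counts that are not from the paper:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and F1 from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical example: 40 true positives, 10 false positives,
# 40 true negatives, 10 false negatives.
acc, sens, spec, f1 = binary_metrics(40, 10, 40, 10)
```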
Classification accuracy of the sequential backward selection algorithm using channels in the frontal cortex. Each column gives the accuracy after removing the corresponding channel (in listed order) from the set.
| Channel set | Remove 1st | Remove 2nd | Remove 3rd | Remove 4th | Remove 5th | Remove 6th | Remove 7th | Remove 8th |
| Fp1_Fpz_Fp2_F7_F3_F4_F8_Fz | 72.81% | 69.56% | 72.36% | 70.95% | 71.82% | 73.26% | 73.26% | 74.47% |
| Fp1_Fpz_Fp2_F7_F3_F4_F8 | 68.27% | 67.94% | 70.67% | 69.52% | 70.51% | 75.59% | 75.87% | |
| Fp1_Fpz_Fp2_F7_F3_F8 | 70.79% | 72.82% | 73.28% | 72.83% | 74.61% | 76.84% | ||
| Fp1_Fpz_Fp2_F7_F8 | 70.32% | 70.37% | 70.75% | 69.66% | 72.91% | |||
| Fpz_Fp2_F7_F8 | 66.92% | 68.26% | 69.11% | 70.38% | ||||
| Fp2_F7_F8 | 64.45% | 67.56% | 69.35% |
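The greedy elimination procedure behind the table above can be sketched as follows. This is a minimal illustration of sequential backward selection: the `score` callable stands in for training and evaluating the paper's Xception classifier on a given channel subset, and is an assumption, not the authors' implementation:

```python
def sequential_backward_selection(channels, score, min_size=3):
    """Greedily drop, at each step, the channel whose removal yields the
    highest accuracy; track the best subset seen overall."""
    current = list(channels)
    best_set, best_score = list(current), score(current)
    history = []
    while len(current) > min_size:
        # Evaluate the accuracy obtained after removing each remaining channel.
        trials = [(score([c for c in current if c != ch]), ch) for ch in current]
        acc, drop = max(trials)
        current = [c for c in current if c != drop]
        history.append((tuple(current), acc))
        if acc > best_score:
            best_set, best_score = list(current), acc
    return best_set, best_score, history

# Hypothetical scorer: fraction of channels from a "useful" set,
# purely to demonstrate the search mechanics.
USEFUL = {"Fp1", "Fpz", "Fp2"}
toy_score = lambda subset: sum(c in USEFUL for c in subset) / len(subset)
frontal = ["Fp1", "Fpz", "Fp2", "F7", "F3", "F4", "F8", "Fz"]
best, best_acc, _ = sequential_backward_selection(frontal, toy_score)
```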
FIGURE 4. The classification performance (accuracy, sensitivity, specificity, and F1-score) obtained from different channel combinations.
FIGURE 5. The classification performance [accuracy (A) and F1 score (B)] obtained from functional networks constructed by different numbers of trials.
FIGURE 6. The averaged functional network of frontal cortex for both HVA (A) and LVA (B) states.
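The frontal functional networks averaged in Figure 6 are built from pairwise connectivity between channel time series. A minimal sketch of one such construction, assuming Pearson correlation as the edge weight (the paper's actual dynamic-connectivity measure may differ):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def functional_network(signals):
    """Symmetric channel-by-channel connectivity matrix.

    signals: list of per-channel sample sequences (e.g., one list per
    frontal electrode). Self-connections are left at zero.
    """
    n = len(signals)
    net = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            net[i][j] = net[j][i] = pearson(signals[i], signals[j])
    return net
```

In practice one matrix would be computed per trial window, then averaged over trials per condition to yield networks like those in Figure 6.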