Jianhai Zhang, Ming Chen, Shaokai Zhao, Sanqing Hu, Zhiguo Shi, Yu Cao.
Abstract
Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly capture brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the need for more immersive human-computer interface (HCI) environments. To improve recognition performance, multi-channel EEG signals are usually used; however, a large set of EEG sensor channels adds computational complexity and causes inconvenience to users. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on the Database for Emotion Analysis using Physiological Signals (DEAP). Three strategies were employed to select the best channels for classifying four emotional states (joy, fear, sadness and relaxation), and a support vector machine (SVM) was used as the classifier to validate the channel selection results. The experimental results demonstrated the effectiveness of the proposed methods, and a comparison with similar F-score-based strategies was given. Strategies that evaluate each channel as a unit gave better channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting the channels' weights according to their contribution to classification accuracy, the number of channels was reduced to eight with only a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, a study of subject-independent channel selection related to emotion processing was also carried out. The subject-independently selected sensors, mainly over the frontal and parietal lobes, were identified as providing more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature.
These results contribute to the realization of a practical EEG-based emotion recognition system.
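The ReliefF weighting at the core of the channel selection pipeline can be illustrated as follows. This is a simplified, self-contained sketch (a basic ReliefF without the class-prior weighting of the full multi-class algorithm), not the authors' implementation; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def relieff_weights(X, y, n_neighbors=10, seed=0):
    """Simplified ReliefF sketch: a feature earns weight when it separates
    each sample from its nearest misses (other classes) more than from its
    nearest hits (same class)."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    # Scale each feature to [0, 1] so per-feature differences are comparable.
    lo = X.min(axis=0)
    span = X.max(axis=0) - lo
    span[span == 0] = 1.0
    Xs = (X - lo) / span
    w = np.zeros(n_features)
    for i in rng.permutation(n_samples):
        diffs = np.abs(Xs - Xs[i])        # per-feature distance to every sample
        dists = diffs.sum(axis=1)
        dists[i] = np.inf                 # exclude the sample itself
        same = (y == y[i])
        hits = np.argsort(np.where(same, dists, np.inf))[:n_neighbors]
        misses = np.argsort(np.where(~same, dists, np.inf))[:n_neighbors]
        w -= diffs[hits].mean(axis=0)     # near hits should differ little
        w += diffs[misses].mean(axis=0)   # near misses should differ a lot
    return w / n_samples
```

A feature that discriminates between emotional states receives a high weight; in the paper, these feature weights are then aggregated per channel to rank sensors.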
Keywords: EEG; ReliefF; emotion recognition; sensor selection
Year: 2016 PMID: 27669247 PMCID: PMC5087347 DOI: 10.3390/s16101558
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The update process of channels’ weights considering their contribution to classification accuracy.
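The accuracy-guided weight update in Figure 1 can be read roughly as follows. This is a hypothetical sketch of one plausible rule — grow the channel set in weight order and reward or penalize each channel according to whether adding it improves accuracy — not the paper's exact procedure; `adjust_channel_weights`, `acc_fn`, and `step` are assumed names.

```python
import numpy as np

def adjust_channel_weights(w, acc_fn, step=0.05):
    """One pass of accuracy-guided weight adjustment (hypothetical rule).

    w      : initial per-channel weights (e.g. from ReliefF aggregation)
    acc_fn : callback that trains/evaluates a classifier (e.g. an SVM)
             on the given list of channel indices and returns accuracy
    step   : amount by which a weight is raised or lowered
    """
    order = np.argsort(w)[::-1]   # rank channels by current weight
    w = w.astype(float).copy()
    selected = []
    best_acc = 0.0
    for ch in order:
        acc = acc_fn(selected + [ch])
        if acc > best_acc:
            w[ch] += step         # channel improved accuracy -> reward
            best_acc = acc
        else:
            w[ch] -= step         # no improvement -> penalize
        selected.append(ch)
    return w
```

In the paper's setting, `acc_fn` would retrain the SVM on the channel-selection dataset; after the update, channels are re-ranked by the adjusted weights.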
Figure 2. Average classification accuracy over a varying number of features across 16 subjects with SVM. Point A achieves the best classification accuracy.
Average accuracy (standard deviation) using the top-N features with an SVM classifier across 16 subjects.
| Top-N Features | Average Number of Channels Involved | Average Accuracy (%) |
|---|---|---|
| 4 | 3.25 (0.77) | 51.04 (8.32) |
| 8 | 6.56 (1.36) | 53.47 (11.21) |
| 12 | 9.44 (2.06) | 55.13 (10.77) |
| 16 | 12.13 (2.58) | 55.49 (11.31) |
| 20 | 14.88 (2.99) | 56.99 (11.71) |
| 24 | 17.50 (3.39) | 57.58 (12.14) |
| 28 | 19.50 (3.65) | 58.93 (12.83) |
| 32 | 21.44 (4.21) | 59.70 (12.38) |
| 36 | 23.25 (4.09) | 60.69 (12.74) |
| 40 | 24.50 (3.72) | 61.09 (12.54) |
| 50 | 27.25 (2.96) | 61.86 (12.80) |
| 60 | 29.63 (2.60) | 62.59 (12.81) |
| 80 | 31.13 (1.15) | 60.76 (11.54) |
The average proportion of each class of features among the top-N features across 16 subjects.
| Top-N Features | Theta | Alpha | Beta | Gamma |
|---|---|---|---|---|
| 4 | 0.03 | 0.02 | 0.27 | 0.68 |
| 8 | 0.04 | 0.09 | 0.18 | 0.69 |
| 16 | 0.07 | 0.11 | 0.25 | 0.57 |
| 32 | 0.10 | 0.12 | 0.29 | 0.49 |
| 64 | 0.13 | 0.13 | 0.35 | 0.39 |
| 96 | 0.20 | 0.21 | 0.29 | 0.30 |
| 128 | 0.25 | 0.25 | 0.25 | 0.25 |
Figure 3. Average classification accuracy over a varying number of channels using SVM based on the MRCS method. Point A achieves the best classification accuracy.
Figure 4. SVM average classification accuracy over a varying number of channels for subject 7 based on the MRCS (blue solid line) and SVM-MRCS (red dashed line) methods. (a) Adjustment on the channel-selection dataset; (b) evaluation on the performance-validation dataset.
Figure 5. Average classification accuracy over a varying number of channels using SVM based on the SVM-MRCS method. Point A achieves the best classification accuracy.
Figure 6. Average performance comparison across 16 subjects between MRCS and SVM-MRCS using SVM.
Comparison of the MRCS and SVM-MRCS methods using the top-N channels. Values are average accuracy (standard deviation) in %.
| Top-N | MRCS | SVM-MRCS | Top-N | MRCS | SVM-MRCS |
|---|---|---|---|---|---|
| 3 | 51.95 (8.23) | 52.86 (8.41) | 12 | 56.93 (8.95) | 58.40 (10.20) * |
| 4 | 52.94 (7.49) | 55.08 (8.98) | 13 | 57.06 (9.01) | 58.54 (10.08) * |
| 5 | 54.51 (8.30) | 56.39 (8.95) | 14 | 57.38 (9.38) | 58.78 (10.27) * |
| 6 | 54.66 (8.89) | 57.39 (9.52) ** | 15 | 57.79 (9.34) | 58.73 (10.40) |
| 7 | 55.34 (8.90) | 57.71 (9.53) ** | 16 | 58.05 (9.83) | 58.97 (10.75) |
| 8 | 56.55 (8.92) | 58.16 (9.47) * | 17 | 58.34 (9.63) | 59.08 (10.92) |
| 9 | 56.62 (8.98) | 58.51 (10.05) * | 18 | 58.21 (9.61) | 59.10 (11.05) |
| 10 | 56.56 (8.75) | 58.43 (9.90) * | 19 | 58.22 (9.80) | 59.13 (11.00) |
| 11 | 56.53 (8.72) | 58.31 (10.05) * | 20 | 58.09 (9.79) | 58.93 (10.95) |
The significant difference was tested under the same number of selected channels (paired t-test, * p < 0.05, ** p < 0.01).
Figure 7. Comparison between ReliefF and F-score as feature selection methods with SVM. (a) A channel selection method based on feature selection; (b) a channel selection method based on the channel’s weight, obtained as the mean of the ReliefF or F-score weights of all features belonging to that channel; (c) a channel selection method based on the channel’s weight adjusted according to the channel’s contribution to classification accuracy.
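The channel-level weighting of strategy (b) — a channel's weight as the mean of the ReliefF (or F-score) weights of all its features — can be sketched as below. The (channels × bands) feature layout is an assumption; DEAP-style setups typically extract one feature per frequency band (theta, alpha, beta, gamma) per channel.

```python
import numpy as np

def channel_weights_from_features(feature_w, n_channels, n_bands=4):
    """Strategy (b): average each channel's per-band feature weights
    (e.g. theta, alpha, beta, gamma) into a single channel weight."""
    # Assumed layout: features ordered channel-by-channel, band-by-band.
    return np.asarray(feature_w, dtype=float).reshape(n_channels, n_bands).mean(axis=1)
```

Channels are then ranked by these aggregated weights, and the top-N channels are kept for classification.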
Figure 8. The subject-independent channel selection result.
The top 15 common channels across 16 subjects.
| Channel Rank | Electrode | Brain Lobe |
|---|---|---|
| 1 | Fp1 | Frontal |
| 2 | T7 | Temporal |
| 3 | PO4 | Parietal |
| 4 | Pz | Parietal |
| 5 | Fp2 | Frontal |
| 6 | F8 | Frontal |
| 7 | Oz | Occipital |
| 8 | T8 | Temporal |
| 9 | P4 | Parietal |
| 10 | O1 | Occipital |
| 11 | AF4 | Frontal |
| 12 | FC5 | Frontal |
| 13 | C3 | Central |
| 14 | FC2 | Frontal |
| 15 | P3 | Parietal |
Figure 9. Top 5 (a); top 10 (b); and top 15 (c) subject-independent channels according to the international 10–20 system.