| Literature DB >> 31795095 |
Muhammad Adeel Asghar1, Muhammad Jamil Khan1, Yasar Amin1, Muhammad Rizwan2, MuhibUr Rahman3, Salman Badnava4, Seyed Sajad Mirjavadi5.
Abstract
Much attention has been paid to recognizing human emotions from electroencephalogram (EEG) signals using machine learning. Emotion recognition is a challenging task due to the non-linear nature of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for EEG-based emotion recognition. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. The pre-trained AlexNet model is used to extract raw features from the 2D spectrogram of each channel. To reduce the feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is computed with the k-means clustering algorithm. Lastly, the emotion of each subject is represented as a histogram over the vocabulary set collected from the raw features of a single channel. Features extracted with the proposed BoDF model have a considerably smaller dimension. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, a support vector machine (SVM) and k-nearest neighbor (k-NN) are used to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, outperforming other state-of-the-art methods of human emotion recognition.
Keywords: bag of deep features; brain computer interface; continuous wavelet transform; emotion recognition
Year: 2019 PMID: 31795095 PMCID: PMC6928944 DOI: 10.3390/s19235218
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
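The BoDF pipeline described in the abstract (per-class k-means vocabularies, then histogram encoding of each trial's deep features) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the 64-dimensional surrogate features, and the random toy data are assumptions standing in for the AlexNet features used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(features_per_class, k=10, seed=0):
    """BoDF vocabulary: k cluster centers per class, stacked into one dictionary."""
    centers = [KMeans(n_clusters=k, n_init=10, random_state=seed)
               .fit(feats).cluster_centers_
               for feats in features_per_class]   # each feats: (n_samples, dim)
    return np.vstack(centers)                     # (n_classes * k, dim)

def encode_histogram(features, vocabulary):
    """Represent a trial as a normalized histogram of nearest-word counts."""
    d = ((features[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)                      # nearest center per feature vector
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy usage: 3 classes with 64-dim surrogate deep features.
rng = np.random.default_rng(0)
per_class = [rng.normal(c, 1.0, size=(50, 64)) for c in range(3)]
vocab = build_vocabulary(per_class, k=10)         # 3 classes x 10 centers
h = encode_histogram(per_class[0], vocab)
print(vocab.shape, h.shape)  # (30, 64) (30,)
```

The resulting fixed-length histogram is what makes the feature dimension independent of the number of raw feature vectors per trial.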
SEED dataset overview.
| No. | Emotion Label | Film Clip Source |
|---|---|---|
| 1 | Negative | Tangshan Earthquake |
| 2 | Negative | 1942 |
| 3 | Positive | Lost in Thailand |
| 4 | Positive | Flirting Scholar |
| 5 | Positive | Just Another Pandora’s Box |
| 6 | Neutral | World Heritage in China |
DEAP dataset classes.
| No. | Emotion Label | States |
|---|---|---|
| 1 | LAHV (Low Arousal High Valence) | Calm |
| 2 | HALV (High Arousal Low Valence) | Alert |
| 3 | HAHV (High Arousal High Valence) | Happy |
| 4 | LALV (Low Arousal Low Valence) | Sad |
Figure 1. Electrode-to-channel mapping.
Figure 2. Framework.
Figure 3. Continuous wavelet transform. (a) Time-frequency representations (TFRs) of the three SEED classes; the positive, negative, and neutral classes are clearly differentiable. (b) TFRs of the four DEAP classes.
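The time-frequency representations in Figure 3 come from a continuous wavelet transform of the raw EEG. A naive NumPy sketch of a Morlet CWT is shown below; the scale range, sampling rate, and toy sine input are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Naive continuous wavelet transform with a Morlet wavelet.
    `scales` are in samples; returns |coefficients| as an
    (n_scales, n_samples) time-frequency representation (TFR)."""
    tfr = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        # unit-energy Morlet wavelet at scale s
        psi = (np.pi ** -0.25 / np.sqrt(s)) * np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2)
        tfr[i] = np.abs(np.convolve(x, np.conj(psi)[::-1], mode="same"))
    return tfr

# Toy signal: a 10 Hz sine sampled at 128 Hz for 2 s.
fs = 128
time = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * time)
tfr = morlet_cwt(x, scales=np.arange(2, 32))
print(tfr.shape)  # (30, 256)
```

For a pure 10 Hz tone the energy concentrates around the scale s ≈ w0·fs/(2π·10) ≈ 12, which is what makes the classes visually separable in a TFR image.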
Figure 4. AlexNet layer architecture.
Figure 5. Feature tree.
Figure 6. Bag of deep features (BoDF).
Figure 7. Selecting the value of k.
Figure 8. Classification accuracy.
Average classification accuracy (%).
| Classifier | k Value | SEED Accuracy (%) | DEAP Accuracy (%) |
|---|---|---|---|
| SVM | 10 | 93.8 | 77.4 |
|  | 8 | 92.6 | 76.3 |
|  | 6 | 92.4 | 76.1 |
|  | 4 | 91.8 | 75.3 |
|  | 2 | 90.9 | 75.1 |
| k-NN | 10 | 91.4 | 73.6 |
|  | 8 | 90.2 | 71.1 |
|  | 6 | 87.4 | 69.8 |
|  | 4 | 87.1 | 68.5 |
|  | 2 | 86.6 | 67.3 |
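The final classification stage (SVM and k-NN over BoDF histograms) can be illustrated with scikit-learn. The synthetic 30-bin histogram features below are a stand-in for the paper's data; the kernel, C, and n_neighbors values are illustrative assumptions, not the reported settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic 30-bin BoDF histograms for 3 emotion classes, 40 trials each;
# each class gets extra mass in one bin so the classes are separable.
rng = np.random.default_rng(1)
X = np.vstack([rng.dirichlet(np.ones(30) + 5 * np.eye(30)[c], size=40)
               for c in range(3)])
y = np.repeat(np.arange(3), 40)

results = {}
for name, clf in [("SVM", SVC(kernel="rbf", C=10)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.3f}")
```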
Comparison with previous studies on publicly available data sets.
| Ref. | Features | Dataset | Number of Channels | Classifier | Accuracy (%) |
|---|---|---|---|---|---|
| [ ] | MOCAP | IEMOCAP | 62 | CNN | 71.04 |
| [ ] | MFM | DEAP | 18 | CapsNet | 68.2 |
| [ ] | MFCC | SEED | 12 | SVM | 83.5 |
|  |  | SEED | 12 | Random Forest | 72.07 |
|  |  | DEAP | 6 | Random Forest | 72.07 |
| [ ] | MEMD | DEAP | 12 | ANN | 75 |
|  |  | DEAP | 12 | k-NN | 67 |
| [ ] | STRNN | SEED | 62 | CNN | 89.5 |
| [ ] | RFE | SEED | 18 | SVM | 90.4 |
|  |  | DEAP | 12 | SVM | 60.5 |
| [ ] | DE | MAHNOB | 18 | PNN | 77.8 |
|  |  | DEAP | 32 | PNN | 79.3 |
| Our work | DWT-BoDF | SEED | 62 | SVM | 93.8 |
|  |  | SEED | 62 | k-NN | 91.4 |
|  |  | DEAP | 32 | SVM | 77.4 |
|  |  | DEAP | 32 | k-NN | 73.6 |
Figure 9. Classification accuracy at different values of the number of clusters k.