Lin Shu, Jinyan Xie, Mingyue Yang, Ziyi Li, Zhenqi Li, Dan Liao, Xiangmin Xu, Xinyi Yang.
Abstract
Emotion recognition based on physiological signals has been a hot topic and has been applied in many areas such as safe driving, health care, and social security. In this paper, we present a comprehensive review of physiological-signal-based emotion recognition, covering emotion models, emotion elicitation methods, published emotional physiological datasets, features, classifiers, and the overall framework for emotion recognition based on physiological signals. A summary of and comparison among recent studies is conducted, which reveals currently existing problems, and future work is discussed.
Keywords: classifiers; emotion model; emotion recognition; emotion stimulation; features; physiological signals
Year: 2018 PMID: 29958457 PMCID: PMC6069143 DOI: 10.3390/s18072074
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Plutchik’s Wheel of Emotions.
Figure 2. 2D emotion space model.
Figure 3. 3D emotion space model.
Figure 4. (a) MAUI (Multimodal Affective User Interface); (b) framework of AVRS; (c) screenshots of VR scenes.
Figure 5. Positions of the bio-sensors.
Figure 6. (a) Mean accuracy of different channels; (b) performance of different window sizes; (c) average accuracies of GELM; (d) spectrogram showing different patterns for different emotions.
The relationship between emotions and physiological features *.
| Feature | Anger | Anxiety | Embarrassment | Fear | Amusement | Happiness | Joy |
|---|---|---|---|---|---|---|---|
| HR | ↑ | ↑ | ↑ | ↑ | ↑↓ | ↑ | ↑ |
| HRV | ↓ | ↓ | ↓ | ↓ | ↑ | ↓ | ↑ |
| LF | ↑ | − | − | | | | |
| LF/HF | ↑ | − | | | | | |
| PWA | ↑ | | | | | | |
| PEP | ↓ | ↓ | ↓ | ↑ | ↑ | ↑↓ | |
| SV | ↑↓ | − | ↓ | − | ↓ | | |
| CO | ↑↓ | ↑ | − | ↑ | ↓ | − | − |
| SBP | ↑ | ↑ | ↑ | ↑ | ↑− | ↑ | ↑ |
| DBP | ↑ | ↑ | ↑ | ↑ | ↑− | ↑ | − |
| MAP | ↑ | ↑ | ↑− | ↑ | | | |
| TPR | ↑ | ↓ | ↑ | ↑ | − | | |
| FPA | ↓ | ↓ | ↓ | ↓ | ↑↓ | | |
| FPTT | ↓ | ↓ | ↓ | ↑ | | | |
| EPTT | ↓ | ↓ | ↑ | | | | |
| FT | ↓ | ↓ | ↓ | − | ↑ | | |
| SCR | ↑ | ↑ | ↑ | ↑ | | | |
| nSRR | ↑ | ↑ | ↑ | ↑ | ↑ | ↑ | |
| SCL | ↑ | ↑ | ↑ | ↑ | ↑ | ↑− | − |
| RR | ↑ | ↑ | ↑ | ↑ | ↑ | ↑ | |
| Ti | ↓ | ↓ | ↓− | ↓ | ↓ | | |
| Te | ↓ | ↓ | ↓ | ↓ | | | |
| Pi | ↑ | ↑ | ↓ | | | | |
| Ti/Ttot | ↑ | ↓ | | | | | |
| Vt | ↑↓ | ↓ | ↑↓ | ↑↓ | ↑↓ | | |
| Vi/Ti | ↑ | | | | | | |
| PSD (α wave) | ↑ | ↑ | ↓ | ↑ | ↑ | ↑ | |
| PSD (β wave) | ↓ | ↑ | | | | | |
| PSD (γ wave) | ↓ | ↑ | ↑ | ↑ | | | |
| DE (average) | ↑ | − | ↓ | ↑ | ↑ | | |
| DASM (average) | − | ↑ | ↓ | ↓ | ↓ | | |
| RASM (average) | ↑ | ↑ | ↓ | | | | |
Note. * Symbols indicate increased (↑), decreased (↓), or unchanged (−) activation from baseline; ↑↓ indicates both increases and decreases reported in different studies, and combined symbols such as ↑− indicate that both an increase and no change have been reported.
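The cardiac entries above (HR, HRV) are typically derived from the R-R interval series of the ECG. As a minimal illustrative sketch (not from the reviewed studies; the interval values are made up), mean heart rate and the common time-domain HRV statistics SDNN and RMSSD can be computed as:

```python
import math

def hr_features(rr_ms):
    """Basic heart-rate features from a list of R-R intervals (ms):
    mean HR (bpm), SDNN (ms), and RMSSD (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    hr = 60000.0 / mean_rr  # beats per minute
    # SDNN: sample standard deviation of the R-R intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive interval differences
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return hr, sdnn, rmssd

# Hypothetical, slightly irregular ~750 ms R-R series (HR around 80 bpm)
hr, sdnn, rmssd = hr_features([750, 760, 740, 755, 745, 750])
```

Frequency-domain features such as LF, HF, and LF/HF would additionally require spectral estimation over the resampled interval series.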
Figure 7. (a) Logical scheme of the overall short-time emotion recognition concept; (b) instantaneous tracking of the HRV indices computed from a representative subject using the proposed NARI model during passive emotional elicitation (two neutral sessions alternated with an L-M and an M-H arousal session); (c) diagram of the proposed method; (d) experimental results.
Figure 8. (a) The Emotion Check device; (b) diagram of the components of the Emotion Check device; (c) prototype of a glove with a sensor unit; (d) BodyMedia SenseWear Armband; (e) left: the physiological measures of EMG and EDA; middle: the physiological measures of EEG, BVP, and TMP; right: the physiological sensors used in the experiments; (f) illustration of R-TIPS, a platform for wireless monitoring of cardiac signals, consisting of a transmitter system and three sensors; (g) the transmitter system is placed on the participant’s hip, and the sensors are placed below the right breast, on the right side, and on the back.
Figure 9. (a) Monitoring of epileptic seizures using EDA; (b,c) wearable GSR sensor.
Figure 10. Emotion recognition process using physiological signals under target emotion stimulation.
Figure 11. (a) Decomposition of the R-R interval signal (emotion of sadness); (b) structure of the Autoencoder; (c) structure of the Bimodal Deep AutoEncoder.
Figure 12. (a) Typical framework of multimodal information fusion; (b) SVM results for different emotions with EEG frequency bands; (c) demo of the proposed feature-level fusion: a feature vector created at any time step is valid for the next two steps.
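Feature-level fusion, as in panels (a) and (c), concatenates the feature vectors extracted from each modality before a single classifier is trained. A minimal sketch (the modality names and feature values are illustrative, not from the reviewed studies):

```python
def feature_level_fusion(*modality_features):
    """Feature-level fusion: concatenate the feature vectors extracted
    from each modality into one joint vector for a single classifier."""
    fused = []
    for features in modality_features:
        fused.extend(features)
    return fused

# Hypothetical per-modality features
eeg = [0.61, 0.22, 0.17]  # e.g., alpha/beta/gamma band powers
ecg = [72.0, 45.3]        # e.g., mean HR, SDNN
gsr = [2.4]               # e.g., skin conductance level
fused = feature_level_fusion(eeg, ecg, gsr)  # 6-dimensional joint vector
```

Decision-level fusion, by contrast, would train one classifier per modality and combine their outputs (e.g., by voting or weighted averaging).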
Figure 13. Classification models.
Figure 14. (a) Structure of the standard RNN and LSTM; (b) structure and settings of the CNN.
The confusion matrix of classification.

| True Situation | Predicted Positive | Predicted Negative |
|---|---|---|
| Positive | true positive (TP) | false negative (FN) |
| Negative | false positive (FP) | true negative (TN) |
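Given true and predicted labels, the four cells of the matrix above, and the recognition rate (accuracy) derived from them, can be tallied directly; a minimal sketch with made-up labels:

```python
def confusion_counts(y_true, y_pred):
    """Tally the four cells of the binary confusion matrix
    (rows: true situation; columns: prediction)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fn, fp, tn

# Hypothetical binary labels (1 = positive class)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
tp, fn, fp, tn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)  # recognition rate
```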
Figure 15. ROC curve.
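The ROC curve plots the true-positive rate against the false-positive rate as the decision threshold varies, and the area under it (AUC) equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. A small sketch using that rank interpretation (the classifier scores are made up):

```python
def auc(pos_scores, neg_scores):
    """AUC of the ROC curve via its rank interpretation: the probability
    that a randomly chosen positive sample outranks a randomly chosen
    negative one (ties count as 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores for positive and negative samples
area = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs ranked correctly
```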
Figure 16. Comparative results on the same publicly accessible datasets: (a) DEAP; (b) MAHNOB database; (c) SEED.
Table 3. Summary of previous research.
| No. | Author | Stimulus | Subjects | Subject Dependency | Emotions | Signals | Features | Classifiers | Recognition Rates |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Petrantonakis P C, et al. [ | IAPS | 16 (9 males, 7 females) | No | happiness, surprise, anger, fear, disgust, sadness | EEG | FD, HOC | KNN, QDA, MD, SVM | 85.17% |
| 2 | Samara A, et al. [ | videos | 32 | Yes | arousal, valence | EEG | statistical features, PSD, HOC | SVM | Bipartition: 79.83% |
| 3 | Jianhai Zhang et al. [ | videos | 32 | Yes | arousal, valence | EEG | power | PNN, SVM | 81.76% for PNN |
| 4 | Ping Gong et al. [ | music | - | Yes | joy, anger, sadness, pleasure | ECG, EMG, RSP, SC | statistical features | C4.5 decision tree | 92% |
| 5 | Gyanendra Kumar Verma et al. [ | videos | 32 | Yes | terrible, love, hate, sentimental, lovely, happy, fun, shock, cheerful, depressing, exciting, melancholy, mellow | EEG + 8 peripheral signals | different powers, STD and SE of detail and approximation coefficients | SVM, MLP, KNN, MMC | EEG only: 81% |
| 6 | Vitaliy Kolodyazhniy et al. [ | film clips | 34 (25 males, 19 females) | Both | fear, sadness, neutral | ECG, GSR, RSP, T, EMG, Capnography | HR, RSA, PEP, SBP, SCL, SRR, RR, Vt, pCO2, FT, ACT, SM, CS, ZM | KNN, MLP, QDA, LDA, RBNF | subject-dependent: 81.90% |
| 7 | Dongmin Shin et al. [ | videos | 30 | Yes | amusement, fear, sadness, joy, anger, and disgust | EEG, ECG | relative power, LF/HF | BN | 98.06% |
| 8 | Foteini Agrafioti et al. [ | IAPS and video game | 44 | No | valence, arousal | ECG | BEMD: instantaneous frequency, local oscillation | LDA | arousal: |
| 9 | Wanhui Wen et al. [ | videos | - | No | amusement, grief, anger, fear, baseline | OXY, GSR, ECG | 155 HR features and | RF | 74% (leave-one-out, LOO) |
| 10 | Jonghwa Kim et al. [ | music | 3 | Both | valence, arousal | ECG, EMG, RSP, SC | 110 features | pLDA | subject-dependent: 95% |
| 11 | Cong Zong et al. [ | music | - | Yes | joy, anger, sadness and pleasure | ECG, EMG, SC, RSP | HHT:instantaneous frequency, weighted mean instantaneous frequency | SVM | 76% |
| 12 | Gaetano Valenza et al. [ | IAPS | 35 | No | 5 level valence | ECG, EDR, RSP | 89 standard features, | QDA | >90% |
| 13 | Wee Ming Wong et al. [ | music | - | Yes | joy, anger, sadness, pleasure | ECG, EMG, SC, RSP | 32 features: mean, STD, breathing rate and amplitude, heartbeat, etc. | PSO of synergetic neural classifier (PSO-SNC) | SBS: 86% |
| 14 | Leila Mirmohamadsadeghi et, al. [ | videos | 32 | Yes | valence, arousal | EMG, RSP | slope of the phase difference of the RSA and the respiration | SVM | 74% for valence, 74% for arousal and 76% for liking. |
| 15 | Chi-Keng Wu et al. [ | film clips | 33 | Yes | love, sadness, joy, anger, fear | RSP | EES | KNN5 | 88% |
| 16 | Xiang Li et al. [ | videos | 32 | Yes | valence, arousal | EEG | CWT, CNN | LSTM | 72.06% for valence, |
| 17 | Zied Guendil et al. [ | music | - | Yes | joy, anger, sadness, pleasure | EMG, RESP, ECG, SC | CWT | SVM | 95% |
| 18 | Yuan-Pin Lin et al. [ | music | 26 (16 males,10 females) | No | joy, anger, sadness, pleasure | EEG | DASM, PSD, RASM | MLP, SVM | 82.29% |
| 19 | Gaetano Valenza et al. [ | IAPS | - | No | valence, arousal | ECG | spectral, HOS | SVM | 79.15% for valence, |
| 20 | Bo Cheng et al. [ | - | - | Yes | joy, anger, sadness, pleasure | EMG | DWT | BP | 75% |
| 21 | Saikat Basu et al. [ | IAPS | 30 | Yes | valence, arousal | GSR, HR, RSP, SKT | mean, covariance matrix | LDA, QDA | 98% for HVHA, |
| 22 | Jingxin Liu et al. [ | videos | 32 | Yes | valence, arousal | EEG | statistical features, | KNN5, RF | 69.9% for valence, |
| 23 | Hernan F. Garcia et al. [ | videos | 32 | Yes | valence, arousal | EEG, EMG, EOG, | Gaussian process latent variable models | SVM | 88.33% for 3 level valence, |
| 24 | Han-Wen Guo et al. [ | movie clips | 25 | Yes | positive, negative | ECG | Mean RRI, CVRR, SDRR, SDSD, LF, HF, LF/HF, Kurtosis, entropy | SVM | 71.40% |
| 25 | Mahdis Monajati et al. [ | - | 13 (6 males, 7 females) | Yes | negative, neutral | GSR, HR, RSP | GSR-dif = (GSR-max) − (GSR-base), | Fuzzy Adaptive Resonance Theory | 94% |
| 26 | Lan Z et al. [ | IADS | 5 | Yes | positive, negative | EEG | FD, five statistical features, HOC, power | SVM | 73.10% |
| 27 | Zheng W L et al. [ | videos | 47 | Yes | valence, arousal | EEG | PSD, DE, DASM, RASM, DCAU | GELM (graph-regularized extreme learning machine) | 69.67% in DEAP |
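Several studies above evaluate subject-independent models with a leave-one-out (LOO) protocol, holding out all samples of one subject per fold. A minimal sketch of such a split generator (the subject labels are hypothetical):

```python
def leave_one_subject_out(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) so that each
    subject's samples appear exactly once as the test set -- the
    subject-independent protocol used by several studies above."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# Hypothetical sample-to-subject assignment for five recordings
subjects = ["s01", "s01", "s02", "s02", "s03"]
splits = list(leave_one_subject_out(subjects))
```

Subject-dependent evaluation instead splits within each subject's own recordings, which generally yields the higher rates reported in the table.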
Figure 17. Comparison of recognition rates among previous research.
Figure 18. Subject-dependent and subject-independent recognition rates. (The horizontal axis uses the same sequence numbers as Table 3. Colors denote the number of classification categories: blue, two; yellow, three; green, four; grey, five; purple, six.)