Terence K. L. Hui, R. Simon Sherratt.
Abstract
The present research proposes a novel emotion recognition framework for the computer prediction of human emotions using common wearable biosensors. Emotional perception promotes specific patterns of biological responses in the human body, and this can be sensed and used to predict emotions using only biomedical measurements. Based on theoretical and empirical psychophysiological research, the foundation of autonomic specificity facilitates the establishment of a strong background for recognising human emotions using machine learning on physiological patterning. However, a systematic way of choosing the physiological data covering the elicited emotional responses for recognising the target emotions is not obvious. The current study demonstrates through experimental measurements the coverage of emotion recognition using common off-the-shelf wearable biosensors based on the synchronisation between audiovisual stimuli and the corresponding physiological responses. The work forms the basis of validating the hypothesis for emotional state recognition in the literature and presents coverage of the use of common wearable biosensors coupled with a novel preprocessing algorithm to demonstrate the practical prediction of the emotional states of wearers.
Keywords: EDA (electrodermal activity); EMG (electromyography); PPG (photoplethysmography); basic emotions; emotion prediction; emotion recognition; orienting response; physiological specificity; skin temperature; wearable biosensors
Year: 2018 PMID: 29587375 PMCID: PMC6023004 DOI: 10.3390/bios8020030
Source DB: PubMed Journal: Biosensors (Basel) ISSN: 2079-6374
Figure 1. Simplified emotion recognition using common wearable biosensors. SKT, skin temperature.
Figure 2. Block diagram of the emotionWear emotion recognition framework. SPHERE, Sensor Platform for HEalthcare in a Residential Environment.
Table 1. Feature extraction from PPG, EDA and SKT sensors.
| Features | Calculations Based on Python (import numpy, pandas and scipy) |
|---|---|
| IBI | Peak detection on the raw PPG signal yields an array of peak-to-peak intervals (ppgnn) |
| | Interbeat interval (IBI) = ppgnn.interpolate(method="cubic") |
| HR | Heart rate = (60 s × sampling frequency)/peak-to-peak duration |
| | HR = IBI.rolling(window, min_periods=1, center=True).mean() |
| SDNN | Standard deviation of IBI |
| | SDNN = IBI.rolling(window, min_periods=1, center=True).std() |
| SDSD | Standard deviation of the differences between adjacent ppgnn |
| | ppgdiff = pd.DataFrame(np.abs(np.ediff1d(ppgnn))) |
| | ppgdiff = ppgdiff.interpolate(method="cubic") |
| | SDSD = ppgdiff.rolling(window, min_periods=1, center=True).std() |
| RMSSD | Root mean square of the differences between adjacent ppgnn |
| | ppgsqdiff = pd.DataFrame(np.power(np.ediff1d(ppgnn), 2)) |
| | ppgsqdiff = ppgsqdiff.interpolate(method="cubic") |
| | RMSSD = np.sqrt(ppgsqdiff.rolling(window, min_periods=1, center=True).mean()) |
| SDNN/RMSSD | Ratio between SDNN and RMSSD |
| | SDNN_RMSSD = SDNN / RMSSD |
| LF | Power spectral density (PSD) over the low-frequency range (0.04 Hz to 0.15 Hz) |
| | Y = np.fft.fft(IBI)/window; Y = Y[range(window//2)] |
| | LF = np.trapz(np.abs(Y[(freq >= 0.04) & (freq <= 0.15)])) |
| HF | PSD over the high-frequency range (0.15 Hz to 0.4 Hz) |
| | HF = np.trapz(np.abs(Y[(freq >= 0.15) & (freq <= 0.4)])) |
| LF/HF | PSD ratio between LF and HF |
| | LHF = LF / HF |
| EDA (filtered) | eda = raw EDA signal sampled every 100 ms |
| | B, A = signal.butter(2, 0.005, output="ba") |
| | EDAf = signal.filtfilt(B, A, eda) |
| EDA (mean) | Rolling mean of the filtered EDA signal (EDAf) |
| | EDAmean = EDAf.rolling(window, min_periods=1, center=True).mean() |
| EDA (std) | Rolling standard deviation of the filtered EDA signal (EDAf) |
| | EDAstd = EDAf.rolling(window, min_periods=1, center=True).std() |
| SKT (filtered) | skt = raw SKT signal sampled every 100 ms |
| | B, A = signal.butter(2, 0.005, output="ba") |
| | SKTf = signal.filtfilt(B, A, skt) |
| SKT (mean) | Rolling mean of the filtered SKT signal (SKTf) |
| | SKTmean = SKTf.rolling(window, min_periods=1, center=True).mean() |
| SKT (std) | Rolling standard deviation of the filtered SKT signal (SKTf) |
| | SKTstd = SKTf.rolling(window, min_periods=1, center=True).std() |
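The calculations in Table 1 can be strung together into a single preprocessing pass. The sketch below is a minimal illustration of those steps, not the authors' code: the 10 Hz sampling rate (the 100 ms period from the EDA/SKT rows) and the 30-sample rolling window are assumptions, the function and constant names (ppg_features, lf_hf, filtered_stats, FS, WINDOW) are illustrative, and the LF/HF computation keeps the table's simplification of transforming the IBI series directly rather than resampling it onto a uniform time base first.

```python
# Minimal sketch of the Table 1 pipeline; window length and sampling rate are assumptions.
import numpy as np
import pandas as pd
from scipy import signal, integrate

FS = 10          # Hz, i.e. one sample every 100 ms as in the EDA/SKT rows (assumed for PPG too)
WINDOW = 30      # rolling-window length in samples (assumed)


def ppg_features(ppg: np.ndarray) -> pd.DataFrame:
    """IBI, HR and HRV-style statistics from a raw PPG trace."""
    peaks, _ = signal.find_peaks(ppg)                          # peak detection
    ppgnn = pd.Series(np.diff(peaks) / FS)                     # peak-to-peak intervals in seconds
    ibi = ppgnn.interpolate(method="cubic")                    # IBI series
    roll = dict(min_periods=1, center=True)
    feats = pd.DataFrame({
        "HR": (60.0 / ibi).rolling(WINDOW, **roll).mean(),     # beats per minute, smoothed
        "SDNN": ibi.rolling(WINDOW, **roll).std(),
    })
    diff = pd.Series(np.abs(np.ediff1d(ibi))).interpolate(method="cubic")
    feats["SDSD"] = diff.rolling(WINDOW, **roll).std()
    sqdiff = pd.Series(np.power(np.ediff1d(ibi), 2)).interpolate(method="cubic")
    feats["RMSSD"] = np.sqrt(sqdiff.rolling(WINDOW, **roll).mean())
    feats["SDNN_RMSSD"] = feats["SDNN"] / feats["RMSSD"]
    return feats


def lf_hf(ibi: pd.Series) -> tuple:
    """Crude LF/HF band powers from the IBI series (no uniform resampling, as in the table)."""
    n = len(ibi)
    spectrum = np.abs(np.fft.rfft(ibi.to_numpy() - ibi.mean())) / n
    freq = np.fft.rfftfreq(n, d=1.0 / FS)
    lf = integrate.trapezoid(spectrum[(freq >= 0.04) & (freq <= 0.15)])
    hf = integrate.trapezoid(spectrum[(freq >= 0.15) & (freq <= 0.40)])
    return lf, hf, lf / hf


def filtered_stats(raw: np.ndarray, prefix: str) -> pd.DataFrame:
    """2nd-order Butterworth low-pass filter followed by rolling mean/std (EDA and SKT rows)."""
    b, a = signal.butter(2, 0.005, output="ba")                # cutoff as a fraction of Nyquist
    xf = pd.Series(signal.filtfilt(b, a, raw))
    roll = dict(min_periods=1, center=True)
    return pd.DataFrame({
        f"{prefix}_mean": xf.rolling(WINDOW, **roll).mean(),
        f"{prefix}_std": xf.rolling(WINDOW, **roll).std(),
    })
```

The window length trades smoothing against responsiveness: a longer window steadies SDNN and RMSSD but blurs the short-lived orienting responses discussed around Figure 4.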
Table 2. Elicitation of basic emotions using referenced stimuli.
| Emotions | Pictures (Numbers Refer to the IAPS Database) | Film Clips (Names and Durations of Film Clips Refer to Schaefer et al.) |
|---|---|---|
| Happiness/Joy | High valence rating: #1710 (Puppies), #1750 (Bunnies), #5833 (Beach), #1460 (Kitten), #2050 (Baby), #1440 (Seal), #2040 (Baby), #2070 (Baby), #8190 (Skier), #2080 (Babies) | (1) Something About Mary [2]; (2) A Fish Called Wanda; (3) When Harry Met Sally |
| Anger | | (1) Schindler's List [2]; (2) Sleepers; (3) Leaving Las Vegas |
| Fear | Low valence rating: #3053 (BurnVictim), #3102 (BurnVictim), #3000 (Mutilation), #3064 (Mutilation), #3170 (BabyTumor), #3080 (Mutilation), #3063 (Mutilation), #9410 (Soldier), #3131 (Mutilation), #3015 (Accident) | (1) The Blair Witch Project; (2) The Shining; (3) Misery |
| Disgust | | (1) Trainspotting [2]; (2) Seven [3]; (3) Hellraiser |
| Sadness | | (1) City of Angels; (2) Dangerous Minds; (3) Philadelphia |

The IAPS pictures form two valence categories (high and low) shared across the emotion rows, rather than one set of pictures per target emotion; the film clips target the individual emotions.
Figure 3. Physiological responses for still-picture stimuli (IAPS). Each emotion recognition session using IAPS stimuli lasts a total of 520 s; the 20 chosen pictures in the two categories listed in Table 2 are shown as still images in sequence, six seconds per image ①, and the gaps between images are each filled with a 20-s black screen ②. The x-axis shows the arrangement of the two categories of IAPS pictures according to their unique reference numbers. All participants showed unpleasant emotion when the first negative-valence picture (#3053) was displayed. The upper panel shows a stronger heart-rate deceleration at the switch of emotional valence, and the lower panel shows a more pronounced skin-conductance change.
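As a quick check of the timing in the caption, the 520-s session length follows from 20 pictures at 6 s each plus 20 black-screen gaps of 20 s each. The snippet below only reproduces that arithmetic and derives picture onset times; the assumption that every picture, including the last, is followed by a 20-s gap is needed for the total to reach 520 s.

```python
# Session timeline implied by the Figure 3 caption: 20 pictures x 6 s + 20 gaps x 20 s = 520 s.
N_PICTURES, PICTURE_S, GAP_S = 20, 6, 20

total = N_PICTURES * (PICTURE_S + GAP_S)
assert total == 520  # matches the stated session length

# Onset time (in seconds) of each picture, assuming the session starts with the first picture.
onsets = [i * (PICTURE_S + GAP_S) for i in range(N_PICTURES)]
print(total, onsets[:3])  # 520 [0, 26, 52]
```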
Table 3. Film clips' elicitation analysis.
| Film Clips' Target Emotions (and Distribution) | Emotion (Target = Subjective) | Emotion Elicitation (Subjective) | Emotion Elicitation (Measured) | Hit Rate (Measured = Subjective) | Subjective Arousal (Average) | Subjective Valence (Average) |
|---|---|---|---|---|---|---|
| Joy (19%) | 100% | 19% | 12% | 60% | 3.50 | 3.75 |
| Anger (23%) | 0% | 0% | 4% | 0% | 0.00 | 0.00 |
| Fear (15%) | 43% | 27% | 12% | 43% | 5.43 | 7.43 |
| Disgust (21%) | 71% | 26% | 19% | 57% | 5.57 | 8.14 |
| Sadness (23%) | 57% | 18% | 7% | 43% | 4.20 | 7.14 |
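The agreement columns in Table 3 compare, per target emotion, the emotion the participant reported (subjective) with the target of the clip and with the emotion inferred from the measurements. The exact scoring used by the authors is not given here, so the sketch below is only one plausible reading of the column definitions, using hypothetical per-participant labels rather than the study's data.

```python
import pandas as pd

# Hypothetical per-participant records for film-clip sessions: the clip's target emotion,
# the emotion the participant reported (subjective) and the emotion inferred from the
# biosignals (measured). These values are illustrative only.
trials = pd.DataFrame({
    "target":     ["Joy", "Joy",     "Fear", "Fear",    "Disgust", "Disgust"],
    "subjective": ["Joy", "Joy",     "Fear", "Sadness", "Disgust", "Fear"],
    "measured":   ["Joy", "Sadness", "Fear", "Sadness", "Disgust", "Disgust"],
})

# "Emotion (Target = Subjective)": share of participants whose report matched the clip's target.
target_hits = (trials["target"] == trials["subjective"]).groupby(trials["target"]).mean()

# "Hit Rate (Measured = Subjective)": share of participants where the inferred emotion
# matched the reported one.
measure_hits = (trials["measured"] == trials["subjective"]).groupby(trials["target"]).mean()

print(pd.DataFrame({"target_eq_subjective": target_hits,
                    "measured_eq_subjective": measure_hits}))
```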
Table 4. Reasons for unsuccessful emotion elicitation.
| Reasons for Unsuccessful Emotion Elicitation | Percentage |
|---|---|
| Saw the film clip before (many times) | 17% |
| Film clip too short | 42% |
| Cannot understand the language | 4% |
| Did not feel the target emotion | 33% |
| Others | 4% |
Figure 4. Orienting responses during unsuccessful (upper) and successful (lower) emotion elicitation with film clips.
Table 5. Emotion recognition before and after validation of emotion elicitation.
| Biosignal | Responses Matching ANS Specificity Before Successful Emotion Elicitation | Responses Matching ANS Specificity After Successful Emotion Elicitation |
|---|---|---|
| HR | ||
| EDA | ||
| SKT |
Figure 5. Physiological responses for continuous emotion elicitations.