Nazmi Sofian Suhaimi, James Mountstephens, Jason Teo.
Abstract
Emotions are fundamental to human beings and play an important role in human cognition. Emotion is commonly associated with logical decision-making, perception, human interaction, and, to a certain extent, human intelligence itself. With the research community's growing interest in establishing meaningful "emotional" interactions between humans and computers, reliable and deployable solutions for identifying human emotional states are needed. Recent developments in using electroencephalography (EEG) for emotion recognition have garnered strong interest from the research community, as the latest consumer-grade wearable EEG solutions can provide a cheap, portable, and simple means of identifying emotions. Since the last comprehensive review covered the years 2009 to 2016, this paper updates the progress of emotion recognition using EEG signals from 2016 to 2019. This state-of-the-art review focuses on emotion stimulus type and presentation approach, study size, EEG hardware, machine learning classifiers, and classification approach. From this review, we suggest several future research opportunities, including a different approach to presenting stimuli in the form of virtual reality (VR). To this end, an additional section devoted specifically to reviewing VR studies within this research domain is presented as motivation for this proposed new approach using VR as the stimulus presentation device. This review is intended to be useful both for researchers working on emotion recognition using EEG signals and for those venturing into this field of research.
Year: 2020 PMID: 33014031 PMCID: PMC7516734 DOI: 10.1155/2020/8875426
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1: The limbic system (source: https://courses.lumenlearning.com/boundless-psychology/chapter/biology-of-emotion/).
Figure 2: The 10–20 EEG electrode positioning system (source: [56]).
Figure 3: A 14-channel low-cost wearable EEG headset (Emotiv EPOC) worn by a subject (source: [57]).
Figure 4: 8- to 16-channel Ultracortex Mark IV (source: https://docs.openbci.com/docs/04AddOns/01-Headwear/MarkIV).
Figure 5: A medical-grade EEG headset, the 10-channel B-Alert X10 (source: [59]).
Market-available EEG headsets in the low- and middle-cost ranges.
| Product tier | Products | Channel positions | Sampling rate | Electrodes | Cost |
|---|---|---|---|---|---|
| Low-cost range (USD 99–USD 1,000) | Emotiv EPOC+ | AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4 | 32–64 Hz | 14 | USD 799.00 |
| | NeuroSky MindWave | FP1 | 512 Hz | 1 | USD 99.00 |
| | Ultracortex “Mark IV” EEG headset | FP2, FP1, C4, C3, P8, P7, O2, O1 | 128 Hz | 8–16 | USD 349.99 |
| | Interaxon Muse | AF7, AF8, TP9, TP10 | 256 Hz | 4 | USD 250.00 |
| Middle-cost range (USD 1,000–USD 25,000) | B-Alert X Series | Fz, F3, F4, Cz, C3, C4, P3, P4, POz | 256 Hz | 10 | (Undisclosed) |
| | ANT-Neuro eego rt | AF7, AF3, AF4, AF8, F5, F1, F2, F6, FT7, FC3, FCz, FC4, FT8, C5, C1, C2, C6, TP7, CP3, CPz, CP4, TP8, P5, P1, P2, P6, PO7, PO5, PO3, PO4, PO6, PO8 | 2048 Hz | 64 | (Undisclosed) |
Figure 6: 21-channel OpenBCI electrode cap kit (source: https://docs.openbci.com/docs/04AddOns/01-Headwear/ElectrodeCap).
EEG signal frequency bands and their associated functions.
| Band name | Frequency band (Hz) | Functions |
|---|---|---|
| Delta | <4 | Usually associated with the unconscious mind and occurs in deep sleep |
| Theta | 4–7 | Usually associated with the subconscious mind and occurs in sleeping and dreaming |
| Alpha | 8–15 | Usually associated with a relaxed yet aware mental state; correlated with brain activation |
| Beta | 16–31 | Usually associated with active mind state and occurs during intense focused mental activity |
| Gamma | >32 | Usually associated with intense brain activity |
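The band boundaries above can be captured in a small lookup function. This is a minimal sketch assuming exactly the cut-offs tabulated here; band edges vary slightly across the literature:

```python
def eeg_band(freq_hz: float) -> str:
    """Map a frequency in Hz to its EEG band name, using the
    cut-offs from the table above (boundaries vary by author)."""
    if freq_hz < 4:
        return "delta"
    elif freq_hz <= 7:
        return "theta"
    elif freq_hz <= 15:
        return "alpha"
    elif freq_hz <= 31:
        return "beta"
    else:
        return "gamma"

print(eeg_band(10))  # alpha: relaxed yet aware mental state
print(eeg_band(40))  # gamma: intense brain activity
```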
Publicly available datasets for emotion stimulation and emotion recognition, collected using different neurophysiological signal acquisition methods.
| Item No. | Dataset | Description |
|---|---|---|
| 1 | DEAP | The “Dataset for Emotion Analysis using Physiological and Video Signals” is an open-source dataset for analyzing human affective states. It consists of recordings from 32 participants watching 40 music video clips, with each clip rated for the level of emotion evoked |
| 2 | IADS | “The International Affective Digitized Sounds” system is a collection of digital sounds used to evoke emotional responses through acoustics in investigations of an individual's emotion and attention |
| 3 | IAPS | “The International Affective Picture System” is a collection of emotionally evocative pictures used to evoke emotional responses in investigations of an individual's emotion and attention |
| 4 | DREAMER | A dataset of EEG and ECG signals collected from 23 participants responding to audio-visual stimuli. Access to this dataset is restricted and can be requested by submitting a request form to the owners |
| 5 | ASCERTAIN | A “database for implicit personality and affect recognition” containing EEG, ECG, GSR, and facial activity signals from 58 individuals watching 36 movie clips with an average length of 80 seconds |
| 6 | SEED | The “SJTU Emotion EEG Dataset” is a collection of EEG signals from 15 individuals watching 15 movie clips, covering positive, negative, and neutral emotions |
| 7 | SEED-IV | An extension of the SEED dataset that targets four emotion labels (happy, sad, fear, and neutral), with eye-tracking data collected alongside the EEG signals |
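Datasets such as DEAP and SEED provide raw EEG time series, and a common first step in the reviewed studies is extracting per-band power features from short epochs. The following is a minimal sketch with NumPy; the 128 Hz sampling rate and the synthetic alpha-dominated signal are assumptions for illustration, and real pipelines typically use Welch's method rather than a single periodogram:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Average spectral power of `signal` in the [lo, hi) Hz band,
    computed from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

# Synthetic 1-second epoch at 128 Hz dominated by a 10 Hz (alpha) rhythm.
rng = np.random.default_rng(0)
fs = 128.0
t = np.arange(0, 1.0, 1.0 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(epoch, fs, 8, 16)
beta = band_power(epoch, fs, 16, 32)
print(alpha > beta)  # alpha power dominates this epoch
```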
Comparison of stimuli used to evoke emotions, stimulus clip lengths, and emotion classes evaluated.
| Research author | Stimuli | Dataset | Clip length | Emotion classes |
|---|---|---|---|---|
| [ | Music | IADS (4 songs) | 60 sec per clip | Pleasant, happy, frightened, angry |
| [ | Music | Self-Designed (40 songs) | — | Happy, angry, afraid, sad |
| [ | Music | Self-Designed (301 songs collected from different albums) | 30 sec per clip | Happy, angry, sad, peaceful |
| [ | Music | Self-Designed (1080 songs) | — | Anger, sadness, happiness, boredom, calm, relaxation, nervousness, pleased, and peace |
| [ | Music | Self-Designed (3552 songs from Baidu) | — | Contentment, depression, exuberance |
| [ | Music | 1000 songs from MediaEval | 45 sec per clip | Pleasing, angry, sad, relaxing |
| [ | Music | Self-Designed (25 songs + Healing4Happiness dataset) | 247.55 sec | Valence, arousal |
| [ | Music + picture | IAPS, Quran Verse, Self-Designed (Musicovery, AMG, Last.fm) | 60 sec per clip | Happy, fear, sad, calm |
| [ | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal, dominance, liking |
| [ | Music videos | DEAP (40 music videos) | — | Valence, arousal |
| [ | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal |
| [ | Music videos | DEAP (40 music videos) | 60 sec per clip | — |
| [ | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal |
| [ | Music videos | DEAP (40 music videos) | 60 sec per clip | Valence, arousal, dominance |
| [ | Video clips | Self-Designed (12 video clips) | 150 sec per clip | Happy, fear, sad, relax |
| [ | Video clips | DECAF (36 video clips) [ | 51–128 sec per clip | Valence, arousal |
| [ | Video clips | Self-designed (15 video clips) | 120–240 sec per clip | Happy, sad, fear, disgust, neutral |
| [ | Video clips | SEED (15 video clips), DREAMER (18 video clips) | SEED (240 sec per clip), DREAMER (65–393 sec per clip) | Negative, positive, and neutral (SEED). Amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise (DREAMER) |
| [ | Video clips | SEED (15 video clips) | 240 sec per clip | Positive, neutral, negative |
| [ | Video clips | Self-Designed (20 video clips) | 120 sec per clip | Valence, arousal |
| [ | VR | Self-Designed (4 scenes) | — | Arousal and valence |
| [ | VR | AVRS (8 scenes) | 80 sec per scene | Happy, sad, fear, relaxation, disgust, rage |
| [ | VR | Self-Designed (2 video clips) | 475 sec + 820 sec clip | Horror, empathy |
| [ | VR | Self-Designed (5 scenes) | 60 sec per scene | Happy, relaxed, depressed, distressed, fear |
| [ | VR | Self-Designed (1 scene) | — | Engagement, enjoyment, boredom, frustration, workload |
| [ | VR | Self-Designed (1 scene that changes colour intensity) | — | Anguish, tenderness |
| [ | VR | AVRS (4 scenes) | — | Happy, fear, peace, disgust, sadness |
| [ | VR | NAPS (Nencki Affective Picture System) (20 pictures) | 15 sec per picture | Happy, fear |
| [ | VR | Self-Designed (1 scene) | 90 sec per clip | Fear |
EEG headsets used, electrode placements, and frequency bands recorded.
| Research author | EEG headset model used | Brief description of electrode placements | Frequency bands recorded |
|---|---|---|---|
| [ | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Theta, alpha, lower-beta, upper-beta, gamma |
| [ | NeuroSky MindWave | Prefrontal | Delta, theta, low-alpha, high-alpha, low-beta, high-beta, low-gamma, mid-gamma |
| [ | actiChamp | Frontal, central, parietal, occipital | Delta, theta, alpha, beta, gamma |
| [ | AgCl Electrode Cap | — | Delta, theta, alpha, beta, gamma |
| [ | BioSemi ActiveTwo | Frontal | Delta, theta, alpha, beta, gamma |
| [ | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Delta, theta, alpha, beta, gamma |
| [ | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Delta, theta, alpha, beta, gamma |
| [ | Emotiv EPOC+ | Prefrontal-frontal, frontal, frontal-central, temporal, parietal, occipital | Delta, theta, alpha, beta, gamma |
| [ | Muse | Temporal-parietal, prefrontal-frontal | Delta, theta, alpha, beta, gamma |
| [ | NeuroSky MindWave | Prefrontal | Delta, theta, alpha, beta, gamma |
| [ | Emotiv EPOC+ | Prefrontal-frontal, frontal, frontal-central, temporal, parietal, occipital | Alpha, low-beta, high-beta, gamma, theta |
| [ | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | Alpha, beta |
| [ | Ag/AgCl sintered ring electrodes | Fp1, T3, F7, O1, T4, Fp2, C3, T5, F3, P3, T6, P4, O2, F4, F8 | — |
| [ | B-Alert X10 | Frontal, central, parietal | — |
| [ | BioSemi ActiveTwo | Prefrontal, prefrontal-frontal, frontal, frontal-central, temporal, central, central-parietal, parietal, parietal-occipital, occipital | — |
Comparison of classifiers used for emotion classification and their performance.
| Research author | Classifiers | Best performance achieved | Intersubject or Intrasubject |
|---|---|---|---|
| [ | Dynamical graph convolutional neural network | 90.40% | Intrasubject and intersubject |
| [ | Support vector machine | 80.76% | Intrasubject and intersubject |
| [ | Random forest, instance-based | 98.20% | Intrasubject |
| [ | Support vector machine | — | Intrasubject |
| [ | Multilayer perceptron | 76.81% | Intrasubject |
| [ | K-nearest neighbor | 95.00% | Intersubject |
| [ | Support vector machine | 73.10% | Intersubject |
| [ | Support vector machine, K-nearest neighbor, convolutional neural network, deep neural network | 82.81% | Intersubject |
| [ | Support vector machine | 81.33% | Intersubject |
| [ | Support vector machine, convolutional neural network | 81.14% | Intersubject |
| [ | Gradient boosting decision tree | 75.18% | Intersubject |
| [ | Support vector machine | 70.00% | Intersubject |
| [ | Support vector machine | 70.52% | Intersubject |
| [ | Support vector machine, naïve Bayes | 61.00% | Intersubject |
| [ | Support vector machine | 57.00% | Intersubject |
| [ | Support vector machine, K-nearest neighbor | — | Intersubject |
| [ | Support vector machine, K-nearest neighbor | 98.37% | — |
| [ | Convolutional neural network | 97.69% | — |
| [ | Support vector machine, backpropagation neural network, late fusion method | 92.23% | — |
| [ | Fisherface | 91.00% | — |
| [ | Haar, Fisherface | 91.00% | — |
| [ | Extreme learning machine | 87.10% | — |
| [ | K-nearest neighbor, support vector machine, multilayer perceptron | 86.27% | — |
| [ | Support vector machine, K-nearest neighbor, fuzzy networks, Bayes, linear discriminant analysis | 83.00% | — |
| [ | Naïve Bayes, support vector machine, K-means, hierarchical clustering | 78.06% | — |
| [ | Support vector machine, naïve Bayes, multilayer perceptron | 71.42% | — |
| [ | Gaussian process | 71.30% | — |
| [ | Naïve Bayes | 68.00% | — |
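As the table shows, the support vector machine is by far the most frequently used classifier in the reviewed studies. The following is a minimal sketch of an SVM-based emotion classifier with scikit-learn; the synthetic band-power features and the two-class valence labels are assumptions for illustration, not data from any of the studies above:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic band-power features (delta, theta, alpha, beta, gamma)
# for two illustrative classes, e.g. low vs. high valence.
n = 100
low = rng.normal(loc=[1.0, 1.0, 2.0, 1.0, 0.5], scale=0.3, size=(n, 5))
high = rng.normal(loc=[1.0, 1.0, 1.0, 2.0, 0.5], scale=0.3, size=(n, 5))
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)

# RBF-kernel SVM with feature scaling, a typical setup in the literature.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

In practice, the reviewed studies evaluate with held-out data, either within a subject (intrasubject) or across subjects (intersubject), which is the distinction tracked in the last column of the table.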
Reported numbers of participants in the reviewed emotion classification studies.
| Author | Emotion classes | Participants | Male | Female | Mean age ± SD |
|---|---|---|---|---|---|
| [ | Happy, sad, fear, relaxation, disgust, rage | 100 | 57 | 43 | — |
| [ | Arousal and valence (4 quadrants) | 60 | 16 | 44 | 28.9 ± 5.44 |
| [ | Valence, arousal | 58 (ASCERTAIN) | 37 | 21 | 30 |
| [ | Valence, arousal | 58 (ASCERTAIN) | 37 | 21 | 30 |
| [ | Valence, arousal (high and low) | 40 | 20 | 20 | 26.13 ± 2.79 |
| [ | Negative, positive, and neutral (SEED). Amusement, excitement, happiness, calmness, anger, disgust, fear, sadness, and surprise (DREAMER) | 15 (SEED), 23 (DREAMER) | 21 | 17 | 26.6 ± 2.7 |
| [ | Horror = (fear, anxiety, disgust, surprise, tension), empathy = (happiness, sadness, love, being touched, compassion, distressing, disappointment) | 38 | 19 | 19 | — |
| [ | Valence, arousal, dominance, liking | 32 (DEAP) | 16 | 16 | 26.9 |
| [ | Valence, arousal (high and low) | 32 (DEAP) | 16 | 16 | 26.9 |
| [ | Valence, arousal | 32 (DEAP) | 16 | 16 | 26.9 |
| [ | — | 32 (DEAP) | 16 | 16 | 26.9 |
| [ | Valence, arousal (2 class) | 32 (DEAP) | 16 | 16 | 26.9 |
| [ | Valence, arousal, dominance | 32 (DEAP) | 16 | 16 | 26.9 |
| [ | Happy, fear, peace, disgust, sadness | 13 (watching video materials), 18 (VR materials) | 13 | 18 | — |
| [ | Stress level (low and high) | 28 | 19 | 9 | 27.5 |
| [ | Valence, arousal (high and low) | 25 | — | — | — |
| [ | Fear | 22 | 14 | 8 | — |
| [ | Happy, fear, sad, relax | 20 | — | — | — |
| [ | Engagement, enjoyment, boredom, frustration, workload | 20 | 19 | 1 | 15.29 |
| [ | Happy, sad, fear, disgust, neutral | 16 | 6 | 10 | 23.27 ± 2.37 |
| [ | Anguish, tenderness | 16 | — | — | — |
| [ | Positive, neutral, negative | 15 (SEED) | 7 | 8 | — |
| [ | Happy, fear, sad, calm | 13 | 8 | 5 | — |
| [ | Happy, relaxed, depressed, distressed, fear | 10 | 10 | — | 21 |
| [ | Happy, fear | 6 | 5 | 1 | 26.67 ± 1.11 |
| [ | Pleasant, happy, frightened, angry | 5 | 4 | 1 | — |