| Literature DB >> 31597919 |
Karan Sharma, Claudio Castellini, Egon L. van den Broek, Alin Albu-Schaeffer, Friedhelm Schwenker.
Abstract
From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, direct, real-time inspection in realistic settings is not possible; discrete, indirect, post-hoc recordings are therefore the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution, as it focusses on real-time continuous annotation of emotions, as experienced by the participants while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allows simultaneous reporting of valence and arousal, which are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were obtained from ECG, BVP, EMG (3x), GSR (or EDA), respiration and skin-temperature sensors. The dataset consists of the physiological and annotation data from 30 participants, 15 male and 15 female, who watched several validated video stimuli. The validity of the emotion induction, as exemplified by the annotation and physiological data, is also presented.
Year: 2019 PMID: 31597919 PMCID: PMC6785543 DOI: 10.1038/s41597-019-0209-0
Source DB: PubMed Journal: Sci Data ISSN: 2052-4463 Impact factor: 6.444
Fig. 1 The typical experiment setup shows a participant watching a video and annotating using JERI. The central figure shows the video-playback window with the embedded annotation interface. The right-most figure shows the annotation interface in detail, where the Self-Assessment Manikins added to the valence and arousal axes can be seen.
Fig. 2 The plot on the left shows the annotations from one participant for the different videos (see Table 1) in the experiment. The annotations for the ‘scary-2’ video by the first five participants (labelled as p1–p5) can be seen in the plot on the right.
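The joystick interface reports valence and arousal simultaneously on SAM-anchored axes. As an illustrative sketch only (the actual scale bounds and mapping used by the authors are an assumption here), a normalized joystick axis in [-1, 1] can be mapped linearly onto a 1–9 SAM-style rating scale:

```python
def axis_to_sam(axis_value, lo=1.0, hi=9.0):
    """Linearly map a normalized joystick axis in [-1, 1] to a SAM-style
    rating scale [lo, hi]; out-of-range inputs are clamped first."""
    axis_value = max(-1.0, min(1.0, axis_value))
    return lo + (axis_value + 1.0) / 2.0 * (hi - lo)
```

With these defaults, a centred stick reads as the scale midpoint (5.0) and the extremes map to 1 and 9.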
The source, label, ID used, intended valence-arousal attributes and the duration of the videos used for the dataset.
| Source | Video-Label | Video-ID | Intended Valence | Intended Arousal | Dur. [s] |
|---|---|---|---|---|---|
| Hangover | amusing-1 | 1 | med/high | med/high | 185 |
| When Harry Met Sally | amusing-2 | 2 | med/high | med/high | 173 |
| European Travel Skills | boring-1 | 3 | low | low | 119 |
| Matcha: The way of Tea | boring-2 | 4 | low | low | 160 |
| Relaxing Music with Beach | relaxing-1 | 5 | med/high | low | 145 |
| Natural World: Zambezi | relaxing-2 | 6 | med/high | low | 147 |
| Shutter | scary-1 | 7 | low | high | 197 |
| Mama | scary-2 | 8 | low | high | 144 |
| Great Barrier Reef | startVid | 10 | — | — | 101 |
| Blue screen with end credits | endVid | 12 | — | — | 120 |
| Blue screen | bluVid | 11 | — | — | 120 |
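For programmatic use, the stimulus table above can be captured as a small lookup keyed on Video-ID. This is a minimal sketch transcribed from the table; the field names are illustrative and do not come from the dataset files themselves:

```python
# Video-ID -> stimulus metadata, transcribed from Table 1.
# Procedural videos (startVid, bluVid, endVid) have no intended attributes.
STIMULI = {
    1:  {"label": "amusing-1",  "valence": "med/high", "arousal": "med/high", "dur_s": 185},
    2:  {"label": "amusing-2",  "valence": "med/high", "arousal": "med/high", "dur_s": 173},
    3:  {"label": "boring-1",   "valence": "low",      "arousal": "low",      "dur_s": 119},
    4:  {"label": "boring-2",   "valence": "low",      "arousal": "low",      "dur_s": 160},
    5:  {"label": "relaxing-1", "valence": "med/high", "arousal": "low",      "dur_s": 145},
    6:  {"label": "relaxing-2", "valence": "med/high", "arousal": "low",      "dur_s": 147},
    7:  {"label": "scary-1",    "valence": "low",      "arousal": "high",     "dur_s": 197},
    8:  {"label": "scary-2",    "valence": "low",      "arousal": "high",     "dur_s": 144},
    10: {"label": "startVid",   "valence": None,       "arousal": None,       "dur_s": 101},
    11: {"label": "bluVid",     "valence": None,       "arousal": None,       "dur_s": 120},
    12: {"label": "endVid",     "valence": None,       "arousal": None,       "dur_s": 120},
}
```

Such a mapping makes it easy, for example, to select only the eight emotion-eliciting videos (those with a non-`None` intended valence).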
The type, number (No.), manufacturer and model of different sensors and instruments used in the experiment.
| Sensor/Instrument | No. | Manufacturer | Model | Conversion Equation | Units |
|---|---|---|---|---|---|
| ECG sensor | 1 | Thought Technology | SA9306 | | |
| BVP sensor | 1 | Thought Technology | SA9308M | | % |
| GSR sensor | 1 | Thought Technology | SA9309M | | |
| Respiration sensor | 1 | Thought Technology | SA9311M | | % |
| Skin temp. sensor | 1 | Thought Technology | SA9310M | | °C |
| EMG sensor | 3 | Thought Technology | SA9401M-50 | | |
| Sensor Isolator | 2 | Thought Technology | SE9405AM | — | — |
| ADC module | 1 | National Instruments | NI 9205 | — | — |
| DAQ system | 1 | National Instruments | cDAQ-9181 | — | — |
| Joystick | 1 | Thrustmaster | T.16000M | | — |
Wherever applicable, the conversion equations used to transform the logged input values to the desired output units/scales (see last column) are also presented.
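The per-sensor conversion equations transform logged ADC values into physical units. As a generic illustration of the count-to-voltage step only (assuming the NI 9205's ±10 V input range and two's-complement counts; the dataset's own per-sensor equations, not shown here, would then be applied to the voltage):

```python
def counts_to_volts(counts, v_range=10.0, bits=16):
    """Convert a signed two's-complement ADC reading to volts,
    assuming a symmetric +/- v_range input span."""
    full_scale = 2 ** bits  # 65536 codes for a 16-bit converter
    return counts * (2.0 * v_range / full_scale)
```

For a 16-bit, ±10 V converter, one count corresponds to roughly 0.3 mV of input.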
Fig. 3The schematic shows the various aspects of the experiment and the data acquisition setup. The arrows indicate the direction of the data-flow. The solid and the dotted lines indicate the primary and secondary tasks of the acquisition process, respectively.
The sensors and the various features extracted from the sensor signals.
| Sensor | Extracted Features |
|---|---|
| ECG | Heart Rate (HR) |
| | Inter-Beat Interval (IBI) |
| | Standard Deviation (SD) of NN-intervals (SDNN) |
| BVP | Heart Rate (HR) |
| | Inter-Beat Interval (IBI) |
| | Standard Deviation (SD) of NN-intervals (SDNN) |
| GSR | Skin Conductance Level (SCL) |
| | Skin Conductance Response (SCR) |
| Respiration | Respiration Rate (RR) |
| | Interval of Respiration peaks |
| Skin Temperature | Temperature |
| | SD of Temperature (SDT) |
| EMG–zygomaticus | Amplitude of the signal |
| EMG–corrugator | Amplitude of the signal |
| EMG–trapezius | Amplitude of the signal |
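Several of the cardiac features above (HR, IBI, SDNN) derive from beat detection on the ECG or BVP channel. A minimal sketch using threshold-crossing peak detection at the dataset's 1000 Hz sampling rate (the threshold, refractory period and function names are illustrative, not the authors' actual pipeline):

```python
import numpy as np

def detect_beats(signal, fs, threshold, refractory_s=0.25):
    """Return sample indices of upward threshold crossings,
    enforcing a refractory period between detections."""
    above = signal > threshold
    min_gap = int(refractory_s * fs)
    beats, last = [], -min_gap
    for i in range(1, len(signal)):
        if above[i] and not above[i - 1] and i - last >= min_gap:
            beats.append(i)
            last = i
    return np.asarray(beats)

def cardiac_features(signal, fs=1000.0, threshold=0.5):
    """Mean HR [bpm], mean IBI [s] and SDNN [ms] from one cardiac channel."""
    beats = detect_beats(signal, fs, threshold)
    ibi = np.diff(beats) / fs       # inter-beat intervals in seconds
    hr = 60.0 / ibi                 # instantaneous heart rate in bpm
    return float(hr.mean()), float(ibi.mean()), float(np.std(ibi * 1000.0))
```

On a regular pulse train with one beat per second, this yields a mean HR of 60 bpm, a mean IBI of 1 s, and an SDNN of 0 ms.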
The sensors and the features selected from each sensor.
| Sensor | Feature Selected |
|---|---|
| ECG | mean HR |
| BVP | Standard Deviation (SD) of NN-intervals (SDNN) |
| GSR | mean SCR |
| Respiration | mean RR |
| Skin Temperature | SD of Temperature (SDT) |
| EMG–zygomaticus | mean amplitude (mean Zygo) |
| EMG–corrugator | mean amplitude (mean Corr) |
| EMG–trapezius | mean amplitude (mean Trap) |
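The selected features form one feature vector per recording segment, which Fig. 5 projects onto the first two principal components. A minimal sketch of that projection via SVD of the centred feature matrix (the row/column layout of `X` is an assumption; this is standard PCA, not the authors' exact analysis code):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project rows of X (samples x features) onto the first
    n_components principal components of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = PCs
    return Xc @ Vt[:n_components].T
```

Because the data are centred first, the scores for each component have zero mean, and components are ordered by explained variance.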
Fig. 4 “Violin” plots of the distribution of the selected features and the mean annotation (valence & arousal) values across different types of videos. The box plots embedded in each violin plot show the Interquartile Range (IQR) for each considered variable, while a yellow diamond marks the mean of the distribution.
Fig. 5 Scatter plots of the mean annotation data and the first two principal components of the physiological data, labelled according to the types of videos. Ellipses denote one standard deviation.
| Measurement(s) | electrocardiogram data • respiration trait • blood flow trait • electrodermal activity measurement • temperature • muscle electrophysiology trait |
| Technology Type(s) | electrocardiography • Hall effect measurement system • photoplethysmography • Galvanic Skin Response • skin temperature sensor • electromyography |
| Factor Type(s) | sex • age |
| Sample Characteristic - Organism | Homo sapiens |