Stephanie Cheung, Elizabeth Han, Azadeh Kushki, Evdokia Anagnostou, Elaine Biddiss.
Abstract
For children with profound disabilities affecting communication, it can be extremely challenging to identify salient emotions such as anxiety. If left unmanaged, anxiety can lead to hypertension, cardiovascular disease, and other psychological diagnoses. Physiological signals of the autonomic nervous system are indicative of anxiety, but can be difficult to interpret for non-specialist caregivers. This paper evaluates an auditory interface for intuitive detection of anxiety from physiological signals. The interface, called "Biomusic," maps physiological signals to music (i.e., electrodermal activity to melody; skin temperature to musical key; heart rate to drum beat; respiration to a "whooshing" embellishment resembling the sound of an exhalation). The Biomusic interface was tested in two experiments. Biomusic samples were generated from physiological recordings of typically developing children (n = 10) and children with autism spectrum disorders (n = 5) during relaxing and anxiety-provoking conditions. Adult participants (n = 16) were then asked to identify "anxious" or "relaxed" states by listening to the samples. In a classification task with 30 Biomusic samples (1 relaxed state, 1 anxious state per child), classification accuracy, sensitivity, and specificity were 80.8% [standard error (SE) = 2.3], 84.9% (SE = 3.0), and 76.8% (SE = 3.9), respectively. Participants were able to form an early and accurate impression of the anxiety state within 12.1 (SE = 0.7) seconds of hearing the Biomusic with very little training (i.e., < 10 min) and no contextual information. Biomusic holds promise for monitoring, communication, and biofeedback systems for anxiety management.
Keywords: anxiety; augmentative and alternative communication (AAC); disability; music; sonification
Year: 2016 PMID: 27625593 PMCID: PMC5003931 DOI: 10.3389/fnins.2016.00401
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Physiological Signal-to-Music Mapping.
| Physiological signal | Extracted feature | Musical mapping |
| Electrodermal activity | Average skin conductance over a 0.25-s window in μS | Melodic pitch (“goblins” MIDI-sound): Three octaves of a major scale are mapped to each 1 μS of electrodermal activity. The first windowed, averaged EDA value establishes the tonic center of the scale; subsequent EDA values are compared to the first, producing a pitch shift proportional to the magnitude of change. Extreme levels of electrodermal activity are transposed to keep the melody in a comfortable range. Thus, quickly changing EDA produces active melodic runs. Melody may be suited to reflecting the salient, continuous changes in electrodermal activity that accompany anxiety state. |
| Skin temperature | Instantaneous skin temperature in °C | Key change (“choir” MIDI-sound chords): Major tonic chords are mapped to instantaneous skin temperature and updated every two bars. The first skin temperature value is mapped to the tonic chord of a C-major scale; subsequent increases or decreases in temperature raise or lower the key of the music, and therefore the chord, by one semitone. Key-change chords may be suited to representing gradual, continuous changes in skin temperature. |
| Blood volume pulse | Average interbeat interval over four periods of the signal | Tempo (“melodic tom” MIDI-sound): The pace of the music, marked by a percussive beat, is updated every musical bar. The tempo is determined from the average interbeat interval of the blood volume pulse signal over four periods, so an increased heart rate increases the tempo of the music and its beat. A drum beat is suited to representing the rhythmic, cyclic pulse of the circulatory system. |
| Respiration | Duration of exhalation (peak-to-trough intervals of the signal) | “Whoosh” (seashore embellishment): Expiratory time is mapped to the duration of a whooshing sound resembling an exhalation. The “whoosh” is sustained for as long as exhalation is detected and is silent during inhalation. |
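To make the mapping concrete, here is a minimal Python sketch of the four transformations described above. It is an illustration under stated assumptions, not the authors' implementation: the function names, the 21-scale-degrees-per-μS constant (three octaves of a 7-note major scale per 1 μS), the one-semitone-per-°C key step, and the pitch clamping bounds are all ours.

```python
# Illustrative sketch of the Biomusic mappings; all constants and names
# below are assumptions, not the published implementation.

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within one octave
DEGREES_PER_MICROSIEMENS = 21          # three octaves (3 x 7 degrees) per 1 uS

def eda_to_pitch(eda_uS, baseline_uS, tonic_midi=60):
    """Windowed-average EDA -> melodic pitch (MIDI note number).

    The first EDA value sets the tonic; later values shift the pitch in
    proportion to their change from that baseline.
    """
    degree = round((eda_uS - baseline_uS) * DEGREES_PER_MICROSIEMENS)
    octave, step = divmod(degree, len(MAJOR_SCALE))
    pitch = tonic_midi + 12 * octave + MAJOR_SCALE[step]
    while pitch > 96:   # transpose extreme values back into a
        pitch -= 12     # comfortable range (assumed bounds)
    while pitch < 36:
        pitch += 12
    return pitch

def temperature_to_key_shift(temp_C, baseline_C):
    """Instantaneous skin temperature -> key shift in semitones,
    updated every two bars (one semitone per degree C is assumed)."""
    return round(temp_C - baseline_C)

def interbeat_to_tempo_bpm(interbeat_interval_s):
    """Average interbeat interval over four pulse periods -> tempo (BPM):
    a faster heart rate yields a faster beat."""
    return 60.0 / interbeat_interval_s

def whoosh_state(is_exhaling):
    """Sustain the 'whoosh' embellishment only while exhalation is detected."""
    return "on" if is_exhaling else "off"
```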
Means and standard error of physiological features recorded in Experiment 1 from typically developing children (n = 10).
| Feature | Relaxed | Anxious |
| EDA (μS) | 1.131, SE = 0.390 | 3.550, SE = 0.706 |
| Skin temperature (°C) | 32.086, SE = 1.826 | 30.412, SE = 2.312 |
| Blood volume pulse inter-pulse interval (s) | 0.797, SE = 0.049 | 0.761, SE = 0.025 |
| Respiration inter-breath interval (s) | 3.333, SE = 0.296 | 2.269, SE = 0.107 |
Indicates features that were significantly different (p < 0.05) between relaxed and anxious state conditions.
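The record does not reproduce the paper's exact statistical test; for a within-subject design like this, a paired test over per-child feature means is one standard choice. The sketch below shows such a comparison in Python with simulated placeholder data, not the study's measurements.

```python
# Hypothetical paired comparison of relaxed vs. anxious EDA means; the
# data are simulated placeholders, and the paired t-test is an assumed
# (not confirmed) choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
eda_relaxed = rng.normal(1.1, 0.4, size=10)  # per-child means (uS), simulated
eda_anxious = rng.normal(3.6, 0.7, size=10)

t_stat, p_value = stats.ttest_rel(eda_relaxed, eda_anxious)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```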
Figure 1. Physiological signal acquisition procedure in Experiment 1.
Figure 2. Signal classification procedure.
Biomusic performance (initial classification).
| Sample group | Accuracy (%) | Sensitivity (%) | Specificity (%) |
| Typically-developing (anagram) | 84.9, SE = 2.9 | 89.7, SE = 4.1 | 80.0, SE = 4.5 |
| Typically-developing (Stroop) | 82.4, SE = 3.9 | 91.7, SE = 3.9 | 73.3, SE = 5.1 |
| ASD | 83.4, SE = 3.9 | 83.2, SE = 6.5 | 83.6, SE = 3.6 |
| Typically-developing (anagram) | 84.8, SE = 3.8 | 85.0, SE = 4.4 | 85.0, SE = 4.4 |
| All (Experiment 2) | 83.9, SE = 2.9 | 87.1, SE = 3.4 | 80.6, SE = 3.4 |
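The table's three metrics follow directly from classification counts when "anxious" is treated as the positive class. The sketch below shows the relationship; the counts are illustrative values chosen to approximate the abstract's participant-averaged figures (16 listeners × 30 samples = 480 trials), not reported totals.

```python
# How accuracy, sensitivity, and specificity relate to raw counts,
# with "anxious" as the positive class. Counts are illustrative only.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # anxious samples heard as anxious
    specificity = tn / (tn + fp)  # relaxed samples heard as relaxed
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_metrics(tp=204, fp=56, tn=184, fn=36)
print(f"accuracy={acc:.1%}, sensitivity={sens:.1%}, specificity={spec:.1%}")
# -> accuracy=80.8%, sensitivity=85.0%, specificity=76.7%
```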
Means and standard error of physiological features recorded in Experiment 2 from children with ASD (n = 5).
| Feature | Relaxed | Anxious |
| EDA (μS) | 4.822, SE = 1.627 | 7.028, SE = 1.577 |
| Skin temperature (°C) | 28.424, SE = 1.512 | 29.251, SE = 1.918 |
| Blood volume pulse inter-pulse interval (s) | 0.744, SE = 0.024 | 0.686, SE = 0.022 |
| Respiration inter-breath interval (s) | 2.211, SE = 0.337 | 1.616, SE = 0.174 |
| EDA (μS) | 4.545, SE = 0.924 | 8.976, SE = 1.902 |
| Skin temperature (°C) | 30.208, SE = 2.218 | 29.467, SE = 1.875 |
| Blood volume pulse inter-pulse interval (s) | 0.757, SE = 0.044 | 0.706, SE = 0.036 |
| Respiration inter-breath interval (s) | 2.987, SE = 0.733 | 1.546, SE = 0.093 |
Indicates features that were significantly different (p < 0.05) between relaxed and anxious state conditions.
Figure 3. Confusion matrix for classifications of biomusic from typically developing children (anagram) in Experiment 2.
Figure 5. Confusion matrix for classifications of biomusic from children with ASD in Experiment 2.
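Figures 3 and 5 are 2×2 confusion matrices over true states and listener responses. A small sketch of how such a matrix is tallied (the labels and responses below are placeholders, not the study's data):

```python
# Tallying listener responses into a 2x2 confusion matrix; the example
# data are placeholders, not the study's responses.
import numpy as np

labels = ["anxious", "relaxed"]
true_states = ["anxious", "relaxed", "anxious", "relaxed", "anxious"]
responses   = ["anxious", "relaxed", "relaxed", "relaxed", "anxious"]

matrix = np.zeros((2, 2), dtype=int)  # rows: true state, cols: response
for truth, resp in zip(true_states, responses):
    matrix[labels.index(truth), labels.index(resp)] += 1

print(matrix)  # [[2 1] [0 2]]: one anxious sample was heard as relaxed
```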