César F Lima, Olivia Brancatisano, Amy Fancourt, Daniel Müllensiefen, Sophie K Scott, Jason D Warren, Lauren Stewart.
Abstract
Some individuals show a congenital deficit for music processing despite normal peripheral auditory processing, cognitive functioning, and music exposure. This condition, termed congenital amusia, is typically characterized in terms of its profile of musical and pitch difficulties. Here, we examine whether amusia also affects socio-emotional processing, probing auditory and visual domains. Thirteen adults with amusia and 11 controls completed two experiments. In Experiment 1, participants judged emotions in emotional speech prosody, nonverbal vocalizations (e.g., crying), and (silent) facial expressions. Target emotions were: amusement, anger, disgust, fear, pleasure, relief, and sadness. Compared to controls, amusics were impaired for all stimulus types, and the magnitude of their impairment was similar for auditory and visual emotions. In Experiment 2, participants listened to spontaneous and posed laughs, and either inferred the authenticity of the speaker's state, or judged how contagious the laughs were. Amusics showed decreased sensitivity to laughter authenticity, but normal contagion responses. Across the experiments, mixed-effects models revealed that the acoustic features of vocal signals predicted socio-emotional evaluations in both groups, but the profile of predictive acoustic features was different in amusia. These findings suggest that a developmental music disorder can affect socio-emotional cognition in subtle ways, an impairment not restricted to auditory information.
Year: 2016 PMID: 27725686 PMCID: PMC5057155 DOI: 10.1038/srep34911
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Percentage of correct responses (a) and ambivalent responses (b) as a function of group and emotion recognition task. Values are collapsed across emotion categories. Error bars indicate standard errors of the means. Amusics showed significantly reduced accuracy and provided more ambivalent responses than controls across the three tasks.
Demographic and background characteristics of participants.
| Measure | Amusics (n = 13) | Controls (n = 11) | t | p |
| Age (years) | 57.92 (11.35) | 53.18 (13.59) | −0.93 | 0.36 |
| Sex | 9F/4M | 8F/3M | — | 1 |
| Handedness | 13R/0L | 9R/2L | — | 0.20 |
| Musical training (years) | 0.85 (1.34) | 2.30 (4.57) | 0.11 | 0.29 |
| Education (years) | 15.92 (2.84) | 15.64 (2.50) | −0.26 | 0.80 |
| NART (words correctly read, /50) | 44.00 (4.42) | 44.40 (2.76) | 0.24 | 0.81 |
| Digit Span (raw scores) | 21.00 (3.16) | 20.09 (4.35) | −0.54 | 0.59 |
| MBEA (correct responses) | ||||
| Scale (/30) | 19.23 (2.71) | 27.18 (2.36) | 7.59 | <0.001 |
| Contour (/30) | 19.77 (3.39) | 27.73 (2.33) | 6.57 | <0.001 |
| Interval (/30) | 18.00 (2.00) | 27.45 (2.38) | 10.58 | <0.001 |
| Rhythm (/30) | 24.46 (3.80) | 28.45 (1.44) | 3.28 | 0.003 |
| Pitch Composite (/90) | 57.00 (6.70) | 82.36 (6.10) | 9.62 | <0.001 |
| Pitch Change Detection Threshold (semitones) | 0.31 (0.32) | 0.16 (0.06) | −1.56 | 0.13 |
| Pitch Direction Discrimination Threshold (semitones) | 1.28 (1.46) | 0.18 (0.08) | −2.50 | 0.02 |
| CFPT (sum of errors) | ||||
| Upright (/94) | 45.00 (14.55) | 44.40 (7.82) | −0.12 | 0.91 |
| Inverted (/94) | 77.40 (15.06) | 72.80 (13.17) | −0.73 | 0.48 |
Note. F = female; M = male; R = right; L = left; NART = National Adult Reading Test; MBEA = Montreal Battery of Evaluation of Amusia; CFPT = Cambridge Face Perception Test. Standard deviations are given in parentheses. t values correspond to independent-samples t-tests (two-tailed, df = 22). For sex and handedness, groups were compared using Fisher's exact test. Some background measures had missing data, and for these the means, SDs, and t-tests were computed on reduced samples: for the NART and CFPT, data were missing from three amusics and one control; for digit span, from three amusics; and for the pitch threshold tasks, from one amusic.
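The group comparisons described in the note (independent-samples t-tests with df = 22, and Fisher's exact test for the categorical measures) can be sketched with SciPy. The scores below are illustrative placeholders, not the study's raw data; only the group sizes (13 amusics, 11 controls) and the sex counts (9F/4M vs 8F/3M) come from the table:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant scores for one continuous measure (e.g., MBEA Scale)
amusics = np.array([19, 21, 18, 22, 17, 20, 19, 18, 21, 20, 19, 18, 22])   # n = 13
controls = np.array([27, 28, 26, 29, 27, 28, 26, 27, 28, 27, 29])          # n = 11

# Two-tailed independent-samples t-test; df = 13 + 11 - 2 = 22
t_stat, p_val = stats.ttest_ind(amusics, controls)

# Fisher's exact test on the sex counts from the table (9F/4M vs 8F/3M)
odds, p_fisher = stats.fisher_exact([[9, 4], [8, 3]])
```

With the near-balanced sex counts the Fisher test is far from significant, matching the p = 1 reported in the table.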
Figure 2. Average ratings provided on the intended ‘correct’ scales (a) and on the non-intended ‘incorrect’ scales (b) as a function of group and emotion recognition task. Values are collapsed across emotion categories. Error bars indicate standard errors of the means. Amusics showed significantly reduced sensitivity to the correct emotions, but not to the incorrect ones, across tasks.
Mixed-effects regression models on the predictive value of acoustic cues for vocal emotion recognition.
| Speech Prosody | Amusics | |||||||
| Amusement | −0.31 | — | — | — | — | — | 0.75 | |
| Anger | — | — | — | — | 0.52 | 0.67 | 0.65* | |
| Disgust | — | — | — | 0.51 | — | — | 0.47* | |
| Fear | −0.39 | — | — | — | — | 0.96 | 0.63* | |
| Pleasure | — | — | 0.46 | — | — | — | 0.80* | |
| Relief | −0.72 | 0.55 | — | — | — | — | 0.66* | |
| Sadness | 1.19 | −0.57 | — | — | — | — | 0.63* | |
| Controls | ||||||||
| Amusement | −0.90 | 1.35 | 1.14 | — | — | −2.88 | 0.76* | |
| Anger | — | 0.40 | — | — | 0.46 | — | 0.81* | |
| Disgust | — | — | — | — | −0.39 | — | 0.59 | |
| Fear | — | −0.79 | — | 2.10 | −0.92 | −0.82 | 0.76* | |
| Pleasure | −1.08 | 0.51 | — | — | — | — | 0.81* | |
| Relief | — | — | — | — | — | — | 0.79 | |
| Sadness | — | — | — | −0.69 | 1.29 | 0.72 | 0.77* | |
| Nonverbal Vocalizations | Amusics | |||||||
| Amusement | 0.71 | — | 0.95 | — | — | 1.23 | 0.83* | |
| Anger | — | — | −0.56 | — | — | — | 0.77* | |
| Disgust | — | — | — | — | — | — | 0.90 | |
| Fear | — | −0.51 | — | — | 0.34 | — | 0.78* | |
| Pleasure | — | — | −0.78 | 1.10 | — | — | 0.84* | |
| Relief | — | — | −0.34 | 0.61 | −0.41 | — | 0.80* | |
| Sadness | −2.24 | 1.56 | −0.63 | — | — | 2.77 | 0.77* | |
| Controls | ||||||||
| Amusement | — | — | — | — | — | 0.74 | 0.86* | |
| Anger | — | −0.84 | −2.79 | — | 2.52 | — | 0.88* | |
| Disgust | −2.17 | 1.65 | −1.02 | — | 0.66 | — | 0.93* | |
| Fear | 1.77 | — | — | 0.62 | 1.56 | −3.67 | 0.85* | |
| Pleasure | 5.87 | −2.27 | — | 3.70 | — | −15.29 | 0.89* | |
| Relief | −1.07 | — | −0.45 | 0.73 | — | 0.88 | 0.81* | |
| Sadness | −1.17 | 1.42 | — | — | — | — | 0.75* | |
Note. Values represent standardized regression coefficients for the acoustic cues retained after the model selection procedure (empty cells indicate that the cue was not retained). Model accuracy values represent the proportion of participant responses correctly classified by the model, including fixed and random effects. Each model was fitted to the full sample of amusic or control participants, across all stimuli for a given task and emotion; the final models contained between zero (intercept and random effect only) and six fixed-effect predictors. F0 = fundamental frequency; COG = centre of gravity. *p < 0.05, likelihood ratio test comparing the model with fixed effects (acoustic parameters) against a random-effects-only model.
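The analysis described in the note — a mixed-effects regression per group and emotion with acoustic cues as fixed effects and participant as a random effect, tested against a random-effects-only model via a likelihood ratio test — can be sketched as follows. This is a minimal illustration on synthetic data using statsmodels; the single-cue specification and the names (rating, f0, subj) are assumptions for the example, not the authors' actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic data: 12 participants x 20 stimuli, one standardized acoustic cue
rng = np.random.default_rng(1)
n_subj, n_trials = 12, 20
subj = np.repeat(np.arange(n_subj), n_trials)
f0 = rng.normal(0.0, 1.0, n_subj * n_trials)            # standardized cue
intercepts = rng.normal(0.0, 0.5, n_subj)[subj]         # per-participant random intercept
rating = 0.6 * f0 + intercepts + rng.normal(0.0, 1.0, n_subj * n_trials)
df = pd.DataFrame({"rating": rating, "f0": f0, "subj": subj})

# Full model (fixed effect of the cue + random intercept) vs random-effects-only model;
# ML fits (reml=False) so the log-likelihoods are comparable
full = smf.mixedlm("rating ~ f0", df, groups=df["subj"]).fit(reml=False)
null = smf.mixedlm("rating ~ 1", df, groups=df["subj"]).fit(reml=False)

# Likelihood ratio test, chi-square with 1 df (one extra fixed effect)
lr = 2 * (full.llf - null.llf)
p = stats.chi2.sf(lr, df=1)
```

The same comparison generalizes to up to six cues by adding terms to the formula and setting the chi-square df to the number of added fixed effects.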
Figure 3. Magnitude of the difference between spontaneous and posed laughter as a function of group and task, i.e., the difference between average ratings for spontaneous laughs and average ratings for posed laughs, computed separately for authenticity and contagion judgements and expressed as an effect size (Cohen’s d). Error bars indicate standard errors of the means. Amusics showed significantly reduced sensitivity to laughter authenticity, but not to laughter contagiousness.
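The effect size underlying the figure (Cohen's d for the spontaneous-versus-posed difference) can be sketched as below; the rating values are invented for illustration:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

# Hypothetical average authenticity ratings per stimulus
spontaneous = np.array([6.1, 5.8, 6.4, 5.9, 6.2])
posed = np.array([3.9, 4.2, 3.7, 4.4, 4.0])

d = cohens_d(spontaneous, posed)  # positive d: spontaneous rated higher
```

A larger d means the participant group separated the two laughter types more sharply on that scale, which is exactly the quantity that distinguishes the groups for authenticity but not contagion.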
Mixed-effects regression models on the predictive value of acoustic cues for authenticity and contagion evaluations of laughter.
| Authenticity | Amusics | |||||||
| Posed | −0.64 | 0.71 | −0.29 | 0.26 | — | — | 0.23* | |
| Spontaneous | — | — | — | — | — | — | 0.14 | |
| Controls | ||||||||
| Posed | — | 0.54 | −0.29 | — | — | — | 0.18* | |
| Spontaneous | 0.44 | — | — | — | — | 0.27 | 0.15* | |
| Contagion | Amusics | |||||||
| Posed | −0.67 | 0.77 | −0.42 | 0.28 | — | — | 0.49* | |
| Spontaneous | 0.24 | — | — | — | — | 0.34 | 0.56* | |
| Controls | ||||||||
| Posed | −0.35 | 0.54 | −0.47 | 0.24 | — | — | 0.28* | |
| Spontaneous | 0.51 | −0.22 | — | — | — | — | 0.19* | |
Note. Values represent standardized regression coefficients for the acoustic cues retained after the model selection procedure (empty cells indicate that the cue was not retained). R2 values are conditional R2 values, representing the amount of variance explained by the model including fixed and random effects (refs 68, 69). Each model was fitted to the full sample of amusic or control participants, across all stimuli for a given task and stimulus type; the final models contained between zero (intercept and random effect only) and six fixed-effect predictors. F0 = fundamental frequency; COG = centre of gravity. *p < 0.05, likelihood ratio test comparing the model with fixed effects (acoustic parameters) against a random-effects-only model.
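The conditional R2 cited in the note follows the standard variance decomposition for mixed models (Nakagawa & Schielzeth): marginal R2 uses only the fixed-effects variance, while conditional R2 adds the random-effects variance. A worked arithmetic sketch with made-up variance components:

```python
# Hypothetical variance components from a fitted mixed model:
# variance of the fixed-effect predictions, of the random intercepts, and residual
var_fixed, var_random, var_resid = 0.30, 0.20, 0.50
total = var_fixed + var_random + var_resid

r2_marginal = var_fixed / total                   # fixed effects only
r2_conditional = (var_fixed + var_random) / total # fixed + random effects
```

By construction the conditional R2 is at least as large as the marginal R2, since it credits the model with the variance captured by participant-level random effects as well.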
Acoustic characteristics of vocal emotional stimuli.
| Speech Prosody | ||||||||
| Amusement | 286.9 | 95.3 | 152.5 | 540.4 | −0.33 | 2142 | 18.8 | 813 |
| Anger | 304.5 | 120.2 | 157.6 | 648 | −0.15 | 2127 | 21.1 | 1104.4 |
| Disgust | 262.5 | 141.2 | 122.6 | 700.7 | −0.11 | 3065 | 19.7 | 911 |
| Fear | 328 | 83.5 | 194 | 586.7 | −0.17 | 1835 | 23.4 | 885.3 |
| Pleasure | 221.5 | 117.2 | 142.5 | 632.8 | −0.12 | 2759 | 19.2 | 1051.5 |
| Relief | 246.2 | 105.6 | 133.3 | 543 | −0.37 | 1965 | 22.3 | 786.8 |
| Sadness | 269 | 134.2 | 122.2 | 594.8 | −0.2 | 2482 | 21.6 | 843.3 |
| Nonverbal Vocalizations | ||||||||
| Amusement | 364 | 143.8 | 185.6 | 661.1 | −0.02 | 1068 | 10.8 | 881.4 |
| Anger | 222.5 | 96.9 | 104.8 | 473.6 | −0.21 | 900 | 8.5 | 1039 |
| Disgust | 331.6 | 167.4 | 141.4 | 658.6 | 0.26 | 878 | 8.1 | 931.5 |
| Fear | 420 | 63.1 | 322.2 | 537.6 | −0.28 | 877 | 13 | 948.6 |
| Pleasure | 199.6 | 90.3 | 123.7 | 479.6 | −0.42 | 1029 | 7 | 251.1 |
| Relief | 450.7 | 119.8 | 280.6 | 623.2 | 0.3 | 872 | 9.6 | 1026.2 |
| Sadness | 278.3 | 103.9 | 169.4 | 559.7 | −0.4 | 982 | 8.8 | 481.9 |
| Laughter | ||||||||
| Posed | 276 | 105.5 | 139.5 | 554.5 | −0.29 | 2366 | 14.6 | 783.1 |
| Spontaneous | 467.9 | 132.4 | 249.4 | 780 | −0.02 | 2439 | 14.8 | 944.1 |
Note. F0 = fundamental frequency; Min = minimum; Max = maximum; F0 direction = standardized regression coefficients reflecting changes in F0 over time; COG = centre of gravity.
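Two of the tabulated acoustic measures, F0 and spectral centre of gravity, can be estimated from a waveform with basic signal processing. The sketch below uses a synthetic 220 Hz tone and deliberately crude estimators (autocorrelation pitch, amplitude-weighted spectral mean); it is an illustration of the measures, not the toolchain the authors used:

```python
import numpy as np

def estimate_f0_autocorr(x, sr, fmin=80.0, fmax=500.0):
    """Crude F0 estimate: lag of the autocorrelation peak within the plausible pitch range."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

def spectral_centre_of_gravity(x, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum (spectral COG)."""
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))   # Hann window reduces leakage
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 16000
t = np.arange(int(0.5 * sr)) / sr
tone = np.sin(2 * np.pi * 220.0 * t)                    # pure 220 Hz test tone

f0 = estimate_f0_autocorr(tone, sr)
cog = spectral_centre_of_gravity(tone, sr)
```

For a pure tone both estimates land near 220 Hz; for real vocalizations the COG sits well above the F0 because energy at higher harmonics and noise pulls the spectral mean upward, which is why the tabulated COG values (roughly 870 to 3000 Hz) far exceed the F0 values.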