Annett Schirmer¹,², Thomas C. Gunter¹.
Abstract
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited by unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential), with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional sounds than for all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance.
Keywords: auditory cortex; gender; implicit perception; prosody; sex differences
Year: 2017 PMID: 28338796 PMCID: PMC5472162 DOI: 10.1093/scan/nsx020
Source DB: PubMed Journal: Soc Cogn Affect Neurosci ISSN: 1749-5016 Impact factor: 3.436
Fig. 1. Experimental task performance. The top row illustrates the sensitivity with which female (left) and male (right) listeners discriminated between repeated and non-repeated sounds. The bottom row illustrates the speed with which female (left) and male (right) listeners responded to sound repetitions.
Fig. 2. ERP traces. Illustrated are average voltages recorded in response to the four sound conditions for female (left) and male (right) participants.
Fig. 3. ERP maps. Topographical maps illustrate the average differences between emotional and neutral as well as vocal and non-vocal sounds for female (top) and male (bottom) participants. Average differences were computed for the four statistical analysis windows capturing the N1, the P2/P3 complex, the N4-like negativity, and the LPP.
Fig. 4. Interpretative framework. Voice processing is illustrated by example. When presented with a vocal sound, we may first represent its animacy and basic affect before accessing subordinate-level sound categories. Subsequently, these separate representations may be integrated into a holistic percept.