| Literature DB >> 35546888 |
Yehuda I. Dor (1,2), Daniel Algom (1), Vered Shakuf (3), Boaz M. Ben-David (2,4,5).
Abstract
Older adults process emotions in speech differently than young adults do. However, it is unclear whether these age-related changes affect all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults' sensory thresholds for emotion recognition in two channels of spoken emotions: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics, and were asked to recognize the prosodic or the semantic emotion in separate tasks. Sentences were presented against a background of speech-spectrum noise ranging from an SNR of -15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus a sensory source for age-related differences in spoken-emotions processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults had greater difficulty inhibiting irrelevant information. Taken together, the results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotions processing.
Keywords: aging; auditory processing; auditory sensory-cognitive interactions; emotions; noise; prosody; semantics; speech perception
Year: 2022 PMID: 35546888 PMCID: PMC9082150 DOI: 10.3389/fnins.2022.846117
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
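The thresholds described in the abstract come from fitting psychometric functions to per-SNR recognition accuracy. A minimal sketch of one common parameterization (a logistic rising from the 0.2 chance floor toward an upper asymptote); the data, starting values, and bounds below are illustrative assumptions, not the study's actual values or procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

CHANCE = 0.2  # 1-of-5 emotion choices, per the study's chance level

def psychometric(snr, threshold, slope, lapse):
    """Logistic rising from chance (0.2) toward an upper asymptote (1 - lapse)."""
    upper = 1.0 - lapse
    return CHANCE + (upper - CHANCE) / (1.0 + np.exp(-slope * (snr - threshold)))

# Illustrative data: proportion correct at each SNR (dB); NOT the study's values.
snrs = np.array([-15.0, -10.0, -5.0, 0.0, 5.0])
p_correct = np.array([0.22, 0.35, 0.70, 0.90, 0.95])

# Starting values and bounds are assumptions for this sketch.
params, _ = curve_fit(
    psychometric, snrs, p_correct,
    p0=[-5.0, 0.5, 0.05],
    bounds=([-20.0, 0.01, 0.0], [10.0, 5.0, 0.5]),
)
threshold, slope, lapse = params  # threshold = SNR at the function's midpoint
```

Fitting one such function per participant and task yields the individual threshold, slope, and maximum-recognition estimates analyzed in the table below.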
FIGURE 1: General design of the stimuli. All combinations of prosodic and semantic emotions are presented. Each cell represents two different sentences used in this study. Black cells: congruent sentences (same emotion in both speech channels). Gray cells: incongruent sentences (different emotions in semantics and prosody).
TABLE: Summary and results of MLM analyses.

A: Psychometric function parameters
| Effect | Threshold | Max recognition | Slope |
| Age Group | | | |
| Speech Channel | | | |
| Age Group × Speech Channel | | | |
| Model summary | BIC = 448.87 | BIC = −135.16 | BIC = −53.79 |

B: Selective attention analysis
| Effect | |
| Age Group | |
| Speech Channel | |
| Selective Attention | |
| Age Group × Speech Channel | |
| Age Group × Selective Attention | |
| Speech Channel × Selective Attention | |
| Age Group × Speech Channel × Selective Attention | |
| Model summary | BIC = −161.72 |

Top panel: analysis of individual psychometric functions' parameters (left column: Thresholds, …)
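The table's "Model summary" rows compare models via BIC (lower is better). As a simplified sketch of how BIC adjudicates between nested models, here using ordinary least squares rather than the paper's mixed-effects models; all data and effect sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data (NOT the study's): thresholds as a function of two binary
# predictors, with a built-in Age Group x Speech Channel interaction.
n = 110
age = rng.integers(0, 2, n).astype(float)      # 0 = young, 1 = older
channel = rng.integers(0, 2, n).astype(float)  # 0 = prosody, 1 = semantics
y = -8 + 2 * age + 1.5 * channel + 3 * age * channel + rng.normal(0, 1, n)

def bic(X, y):
    """BIC of an ordinary-least-squares fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n_obs = len(y)
    sigma2 = resid @ resid / n_obs                 # ML estimate of error variance
    loglik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                             # coefficients + error variance
    return k * np.log(n_obs) - 2 * loglik

ones = np.ones(n)
bic_main = bic(np.column_stack([ones, age, channel]), y)
bic_full = bic(np.column_stack([ones, age, channel, age * channel]), y)
# Because the simulated data contain a real interaction, the fuller model
# earns a lower BIC despite its extra-parameter penalty.
```

In the study's analyses the same logic applies, but with multilevel (mixed) models that include per-participant random effects.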
FIGURE 2: Psychometric functions for recognition of emotions in speech at different SNRs, averaged across participants. Blue lines: older adults; red lines: young adults. Dashed lines: recognition of emotional prosody; full lines: recognition of emotional semantics. Diamond-shaped markers indicate statistical recognition thresholds for each condition. Light-blue and light-red lines represent extrapolations of the functions beyond the SNRs tested in the study. The dashed-and-dot horizontal line indicates the functions' minimal asymptote (0.2, chance level).
FIGURE 3: Analysis of selective attention effects. Bars indicate correct recognition rates for congruent (full) vs. incongruent (dashed) sentences in young (red) and older (blue) adults, averaged across different SNRs. Error bars indicate 95% CIs of their respective means (MLM estimates). The dashed-and-dot horizontal line indicates the chance level (0.2).