Laurence Chaby, Viviane Luherne-du Boullay, Mohamed Chetouani, Monique Plaza.
Abstract
Social interactions in daily life necessitate the integration of social signals from different sensory modalities. In the aging literature, it is well established that the recognition of emotion in facial expressions declines with advancing age, and this also occurs with vocal expressions. By contrast, crossmodal integration processing in healthy aging individuals is less documented. Here, we investigated age-related effects on emotion recognition when faces and voices were presented alone or simultaneously, allowing for crossmodal integration. In this study, 31 young adults (M = 25.8 years) and 31 older adults (M = 67.2 years) were instructed to identify several basic emotions (happiness, sadness, anger, fear, disgust) and a neutral expression, which were displayed as visual (facial expressions), auditory (non-verbal affective vocalizations) or crossmodal (simultaneous, congruent facial and vocal affective expressions) stimuli. The results showed that older adults were slower and less accurate than younger adults at recognizing negative emotions from isolated faces and voices. In the crossmodal condition, although slower, older adults were as accurate as younger adults, except for anger. Importantly, additional analyses using the "race model" demonstrated that older adults benefited to the same extent as younger adults from the combination of facial and vocal emotional stimuli. These results help explain some conflicting results in the literature and may clarify emotional abilities related to daily life that are partially spared among older adults.
Keywords: aging; emotion; faces; multimodal integration; non-verbal vocalizations; race model; voices
Year: 2015 PMID: 26074845 PMCID: PMC4445247 DOI: 10.3389/fpsyg.2015.00691
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Participant demographic characteristics.
| Measure | Young adults | Older adults |
|---|---|---|
| Age (years) | 25.8 ± 6.4 | 67.2 ± 5.8 |
| Education (years) | 14.18 ± 1.6 | 13.55 ± 2.8 |
| Mill Hill | 36.87 ± 3.0 | 37.48 ± 4.8 |
| Sex ratio (M/F) | 16/15 | 17/14 |
| BDI-II (/63) | 5.65 ± 6.5 | 5.25 ± 3.8 |
| MMSE (/30) | – | 29.33 ± 0.6 |
FIGURE 1. Schematic representation of the stimuli. Examples of the stimuli for the three different modalities, including visual (facial expressions), auditory (non-verbal affective vocalizations) and crossmodal stimuli (congruent facial and vocal emotions presented simultaneously).
Mean accuracy scores (%) and response times (ms) by age group and emotion. Standard errors of the means are shown in parentheses.
| Mean accuracy (%) | |||||||
|---|---|---|---|---|---|---|---|
| Visual | 90.9 (2.7) | 99.3 (0.4) | 83.2 (2.3) | 73.5 (3.4) | 66.1 (4.7) | 72.5 (2.5) | |
| Auditory | 92.6 (2.7) | 95.2 (1.9) | 73.2 (3.9) | 94.5 (1.3) | 47.7 (3.4) | 86.4 (2.3) | |
| Crossmodal | 98.7 (0.6) | 100 (0.0) | 89.6 (2.1) | 95.8 (1.8) | 76.8 (4.2) | 97.7 (0.8) | |
| Visual | 97.1 (1.1) | 99.7 (0.6) | 87.7 (2.2) | 90.9 (2.6) | 84.5 (2.4) | 85.4 (2.5) | |
| Auditory | 96.7 (0.9) | 98.0 (0.9) | 86.7 (2.3) | 94.2 (1.3) | 62.9 (3.0) | 94.8 (1.2) | |
| Crossmodal | 99.6 (0.3) | 99.0 (0.7) | 97.4 (0.9) | 97.4 (0.8) | 94.5 (1.5) | 99.3 (0.4) | |
| Visual | 3447 (227) | 2502 (136) | 4366 (225) | 4805 (325) | 4243 (280) | 4173 (224) | |
| Auditory | 4355 (342) | 3195 (213) | 4370 (265) | 3824 (238) | 4920 (354) | 3871 (250) | |
| Crossmodal | 2796 (158) | 2188 (105) | 3135 (138) | 3115 (146) | 3273 (190) | 2567 (132) | |
| Visual | 1871 (107) | 1555 (50) | 2408 (162) | 2464 (157) | 2569 (177) | 2538 (163) | |
| Auditory | 2148 (157) | 1967 (117) | 2345 (153) | 2082 (74) | 2281 (124) | 2026 (80) | |
| Crossmodal | 1613 (49) | 1370 (35) | 1689 (54) | 1722 (50) | 1571 (45) | 1624 (35) | |
FIGURE 2. Mean accuracy scores (%) and response times (ms) for both age groups under the visual, auditory and crossmodal conditions. Error bars indicate standard errors of the means.
FIGURE 3. Test for the violation of race model inequality. The figure illustrates the cumulative probability curves of the RT under the visual (blue circles), auditory (green squares), and crossmodal conditions (red circles). The summed probability for the visual and auditory responses is depicted by the race model curve (marked by an asterisk). Note that the crossmodal responses are faster than the race model prediction for the four fastest percentiles, i.e., the 5th, 15th, 25th, and 35th percentiles (all p < 0.01).
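The race model analysis above can be sketched in code. Under Miller's race model inequality, the crossmodal RT distribution is bounded by the sum of the two unimodal distributions, P(RT_cross ≤ t) ≤ P(RT_visual ≤ t) + P(RT_auditory ≤ t); a positive exceedance at the fast percentiles (as in Figure 3) indicates genuine multisensory integration rather than statistical facilitation. The function below is a minimal illustrative sketch of that test, not the authors' actual analysis pipeline; the function name, the percentile grid, and the use of simple empirical CDFs are assumptions.

```python
import numpy as np

def race_model_violation(rt_visual, rt_auditory, rt_crossmodal,
                         percentiles=np.arange(5, 100, 10)):
    """Sketch of Miller's race model inequality test (illustrative, not
    the paper's exact procedure).

    For each percentile of the crossmodal RT distribution, returns the
    crossmodal CDF value minus the race model bound
    min(1, CDF_visual + CDF_auditory). Positive values indicate a
    violation of the inequality, i.e., evidence of crossmodal integration.
    """
    rt_v = np.asarray(rt_visual, dtype=float)
    rt_a = np.asarray(rt_auditory, dtype=float)
    rt_c = np.asarray(rt_crossmodal, dtype=float)

    # Evaluate all distributions at the crossmodal RT percentiles
    # (5th, 15th, ..., 95th by default, matching a decile-step grid).
    t = np.percentile(rt_c, percentiles)

    def ecdf(sample, times):
        # Empirical CDF: proportion of RTs at or below each time point.
        return np.mean(sample[:, None] <= times[None, :], axis=0)

    bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_a, t), 1.0)
    return ecdf(rt_c, t) - bound
```

In practice the exceedance is computed per participant and the fast percentiles are tested against zero across the group (e.g., with one-sample t-tests), which is how violations like those at the 5th–35th percentiles in Figure 3 are typically established.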