Huiwen Goy, M Kathleen Pichora-Fuller, Gurjit Singh, Frank A Russo.
Abstract
Vocal emotion perception is an important part of speech communication and social interaction. Although older adults with normal audiograms are known to be less accurate at identifying vocal emotion compared to younger adults, little is known about how older adults with hearing loss perceive vocal emotion or whether hearing aids improve the perception of emotional speech. In the main experiment, older hearing aid users were presented with sentences spoken in seven emotion conditions, with and without their own hearing aids. Listeners reported the words that they heard as well as the emotion portrayed in each sentence. The use of hearing aids improved word-recognition accuracy in quiet from 38.1% (unaided) to 65.1% (aided) but did not significantly change emotion-identification accuracy (36.0% unaided, 41.8% aided). In a follow-up experiment, normal-hearing young listeners were tested on the same stimuli. Normal-hearing younger listeners and older listeners with hearing loss showed similar patterns in how emotion affected word-recognition performance but different patterns in how emotion affected emotion-identification performance. In contrast to the present findings, previous studies did not find age-related differences between younger and older normal-hearing listeners in how emotion affected emotion-identification performance. These findings suggest that there are changes to emotion identification caused by hearing loss that are beyond those that can be attributed to normal aging, and that hearing aids do not compensate for these changes.
Keywords: aging; emotions; hearing loss; speech intelligibility
Year: 2018 PMID: 30249171 PMCID: PMC6156210 DOI: 10.1177/2331216518801736
Source DB: PubMed Journal: Trends Hear ISSN: 2331-2165 Impact factor: 3.293
Figure 1.Thresholds are referenced to the left axis and show the mean audiometric pure-tone thresholds of the participants. NAL-NL1 hearing aid targets and measured outputs are referenced to the right axis, averaged across three input levels of 55, 65, and 75 dB SPL. Standard error bars are shown.
Figure 2.Word recognition accuracy across Emotion conditions in Unaided and Aided listening conditions. Standard error bars are shown.
Figure 3.Emotion identification accuracy across Emotion conditions in Unaided and Aided listening conditions. Standard error bars are shown.
Figure 4.Scatterplot of word recognition scores against emotion identification scores for 14 older listeners in Unaided and Aided listening conditions. The regression line is for the Unaided condition only.
Mean Scores on Cognitive Tests and Hearing- and Emotion-Related Questionnaires, With SDs in Parentheses.
| Measure | Mean (SD) | Min–Max |
|---|---|---|
| MoCA (of 30), adjusted for education | 25.5 (2.8) | 19–28 |
| Reading Working Memory Span (best possible score 100) | 42.0 (12.4) | 30–68 |
| APHAB global unaided score | 42.4 (14.6) | 18–72 |
| APHAB global aided score | 25.4 (17.0) | 7–59 |
| APHAB benefit (unaided - aided) | 17.0 (17.7) | −16 to 55 |
| HHIA total score (worst possible score 100) | 28.0 (21.0) | 0–64 |
| HHIA social component (worst possible score 48) | 14.0 (10.9) | 0–32 |
| HHIA emotional component (worst possible score 52) | 14.0 (11.4) | 0–38 |
| IOI-HA total score (best possible score 40) | 33.6 (3.5) | 25–39 |
| SSQ12 total score (best possible score 120) | 74.0 (19.4) | 38–102 |
| PANAS positive (best possible score 50) | 34.5 (4.9) | 29–43 |
| PANAS negative (worst possible score 50) | 14.5 (4.5) | 10–25 |
| EMO-CHeQ average score (worst possible score 5) | 2.3 (0.6) | 1.2–3.3 |
| TAS20 (worst possible score 100) | 52.8 (6.2) | 39–63 |
Note. MoCA = Montreal Cognitive Assessment; APHAB = Abbreviated Profile of Hearing Aid Benefit; IOI-HA = International Outcome Inventory for Hearing Aids; HHIA = Hearing Handicap Inventory for Adults; SSQ12 = 12-item Speech Spatial Qualities Questionnaire; PANAS = Positive and Negative Affect Scale; EMO-CHeQ = Emotional Communication in Hearing Questionnaire; TAS20 = 20-Item Toronto Alexithymia Scale; SD = standard deviation.
Multiple Correlations Between Participant Characteristics and Listening Task Performance in Unaided and Aided Listening Conditions.
| | Word recognition | | Emotion identification | |
|---|---|---|---|---|
| Measure | Unaided | Aided | Unaided | Aided |
| Age | −0.47 | −0.42 | −0.45 | −0.40 |
| PTA | − | −0.71 | −0.49 | −0.29 |
| HA average fit | 0.04 | 0.27 | 0.10 | −0.01 |
| HA experience^a | −0.28 | 0.07 | −0.19 | −0.14 |
| MoCA | 0.45 | | 0.59 | 0.53 |
| RWM span | 0.26 | 0.29 | 0.66 | 0.66 |
| APHAB benefit | −0.37 | −0.03 | −0.16 | −0.14 |
| HHIA total | −0.39 | −0.15 | 0.08 | 0.07 |
| IOI-HA | −0.36 | 0.01 | −0.20 | −0.05 |
| SSQ12 | 0.47 | 0.57 | 0.40 | 0.33 |
| PANAS positive | 0.14 | 0.22 | 0.29 | 0.33 |
| PANAS negative | −0.16 | 0.04 | −0.56 | −0.54 |
| EMO-CHeQ | −0.36 | −0.33 | − | − |
| TAS20 | −0.60 | −0.52 | − | −0.69 |
Note. MoCA = Montreal Cognitive Assessment; PTA = pure-tone average; HA = hearing aid; APHAB = Abbreviated Profile of Hearing Aid Benefit; IOI-HA = International Outcome Inventory for Hearing Aids; HHIA = Hearing Handicap Inventory for Adults; SSQ12 = 12-item Speech Spatial Qualities Questionnaire; PANAS = Positive and Negative Affect Scale; EMO-CHeQ = Emotional Communication in Hearing Questionnaire; TAS20 = 20-Item Toronto Alexithymia Scale; RWM = Reading Working Memory.
^a The correlations between hearing aid experience and listening task performance were calculated after excluding one participant who had 50 years of hearing aid experience. As the other participants had a range of hearing aid experience from 2 to 15 years with a median of 5 years, the inclusion of an outlying data point of 50 years resulted in a statistically significant r value that was not meaningful.
Boldface values signify p < .05 with Holm-Bonferroni correction.
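The Holm-Bonferroni correction applied to the correlation table above is a step-down procedure: p-values are sorted in ascending order and the i-th smallest is compared against α/(m − i), stopping at the first non-significant test. A minimal sketch in Python, using illustrative placeholder p-values (not values from the study):

```python
# Holm-Bonferroni step-down procedure (sketch; p-values are placeholders).
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is rejected."""
    m = len(p_values)
    # Indices of the p-values, sorted from smallest to largest.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        # The rank-th smallest p-value is tested at alpha / (m - rank).
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject

print(holm_bonferroni([0.001, 0.04, 0.03, 0.005]))  # → [True, False, False, True]
```

Note that the third-smallest p-value (0.03) fails its threshold of 0.05/2 = 0.025, so it and every larger p-value are retained even though 0.03 would pass an uncorrected .05 cutoff; this is what controls the family-wise error rate across the many correlations reported.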
Figure 5.Word recognition accuracy of younger listeners in noise and older listeners with hearing loss. Data for older listeners are collapsed across Unaided and Aided conditions.
Figure 6.Emotion identification accuracy of younger listeners in noise and older listeners with hearing loss. Data for older listeners are collapsed across Unaided and Aided conditions.