Mariela C Torrente, Rodrigo Vergara, Felipe N Moreno-Gómez, Alexis Leiva, Simón San Martin, Chama Belkhiria, Bruno Marcenaro, Carolina Delgado, Paul H Delano.
Abstract
Presbycusis, or age-related hearing loss, is a prevalent condition in the elderly population that affects oral communication, especially in background noise, and has been associated with social isolation, depression, and cognitive decline. However, the mechanisms that link hearing loss to cognition are complex and still elusive. Importantly, recent studies show that the use of hearing aids in presbycusis, its standard management, can induce neuroplasticity and modify performance in cognitive tests. As most previous studies on audition and cognition drew their results from mixed samples that included presbycusis individuals both fitted and not fitted with hearing aids, here we revisited the associations between hearing loss and cognition in a controlled sample of unaided presbycusis. We performed a cross-sectional study in 116 non-demented Chilean volunteers aged ≥65 years from the Auditory and Dementia study cohort. Specifically, we explored associations between bilateral sensorineural hearing loss, suprathreshold auditory brain stem responses, auditory processing (AP), and cognition, using a comprehensive neuropsychological examination. The AP assessment included speech perception in noise (SIN), dichotic listening (dichotic digits and staggered spondaic words), and temporal processing [frequency pattern (FP) and gap-in-noise detection]. The neuropsychological evaluation covered attention, memory, language, processing speed, executive function, and visuospatial abilities. An exploratory factor analysis yielded four composite factors, namely hearing loss, auditory nerve, midbrain, and cognition, which were then used in generalized multiple linear regression models. We found significant models showing that hearing loss is associated with bilateral SIN performance, while dichotic listening was associated with cognition.
We concluded that the comprehension of the auditory message in unaided presbycusis is a complex process that relies on both audition and cognition. In unaided presbycusis with mild hearing loss (<40 dB HL), speech perception of monosyllabic words in background noise is associated with hearing levels, while cognition is associated with dichotic listening and FP.
Keywords: age-related hearing loss; auditory processing; cognition; elderly; hearing aids; presbycusis
Year: 2022 PMID: 35283747 PMCID: PMC8908240 DOI: 10.3389/fnagi.2022.786330
Source DB: PubMed Journal: Front Aging Neurosci ISSN: 1663-4365 Impact factor: 5.750
Description (median and IQR) of variables, including demographics (age and schooling), hearing loss, suprathreshold auditory brain stem responses, auditory processing, and cognitive domains.
| Variable | Men, median (IQR) | Women, median (IQR) | Mann-Whitney p-value |
| Age (years) | 75 (8) | 72 (8) | 0.012 |
| Schooling (years) | 11 (6) | 11 (6) | |
| PTA right ear (dB HL) | 34.4 (18.1) | 25 (16.7) | 0.042 |
| PTA left ear (dB HL) | 32.5 (22.5) | 23.8 (18.2) | 0.005 |
| OAE right ear | 1.0 (4) | 4.0 (7) | 0.004 |
| OAE left ear | 1.5 (5) | 5.0 (7) | 0.026 |
| Amplitude wave I right ear (μV) | 0.1 (0.09) | 0.12 (0.1) | |
| Amplitude wave I left ear (μV) | 0.10 (0.13) | 0.14 (0.11) | 0.012 |
| Latency wave I right ear (ms) | 1.53 (0.26) | 1.53 (0.17) | |
| Latency wave I left ear (ms) | 1.53 (0.34) | 1.50 (0.19) | 0.024 |
| Amplitude wave V right ear (μV) | 0.33 (0.21) | 0.38 (0.17) | |
| Amplitude wave V left ear (μV) | 0.36 (0.16) | 0.36 (0.19) | |
| Latency wave V right ear (ms) | 5.73 (0.33) | 5.60 (0.37) | 0.000 |
| Latency wave V left ear (ms) | 5.73 (0.47) | 5.60 (0.3) | 0.001 |
| Speech in noise right ear (correct answers) | 22 (2) | 23 (2) | 0.042 |
| Speech in noise left ear (correct answers) | 22 (3) | 23 (2) | |
| Dichotic digits right ear (correct answers) | 37 (5) | 35 (4) | |
| Dichotic digits left ear (correct answers) | 29 (13) | 29 (10) | |
| SSW right ear (total of errors) | 3 (4) | 2 (3) | 0.032 |
| SSW left ear (total of errors) | 8 (8) | 4 (7) | 0.001 |
| Frequency pattern right ear (correct answers) | 13 (9) | 11 (10) | 0.031 |
| Frequency pattern left ear (correct answers) | 11 (10) | 9 (12) | |
| Gap in noise (ms) | 87.1 (67.7) | 99.7 (97.9) | |
| Rey figure | 31.5 (4.5) | 31.0 (5) | |
| Forward digit span | 6 (3) | 6 (2) | |
| Backward digit span | 4 (1) | 4 (2) | |
| Digit symbol | 34.5 (17) | 40.0 (23) | 0.012 |
| TMT-A | 57.5 (32) | 49.0 (34) | |
| TMT-B | 200 (194) | 140 (151) | |
| FCSRT | 43 (8) | 46 (4) | 0.001 |
| Boston Naming Test | 25 (4) | 26 (5) | |
Differences by gender were analyzed for each variable.
PTA, pure-tone average of responses for 0.5, 1, 2, and 4 kHz; OAE, total number of distortion product otoacoustic emissions; SSW, staggered spondaic words; TMT-A, Trail Making Test Part A; TMT-B, Trail Making Test Part B; FCSRT, total recall of the Free and Cued Selective Reminding Test; IQR, interquartile range.
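For reference, the PTA defined in the footnote above is simply the arithmetic mean of the audiometric thresholds at the four listed frequencies. A minimal sketch (the threshold values are made up for illustration, not data from this study):

```python
def pure_tone_average(thresholds_db_hl):
    """Mean of air-conduction thresholds (dB HL) at 0.5, 1, 2, and 4 kHz."""
    assert len(thresholds_db_hl) == 4, "expects one threshold per PTA frequency"
    return sum(thresholds_db_hl) / 4.0

# Example: a sloping mild hearing loss, still under the 40 dB HL cut-off
# used for this cohort.
print(pure_tone_average([25, 30, 35, 40]))  # -> 32.5
```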
FIGURE 1. Grand average audiogram for pure-tone thresholds, expressed as mean and SD, of the left (blue crosses) and right (red circles) ears.
Performance of both ears in tests of hearing and auditory processing.
| Variable | Right ear (median ± SD) | Left ear (median ± SD) | Wilcoxon signed-rank p-value |
| PTA | 28.85 ± 11.23 | 27.83 ± 11.57 | |
| DPOAE | 3.31 ± 2.87 | 3.34 ± 3.0 | 0.97 |
| Amplitude wave I | 0.11 ± 0.7 | 0.13 ± 0.08 | |
| Amplitude wave V | 0.37 ± 0.13 | 0.4 ± 0.15 | 0.06 |
| Latency wave I | 1.59 ± 0.21 | 1.58 ± 0.22 | 0.43 |
| Latency wave V | 5.65 ± 0.25 | 5.66 ± 0.25 | 0.89 |
| Speech in noise | 23 ± 2.6 | 23 ± 2.4 | 0.127 |
| Dichotic digits | 36 ± 4.7 | 29 ± 7.8 | |
| Staggered spondaic words (errors) | 2 ± 5.9 | 6 ± 11 | |
| Frequency pattern | 10 ± 7.6 | 9 ± 7.5 | |
The left ear had a better hearing level (PTA) and a larger amplitude of wave I in suprathreshold ABR. There was a significant difference in favor of the right ear for binaural integration (dichotic digits and staggered spondaic words) and frequency pattern. Speech in noise: correct answers out of 25. Dichotic digits: correct answers out of 40. Staggered spondaic words: total number of errors, considering both competing and non-competing conditions. Frequency pattern: correct answers out of 30.
PTA: pure-tone threshold average for frequencies 0.5, 1, 2, and 4 kHz; DPOAE, distortion product otoacoustic emissions.
Bold numbers are used to highlight statistically significant values.
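The ear-by-ear comparison in the table above is a paired nonparametric test. A minimal sketch of such a comparison using SciPy's Wilcoxon signed-rank test, with simulated scores (not the study's data) that mimic a right-ear advantage in dichotic digits:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n = 116  # cohort size, as in the study

# Simulated paired scores: right ear around 36 correct, left ear
# systematically lower (a right-ear advantage), purely illustrative.
right = rng.normal(36, 4.7, size=n)
left = right - rng.normal(7, 3, size=n)

stat, p = wilcoxon(right, left)
print(p < 0.05)  # a consistent paired difference yields a significant result
```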
Generalized multiple linear regression models for performance in auditory processes (SIN, speech in noise; DD, dichotic digits; SSW, staggered spondaic words; FP, frequency pattern; GiN, gap in noise) for both ears (RE, right ear; LE, left ear).
| | SINRE | SINLE | DDRE | DDLE | SSWRE | SSWLE | FPRE | FPLE | GIN |
| Hearing | | | | | | | | | |
| Auditory nerve | | | | | | | | | |
| Midbrain | | | | | | | | | |
| Cognition | 0.06 | | | | | | | | |
| Schooling | | | | | | | | | |
| Age | −0.17 | −0.13 | | | | | | | |
| Gender (M) | 0.61 | | | | | | | | |
| Pseudo-R² | | | | | | | | | |
| Null deviance | | | | | | | | | |
| Residuals deviance | 187.31 | 103 | 372 | 743 | 500 | 710 | 735 | 789 | |
| Residuals DF | 112 | 109 | 112 | 112 | 111 | 111 | 110 | 110 | 103 |
| Family used | QB | QB | QB | QB | QB | QB | QB | QB | G |
| Model's p-value | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | |
| Bonferroni's p-value | 0.000 | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 | 0.000 | 0.000 | 0.010 |
Independent variables included the composite scores derived from the exploratory factor analysis (hearing loss, auditory nerve, midbrain, and cognition), gender, years of education (schooling), and age. Pseudo-R-squared (R²) values are reported for each model.
SINRE, speech in noise right ear, correct answers; SINLE, speech in noise left ear, correct answers; DDRE, dichotic digits right ear, correct answers; DDLE, dichotic digits left ear, correct answers; SSWRE, staggered spondaic words right ear, total number of errors; SSWLE, staggered spondaic words left ear, total number of errors; FPRE, frequency pattern right ear, correct answers; FPLE, frequency pattern left ear, correct answers; GIN, gap in noise, minimum time gap detected; QB, quasi-binomial; G, gamma; DF, degrees of freedom.
*p < 0.05; **p < 0.01; ***p < 0.001.
Bold numbers are used to highlight statistically significant values.
FIGURE 2. Speech perception and dichotic listening are associated with hearing thresholds and cognition, respectively. Scatter plots present relevant associations found using generalized linear models. The panels show the association between (A) hearing loss and speech in noise for the right ear (SINRE), (B) hearing loss and speech in noise for the left ear (SINLE), (C) cognitive score and dichotic digits of the right ear (DDRE), and (D) cognitive score and dichotic digits of the left ear (DDLE). All variables are presented as z-scores; red circles represent right-ear evaluations, while blue circles illustrate left-ear assessments.