| Literature DB >> 27242564 |
Antje Heinrich, Helen Henshaw, Melanie A. Ferguson.
Abstract
Good speech perception and communication skills in everyday life are crucial for participation and well-being, and are therefore an overarching aim of auditory rehabilitation. Both behavioral and self-report measures can be used to assess these skills. However, correlations between behavioral and self-report speech perception measures are often low. One possible explanation is that there is a mismatch between the specific situations used in the assessment of these skills in each method, and a more careful matching across situations might improve consistency of results. The role that cognition plays in specific speech situations may also be important for understanding communication, as speech perception tests vary in their cognitive demands. In this study, the role of executive function, working memory (WM) and attention in behavioral and self-report measures of speech perception was investigated. Thirty existing hearing aid users with mild-to-moderate hearing loss aged between 50 and 74 years completed a behavioral test battery with speech perception tests ranging from phoneme discrimination in modulated noise (easy) to words in multi-talker babble (medium) and keyword perception in a carrier sentence against a distractor voice (difficult). In addition, a self-report measure of aided communication, residual disability from the Glasgow Hearing Aid Benefit Profile, was obtained. Correlations between speech perception tests and self-report measures were higher when specific speech situations across both were matched. Cognition correlated with behavioral speech perception test results but not with self-report. Only the most difficult speech perception test, keyword perception in a carrier sentence with a competing distractor voice, engaged executive functions in addition to WM. In conclusion, any relationship between behavioral and self-report speech perception is not mediated by a shared correlation with cognition.
Keywords: cognition; communication; hearing aid users; mild-to-moderate hearing loss; self-report; speech perception
Year: 2016 PMID: 27242564 PMCID: PMC4876806 DOI: 10.3389/fpsyg.2016.00576
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Mean, standard deviation (SD) and range for demographic information and experimental variables.
| Category | Measure | Test | Condition | Mean | SD | Range |
|---|---|---|---|---|---|---|
| Demographic information | Age (years) | | | 67.4 | 7.1 | 50.0–74.0 |
| | Better ear average (dB HL), BEA | | 4 freq (0.5, 1, 2, 4 kHz) | 43.6 | 13.6 | 25.0–77.5 |
| Speech perception | Discrimination of isolated phonemes (step size) | PD | 8-Hz modulated noise at 0 dB SNR | 74.2 | 12.1 | 54–99 |
| | Discrimination of phonemes in words (percent correct) | FAAF | 20-talker babble at 0 dB SNR | 63.5 | 12.8 | 37.7–82.4 |
| | Word perception (number correct/20) | Words | Quiet | 15.9 | 3.7 | 6.0–20.0 |
| | | Words | 20-talker babble at 0 dB SNR | 5.9 | 3.2 | 0.0–13.0 |
| | | Words | 20-talker babble at -4 dB SNR | 2.2 | 2.7 | 0.0–13.0 |
| | Keyword perception in a carrier sentence (dB SNR) | MCRM | Single-voice distractor at variable SNR | -4.3 | 5.6 | -15.0 to +5.0 |
| Cognition | Single attention (time for completion / no. correct) | Test of Everyday Attention (#6), TEA6 | | 3.5 | 0.6 | 2.4–5.0 |
| | Divided attention (time for completion / no. correct) | Test of Everyday Attention (#7), TEA7 | | 3.9 | 1.1 | 2.4–6.6 |
| | Attention-related decrement | Test of Everyday Attention (dual-task decrement), DTD | | 1.1 | 1.6 | -0.5 to 6.8 |
| | Verbal WM under dual attention (number correct/20) | Digits | Quiet | 15.2 | 4.3 | 5.0–20.0 |
| | | Digits | 0 dB SNR | 12.1 | 4.6 | 5.0–20.0 |
| | | Digits | -4 dB SNR | 14.6 | 5.1 | 0.0–20.0 |
| | Sustained attention (reaction time in ms) | IMAP | Visual uncued | 427.2 | 123.7 | 268.3–922.6 |
| | | IMAP | Visual cued | 330.1 | 91.1 | 234.1–640.7 |
| | | IMAP | Visual difference | 97.1 | 75.4 | -34.0 to 281.8 |
| | | IMAP | Audio uncued | 597.7 | 156.3 | 289.3–1033.9 |
| | | IMAP | Audio cued | 401.5 | 124.7 | 226.6–880.0 |
| | | IMAP | Audio difference | 196.3 | 98.3 | 0.0–375.4 |
| | Verbal WM and response control (number correct/40) | SICspan Size | | 29.0 | 5.1 | 8.0–35.0 |
| | Verbal WM and response control (number correct/80) | SICspan Intrusions | | 3.4 | 1.7 | 1–8 |
| | Verbal WM (number correct/24) | Letter Number Sequencing (LNS) | | 9.2 | 2.5 | 4.0–17.0 |
| Self-report | GHABP (Residual Disability only) (percent) | Overall | | 27.5 | 15.0 | 0.0–56.25 |
| | | Q1: Listening to TV | | 25.0 | 23.7 | 0.0–75.0 |
| | | Q2: Conversation in quiet | | 12.5 | 15.7 | 0.0–50.0 |
| | | Q3: Conversation in a busy street/shop | | 28.3 | 22.5 | 0.0–75.0 |
| | | Q4: Group conversation with several people | | 44.2 | 21.5 | 0.0–75.0 |
Spearman correlation coefficients between residual disability scores for each of the four GHABP listening situations and the speech perception tests.
| GHABP situation | Phonemes: discrimination in noise (PD) | Isolated words: FAAF | Words: Quiet | Words: 0 dB SNR | Words: -4 dB SNR | Sentences: MCRM |
|---|---|---|---|---|---|---|
| Listening to TV (Q1) | -0.45* | -0.15 | -0.11 | -0.33 | -0.35 | 0.17 |
| Conversation in quiet (Q2) | 0.08 | 0.05 | -0.02 | -0.19 | 0.14 | 0.05 |
| Conversation in a busy street/shop (Q3) | -0.24 | -0.41* | -0.24 | -0.54** | -0.36* | 0.26 |
| Group conversation with several people (Q4) | -0.41* | -0.17 | -0.11 | -0.08 | -0.02 | 0.41* |
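For readers who want to reproduce this kind of analysis, a minimal Spearman rank-correlation sketch follows. The paired scores are invented placeholders, not the study's data, and the rank-difference formula used assumes no tied ranks:

```python
# Minimal Spearman rank correlation, as reported in the table above.
# The data below are invented placeholders, NOT the study's values.

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (assumes no ties)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical paired scores: GHABP residual disability (%) vs. a speech test
residual_disability = [10, 25, 40, 5, 60, 30, 15, 50]
speech_score = [85, 70, 55, 90, 40, 80, 65, 45]

rho = spearman_rho(residual_disability, speech_score)
print(f"rho = {rho:.2f}")  # strongly negative: better speech scores, less disability
```

A library implementation such as `scipy.stats.spearmanr` additionally handles ties and returns a p-value, which the asterisks in the table reflect.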
Factor loadings for five cognitive tests producing a two-factor solution in a Principal Component Analysis.
| Test | Attention (36.2%) | Working memory (28.7%) |
|---|---|---|
| TEA6 | 0.93 | -0.03 |
| TEA7 | 0.84 | -0.07 |
| SICspan Size | 0.01 | 0.69 |
| LNS | -0.49 | 0.60 |
| Digit Quiet | -0.05 | 0.78 |
Factor loadings for five cognitive tests producing a three-factor solution in a Principal Component Analysis.
| Test | Factor 1 (27.3%) | Factor 2 (20.8%) | Factor 3 (15.3%) |
|---|---|---|---|
| TEA6 | 0.81 | 0.02 | -0.37 |
| TEA7 | 0.73 | 0.07 | -0.34 |
| SICspan Size | 0.02 | 0.16 | 0.55 |
| LNS | -0.22 | 0.08 | 0.77 |
| Digit Quiet | 0.00 | 0.78 | 0.37 |
| Digit 0 dB SNR | -0.31 | 0.71 | 0.14 |
| Digit -4 dB SNR | -0.71 | 0.42 | 0.00 |
| SICspan Intrusions | 0.26 | 0.78 | -0.09 |
| IMAP visual difference | 0.64 | 0.34 | 0.13 |
| IMAP audio difference | 0.63 | -0.15 | 0.46 |
| Interpretation | Attentional interference control | Response control | Verbal WM |
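As a sketch of how loadings like these are derived, the snippet below runs a PCA on the correlation matrix of standardized scores. The data are random stand-ins for the 30 listeners' cognitive scores, and the loadings are unrotated (the published solutions were presumably rotated, e.g. varimax), so the numbers will not match the tables:

```python
import numpy as np

# Sketch: factor loadings from a PCA on the correlation matrix.
# X stands in for 30 listeners x 5 cognitive tests; values are random
# placeholders, NOT the study's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))

R = np.corrcoef(X, rowvar=False)       # 5x5 correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]      # reorder descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = 2
# Loadings = eigenvectors scaled by the square root of their eigenvalues
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
explained = eigvals[:n_factors] / eigvals.sum()  # cf. percentages in the headers

print(np.round(loadings, 2))
print(np.round(100 * explained, 1))  # percent variance per factor
```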
Results for forward stepwise regression models carried out for each of the six speech perception tests and the four GHABP self-report questions.
| Speech test | Step | R | R² | Adj. R² | SE of estimate | ΔR² | F change | df1 | df2 | Sig. | Sig. predictors |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Phoneme | 1 | 0.43 | 0.18 | 0.12 | 22.76 | 0.18 | 2.93 | 2 | 26 | 0.071 | |
| | 2 | 0.44 | 0.19 | 0.01 | 24.12 | 0.01 | 0.05 | 3 | 23 | 0.985 | |
| FAAF | 1 | 0.69 | 0.48 | 0.44 | 9.60 | 0.48 | 12.23 | 2 | 27 | 0.001 | BEA |
| | 2 | 0.86 | 0.74 | 0.68 | 7.18 | 0.26 | 8.07 | 3 | 24 | 0.001 | verbal WM |
| Words Quiet | 1 | 0.73 | 0.53 | 0.50 | 2.62 | 0.53 | 15.48 | 2 | 27 | 0.001 | BEA |
| | 2 | 0.77 | 0.59 | 0.50 | 2.61 | 0.05 | 1.01 | 3 | 24 | 0.404 | |
| Words 0 dB SNR | 1 | 0.66 | 0.43 | 0.39 | 2.47 | 0.43 | 10.16 | 2 | 27 | 0.001 | BEA |
| | 2 | 0.78 | 0.60 | 0.52 | 2.19 | 0.17 | 3.50 | 3 | 24 | 0.031 | verbal WM |
| Words -4 dB SNR | 1 | 0.25 | 0.06 | -0.01 | 2.69 | 0.06 | 0.87 | 2 | 27 | 0.432 | |
| | 2 | 0.60 | 0.35 | 0.22 | 2.36 | 0.29 | 3.64 | 3 | 24 | 0.027 | verbal WM |
| MCRM | 1 | 0.56 | 0.32 | 0.27 | 4.79 | 0.32 | 6.27 | 2 | 27 | 0.006 | BEA |
| | 2 | 0.77 | 0.59 | 0.51 | 3.93 | 0.27 | 5.35 | 3 | 24 | 0.006 | response control, verbal WM |
| Q1 | 1 | 0.46 | 0.21 | 0.16 | 21.76 | 0.21 | 3.67 | 2 | 27 | 0.039 | BEA |
| | 2 | 0.49 | 0.24 | 0.08 | 22.75 | 0.02 | 0.23 | 3 | 24 | 0.873 | |
| Q2 | 1 | 0.26 | 0.07 | 0.001 | 15.75 | 0.07 | 0.99 | 2 | 27 | 0.384 | |
| | 2 | 0.40 | 0.16 | -0.01 | 15.83 | 0.09 | 0.90 | 3 | 24 | 0.455 | |
| Q3 | 1 | 0.35 | 0.12 | 0.06 | 21.82 | 0.12 | 1.90 | 2 | 27 | 0.169 | |
| | 2 | 0.52 | 0.27 | 0.12 | 21.12 | 0.15 | 1.61 | 3 | 24 | 0.214 | |
| Q4 | 1 | 0.41 | 0.17 | 0.11 | 20.28 | 0.17 | 2.73 | 2 | 27 | 0.083 | |
| | 2 | 0.42 | 0.17 | 0.01 | 21.45 | 0.01 | 0.05 | 3 | 24 | 0.985 | |
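The forward stepwise procedure behind such models can be sketched as follows: at each step, add the candidate predictor that most increases R². The data below are simulated with invented effect sizes (loosely echoing the table, where hearing level and verbal WM carry the prediction), so names and numbers are purely illustrative:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / np.sum((y - y.mean()) ** 2)

def forward_stepwise(X, y, names, n_steps=2):
    """Greedy forward selection: at each step add the predictor giving the
    largest R^2. (A real analysis would also test the F-change p-value.)"""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_steps):
        best = max(remaining, key=lambda j: r_squared(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
        print(f"step {len(selected)}: +{names[best]}, "
              f"R^2 = {r_squared(X[:, selected], y):.2f}")
    return [names[j] for j in selected]

# Simulated data: speech score driven by hearing level (BEA) and verbal WM,
# with attention as an irrelevant candidate predictor. NOT the study's data.
rng = np.random.default_rng(1)
n = 30
bea = rng.normal(45, 13, n)
wm = rng.normal(12, 4, n)
attention = rng.normal(3.5, 0.6, n)
y = -0.5 * bea + 1.5 * wm + rng.normal(0, 2, n)

order = forward_stepwise(np.column_stack([bea, wm, attention]), y,
                         ["BEA", "verbal WM", "attention"])
```

With these effect sizes the two informative predictors enter first and the irrelevant attention variable is left out, mirroring how BEA and verbal WM dominate the table above.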