Luodi Yu, Aparna Rao, Yang Zhang, Philip C Burton, Dania Rishiq, Harvey Abrams.
Abstract
Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, direct brain imaging data elucidating the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading have been lacking. To address this gap, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one completed a speechreading training program named ReadMyQuipsTM (RMQ) during the second half of the study period, i.e., the final 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROIs), including auditory cortex and visual cortex for uni-sensory processing and superior temporal sulcus (STS) for AV integration, were identified for each patient through an independent functional localizer task. The results showed experience-dependent changes from pretest to posttest in both cases, involving the auditory cortex ROI, the STS, and functional connectivity between the uni-sensory ROIs and the STS. These data provide initial evidence that cortical function supporting AV speech perception remains malleable and experience-driven in elderly hearing-impaired people, and they call for further studies with a much larger sample and systematic controls to close the knowledge gap in understanding brain plasticity associated with auditory rehabilitation in the aging population.
Keywords: audiovisual integration; auditory training; brain plasticity; fMRI; functional connectivity; hearing aid; speech perception
Year: 2017 PMID: 28270763 PMCID: PMC5318380 DOI: 10.3389/fnagi.2017.00030
Source DB: PubMed Journal: Front Aging Neurosci ISSN: 1663-4365 Impact factor: 5.750
Figure 1. Air-conduction audiometric thresholds in dB HL for the two cases. Red circle represents right ear, and blue cross represents left ear.
Mean air-conduction thresholds in dB HL by test frequency (Hz) for the two cases.

| Participant | Ear | 250 | 500 | 750 | 1000 | 1500 | 2000 | 3000 | 4000 | 6000 | 8000 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Case 1 | R | 20 | 20 | 15 | 15 | 20 | 35 | 70 | 65 | 70 | 75 |
| Case 1 | L | 20 | 20 | 20 | 20 | 45 | 65 | 60 | 60 | 60 | 60 |
| Case 2 | R | 15 | 15 | 30 | 35 | 45 | 40 | 40 | 35 | 40 | 50 |
| Case 2 | L | 10 | 15 | 25 | 35 | 40 | 35 | 35 | 35 | 40 | 50 |

R stands for right ear, and L stands for left ear.
Figure 2. (A) Functionally defined regions of interest (ROIs) identified through the functional localizer for the two cases. The audiovisual (AV) ROI (red) contains voxels responsive to both auditory and visual words in the posterior STS (pSTS). The auditory ROI (green) contains voxels responsive to auditory words within Heschl's gyrus. The visual ROI (yellow) contains voxels responsive to visual words within extrastriate lateral occipitotemporal cortex. (B) Surface mapping of each patient showing activity in each condition. Clusters were identified through voxel-wise statistics corrected for multiple comparisons using the False Discovery Rate procedure with q (adjusted p) < 0.05.
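The False Discovery Rate correction mentioned in the figure legend is commonly implemented with the Benjamini-Hochberg step-up procedure. The sketch below is a generic illustration of that procedure, not the study's actual analysis pipeline; the function name and the example q threshold are ours:

```python
import numpy as np

def fdr_bh(p_values, q=0.05):
    """Benjamini-Hochberg FDR step-up procedure.

    Returns a boolean mask marking which p values are declared
    significant at false-discovery-rate level q.
    """
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)            # indices that sort p ascending
    ranked = p[order]
    # BH thresholds: k/n * q for rank k = 1..n
    thresholds = (np.arange(1, n + 1) / n) * q
    below = ranked <= thresholds
    reject = np.zeros(n, dtype=bool)
    if below.any():
        # reject all hypotheses up to the largest rank passing its threshold
        k = int(np.max(np.nonzero(below)[0]))
        reject[order[: k + 1]] = True
    return reject
```

For example, `fdr_bh([0.01, 0.02, 0.03, 0.5])` rejects the first three hypotheses at q = 0.05, because 0.03 ≤ (3/4) × 0.05.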
Case 1 (hearing aid (HA) use only): activity in the three regions of interest (ROIs)—the auditory, visual and audiovisual (AV) ROIs—and functional connectivity between the uni-sensory ROIs and the AV ROI, in the five stimulus conditions at pretest and posttest.
| Condition | Auditory Pre | Auditory Post | Auditory p | Visual Pre | Visual Post | Visual p | AV Pre | AV Post | AV p | Aud-AV Pre | Aud-AV Post | Aud-AV p | Vis-AV Pre | Vis-AV Post | Vis-AV p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Auditory | 0.39 | 0.43 | 0.528 | −0.05 | −0.15 | 0.704 | 0.23 | 0.39 | <0.05* | 0.19 | 0.72 | <0.001*** | 0.18 | 0.47 | <0.01** |
| Visual | −0.09 | 0.00 | 0.295 | 0.21 | 0.15 | 0.713 | 0.24 | 0.36 | 0.185 | 0.49 | 0.58 | 0.075 | 0.25 | 0.30 | 0.242 |
| Congruent | 0.38 | 0.49 | 0.260 | 0.19 | 0.11 | 0.293 | 0.33 | 0.38 | 0.189 | 0.49 | 0.58 | 0.221 | 0.24 | 0.39 | <0.05* |
| McGurk incongruent | 0.42 | 0.45 | 0.626 | 0.30 | 0.13 | 0.910 | 0.29 | 0.45 | 0.175 | 0.48 | 0.56 | 0.346 | 0.14 | 0.30 | 0.075 |
| Non-McGurk incongruent | 0.42 | 0.61 | 0.136 | 0.31 | 0.20 | 0.512 | 0.39 | 0.46 | 0.198 | 0.59 | 0.78 | <0.01** | 0.22 | 0.46 | <0.01** |
For each ROI, numbers in the first two columns present percentage signal change in activity relative to baseline at pretest and posttest; the third column presents significance of change in activity from pretest to posttest obtained from bootstrapping. For each pair of ROIs, numbers in the first two columns present connectivity measured by averaged correlation coefficient between voxels within the uni-sensory ROI and the AV ROI at pretest and posttest; the third column contains significance (p value) of change in functional connectivity from pretest to posttest obtained from bootstrapping. *p < 0.05; **p < 0.01; ***p < 0.001.
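The bootstrapped comparison of pretest vs. posttest connectivity described in the table note can be sketched as a generic pairs bootstrap on ROI time series. This is an illustrative percentile-bootstrap test under our own assumptions (hypothetical time series, resampling of time points with replacement), not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_connectivity_change(pre_a, pre_b, post_a, post_b, n_boot=5000):
    """Two-sided bootstrap test for a pre-to-post change in connectivity,
    where connectivity is the correlation between two ROI time series.

    Returns (observed change, bootstrap p value).
    """
    def corr(x, y):
        return np.corrcoef(x, y)[0, 1]

    observed = corr(post_a, post_b) - corr(pre_a, pre_b)
    t_pre, t_post = len(pre_a), len(post_a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # resample time points with replacement, keeping ROI pairs aligned
        idx_pre = rng.integers(0, t_pre, t_pre)
        idx_post = rng.integers(0, t_post, t_post)
        diffs[i] = (corr(post_a[idx_post], post_b[idx_post])
                    - corr(pre_a[idx_pre], pre_b[idx_pre]))
    # two-sided p: how often the bootstrap distribution crosses zero
    p = 2 * min(np.mean(diffs <= 0), np.mean(diffs >= 0))
    return observed, min(p, 1.0)
```

With synthetic data whose correlation rises from ~0 at pretest to ~0.9 at posttest, the function returns a clearly positive observed change with p below 0.05.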
Case 2 (HA use + AV training): activity in the three ROIs—the auditory, visual and AV ROIs—and functional connectivity between the uni-sensory ROIs and the AV ROI, in the five stimulus conditions at pretest and posttest.
| Condition | Auditory Pre | Auditory Post | Auditory p | Visual Pre | Visual Post | Visual p | AV Pre | AV Post | AV p | Aud-AV Pre | Aud-AV Post | Aud-AV p | Vis-AV Pre | Vis-AV Post | Vis-AV p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Auditory | 0.19 | 0.42 | <0.05* | −0.06 | −0.06 | 0.496 | 0.20 | 0.35 | <0.01** | 0.29 | 0.45 | 0.222 | 0.14 | 0.31 | 0.115 |
| Visual | 0.00 | −0.09 | 0.991 | 0.26 | 0.20 | 0.875 | 0.24 | 0.20 | 0.768 | 0.09 | 0.29 | 0.056 | 0.14 | 0.15 | 0.255 |
| Congruent | 0.30 | 0.49 | <0.001*** | 0.25 | 0.22 | 0.883 | 0.38 | 0.39 | 0.351 | 0.23 | 0.31 | 0.363 | 0.17 | 0.18 | 0.347 |
| McGurk incongruent | 0.25 | 0.42 | <0.001*** | 0.24 | 0.25 | 0.334 | 0.32 | 0.44 | <0.001*** | 0.19 | 0.39 | <0.01** | 0.09 | 0.17 | 0.099 |
| Non-McGurk incongruent | 0.21 | 0.39 | 0.068 | 0.18 | 0.19 | 0.451 | 0.26 | 0.40 | <0.01** | 0.22 | 0.42 | <0.01** | 0.11 | 0.26 | <0.01** |
For each ROI, numbers in the first two columns present percentage signal change in activity relative to baseline at pretest and posttest; the third column presents significance of change in activity from pretest to posttest obtained from bootstrapping. For each pair of ROIs, numbers in the first two columns present connectivity measured by averaged correlation coefficient between voxels within the uni-sensory ROI and the AV ROI at pretest and posttest; the third column contains significance (p value) of change in functional connectivity from pretest to posttest obtained from bootstrapping. *p < 0.05; **p < 0.01; ***p < 0.001.