
Contralateral dominance to speech in the adult auditory cortex immediately after cochlear implantation.

Maureen J Shader, Robert Luke, Colette M McKay

Abstract

Sensory deprivation causes structural and functional changes in the human brain. Cochlear implantation delivers immediate reintroduction of auditory sensory information. Previous reports have indicated that over a year is required for the brain to reestablish canonical cortical processing patterns after the reintroduction of auditory stimulation. We utilized functional near-infrared spectroscopy (fNIRS) to investigate brain activity to natural speech stimuli directly after cochlear implantation. We presented 12 cochlear implant recipients, who each had a minimum of 12 months of auditory deprivation, with unilateral auditory- and visual-speech stimuli. Regardless of the side of implantation, canonical responses were elicited primarily on the contralateral side of stimulation as early as 1 h after device activation. These data indicate that auditory pathway connections are sustained during periods of sensory deprivation in adults, and that typical cortical lateralization is observed immediately following the reintroduction of auditory sensory input.
© 2022 The Authors.

Keywords:  Bioelectronics; Clinical neuroscience; Sensory neuroscience

Year:  2022        PMID: 35938045      PMCID: PMC9352526          DOI: 10.1016/j.isci.2022.104737

Source DB:  PubMed          Journal:  iScience        ISSN: 2589-0042


Introduction

A reduction in sensory input results in compensatory changes in the brain (Kolb and Whishaw, 1998), including within the auditory pathway after hearing loss or deafness (Finney et al., 2001). Using neuroimaging techniques to understand how deafness affects the auditory pathway is challenging because it is not possible to present acoustic stimuli that sufficiently activate, and thus probe, the neural structure. Instead, an effective method to determine whether auditory deprivation has changed auditory processing in the brain is to measure neural activation to auditory input immediately after sensory input is reinstated. Cochlear implants (CIs) are auditory prosthetic devices that can immediately restore auditory input to individuals with severe degrees of hearing loss for whom hearing aids do not provide sufficient auditory information. The aim of this study was to characterize the auditory- and visual-speech evoked patterns of neural activation in the auditory cortices of newly implanted CI recipients immediately after implant switch-on and the reintroduction of the auditory percept.

In normal-hearing listeners, unilateral auditory stimulation evokes predominately contralateral activation in the auditory cortex (Bilecen et al., 2000; Hirano et al., 1997). In the years following unilateral cochlear implantation, contralateral dominance of auditory-evoked cortical activity can be observed (Gilley et al., 2008; Jiwani et al., 2016; Naito et al., 1995), demonstrated by strong lateralization of cortical activity to the auditory cortex contralateral to the implanted ear. Currently, it is unclear whether contralateral dominance is present immediately following the reinstatement of auditory input (e.g., at implant switch-on), or whether hemispheric differences in auditory-evoked activity take time to emerge.
Beyond this fundamental question, quantifying the expected neural response to speech immediately post-implantation may also have clinical rehabilitation implications. Developing an understanding of the patterns of neural activity directly after implantation may lead to a neuromarker indicating which CI recipients may benefit from additional rehabilitation support. Cortical activity following cochlear implantation has previously been measured using various neuroimaging techniques (e.g., fMRI, PET, EEG), but these tools are not ideal for use in individuals with CIs. A CI comprises an internal receiver positioned against the temporal bone and an electrode array implanted into the cochlea. In addition, an external device processes incoming acoustic signals and transmits them to the internal device through magnetic coupling and a transcutaneous FM signal. The presence of a CI can disrupt traditional neuroimaging techniques: the metallic device causes artifacts in MRI and MEG measurements, and the electromagnetic waves from signal transmission and delivery similarly cause artifacts in M/EEG measurements. A variety of approaches have been proposed to overcome these artifacts (Deprez et al., 2017; Luke and Wouters, 2016); however, these all involve modifying the stimulus parameters away from what is heard during daily use (e.g., naturalistic speech signals), which limits the conclusions that can be drawn from the results. Instead of modifying the stimulus or trying to remove artifacts in post-processing, this study utilized functional near-infrared spectroscopy (fNIRS), an imaging technique that fundamentally avoids the causes of electromagnetic artifacts. fNIRS uses near-infrared light to estimate the concentration of oxygenated hemoglobin in the cortical blood flow.
It is sensitive to brain activity in outer cortical surfaces, including both the auditory and visual cortices (Luke et al., 2021a, 2022; Shader et al., 2021; Wiggins et al., 2016). In addition to changes in auditory cortex lateralization after implantation, lateralization of visual-evoked activity in the auditory cortex (cross-modal activity) has been observed in the left auditory cortex of CI recipients after at least 6 months of CI experience (Chen et al., 2016). However, hemispheric lateralization of visual-evoked responses in the auditory cortex is not consistent across studies and may be related to individuals’ age at the onset of hearing loss (e.g., Bottari et al., 2014). In general, lateralization for visual-evoked cross-modal activation in the auditory cortex is not well understood, and it is unclear if cross-modal lateralization is present at the time of implant switch-on. The aim of the current study was to characterize the auditory- and visual-evoked patterns of cortical activation in the auditory cortices of newly implanted CI recipients immediately after implant switch-on using fNIRS. Participant demographics are detailed in Table 1. Auditory-only and visual-only continuous-speech signals were used to evaluate functional cortical activation given naturalistic speech stimuli. Activation was characterized on the group level, as well as when accounting for each individual’s side of implantation.
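The conversion from raw fNIRS light measurements to hemoglobin concentration changes is typically done with the modified Beer-Lambert law: optical density changes at two wavelengths are inverted through the extinction coefficients of oxygenated and deoxygenated hemoglobin. A minimal sketch of the idea follows; this is not the authors' processing pipeline, and the extinction coefficients, distance, and pathlength factor are illustrative round numbers.

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD(lambda) = (eps_HbO*dHbO + eps_HbR*dHbR) * d * DPF
# Extinction coefficients below are illustrative values, not calibrated constants.
EPS = np.array([[1486.0, 3843.0],   # 760 nm: [eps_HbO, eps_HbR]
                [2526.0, 1798.0]])  # 850 nm: [eps_HbO, eps_HbR]

def mbll(delta_od_760, delta_od_850, distance_cm=3.0, dpf=6.0):
    """Solve the 2x2 system for concentration changes (dHbO, dHbR)."""
    delta_od = np.array([delta_od_760, delta_od_850])
    # Effective path length = source-detector distance * differential pathlength factor
    path = distance_cm * dpf
    dhbo, dhbr = np.linalg.solve(EPS * path, delta_od)
    return dhbo, dhbr

# A positive HbO change paired with a smaller negative HbR change is the
# canonical activation pattern that fNIRS analyses look for.
dhbo, dhbr = mbll(0.01, 0.02)
```

Because HbO and HbR absorb differently on either side of ~800 nm, measuring at one wavelength above and one below that crossover keeps the 2x2 system well conditioned.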
Table 1

Participant demographic information

Subject | Age (years) | Sex | Age at onset of hearing loss in test ear (years) | Duration of new CI experience (days) | Ear tested | Duration of prior CI experience in opposite ear (years) | BKB sentences-in-noise (+15 dB SNR) score (percent correct)
01 | 38 | M | 37 | <1 | Left | 1 | 70
02 | 70 | M | 19 | 1 | Right | N/A | 29
03 | 66 | M | 46 | 6 | Right | N/A | 81
04 | 62 | M | 57 | 21 | Right | N/A | 59
05 | 65 | F | 62 | 11 | Right | N/A | 82
06 | 48 | F | 28 | 6 | Right | N/A | 0
07 | 56 | F | 8 | 7 | Right | N/A | 0
08 | 53 | M | 23 | <1 | Left | N/A | 4
09 | 78 | F | 28 | 15 | Right | 3 | 0
10 | 74 | M | 20 | 7 | Right | 3.5 | 44
11 | 75 | F | 45 | 15 | Right | 9 | 85
12 | 63 | M | 20 | 34 | Left | N/A | 74

Note. The sentences-in-noise speech recognition scores were obtained on the same day as the neuroimaging recordings. BKB = Bamford-Kowal-Bench sentence test. SNR = signal-to-noise ratio.


Results and discussion

Group-level auditory and visual response morphology of cochlear implant recipients resembles normal-hearing listeners

When taken together as a group (i.e., combining left-implanted and right-implanted participants), auditory- and visual-evoked responses were relatively consistent with a previous study using the identical experimental paradigm in a group of normal-hearing participants (Shader et al., 2021). Despite the listeners’ extended period of sensory deprivation following hearing loss, the evoked waveforms demonstrated a canonical hemodynamic response morphology as shown in Figure 1. This morphology is consistent with those found in previous studies investigating CI populations (e.g., Anderson et al., 2019). The canonical response morphology enables analysis to be performed using the GLM approach, which is well suited to fNIRS data due to the properties of physiological noise (Huppert, 2016).
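The GLM approach models the measured signal as stimulus timing convolved with a canonical hemodynamic response, then estimates response amplitude by regression. A minimal sketch of the idea follows; the SPM-style double-gamma shape, 2 Hz sample rate, and simulated data are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from math import gamma as g

def canonical_hrf(t):
    """SPM-style double-gamma hemodynamic response (peak ~5 s, late undershoot)."""
    a1, a2, ratio = 6.0, 16.0, 1.0 / 6.0
    return (t ** (a1 - 1) * np.exp(-t) / g(a1)
            - ratio * t ** (a2 - 1) * np.exp(-t) / g(a2))

fs = 2.0                                   # sample rate (Hz); fNIRS-typical, illustrative
t = np.arange(0, 30, 1 / fs)
hrf = canonical_hrf(t)

# Boxcar regressor: one 12.5 s stimulus block starting at t = 20 s in a 120 s recording.
n = int(120 * fs)
boxcar = np.zeros(n)
boxcar[int(20 * fs):int(32.5 * fs)] = 1.0
regressor = np.convolve(boxcar, hrf)[:n] / fs   # predicted hemodynamic response

# Simulate a noisy channel with true amplitude 1.5 and recover it by least squares.
rng = np.random.default_rng(0)
y = 1.5 * regressor + 0.05 * rng.standard_normal(n)
X = np.column_stack([regressor, np.ones(n)])    # response regressor + constant term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted beta for the response regressor is the amplitude estimate that the ROI-level statistics in this paper are built on; real pipelines add drift regressors and noise models suited to fNIRS physiology.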
Figure 1

Group-level mean response amplitudes of the fNIRS tracings for the seven regions of interest

Regions of interest include: Left Inferior Frontal gyrus, Left Heschl’s gyrus, Left Planum Temporale, Right Heschl’s gyrus, Right Planum Temporale, Superior Occipital gyrus, and Middle Occipital gyrus. Responses are shown for each stimulus condition: Auditory-only, Visual-only, and Silence Control. Shaded areas represent 95% confidence intervals.

GLM analysis performed with a canonical response model also demonstrated group-level similarity to normal-hearing listeners. Figure 2 displays group-level neural activity from all participants projected onto the cortical surface. For the auditory-only condition, significant activation was present in the Right Heschl’s gyrus ROI (β = 1.60, p = 0.003). Auditory-evoked activation was also observed in the Left Heschl’s gyrus ROI but did not reach statistical significance (β = 1.05, p = 0.054). Visual-evoked activation was present in both the Superior Occipital (β = 1.63, p = 0.003) and Middle Occipital (β = 1.27, p = 0.02) ROIs. In addition, visual-evoked activation was present in Right Heschl’s gyrus (β = 1.34, p = 0.014) and in Left Heschl’s gyrus, though the latter did not reach statistical significance (β = 0.85, p = 0.11). These activation patterns potentially reflect cross-modal activity in auditory ROIs for visual-speech stimuli.
Figure 2

Group-level neural activity from all participants

Oxyhemoglobin estimates are presented as a projection onto the cortical surface. Stimulus conditions are presented (rows) with different views of the brain (columns). No consistent activation is observed for the Silence Control condition. Further analysis which includes the correction for the confound of the side of implantation is provided in Figure 3 and Table 2.

Figure 3

Neural activity for three subgroups of participants separated by side of implantation

Oxyhemoglobin estimates are presented as a projection onto the cortical surface. ∗Note that three out of four bilateral participants had new right-sided implants; therefore, the results for the left and right hemispheres were reversed in one participant with a new left-sided implant in order to visualize contralateral and ipsilateral responses in the bilateral group. For the bilateral group, the right hemisphere corresponds to the ipsilateral side of stimulation and the left hemisphere corresponds to the contralateral side of stimulation.

Table 2

Results of linear mixed effect model for auditory-only stimuli in the Auditory ROIs

Fixed Effects
Hemisphere | Auditory ROI | Chroma | β | SE | t | p
Ipsilateral | Heschl’s gyrus | HbO | 0.72 | 0.43 | 1.66 | 0.102
Ipsilateral | Planum temporale | HbO | 0.19 | 0.43 | 0.43 | 0.668
Contralateral | Heschl’s gyrus | HbO | 1.94 | 0.43 | 4.49 | <0.001
Contralateral | Planum temporale | HbO | −0.74 | 0.43 | −1.70 | 0.093
Ipsilateral | Heschl’s gyrus | HbR | −0.69 | 0.43 | −1.59 | 0.116
Ipsilateral | Planum temporale | HbR | −0.17 | 0.43 | −0.39 | 0.695
Contralateral | Heschl’s gyrus | HbR | −1.49 | 0.43 | −3.44 | 0.001
Contralateral | Planum temporale | HbR | −0.26 | 0.43 | −0.61 | 0.544

Note. SE = standard error. SD = standard deviation. Bold text indicates significance at the p < 0.05 level.

Rather than Left or Right, Auditory ROIs are coded as either Contralateral or Ipsilateral to the implanted ear. Model notation: β ∼ −1 + Hemisphere:ROI:Chroma + (1|Subject).
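The `−1` in the model notation removes the intercept, so the fixed effects form a cell-means coding: one β per Hemisphere × ROI × Chroma combination, each tested directly against zero. A sketch of that fixed-effects design matrix follows (the subject random intercept is omitted, and the helper names are illustrative, not the authors' code):

```python
import numpy as np
from itertools import product

# The eight condition cells implied by Hemisphere:ROI:Chroma with no intercept.
hemis = ["Ipsilateral", "Contralateral"]
rois = ["Heschl's gyrus", "Planum temporale"]
chromas = ["HbO", "HbR"]
cells = list(product(hemis, rois, chromas))     # 8 fixed-effect columns

def design_row(hemi, roi, chroma):
    """One-hot row: 1 in the column for this condition cell, 0 elsewhere."""
    return [1.0 if (hemi, roi, chroma) == c else 0.0 for c in cells]

# With exactly one observation per cell, the design matrix is the identity,
# which is why each beta estimates its cell mean directly.
X = np.array([design_row(*c) for c in cells])
```

With an intercept included, one cell would instead become the reference level and the other coefficients would be differences from it; dropping the intercept makes the per-cell significance tests in Tables 2 and 3 straightforward to read.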

Consistent with results from normal-hearing listeners reported in Shader et al. (2021), no activation was observed in the Left Inferior Frontal gyrus or Planum Temporale ROIs for any auditory stimulus condition (Supp. Material, Table S1). The Left Inferior Frontal gyrus ROI was included in this study because neural activation in this structure is modulated by stimulus intelligibility (Stoppelman et al., 2013). We previously hypothesized that the lack of activation in this region in normal-hearing listeners may be due to the ease with which those listeners comprehended speech-in-quiet. Yet in this study, listeners who, because of their newly acquired implant, had greater difficulty understanding speech also demonstrated no activation in the inferior frontal gyrus. Therefore, the lack of measured activity in this region is likely due to the activity being relatively deep within the cortex, which lowers the likelihood of the fNIRS signal capturing activity within this structure (e.g., Stoppelman et al., 2013). No significant auditory-evoked activation was observed in either Planum Temporale ROI, consistent with normal-hearing listeners. A visual-evoked negative HbO response was observed in the Left Planum Temporale region (β = −1.34, p = 0.015), and a response of similar magnitude has been observed in normal-hearing listeners (β = −1.46, p = 0.072). This similarity between participant groups across two separate studies serves to validate the experimental design and neuroimaging acquisition; however, a direct comparison between these two groups is beyond the scope of this experimental design, as any differences may be due to differences in the populations. For example, older participants in this study are likely to have thinner layers of skin, cerebrospinal fluid, and skull, potentially increasing the sensitivity of the fNIRS technique to the neural tissue (Scholkmann and Wolf, 2013).

Cortical auditory responses to unilateral cochlear implant stimulation occur contralateral to the side of implantation immediately after switch-on

Combining results from all participants, as shown in Figures 1 and 2, provides a means to validate the measurement technique and the assumptions of the statistical model used to detect responses. However, averaging across listeners who are implanted in different ears obfuscates potential response lateralization effects. As such, to investigate whether responses were lateralized relative to the side of stimulation, the auditory ROIs were re-coded as either “contralateral” or “ipsilateral” to the implanted ear. Group-level analysis using the re-coded ROIs determined that auditory-evoked responses were lateralized to the Heschl’s gyrus ROI in the hemisphere contralateral to the stimulated ear (Table 2). Significant group-level auditory-evoked activation was shown in the contralateral Heschl’s gyrus ROI only (β = 1.94, p < 0.001), with no significant activation in either the Heschl’s gyrus or Planum Temporale ROI on the ipsilateral side.
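The re-coding step amounts to a simple relabeling of each hemisphere relative to the implanted ear. A small sketch of that logic (hypothetical helper; the paper's analysis code is not shown):

```python
def recode_roi(hemisphere, implanted_ear):
    """Relabel a Left/Right hemisphere ROI relative to the implanted ear,
    so 'Contralateral' always means the hemisphere opposite the new CI."""
    if hemisphere not in ("Left", "Right") or implanted_ear not in ("Left", "Right"):
        raise ValueError("expected 'Left' or 'Right'")
    return "Ipsilateral" if hemisphere == implanted_ear else "Contralateral"
```

For example, a right-implanted participant's Left Heschl's gyrus ROI becomes the contralateral ROI, letting left- and right-implanted participants be pooled in one model.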
It is possible that the CI device itself, either the external or internal component, may have obscured the detection of a neural response on the ipsilateral side. To investigate this potential confound, participants were divided into three groups: unilateral right-sided implants, unilateral left-sided implants, and those who had just received their second CI (bilateral implantees) with an existing device positioned under the skin on the contralateral side. Figure 3 displays the cortical responses for each group. Visual examination of the bilateral group suggested that responses were present in both hemispheres, indicating that neither the internal nor external device obscured the detection of a response in those participants. Strong contralateral dominance in auditory-evoked activity is present in normal-hearing listeners when auditory input is presented to just one ear (e.g., Hirano et al., 1997), owing to the crossed innervation of the ascending auditory pathway. Likewise, in unilaterally implanted CI recipients with greater than 1.5 years of experience with their CIs, non-speech auditory input is preferentially processed in the contralateral hemisphere (Gordon et al., 2013; Luke et al., 2017; Naito et al., 1995). Furthermore, children implanted before the age of 3 years with greater than 10 years of unilateral CI use exhibit contralateral-dominant auditory-evoked responses (Jiwani et al., 2016). Our results extend these previous findings and demonstrate that the contralateral connections within the sensory auditory pathway are sustained during extended periods of auditory deprivation in adults and are acutely functional at the point at which auditory input is reinstated. While not explicitly explored as part of the hypothesis for this study, we observed greater auditory-evoked ipsilateral activity in the bilaterally implanted group. This is consistent with the observations of Jiwani et al. (2016), who reported greater bilateral auditory-evoked activation with unilateral stimulation of the second, newly implanted ear in a group of young children. These combined results suggest that long-term unilateral auditory stimulation (greater than 10 years) in children, and relatively shorter-term unilateral auditory stimulation (4 years, on average, in the current study) in adults, may promote the asymmetric strengthening of the contralateral pathway, resulting in more diffuse bilateral cortical activity following unilateral stimulation of the second implant.

Visual stimulation elicits the cross-modal activation of the auditory cortex contralateral to the side of implantation

Significant periods of sensory deprivation can cause cross-modal changes in the human cortex (Doucet et al., 2006; Luke et al., 2017), with the right auditory cortex being preferentially activated in response to visual stimuli in listeners without restored hearing (Bottari et al., 2014; Fine et al., 2005; Finney et al., 2003). Once hearing has been restored via cochlear implantation, evidence has been presented for a left hemisphere dominance in auditory cortex activation in response to visual stimuli (Chen et al., 2016). However, as with the lateralization of auditory-evoked responses, it is unknown if lateralized cross-modal responses take a length of time to develop or if responses are immediately apparent at the point at which auditory input is reinstated. Participants in the current study had an average of 29.6 years of non-normal hearing in the implanted ear, likely providing sufficient time for cortical plasticity to occur as a result of long-term hearing loss. Measuring cortical activation immediately after implantation provides little to no time for any further auditory-evoked plasticity changes to occur. We, therefore, have a cohort with which to investigate if lateralized cross-modal activity occurs following prolonged deafness. Significant cross-modal visual-evoked activation was present in the Heschl’s gyrus ROI that was contralateral to the implanted ear (β = 1.52, p = 0.01). There was no significant visual-evoked activation in any of the ipsilateral auditory ROIs (Table 3). These results suggest that for a naturalistic visual-speech stimulus, the side of implantation impacts the lateralization of cross-modal activity in the auditory cortex, with stronger activity observed in the contralateral hemisphere. This is consistent with results reported by Chen et al. (2016), who found lateralized visual-evoked responses in a group of CI recipients with greater than 1 year of CI experience. 
However, their findings of left-hemisphere dominance may actually be driven by contralateral dominance because 80% of those participants were right-implanted users. Our result illustrates, similar to the contralateral dominance for auditory-evoked responses, that contralateral cross-modal dominance of responses is apparent immediately after implantation for post-lingually deafened adult CI recipients who had ∼30 years, on average, of non-normal hearing in the implanted ear. This result also illustrates that although auditory pathway connections appear to be sustained during periods of sensory deprivation in adults, some degree of cross-modal plasticity is still evident in these listeners. The findings of the current study may suggest that an anatomically intact afferent pathway can still undergo cross-modal reorganization when deprived of its sensory input. During early auditory development, the brain exhibits vast neural connections, which are eventually pruned and shaped depending on the sensory input experienced by the brain (Quartz and Sejnowski, 1997). In instances where auditory sensory input is restricted or deprived, many extraneous neural pathways remain unpruned, which may supply functional connections between different sensory areas of the brain. This process, or a similar process later on in life, may explain the mechanism underlying cross-modal activity in the human brain (Sharma et al., 2007).
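One common way to summarize this kind of hemispheric asymmetry, though not a statistic used in this paper, is a laterality index over contralateral and ipsilateral response amplitudes:

```python
def laterality_index(contra, ipsi):
    """(C - I) / (C + I): +1 means fully contralateral, -1 fully ipsilateral.
    A hypothetical summary metric; assumes both amplitudes are positive."""
    total = contra + ipsi
    if total == 0:
        raise ValueError("undefined when both amplitudes are zero")
    return (contra - ipsi) / total

# Illustration with the HbO betas for Heschl's gyrus from Table 3
# (1.52 contralateral, 0.67 ipsilateral): a moderate contralateral bias.
li = laterality_index(1.52, 0.67)
```

Such an index is only meaningful when both inputs are positive response amplitudes; the negative betas seen in some ROIs here would need a different treatment.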
Table 3

Results of linear mixed effect model for visual-only stimuli in the Auditory ROIs

Fixed Effects
Hemisphere | Auditory ROI | Chroma | β | SE | t | p
Ipsilateral | Heschl’s gyrus | HbO | 0.67 | 0.57 | 1.16 | 0.248
Ipsilateral | Planum temporale | HbO | −0.14 | 0.57 | −0.24 | 0.815
Contralateral | Heschl’s gyrus | HbO | 1.52 | 0.57 | 2.65 | 0.010
Contralateral | Planum temporale | HbO | −1.08 | 0.57 | −1.89 | 0.063
Ipsilateral | Heschl’s gyrus | HbR | −0.66 | 0.57 | −1.15 | 0.254
Ipsilateral | Planum temporale | HbR | −0.15 | 0.57 | −0.25 | 0.801
Contralateral | Heschl’s gyrus | HbR | −0.69 | 0.57 | −1.21 | 0.231
Contralateral | Planum temporale | HbR | −0.11 | 0.57 | −0.19 | 0.852

Note. SE = standard error. SD = standard deviation. Bold text indicates significance at the p < 0.05 level.

Rather than Left or Right, Auditory ROIs are coded as either Contralateral or Ipsilateral to the implanted ear. Model notation: β ∼ −1 + Hemisphere:ROI:Chroma + (1|Subject).


Conclusion

This study utilized fNIRS, a light-based neuroimaging technique, to characterize the auditory- and visual-evoked activation patterns in newly implanted adult CI recipients to naturalistic speech stimuli. Activation was observed in the auditory cortex contralateral to the implanted ear, consistent with results from normal-hearing listeners. Cross-modal visual-evoked activity was observed in the auditory cortex contralateral to the implanted ear. These findings indicate that auditory pathway connections are sustained during periods of sensory deprivation in adults, and that typical cortical lateralization is observed immediately following the reintroduction of auditory sensory input.

Limitations of the study

The current study evaluated group-level auditory- and visual-evoked activation patterns in the auditory cortex of newly implanted CI recipients. It is important to note that individual subject-level factors, namely the duration of deafness (or the duration of non-normal hearing) prior to implantation, can impact the degree of auditory-speech evoked activation within the auditory cortex (Green et al., 2005). Green et al. (2005) found that longer durations of deafness in post-lingually deafened adults were associated with less auditory-speech evoked activation in bilateral auditory cortices. Lazard et al. (2013) also demonstrated less auditory-speech evoked activation in the left posterior superior temporal gyrus in post-lingually deafened adult CI recipients with prolonged durations of deafness. It is possible that individual differences in the duration of deafness could have impacted the degree of auditory-evoked activation in this group of CI participants, but the relatively small sample size in the current study precluded a thorough evaluation of the impact of duration of deafness, or other subject-level factors, on the observed cortical activation patterns. Future studies should evaluate the impact of individual differences, including the duration of deafness, age, age at onset of hearing loss, and duration of prior CI experience on auditory- and visual-evoked cortical activity. Another limitation to consider is the short time period between device activation and participation in this study, with as little as a few hours in some cases. Although speech recognition scores collected on the same day suggest that the majority of the participants were perceiving and understanding speech signals to some degree, it is not known exactly how many participants perceived “speech” during the auditory-only trials and how many only perceived “noise” during those trials. In many cases, a speech percept takes weeks to months to acquire following CI activation. 
It is possible that some participants did not perceive any speech per se during the auditory-only trials. A perception of noise rather than speech may have contributed to the lack of inferior frontal gyrus activation in this group of participants as this structure is implicated in speech intelligibility scores (Eisner et al., 2010; Stoppelman et al., 2013).

STAR★Methods

Key resources table

Resource availability

Lead contact

Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Maureen J. Shader (mshader@purdue.edu).

Materials availability

This study did not generate new unique reagents.

Experimental model and subject details

This study was approved by and conducted in accordance with the ethical standards of the Royal Victorian Eye and Ear Hospital human ethics committee, and all participants provided their written informed consent prior to testing. Twelve adult CI recipients ranging in age from 38 to 78 years (mean age = 62.3 ± 11.8 years) volunteered for this study. Participant demographic information is reported in Table 1. The data presented in this study are a cross-sectional sample collected immediately following implant switch-on from participants enrolled in a longitudinal research study.

Method details

Stimuli

Stimuli were identical to those used and described in Shader et al. (2021). Thirty-six continuous speech segments were extracted from the children’s story titled Mrs. Tittlemouse by Beatrix Potter; the story was divided into the individual segments prior to recording the audio-only and visual-only stimuli. The speaker was a female Australian-English native speaker positioned in front of a neutral background (Figure 4). Individual stimulus segments were between 10 and 16 sec in duration, with an average duration of 12.5 sec. The visual-only stimuli also contained 1 sec of still video before and after the speaking portion of the video. The root-mean-square (RMS) intensity was equalized across all audio-speech segments, with the final auditory-only stimuli presented at 55 dBA.
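RMS equalization of audio segments can be sketched as below. This is a generic illustration rather than the authors' exact processing, and the target level is an arbitrary linear value, since the absolute presentation level (55 dBA) is set at the loudspeaker.

```python
import numpy as np

def equalize_rms(segments, target_rms=0.05):
    """Scale each audio segment (array of samples) so all share the same RMS level."""
    out = []
    for seg in segments:
        rms = np.sqrt(np.mean(np.square(seg)))   # current root-mean-square level
        out.append(seg * (target_rms / rms))     # linear gain to reach the target
    return out

# Two toy "segments" at very different levels end up with identical RMS.
segs = [np.sin(np.linspace(0, 100, 16000)) * a for a in (0.1, 0.7)]
eq = equalize_rms(segs)
```

Matching RMS across segments ensures loudness differences between trials do not confound the evoked-response comparison.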
Figure 4

Experimental design

(A) Schematic of the experimental protocol. An example screenshot of the monitor during the experiment is shown for a visual-only trial, auditory-only trial, and control trial. An example of one of the story summaries is also shown. (B) Montage displaying all source and detector locations. Source locations are shown in red and detector locations are shown in black. Channels are shown as white lines with an orange marker representing the midpoint. (C) Group-level results for oxyhemoglobin for the same experimental paradigm from a group of normal-hearing listeners who were presented auditory-only stimuli to both ears. Results are re-plotted as projections onto the brain surface from Shader et al. (2021).


Procedure

The procedure used in the current study is consistent with the protocol reported in Shader et al. (2021). Research participants were seated comfortably in an armchair at a distance of 1.5m from a monitor in a dimly lit sound-attenuating booth. A fixation cross was displayed on the monitor whenever the visual-only stimulus was not being presented. A loudspeaker was located just above the monitor and was used for free-field presentation of the auditory-only stimuli. Participants listened to the auditory-only stimuli unilaterally using only their newly implanted CI device, with any other hearing device removed in the opposite ear (e.g., hearing aid or CI). In cases where substantial hearing was present in the opposite ear even after the hearing device was removed, an earplug was placed in that opposite ear to isolate auditory responses using only the newly implanted CI device. Participants used their personal CI sound processors during the experiment, which were set to the everyday listening program most recently configured by their clinical audiologist. Eighteen individual trials were presented for each of the auditory- and visual-only speech segments. Ten control trials consisting of 10 sec of silence were also presented randomly throughout the experiment. This resulted in a total of 46 trials per participant. The presentation of each story segment was randomly selected to be either audio-only or visual-only, but the story segments were presented in chronological order ensuring continuity of the storyline. A block design was utilized with a random interval between each segment of 15-30 sec, which minimizes the contribution of Mayer waves to the averaged fNIRS signal (Luke et al., 2021b). In an effort to maintain attention on the task by providing contextual story cues, three text annotations were presented on the screen for 10 sec at regularly spaced intervals throughout the experiment; each summary provided an overview of the story progress. 
Participants were instructed to follow along with the story to the best of their ability. Total testing time was approximately 35 min.
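The trial structure described above (18 audio-only and 18 visual-only story segments kept in chronological order, 10 randomly interleaved silent controls, and a random 15-30 sec interval after each trial) can be sketched as follows. This is a minimal illustration, not the authors' presentation software; the function name and trial labels are hypothetical.

```python
import random

def build_trial_schedule(n_story_segments=36, n_control=10, seed=0):
    """Sketch of the randomization scheme: story segments stay in
    chronological order, each segment is randomly assigned to the
    audio-only or visual-only condition (18 each), and silent control
    trials are interleaved at random positions."""
    rng = random.Random(seed)
    # Assign exactly half the segments to each modality, order randomized.
    conditions = (["audio"] * (n_story_segments // 2)
                  + ["visual"] * (n_story_segments // 2))
    rng.shuffle(conditions)
    # Segments themselves remain in chronological (story) order.
    trials = [("segment_%02d" % i, cond)
              for i, cond in enumerate(conditions, start=1)]
    # Interleave silent control trials at random positions.
    for _ in range(n_control):
        trials.insert(rng.randrange(len(trials) + 1), ("silence", "control"))
    # Random inter-stimulus interval of 15-30 sec (block design).
    return [(name, cond, rng.uniform(15.0, 30.0)) for name, cond in trials]

schedule = build_trial_schedule()  # 46 trials in total
```

Randomizing only the condition assignment, not the segment order, is what lets the storyline remain intelligible while still preventing participants from predicting the upcoming modality.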

fNIRS acquisition

Data were acquired with a continuous-wave NIRScout device (NIRScout, NIRx Medical Technologies, LLC) using 16 LED sources (each emitting near-infrared light at two wavelengths, 760 and 850 nm) and 16 detectors. In addition, eight short-channel detectors were included in the montage. The experimental montage (shown in Figure 4, Panel B) designated 44 source-detector pairs separated by approximately 3 cm. Sources were placed at locations AF7, F7, F3, AFF5h, FC5, T7, CCP5h, CP5, P3, O1, POz, O2, P4, CP6, CCP6h, and T8. The eight short-channel detectors were placed on source locations F7, AF7, T7, CP5, O1, O2, CP6, and T8. Detectors were placed at locations F5, FFC5h, C5, TP7, CP3, CPP5h, P5, PO3, Oz, PO4, P6, CP4, CPP6h, CP4, TP8, and C6.

Region of interest selection

The seven ROIs applied to the data were derived a priori based on auditory- and visual-evoked activation patterns using the same experimental stimuli in a group of normal-hearing participants. In-depth descriptions of the seven regions of interest are reported in Shader et al. (2021); results of that experiment are also re-plotted in panel C of Figure 4. Briefly, the Left Inferior Frontal gyrus region of interest was selected to capture activity in the frontal lobe related to speech and language processing (Okada et al., 2010). The Left and Right Heschl's gyrus regions were selected to capture the rostral aspect of the temporal lobes, including the superficial area surrounding Heschl's gyrus. The Left and Right Planum Temporale regions were selected to capture the caudal aspects of the temporal lobe, including the planum temporale, also known as Wernicke's area. Two visual regions of interest were selected from the occipital lobe: one capturing the superior occipital gyrus and the cuneus, and one capturing the middle occipital gyrus.

Quantification and statistical analysis

Signal processing was performed using open-source toolboxes: MNE (Gramfort et al., 2013, 2014), Nilearn (Abraham et al., 2014; Thirion et al., 2021), and MNE-NIRS (Luke et al., 2021a). Both a qualitative and a quantitative analysis were performed on the data. The analysis performed on this dataset mirrors that in Shader et al. (2021), and is reproduced here for completeness per the recommendations for best practices in fNIRS publications (Yücel et al., 2021). Grand average waveforms were produced by converting the raw data to optical density, after which data quality was assessed using the scalp coupling index method (Pollonini et al., 2014). Optodes with a coupling index below 0.8 were removed from further analysis. Temporal Derivative Distribution Repair was then applied to the optical density signal (Fishburn et al., 2019). Systemic signal correction, which utilizes information from the signals collected from the short-separation channels, was then applied to the long-channel data (Saager and Berger, 2005). Next, the modified Beer-Lambert law was used to transform the data to oxy- and deoxyhemoglobin estimates. To further remove signal contributions from heart-rate components and slow signal drifts, a band-pass filter with a pass band of 0.02-0.4 Hz was applied to the data. The negative correlation enhancement approach described by Cui et al. (2010) was then applied to the hemoglobin signals. The data were then epoched over a time interval ranging from 8 sec before stimulus onset to 30 sec after stimulus onset, and epochs with a peak-to-peak value above 100 μM were excluded (8.3% of total epochs).

Quantitative statistical analyses were also performed on the data. The oxy- and deoxyhemoglobin signals were fit with a general linear model (GLM), which included regressors created by convolving a boxcar function for each condition, with duration equal to the average stimulus length of 12.5 sec, with an SPM hemodynamic response model (Penny et al., 2011).
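The GLM regressor construction described above (a boxcar per stimulus, convolved with an SPM-style hemodynamic response function) can be sketched as follows. The double-gamma HRF parameters below are the common SPM defaults and are an assumption for illustration, not values taken from the paper's code.

```python
import numpy as np
from scipy.stats import gamma

def spm_hrf(tr, duration=32.0):
    """Canonical SPM-style double-gamma HRF sampled every `tr` seconds
    (standard default shape parameters; assumed, not from the paper)."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)         # positive response, peaking around 5 s
    undershoot = gamma.pdf(t, 16)  # late post-stimulus undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()

def condition_regressor(onsets, stim_dur, n_samples, tr):
    """Boxcar of length `stim_dur` (here 12.5 sec, the mean stimulus
    length) at each onset, convolved with the HRF."""
    boxcar = np.zeros(n_samples)
    for onset in onsets:
        start = int(round(onset / tr))
        stop = min(n_samples, start + int(round(stim_dur / tr)))
        boxcar[start:stop] = 1.0
    # Causal convolution, truncated to the recording length.
    return np.convolve(boxcar, spm_hrf(tr))[:n_samples]

# Example: two 12.5-sec stimuli at t = 20 s and t = 60 s, 0.5-s sampling.
reg = condition_regressor([20.0, 60.0], 12.5, n_samples=200, tr=0.5)
```

Because the convolution is causal, the modeled response is zero before the first onset and peaks several seconds after it, reflecting hemodynamic lag.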
To account for systemic contributions in the acquired measurements, the individual short-separation channel data and the mean signal across the short-separation channels were included as additional regressors in the GLMs (Santosa et al., 2020; Tachtsidis and Scholkmann, 2016). Channel-level data were then combined into regions of interest based on a priori optode selection (Shader et al., 2021) using a weighted average of the resulting beta values. In two instances the CI external coil sat directly under a source or detector optode; in these cases, the affected channels were removed from the data prior to the GLM analysis. Statistical analyses were performed on the GLM beta values using RStudio (RStudio Team, 2015) and the lme4 package (Bates et al., 2014). Linear mixed-effects modelling was performed with fixed effects of stimulus condition, region of interest, and chromophore, and a random effect of participant. Statistical significance was determined using an alpha level of 0.05.
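Combining channel-level betas into an ROI estimate via a weighted average can be sketched as below. The inverse-variance weighting (weighting each channel's beta by the reciprocal of its squared standard error) is an assumption for illustration; the paper does not specify the exact weights, and the function name is hypothetical.

```python
import numpy as np

def roi_beta(betas, std_errors):
    """Combine channel-level GLM beta estimates into one ROI estimate.
    Inverse-variance weighting is assumed here; channels excluded from
    analysis (e.g., those under the CI coil) are passed as NaN and
    simply ignored."""
    betas = np.asarray(betas, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    keep = ~np.isnan(betas)                 # drop excluded channels
    w = 1.0 / se[keep] ** 2                 # precision weights
    return float(np.sum(w * betas[keep]) / np.sum(w))

# Example: three channels in an ROI, one removed (NaN) under the coil.
est = roi_beta([0.8, 1.2, np.nan], [0.1, 0.2, 0.1])  # → 0.88
```

The estimate is pulled toward the more precisely measured channel (smaller standard error), and removing a channel under the coil reduces the ROI to the remaining valid channels rather than biasing the average.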
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
| --- | --- | --- |
| Deposited data | | |
| Human data | This paper | https://doi.org/10.17632/377b4ff8p6.1 |
| Software and algorithms | | |
| MNE-Python v0.24.0 | MNE Developers | https://mne.tools/ |
| MNE-NIRS v0.0.6 | MNE Developers | https://mne.tools/mne-nirs |
| Nilearn | Nilearn Developers | https://nilearn.github.io/stable/index.html |
References

1.  Direct characterization and removal of interfering absorption trends in two-layer turbid media.

Authors:  Rolf B Saager; Andrew J Berger
Journal:  J Opt Soc Am A Opt Image Sci Vis       Date:  2005-09       Impact factor: 2.129

2.  Comparing the effects of auditory deprivation and sign language within the auditory and visual cortex.

Authors:  Ione Fine; Eva M Finney; Geoffrey M Boynton; Karen R Dobkins
Journal:  J Cogn Neurosci       Date:  2005-10       Impact factor: 3.225

3.  Deprivation-induced cortical reorganization in children with cochlear implants.

Authors:  Anu Sharma; Phillip M Gilley; Michael F Dorman; Robert Baldwin
Journal:  Int J Audiol       Date:  2007-09       Impact factor: 2.117

4.  Source analysis of auditory steady-state responses in acoustic and electric hearing.

Authors:  Robert Luke; Astrid De Vos; Jan Wouters
Journal:  Neuroimage       Date:  2016-11-25       Impact factor: 6.556

5.  Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

Authors:  Theodore J Huppert
Journal:  Neurophotonics       Date:  2016-03-02       Impact factor: 3.593

6.  Cross-modal reorganization and speech perception in cochlear implant users.

Authors:  M E Doucet; F Bergeron; M Lassonde; P Ferron; F Lepore
Journal:  Brain       Date:  2006-09-26       Impact factor: 13.501

7.  MNE software for processing MEG and EEG data.

Authors:  Alexandre Gramfort; Martin Luessi; Eric Larson; Denis A Engemann; Daniel Strohmeier; Christian Brodbeck; Lauri Parkkonen; Matti S Hämäläinen
Journal:  Neuroimage       Date:  2013-10-24       Impact factor: 6.556

8.  Machine learning for neuroimaging with scikit-learn.

Authors:  Alexandre Abraham; Fabian Pedregosa; Michael Eickenberg; Philippe Gervais; Andreas Mueller; Jean Kossaifi; Alexandre Gramfort; Bertrand Thirion; Gaël Varoquaux
Journal:  Front Neuroinform       Date:  2014-02-21       Impact factor: 4.081

9.  Best practices for fNIRS publications.

Authors:  Meryem A Yücel; Alexander V Lühmann; Felix Scholkmann; Judit Gervain; Ippeita Dan; Hasan Ayaz; David Boas; Robert J Cooper; Joseph Culver; Clare E Elwell; Adam Eggebrecht; Maria A Franceschini; Christophe Grova; Fumitaka Homae; Frédéric Lesage; Hellmuth Obrig; Ilias Tachtsidis; Sungho Tak; Yunjie Tong; Alessandro Torricelli; Heidrun Wabnitz; Martin Wolf
Journal:  Neurophotonics       Date:  2021-01-07       Impact factor: 3.593

10.  Do not throw out the baby with the bath water: choosing an effective baseline for a functional localizer of speech processing.

Authors:  Nadav Stoppelman; Tamar Harpaz; Michal Ben-Shachar
Journal:  Brain Behav       Date:  2013-02-17       Impact factor: 2.708
