
Speech motor learning in profoundly deaf adults.

Sazzad M. Nasir, David J. Ostry

Abstract

Speech production, like other sensorimotor behaviors, relies on multiple sensory inputs: audition, proprioceptive inputs from muscle spindles and cutaneous inputs from mechanoreceptors in the skin and soft tissues of the vocal tract. However, the capacity for intelligible speech by deaf speakers suggests that somatosensory input alone may contribute to speech motor control and perhaps even to speech learning. We assessed speech motor learning in cochlear implant recipients who were tested with their implants turned off. A robotic device was used to alter somatosensory feedback by displacing the jaw during speech. We found that implant subjects progressively adapted to the mechanical perturbation with training. Moreover, the corrections that we observed were for movement deviations that were exceedingly small, on the order of millimeters, indicating that speakers have precise somatosensory expectations. Speech motor learning is substantially dependent on somatosensory input.

Year:  2008        PMID: 18794839      PMCID: PMC2601702          DOI: 10.1038/nn.2193

Source DB:  PubMed          Journal:  Nat Neurosci        ISSN: 1097-6256            Impact factor:   24.884


One of the puzzles of human language is that individuals who become deaf as adults remain capable of producing quite intelligible speech for many years, in the absence of auditory input1–3. This ability suggests that speech production is substantially dependent on non-auditory sensory information, and in particular, afferent input from the somatosensory system. Previous studies that have sought to identify a somatosensory basis to speech motor function have done so in the presence of auditory inputs4–12, and hence any effects that were observed may have been due to the presence of the auditory signal. Here we show that somatosensory input on its own may underlie speech production and speech motor learning. We do so by studying speech learning in cochlear implant recipients whom we tested with their implants turned off. We assessed speech learning by using a robotic device that applied forces which displaced the jaw and altered somatosensory feedback during speech. We found that even in the absence of auditory input, implant subjects progressively corrected their speech movements to offset errors in the motion path of the jaw. Indeed, the levels of adaptation that we observed were comparable for implant subjects and normal hearing control subjects. This indicates that speech learning is substantially dependent on somatosensory feedback. Speech production must be understood both as an auditory13–15 and a somatosensory task11, 12, 16.

RESULTS

Five post-lingually deaf adults participated in the study (Figure 1a). These subjects had profound hearing loss in both ears (average hearing loss, 101 dB). All but one had received a cochlear implant (Figure 1b; gray). The hearing loss for the subjects with cochlear implants was sensorineural in origin; the remaining subject, who wore a hearing aid, had a mixed sensorineural and conductive hearing loss (see Methods). Six age-matched control subjects (average hearing loss, 13 dB) had hearing typical of their age range (Figure 1b; black). During the experimental session, a robotic device applied a mechanical load to the jaw as the subject repeated aloud test-utterances that were chosen randomly from a set of four (saw, say, sass, sane) and displayed on a computer monitor. The mechanical load was velocity-dependent and acted to displace the jaw in a protrusion direction, altering somatosensory but not auditory feedback9, 17, 18. Subjects were trained over the course of three hundred utterances. Sensorimotor learning was evaluated using a measure of movement curvature to quantify adaptation. Curvature was measured at the point of maximum jaw lowering velocity and was calculated as jaw protrusion divided by the magnitude of jaw elevation at this point. The hearing-impaired subjects were trained with their implant or hearing aid turned off, while control subjects had full hearing during training.
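The curvature measure defined above is simple to state in code. The sketch below assumes jaw protrusion, elevation, and vertical velocity sampled as aligned arrays over one opening movement; the function name and argument names are illustrative, not the authors' implementation:

```python
import numpy as np

def movement_curvature(protrusion, elevation, velocity):
    """Curvature of one jaw movement, following the paper's definition:
    jaw protrusion divided by the magnitude of jaw elevation, both taken
    at the sample of peak (absolute) vertical jaw velocity.
    Argument names and array layout are assumptions for illustration."""
    i = np.argmax(np.abs(velocity))      # sample of maximum lowering speed
    return protrusion[i] / abs(elevation[i])
```

With a straight, unperturbed path, protrusion at peak velocity is near zero and curvature is close to 0; a load that protrudes the jaw raises the value.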
Figure 1

Experimental set-up and audiogram

a. A robotic device delivered a velocity-dependent load to the jaw. b. Pure-tone hearing thresholds. Implant subjects (gray) had a severe to profound hearing loss. Control subjects (black) had hearing levels typical for their age.

Figures 2a and 2b show a sagittal plane view of representative jaw trajectories in speech for an implant subject and a normal hearing control subject. In both cases, movements are straight in the absence of load (null condition: cyan); the jaw is displaced in a protrusion direction when the load is first applied (initial-exposure: red); curvature decreases with training (end-training: black); there is a small after-effect following unexpected removal of load (after-effect: gray). Movements for the implant subject under no load conditions are similar regardless of whether the implant is on or off (implant-on: gold, implant-off: cyan). Figure 3a shows movement curvature measures for an implant subject for individual trials over the entire course of the experiment. As in the movements shown in Figure 2a, values of curvature were low in the null condition, increased with the introduction of load and then progressively decreased with training.
Figure 2

Sagittal plane jaw movement paths

Speech motor learning in implant recipients, who were trained with their implants turned off, was similar to that of normal hearing control subjects. a. For implant subjects, jaw paths were straight in the absence of load (gold: implant turned on; cyan: implant turned off). The jaw was deflected in the protrusion direction when the load came on (red). After training, movement curvature decreased (black). When the load was switched off unexpectedly at the end of training, there was a small after-effect (gray). b. Control subjects show a similar pattern. Color codes are the same as those used in panel a. In all cases, individual movements are shown.

Figure 3

Adaptation patterns in implant and control subjects

a. Scatter plot showing learning for a representative implant subject. The ordinate depicts movement curvature; the abscissa gives trial number. Curvature is low during null trials (gold and cyan), increases with the introduction of the load, and then decreases over the course of training (red). A small after-effect is seen when the load is switched off (gray). b. Significant adaptation was observed in all implant subjects. The figure shows mean curvature (± SE) during various phases of the experiment. Curvature increases with the introduction of load (red) and decreases reliably with training (black). Stars (*) designate statistically reliable adaptation, p < 0.01. The subject with mixed hearing loss is shown with a gold star. c. Significant adaptation is also observed in 4 out of 6 normal hearing control subjects.

Kinematic and acoustical tests of adaptation were conducted quantitatively on a per-subject basis using ANOVA followed by Tukey HSD post-hoc tests. Subjects showed similar kinematic patterns in both the implant and control groups. In the implant group, adaptation was observed in all five subjects as indicated by a significant decrease in curvature over the training period (Figure 3b, p < 0.01 for all subjects). Only four of six control subjects adapted to the load (Figure 3c, p < 0.01). The amount of adaptation was assessed on a per-subject basis by computing the reduction in curvature over the course of training as a proportion of the curvature due to the introduction of load. A value of 1.0 indicates complete adaptation. For the implant group, adaptation, averaged across subjects and test words, was 0.20 ± 0.06 (mean ± SE), and for the control group, adaptation was 0.19 ± 0.04. Adaptation was thus comparable for the two groups (p > 0.93, t-test), suggesting that for the set of four utterances that we have tested, auditory feedback is not necessary for adaptation to load. Subjects’ response to the sudden removal of the load following training was variable. Two of the four control subjects who adapted to load showed reliable after-effects by post-hoc tests, as did three of five implant subjects (p < 0.01). In individuals with normal hearing, the adaptation observed here could have been driven by somatosensory or auditory feedback, or the two in combination: somatosensory feedback is altered because the load alters the movement path of the jaw and changes somatosensory input; auditory feedback may also change because the load might affect speech acoustics by altering the shape of the vocal tract. Since subjects in the implant group adapted with the implant turned off, auditory input does not seem to be necessary for speech learning, at least in post-lingually deaf adults.
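The adaptation index used here follows directly from its verbal definition: the reduction in curvature achieved over training, expressed as a proportion of the curvature increase caused by the load. A minimal sketch, with variable names assumed:

```python
def adaptation_index(null_curv, initial_curv, final_curv):
    """Proportion of the load-induced curvature increase removed by
    training: 1.0 means curvature returned fully to the null-condition
    baseline, 0.0 means no adaptation. A sketch of the stated measure,
    not the authors' code."""
    load_effect = initial_curv - null_curv   # curvature due to the load
    reduction = initial_curv - final_curv    # improvement with training
    return reduction / load_effect
```

For example, a subject whose curvature rose from 0.10 (null) to 0.30 at initial exposure and fell to 0.26 by the end of training would score 0.20, matching the group mean reported for the implant subjects.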
In order to evaluate the presence of auditory cues for adaptation that might have been used by the normal hearing control subjects, acoustical changes in the speech signal were assessed over the course of training. Acoustical effects related to the application of load were evaluated by computing the first and second formant frequencies of the vowel immediately following the initial consonant s in each of the test-utterances. Figure 4a shows an example of the raw acoustical signal for the test utterance saw and the associated first and second formants of the speech spectrogram. We chose these particular vowels for acoustical analysis because their production coincided with the opening phase of jaw movement, during which the force field was maximum. We reasoned that the load’s effect, if any, on the speech signal should be most evident at this point. The acoustical data included in the analysis were from only those subjects who adapted to the load.
Figure 4

There were no systematic acoustical effects associated with force field learning

a. The top panel shows the raw acoustical waveform for the word saw. The first two formants of the corresponding spectrogram are shown in the bottom panel. b. The load had little effect on the acoustics of the implanted subjects. The first formants of vowels were computed under no load conditions (gold and cyan), at the introduction of the load (red) and at the end of training (black). The second formants are shown in pale colours, with the frequency scale shown on the right. c. The load had little effect on the acoustics of the normally hearing control subjects. The first and the second formants of vowels were computed under no load conditions (cyan), at the introduction of the load (red) and at the end of training (black). In all cases ± 1 SE is shown.

Figures 4b and 4c summarize the acoustical findings for both implant and control subjects. Null condition blocks are shown in cyan for both the control group and for the implant group with their implant turned off. Null condition blocks with the implant turned on are shown in gold. The red and black bars give acoustical results during the initial and final training blocks with the force field on. In all cases, the first formant is displayed in solid colours and the second formant is in pale colours. Acoustical effects were assessed quantitatively on both a between-subjects basis and for each subject separately. We focused on potential effects of the load’s introduction, possible changes with learning, and changes due to the unexpected removal of load in the after-effect trials. A repeated measures ANOVA produced few statistically reliable acoustical effects over the course of learning. As expected, the acoustical patterns differed for the different test words (p < 0.01 for both formants). However, the acoustical effects were similar for implant and control subjects (p > 0.69 and p > 0.58 for the first and second formants, respectively). In one case, for the utterance sass, there was a reliable increase in the first formant frequency over the course of training (p < 0.05). However, we found no other statistically reliable differences in either the first or second formant frequency with the introduction of load, from the start to the end of training, or upon sudden removal of load. We repeated these analyses for the implant and control groups separately and obtained no reliable differences in formant frequencies between these various phases of the experiment. We also assessed possible acoustical and kinematic differences in the implant group in the null condition with and without the implant on. For this test we used the last half of the no-load trials with the implant on and an equal number of trials at the end of the no-load phase with the implant off.
Typically this amounted to 40 or 50 utterances in each condition. A repeated measures ANOVA (across subjects and test utterances) found no differences in either the first or the second formant frequencies (p > 0.89 and p > 0.93, respectively) or in movement curvature (p > 0.97). This indicates that, before the force field was introduced, any systematic changes in production patterns due to switching off the implant had been eliminated. We used tests of correlation to assess the extent to which changes in movement curvature over the course of learning were related to first and second formant values. Tests were conducted for each subject using the mean curvature and the mean formant frequency in each block over the course of the experiment. Separate correlations were computed for the first and second formant frequencies. There was little evidence that changes in movement curvature with learning were mirrored in the acoustical domain. In the case of the first formant, hearing-impaired subjects showed a mean correlation of −0.12 ± 0.10 (mean ± SE) between formant value and curvature. The corresponding correlations for the control group were −0.01 ± 0.01. For the second formant these same correlations yielded 0.02 ± 0.09 and 0.002 ± 0.02, respectively. Thus, there is no indication that changes in movement curvature with learning had any effect on the formant frequencies of the associated speech.

The adaptation seen in implant subjects may have been due in part to changes in somatosensory and/or kinematic precision that have taken place to compensate for the auditory loss. As already noted, all of our implant subjects showed statistically reliable adaptation whereas only two-thirds of the normal hearing control subjects (4 out of 6) had similar patterns. We looked for differences in the kinematic and acoustical characteristics of the two groups under null conditions. We examined the first two formants and associated values of jaw protrusion and elevation.
Figure 5a shows null condition values of the first two formants; Figure 5b shows the corresponding values of jaw protrusion and elevation (control: magenta; hearing-off: cyan; hearing-on: black). The individual data points are a representative selection of values across all subjects and utterances.
Figure 5

Kinematic and acoustical variability for implant (with implant on and off) and control subjects

a. Acoustical variability is similar for implant and control subjects. Representative examples of first and second formant frequencies are shown for no load trials. b. Both subject groups have comparable variation in jaw kinematics. Representative examples of vertical and horizontal jaw position during the null trials. Position values are computed at the point at which formant values are evaluated. c. Implant and control subjects have similar coefficients of variation (CV) for formant frequencies in the absence of load. d. Corresponding CVs for jaw position.

Using ANOVA, we tested for differences between implant and control subjects in acoustical and kinematic parameters in the absence of load (implant turned off). We found no systematic differences between these groups in first (p > 0.95) or second formant frequency (p > 0.97) or in jaw protrusion (p > 0.53) or elevation (p > 0.11). We further assessed possible differences between implant and control subjects in acoustical and kinematic precision by computing their respective coefficients of variation (CV), which are measures of variability normalized by the mean. Figures 5c and 5d plot the CVs of the first two formants and the CVs of protrusion and elevation. The individual data points give null condition values of the CV for each utterance and each subject separately. No differences in CV between implant and control subjects were found for either the first or the second formant frequency (p > 0.68 and p > 0.31, respectively) or for horizontal jaw position (p > 0.95). For vertical jaw position, control subjects were found to have a marginally greater CV (p = 0.06).
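The coefficient of variation used for this precision comparison is simply the standard deviation normalised by the mean. A sketch, assuming a sample (ddof = 1) standard deviation; the paper does not state which estimator was used:

```python
import numpy as np

def coefficient_of_variation(x):
    """CV: sample standard deviation divided by the mean, the
    dimensionless variability measure plotted in Figures 5c and 5d."""
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x)
```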

DISCUSSION

We have examined speech motor learning in post-lingually deaf adults who were tested with their implants turned off. Implant subjects showed significant adaptation, comparable to that observed in control subjects, to a mechanical load that acted to displace the jaw and alter somatosensory feedback. Neither group of subjects displayed measurable acoustical change as a consequence of the load. The data are consistent with the idea that speech motor learning is reliant upon somatosensory feedback and that even subtle changes to movement prompt corrective adjustments. The somatosensory guidance of speech movements by deaf speakers may underlie the capacity for intelligible speech following hearing loss. It is important to consider an alternative possibility: that, in the absence of auditory input, deaf individuals speak intelligibly because they use stored motor programs, in effect a sequence of motor commands, which are executed without sensory feedback. The present findings indicate that speech trajectory representations cannot be encoded just as motor commands, or else we would not observe adaptation to mechanical perturbations. The compensation observed here cannot be acoustically driven, since little acoustic feedback is available to regulate the adaptation. Accordingly, compensation in the present study must involve a somatosensory trajectory representation and somatosensory feedback. Moreover, both are likely to be used on a routine basis in the production of speech and speech motor learning. The adaptation shown by the implant group may reflect a heightened sensitivity to somatosensory input as a consequence of hearing loss, but it might also reflect the normal role of somatosensory inputs in determining speech movements. Our data provide some support for both possibilities. The fact that the compensation observed here is similar for implant and control subjects suggests that the implant group is no more sensitive to somatosensory change than subjects with normal hearing.
However, all subjects in the implant group show adaptation in comparison to the more typical 2/3 proportion in the control group11, 12, 19. This difference would argue in favour of the idea that somatosensory sensitivity is improved in at least some individuals with late-onset hearing loss. We have been able to dissociate the role of auditory and somatosensory feedback by assessing speech learning in deaf adults who receive no auditory information during speech. In other studies, and also here, we achieved a comparable dissociation by applying loads that alter jaw movement and hence change somatosensory feedback without any perceptible change to the acoustics11. The small acoustical effects are presumably due to the non-rigid coupling between the jaw and the acoustically critical tongue surface. Moreover the perturbations are quite small and change the length of the vocal tract by millimetres at most. The expected acoustical effect is therefore rather limited. The degree to which subjects compensate for load is comparable in implant subjects and in age-matched controls. Adaptation was incomplete in both cases; on average there is about a 20% reduction in movement error over the course of training. However, partial adaptation is typical of studies of speech motor learning, both with mechanical loads and altered acoustical feedback11–15, 19 and may reflect the imprecision of articulatory targets and the possibility for inter-articulator trade-offs in the achievement of auditory goals. In studies of speech motor learning with mechanical loads there is typically somewhat greater adaptation11, 12. The age of subjects may be a determining factor since our subjects were considerably older than in previous studies. This may have contributed to a greater tolerance for movement errors and, hence, reduced adaptation. As already mentioned, only 2/3 of our control subjects showed evidence of adaptation. 
Indeed adaptation rates in studies of altered auditory feedback are in a similar range19. Although this remains to be tested, the observed adaptation rates may reflect an individual’s reliance on auditory versus somatosensory feedback. Subjects in the present study who fail to adapt may rely less on somatosensory function and more on auditory feedback, whereas subjects who fail to adapt in work on altered auditory feedback may be more reliant on somatosensory function and less on auditory feedback19. We should comment on the possibility of auditory feedback in subjects in our implant group. Auditory feedback ordinarily reaches the cochlea on the basis of air conducted and also bone conducted signals. The consensus is that the basilar membrane simply sums them20, 21. Individuals who have a profound sensorineural hearing loss, such as the four implant recipients tested in this experiment, should not hear signals that reach the cochlea, regardless of whether those signals arrive by air or bone conduction. The fifth subject in the implant group, who had a mixed hearing loss with both sensorineural and conductive components, may receive some low frequency bone conducted auditory input when he talks, but given his significant sensorineural hearing loss he is at best likely to experience a much attenuated signal22. It is worth noting that this subject’s speech adaptation patterns are similar to those of the other subjects in the implant group and likewise to control subjects with normal hearing. We believe that the similarity of speech learning over the spectrum of hearing loss underscores the conclusion that speech production and speech motor learning are not strictly tied to auditory input. The present finding that implant and control subjects achieved a comparable level of adaptation bears on multisensory integration in speech. Speech production typically involves integration of auditory and somatosensory inputs.
In subjects with normal hearing, inputs from each modality contribute to the error information that drives adaptation. The simplest possibility is that the nervous system linearly sums error information to achieve a composite measure of total sensory error23, 24. For implant subjects, particularly in the context of the present experiment, where testing occurs shortly after the implant is turned off, we see that subjects can rapidly place reliance on somatosensory input to achieve adaptation and can seemingly discount the auditory channel. The weighting of sensory inputs is not fixed and indeed it seems possible to quickly alter the weighting if needed.
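The linear-summation idea can be made concrete. The weighting scheme below is purely illustrative (the paper does not specify one); it only shows how driving the auditory weight to zero, as when an implant is switched off, leaves a purely somatosensory error signal to guide adaptation:

```python
def combined_error(e_somato, e_auditory, w_somato=0.5):
    """Hypothetical linear combination of modality-specific errors into a
    single learning signal. With the implant off, w_somato -> 1.0 and the
    auditory channel is effectively discounted. Weights and the equal-split
    default are assumptions, not values from the paper."""
    w_auditory = 1.0 - w_somato
    return w_somato * e_somato + w_auditory * e_auditory
```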

METHODS

Subjects and tasks

We tested eleven subjects in total (see Figure 1b): five of them (age 63.8 ± 8.5 years; two females and three males) had an average hearing loss of 101 dB for air conducted sound, and the other six age-matched control subjects (age 55.6 ± 4.7 years; two females and four males) had normal hearing (average loss of 13 dB). All of the hearing impaired subjects had post-lingually acquired deafness and had lost their hearing gradually. All but one had received a cochlear implant in the ear with the worse hearing. The one remaining subject wore a hearing aid. The onset of hearing loss ranged from age 6 to 56, with a mean onset age of 33.2. On average, subjects had received their implant 2.5 years prior to participating in this study. The Institutional Review Board of McGill University approved the experimental protocol. We evaluated subjects in the implant group for the possibility of conductive hearing loss (failure of signal transmission through the tympanic membrane or middle ear) by assessing auditory thresholds to bone conducted sound and by tympanometry. We found no evidence of a conductive hearing loss in any of the four subjects with cochlear implants. In other words, their deafness was entirely sensorineural in origin. For these subjects, bone conduction auditory thresholds measured at frequencies from 250 Hz to 4000 Hz showed a hearing loss that exceeded the limits of the audiometer for bone conduction testing (in the range of 65 dB hearing loss to a bone conducted signal). The subject who used a hearing aid had a mixed hearing loss with both a conductive and a sensorineural component. This particular subject was profoundly deaf to air conducted sound. His hearing loss for the right ear averaged 105 dB. His left ear hearing loss was 96 dB. His bone conduction thresholds measured at 250, 500, 1000, 2000 and 4000 Hz showed a hearing loss of 30, 40, 35, 55 and 50 dB respectively.
The less severe hearing loss for bone conducted sound at lower frequencies is consistent with the finding that the bone conduction transfer function peaks between 700 and 1200 Hz22. It is thus possible that this subject would hear the low frequencies of his own voice through bone conduction during speech production. However, because of his substantial sensorineural hearing loss, any transmission through the cochlea would be attenuated. (The data for this subject are shown with a gold star in Figure 3b.) The task was to repeat a single word test-utterance that was displayed on a computer monitor while a robotic device delivered a velocity dependent load to the jaw that acted in the protrusion direction. The experiment was carried out in blocks of twelve utterances each. On each trial, the test utterance was chosen randomly from a set of four words, saw, say, sass, and sane, and within a block of twelve utterances each of the test-utterances in the set was presented three times on average. The test words were selected so that in each case the fricative consonant “s” was followed by a vowel or a diphthong. Production of the consonant s involves a precise jaw position near closure. The vowels and diphthongs were chosen to give large amplitude jaw movements and high force levels. The display of the word was controlled manually by the experimenter, which introduced a delay of 1 to 2 seconds between the test-utterances. The first three to five blocks were recorded under null or no-load conditions. The implant or hearing-aid was left on for this first set of trials, which constituted the hearing-on null phase of the experiment. Normal hearing control subjects were tested for a similar number of blocks in the null condition. The implant was then turned off and stayed off for the remainder of the experiment. After the implant was first turned off there was a waiting period of approximately 10 minutes before subsequent testing began.
At this point a further ten to fifteen blocks were recorded under null conditions for the implant group. This constituted a hearing-off null phase. We recorded this large number of no-load trials after turning off the implant because there are rapid changes in the speech formant structure when the implant is first turned off25, 26, and we wanted to ensure that the production pattern had stabilized before training commenced. The next twenty-five blocks, approximately 300 repetitions of the test-utterances, were recorded with the load on and constituted the training phase. Following the training phase, the load was unexpectedly turned off and one block of “catch” trials was recorded in the absence of load.

Experimental procedures

A computer controlled robotic device (Phantom Premium 1.0, Sensable Technologies, Woburn, MA, USA) was used to deliver a load to the lower jaw. The robotic device was connected to a custom-made acrylic-metal dental appliance via a magnesium-titanium rotary connector that offered fully unconstrained movement of the jaw in the absence of external load. The dental appliance was attached to the buccal surface of the mandibular teeth with a dental adhesive (Iso-Dent, Ellman International, Hewlett, NY, USA). A force/torque sensor (ATI Nano-17, ATI Industrial Automation, Apex, NC, USA) was mounted at the tip of the robotic device to measure the resistive force applied by the subjects in opposition to the load. The subject’s head was restrained during the experiment by connecting a second dental appliance that was glued to the maxillary teeth to an external frame that consisted of a set of articulated metal arms. The metal arms were locked in place throughout the experimental session. Jaw movement was recorded in three dimensions at a rate of 1 kHz and the data were digitally low-pass filtered offline at 8 Hz. The subject’s voice was recorded using a unidirectional microphone (Sennheiser, Germany). The acoustical signal was low-pass analogue filtered at 22 kHz and digitally sampled at 44 kHz. The robot applied a mechanical load to the jaw that resulted in jaw protrusion. The load varied with the absolute vertical velocity of the jaw and was governed by the following equation: F = k|v|, where F is the load in Newtons, k is a scaling coefficient and v is jaw velocity in mm/s. The scaling coefficient was chosen to have a value between 0.6 and 0.8, with a higher coefficient used for subjects who spoke more slowly and vice versa. The maximum load, however, was capped at 7.0 N. Jaw velocity estimates for purposes of load application were obtained in real-time by numerically differentiating jaw position values obtained from the robot encoders.
The computed velocity signal was low-pass filtered using a first order Butterworth filter with a cut-off frequency of 2 Hz. The smoothed velocity profile was used online to generate the protrusion load.
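The load law and velocity smoothing described above can be sketched as follows. The first-order low-pass here is a standard bilinear-transform design standing in for the 2 Hz Butterworth filter; parameter names and the update loop are assumptions, not the authors' controller code:

```python
import math

def protrusion_load(v_mm_per_s, k=0.7, f_max=7.0):
    """Velocity-dependent load F = k|v| in Newtons, capped at f_max.
    Methods give k between 0.6 and 0.8 and a 7.0 N maximum."""
    return min(k * abs(v_mm_per_s), f_max)

def lowpass_first_order(x, fc, fs):
    """First-order low-pass (bilinear transform) applied sample by sample
    to the differentiated jaw position before the load is computed.
    fc: cut-off frequency in Hz (2 Hz here); fs: sampling rate in Hz."""
    wc = math.tan(math.pi * fc / fs)     # prewarped analogue cut-off
    b = wc / (1.0 + wc)                  # feed-forward coefficients b0 = b1
    a = (wc - 1.0) / (1.0 + wc)          # feedback coefficient a1
    y, x_prev, y_prev = [], 0.0, 0.0
    for xi in x:
        yi = b * (xi + x_prev) - a * y_prev
        y.append(yi)
        x_prev, y_prev = xi, yi
    return y
```

In an online setting the filter update would run once per 1 kHz sample, with `protrusion_load` applied to each smoothed velocity value.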

Data analyses

A measure of path curvature — the ratio of protrusion to elevation at the peak vertical velocity of the jaw — was computed for each repetition of the test-utterance. The jaw opening movement was used for analyses. Movement start and end were scored at 10% of peak vertical velocity. The first two trials in each training block were excluded from analysis to guard against the possibility that subjects initially stiffened up at the onset of force application. Adaptation was assessed by computing the mean curvature for the first 25% and the last 25% of the force-field training trials. This gave approximately 50 movements in each case. A similarly computed measure of null condition performance was obtained by taking the mean of the last 50% of trials in the null condition blocks (both with implant on and off). Statistical assessments of adaptation were conducted using null blocks and initial and final training blocks. The effect on movement curvature of the unexpected removal of the load following training was evaluated relative to the null condition baseline level. This was done by subtracting the mean of the null condition values from the after-effect block11. The first five trials in the after-effect block were used for this analysis. A t-test was conducted to determine whether the mean of the normalized after-effect values was negative. Acoustical effects were quantified by computing the first and second formant frequencies of the vowels. An interval of approximately 100 msec that contained the steady state portion of the vowel was selected manually on a per trial basis. The formants within this interval were computed using a formant-tracking algorithm that was based on the standard LPC procedures implemented in Matlab. An analysis window of length 25 msec was used. The median of the formant estimates within the interval was used for subsequent analyses.
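The movement-scoring rule above (start and end at 10% of peak vertical velocity) can be sketched as a threshold walk outward from the velocity peak; indices and names here are assumptions for illustration:

```python
import numpy as np

def movement_window(velocity, threshold=0.10):
    """Start and end samples of a jaw movement, scored where the absolute
    vertical velocity first drops below 10% of its peak on either side
    of the peak. Returns (start, end) sample indices."""
    v = np.abs(np.asarray(velocity, dtype=float))
    peak = int(v.argmax())
    thr = threshold * v[peak]
    start = peak
    while start > 0 and v[start - 1] >= thr:
        start -= 1
    end = peak
    while end < len(v) - 1 and v[end + 1] >= thr:
        end += 1
    return start, end
```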

Statistical analysis

The main statistical analyses were conducted using ANOVA followed by Tukey’s HSD post-hoc tests.
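For reference, the F statistic underlying a one-way ANOVA of this kind can be sketched in pure Python (an illustration only, not the authors' code; the Tukey HSD post-hoc step, which requires the studentized-range distribution, is omitted):

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across the given groups:
    between-group mean square divided by within-group mean square."""
    k = len(groups)                                   # number of groups
    n = sum(len(g) for g in groups)                   # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

Identical group means give F = 0; widely separated means relative to within-group scatter give a large F, which is then referred to the F distribution with (k - 1, n - k) degrees of freedom.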
References (21 in total)

1.  Effects of short-term auditory deprivation on speech production in adult cochlear implant users.

Authors:  M A Svirsky; H Lane; J S Perkell; J Wozniak
Journal:  J Acoust Soc Am       Date:  1992-09       Impact factor: 1.840

2.  Effect of different types of auditory stimulation on vowel formant frequencies in multichannel cochlear implant users.

Authors:  M A Svirsky; E A Tobey
Journal:  J Acoust Soc Am       Date:  1991-06       Impact factor: 1.840

3.  Humans integrate visual and haptic information in a statistically optimal fashion.

Authors:  Marc O Ernst; Martin S Banks
Journal:  Nature       Date:  2002-01-24       Impact factor: 49.962

4.  Incomplete compensation to articulatory perturbation.

Authors:  D H McFarland; S R Baum
Journal:  J Acoust Soc Am       Date:  1995-03       Impact factor: 1.840

5.  Sensorimotor adaptation in speech production.

Authors:  J F Houde; M I Jordan
Journal:  Science       Date:  1998-02-20       Impact factor: 47.728

6.  Adaptive representation of dynamics during learning of a motor task.

Authors:  R Shadmehr; F A Mussa-Ivaldi
Journal:  J Neurosci       Date:  1994-05       Impact factor: 6.167

7.  Bone-conduction measurement and calibration using the cancellation method.

Authors:  T S Kapteyn; E H Boezeman; A M Snel
Journal:  J Acoust Soc Am       Date:  1983-10       Impact factor: 1.840

8.  Rapid adaptation to Coriolis force perturbations of arm trajectory.

Authors:  J R Lackner; P Dizio
Journal:  J Neurophysiol       Date:  1994-07       Impact factor: 2.714

9.  A study of speech deterioration in post-lingually deafened adults.

Authors:  R Cowie; E Douglas-Cowie; A G Kerr
Journal:  J Laryngol Otol       Date:  1982-02       Impact factor: 1.469

10.  Speech compensation to structural modifications of the oral cavity.

Authors:  D H McFarland; S R Baum; C Chabot
Journal:  J Acoust Soc Am       Date:  1996-08       Impact factor: 1.840

Cited by (32 in total)

1.  Movement goals and feedback and feedforward control mechanisms in speech production.

Authors:  Joseph S Perkell
Journal:  J Neurolinguistics       Date:  2010-03-26       Impact factor: 1.710

2.  Adaptive auditory feedback control of the production of formant trajectories in the Mandarin triphthong /iau/ and its pattern of generalization.

Authors:  Shanqing Cai; Satrajit S Ghosh; Frank H Guenther; Joseph S Perkell
Journal:  J Acoust Soc Am       Date:  2010-10       Impact factor: 1.840

3.  fMRI investigation of unexpected somatosensory feedback perturbation during speech.

Authors:  Elisa Golfinopoulos; Jason A Tourville; Jason W Bohland; Satrajit S Ghosh; Alfonso Nieto-Castanon; Frank H Guenther
Journal:  Neuroimage       Date:  2010-12-30       Impact factor: 6.556

4.  (Review) A model for production, perception, and acquisition of actions in face-to-face communication.

Authors:  Bernd J Kröger; Stefan Kopp; Anja Lowit
Journal:  Cogn Process       Date:  2009-12-10

5.  Auditory plasticity and speech motor learning.

Authors:  Sazzad M Nasir; David J Ostry
Journal:  Proc Natl Acad Sci U S A       Date:  2009-11-02       Impact factor: 11.205

6.  Compensations in response to real-time formant perturbations of different magnitudes.

Authors:  Ewen N MacDonald; Robyn Goldberg; Kevin G Munhall
Journal:  J Acoust Soc Am       Date:  2010-02       Impact factor: 1.840

7.  Temporal and spectral audiotactile interactions in musicians.

Authors:  Simon P Landry; Andréanne Sharp; Sara Pagé; François Champoux
Journal:  Exp Brain Res       Date:  2016-11-01       Impact factor: 1.972

8.  Neural Basis of Sensorimotor Plasticity in Speech Motor Adaptation.

Authors:  Mohammad Darainy; Shahabeddin Vahdat; David J Ostry
Journal:  Cereb Cortex       Date:  2019-07-05       Impact factor: 5.357

9.  Intermittent theta burst stimulation over right somatosensory larynx cortex enhances vocal pitch-regulation in nonsingers.

Authors:  Sebastian Finkel; Ralf Veit; Martin Lotze; Anders Friberg; Peter Vuust; Surjo Soekadar; Niels Birbaumer; Boris Kleber
Journal:  Hum Brain Mapp       Date:  2019-01-21       Impact factor: 5.038

10.  Functional and structural aging of the speech sensorimotor neural system: functional magnetic resonance imaging evidence.

Authors:  Pascale Tremblay; Anthony S Dick; Steven L Small
Journal:  Neurobiol Aging       Date:  2013-03-21       Impact factor: 4.673

