Bob McMurray (1), Tyler P. Ellis (2), Keith S. Apfelbaum (3,4). 1. Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, Otolaryngology, University of Iowa, Iowa City, Iowa, USA. 2. Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA. 3. Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA. 4. Foundations in Learning, Inc., Coralville, Iowa, USA.
Abstract
OBJECTIVES: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at the lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which less information is available to recover a target word. The authors asked here whether CI users' frequent experience with degraded input leads to lexical dynamics that are better suited to coping with uncertainty.

DESIGN: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in two groups (23 used standard electric-only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched, age-typical hearing (ATH) controls.

RESULTS: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both groups of CI users). Analysis of fixations showed close time-locking to the timing of the mispronunciation: onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial; offset mispronunciations showed no effect early but suppressed looking later. This pattern was attested in all three groups, though both groups of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption caused by the mispronounced forms, they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that, within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task, and that CI users whose fixation patterns more closely resembled the rapid patterns of ATH listeners showed better outcomes.

CONCLUSIONS: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations.