
Relationship of Grammatical Context on Children's Recognition of s/z-Inflected Words.

Meredith Spratford, Hannah Hodson McLean, Ryan McCreery

Abstract

BACKGROUND: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language.
PURPOSE: To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimulus context (presented in isolation versus embedded medially within a sentence with low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH; 8-kHz low-pass filtered for CHH).
RESEARCH DESIGN: A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted from semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH.
STUDY SAMPLE: Thirty-five children, aged 5-12 years, were recruited to participate in the study: 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English.
DATA COLLECTION AND ANALYSIS: Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady-state, speech-shaped noise. Real-ear probe microphone measures of the HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence-embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH.
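The +10 dB signal-to-noise ratio presentation described above can be illustrated with a short sketch. This is illustrative only: the study used recorded speech and speech-shaped noise, whereas the signals and the function name `mix_at_snr` below are ours.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio of the mixture
    equals `snr_db` (in RMS terms), then return the mixture and
    the scaled noise."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # Target noise RMS for the requested SNR (dB = 20*log10 of RMS ratio).
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    scaled_noise = noise * (target_noise_rms / noise_rms)
    return speech + scaled_noise, scaled_noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # stand-in "speech"
noise = rng.standard_normal(16000)                            # stand-in noise
mixture, scaled_noise = mix_at_snr(speech, noise, snr_db=10.0)

# Verify the realized SNR of the mixture components.
realized = 20 * np.log10(np.sqrt(np.mean(speech ** 2))
                         / np.sqrt(np.mean(scaled_noise ** 2)))
print(round(realized, 1))  # 10.0
```

The same scaling works regardless of the noise spectrum, so shaping the noise to match the long-term speech spectrum (as in the study) would not change this step.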
RESULTS: When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition than in the sentence-embedded condition. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition than in the isolated condition. CHH whose HAs had greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences.
CONCLUSIONS: High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children's use of high-frequency audibility in a manner that approximates how they learn language.
American Academy of Audiology
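The "maximum audible frequency" measure used to predict CHH performance can be sketched as the highest frequency at which the aided hearing-aid output exceeds the listener's threshold. This is a simplified illustration with hypothetical numbers; the study derived the measure from real-ear probe microphone data, and the function name below is ours.

```python
def maximum_audible_frequency(freqs_hz, aided_output_db, threshold_db):
    """Return the highest frequency (Hz) at which the aided output
    level exceeds the listener's threshold, or None if no frequency
    is audible."""
    audible = [f for f, out, thr in zip(freqs_hz, aided_output_db, threshold_db)
               if out > thr]
    return max(audible) if audible else None

# Hypothetical audiometric frequencies, aided output, and thresholds (dB SPL).
freqs = [500, 1000, 2000, 4000, 6000, 8000]
aided = [65, 62, 58, 50, 47, 30]
thresh = [40, 40, 45, 48, 45, 50]
print(maximum_audible_frequency(freqs, aided, thresh))  # 6000
```

In this hypothetical fitting the aided output falls below threshold only at 8000 Hz, so the maximum audible frequency is 6000 Hz; a narrower aided bandwidth would lower that value.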


Year:  2017        PMID: 28972469      PMCID: PMC5665565          DOI: 10.3766/jaaa.16151

Source DB:  PubMed          Journal:  J Am Acad Audiol        ISSN: 1050-0545            Impact factor:   1.664


References:  29 in total

1.  Learning to perceive speech: how fricative perception changes, and how it stays the same.

Authors:  Susan Nittrouer
Journal:  J Acoust Soc Am       Date:  2002-08       Impact factor: 1.840

2.  Frequency-importance functions for words in high- and low-context sentences.

Authors:  T S Bell; D D Dirks; T D Trine
Journal:  J Speech Hear Res       Date:  1992-08

3.  Discriminability and perceptual weighting of some acoustic cues to speech perception by 3-year-olds.

Authors:  S Nittrouer
Journal:  J Speech Hear Res       Date:  1996-04

4.  Bandwidth effects on children's perception of the inflectional morpheme /s/: acoustical measurements, auditory detection, and clarity rating.

Authors:  R W Kortekaas; P G Stelmachowicz
Journal:  J Speech Lang Hear Res       Date:  2000-06       Impact factor: 2.297

5.  Evaluation of nonlinear frequency compression for school-age children with moderate to moderately severe hearing loss.

Authors:  Jace Wolfe; Andrew John; Erin Schafer; Myriel Nyffeler; Michael Boretzki; Teresa Caraway
Journal:  J Am Acad Audiol       Date:  2010 Nov-Dec       Impact factor: 1.664

6.  Sentence-position effects on children's perception and production of English third person singular -s.

Authors:  Megha Sundara; Katherine Demuth; Patricia K Kuhl
Journal:  J Speech Lang Hear Res       Date:  2010-08-12       Impact factor: 2.297

7.  Some differences between English plural noun inflections and third singular verb inflections in the input: the contributions of frequency, sentence position, and duration.

Authors:  L Hsieh; L B Leonard; L Swanson
Journal:  J Child Lang       Date:  1999-10

8.  The role of sentence position, allomorph, and morpheme type on accurate use of s-related morphemes by children who are hard of hearing.

Authors:  Keegan Koehlinger; Amanda Owen Van Horne; Jacob Oleson; Ryan McCreery; Mary Pat Moeller
Journal:  J Speech Lang Hear Res       Date:  2015-04       Impact factor: 2.297

9.  Language Outcomes in Young Children with Mild to Severe Hearing Loss.

Authors:  J Bruce Tomblin; Melody Harrison; Sophie E Ambrose; Elizabeth A Walker; Jacob J Oleson; Mary Pat Moeller
Journal:  Ear Hear       Date:  2015 Nov-Dec       Impact factor: 3.570

10.  Adult-child differences in acoustic cue weighting are influenced by segmental context: children are not always perceptually biased toward transitions.

Authors:  Catherine Mayo; Alice Turk
Journal:  J Acoust Soc Am       Date:  2004-06       Impact factor: 1.840

  1 in total

1.  Listener Performance with a Novel Hearing Aid Frequency Lowering Technique.

Authors:  Benjamin J Kirby; Judy G Kopun; Meredith Spratford; Clairissa M Mollak; Marc A Brennan; Ryan W McCreery
Journal:  J Am Acad Audiol       Date:  2017-10       Impact factor: 1.664

