McCall E. Sarrett, Bob McMurray, Efthymia C. Kapnoula.
Abstract
Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed effects model over time to quantify how language processing cascades proceed on a millisecond-by-millisecond basis. Our results indicate: (1) perceptual processing and memory for fine-grained acoustics are preserved in brain activity for up to 900 msec; (2) contextual analysis begins early and is graded with respect to the acoustic signal; and (3) top-down predictions influence perceptual processing in some cases; however, these predictions are available simultaneously with the veridical signal. These mechanistic insights provide a basis for a better understanding of the cortical language network.
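A minimal sketch of the kind of time-resolved mixed-effects analysis the abstract describes, assuming single-trial EEG amplitudes regressed on a continuum step and a context predictor at each timepoint; the variable names (vot_step, context, subject, amplitude) and toy data are illustrative, not the authors' actual design or code.

```python
# Hypothetical sketch: fit one mixed-effects model per timepoint, with a
# random intercept per subject, yielding a timecourse of each fixed effect.
# Data here are synthetic; sizes and effect magnitudes are arbitrary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials, n_times = 8, 40, 50  # toy sizes, not the study's

rows = []
for subj in range(n_subj):
    for trial in range(n_trials):
        vot = rng.integers(0, 7)       # continuum step (e.g., bees..peas)
        ctx = rng.choice([0.0, 1.0])   # sentential expectation present/absent
        for t in range(n_times):
            amp = 0.1 * vot - 0.2 * ctx + rng.normal(0.0, 1.0)
            rows.append((subj, trial, t, vot, ctx, amp))

df = pd.DataFrame(
    rows,
    columns=["subject", "trial", "time", "vot_step", "context", "amplitude"],
)

# One model per timepoint: fixed effects for continuum step, context, and
# their interaction; subjects as the grouping (random-intercept) factor.
betas = []
for t, d in df.groupby("time"):
    fit = smf.mixedlm(
        "amplitude ~ vot_step * context", d, groups=d["subject"]
    ).fit()
    betas.append((t, fit.params["vot_step"], fit.params["context"]))

# 'betas' now holds a millisecond-by-millisecond (here, sample-by-sample)
# timecourse of how strongly acoustics and context predict the EEG signal.
```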
Keywords: Electroencephalography; N100; N400; Predictive coding; Semantic integration; Speech perception; Top-down effects
Year: 2020 PMID: 33086178 PMCID: PMC7682806 DOI: 10.1016/j.bandl.2020.104875
Source DB: PubMed Journal: Brain Lang ISSN: 0093-934X Impact factor: 2.381