Ediz Sohoglu, Matthew H Davis.
Abstract
Human speech perception can be described as Bayesian perceptual inference but how are these Bayesian computations instantiated neurally? We used magnetoencephalographic recordings of brain responses to degraded spoken words and experimentally manipulated signal quality and prior knowledge. We first demonstrate that spectrotemporal modulations in speech are more strongly represented in neural responses than alternative speech representations (e.g. spectrogram or articulatory features). Critically, we found an interaction between speech signal quality and expectations from prior written text on the quality of neural representations; increased signal quality enhanced neural representations of speech that mismatched with prior expectations, but led to greater suppression of speech that matched prior expectations. This interaction is a unique neural signature of prediction error computations and is apparent in neural responses within 100 ms of speech input. Our findings contribute to the detailed specification of a computational model of speech perception based on predictive coding frameworks.
Keywords: MEG; human; neuroscience; predictive coding; spectrotemporal modulations; speech perception
Year: 2020 PMID: 33147138 PMCID: PMC7641582 DOI: 10.7554/eLife.58077
Source DB: PubMed Journal: eLife ISSN: 2050-084X Impact factor: 8.140