Il Joon Moon, Jong Ho Won, Min-Hyun Park, D Timothy Ives, Kaibao Nie, Michael G Heinz, Christian Lorenzi, Jay T Rubinstein.
Abstract
The dichotomy between acoustic temporal envelope (ENV) and fine structure (TFS) cues has stimulated numerous studies over the past decade to understand the relative role of acoustic ENV and TFS in human speech perception. Such acoustic temporal speech cues produce distinct neural discharge patterns at the level of the auditory nerve, yet little is known about the central neural mechanisms underlying the dichotomy in speech perception between neural ENV and TFS cues. We explored the question of how the peripheral auditory system encodes neural ENV and TFS cues in steady or fluctuating background noise, and how the central auditory system combines these forms of neural information for speech identification. We sought to address this question by (1) measuring sentence identification in background noise for human subjects as a function of the degree of available acoustic TFS information and (2) examining the optimal combination of neural ENV and TFS cues to explain human speech perception performance using computational models of the peripheral auditory system and central neural observers. Speech-identification performance by human subjects decreased as the acoustic TFS information was degraded in the speech signals. The model predictions best matched human performance when a greater emphasis was placed on neural ENV coding rather than neural TFS. However, neural TFS cues were necessary to account for the full effect of background-noise modulations on human speech-identification performance.
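The ENV/TFS decomposition discussed in the abstract is conventionally computed from the analytic signal of each band-filtered channel, with ENV taken as the magnitude and TFS as the cosine of the instantaneous phase. A minimal single-band sketch using SciPy's Hilbert transform is shown below; the carrier and modulator parameters are illustrative and not taken from the study:

```python
import numpy as np
from scipy.signal import hilbert

def env_tfs(x):
    """Split a band-limited signal into temporal envelope (ENV) and
    temporal fine structure (TFS) via the analytic signal."""
    analytic = hilbert(x)
    env = np.abs(analytic)             # ENV: magnitude of analytic signal
    tfs = np.cos(np.angle(analytic))   # TFS: unit-amplitude carrier
    return env, tfs

# Illustrative example: an amplitude-modulated tone
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)           # 1 kHz fine structure
modulator = 1 + 0.5 * np.sin(2 * np.pi * 8 * t)  # 8 Hz envelope
env, tfs = env_tfs(modulator * carrier)
```

In vocoder-style experiments like the one described, the recovered ENV of each channel is used to modulate a noise or tone carrier, while TFS manipulations degrade or preserve the phase term.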
Keywords: computational model; neural mechanism; speech perception; temporal cues
Year: 2014 PMID: 25186758 PMCID: PMC4152611 DOI: 10.1523/JNEUROSCI.1025-14.2014
Source DB: PubMed Journal: J Neurosci ISSN: 0270-6474 Impact factor: 6.167