| Literature DB >> 31761952 |
Xiangbin Teng, David Poeppel.
Abstract
Natural sounds contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over such wide-ranging timescales to achieve sound recognition? Previous work (Teng et al. 2017) demonstrated a temporal coding preference for the theta and gamma ranges, but it remains unclear how acoustic dynamics between these two ranges are coded. Here, we generated artificial sounds with temporal structures over timescales from ~200 to ~30 ms and investigated temporal coding on different timescales. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography recording. Although considerable intertrial phase coherence can be induced by acoustic dynamics of all timescales, classification analyses reveal that the acoustic information of all timescales is preferentially differentiated through the theta and gamma bands, but not through the alpha and beta bands; stimulus reconstruction shows that the acoustic dynamics in the theta and gamma ranges are preferentially coded. We demonstrate that the theta and gamma bands show the generality of temporal coding with comparable capacity. Our findings provide a novel perspective: acoustic information of all timescales is discretized into two temporal chunks for further perceptual analysis.
Keywords: asymmetric sampling; discretization; multiplexing; temporal channel; temporal processing
Year: 2020 PMID: 31761952 PMCID: PMC7174990 DOI: 10.1093/cercor/bhz263
Source DB: PubMed Journal: Cereb Cortex ISSN: 1047-3211 Impact factor: 5.357