
Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody.

Philippe Albouy, Lucas Benjamin, Benjamin Morillon, Robert J Zatorre.

Abstract

Does brain asymmetry for speech and music emerge from acoustical cues or from domain-specific neural networks? We selectively filtered temporal or spectral modulations in sung speech stimuli for which verbal and melodic content was crossed and balanced. Perception of speech decreased only with degradation of temporal information, whereas perception of melodies decreased only with spectral degradation. Functional magnetic resonance imaging data showed that the neural decoding of speech and melodies depends on activity patterns in left and right auditory regions, respectively. This asymmetry is supported by specific sensitivity to spectrotemporal modulation rates within each region. Finally, the effects of degradation on perception were paralleled by their effects on neural classification. Our results suggest a match between acoustical properties of communicative signals and neural specializations adapted to that purpose.
Copyright © 2020 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

Year:  2020        PMID: 32108113     DOI: 10.1126/science.aaz3468

Source DB:  PubMed          Journal:  Science        ISSN: 0036-8075            Impact factor:   47.728


Related articles: 23 in total

1.  MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training.

Authors:  Sebastian Puschmann; Mor Regev; Sylvain Baillet; Robert J Zatorre
Journal:  J Neurosci       Date:  2021-02-03       Impact factor: 6.167

2.  Learning metrics on spectrotemporal modulations reveals the perception of musical instrument timbre.

Authors:  Etienne Thoret; Baptiste Caramiaux; Philippe Depalle; Stephen McAdams
Journal:  Nat Hum Behav       Date:  2020-11-30

3.  Envelope reconstruction of speech and music highlights stronger tracking of speech at low frequencies.

Authors:  Nathaniel J Zuk; Jeremy W Murphy; Richard B Reilly; Edmund C Lalor
Journal:  PLoS Comput Biol       Date:  2021-09-17       Impact factor: 4.475

4.  The Neuroanatomy of Speech Processing: A Large-scale Lesion Study.

Authors:  Corianne Rogalsky; Alexandra Basilakos; Chris Rorden; Sara Pillay; Arianna N LaCroix; Lynsey Keator; Soren Mickelsen; Steven W Anderson; Tracy Love; Julius Fridriksson; Jeffrey Binder; Gregory Hickok
Journal:  J Cogn Neurosci       Date:  2022-07-01       Impact factor: 3.420

5.  Differential activation of a frontoparietal network explains population-level differences in statistical learning from speech.

Authors:  Joan Orpella; M Florencia Assaneo; Pablo Ripollés; Laura Noejovich; Diana López-Barroso; Ruth de Diego-Balaguer; David Poeppel
Journal:  PLoS Biol       Date:  2022-07-06       Impact factor: 9.593

6.  Potential Benefits of Music Therapy on Stroke Rehabilitation. (Review)

Authors:  Chengyan Xu; Zixia He; Zhipeng Shen; Fei Huang
Journal:  Oxid Med Cell Longev       Date:  2022-06-15       Impact factor: 7.310

7.  Effects of intention in the imitation of sung and spoken pitch.

Authors:  Peter Q Pfordresher; James T Mantell; Tim A Pruitt
Journal:  Psychol Res       Date:  2021-05-20

8.  The Microstructural Plasticity of the Arcuate Fasciculus Undergirds Improved Speech in Noise Perception in Musicians.

Authors:  Xiaonan Li; Robert J Zatorre; Yi Du
Journal:  Cereb Cortex       Date:  2021-07-29       Impact factor: 5.357

9.  Coupled oscillations enable rapid temporal recalibration to audiovisual asynchrony.

Authors:  Therese Lennert; Soheila Samiee; Sylvain Baillet
Journal:  Commun Biol       Date:  2021-05-11

10.  Music-selective neural populations arise without musical training.

Authors:  Dana Boebinger; Sam V Norman-Haignere; Josh H McDermott; Nancy Kanwisher
Journal:  J Neurophysiol       Date:  2021-02-17       Impact factor: 2.974

