
Temporal modulations in speech and music.

Nai Ding, Aniruddh D Patel, Lin Chen, Henry Butler, Cheng Luo, David Poeppel.

Abstract

Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25–32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech recordings and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, as well as symphonic, jazz, and rock music. The temporal modulations of speech and music show broad but well-separated peaks, around 5 Hz and 2 Hz respectively. These acoustically dominant timescales may be intrinsic features of speech and music, a possibility that should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
Copyright © 2017 Elsevier Ltd. All rights reserved.
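As an illustration of the analysis the abstract describes: a temporal modulation spectrum can be estimated by extracting a sound's slow intensity envelope and taking its DFT. The sketch below is a minimal, simplified version of that idea (a single Hilbert envelope rather than the cochlear-filterbank pipeline such studies typically use), assuming `numpy` and `scipy` are available; the function and variable names are illustrative, not from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(signal, fs, fmax=32.0):
    """Estimate the temporal modulation spectrum of a mono signal:
    take the magnitude of the analytic signal as the intensity
    envelope, remove its mean, and keep DFT magnitudes at
    modulation rates up to fmax Hz (the 0.25-32 Hz range of
    interest here)."""
    envelope = np.abs(hilbert(signal))      # slow intensity envelope
    envelope -= envelope.mean()             # drop the DC component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    keep = freqs <= fmax
    return freqs[keep], spectrum[keep]

# Toy check: a 1 kHz carrier amplitude-modulated at 4 Hz should
# produce a modulation-spectrum peak near 4 Hz.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)
am = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier
freqs, spec = modulation_spectrum(am, fs)
peak_rate = freqs[np.argmax(spec)]
print(round(peak_rate, 1))  # peak near the 4 Hz modulation rate
```

On real recordings, the envelope would typically be computed per narrowband channel and the resulting spectra averaged and normalized before comparing, e.g., speech against music.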

Keywords:  Modulation spectrum; Music; Rhythm; Speech; Temporal modulations


Year:  2017        PMID: 28212857     DOI: 10.1016/j.neubiorev.2017.02.011

Source DB:  PubMed          Journal:  Neurosci Biobehav Rev        ISSN: 0149-7634            Impact factor:   8.989


Related articles: 75 in total

1.  Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex.

Authors:  Sam V Norman-Haignere; Josh H McDermott
Journal:  PLoS Biol       Date:  2018-12-03       Impact factor: 8.029

2. [Review] A New Unifying Account of the Roles of Neuronal Entrainment.

Authors:  Peter Lakatos; Joachim Gross; Gregor Thut
Journal:  Curr Biol       Date:  2019-09-23       Impact factor: 10.834

3.  MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training.

Authors:  Sebastian Puschmann; Mor Regev; Sylvain Baillet; Robert J Zatorre
Journal:  J Neurosci       Date:  2021-02-03       Impact factor: 6.167

4.  The possible role of brain rhythms in perceiving fast speech: Evidence from adult aging.

Authors:  Lana R Penn; Nicole D Ayasse; Arthur Wingfield; Oded Ghitza
Journal:  J Acoust Soc Am       Date:  2018-10       Impact factor: 1.840

5.  Neural entrainment to music is sensitive to melodic spectral complexity.

Authors:  Indiana Wollman; Pablo Arias; Jean-Julien Aucouturier; Benjamin Morillon
Journal:  J Neurophysiol       Date:  2020-02-05       Impact factor: 2.714

6.  Listening to birdsong reveals basic features of rate perception and aesthetic judgements.

Authors:  Tina Roeske; Pauline Larrouy-Maestri; Yasuhiro Sakamoto; David Poeppel
Journal:  Proc Biol Sci       Date:  2020-03-25       Impact factor: 5.349

7.  Amplitude modulation transfer functions reveal opposing populations within both the inferior colliculus and medial geniculate body.

Authors:  Duck O Kim; Laurel Carney; Shigeyuki Kuwada
Journal:  J Neurophysiol       Date:  2020-09-09       Impact factor: 2.714

8. [Review] Speech rhythms and their neural foundations.

Authors:  David Poeppel; M Florencia Assaneo
Journal:  Nat Rev Neurosci       Date:  2020-05-06       Impact factor: 34.870

9.  An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions.

Authors:  Sanne Ten Oever; Andrea E Martin
Journal:  Elife       Date:  2021-08-02       Impact factor: 8.140

10.  Linguistic Structure and Meaning Organize Neural Oscillations into a Content-Specific Hierarchy.

Authors:  Greta Kaufeld; Hans Rutger Bosker; Sanne Ten Oever; Phillip M Alday; Antje S Meyer; Andrea E Martin
Journal:  J Neurosci       Date:  2020-10-23       Impact factor: 6.167

