| Literature DB >> 28212857 |
Nai Ding, Aniruddh D. Patel, Lin Chen, Henry Butler, Cheng Luo, David Poeppel.
Abstract
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech recordings and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, as well as symphonic, jazz, and rock music. The temporal modulations of speech and music show broad but well-separated peaks around 5 Hz and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility that should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
Keywords: Modulation spectrum; Music; Rhythm; Speech; Temporal modulations
Year: 2017 | PMID: 28212857 | DOI: 10.1016/j.neubiorev.2017.02.011
Source DB: PubMed | Journal: Neurosci Biobehav Rev | ISSN: 0149-7634 | Impact factor: 8.989
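The modulation-spectrum analysis summarized in the abstract (the spectrum of slow intensity fluctuations in the 0.25-32 Hz range) can be approximated with a short Python script. This is a minimal sketch, not the authors' published pipeline; the input file name, the broadband Hilbert envelope, and the 100 Hz envelope sampling rate are all illustrative assumptions made for the example.

import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, resample_poly

# Load a recording (hypothetical file name) and mix down to mono.
fs, x = wavfile.read("speech_sample.wav")
x = x.astype(float)
if x.ndim > 1:
    x = x.mean(axis=1)

# Intensity envelope via the magnitude of the analytic signal.
env = np.abs(hilbert(x))

# Downsample the envelope; 100 Hz comfortably covers 0.25-32 Hz.
fs_env = 100
env = resample_poly(env, fs_env, fs)

# Modulation spectrum = spectrum of the demeaned, windowed envelope.
env = env - env.mean()
spec = np.abs(np.fft.rfft(env * np.hanning(len(env))))
freqs = np.fft.rfftfreq(len(env), d=1.0 / fs_env)

# Restrict to the 0.25-32 Hz range discussed in the abstract and report
# the dominant modulation rate (~5 Hz expected for speech, ~2 Hz for music).
keep = (freqs >= 0.25) & (freqs <= 32)
peak = freqs[keep][np.argmax(spec[keep])]
print(f"Dominant modulation rate: {peak:.2f} Hz")

For long recordings, averaging spectra over shorter segments (Welch-style) would give a smoother estimate than the single full-length FFT used in this sketch.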