
Hierarchical amplitude modulation structures and rhythm patterns: Comparing Western musical genres, song, and nature sounds to Babytalk.

Tatsuya Daikoku, Usha Goswami.

Abstract

Statistical learning of physical stimulus characteristics is important for the development of cognitive systems like language and music. Rhythm patterns are a core component of both systems, and rhythm is key to language acquisition by infants. Accordingly, the physical stimulus characteristics that yield speech rhythm in "Babytalk" may also describe the hierarchical rhythmic relationships that characterize human music and song. Computational modelling of the amplitude envelope of "Babytalk" (infant-directed speech, IDS) using a demodulation approach (Spectral-Amplitude Modulation Phase Hierarchy model, S-AMPH) can describe these characteristics. S-AMPH modelling of Babytalk has previously shown that bands of amplitude modulations (AMs) at different temporal rates, and the phase relations between them, help to create its inherent structured rhythms. Additionally, S-AMPH modelling of children's nursery rhymes shows that different rhythm patterns (trochaic, iambic, dactylic) depend on the phase relations between AM bands centred on ~2 Hz and ~5 Hz. The importance of these AM phase relations was confirmed via a second demodulation approach (Probabilistic Amplitude Demodulation, PAD). Here we apply both S-AMPH and PAD to demodulate the amplitude envelopes of Western musical genres and songs. Quasi-rhythmic, non-human nature sounds (birdsong, rain, wind) were used for control analyses. From an AM perspective, we expected the physical stimulus characteristics of human music and song to match those of IDS. Given prior speech-based analyses, we also expected that the AM cycles derived from the modelling would identify musical units like crotchets, quavers and demi-quavers. Both models revealed a hierarchically nested AM structure for music and song, but not for nature sounds, and this structure matched that of IDS. Both models also generated systematic AM cycles corresponding to musical units like crotchets and quavers. Both music and language are created by humans and shaped by culture. Acoustic rhythm in IDS and music appears to depend on many of the same physical characteristics, facilitating learning.
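The analyses summarized above rest on demodulating a sound's amplitude envelope and examining modulation energy in slow bands centred near ~2 Hz and ~5 Hz. The following is a minimal illustrative sketch of that idea, not the published S-AMPH or PAD implementation: it rectifies and low-passes a synthetic amplitude-modulated signal to recover its envelope, then reads off the dominant modulation rate in assumed "stress" (1–3 Hz) and "syllable" (3–7 Hz) bands. The test signal, band edges, cutoff, and all other parameters are illustrative assumptions; NumPy is assumed.

```python
import numpy as np

fs = 1000                      # sample rate (Hz)
t = np.arange(0, 8, 1 / fs)    # 8 s of signal

# Synthetic "speech-like" signal: a 200 Hz carrier whose amplitude is
# modulated at ~2 Hz (a stress-like rate) and ~5 Hz (a syllable-like rate).
envelope_true = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
signal = envelope_true * np.sin(2 * np.pi * 200 * t)

# Step 1: crude amplitude envelope -- rectify, then FFT low-pass below 40 Hz.
spec = np.fft.rfft(np.abs(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
spec[freqs > 40] = 0
envelope = np.fft.irfft(spec, n=len(signal))

def band_peak(env, lo, hi):
    """Dominant modulation frequency (Hz) of env within [lo, hi]."""
    s = np.fft.rfft(env - env.mean())
    f = np.fft.rfftfreq(len(env), 1 / fs)
    mask = (f >= lo) & (f <= hi)
    return f[mask][np.argmax(np.abs(s[mask]))]

stress_rate = band_peak(envelope, 1, 3)    # expect ~2 Hz
syllable_rate = band_peak(envelope, 3, 7)  # expect ~5 Hz
print(stress_rate, syllable_rate)
```

The published S-AMPH model instead uses a filterbank of overlapping modulation-rate bands derived from principal components analysis of speech envelopes, and examines the phase relations between bands; the FFT masking here only illustrates the band-wise decomposition of a single envelope.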

Year: 2022    PMID: 36240225    PMCID: PMC9565671    DOI: 10.1371/journal.pone.0275631

Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.752


