Shinya Fujii, Hama Watanabe, Hiroki Oohashi, Masaya Hirashima, Daichi Nozaki, Gentaro Taga.
Abstract
Dancing and singing to music involve auditory-motor coordination and have been essential to human culture since ancient times. Although scholars have been trying to understand the evolutionary and developmental origins of music, early human developmental manifestations of auditory-motor interactions in music have not been fully investigated. Here we report limb movements and vocalizations in three- to four-month-old infants while they listened to music and while they were in silence. In the group analysis, we found no significant increase in the amount of movement or in the relative power spectrum density around the musical tempo in the music condition compared with the silent condition. Intriguingly, however, two infants demonstrated striking increases in rhythmic movement, via kicking or arm-waving, around the musical tempo while listening to music. Monte-Carlo statistics with phase-randomized surrogate data revealed that the limb movements of these individuals were significantly synchronized to the musical beat. Moreover, we found a clear increase in the formant variability of vocalizations in the group during music perception. These results suggest that infants at this age are already primed with their bodies to interact with music via limb movements and vocalizations.
Year: 2014 PMID: 24837135 PMCID: PMC4023986 DOI: 10.1371/journal.pone.0097680
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Figure 1. Spontaneous limb movements of infants while listening to “Everybody” by The Backstreet Boys (music condition, Video S3) and without any auditory stimulus (silent condition, Video S1).
(A) Typical limb trajectories during the music condition in one infant (ID1) in X, Y, and Z coordinates. (B) Mean square sum of right-leg velocities and (C) relative proportion of the power spectrum density (PSD) around the musical tempo for right-leg movements along the Y coordinate axis in ID1 (red), the other infants (grey), and the group mean excluding ID1 with standard deviation (SD) (black). (D) Right-foot position along the Y coordinate axis in ID1. He kicked more rhythmically during the music condition (red) than during the silent condition (blue). (E) Power spectrogram of the right-foot position along the Y coordinate axis in ID1. Relatively high PSD can be seen around the musical tempo (dashed line) in the music condition. (F) Mean synchronization index across moving sections (see Methods for details) in the music (red) and silent (blue) conditions. Error bars indicate standard errors (SE) across the moving sections. *p<0.01.
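The relative-PSD measure in panel (C), the fraction of movement power concentrated near the musical tempo, can be sketched as follows. This is a minimal illustration: the Welch estimator, the band half-width, and the synthetic kicking signal are assumptions for demonstration, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import welch

def relative_psd_around_tempo(position, fs, tempo_bpm, half_band_hz=0.3):
    """Fraction of spectral power within a narrow band around the musical tempo.

    `half_band_hz` is an illustrative choice, not taken from the paper.
    """
    beat_hz = tempo_bpm / 60.0  # e.g. 108.7 BPM -> ~1.81 Hz
    freqs, psd = welch(position, fs=fs, nperseg=min(len(position), 1024))
    band = (freqs >= beat_hz - half_band_hz) & (freqs <= beat_hz + half_band_hz)
    return psd[band].sum() / psd.sum()

# Hypothetical example: a noisy limb oscillation near the beat frequency
fs = 100.0                               # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
kick = np.sin(2 * np.pi * (108.7 / 60.0) * t) + 0.5 * rng.standard_normal(t.size)
ratio = relative_psd_around_tempo(kick, fs, 108.7)
```

A movement locked to the beat concentrates power in the band, pushing the ratio well above what broadband noise alone would give.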
Figure 2. Significant synchronization of ID1's right-leg movements during the music condition “Everybody” (108.7 BPM) (Video S3).
(A) Sound wave of the auditory stimulus (yellow) with the detected beat onsets (red vertical lines). (B) Observed (left) and phase-randomized (right) position data s_pos(t) along the Y coordinate axis when the infant moved continuously over a period of three seconds (defined as a moving section). (C) Instantaneous phase of the musical beat φ_music(t) calculated from the detected beat onsets. (D) Instantaneous phase of the motion φ_motion(t). (E) Relative phase φ_rel(t) between the motion and the musical beat. (F) Circular histograms of φ_rel(t). (G) Monte-Carlo statistics showed that the observed synchronization index (magenta line) lay above the 95% confidence interval of the surrogate synchronization indexes (blue lines) calculated from 10,000 phase-randomized position data sets: the observed movement was significantly synchronized to the musical beat.
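The pipeline in this caption (instantaneous phases, relative phase, synchronization index, phase-randomized surrogates) can be sketched roughly as below. The Hilbert-transform phase extraction, the surrogate count, and the synthetic position signal are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.signal import hilbert

def sync_index(phase_motion, phase_music):
    """Length of the mean resultant vector of the relative phase (0..1)."""
    rel = phase_motion - phase_music
    return np.abs(np.mean(np.exp(1j * rel)))

def motion_phase(x):
    """Instantaneous phase via the analytic (Hilbert) signal."""
    return np.angle(hilbert(x - x.mean()))

def phase_randomized(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.size)
    phases[0] = 0.0  # keep the mean (DC) component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

# Hypothetical moving section: motion phase-locked to the beat plus noise
fs, beat_hz = 100.0, 108.7 / 60.0        # assumed sampling rate and tempo
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
pos = np.sin(2 * np.pi * beat_hz * t + 0.4) + 0.3 * rng.standard_normal(t.size)

phi_music = np.angle(np.exp(1j * 2 * np.pi * beat_hz * t))  # wrapped beat phase
observed = sync_index(motion_phase(pos), phi_music)
surrogates = [
    sync_index(motion_phase(phase_randomized(pos, rng)), phi_music)
    for _ in range(200)                  # the paper used 10,000; fewer here for speed
]
threshold = np.quantile(surrogates, 0.95)
significant = observed > threshold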
Figure 3. Spontaneous vocalizations of infants during the music condition “Go Trippy” by WANICO feat. Jake Smith (red) and in the silent condition, where no auditory stimulus was present (blue).
Error bars indicate standard errors (SE) among the participants. (A) No significant difference was found in mean duration of vocalization per minute between the silent and music conditions (Wilcoxon signed-rank test, Z = 1.62, p = 0.11). (B) Typical time series of fundamental (F0, black lines) and formant frequencies (F1 and F2, cyan and magenta lines, respectively) within utterances. (C, D) Mean F0 and F1 were significantly higher in the music condition than in the silent condition (Z = 2.39, *p<0.05; Z = 2.06, *p<0.05, respectively). (E, F) There were no significant differences in mean F2 and SD of F0 (Z = 1.92, p = 0.06; Z = 1.16, p = 0.25, respectively). (G, H) SDs of F1 and F2 were significantly higher in the music condition than in the silent condition (Z = 3.43, **p<0.001; Z = 3.48, **p<0.001, respectively).
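The paired comparisons in this figure use the Wilcoxon signed-rank test. A minimal sketch with hypothetical per-infant values (not the study's data) shows how such a paired test is run:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired data: SD of the first formant (F1, in Hz) per infant
# in the silent vs. music condition. Values are invented for illustration,
# with an increase built in; they are not the study's measurements.
rng = np.random.default_rng(2)
sd_f1_silent = rng.normal(120.0, 20.0, size=20)
sd_f1_music = sd_f1_silent + rng.normal(25.0, 10.0, size=20)

stat, p = wilcoxon(sd_f1_silent, sd_f1_music)  # paired, nonparametric
```

The test ranks the per-infant differences without assuming normality, which suits small samples like a 20-infant group.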