| Literature DB >> 32244259 |
Enikő Ladányi, Valentina Persici, Anna Fiveash, Barbara Tillmann, Reyna L Gordon.
Abstract
Although a growing literature points to substantial variation in speech/language abilities related to individual differences in musical abilities, mainstream models of communication sciences and disorders have not yet incorporated these individual differences into childhood speech/language development. This article reviews three sources of evidence in a comprehensive body of research aligning with three main themes: (a) associations between musical rhythm and speech/language processing, (b) musical rhythm in children with developmental speech/language disorders and common comorbid attentional and motor disorders, and (c) individual differences in mechanisms underlying rhythm processing in infants and their relationship with later speech/language development. In light of converging evidence on associations between musical rhythm and speech/language processing, we propose the Atypical Rhythm Risk Hypothesis, which posits that individuals with atypical rhythm are at higher risk for developmental speech/language disorders. The hypothesis is framed within the larger epidemiological literature in which recent methodological advances allow for large-scale testing of shared underlying biology across clinically distinct disorders. A series of predictions for future work testing the Atypical Rhythm Risk Hypothesis are outlined. We suggest that if a significant body of evidence is found to support this hypothesis, we can envision new risk factor models that incorporate atypical rhythm to predict the risk of developing speech/language disorders. Given the high prevalence of speech/language disorders in the population and the negative long-term social and economic consequences of gaps in identifying at-risk children, these new lines of research could improve access to early identification and treatment. This article is categorized under: Linguistics > Language in Mind and Brain; Neuroscience > Development; Linguistics > Language Acquisition.
Keywords: developmental language disorder; dyslexia; rhythm; risk factor; specific language impairment
Year: 2020 PMID: 32244259 PMCID: PMC7415602 DOI: 10.1002/wcs.1528
Source DB: PubMed Journal: Wiley Interdiscip Rev Cogn Sci ISSN: 1939-5078
Figure 1. Shared underlying mechanisms for musical rhythm and speech/language processing. (a) Models of underlying mechanisms for musical rhythm and speech/language processing emphasize the role of fine‐grained auditory processing, oscillatory brain networks, and sensorimotor coupling (Fiveash et al., submitted). (b) Another line of literature emphasizes the shared role of processing of hierarchical structures in both musical rhythm and syntactic processing (e.g., Fitch & Martins, 2014; bottom figure is adapted from Heard & Lee, 2020)
Figure 2. Rhythm production ability and early literacy skills. Convergent evidence across data acquired with various methods supports associations between rhythm skills and language in preschool‐aged children (Reprinted with permission from Woodruff Carr, White‐Schwoch, Tierney, Strait, and Kraus (2014)). Children who performed well on a musical beat synchronization task (here called synchronizers, in red, shown on the rose plots at left to have better drumming accuracy) encoded speech more efficiently (top right) and showed significantly better phonological awareness than their non‐synchronizer peers (bottom right). Synchronizers also performed better on a sentence repetition task (it is important to note that sentence repetition/imitation tasks not only require auditory perception and short‐term memory but also reflect deeper access to the grammatical structure of language: see Klem et al., 2015)
Summary of current literature investigating rhythm in children with atypical speech–language development
| Disorder | Study | Age | Task | Evidence for atypical rhythm? |
|---|---|---|---|---|
| Dyslexia | Colling, Noble, and Goswami | 9–10 years | Beat perception; tapping task | Yes |
| Dyslexia | Cutini, Szucs, Mead, Huss, and Goswami | 12 years | Neural entrainment to amplitude‐modulated noise | Yes (2 Hz) |
| Dyslexia | Frey, François, Chobert, Besson, and Ziegler | 10 years | Neural processing of speech sounds in silence, noise, and envelope conditions | Yes |
| Dyslexia | Goswami et al. | 11 years | Beat detection in amplitude‐modulated sounds | Yes |
| Dyslexia | Goswami, Gerson, and Astruc | 7–13 years | Amplitude envelope onset (rise time) discrimination | Yes |
| Dyslexia | Goswami, Huss, Mead, Fosker, and Verney | 8–14 years | Beat perception | Yes |
| Dyslexia | Goswami et al. | 9 years | Syllable stress discrimination | Yes |
| Dyslexia | Goswami et al. | | Discrimination of amplitude rise time; temporal modulations of nursery rhymes | Yes (rise time); no (nursery rhymes), but impaired acoustic learning during the experiment from low‐pass filtered targets |
| Dyslexia | Hämäläinen, Rupp, Soltész, Szücs, and Goswami | 19–29 years | Amplitude‐modulated white noise | Yes, at 2 Hz |
| Dyslexia | Huss, Verney, Fosker, Mead, and Goswami | 8–13 years | Amplitude envelope rise time perception | Yes |
| Dyslexia | Lee, Sie, Chen, and Cheng | 9–12 years | Rhythmic imitation | Yes |
| Dyslexia | Leong and Goswami | <40 years (mean: 22 years) | Rhythmic detection to identify amplitude‐modulated nursery rhyme sentences | Yes |
| Dyslexia | Leong, Hämäläinen, Soltész, and Goswami | 17–41 years | Amplitude envelope onset (rise time) perception and syllable stress detection | Yes |
| Dyslexia | Lizarazu et al. | Children: 8–14 years; adults: 17–44 years | Auditory neural synchronization | Yes |
| Dyslexia | Molinaro, Lizarazu, Lallier, Bourguignon, and Carreiras | Children: 8–14 years; adults: 22–37 years | Neural synchronization to spoken sentences (MEG) | Yes |
| Dyslexia | Muneaux, Ziegler, Truc, Thomson, and Goswami | 11 years | Beat perception (slope) | Yes |
| Dyslexia | Overy | 6–7 years | Rhythm discrimination; tempo discrimination; meter reproduction | Yes, especially in meter reproduction |
| Dyslexia | Overy, Nicolson, Fawcett, and Clarke | 7–11 years | Tests of timing skills (rhythm copying, rhythm discrimination, song rhythm, tempo copying, tempo discrimination, song beat) | Yes |
| Dyslexia | Pasquini, Corriveau, and Goswami | 19–27 years | Rise time perception and temporal order judgment | Yes |
| Dyslexia | Persici et al. | 9–11 years | Tapping | Yes |
| Dyslexia | Power, Colling, Mead, Barnes, and Goswami | 12–14 years | Neural entrainment to speech syllables | Yes |
| Dyslexia | Soltész, Szűcs, Leong, White, and Goswami | Mean: 25.8 years | Neural entrainment to tones presented at 2 or 1.5 Hz | Yes |
| Dyslexia | Surányi et al. | 8–9 years | Amplitude envelope rise time discrimination | Yes |
| Dyslexia | Thomson, Fryer, Maltby, and Goswami | 18–31 years | Basic auditory processing tasks (rise time, duration, and intensity discrimination); tempo discrimination; tapping (unimanual and bimanual) | Yes (auditory processing); no (tempo discrimination); yes for tapping, but only in inter‐tap‐interval variability |
| Dyslexia | Thomson and Goswami | 10 years | Rhythmic discrimination; paced and unpaced finger tapping | No (discrimination); yes (tapping) |
| Dyslexia | Wang, Huss, Hämäläinen, and Goswami | 9–10 years | Basic auditory processing tasks (rise time, duration, and intensity discrimination) | Yes |
| Dyslexia | Zuk et al. | 18–36 years | Speech syllable discrimination | Yes |
| DLD | Bedoin et al. | 9–11 years | Rhythm discrimination | Yes |
| DLD | Corriveau and Goswami | 7–11 years | Paced and unpaced tapping | Yes, in the paced condition |
| DLD | Corriveau, Pasquini, and Goswami | 7–11 years | Amplitude envelope rise time and sound duration perception | Yes |
| DLD | Cumming, Wilson, Leong, Colling, and Goswami | 6–12 years | Beat detection; tapping; speech/music task | Yes, especially in tapping |
| DLD | Goswami et al. | 9 years | Discrimination of amplitude rise time; temporal modulations of nursery rhymes | Yes |
| DLD | Richards and Goswami | 8–12 years | Stress perception task | Yes |
| DLD | Richards and Goswami | 6–11 years | Stress pattern disruptions | Yes |
| DLD | Sabisch, Hahne, Glass, von Suchodoletz, and Friederici | 8–10 years | Syntactic processing with prosody disruptions | Yes |
| DLD | Sallat and Jentschke | 4–5 years | Rhythmic–melodic perception task | Yes |
| DLD | Vuolo, Goffman, and Zelaznik | 4–5 years | Tapping and bimanual clapping | Yes, but only in the bimanual clapping task |
| DLD | Weinert | 5–8 years | Rhythmic discrimination | Yes |
| DLD | Wells and Peppé | 8 years | Prosody perception | Yes |
| DLD | Zelaznik and Goffman | 6–8 years | Tapping and drawing to a metronome | Yes (but no for timing skill in the manual domain) |
| Stuttering | Chang, Chow, Wieland, and McAuley | 6–11 years | Auditory rhythm discrimination task | Yes |
| Stuttering | Falk, Müller, and Dalla Bella | 8–16 years | Finger tapping | Yes |
| Stuttering | Olander, Smith, and Zelaznik | 4–6 years | Metronome clapping | Yes |
| Stuttering | Toyomura, Fujii, and Kuriki | 18–55 years | Metronome‐timed speech | No (yes in the normal speech condition) |
| Stuttering | Wieland, McAuley, Dilley, and Chang | 6–11 years | Discrimination of simple and complex rhythms | Yes |
| DCD | Puyjarinet, Bégel, Lopez, Dellacherie, and Dalla Bella | Children: 6–12 years; adults: 19–50 years | Duration and beat perception; tapping | Yes |
| DCD | Rosenblum and Regev | 7–12 years | Metronome synchronization | Yes |
| DCD | Trainor, Chang, Cairney, and Li | 6–7 years | Auditory duration and rhythm discrimination; oddball ERP paradigm | Yes |
| ADHD | Carrer | 6–14 years | Rhythmic discrimination | Yes |
| ADHD | Hove, Gravel, Spencer, and Valera | 20 years | Paced and unpaced finger tapping | Yes (in the standard task, not in the one with time shifts) |
| ADHD | Puyjarinet et al. | Children: 6–12 years; adults: 19–50 years | Duration and beat perception; tapping | Yes |
| ADHD | Valera et al. | 10 years | Paced and unpaced tapping | Yes (greater within‐subject variability) |
| ADHD | Zelaznik et al. | 9 years | Spacebar press following a metronome | Yes |
Abbreviations: ADHD, attention deficit hyperactivity disorder; DCD, developmental coordination disorder; DLD, developmental language disorder.
Figure 3. Rhythm perception and production difficulties in children with stuttering. (a) Children who stutter show impaired rhythm perception performance compared to TD children on a task requiring discrimination of simple and complex musical rhythmic sequences (Wieland et al., 2015). (b) Children and adolescents who stutter are less accurate than TD peers on synchronization tests, both tapping to a metronome at certain rates and tapping to the beat in music (Falk et al., 2015). Both figures are reprinted with permission from the original papers
Figure 4. Pleiotropy scenarios for shared versus separate genetic architecture of rhythm and speech/language. The Atypical Rhythm Risk Hypothesis predicts that associations between rhythm and speech/language are driven in part by genetic pleiotropy, in one of three forms: (a) direct pleiotropy, in which a common set of causal genes affects both phenotypes directly; (b) mediated pleiotropy, in which genes directly affect rhythm phenotypes, and those phenotypes in turn affect individual differences in the acquisition of speech/language during development; or (c) mediated pleiotropy in the opposite direction, in which genes directly affect speech/language phenotypes, and those phenotypes affect individual differences in rhythm development. These models should be tested against the null hypothesis of separate genetic architecture. Moreover, a key to understanding the dynamics between genes, brain, and behavior will be to test mediating neural endophenotypes linked to (d) shared or (e) separate genetic architecture