Prisca Hsu, Emily A. Ready, Jessica A. Grahn.
Abstract
Humans naturally perceive and move to a musical beat, entraining body movements to auditory rhythms through clapping, tapping, and dancing. Yet the accuracy of this seemingly effortless behavior varies widely across individuals. Beat perception and production abilities can be improved by experience, such as music and dance training, and impaired by progressive neurological changes, such as in Parkinson's disease. In this study, we assessed the effects of music and dance experience on beat processing in young and older adults, as well as individuals with early-stage Parkinson's disease. We used the Beat Alignment Test (BAT) to assess beat perception and production in a convenience sample of 458 participants (278 healthy young adults, 139 healthy older adults, and 41 people with early-stage Parkinson's disease), with varying levels of music and dance training. In general, we found that participants with over three years of music training had more accurate beat perception than those with less training (p < .001). Interestingly, Parkinson's disease patients with music training had beat production abilities comparable to healthy adults while Parkinson's disease patients with minimal to no music training performed significantly worse. No effects were found in healthy adults for dance training, and too few Parkinson's disease patients had dance training to reliably assess its effects. The finding that musically trained Parkinson's disease patients performed similarly to healthy adults during a beat production task, while untrained patients did not, suggests music training may preserve certain rhythmic motor timing abilities in early-stage Parkinson's disease.
Year: 2022 PMID: 35259161 PMCID: PMC8903281 DOI: 10.1371/journal.pone.0264587
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Participant demographics.
| Group | N | Mean age, years (SD) | Music training: 0–2 y | Music training: 3+ y | Dance training: 0–2 y | Dance training: 3+ y |
|---|---|---|---|---|---|---|
| Young adults | 278 | 20.41 (3.01) | 111 | 167 | 208 | 70 |
| Older adults | 139 | 64.63 (9.27) | 71 | 68 | 111 | 28 |
| Parkinson’s disease patients | 41 | 68.28 (7.73) | 25 | 16 | 40 | 1 |
Fig 1Music training effects on beat perception and production.
Performance broken down by group (young, older, Parkinson’s disease) and music training (0–2 years, 3+ years) for beat perception (A), beat production phase matching (B), beat production tempo matching (C), and beat production tapping variability (D). For beat perception, young adults and Parkinson’s disease patients with more extensive music training were significantly better than those without. For asynchrony (phase matching), Parkinson’s disease patients with minimal music training were significantly worse than all other groups. No significant differences were present for coefficient of deviation (tempo matching). For coefficient of variation (tapping variability), older adults and Parkinson’s disease patients were more variable than young adults, and participants with more extensive music training (regardless of group) were less variable than those with little training. Error bars indicate the standard error of the mean. ** = p < .01, *** = p < .001.
Fig 2Dance training effects on beat perception and production.
Performance broken down by group (young, older, Parkinson’s disease) and dance training (0–2 years, 3+ years) for beat perception (A), beat production phase matching (B), beat production tempo matching (C), and beat production tapping variability (D). No significant differences were present for beat perception, phase matching (asynchrony), or tempo matching (coefficient of deviation). Tapping variability did, however, differ between groups. Error bars indicate the standard error of the mean.
Comparison of Bayes models: Music & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.00 | 0.00 | 1.00 | |
| music | 0.20 | 0.89 | 33.23 | 685845 | 0.00 |
| music + group + music*group | 0.20 | 0.06 | 0.26 | 46220 | 1.85 |
| music + group | 0.20 | 0.05 | 0.20 | 36339 | 1.48 |
| group | 0.20 | 0.00 | 0.00 | 0.12 | 0.02 |
Note: P(M) = prior model probability; P(M|data) = posterior model probability; BF_M = change from prior to posterior model odds; BF10 = Bayes factor in favor of each model compared with the null model. Music = music training level; group = young/older/Parkinson’s group.
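The columns of these model-comparison tables are related by standard Bayesian bookkeeping: with equal priors, each model's posterior probability P(M|data) is its BF10 normalized over all models, and BF_M is the ratio of posterior to prior model odds. A minimal sketch, using the BF10 values from the first "Music & group" table above, recovers the tabulated P(M|data) and BF_M:

```python
# Sketch: recompute P(M|data) and BF_M from the BF10 column of the first
# "Music & group" table, assuming equal prior probability P(M) = 0.20.
prior = 0.20
bf10 = {
    "null": 1.00,
    "music": 685845,
    "music + group + music*group": 46220,
    "music + group": 36339,
    "group": 0.12,
}

# With equal priors, P(M|data) is proportional to BF10: normalize the column.
total = sum(bf10.values())
posterior = {m: bf / total for m, bf in bf10.items()}

# BF_M = (posterior odds) / (prior odds) for each model.
prior_odds = prior / (1 - prior)
bf_m = {m: (p / (1 - p)) / prior_odds for m, p in posterior.items()}

print(round(posterior["music"], 2))  # 0.89, matching P(M|data) in the table
print(round(bf_m["music"], 2))       # 33.23, matching BF_M in the table
```

The same arithmetic reproduces the other tables' columns (up to the rounding already present in the reported BF10 values), which is a quick sanity check when transcribing this kind of output.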
Comparison of Bayes models: Dance & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.57 | 0.57 | 1.00 | |
| dance | 0.20 | 0.19 | 0.19 | 0.33 | 0.00 |
| group | 0.20 | 0.17 | 0.17 | 0.29 | 0.00 |
| dance + group | 0.20 | 0.06 | 0.06 | 0.10 | 1.08 |
| dance + group + dance*group | 0.20 | 0.01 | 0.01 | 0.02 | 2.54 |
Note: P(M) = prior model probability; P(M|data) = posterior model probability; BF_M = change from prior to posterior model odds; BF10 = Bayes factor in favor of each model compared with the null model. Dance = dance training level; group = young/older group.
Comparison of Bayes models: Music & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.40 | 2.72 | 1.00 | |
| music | 0.20 | 0.25 | 1.30 | 0.61 | 0.03 |
| music + group + music*group | 0.20 | 0.15 | 0.73 | 0.38 | 0.00 |
| music + group | 0.20 | 0.12 | 0.55 | 0.30 | 1.80 |
| group | 0.20 | 0.08 | 0.33 | 0.19 | 8.34 |
Comparison of Bayes models: Dance & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.76 | 12.86 | 1.00 | |
| dance | 0.20 | 0.13 | 0.59 | 0.17 | 0.00 |
| group | 0.20 | 0.09 | 0.40 | 0.12 | 0.00 |
| dance + group | 0.20 | 0.02 | 0.06 | 0.02 | 2.15 |
| dance + group + dance*group | 0.20 | 0.00 | 0.02 | 0.01 | 5.91 |
Comparison of Bayes models: Music & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.60 | 5.97 | 1.00 | |
| music | 0.20 | 0.30 | 1.72 | 0.50 | 0.00 |
| music + group + music*group | 0.20 | 0.07 | 0.31 | 0.12 | 0.02 |
| music + group | 0.20 | 0.03 | 0.11 | 0.04 | 2.69 |
| group | 0.20 | 0.00 | 0.01 | 0.00 | 2.09 |
Comparison of Bayes models: Dance & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.69 | 8.73 | 1.00 | |
| dance | 0.20 | 0.15 | 0.71 | 0.22 | 0.00 |
| group | 0.20 | 0.12 | 0.56 | 0.18 | 0.00 |
| dance + group | 0.20 | 0.03 | 0.11 | 0.04 | 1.12 |
| dance + group + dance*group | 0.20 | 0.01 | 0.05 | 0.02 | 1.44 |
Comparison of Bayes models: Music & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.00 | 0.00 | 1.00 | |
| music + group | 0.20 | 0.79 | 14.90 | 5.30e+9 | 7.12 |
| music + group + music*group | 0.20 | 0.17 | 0.84 | 1.19e+9 | 1.75 |
| music | 0.20 | 0.04 | 0.16 | 2.65e+8 | 0.00 |
| group | 0.20 | 0.00 | 0.00 | 301.27 | 0.02 |
Comparison of Bayes models: Dance & group.
| Models | P(M) | P(M|data) | BF_M | BF10 | error % |
|---|---|---|---|---|---|
| Null model | 0.20 | 0.05 | 0.20 | 1.00 | |
| group | 0.20 | 0.79 | 15.02 | 16.78 | 0.00 |
| dance | 0.20 | 0.12 | 0.56 | 2.62 | 1.68 |
| dance + group | 0.20 | 0.03 | 0.13 | 0.68 | 1.37 |
| dance + group + dance*group | 0.20 | 0.01 | 0.03 | 0.16 | 0.00 |