Tyler K Perrachione, Evelina G Fedorenko, Louis Vinke, Edward Gibson, Laura C Dilley.
Abstract
Language and music epitomize the complex representational and computational capacities of the human mind. The two are strikingly similar in their structural and expressive features, and a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct, either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, which conveys pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring the consistency of individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
Year: 2013 PMID: 23977386 PMCID: PMC3744486 DOI: 10.1371/journal.pone.0073372
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Musical and linguistic background of participants (by self-report).
| Measure | Range | Median | Mean | SD | Yes | N |
|---|---|---|---|---|---|---|
| Ever played an instrument |  |  |  |  | 15 | 18 |
| -- Number of instruments played | 0–4 | 2 | 1.56 | 1.15 |  | 18 |
| -- Years played | 0–17 | 5 | 5.50 | 5.39 |  | 18 |
| -- Proficiency | 0–10 | 6 | 5.00 | 3.57 |  | 18 |
| Ever sung in a choir |  |  |  |  | 12 | 18 |
| -- Years in choir | 0–14 | 2 | 3.83 | 4.60 |  | 18 |
| Ever had formal music lessons |  |  |  |  | 14 | 18 |
| -- Years of lessons | 1–10 | 5 | 4.79 | 2.97 |  | 14 |
| -- Years since last lesson | 0–8 | 3 | 4.36 | 2.98 |  | 14 |
| -- Years since last practice | 0–8 | 1 | 2.39 | 2.97 |  | 14 |
| Ever had formal training in music theory |  |  |  |  | 6 | 18 |
| -- Years of music theory training | 1–11 | 4.5 | 5.33 | 3.39 |  | 6 |
| Formal degree in music |  |  |  |  | 1 | 18 |
| Hours of music listening daily | 0.75–18 | 3 | 4.43 | 4.43 |  | 18 |
| Ever studied a foreign language |  |  |  |  | 15 | 18 |
| -- Number of foreign languages studied | 1–2 | 1 | 1.33 | 0.49 |  | 15 |
| -- Age foreign language study began | 6–16 | 14 | 13.07 | 2.52 |  | 15 |
| -- Speaking proficiency | 1–8 | 3.5 | 4.00 | 2.25 |  | 15 |
| -- Understanding proficiency | 1–9 | 5 | 4.79 | 2.55 |  | 15 |
| -- Reading proficiency | 1–10 | 5 | 4.71 | 2.61 |  | 15 |
| -- Writing proficiency | 0–10 | 5 | 4.20 | 3.00 |  | 15 |
Note: proficiency ratings refer to the most proficient musical instrument or foreign language, on a scale from 0 (least proficient) to 10 (most proficient).
Figure 1. Example psychophysical stimuli.
(A) At left, a waveform and spectrogram illustrate an example template linguistic stimulus with overlaid pitch contour (orange) and phonemic alignment. Plots at right illustrate the four different types of linguistic pitch contours (black traces) with deviants of ±100 (blue), 200 (green), and 300 (red) cents. (B) At left, a waveform illustrates an example template musical stimulus with overlaid pitch contour (orange), as well as the notation of the musical stimuli. Plots at right illustrate the four different types of musical pitch contours (black traces), analogous to those in the Language condition, with deviants of ±100 (blue), 200 (green), and 300 (red) cents. (C) These plots show the relative frequencies of the template (black traces) and deviants of ±10 (blue), 20 (green), and 30 (red) cents, each shown within the temporal configuration of a single trial. (D) These plots show the relative rates of the template click train (black lines) and rate deviants of ±200 (blue), 400 (green), and 600 (red) cents; note that only the first 150 ms of the full 1 s stimuli are shown. (E) Visual spatial-frequency stimuli ("Gabor patches"), with the template (outlined) and example deviants of ±200, 400, and 600 cents.
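Deviant sizes in every condition are expressed in cents, a logarithmic unit of relative pitch in which 100 cents equal one equal-tempered semitone and 1200 cents equal one octave. A minimal sketch of the cents-to-frequency relationship (the 220 Hz template frequency below is an illustrative value, not taken from the stimuli):

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio corresponding to an interval in cents
    (1200 cents = one octave, i.e., a doubling of frequency)."""
    return 2.0 ** (cents / 1200.0)

def apply_deviant(f0_hz: float, cents: float) -> float:
    """Frequency of a deviant shifted from f0_hz by `cents` cents."""
    return f0_hz * cents_to_ratio(cents)

# Illustrative values only: a +100-cent deviant (one semitone)
# above a hypothetical 220 Hz template lands near 233.08 Hz.
print(round(apply_deviant(220.0, 100), 2))  # → 233.08
```

Because the scale is logarithmic, equal cent steps correspond to equal frequency *ratios*, which is why deviant sizes are comparable across templates with different absolute pitches.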
Task performance by condition.
| Condition | Accuracy | SD | A′ | SD | 75% threshold (cents) | SD |
|---|---|---|---|---|---|---|
| Language | 0.77 | ± 0.09 | 0.87 | ± 0.08 | 151 | ± 74 |
| Music | 0.83 | ± 0.09 | 0.90 | ± 0.08 | 129 | ± 92 |
| Tones | 0.65 | ± 0.09 | 0.75 | ± 0.10 | 26 | ± 5 |
| Clicks | 0.75 | ± 0.06 | 0.84 | ± 0.07 | 313 | ± 137 |
| Gabors | 0.79 | ± 0.08 | 0.88 | ± 0.07 | 296 | ± 143 |
Figure 2. Discrimination contours across stimulus conditions.
Mean percent "different" responses are shown for each condition (note differences in abscissa values). Shaded regions show the standard deviation of the sample. Dotted horizontal line: 75% discrimination threshold. Ordinate: frequency of “different” responses; Abscissa: cents different from the template.
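The 75% threshold marked by the dotted line can be read off a discrimination contour between tested deviant sizes. A minimal sketch, assuming simple linear interpolation between adjacent points (the contour values below are invented for illustration, not taken from the data):

```python
def interp_threshold(deviants, p_different, criterion=0.75):
    """Deviant size (in cents) at which the proportion of 'different'
    responses first crosses `criterion`, by linear interpolation.
    Returns None if the contour never reaches the criterion."""
    pts = sorted(zip(deviants, p_different))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 < criterion <= y1:
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    return None

# Hypothetical contour: 75% is crossed between the 100- and 200-cent deviants.
print(round(interp_threshold([0, 100, 200, 300], [0.05, 0.55, 0.85, 0.95]), 1))  # → 166.7
```

Fitting a sigmoid psychometric function is the more usual approach in psychophysics; linear interpolation is shown here only because it is the simplest way to recover a criterion crossing from tabulated points.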
Pairwise correlations.
|  |  | r (Accuracy) | p | r (A′) | p | r (Threshold) | p |
|---|---|---|---|---|---|---|---|
| Language | Music | 0.927 | 0.000 | 0.928 | 0.000 | 0.905 | 0.000 |
|  | Tones | 0.780 | 0.000 | 0.732 | 0.001 | 0.601 | 0.008 |
|  | Clicks | 0.749 | 0.000 | 0.713 | 0.001 | 0.661 | 0.003 |
|  | Gabors | 0.389 | 0.111 | 0.576 | 0.012 | 0.562 | 0.015 |
| Music | Tones | 0.671 | 0.002 | 0.642 | 0.004 | 0.522 | 0.026 |
|  | Clicks | 0.626 | 0.005 | 0.558 | 0.016 | 0.684 | 0.002 |
|  | Gabors | 0.374 | 0.126 | 0.506 | 0.032 | 0.550 | 0.018 |
| Tones | Clicks | 0.752 | 0.000 | 0.687 | 0.002 | 0.330 | 0.180 |
|  | Gabors | 0.384 | 0.115 | 0.559 | 0.016 | 0.540 | 0.021 |
| Clicks | Gabors | 0.425 | 0.079 | 0.466 | 0.051 | 0.543 | 0.020 |
Significance assessed against a Bonferroni-corrected α = 0.00167.
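The corrected α follows from dividing the familywise α = 0.05 by the number of tests: 10 condition pairs × 3 performance measures = 30 correlations, matching the table's layout. A minimal sketch of that arithmetic:

```python
from math import comb

n_conditions = 5            # Language, Music, Tones, Clicks, Gabors
n_measures = 3              # one correlation table column-pair per measure
n_tests = comb(n_conditions, 2) * n_measures  # 10 pairs x 3 = 30
alpha_corrected = 0.05 / n_tests
print(n_tests, round(alpha_corrected, 5))     # → 30 0.00167
```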
Comparison of linear models of language performance.
| Model | R² | p |  | Tones | Clicks | Gabors | Music |
|---|---|---|---|---|---|---|---|
| Accuracy |  |  |  |  |  |  |  |
| Language ~ Tones + Clicks + Gabors | 0.671 | 0.001 | β = | 0.495 | 0.505 | 0.054 | − |
|  |  |  | p = | 0.054 | 0.155 | 0.786 | − |
| Language ~ Tones + Clicks + Gabors + Music | 0.919 | 6×10⁻⁷ | β = | 0.181 | 0.261 | −0.023 | 0.655 |
|  |  |  | p = | 0.191 | 0.165 | 0.822 | 3×10⁻⁵ |
| A′ |  |  |  |  |  |  |  |
| Language ~ Tones + Clicks + Gabors | 0.646 | 0.002 | β = | 0.302 | 0.435 | 0.229 | − |
|  |  |  | p = | 0.139 | 0.119 | 0.323 | − |
| Language ~ Tones + Clicks + Gabors + Music | 0.923 | 5×10⁻⁷ | β = | 0.065 | 0.273 | 0.071 | 0.696 |
|  |  |  | p = | 0.525 | 0.055 | 0.533 | 2×10⁻⁵ |
| Threshold |  |  |  |  |  |  |  |
| Language ~ Tones + Clicks + Gabors | 0.606 | 0.004 | β = | 0.636 | 0.520 | 0.090 | − |
|  |  |  | p = | 0.068 | 0.030 | 0.706 | − |
| Language ~ Tones + Clicks + Gabors + Music | 0.846 | 4×10⁻⁵ | β = | 0.287 | 0.092 | 0.009 | 0.599 |
|  |  |  | p = | 0.220 | 0.595 | 0.956 | 6×10⁻⁴ |
Comparison of linear models of music performance.
| Model | R² | p |  | Tones | Clicks | Gabors | Language |
|---|---|---|---|---|---|---|---|
| Accuracy |  |  |  |  |  |  |  |
| Music ~ Tones + Clicks + Gabors | 0.492 | 0.021 | β = | 0.480 | 0.372 | 0.118 | − |
|  |  |  | p = | 0.146 | 0.417 | 0.655 | − |
| Music ~ Tones + Clicks + Gabors + Language | 0.874 | 1×10⁻⁵ | β = | −0.090 | −0.208 | 0.056 | 1.149 |
|  |  |  | p = | 0.635 | 0.415 | 0.682 | 3×10⁻⁵ |
| A′ |  |  |  |  |  |  |  |
| Music ~ Tones + Clicks + Gabors | 0.463 | 0.030 | β = | 0.340 | 0.233 | 0.227 | − |
|  |  |  | p = | 0.185 | 0.495 | 0.437 | − |
| Music ~ Tones + Clicks + Gabors + Language | 0.883 | 7×10⁻⁶ | β = | 0.001 | −0.255 | −0.030 | 1.123 |
|  |  |  | p = | 0.992 | 0.172 | 0.837 | 2×10⁻⁵ |
| Threshold |  |  |  |  |  |  |  |
| Music ~ Tones + Clicks + Gabors | 0.572 | 0.007 | β = | 0.582 | 0.714 | 0.135 | − |
|  |  |  | p = | 0.185 | 0.023 | 0.662 | − |
| Music ~ Tones + Clicks + Gabors + Language | 0.832 | 6×10⁻⁵ | β = | −0.063 | 0.186 | 0.044 | 1.015 |
|  |  |  | p = | 0.841 | 0.406 | 0.826 | 6×10⁻⁴ |
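The added explanatory value of the Music (or Language) predictor in the fuller models above can be quantified with a nested-model F test on the change in R². A minimal sketch, under the assumptions that the tabled values are unadjusted R² and that n = 18 participants (the N reported in the background table); the function is the standard nested-model F, but the interpretation of the example numbers is an assumption:

```python
def nested_f(r2_reduced, r2_full, n, k_reduced, k_full):
    """F statistic comparing a full regression model against a nested
    reduced model; k_* counts predictors, excluding the intercept."""
    num = (r2_full - r2_reduced) / (k_full - k_reduced)
    den = (1.0 - r2_full) / (n - k_full - 1)
    return num / den

# Accuracy models of language performance: adding Music raises
# R^2 from 0.671 (3 predictors) to 0.919 (4 predictors).
print(round(nested_f(0.671, 0.919, n=18, k_reduced=3, k_full=4), 1))  # → 39.8
```

A large F here indicates that the within-domain predictor (Music for language models, Language for music models) accounts for substantial variance beyond the three control conditions, which is the pattern the model comparison tables report.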