Léo Varnet, Tianyun Wang, Chloe Peter, Fanny Meunier, Michel Hoen.
Abstract
It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique for investigating the specific listening strategies involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we applied this technique to 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the onset of the first formant and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
Year: 2015 PMID: 26399909 PMCID: PMC4585866 DOI: 10.1038/srep14489
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
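The Auditory Classification Image method described in the abstract fits a penalized GLM to trial-by-trial noise fields; the reverse-correlation intuition behind it can be sketched with a simulated observer (a minimal illustration, not the authors' estimation procedure — the array sizes and the single-cue observer are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: each trial presents a noise "spectrogram" (freq x time).
n_trials, n_freq, n_time = 2000, 32, 40
noise = rng.normal(size=(n_trials, n_freq, n_time))

# Hypothetical observer: listens to a single time-frequency cell (the "cue")
# and answers 'da' whenever the noise there is positive.
cue_f, cue_t = 10, 20
responses = (noise[:, cue_f, cue_t] > 0).astype(int)

# Classification image: difference of the mean noise fields per response.
aci = noise[responses == 1].mean(axis=0) - noise[responses == 0].mean(axis=0)

# The cue cell carries the largest absolute weight.
peak = np.unravel_index(np.abs(aci).argmax(), aci.shape)
print(peak)  # → (10, 20)
```

In the actual method the weights are estimated with a smoothness prior and thresholded for significance, but the interpretation is the same: large weights mark the time-frequency regions that drive the listener's responses.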
Summary of the characteristics of the two groups.
| Variable | Musicians | Non-musicians | t-test |
|---|---|---|---|
| Age (years) | 23 (±2.89 S.D.) | 22.68 (±4.39 S.D.) | p = 0.78 |
| Gender (M/F) | 9/5 | 6/13 | |
| Handedness | 64.21 (±53.99 S.D.) | 73.95 (±57.72 S.D.) | p = 0.61 |
| ANT | | | |
| Alerting effect | 29.95 (±22.68 S.D.) | 32.37 (±21.97 S.D.) | p = 0.78 |
| Orienting effect | 46.05 (±14.98 S.D.) | 40.21 (±23.25 S.D.) | p = 0.36 |
| Conflict effect | 131.74 (±49.13 S.D.) | 131.16 (±39.95 S.D.) | p = 0.97 |
| Main experiment | | | |
| Score (%) | 79.29 (±0.35 S.D.) | 78.83 (±0.42 S.D.) | p = 0.0008* |
| SNR (dB) | −13.37 (±1.22 S.D.) | −11.91 (±1.04 S.D.) | p = 0.0004* |
| Reaction Time (s) | 1.37 (±0.19 S.D.) | 1.28 (±0.13 S.D.) | p = 0.11 |
| Sensitivity d’ | 1.68 (±0.05 S.D.) | 1.64 (±0.04 S.D.) | p = 0.0036* |
| Decision criterion | 0.638 (±0.128 S.D.) | 0.639 (±0.125 S.D.) | p = 0.97 |
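The sensitivity d′ and decision criterion reported above are standard signal-detection indices derived from hit and false-alarm rates. A minimal sketch (the example rates are illustrative, not the study's data):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def dprime_criterion(hit_rate, fa_rate):
    """Signal-detection indices: sensitivity d' and decision criterion c."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative values: 85% hits, 25% false alarms.
d, c = dprime_criterion(0.85, 0.25)
print(round(d, 2), round(c, 2))  # → 1.71 -0.18
```

A positive criterion, as in the table, indicates a conservative bias towards one response alternative; d′ isolates discriminability from that bias.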
Details of all participants’ musical experience.
| Musician | Age onset (year) | Years of practice | Instrument | Absolute pitch test (/20) |
|---|---|---|---|---|
| #1 | 13 | 7 | Double bass; guitar | 1 |
| #2 | 6 | 20 | Trombone | 17 |
| #3 | 6 | 13 | Piano | 20 |
| #4 | 4 | 17 | Violoncello | 19 |
| #5 | 6 | 15 | Violin | 8 |
| #6 | 5 | 14 | Accordion | 17 |
| #7 | 5 | 22 | Violin; piano | 18 |
| #8 | 7 | 14 | Violin | 10 |
| #9 | 5 | 18 | Double bass | 20 |
| #10 | 6 | 16 | Violin | 6 |
| #11 | 5 | 22 | Piano | 20 |
| #12 | 7 | 13 | Piano | 13 |
| #13 | 5 | 14 | Flute; bassoon | 2 |
| #14 | 3.5 | 18 | Opera singing | 5 |
| #15 | 5 | 21 | Violin; viola | 15 |
| #17 | 7 | 10 | Guitar | 19 |
| #18 | 7 | 13 | Guitar | 2 |
| #19 | 7 | 17 | Double bass | 14 |
| #20 | 5 | 17 | Viola da gamba; bowed viol | 8 |
Figure 1. Cochleagrams of the four stimuli involved in the experiment.
Parameters for spectral and temporal resolution are the same as those used for the derivation of ACIs (see details in the text).
Figure 2. Evolution of SNR (a) and response time (b) over the course of the experiment, averaged across participants (the shaded region shows the s.e.m.). Each data point corresponds to a mean over 500 trials.
Figure 3. (a) ACIs for the two groups of participants (N = 19); non-significant weights (FDR > 0.01) are set to zero. (b) Mean ACI over all 38 participants; only weight sets (min. 7 adjacent weights with p < 10⁻¹⁰) are shown. (c) Cluster-based nonparametric test between the ACIs of the non-musician and musician groups (N = 19).
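The group comparison in Figure 3c uses a cluster-based nonparametric test. A generic max-cluster-mass permutation test on 1-D maps conveys the idea (a sketch under assumed settings — the study's exact statistic, threshold, and 2-D time-frequency clustering are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def tstat(x, y):
    """Pooled two-sample t-statistic at each sample of two groups of maps."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(0, ddof=1) + (ny - 1) * y.var(0, ddof=1)) / (nx + ny - 2)
    return (x.mean(0) - y.mean(0)) / np.sqrt(sp2 * (1 / nx + 1 / ny))

def cluster_masses(x, y, t_thresh=2.0):
    """Sum of |t| over each contiguous run of supra-threshold samples."""
    masses, current = [], 0.0
    for tv in tstat(x, y):
        if abs(tv) > t_thresh:
            current += abs(tv)
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses

def max_mass_null(x, y, n_perm=500, t_thresh=2.0):
    """Null distribution of the largest cluster mass under label shuffling."""
    data = np.vstack([x, y])
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(data))
        m = cluster_masses(data[idx[:len(x)]], data[idx[len(x):]], t_thresh)
        null[i] = max(m, default=0.0)
    return null

# Toy data: 19 "maps" per group, group b elevated on samples 40-49.
a = rng.normal(size=(19, 100))
b = rng.normal(size=(19, 100))
b[:, 40:50] += 1.5
null = max_mass_null(a, b)
p_values = [(null >= m).mean() for m in cluster_masses(a, b)]
print(min(p_values) < 0.05)  # the injected cluster comes out significant
```

Comparing whole-cluster masses against a shuffled-label null controls the family-wise error rate across the many time-frequency points without assuming independence between them.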
Summary of the characteristics of all sets of weights, sorted by bias and latency.
| Set | Size (pxls) | Centroid t (ms) | Centroid f (Hz) | Extent t (ms) | Extent f (Hz) | Correspondence with formants | Bias towards | Set weight, NM (mean) | NM (SD) | M (mean) | M (SD) | t-test (M vs. NM) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| #1 | 41 | 317.9 | 215.8 | 51.04 | 256.8 | offset F1, 1st syllable | ‘da’ | 0.0173 | 0.0097 | 0.0158 | 0.0065 | p = 0.58 |
| #2 | 25 | 340.8 | 1913 | 43.75 | 429.5 | onset F2/F3, 2nd syllable | ‘da’ | 0.0070 | 0.0030 | 0.0075 | 0.0026 | p = 0.60 |
| #3 | 34 | 406 | 1383 | 65.63 | 425.4 | onset F2, 2nd syllable | ‘da’ | 0.0166 | 0.0047 | 0.0160 | 0.0048 | p = 0.68 |
| #4 | 42 | 405 | 658.4 | 58.33 | 300.6 | onset F1, 2nd syllable | ‘da’ | 0.0158 | 0.0060 | 0.0171 | 0.0044 | p = 0.45 |
| #5 | 19 | 443.5 | 2548 | 80.21 | 137.7 | onset F3, 2nd syllable | ‘da’ | 0.0066 | 0.0018 | 0.0052 | 0.0022 | p = 0.051 |
| #6 | 34 | 521.6 | 697.1 | 36.46 | 489.7 | offset F1, 2nd syllable | ‘da’ | 0.0075 | 0.0050 | 0.0090 | 0.0025 | p = 0.27 |
| #7 | 18 | 166.1 | 876.4 | 51.04 | 133.5 | offset F1, 1st syllable | ‘ga’ | −0.0067 | 0.0036 | −0.0060 | 0.0030 | p = 0.50 |
| #8 | 17 | 173.9 | 1297 | 36.46 | 244.9 | offset F1, 2nd syllable | ‘ga’ | −0.0040 | 0.0017 | −0.0042 | 0.0020 | p = 0.63 |
| #9 | 10 | 350 | 1347 | 21.88 | 166.4 | onset F2, 2nd syllable | ‘ga’ | −0.0029 | 0.0010 | −0.0023 | 0.0017 | p = 0.17 |
| #10 | 32 | 404.9 | 301.3 | 51.04 | 209.2 | onset F1, 2nd syllable | ‘ga’ | −0.0113 | 0.0066 | −0.0180 | 0.0043 | |
| #11 | 60 | 413.9 | 1960 | 72.92 | 645.6 | onset F2/F3, 2nd syllable | ‘ga’ | −0.0389 | 0.0076 | −0.0461 | 0.0061 | |
Figure 4. Cross-prediction deviances for all participants.
Auto-predictions are represented along the diagonal of the matrix. Participants in each group are sorted by auto-prediction values.