Bastien Intartaglia, Travis White-Schwoch, Nina Kraus, Daniele Schön.
Abstract
Growing evidence shows that music and language experience affect the neural processing of speech sounds throughout the auditory system. Recent work has mainly focused on the benefits of musical practice for processing a native language or a tonal foreign language, both of which rely on pitch processing. The aim of the present study was to take this research a step further by investigating the effect of music training on the processing of English sounds by foreign listeners. We recorded subcortical electrophysiological responses to an English syllable in three groups of participants: native speakers, non-native nonmusicians, and non-native musicians. Native speakers showed enhanced neural processing of the formant frequencies of speech compared to non-native nonmusicians, suggesting that the automatic encoding of these relevant speech cues is sensitive to language experience. Most strikingly, neural responses to the formant frequencies in non-native musicians did not differ from those of native speakers, suggesting that musical training may compensate for the lack of language experience by strengthening the neural encoding of important acoustic information. Language and music experience thus seem to induce a selective sensory gain along acoustic dimensions that are functionally relevant: here, formant frequencies that are crucial for phoneme discrimination.
Year: 2017 PMID: 28974695 PMCID: PMC5626754 DOI: 10.1038/s41598-017-12575-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Global spectral subcortical representation of the English syllable [thae], averaged across consonant and vowel, for different frequency bands (see Material and methods for more details). Non-native nonmusicians (black), non-native musicians (red) and native nonmusicians (blue). *p < 0.05; error bars represent ± 1 standard error.
Figure 2. Spectral representations. Top: Fast Fourier transform of the neural response to the consonant (left panel) and the vowel (right panel) for non-native nonmusicians (black), non-native musicians (red) and native nonmusicians (blue). Bottom: bar graphs corresponding to the fundamental frequency (F0) and its subsequent harmonics (H2-H6) for the consonant (left), and to the F0, the first formant (F1) and the non-formant frequencies (Non-F1) for the vowel (right). Left y-axes correspond to the F0 and Non-F1 frequencies; right y-axes correspond to the harmonics (H2-H6) and F1 frequencies. *p < 0.05; **p < 0.01; error bars represent ± 1 standard error.
Figure 3. Waveform of the stimulus (normalized amplitude). The vertical gray line indicates the boundary between the consonant and vowel, as established according to the spectral changes by an experienced phonetician.
Figure 4. Fast Fourier transform computed on the whole stimulus (normalized amplitude). The horizontal line indicates the range of the first formant.
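The spectral representations in Figures 2 and 4 rest on Fourier analysis: the amplitude of the neural response (or stimulus) at F0, its harmonics, and the formant range is read from an FFT spectrum. The sketch below illustrates that general readout with NumPy; the function name, Hann window, frequency tolerance, and synthetic test signal are all illustrative assumptions and do not reproduce the authors' actual analysis pipeline.

```python
import numpy as np

def spectral_amplitudes(signal, fs, target_freqs, tol=5.0):
    """Illustrative FFT readout: return mean spectral amplitude within
    +/- tol Hz of each target frequency (e.g., F0 and harmonics).
    Window choice and tolerance are assumptions, not the paper's method."""
    n = len(signal)
    # Hann window reduces spectral leakage; normalize by n for amplitude units
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
    bin_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = {}
    for f in target_freqs:
        mask = np.abs(bin_freqs - f) <= tol
        amps[f] = spectrum[mask].mean()
    return amps

# Synthetic example: a 100 Hz fundamental with a weaker second harmonic,
# standing in for an F0 + harmonic structure like that in Figure 2
fs = 10000                          # sampling rate in Hz
t = np.arange(0, 0.2, 1.0 / fs)     # 200 ms of signal
sig = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
amps = spectral_amplitudes(sig, fs, [100, 200, 300])
```

In this toy case the readout recovers the expected ordering: the amplitude near 100 Hz exceeds that near 200 Hz, and the 300 Hz bin, where no energy was placed, is near zero, which is the kind of per-frequency comparison the bar graphs in Figure 2 summarize across groups.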