Alessandra Cecilia Rampinini, Giacomo Handjaras, Andrea Leo, Luca Cecchetti, Monica Betta, Giovanna Marotta, Emiliano Ricciardi, Pietro Pietrini.
Abstract
Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or cortical space-sharing, thus trying to explain the intertwined spatial and functional organization of motor and acoustic components across the fronto-temporal cortical network. In this context, the focus of attention has recently shifted toward specific model fitting, aimed at motor and/or acoustic space reconstruction in brain activity within the language network. Here, we tested a model based on acoustic properties (formants), and one based on motor properties (articulation parameters), where model-free decoding of evoked fMRI activity during perception, imagery, and production of vowels had been successful. Results revealed that phonological information organizes around formant structure during the perception of vowels; interestingly, such a model was reconstructed in a broad temporal region, outside of the primary auditory cortex, but also in the pars triangularis of the left inferior frontal gyrus. Conversely, articulatory features were not associated with brain activity in these regions. Overall, our results call for a degree of interdependence based on acoustic information, between the frontal and temporal ends of the language network.
Keywords: fMRI; formants; language; perception; production; speech; tones; vowels
Year: 2019 PMID: 30837851 PMCID: PMC6383050 DOI: 10.3389/fnhum.2019.00032
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Average F1 and F2 values and standard deviations for each stimulus.
| F1 (Hz) | F2 (Hz) |
|---|---|
| 305 ± 21.1 | 2170 ± 25.7 |
| 303 ± 35.9 | 1736 ± 30.7 |
| 400 ± 27.1 | 1428 ± 47.4 |
| 525 ± 28.9 | 1139 ± 7.1 |
| 455 ± 68.1 | 836 ± 34.9 |
| 338 ± 23.4 | 637 ± 71.6 |
| 278 ± 16.2 | 604 ± 27.0 |
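The mean F1/F2 values above place each stimulus as a point in a two-dimensional acoustic (formant) space. A minimal sketch of working with that space, using only the table's mean values; the labels S1–S7 are placeholders, since the record does not identify which vowel each row corresponds to:

```python
import math

# Mean F1/F2 values (Hz) from the table above; S1-S7 are placeholder labels,
# as the vowel identities are not given in this record.
formants = {
    "S1": (305, 2170),
    "S2": (303, 1736),
    "S3": (400, 1428),
    "S4": (525, 1139),
    "S5": (455, 836),
    "S6": (338, 637),
    "S7": (278, 604),
}

def formant_distance(a, b):
    """Euclidean distance between two stimuli in F1-F2 space (Hz)."""
    (f1a, f2a), (f1b, f2b) = formants[a], formants[b]
    return math.hypot(f1a - f1b, f2a - f2b)

# Acoustically closest and most distant stimulus pairs.
pairs = [(x, y) for x in formants for y in formants if x < y]
closest = min(pairs, key=lambda p: formant_distance(*p))
farthest = max(pairs, key=lambda p: formant_distance(*p))
```

Because F2 spans a much wider range than F1 (roughly 600–2200 Hz vs. 280–530 Hz here), raw Euclidean distance is dominated by F2; analyses of perceptual vowel spaces often rescale or standardize the axes first.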
Figure 3. Here we show formant space (top left) and articulatory space (top right). The bottom panels show the reconstruction of formant space from group-level brain activity in the left pSTS-MTG (bottom left, R² = 0.40) and the IFGpTri (bottom right, R² = 0.39) through CCA. Dashed ellipses represent standard errors. Articulatory space reconstruction is not reported for lack of statistical significance.
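The reconstruction quality in the caption is summarized by the coefficient of determination, R². A minimal sketch of how such a score is computed between true and reconstructed formant coordinates; the coordinate values below are hypothetical illustrations, not the study's data:

```python
def r_squared(true_vals, pred_vals):
    """Coefficient of determination: 1 - (residual SS / total SS)."""
    mean_true = sum(true_vals) / len(true_vals)
    ss_tot = sum((t - mean_true) ** 2 for t in true_vals)
    ss_res = sum((t - p) ** 2 for t, p in zip(true_vals, pred_vals))
    return 1 - ss_res / ss_tot

# Hypothetical true vs. reconstructed F1/F2 coordinates for four stimuli,
# flattened into one list (F1, F2, F1, F2, ...).
true_coords = [305, 2170, 400, 1428, 525, 1139, 278, 604]
recon_coords = [350, 1900, 430, 1500, 480, 1250, 320, 700]
score = r_squared(true_coords, recon_coords)
```

An R² of 0.40, as reported for the left pSTS-MTG, means the brain-derived reconstruction accounts for 40% of the variance in the formant-space coordinates.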
Figure 1. Here we show a sample vowel by its formant (left) and articulatory (right) representations, as described in Materials and Methods. Formant features represent F1 in blue and F2 in yellow (sampled time step = 0.025 s for display purposes; frequency step unaltered). On the top right, MRI-based articulatory features for the same vowel are indicated by red arrows, with numbers matching the anatomical description of the same measure in Materials and Methods.
CCA results in regions from vowel listening, imagery, and production (rows), between brain activity in each task (columns) and the formant model.
| Region | Vowel Listening | Vowel Imagery | Vowel Production |
|---|---|---|---|
CCA results in tone perception regions, between vowel listening brain data and the articulatory model at group level.
| Region | Brain Activity |
|---|---|
Figure 2. Searchlight classifier results from Rampinini et al. (2017). Each panel shows regions where model-free decoding was successful in each task.
CCA results in tone perception regions, between vowel listening brain data and the formant model at group level.
| Region | Brain Activity |
|---|---|
Figure 4. Bootstrap-based performance comparison between the articulatory and formant models, in regions surviving Bonferroni correction (C.I.: 5th–95th percentiles of the distribution obtained by computing their difference).
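The bootstrap comparison in this caption resamples per-observation performance differences between the two models and reports the 5th–95th percentile interval. A generic percentile-bootstrap sketch of that idea; the difference values below are hypothetical, and this is not the authors' exact procedure:

```python
import random

random.seed(0)

def bootstrap_ci(diffs, n_boot=10000, lo=5.0, hi=95.0):
    """Percentile bootstrap CI (lo-hi) for the mean of per-sample differences."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(diffs) for _ in diffs]
        means.append(sum(resample) / len(resample))
    means.sort()
    def pct(p):
        return means[min(int(p / 100 * n_boot), n_boot - 1)]
    return pct(lo), pct(hi)

# Hypothetical per-subject performance differences (formant minus articulatory).
diffs = [0.12, 0.08, 0.15, 0.05, 0.11, 0.09, 0.14, 0.07, 0.10, 0.13]
low, high = bootstrap_ci(diffs)
# The two models differ reliably if the interval excludes zero.
```

With all hypothetical differences positive, the interval lies entirely above zero, which is the pattern the figure reports in favor of the formant model.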
CCA results in regions from vowel listening, imagery, and production (rows), between brain activity in each task (columns) and the articulatory model.
| Region | Vowel Listening | Vowel Imagery | Vowel Production |
|---|---|---|---|