Jonas Obleser, Thomas Elbert, Carsten Eulitz.
Abstract
BACKGROUND: The speech signal contains both phonological information, such as place of articulation, and non-phonological information, such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated, as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects.
Year: 2004 PMID: 15268765 PMCID: PMC503386 DOI: 10.1186/1471-2202-5-24
Source DB: PubMed Journal: BMC Neurosci ISSN: 1471-2202 Impact factor: 3.288
Figure 2. Grand average (N = 21) of root mean squared amplitudes for all conditions over time, separately for the left (upper panel) and right hemisphere (lower panel). N100m is clearly the most prominent waveform deflection, and the repeatedly reported N100m time lag between the coronal vowel [ø] (black) and the dorsal vowel [o] (gray) is also obvious.
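The root mean squared (RMS) amplitude shown in Figure 2 is conventionally computed across sensor channels at each time sample. A minimal sketch of that computation (the function name and the simulated data are illustrative, not taken from the paper):

```python
import numpy as np

def rms_over_channels(data):
    """Root mean square across sensor channels at each time point.

    data: array of shape (n_channels, n_times), e.g. 148 MEG channels.
    Returns an array of shape (n_times,): one RMS value per time sample.
    """
    return np.sqrt(np.mean(data ** 2, axis=0))

# Toy example: 148 channels, 300 time samples of simulated sensor noise.
rng = np.random.default_rng(0)
evoked = rng.standard_normal((148, 300))
rms = rms_over_channels(evoked)
```

Collapsing 148 channels into a single positive trace per hemisphere is what makes the N100m deflection directly comparable across conditions in the figure.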
Figure 3. Mean two-dimensional source space locations and orientations, shown separately for the left and the right hemisphere (posterior-anterior on the abscissa, inferior-superior on the ordinate). Results of the phonological categorization task are shown as open source symbols, results of the speaker categorization task as filled source symbols. Please note that in both conditions the [ø] source (circle symbols) is more inferior and anterior than the [o] source (diamond symbols).
Figure 1. Upper panel: illustration of the F1-F2 formant space for the vowel tokens used. Lower panel: illustration of the stimulation paradigm and of the two tasks which all subjects performed. Attention was focused either on vowel category changes (Task A) or on changes in the speaking voice (Task B). Arrows indicate required button presses.
Formant Frequency Overview. Pitch (F0), formant frequencies (F1, F2, F3), and formant distance (F2-F1) for the vowels used.
| Vowel | Voice (F0, Hz) | F1 (Hz) | F2 (Hz) | F3 (Hz) | F2-F1 (Hz) |
| [o] | 123 | 317 | 516 | 2601 | 199 |
| [ø] | 123 | 318 | 1357 | 1980 | 1039 |
| [o] | 223 | 390 | 904 | 2871 | 514 |
| [ø] | 223 | 417 | 1731 | 2627 | 1314 |
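The formant-distance column follows directly from the listed F1 and F2 values. A minimal Python sketch that checks this arithmetic (the dictionary layout is illustrative; the numbers are the table's):

```python
# Formant values (Hz) from the table above, one entry per vowel token,
# in row order. Keys mirror the table's column labels.
tokens = [
    {"F0": 123, "F1": 317, "F2": 516,  "F3": 2601},
    {"F0": 123, "F1": 318, "F2": 1357, "F3": 1980},
    {"F0": 223, "F1": 390, "F2": 904,  "F3": 2871},
    {"F0": 223, "F1": 417, "F2": 1731, "F3": 2627},
]

# F2 - F1 reproduces the table's distance column.
distances = [t["F2"] - t["F1"] for t in tokens]
print(distances)  # [199, 1039, 514, 1314]
```

The F2-F1 distance is what separates the two vowel categories most sharply here (roughly 200-500 Hz for [o] vs. 1000-1300 Hz for [ø]), while F0 separates the two voices.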