| Literature DB >> 26742643 |
Anne Mandel, Mathieu Bourguignon, Lauri Parkkonen, Riitta Hari.
Abstract
Although the main function of speech is communication, the brain bases of speaking and listening are typically studied in single subjects, leaving unsettled how brain function supports interactive vocal exchange. Here we used whole-scalp magnetoencephalography (MEG) to monitor the modulation of sensorimotor brain rhythms related to the speaker vs. listener roles during natural conversation. Nine dyads of healthy adults were recruited. The partners of a dyad were engaged in live conversation via an audio link while their brain activity was measured simultaneously in two separate MEG laboratories. The levels of ∼10-Hz and ∼20-Hz rolandic oscillations depended on the speaker vs. listener role. In the left rolandic cortex, these oscillations were consistently (by ∼20%) weaker during speaking than during listening. At turn changes in the conversation, the level of the ∼10-Hz oscillations was transiently enhanced around 1.0 or 2.3 s before the end of the partner's turn. Our findings indicate left-hemisphere-dominant involvement of the sensorimotor cortex during one's own speech in natural conversation. The ∼10-Hz modulations could be related to preparation for starting one's own turn, even before the partner's turn has finished.
Keywords: Conversation; Magnetoencephalography; Mu rhythm; Sensorimotor activation
Year: 2015 PMID: 26742643 PMCID: PMC4756274 DOI: 10.1016/j.neulet.2015.12.054
Source DB: PubMed Journal: Neurosci Lett ISSN: 0304-3940 Impact factor: 3.046
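As an illustration of the analysis the abstract describes (band-limited rolandic power compared between speaking and listening), here is a minimal Python sketch on synthetic data. The sampling rate, epoching, and Welch parameters are assumptions, not the authors' pipeline, and the arrays are random stand-ins for gradiometer epochs.

```python
# Illustrative only: synthetic stand-ins for epoched MEG gradiometer data.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # assumed MEG sampling rate (Hz)
rng = np.random.default_rng(0)

# (n_epochs, n_samples) arrays standing in for speaking and listening epochs;
# the listening data get an extra ~10-Hz component to mimic the mu rhythm.
speak = rng.standard_normal((30, 2000))
listen = rng.standard_normal((30, 2000)) + 0.3 * np.sin(
    2 * np.pi * 10 * np.arange(2000) / fs)

def band_power(epochs, fmin, fmax):
    """Welch power within [fmin, fmax] Hz, averaged over epochs."""
    f, pxx = welch(epochs, fs=fs, nperseg=1024, axis=-1)
    return pxx[:, (f >= fmin) & (f <= fmax)].mean()

p_speak = band_power(speak, 7, 13)
p_listen = band_power(listen, 7, 13)
suppression = 100 * (p_listen - p_speak) / p_listen   # % weaker when speaking
print(f"~10-Hz power suppression during speaking: {suppression:.1f}%")
```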
Fig. 1. Dual-MEG setup for measuring brain activity simultaneously from two subjects engaged in a conversation via an Internet-based audio connection. Above: Amplitude spectra from one MEG planar gradiometer channel over the left rolandic cortex; blue lines show the activity during the participant's own speech and orange lines during the partner's speech. Below: MEG data from 4 planar gradiometer channels over the left rolandic cortex, filtered to 7–13 Hz and 15–25 Hz, respectively. The two lowermost traces show the speech waveforms of the participant in question (above) and of the partner (below).
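The 7–13-Hz and 15–25-Hz traces in Fig. 1 imply band-pass filtering of the raw MEG signals. Below is a minimal scipy sketch of such filtering; the zero-phase Butterworth design, filter order, and sampling rate are assumptions, since the caption does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
meg = np.random.default_rng(1).standard_normal(t.size)  # stand-in MEG trace

def bandpass(x, lo, hi, order=4):
    """Zero-phase Butterworth band-pass between lo and hi Hz."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

mu_trace = bandpass(meg, 7, 13)     # ~10-Hz (mu-range) component
beta_trace = bandpass(meg, 15, 25)  # ~20-Hz (beta-range) component
```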
Fig. 2. (A) Topographic maps of the MEG signals in the ∼10-Hz (left column) and ∼20-Hz (right column) frequency bands. The spectra were calculated separately for the speaking (top) and listening (bottom) epochs of the conversation. The warmer the color, the stronger the activity in a particular area. (B) Top row: Mean difference (group average) in 7–13-Hz (left) and 15–25-Hz (right) activity between the speaking and listening periods of the conversation; warm colors mark an increase, and cold colors a decrease, in activation during speaking compared with listening. The black rectangle surrounds the four MEG sensors used to calculate the individual suppression strengths. Bottom row: Statistical-significance map (t-values) between the speaking and listening conditions. White crosses mark the sensors where the difference was statistically significant (p < 0.05).
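Fig. 2B reports t-values and marks sensors with p < 0.05 between the speaking and listening conditions. One plausible reading, not confirmed by the caption, is a paired t-test on per-participant band power; a sketch with synthetic values follows, where the participant count is taken from the nine dyads mentioned in the abstract.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n_subjects = 18                          # 9 dyads -> 18 participants
power_listen = rng.uniform(50.0, 100.0, n_subjects)            # arbitrary units
power_speak = power_listen * rng.uniform(0.7, 0.9, n_subjects)  # ~20% weaker

t_stat, p_val = ttest_rel(power_speak, power_listen)
mean_supp = 100 * (1 - power_speak / power_listen).mean()
print(f"mean suppression {mean_supp:.1f}%, t = {t_stat:.2f}, p = {p_val:.2g}")
```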
Fig. 3. Top panels: Time courses of the power envelopes of the ∼10-Hz rhythm around the start of the subject's next turn in the conversation; signals are displayed from one left-hemisphere sensor unit for each individual. The waveforms are grouped and aligned according to the latency of the strongest peak in the ∼10-Hz power: one group (left) with a mean peak latency of about 2.3 s and the other (right) with a mean peak latency of about 1 s before the turn start. The brackets above the traces indicate the mean and range of the latency. The gray horizontal shadings indicate the group-mean RMS values, calculated from 0.5 to 1.5 s before the transient peak. Bottom panels: Time–frequency representations of the same data (group means) from 1 to 40 Hz.
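The power envelopes and 1–40-Hz time–frequency maps of Fig. 3 can be approximated with a Hilbert-transform envelope and a spectrogram; the paper's actual time–frequency method is not given in the caption, so the sketch below is only one way to produce comparable quantities from a stand-in trace.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, spectrogram

fs = 1000.0                                    # assumed sampling rate (Hz)
x = np.random.default_rng(3).standard_normal(int(8 * fs))  # 8-s stand-in trace

# ~10-Hz power envelope: band-pass, then magnitude of the analytic signal.
b, a = butter(4, [7, 13], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, x)))

# Time-frequency representation, restricted to the 1-40-Hz range of Fig. 3.
f, times, sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=448)
tfr = sxx[(f >= 1) & (f <= 40)]
```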