Hayato Watanabe1,2,3, Atsushi Shimojo3,4, Kazuyori Yagyu1, Tsuyoshi Sonehara5, Kazuyoshi Takano6, Jared Boasen3,7, Hideaki Shiraishi4, Koichi Yokosawa3, Takuya Saito1.
Abstract
Communication is one of the most important abilities in human society, which makes clarification of brain functions that underlie communication of great importance to cognitive neuroscience. To investigate the rapidly changing cortical-level brain activity underlying communication, a hyperscanning system with both high temporal and spatial resolution is extremely desirable. The modality of magnetoencephalography (MEG) would be ideal, but MEG hyperscanning systems suitable for communication studies remain rare. Here, we report the establishment of an MEG hyperscanning system that is optimized for natural, real-time, face-to-face communication between two adults in sitting positions. Two MEG systems, which are installed 500m away from each other, were directly connected with fiber optic cables. The number of intermediate devices was minimized, enabling transmission of trigger and auditory signals with almost no delay (1.95-3.90 μs and 3 ms, respectively). Additionally, video signals were transmitted at the lowest latency ever reported (60-100 ms). We furthermore verified the function of an auditory delay line to synchronize the audio with the video signals. This system is thus optimized for natural face-to-face communication, and additionally, music-based communication which requires higher temporal accuracy is also possible via audio-only transmission. Owing to the high temporal and spatial resolution of MEG, our system offers a unique advantage over existing hyperscanning modalities of EEG, fNIRS, or fMRI. It provides novel neuroscientific methodology to investigate communication and other forms of social interaction, and could potentially aid in the development of novel medications or interventions for communication disorders.Entities:
Mesh:
Year: 2022 PMID: 35737703 PMCID: PMC9223398 DOI: 10.1371/journal.pone.0270090
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.752
Fig 1. Connection overview.
All signals were subjected to electrical/optical conversion, optical transmission, and optical/electrical conversion using Optic Input-Output modules (Optic I/O modules A–C). The TTL signal was output from the PC at site A and recorded by the MEG data acquisition systems of both sites; the timing standards were set using this TTL signal. The audio/video signal input/output unit serves as the communication device. The video signal is optically transmitted from the camera and presented by the projector via an A/V mixer. The audio signal is transmitted through the microphone, its latency is matched to that of the video signal using an audio delay line, and it is then presented from the speaker via the A/V mixer. Photos: used with permission of the models.
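Because the same TTL pulse from site A is recorded by the data acquisition systems of both sites, the two MEG recordings can be aligned offline to a common time zero. The sketch below is a minimal illustration of that idea, not the authors' analysis code; the function name, threshold value, and toy signals are all assumptions for the example.

```python
import numpy as np

def align_to_trigger(signal, trigger_channel, fs, threshold=2.5):
    """Crop a recording so that t=0 is the first TTL rising edge.

    signal          : 1-D array of samples (hypothetical single channel)
    trigger_channel : 1-D array of the recorded TTL trace at the same rate
    fs              : sampling rate in Hz
    threshold       : assumed TTL logic threshold in volts
    """
    above = trigger_channel >= threshold
    # Rising edges: low->high transitions in the thresholded trigger trace.
    edges = np.flatnonzero(~above[:-1] & above[1:]) + 1
    if edges.size == 0:
        raise ValueError("no TTL rising edge found")
    return signal[edges[0]:]

# Toy example: both sites record the same pulse at different sample offsets.
fs = 1000
t = np.arange(5 * fs)
ttl_a = np.where(t >= 1200, 5.0, 0.0)            # pulse at sample 1200 (site A)
ttl_b = np.where(t >= 1203, 5.0, 0.0)            # ~3 samples later (site B)
sig_a = np.sin(2 * np.pi * 10 * t / fs)          # identical 10 Hz test signal,
sig_b = np.sin(2 * np.pi * 10 * (t - 3) / fs)    # shifted by the same 3 samples

a = align_to_trigger(sig_a, ttl_a, fs)
b = align_to_trigger(sig_b, ttl_b, fs)
# After cropping, both recordings start at their shared trigger instant.
```

In practice the residual misalignment is bounded by the trigger transmission delay (1.95-3.90 μs here), which is far below one sample period at typical MEG sampling rates.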
Fig 2. Video and audio signal latency distribution.
Video and audio signal latencies from site A to site B and those from site B to site A are superimposed. Audio signal latencies (red/blue bars on the left) are short and have no jitter, whereas video signal latencies (red/blue bars on the right) are longer and show some jitter (mean: 76.85 ms, SD: 6.57 ms), ranging from 60 to 100 ms. To optimize the setup for natural communication, audio signals can be delayed to match the latency of the video signals (white bar).
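The delay-line adjustment amounts to shifting the audio stream by the mean video latency so that lip movements and voice arrive together. A minimal digital sketch of that operation is shown below, assuming a buffer-based shift; the hardware delay line in the actual system works on the live analog/digital stream, and the function name and sampling rate are assumptions for illustration.

```python
import numpy as np

def delay_audio(audio, fs, delay_ms):
    """Delay an audio buffer by delay_ms via zero-padding at the front
    (a digital stand-in for the hardware audio delay line).
    The output is truncated to the original length, as in a live stream."""
    n = int(round(fs * delay_ms / 1000.0))
    return np.concatenate([np.zeros(n, dtype=audio.dtype), audio])[: len(audio)]

fs = 48000                                         # assumed audio sampling rate
audio = np.random.default_rng(0).standard_normal(fs)   # 1 s of toy audio
mean_video_latency_ms = 76.85                      # measured mean from Fig 2
delayed = delay_audio(audio, fs, mean_video_latency_ms)
```

Matching the mean (rather than the maximum) video latency keeps the residual audio-video offset within the measured jitter of about ±20 ms, which is below typical audiovisual asynchrony detection thresholds for speech.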
Fig 3. Amplitude modulation of alpha-band rhythm during face-to-face conversation.
The brain responses when two subjects faced each other via the A/V devices and spoke words in turns are shown. Mean alpha rhythm amplitude across 128 speech exchanges was normalized by the mean amplitude over the baseline period, from -2 to -1 s, to calculate Event-Related Synchronization (ERS) and Event-Related Desynchronization (ERD). The time traces of ERS/D averaged over the whole brain of each subject at site A (blue line) and site B (green line) are shown in the upper panel. The brain activity of the subjects at both sites reflects that associated with listening, with time point 0 ms being the moment of speech onset of the opposite party. ERD of the alpha rhythm is exhibited just before and during hearing the speech (mean duration: 0.7 s) of the opposite party. The brain surface images in the lower part show mean distributions of ERS (red; <+5%) and ERD (blue; >-5%) on each of the 15,002 vertices across both subjects; back view (upper row) and left side view (lower row). This mean alpha rhythm ERS/D was furthermore averaged temporally within each 0.5 s bin. A distinct ERD in the bilateral occipital region (visual area) and the left temporal region (linguistic area) observed after 0 s indicates functional involvement of both the visual and auditory systems, suggesting that each subject could visually predict the onset of the opposite party's speech. Abbreviations. L: Left, R: Right, A: Anterior, P: Posterior.
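The ERS/ERD normalization described above is a percent change of the trial-averaged alpha amplitude relative to the -2 to -1 s baseline. A minimal sketch of that computation is given below; it is an assumption-laden illustration (toy envelope data, hypothetical function name), not the authors' pipeline.

```python
import numpy as np

def ers_erd_percent(amplitude, times, baseline=(-2.0, -1.0)):
    """Event-related (de)synchronization as percent change from baseline.

    amplitude : (n_trials, n_times) alpha-band amplitude envelope
    times     : (n_times,) time axis in seconds, 0 = speech onset
    baseline  : interval whose mean amplitude defines 0 %
    Positive values are ERS; negative values are ERD.
    """
    mean_amp = amplitude.mean(axis=0)                     # average over trials
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = mean_amp[mask].mean()                          # baseline reference
    return 100.0 * (mean_amp - base) / base

fs = 100.0
times = np.arange(-2.0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
# Toy envelope over 128 "speech exchanges": flat near 1.0 before onset,
# suppressed by 20 % after t=0 to mimic an alpha ERD while listening.
amp = 1.0 + 0.05 * rng.standard_normal((128, times.size))
amp[:, times >= 0] *= 0.8
curve = ers_erd_percent(amp, times)
# curve is ~0 % in the baseline window and ~-20 % (ERD) after onset.
```

By construction the curve averages to exactly 0 % over the baseline window, so any post-onset dip is directly interpretable as ERD relative to the pre-speech state.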