Literature DB >> 35737703

Construction of a fiber-optically connected MEG hyperscanning system for recording brain activity during real-time communication.

Hayato Watanabe1,2,3, Atsushi Shimojo3,4, Kazuyori Yagyu1, Tsuyoshi Sonehara5, Kazuyoshi Takano6, Jared Boasen3,7, Hideaki Shiraishi4, Koichi Yokosawa3, Takuya Saito1.   

Abstract

Communication is one of the most important abilities in human society, which makes clarification of the brain functions that underlie communication of great importance to cognitive neuroscience. To investigate the rapidly changing cortical-level brain activity underlying communication, a hyperscanning system with both high temporal and spatial resolution is extremely desirable. The modality of magnetoencephalography (MEG) would be ideal, but MEG hyperscanning systems suitable for communication studies remain rare. Here, we report the establishment of an MEG hyperscanning system that is optimized for natural, real-time, face-to-face communication between two adults in sitting positions. Two MEG systems, installed 500 m apart, were directly connected with fiber optic cables. The number of intermediate devices was minimized, enabling transmission of trigger and auditory signals with almost no delay (1.95-3.90 μs and 3 ms, respectively). Additionally, video signals were transmitted at the lowest latency ever reported (60-100 ms). We furthermore verified the function of an audio delay line to synchronize the audio with the video signals. This system is thus optimized for natural face-to-face communication; additionally, music-based communication, which requires higher temporal accuracy, is possible via audio-only transmission. Owing to the high temporal and spatial resolution of MEG, our system offers a unique advantage over the existing hyperscanning modalities of EEG, fNIRS, and fMRI. It provides a novel neuroscientific methodology for investigating communication and other forms of social interaction, and could potentially aid in the development of novel medications or interventions for communication disorders.


Year:  2022        PMID: 35737703      PMCID: PMC9223398          DOI: 10.1371/journal.pone.0270090

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Real-time, face-to-face communication between two people is spontaneous and dynamic, and likely involves both cooperative and competitive brain responses. To properly capture these neural processes in both communicating parties requires a simultaneous and synchronous brain imaging technique known as hyperscanning. Hyperscanning studies regarding forms of human communication and social interaction have been widely reported in the neuroimaging modalities of functional magnetic resonance imaging (fMRI) [1-8], functional near infrared spectroscopy (fNIRS) [9-16], and electroencephalography (EEG) [17-23]. Among these, EEG has the highest time resolution, on the order of milliseconds, which is highly advantageous in hyperscanning studies regarding communication. However, despite advances in source localization techniques for EEG signals, the spatial resolution of EEG remains poor in comparison to other neuroimaging modalities. Alternatively, magnetoencephalography (MEG) has temporal resolution identical to EEG, and far superior spatial resolution due to the fact that magnetic fields are undistorted by cranial bone and tissue. Nevertheless, successful implementation of MEG hyperscanning for studying communication or social interactions between two adults remains comparatively rare [24-27]. The reason for the comparative rarity of MEG hyperscanning systems for communication studies likely stems from the fact that MEG devices themselves are rare, and thus rarely in close enough proximity to permit audiovisual signal transmission at latencies sufficiently low for natural communication. Baess et al. [24] avoided the issue of latency with their Network Time Protocol (NTP)-synchronized MEG hyperscanning system by foregoing video transmission, and only communicating audio via an Integrated Services Digital Network telephone landline. 
This method reportedly resulted in a local audio transmission delay of 4.7 ms, and a lab-to-lab audio transfer time of 12.7 ms over a distance of approximately five kilometers. In a further adaptation of the same system, Zhdanov et al. [25] succeeded in transmitting both audio and video signals via a customized User Datagram Protocol at transmission latencies of 50 ± 2 ms and 130 ± 12 ms (mean ± standard deviation), respectively. With this level of latency, they report that nine pairs of adult subjects were able to synchronize right hand movements with asynchronies ranging from 215 ms down to 77 ms. Meanwhile, Ahn et al. [26] used a similar NTP synchronization technique to hyperscan with two EEG/MEG systems separated by a distance of 100 km. Although they report successful implementation of a verbal interaction task between two adults, the task was not designed to be time critical, and the inherent transmission latencies of the hyperscanning system are not reported. For smooth social interactions, the limits for audio and visual one-way transmission delays have been reported as 100 ms and 500 ms, respectively [28, 29]. Meanwhile, accurate perceptual integration of audio and visual speech stimuli reportedly begins to decline when audiovisual misalignment exceeds 80 ms at the group level, and can be less at the individual level [30]. In terms of perceived stimulus quality, audiovisual misalignment exceeding 20 ms has been reported to cause discomfort, particularly if the audio precedes the video [31, 32]. In musical contexts, where temporal accuracy is extremely important, perceived sound quality even among non-professionals has been reported to deteriorate with latencies as low as 10 ms [33]. Therefore, although the latencies reported by Zhdanov et al. [25] are certainly low enough for smooth social interaction and accurate audiovisual integration of speech stimuli, further reduction in audiovisual latencies and misalignment is still desirable.
In this study, we present a newly established MEG hyperscanning system that offers marked improvements in audiovisual latencies and misalignment over previously reported systems. The system comprises two MEG devices directly connected via fiber optic cables, with a minimum number of specially-selected low-latency intermediate devices, and an audio delay line (ADL) which permits synchronization of the transmitted audio and video signals. Here, we describe the constitution of the MEG hyperscanning system, and methods and results of evaluation of its audiovisual latencies.

Materials and methods

Fiber optics and MEGs

Our MEG hyperscanning system was constructed by connecting two MEGs installed at Hokkaido University Medical and Dental Research Building (site A) and Hokkaido University Hospital (site B) using 473 m of fiber optic cables (Fig 1). Transistor-transistor logic (TTL) signals were used to verify transmission latency between the two MEG devices. The TTL signals were produced by a PC installed at site A, and transmitted to the MEG data acquisition systems (MEG Acqs) at both sites. The MEG hyperscanning system had an audio/visual (A/V) transmission system, which facilitated realistic, face-to-face communication between participants at the two sites. The video system was unified to 1080p/60p. Audio signals were synchronized with video signals using the ADL.
Fig 1

Connection overview.

All signals were subjected to electrical/optic conversion, optical transmission, and optic/electric conversion using Optic Input-Output modules (Optic I/O module A–C). The TTL signal was output from the PC at site A and recorded by the MEG data acquisition systems of each site. The timing standards were set using the TTL signal. The audio/video signal input/output unit serves as the communication device. The video signal is optically transmitted from the camera and presented from the projector via an A/V mixer. The audio signal is transmitted through the microphone, its latency matched to that of the video signal using an audio delay line, and then presented from the speaker via the A/V mixer. Photos: with permission by the models.


TTL setup

We used TTL signals to match the timing of measurements between the two sites. TTL signals were transmitted from site A to site B as follows. A PC at site A produced TTL signals. These TTL signals were subjected to electrical-optical conversion using a digital signal bidirectional optical/electrical conversion module (DPDVD16–002-OPT(M), Nanaboshi Electric Mfg. Co., Ltd.), which is shown as Optic I/O module A in Fig 1. The converted optical TTL signals were transmitted via fiber optics to site B. The transmitted signals were decoded into electrical TTL signals using an identical conversion module at site B. The decoded electrical TTL signals were received by the MEG Acqs at site B.

Video setup

To make it possible to visualize small changes in the facial expressions of future participants, the video systems needed high resolution and frame rates. Progressive scanning has advantages for motion recording and playback. Therefore, video signal transmission was unified to 1080p/60p. Video signals are transmitted from one site to the other as follows: At one site, video signals are sampled by an HD camera (GP-KH232A, Panasonic) in a shielded room, transmitted to the HD camera control unit outside the shielded room via a 15 m cable, and converted into HDMI signals. The converted HDMI video signals are transmitted to a distributor (CRO-HD13, Imagenics). The video signals are then converted to optical signals using an HDMI/DVI optical extender (Transmitter; CRO-FD24 TX, Imagenics), shown as Optic I/O module B in Fig 1. The converted signals are transmitted via fiber optics to the other site. Upon arrival, the signals are decoded to HDMI signals using an HDMI/DVI optical extender (Receiver; CRO-FD24 RX, Imagenics), shown as Optic I/O module C in Fig 1. The decoded video signals are then transmitted to an A/V mixer (VR-4HD, Roland) and visually rendered by a projector (VPL-CH355, Sony) via the A/V mixer.

Audio setup (with ADL synchronization)

As video signals are output in units of frames, the latency of video signals strongly depends on the presentation time of each frame. In contrast, audio signals are transmitted without frames; as a result, audio signals were expected to be transmitted more rapidly than video signals (see Results for details). Therefore, to adjust the latencies of the audio signals, we additionally tested their output via the ADL. Audio signals are transmitted from one site to the other using the ADL as follows: At one site, audio signals are sampled using a monaural microphone (AT9904, Audio-Technica). The sampled audio signals are transmitted to the distributor (DA-144, Imagenics). The signals are embedded into HDMI signals and converted to optical signals using the HDMI/DVI optical extender together with video signals (Transmitter; CRO-FD24 TX, Imagenics). The converted audio signals are transmitted via fiber optics to the other site. The transmitted signals are decoded into analogue audio signals using the HDMI/DVI optical extender (Receiver; CRO-FD24 RX, Imagenics). The decoded signals are transmitted to the A/V mixer via the ADL (ADL-40, Imagenics). The audio signals are played on a non-magnetic speaker (Audio Element N-20 in SSHP60X20, Panphonics) via the A/V mixer.

TTL latency measurement

The standard signaling latency between the two sites was defined by a TTL signal. The latency of the TTL signal, which consists of the durations of conversion and transmission (Fig 1), was measured as follows. A TTL signal generated by a PC at site A was recorded by a digital oscilloscope (Advantest, R9211E digital spectrum analyzer) at site A after a round trip to site B (loop-back condition). The same TTL signal was directly recorded by the same digital oscilloscope without the round trip (direct condition). The time difference between these two conditions was evaluated, and half of this difference was defined as the TTL signal latency. The sampling frequency of the digital oscilloscope was set to 256.41 kHz (i.e., a 3.90 μs sampling interval). The TTL signals were transmitted and recorded 100 times to confirm the reproducibility of our latency measurement.
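As a minimal illustration of the halving step in this procedure, the calculation can be sketched as follows (the function name and example values are illustrative, not taken from the authors' software):

```python
# Minimal sketch of the loop-back latency arithmetic; names and the
# example edge times are illustrative, not from the authors' code.

SAMPLE_INTERVAL_US = 1e6 / 256.41e3  # oscilloscope resolution, ~3.90 us

def one_way_latency_us(loopback_edge_us, direct_edge_us):
    """One-way latency is half of the loop-back vs. direct time difference."""
    return (loopback_edge_us - direct_edge_us) / 2.0

# With the 7.80 us round-trip difference reported in Results, the one-way
# estimate is 3.90 us; quantization by one sample interval means the true
# value lies in the 1.95-3.90 us range.
latency = one_way_latency_us(7.80, 0.0)
print(latency)  # 3.9
```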

Video and audio latency measurement overview

The latency of video or audio signals is caused by conversion, transmission, and passage of the signal through all intermediate devices. To measure the latency of a video or audio signal from one site to the other, a reference signal is required to establish the onset time of the signal. The reference signals also have inherent latencies; therefore, the latencies of the reference signals were also evaluated. Latencies were measured 100 times with a digital oscilloscope and averaged. Jitter was observable as distortion in the averaged latency. When evaluating jitter, latencies derived via the MEG Acqs at a sampling rate of 1,000 Hz were analyzed to determine their mode, average, variance, and range.
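The summary statistics named here (mode, average, variance, range) could be computed with Python's standard library; the following is an illustrative sketch with made-up values (the measured data are provided in the supporting files):

```python
# Hypothetical sketch: summarizing latencies sampled via the MEG Acqs at
# 1,000 Hz. The input values below are made up for illustration only.
from statistics import mean, mode, pvariance

def latency_stats(latencies_ms):
    """Mode, mean, variance, and range of a list of latencies (in ms)."""
    return {
        "mode": mode(latencies_ms),
        "mean": mean(latencies_ms),
        "variance": pvariance(latencies_ms),
        "range": (min(latencies_ms), max(latencies_ms)),
    }

stats = latency_stats([70, 71, 70, 72, 75, 70, 68])
print(stats["mode"], stats["range"])  # 70 (68, 75)
```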

Video latency measurement

Flashing LED lights, together with photodiodes to detect them, were used to measure the latency of the video signal.

Reference signal

The reference signal for video was a square signal which was generated by the output of the photodiode detecting the LED light. The square signal was directly transmitted from one site to the other site via the same pathway as the TTL signal described above, and input into the MEG Acqs at the receiver site. The sampling rate was 1,000 Hz.

Measurement signal

The LED light was flashed 200 times over five sessions (1,000 times in total) at site A. The light was captured by the video camera at site A and transmitted via all intermediate devices to site B, where the light was projected into the shielded room and detected by a photodiode. A square signal was generated by the output of the photodiode and input into the MEG Acqs at site B. This measurement process was also performed in the opposite direction.

Audio latency measurement and adjustment

Sine waves (250 Hz, 100 ms, 5 ms rise/fall) generated by a PC were used to measure the latency of the audio signal. The reference signal for audio transmission was a sine wave generated by a PC at site A. It was recorded by a digital oscilloscope at site A after a round trip to site B via an optical analogue link (Transmitter, PE-1800TAF, Optex; Receiver, PE-1800RAF, Optex). The loop-backed sine wave was compared with the original on the same digital oscilloscope at site A. The sine wave signal generated by the PC at site A was split. One part was transmitted directly to site B and recorded on a digital oscilloscope. The other part was played on a non-magnetic speaker and sampled by a monaural microphone in the shielded room at site A. The signal captured by this microphone was then transmitted to site B, where it underwent digital audio conversion and passed through the A/V mixer. It was then re-played on a non-magnetic speaker and sampled by a monaural microphone in the shielded room at site B. The sampled signal was recorded on the same digital oscilloscope at site B. The audio waves of the two split signals were compared. This measurement process was also performed in the opposite direction.
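Comparing the two split signals amounts to estimating a lag between two recorded waveforms. One common way to do this, shown here as a hypothetical sketch rather than the authors' method, is to pick the lag that maximizes the cross-correlation:

```python
# Hypothetical sketch (not the authors' code): estimating an audio delay
# as the lag of maximum cross-correlation between the original sine wave
# and its transmitted copy. FS is an assumed oscilloscope sampling rate.
import math

FS = 100_000  # assumed sampling rate, Hz

def xcorr_delay_samples(ref, delayed, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(max_lag + 1):
        val = sum(ref[i] * delayed[i + lag] for i in range(n - max_lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Simulated 250 Hz tone delayed by 313 samples (3.13 ms at 100 kHz):
ref = [math.sin(2 * math.pi * 250 * i / FS) for i in range(2000)]
delay = 313
delayed = [0.0] * delay + ref[: len(ref) - delay]
est = xcorr_delay_samples(ref, delayed, max_lag=400)
print(est * 1000 / FS, "ms")  # 3.13 ms
```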

Audio latency adjustment

After determining the latencies of the audio and video signals, the latency of the audio signal was adjusted to the latency of the video signal by ADL, as appropriate. The minimum adjustment width of ADL was 1 ms.
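The adjustment rule actually applied (per the Results: the added audio delay is set near two standard deviations above the mean video latency, rounded to the ADL's 1 ms step) can be sketched as follows; the function name is hypothetical:

```python
# Hypothetical sketch of the ADL setting rule: added audio delay chosen
# about two standard deviations above the mean video latency, rounded up
# to the ADL's 1 ms minimum adjustment step.
import math

def adl_delay_ms(video_mean_ms, video_sd_ms, step_ms=1):
    """Added audio delay so that audio arrives just after the video."""
    target = video_mean_ms + 2 * video_sd_ms
    return math.ceil(target / step_ms) * step_ms

delay = adl_delay_ms(76.85, 6.57)  # measured video latency: 76.85 +/- 6.57 ms
total_audio = 3.13 + delay         # ~3 ms raw audio latency plus ADL delay
print(delay)  # 90
```

With the measured values this yields the 90 ms ADL setting and an adjusted mean audio latency of about 93 ms reported in the Results.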

Electrophysiological experiment

One pair of subjects (a 23-year-old female and a 25-year-old male) participated. Signed informed consent was obtained from both subjects before the experiment. The MEG recordings were approved by the Ethics Review Board of the Graduate School of Medicine at Hokkaido University. The two subjects faced each other via the A/V devices and spoke words in turns according to timed cues. The speech audio signals from each site were transmitted to the opposite site with a 90 ms delay using the ADL to align them with the visual signal delay. MEGs were recorded during 128 speech exchanges of this alternate speaking protocol. The amplitude modulations of the alpha-band rhythms across all 128 exchanges were averaged and then normalized in each subject based on their average alpha amplitude over the period from -2,000 ms to -1,000 ms prior to the speech onset of the other subject (S2 File). The resulting normalized mean alpha activity was then mapped onto template brains. Data analysis was performed with Brainstorm [34].
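The baseline normalization can be sketched as follows (a hypothetical, simplified illustration with made-up values; the actual analysis was performed in Brainstorm on source-space data):

```python
# Hypothetical sketch of the baseline normalization described above:
# alpha amplitude expressed as a percentage change relative to its mean
# over the -2,000 to -1,000 ms pre-speech window (ERS > 0, ERD < 0).

def normalize_to_baseline(amplitude, times_ms, base_start=-2000, base_end=-1000):
    """Express amplitude as % change relative to the baseline mean."""
    baseline = [a for a, t in zip(amplitude, times_ms) if base_start <= t < base_end]
    base_mean = sum(baseline) / len(baseline)
    return [100.0 * (a - base_mean) / base_mean for a in amplitude]

# Example: a 20% amplitude drop after speech onset (t = 0) is a -20% ERD.
times = [-1500, -1200, 0, 500]
amps = [10.0, 10.0, 8.0, 8.0]
ers_ed = normalize_to_baseline(amps, times)
print(ers_ed)  # [0.0, 0.0, -20.0, -20.0]
```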

Results

TTL latency

The time difference between the loop-back and direct conditions was recorded by the digital oscilloscope as 7.80 μs for all signals (S1 Fig). Given that the sampling interval of the digital oscilloscope was 3.90 μs, this means that the signal latency of the loop-back condition was longer than 3.90 μs and shorter than 7.80 μs. Therefore, the latency of the direct condition, taken as half the latency of the loop-back condition, was evaluated to be 1.95–3.90 μs. No jitter was observed within this time resolution. The theoretical latency of the TTL signals was 2.88 μs, which is the sum of the light propagation time over the transmission distance of 472 m (1.58 μs) and the time required for conversion (1.30 μs) by Optic I/O module A (Fig 1). Thus, our measured latency coincides with the theoretical one, and is much smaller than the highest temporal resolution of the MEG Acqs (1 ms at 1,000 Hz sampling).
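The theoretical estimate can be reproduced with a few lines; as in the text, the travel time below uses the vacuum speed of light (the fiber's refractive index would make actual propagation slightly slower), so this is a rough sketch rather than an exact model:

```python
# Sketch of the theoretical TTL latency estimate: light travel time over
# the 472 m fiber plus the 1.30 us conversion time of Optic I/O module A.
# Vacuum speed of light assumed, following the text's own calculation.

C_M_PER_S = 299_792_458  # speed of light in vacuum, m/s
DISTANCE_M = 472         # fiber length between the sites
CONVERSION_US = 1.30     # E/O + O/E conversion time of module A

travel_us = DISTANCE_M / C_M_PER_S * 1e6
total_us = travel_us + CONVERSION_US
print(f"{travel_us:.2f} us travel, {total_us:.2f} us total")
```

This reproduces the reported ~1.58 μs travel time and ~2.88 μs total to within rounding.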

Video latency

Our evaluations revealed that it took 11.61 μs to generate the square signal from the output of the photodiode. Therefore, the latency of the reference signal was the sum of this 11.61 μs delay and the direct latency of the TTL signal (1.95–3.90 μs). Effectively, the latency of the reference signal was negligibly short compared to the measurement signal. The latencies of the 1,000 LED light flashes at both sites are summarized in a histogram with 2-ms bins (Fig 2, site A blue bars, site B red bars, S1 File). From site A to site B, the mode was 70–72 ms (mean = 76.76 ms, SD = 5.34 ms, range = 66.42–97.42 ms); from site B to site A, the mode was 76–78 ms (mean = 76.94 ms, SD = 7.61 ms, range = 63.42–95.42 ms). Here, transmission takes 2.36 μs, calculated from the HDMI-cable transmission speed of 0.5 μs per 100 m and the cable length of 472 m, and conversion takes 400 μs in total (200 μs each for Optic I/O modules B and C in Fig 1). As both sites had the same devices and set-ups, the latency distributions were nearly identical, ranging from 60 ms to 100 ms. These latencies are sufficiently short for natural communication, thereby meeting the objective of this system.
Fig 2

Video and audio signal latency distribution.

Video and audio signal latencies from site A to site B and those from site B to site A are superimposed. Audio signal latencies (red/blue bars on the left) are short and have no jitter, while video signal latencies (red/blue bars on the right) are longer and have some jitter (mean: 76.85 ms, SD: 6.57 ms) ranging from 60–100 ms. To optimize the setup for natural communication, audio signals can be delayed to match the latency of the video signals (white bar).

The processing latency of one of the intermediate devices, the A/V mixer, is 16.67 ms/frame. This latency is small compared to the mean overall latency of about 77 ms. Hence, the majority of the video latency is presumably caused by the camera and the projector. The signal transmission of the camera and the projector is 1080p/60p, i.e., one frame equates to 16.67 ms. Jitter was presumed to be caused by the frame timing of both the camera and the projector, and was therefore calculated as 33.34 ms (16.67 ms × 2 devices). The latency range of our measurement results (about 31.5 ms) closely coincides with this value.
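The frame-quantization argument can be checked with a small simulation, entirely hypothetical and not from the paper: an event delayed by two cascaded 60 Hz frame clocks (camera, then projector) acquires a jitter spread approaching two frame periods.

```python
# Hypothetical simulation: two cascaded 60 Hz frame clocks with random
# phases spread an otherwise fixed latency over ~2 x 16.67 ms of jitter.
import math
import random

FRAME_MS = 1000 / 60  # one frame at 60 fps, ~16.67 ms

def quantize_up(t_ms, phase_ms):
    """Delay t to the next frame boundary of a clock with the given phase."""
    k = math.ceil((t_ms - phase_ms) / FRAME_MS)
    return phase_ms + k * FRAME_MS

random.seed(0)
latencies = []
for _ in range(10_000):
    t = random.uniform(0, 100)                            # event time, ms
    cam = quantize_up(t, random.uniform(0, FRAME_MS))     # camera frame
    proj = quantize_up(cam, random.uniform(0, FRAME_MS))  # projector frame
    latencies.append(proj - t)

spread = max(latencies) - min(latencies)
print(round(spread, 1))  # approaches 33.3 ms with enough samples
```

The simulated spread matches the ~33.34 ms jitter attributed above to the camera and projector frames.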

Audio latency and synchronization with video

The loop-backed sine wave was compared with the original on the digital oscilloscope at site A. The latency, calculated as half of the difference between the two waves, was 202.4 μs. This latency was negligibly short compared to the measurement signal. There was no distortion of the sine wave based on visual inspection, indicating an absence of jitter. A comparison of the audio waves of the two split signals demonstrated a constant latency of 3.13 ms (from site A to site B) and 2.78 ms (from site B to site A) with no jitter (Fig 2, red and blue bars, S2 Fig). The reason for the slight directional difference is not clear, but we suspect that it might depend on the distance between the microphone and the speaker at each site. Regardless, the minute directional difference (0.4 ms) is arguably not physiologically discernible, and the approximately 3 ms jitter-free latency in both directions is sufficiently low for natural communication. Audio signals from one site arrive at the other site about 74 ms earlier than the video signal. As mentioned previously, this situation is known to cause discomfort when viewing video [31, 32]. To correct this, and to ensure that our system can be comfortably used for real-time audiovisual communication, the ADL was used to increase the latencies of the audio signals so that they arrive just after the video signals. The ADL was set such that the audio signal latencies were increased by 90 ms, which is approximately two standard deviations above the mean video signal latency (76.85 ± 6.57 ms). Consequently, the ADL-adjusted audio signals had a mean latency of 93 ms (Fig 2, white bar). Fig 3 shows the normalized mean alpha-band amplitude modulation across all 128 speech exchanges for each subject averaged across the entire cortical surface (upper), and that across both subjects mapped onto the template brain (lower).
The brain activity of the subjects at both sites reflects that which is associated with listening, with time point 0 ms being the moment of speech onset of the opposite party. Alpha-band desynchronization was exhibited in both the site A and site B subjects during listening. Notably, the desynchronization appears to have commenced before the speech onset of the opposite party, a sign that the subjects could visually predict the onset of the opposite party’s speech. The suppression was primarily concentrated in occipital and left temporal regions, indicating functional involvement of both the visual and auditory systems.
Fig 3

Amplitude modulation of alpha-band rhythm during face-to-face conversation.

The brain responses when the two subjects faced each other via the A/V devices and spoke words in turns are shown. Mean alpha rhythm amplitude across 128 speech exchanges was normalized by the mean amplitude over the baseline period, from -2 to -1 s, to calculate event-related synchronization (ERS) and event-related desynchronization (ERD). The time traces of ERS/D averaged over the whole brain of each subject at site A (blue line) and site B (green line) are shown in the upper panel. The brain activity of the subjects at both sites reflects that which is associated with listening, with time point 0 ms being the moment of speech onset of the opposite party. ERD of the alpha rhythm is exhibited just before and during hearing the speech (mean: 0.7 s) of the opposite party. The brain surface images in the lower part show mean distributions of ERS (red; <+5%) and ERD (blue; >-5%) on each of the 15,002 vertices across both subjects; back view (upper row) and left side view (lower row). This mean alpha rhythm ERS/D was furthermore averaged temporally within each 0.5 s bin. A distinct ERD in the bilateral occipital region (visual area) and left temporal region (linguistic area) observed after 0 s indicates functional involvement of both the visual and auditory systems, suggesting that each subject could visually predict the onset of the opposite party’s speech. Abbreviations. L: Left, R: Right, A: Anterior, P: Posterior.


Discussion

We established an MEG hyperscanning system with an audiovisual interface capable of permitting real-time, face-to-face communication between two adults, and verified its TTL signaling and audiovisual transmission latency. The latency of the TTL signal (trigger) was orders of magnitude lower than the maximum temporal resolution of our MEG devices, essentially demonstrating simultaneous and synchronous recording onset for both MEG devices. Site-to-site audio signal latency was about 3 ms, in either direction, which is on par with the speed of transmission of telephone landline audio signals [25]. Moreover, audio latency was completely jitter free, and well below reported thresholds for human detection of musical quality deterioration, indicating that our system would additionally be suitable for communication paradigms based on musical stimuli [33]. Finally, the video signals had short latencies (60–100 ms) and small jitter (SD: 6.57 ms). We also conducted an electrophysiological study and confirmed that this hyperscanning system can reliably transmit A/V information and measure physiological signals. The latencies and jitter values recorded here are the smallest ever reported for an MEG hyperscanning system. The additional verification of audio synchronization to video signals via ADL is another achievement that has hitherto not been reported. The only other existing MEG hyperscanning system that might have comparable video delay is one reported by Hirata et al. [35]. That system comprises two MEGs co-located in one shielded room, with one MEG designed for adults, and the other designed for infants or small children, thus permitting parent-child hyperscanning. The co-location of the MEGs in the same room allows the audio communication to be transmitted directly through the air. 
However, the two MEGs are designed for recording subjects in supine positions, and thus facial communication with their system has been accomplished, similarly to ours, with video signals transmitted via cameras and projectors. Correspondingly, although the exact amount has not been reported, the co-located MEG hyperscanning system reported by Hirata et al. must certainly involve delays in the video signals. Furthermore, the co-location of the subjects in the same room and their auditory communication through air not only mean that auditory signals likely precede video signals, but also that the audio cannot be isolated and properly synchronized to the video signals. Finally, their system is limited in that hyperscanning can only be performed between an adult and a child. Our MEG hyperscanning system realizes real-time video and audio communication between two adults, and uses a more natural, face-to-face, seated orientation (Fig 1). Combined with the extremely low video latency and audio-video synchronization, our system should permit natural conversation. See the S1 Appendix for information about ways that latency and jitter could be reduced even further. As MEG is silent and completely non-invasive, our system should permit cortical-level investigation into numerous kinds of subtle and dynamic brain processes that occur during natural two-way communication. For example, our system could be used to measure cortical brain responses associated with changes in speech patterns and facial expressions between the participating subjects. The ability to measure this is important because brain responses during dynamic real-time conversation may be quite different from isolated event-related responses. Indeed, consider that the N400 event-related potential component associated with semantic processing of a single word is generally observed about 400 ms after the word is presented [36].
In contrast, responses in everyday conversation have been reported to occur as little as 200 ms after a conversation partner’s speech onset [37]. In addition, a prominent response in the occipital cortex to another’s blink has been observed at 250 ms, and this brain response is positively correlated with empathic concern in the viewer [38, 39]. These kinds of fast brain responses that occur back and forth in real-time communication likely have neural correlates in both the sender and the receiver, and thus require high-temporal-resolution hyperscanning to adequately capture. Moreover, it is important to recognize that in natural, two-way communication, both parties alternate between being the sender and the receiver of auditory and visual information, and the brain regions involved when sending (inferior parietal lobule/sulcus, ventral premotor cortex) and receiving (ventral medial prefrontal cortex) communication are different [40]. Therefore, high spatial resolution is also very important in a hyperscanning system, making MEG a preferable modality for investigating the neural correlates of natural communication. Finally, we would like to highlight the importance of the intermediate devices used to transmit and receive audiovisual signals in hyperscanning systems. The quality of these devices and the validation of their signal processing latencies and characteristics are essential for realizing well-controlled experimental designs in neuropsychophysiological experimentation. Moreover, the minimization of latency through these intermediate devices, such as via a direct fiber optic connection, is a fundamental priority for hyperscanning research protocols in any modality, not only MEG. Comprehensively, the establishment and verification of our new MEG hyperscanning system opens the door to a new line of neuroimaging research regarding human communication.
Future studies employing our system may shed light on the pathophysiology of neurological and psychiatric disorders that manifest with communication deficits, and inspire development of novel medications or interventions.
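The latency figures reported for this system can be sanity-checked with a short calculation. The Python sketch below estimates the one-way propagation delay over the 500 m fiber link and the setting of an audio delay line that lip-syncs the ~3 ms audio path to the 60-100 ms video path. The fiber refractive index and the 80 ms example video latency are assumptions for illustration, not measured values from the system.

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_FIBER = 1.468           # refractive index of a silica fiber core (assumed typical value)

def fiber_propagation_delay(length_m: float) -> float:
    """One-way propagation delay of light through an optical fiber, in seconds."""
    return length_m * N_FIBER / C_VACUUM

def audio_delay_line_setting(video_latency_s: float, audio_latency_s: float) -> float:
    """Extra delay to insert on the audio path so it arrives together with the video."""
    return max(0.0, video_latency_s - audio_latency_s)

# 500 m of fiber contributes roughly 2.4 microseconds per direction, on the
# order of the measured 1.95-3.90 microsecond trigger latency (which also
# includes the converter electronics).
trigger_delay = fiber_propagation_delay(500.0)

# Lip-syncing a 3 ms audio path to a hypothetical 80 ms video latency
# requires delaying the audio by about 77 ms.
sync_delay = audio_delay_line_setting(0.080, 0.003)
```

For music-based communication, where temporal accuracy matters more than lip-sync, such a delay line would be bypassed so the audio path retains its native ~3 ms latency.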

TTL latency data (photo).

TTL latency measured by a digital spectrum analyzer. Top: loop-back value, bottom: direct measurement value. doi:10.6084/m9.figshare.19127282. (TIF)

Auditory latency data (photo).

Auditory latency measured by a digital spectrum analyzer. Top: with delay value, bottom: direct measurement value. doi:10.6084/m9.figshare.14872785. (ZIP)

Video latency data.

Mat files of video latency measured by MEG Acq of the MEG hyperscanning system at Hokkaido University. doi:10.6084/m9.figshare.14872827. (ZIP)

Electrophysiological data.

Fiff files of electrophysiological data measured by MEG Acq of the MEG hyperscanning system at Hokkaido University. doi:10.6084/m9.figshare.19127285. (ZIP)

Measurement of visual event-related field.

A proposal for measuring visual evoked fields with high accuracy based on the jitter and latency of visual signal transmission. (PDF)

4 Jan 2022
PONE-D-21-21236
Construction of a fiber-optically connected MEG hyperscanning system for recording brain activity during real-time communication
PLOS ONE

Dear Dr. Yokosawa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Feb 18 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Kiyoshi Nakahara, PhD
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.
In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. 3. Thank you for stating the following in the Financial Disclosure section: "This Research was supported by Strategic Research Program for Brain Sciences by Japan Agency for Medical Research and Development JP20dm0107567, The Watanabe foundation, and JSPS KAKENHI Grant Number 20H04496. 
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." We note that one or more of the authors are employed by a commercial company: "Research and Development Group, Hitachi Ltd." a. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form. Please also include the following statement within your amended Funding Statement. “The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.” If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement. b. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. 
Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests) . If this adherence statement is not accurate and  there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. 4. We note that Figure 1 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission: a. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. 
We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: “I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.” Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

Additional Editor Comments (if provided): Please follow the comments of reviewers and correct the deficiencies in the paper. Reviewer 2's comments are particularly important and need to be addressed in order for this paper to be accepted.

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1.
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A
Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: In the present paper, the authors reported the details of the Hyperscanning MEG system. Information on how to build up the system, especially information about what devices are used in the integrated system, is usually unclear in the most of hyperscanning systems. Therefore, the present paper would be informative for potential readers who would like to prepare the hyperscanning MEG. However, unfortunately, I have several concerns with this paper. First and the most important thing is that the scientific significance of this paper is unclear. A low latency to transmit signals is very important. I agree. But when the two devices are apart from each other, and connected via some transmitters, the lag cannot be zero. The hyperscanning EEG and NIRS are better in this respect, because it is possible to record two brains using one device, and two participants could do communication directly without any audio/video devices. Potential readers want to know how much better the author’s hyperscanning MEG compared to old-type hyperscanning MEG especially in ability to depict the inter-brain synchrony. The comparison is indispensable if the author would like to stress the advantage of this setting. What kind of inter-brain effect could be specifically observed only by this hyperscanning MEG? How the subtle inter-brain effect (i.e., inter-brain sync degree) could be specifically captured by the system? In this paper, there is no such results, so this is merely a technical paper: ‘I found new better device, so I integrated it on the recording system. It is better than previous one, because the device is new. I have no idea how the new device contribute to the recording system’. That is the only message I could receive from the paper. 
For example, the authors could make clear how the small delay, that was in the old-fashioned hyperscanning MEG but not in this present system, affects the detection of inter-brain sync, by doing a small experiment. Because not all hyperscanning MEG system could not be connected via fiber optics, so they are almost always ignoring the delay effect. It would be useful if the authors declare what problems can occurs when there is small latency. Of course, there is nothing wrong with reporting this as a technical report. But I don't think it is a scientific paper.

It is difficult to understand why latency needs to be calculated for reference signals. I understood that the authors want to separate the latency caused by the transmission line (mainly fiber optics) from the latency caused by the microphones and speakers. However, I don't know if my understanding is correct since nothing is mentioned in the paper. Please explain clearly why you need to do this measurement.

The structure of the paper is really far away from standard style: there is no Results section. I think the Results section might be after L204, but please adjust the format. The unconstructed paper is very hard to read.

This is an opinion that has nothing to do with the scientific point of view. Please check the contents of the paper carefully by the authors, before submission. Quality of paper is very low. Here are some examples: sentences starting with L1 have a period at the end for some reason; the content of the paragraph starting with L77 is exactly the same as that starting with L57. I am not a native speaker of English, but I found many points with wrong English grammar. I recommend that the authors send it to an English proofreader before submission.

Minor comments
L228: Latency purple?
L241: What is the ‘mus’?
L243: How did the authors confirm that there is no distortion of sine-wave?
L70 etc.: The term TTL was repeatedly explained.
L85: ‘Photos: with permission by the models.’ It is redundant.
L214: ‘MEG Aqs’ What is the Aqs?
L236: What is the lip-synchronized delay?
L288: The authors said ‘See the Appendix’. Where is your appendix? I could not find it.

Reviewer #2: Watanabe et al. developed a hyperscanning system in two MEG devices with a low latency for audio and video communications. The study presented a newly developed hyperscanning system that can provide the potential to study brain dynamics of social interactions. The systematic configuration was well established and the audio-visual latencies were verified for real-time face-to-face communications. I have minor comments to improve the quality of the manuscript.
- Audio and visual latencies should be measured from the site B to the site A as well for the bidirectional communication. Further, the effect of the direction of the transmission should be investigated.
- An electrophysiological experiment should be performed to use the hyperscanning system in practice. For example, audio, video, and audio-video stimuli are presented at the site A and transmitted to the site B. At the same time, MEG data of a participant are recorded at the site B and the data are analyzed. In this experimental paradigm, electrophysiological signals such as event-related potentials and event-related spectral perturbation for audio, video, and audio-video stimulus can be obtained and the results can verify the system.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: Yes: Sangtae Ahn

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

8 Mar 2022

See the attachment "Response to Reviewers.docx" file. Submitted filename: Response to Reviewers.docx

4 Apr 2022
PONE-D-21-21236R1
Construction of a fiber-optically connected MEG hyperscanning system for recording brain activity during real-time communication
PLOS ONE

Dear Dr. Yokosawa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by May 19 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include with your revised manuscript a rebuttal letter ('Response to Reviewers'), a marked-up copy ('Revised Manuscript with Track Changes'), and an unmarked version ('Manuscript'), as described in the first decision letter.

We look forward to receiving your revised manuscript.

Kind regards,
Kiyoshi Nakahara, PhD
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: (No Response)
Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Reviewer #1: The authors adequately responded to my previous comments. I'm sure that now the authors could get better understanding about the importance of this research project, and about what the authors did to build the hyperscanning MEG system. Now I only have some minor comments.

1. The authors cited several papers using hyperscanning fMRI system. I recommend that the authors have to select more appropriate papers. First, while the study by Schippers and his colleagues is great, it is not the study of hyperscanning focusing on the interactivity during communication. In this study, two participants could not mutually exchange information. In the case, the video/audio delay does not become a big issue. I suggest that the authors should cite more appropriate literatures investigating neural basis of real social interaction. If the authors think that number of hyperscanning fMRI papers cited in this paper is too much, I strongly suggest some papers by Japanese hyperscanning teams should be replaced by these following papers.

Bilek group: Bilek, E., Ruf, M., Schäfer, A., Akdeniz, C., Calhoun, V. D., Schmahl, C., et al. (2015).
Information flow between interacting human brains: identification, validation, and relationship to social expertise. Proc. Natl. Acad. Sci. U.S.A. 112, 5207–5212. doi: 10.1073/pnas.1421831112 Chinese team https://www.pnas.org/doi/10.1073/pnas.1917407117 Netherland team https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4280639/ 2. Overall, there is not enough information about what figures show. For example, in Figure 3, the authors showed activation/deactivation (ERS/D) rendered on the surface image. Does the row of surface images in lower panel represent the passage of time? Let us show how the authors calculated the activation on one surface image in detail: which time bin corresponds to the image, how the activation is depicted, whether is this the left hemisphere, and so on. Please also add left-right and AP (anterior-posterior) information. I understand that this small experiment is not the main report of the authors. However, please clearly describe the process that led to these figures and what these figures mean. Figures in Supporting information could be described a bit more carefully. See, Figure S1. The authors claimed that ‘The loopbacked (round-trip) signal delayed for 7.812 μs (Upper) compared to direct signal (Lower)’, however, I have no idea where these information could be found in the TIFF file. Reviewer #2: Thanks for the efforts to address my comments. The concerns I had have been fully addressed by the authors. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. 
Reviewer #1: Yes: Takahiko Koike
Reviewer #2: Yes: Sangtae Ahn
18 May 2022

Please see the "ResponseToReviewers" file. Submitted filename: ResponseToReviewers.pdf

6 Jun 2022

Construction of a fiber-optically connected MEG hyperscanning system for recording brain activity during real-time communication
PONE-D-21-21236R2

Dear Dr. Yokosawa,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Kiyoshi Nakahara, PhD
Academic Editor
PLOS ONE

Additional Editor Comments: Several minor typographical errors were noted by the reviewer. Please check the entire manuscript upon receipt of the galley proof and make any final typographical corrections.

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: No

**********

6. Review Comments to the Author

Reviewer #1: The authors have adequately replied to my comments, and I think the paper is ready to publish. The authors have to carefully check there are no any typos. Here I list typos I found.
L2: Real-time. face-to-face -> Real time face-to-face
L69: The transistor-transistor logic (TTL) -> It seems redundant, because the abbreviation was shown in the manuscript (L60).
L201: -2000 ms to -1000-ms -> -2,000 ms to -1,000 ms

**********

7. Do you want your identity to be public for this peer review?

Reviewer #1: Yes: Takahiko Koike

**********

10 Jun 2022

PONE-D-21-21236R2
Construction of a fiber-optically connected MEG hyperscanning system for recording brain activity during real-time communication

Dear Dr. Yokosawa:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours.
Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Kiyoshi Nakahara Academic Editor PLOS ONE
