Abstract
Integrating the spatiotemporal information acquired from the highly dynamic world around us is essential to navigate, reason, and decide properly. Although this is particularly important in a face-to-face conversation, very little research to date has specifically examined the neural correlates of temporal integration in dynamic face perception. Here we present statistically robust observations regarding the brain activations, measured via electroencephalography (EEG), that are specific to temporal integration. To that end, we generate videos of neutral faces of individuals and of non-face objects, modulate the contrast of the even and odd frames at two specific frequencies (f1 = 7.5 Hz and f2 = 6 Hz) in an interlaced manner, and measure the steady-state visual evoked potential as participants view the videos. We then analyze the intermodulation components (IMs: n·f1 ± m·f2, linear combinations of the fundamentals with integer multipliers), which consequently reflect nonlinear processing and, by design, indicate temporal integration. We show that electrodes around the medial temporal, inferior, and medial frontal areas respond strongly and selectively when viewing dynamic faces, which manifests the essential processes underlying our ability to perceive and understand our social world. The generation of IMs is only possible if even and odd frames are processed in succession and integrated temporally; therefore, the strong IMs in our frequency-spectrum analysis show that the time between frames (1/60 s) is sufficient for temporal integration.
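The IM frequencies reported throughout the tables (1.5, 3, 4.5, 9, 10.5, 13.5 Hz, …) are integer combinations n·f1 ± m·f2 of the two tagging frequencies. A minimal sketch enumerating them; the multiplier range and the frequency cutoff are illustrative assumptions, not the paper's analysis parameters:

```python
f1, f2 = 7.5, 6.0  # Hz: even- and odd-frame tagging frequencies

def im_frequencies(f1, f2, max_mult=3, f_max=30.0):
    """Sorted IM frequencies n*f1 +/- m*f2 (n, m >= 1) up to f_max,
    excluding pure harmonics of either fundamental."""
    harmonics = {round(k * f, 4) for f in (f1, f2) for k in range(1, 10)}
    ims = set()
    for n in range(1, max_mult + 1):
        for m in range(1, max_mult + 1):
            for f in (n * f1 + m * f2, abs(n * f1 - m * f2)):
                f = round(f, 4)
                if 0 < f <= f_max and f not in harmonics:
                    ims.add(f)
    return sorted(ims)

print(im_frequencies(f1, f2))
# includes 1.5, 3, 4.5, 9, 10.5, 13.5, ... -- the IM bins analyzed below
```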
Year: 2022 PMID: 34996892 PMCID: PMC8742062 DOI: 10.1038/s41598-021-02808-9
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Schematic explanation of the interlaced frequency-tagging approach. Left: Temporally interlaced frequency tagging (with f1 = 7.5 Hz and f2 = 6 Hz). In this tagging approach, the even and odd frames are sinusoidally contrast-modulated (between mid-grey and white) at two different frequencies: the even frames change their contrast at 7.5 Hz (moving along the blue sine wave) while the odd frames change their contrast at 6 Hz (moving along the red sine wave). Top right: Average response across all conditions. The interlaced frequency tagging yields strong fundamental/harmonic components (at multiples of f1 and f2). In the SNR spectrum, we observe not only prominent fundamentals and harmonics but also intermodulation components (IMs: n·f1 ± m·f2), which are specifically designed to measure temporal integration during dynamic face perception. Bottom right: Topographical distributions of the average SNR of the fundamental components. To generate the topographical map, we averaged the fundamental components (f1 and f2) separately for each condition and each electrode.
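The interlaced modulation described in Figure 1 can be sketched as follows; the 60 Hz refresh rate is taken from the abstract (1/60 s between frames), while the exact phase and the mapping of the sinusoid onto luminance are assumptions:

```python
import numpy as np

# Interlaced frequency tagging: at a 60 Hz refresh rate, even frames follow
# a 7.5 Hz sinusoid and odd frames a 6 Hz sinusoid, each oscillating between
# mid-grey (0.5) and white (1.0).
refresh = 60.0           # display refresh rate, Hz
f_even, f_odd = 7.5, 6.0  # tagging frequencies for even / odd frames
n_frames = 600           # 10 s of stimulation

t = np.arange(n_frames) / refresh          # onset time of each frame
even = np.arange(n_frames) % 2 == 0        # mask selecting even frames
contrast = np.empty(n_frames)
# map the sinusoid's [-1, 1] range onto [mid-grey, white] = [0.5, 1.0]
contrast[even] = 0.75 + 0.25 * np.sin(2 * np.pi * f_even * t[even])
contrast[~even] = 0.75 + 0.25 * np.sin(2 * np.pi * f_odd * t[~even])
```

Because the two sinusoids are sampled on strictly alternating frames, any response at n·f1 ± m·f2 can only arise if the visual system combines information across successive frames, which is what makes the IMs a marker of temporal integration.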
Figure 2. Topographical maps for the comparisons in Table 2. These maps show the classification accuracy at each channel alone when the selected frequencies in the IM spectrum are used. For instance, for the comparison sequence versus non-face, the selected frequencies are 1.5, 3, 4.5, 19.5, 10.5, 9, and 13.5 Hz, and the selected channels are POz, P8, F4, P5, CP4, Oz, PO4, and T8, which are indicated as black crosses in the map.
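The SNR spectra referenced in both figures are conventionally computed per frequency bin as the amplitude at that bin divided by the mean amplitude of surrounding bins. A minimal sketch of this common definition; the neighborhood size and the number of adjacent bins skipped are assumptions, not parameters reported here:

```python
import numpy as np

def snr_spectrum(amplitude, n_neighbors=10, skip=1):
    """SNR at each frequency bin: amplitude divided by the mean amplitude of
    n_neighbors bins on each side, skipping the bins immediately adjacent."""
    n = len(amplitude)
    snr = np.ones(n)
    for i in range(n):
        idx = [j for j in range(i - skip - n_neighbors, i + skip + n_neighbors + 1)
               if 0 <= j < n and abs(j - i) > skip]
        noise = amplitude[idx].mean()
        if noise > 0:
            snr[i] = amplitude[i] / noise
    return snr
```

A bin carrying a genuine steady-state response then stands out as a sharp SNR peak against a locally flat noise floor, which is what the fundamental, harmonic, and IM peaks in the figures represent.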
The pairwise comparisons of the conditions at the right and the left hemispheres for harmonics.
| Condition | Statistic (right) | p (right) | Statistic (left) | p (left) |
|---|---|---|---|---|
| Sequence versus non-face | 2.46 | 0.2 | 3.37 | 0.017* |
| Shuffle versus non-face | 4.32 | < 0.001*** | 4.43 | < 0.001*** |
| Reverse versus non-face | 3.82 | 0.004** | 3.38 | 0.017* |
| Fast versus non-face | 1.5 | 0.99 | 1.35 | 0.99 |
| Static versus non-face | 4.24 | < 0.001*** | 4.18 | < 0.001*** |
P values and confidence intervals are corrected using the Bonferroni method. *p < 0.05, **p < 0.01, ***p < 0.001.
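The Bonferroni correction applied in the table multiplies each raw p value by the number of comparisons (equivalently, divides the significance threshold), capping at 1. A minimal sketch; the example p values are illustrative, not the study's raw values:

```python
def bonferroni(p_values):
    """Bonferroni-corrected p values: multiply each by the number of tests,
    capping the result at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# e.g. five condition-pair comparisons within one hemisphere
raw = [0.04, 0.0008, 0.003, 0.2, 0.0001]
corrected = bonferroni(raw)
```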
Pairwise classification results of our multivariate pattern analysis are presented below.
| | Complete spectrum | Harmonic spectrum | Intermodulation spectrum |
|---|---|---|---|
| Chance 0.5098 | [0.6341, 0.7084] | [0.6282, 0.7028] | [0.5411, 0.6192] |
| Channels | POz, P4, FC4, P3, P8, C4, P6, P2, T7 | POz, P8, F4, AF8, P5, P2, P1 | P1, FC3, AFz, CP5, CP6, FC6, PO7, P4, F3 |
| Frequencies (Hz) | 6, 7.5, 21 | 6, 7.5, 30 | 1.5, 13.5, 3 |
| Chance 0.5034 | [0.6088, 0.6834] | [0.5095, 0.5871] | [0.5835, 0.6592] |
| Channels | CP3, Pz, PO8, F2, Iz, C2, C3 | C2, Oz, PO7, C6 | CP3, C3, Iz, P3, PO8, PO7, F4, P5, FC2 |
| Frequencies (Hz) | 3, 1.5, 12, 6, 16.5 | 6, 22.5, 12 | 3, 4.5, 1.5, 21, 16.5 |
| Chance 0.5017 | [0.5603, 0.6370] | [0.5204, 0.5980] | [0.5580, 0.6347] |
| Channels | P5, P6, PO4, AF4, P7 | O2, T7, F2, P5, FC5, FC1 | P6, PO4, P5, F2 |
| Frequencies (Hz) | 3, 28.5, 25.5 | 6, 7.5, 15 | 3, 19.5 |
| Chance 0.5006 | [0.7200, 0.7874] | [0.6753, 0.7462] | [0.6426, 0.7156] |
| Channels | Oz, P6, P5, POz, F4 | Oz, POz, P8, Iz, O1, PO4 | POz, P8, F4, P5, CP4, Oz, PO4, T8 |
| Frequencies (Hz) | 6, 3, 4.5, 22.5, 13.5, 9, 1.5, 25.5 | 6, 12, 7.5, 22.5 | 1.5, 3, 4.5, 19.5, 10.5, 9, 13.5 |
| Chance 0.5098 | [0.6567, 0.7297] | [0.6247, 0.6994] | [0.5822, 0.6589] |
| Channels | POz, P8, P1, PO4, F7 | POz, FC1, P1, PO4, F7 | P8, C1, CP4, Oz, F8, P5 |
| Frequencies (Hz) | 6, 7.5, 30, 25.5, 16.5, 27 | 6 | 3, 1.5, 4.5, 25.5, 9, 10.5 |
First row: classification accuracy ± standard deviation across subjects; second row: confidence interval for the reported accuracy; third row: identified channels; fourth row: selected frequency components in the corresponding spectrum, with the multiclass accuracy in the bottom row. Comparisons of other condition pairs are given in the supplementary material.
ECOC with one-versus-one scheme.
| Classes/classifiers | Classifier 1 | Classifier 2 | Classifier 3 |
|---|---|---|---|
| Class 1 | 1 | 0 | − 1 |
| Class 2 | − 1 | 1 | 0 |
| Class 3 | 0 | − 1 | 1 |
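In the one-versus-one ECOC scheme above, each column is a binary classifier trained on one pair of classes (0 means the class is ignored), and a test sample is assigned to the class whose codeword best agrees with the classifiers' decisions. A minimal decoding sketch for the 3-class matrix shown; the loss function is an assumption, since the paper's decoding rule is not stated here:

```python
# One-vs-one ECOC coding matrix from the table:
# rows = classes, columns = binary classifiers.
CODING = [
    [ 1,  0, -1],  # class 1
    [-1,  1,  0],  # class 2
    [ 0, -1,  1],  # class 3
]

def ecoc_decode(decisions, coding=CODING):
    """Return the 1-based class whose codeword best matches the signed
    classifier decisions, ignoring zero entries (a simple Hamming-style loss)."""
    def loss(codeword):
        return sum((1 - c * d) / 2 for c, d in zip(codeword, decisions) if c != 0)
    return min(range(len(coding)), key=lambda k: loss(coding[k])) + 1

# e.g. classifier 1 favors class 1 (+1), classifier 2 favors class 3 (-1),
# classifier 3 favors class 3 (+1): class 3 wins two of its votes
print(ecoc_decode([1, -1, 1]))  # prints 3
```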