Andreas Schindler, Andreas Bartels.
Abstract
A key question in vision research concerns how the brain compensates for self-induced eye and head movements to form the world-centered, spatiotopic representations we perceive. Although human V3A and V6 integrate eye movements with vision, it is unclear which areas integrate head-motion signals with visual retinotopic representations, as fMRI typically prevents head movements. Here we examined whether human visual areas V3A and V6 integrate these signals. A previously introduced paradigm allowed participants to move their heads during trials but stabilized the head during data acquisition, exploiting the delay between neural activity and the blood-oxygen-level-dependent (BOLD) signal. Visual stimuli simulated either a stable environment or one with arbitrary head-coupled visual motion. Importantly, both conditions were matched in retinal and head motion. Contrasts revealed differential responses in human V6. Given the lack of vestibular responses in primate V6, these results suggest multi-modal integration of visual signals with neck efference copies or proprioception in V6.
Keywords: Neuroscience; Sensory Neuroscience; Techniques in Neuroscience
Year: 2018 PMID: 30267680 PMCID: PMC6153141 DOI: 10.1016/j.isci.2018.09.004
Source DB: PubMed Journal: iScience ISSN: 2589-0042
Figure 1. Illustration of Visual Stimuli and Head-Rotation Task, BOLD Signal Acquisition during a Trial, and Experimental Paradigm
(A) Observers performed voluntary head rotations while being approached by a simulated 3D dot cloud in both congruent and incongruent conditions. In the congruent condition, head rotation (α) led to cloud rotation in the opposite direction (−α), as would be experienced when moving forward through a stable environment while looking around. In the incongruent condition, the cloud rotated in the same direction as the head (α), resulting in perceptually arbitrary motion of the environment. Note that retinal flow as well as head motion were matched across both conditions.
(B) Model of the evoked BOLD time course as predicted by the paradigm in (C). Stimulus presentation and active head movements induced BOLD signals during the trial phase (green shade) while the slow dynamics of the BOLD signal allowed acquisition of these responses even after stimulus offset, at a time when the observer's head was stabilized (acquisition phase, red shade).
(C) Each trial started with an instruction phase when air cushions were emptied. In the trial phase, green arrowheads guided the observer's head rotation. In the acquisition phase, air cushions were inflated again to record BOLD responses. Observers performed a demanding fixation task across all phases and conditions, except the instruction phase (see Methods).
Figure 2. Univariate and Multivariate Differences between Congruent and Incongruent Combinations of Visual and Extra-Retinal Signals in Visual Cortex
(A) A contrast between both conditions revealed significant responses to the congruent combination of both cues in area V6, whereas V3A and early visual areas showed no differential activation.
(B) A subsequent multivariate pattern analysis found robust classification of both conditions in area V6, whereas effects in V3A did not survive multiple comparison correction. Patterns in early visual cortex did not distinguish between congruent and incongruent conditions. Dashed lines indicate chance level.
+p < 0.05, uncorrected; *p < 0.05, FWE-corrected; **p < 0.005, FWE-corrected. Error bars indicate SEM. See Figure S1 for responses to an additional “head only” condition and Figure S2 for retinotopic area definitions of example observers.