| Literature DB >> 35927277 |
Oliver W Layton1,2, Brett R Fajen3.
Abstract
Self-motion along linear paths without eye movements creates optic flow that radiates from the direction of travel (heading). Optic flow-sensitive neurons in primate brain area MSTd have been linked to linear heading perception, but the neural basis of more general curvilinear self-motion perception is unknown. The optic flow in this case is more complex and depends on the gaze direction and curvature of the path. We investigated the extent to which signals decoded from a neural model of MSTd predict the observer's curvilinear self-motion. Specifically, we considered the contributions of MSTd-like units that were tuned to radial, spiral, and concentric optic flow patterns in "spiral space". Self-motion estimates decoded from units tuned to the full set of spiral space patterns were substantially more accurate and precise than those decoded from units tuned to radial expansion. Decoding only from units tuned to spiral subtypes closely approximated the performance of the full model. Only the full decoding model could account for human judgments when path curvature and gaze covaried in self-motion stimuli. The most predictive units exhibited bias in center-of-motion tuning toward the periphery, consistent with neurophysiology and prior modeling. Together, findings support a distributed encoding of curvilinear self-motion across spiral space.Entities:
Mesh:
Year: 2022 PMID: 35927277 PMCID: PMC9352735 DOI: 10.1038/s41598-022-16371-4
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1 Sample optic flow vector fields and patterns in the spiral space continuum. (A) Optic flow experienced by an observer moving straight ahead toward a frontoparallel wall. A stationary observer that performs a downward (B) or rightward (C) eye movement experiences approximately laminar optic flow due to pitch and yaw rotation, respectively. A stationary observer that rotates the head about the line of sight experiences CW (D) or CCW (E) concentric motion due to roll rotation. (F) Sample optic flow patterns in spiral space. The “spirality” metric defines the position in the spiral space and the proportion of roll rotation that is added to the radial translation field (center). Negative (positive) spirality yields CCW (CW) motion. (G–J) Optic flow fields generated by different types of linear and curvilinear self-motion over a ground plane. (G) Self-motion along straight-ahead (left) and rightward (right) linear trajectories. (H) Curvilinear self-motion along circular paths with different radii. (I) Curvilinear self-motion along CW (left) and CCW (right) 5 m radius circular paths. (J) Curvilinear self-motion along 5 m radius circular paths with the observer gaze offset from the path tangent (left: gaze directed outside the future path, center: gaze along path tangent as in (H,I); right: gaze directed inside the future path).
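The spiral space described in Figure 1F can be illustrated with a small sketch: each pattern is a mixture of a radial expansion field and a concentric (roll) field, with the mixing proportion set by the spirality. This is an illustrative reconstruction of the idea, not the paper's code; the grid size and the linear mixing rule are assumptions.

```python
import numpy as np

def spiral_flow(spirality, grid_n=16, extent=1.0):
    """Sketch of a spiral-space optic flow pattern: a weighted mix of a
    radial expansion field and a concentric (roll) field. 'spirality'
    lies in [-1, 1]; negative values give CCW rotation, 0 is pure
    radial expansion, and ±1 is a pure CW/CCW center pattern.
    (Illustrative assumption; the model's exact construction may differ.)"""
    xs = np.linspace(-extent, extent, grid_n)
    x, y = np.meshgrid(xs, xs)
    r = np.sqrt(x**2 + y**2) + 1e-9          # avoid division by zero at the CoM
    ur, vr = x / r, y / r                     # unit radial (expansion) component
    s = np.sign(spirality)                    # sign selects CW vs CCW rotation
    uc, vc = -s * y / r, s * x / r            # unit concentric (roll) component
    w = abs(spirality)                        # proportion of roll added to translation
    return (1 - w) * ur + w * uc, (1 - w) * vr + w * vc
```

Shifting the grid origin corresponds to changing the pattern's preferred center of motion (CoM), the other tuning dimension of the model's MSTd units.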
Figure 2Overview of the neural encoder-decoder system. Optic flow generated by simulated self-motion (left) is passed through a neural model of areas MT and MSTd (center), which contains units tuned to optic flow patterns sampled from spiral space with different preferred centers of motion. The activation of the MSTd units is decoded (right) to recover parameters describing the observer’s curvilinear self-motion (path curvature, gaze offset, and path sign).
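The decoding stage in Figure 2 maps MSTd unit activations to self-motion parameters. A minimal sketch of one such readout, assuming a simple linear decoder fit by least squares (the paper's decoder and its regularization may differ; the sizes and synthetic data here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for MSTd population activations and one self-motion
# parameter (e.g. gaze offset); sizes are illustrative assumptions.
n_stimuli, n_units = 500, 84
X = rng.random((n_stimuli, n_units))       # activation of each unit per stimulus
true_w = rng.normal(size=n_units)
y = X @ true_w                             # parameter to recover (noiseless here)

# Fit linear readout weights, then decode the parameter back
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w
mae = np.mean(np.abs(y_hat - y))           # mean absolute decoding error
```

In the paper's framework, separate decoders of this kind recover path curvature, gaze offset, and path sign from the same population activity.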
Figure 3 Error in the gaze offset (A,B) and path curvature (C–F) estimates produced by the Full model (top row) and Radial Only model (bottom row). (E,F) Error in path curvature is expressed with respect to the difference in visual angle between the true and predicted path measured at a depth of 10 m. The Full model contains units tuned to heterogeneous spiral space patterns, while the Radial Only model contains only units tuned to radial expansion.
Figure 4 Comparisons between human judgments and estimates produced by the Full and Radial Only decoding models. (A) Schematic showing error in path curvature expressed as the angular difference in visual angle subtended by the estimated and true paths (“path error”). (B) Comparison between the path error produced by human subjects in the “gaze-along-heading” condition from Li and Cheng[27] and that produced by the decoding models. In this condition, the gaze offset is 0°. Positive (negative) path error corresponds to an overestimation (underestimation) of path curvature. (C) Comparison between the gaze offset error produced by human subjects in the “envelop” condition from Burlingham and Heeger[28] and that produced by the decoding models. In this condition, gaze offset covaried with path curvature. X values are staggered for visual clarity.
Figure 5Spatial and pattern tuning properties of gaze offset and path curvature decoding models. (Top row) Histograms showing the number of cells tuned to CoM positions in the different regions of the visual field models included in the gaze offset (A) and path curvature (B) decoding models. (C) The proportion of the total regression weight accounted for by units tuned to different optic flow patterns (spirality) in each decoding model. (D) Proportion of cells tuned to different patterns included in each decoding model. Units are grouped in (C,D) based on the absolute value of spirality.
Figure 6 Comparison of decoding models. Top row: the MAE for gaze offset (A) and path curvature (B) parameter estimates. Bottom row: AIC computed for each of the models. In addition to the Full and Radial Only models, we considered a version of the Full model constrained to have the same number of nonzero weights as the Radial Only model (i.e. the same number of free parameters; Full Limited). The Lowest Third, Middle Third, and Upper Third correspond to models that consist only of units tuned to the lowest third (nearly radial), middle third, and upper third (nearly concentric) of spiralities in the spiral space. Concentric Only indicates a model that consists only of concentric units. Error bars correspond to ± 1 standard deviation across MAE estimates obtained over 50 bootstraps. The small error indicates robustness to the specific set of optic flow stimuli included in the fitting process of each decoding model.
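The model-comparison metrics in Figure 6 can be sketched as follows. The AIC form below assumes Gaussian errors (n·ln(RSS/n) + 2k); the paper does not state its exact computation, so treat both helpers as illustrative.

```python
import numpy as np

def aic_gaussian(residuals, k):
    """AIC under an assumed Gaussian error model: n*ln(RSS/n) + 2k,
    where k is the number of free (nonzero) decoder weights."""
    residuals = np.asarray(residuals, dtype=float)
    n = len(residuals)
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * k

def bootstrap_mae(errors, n_boot=50, seed=0):
    """Mean and standard deviation of the MAE over bootstrap resamples
    of per-stimulus errors, mirroring the ±1 SD bars over 50 bootstraps
    in Figure 6 (resampling scheme assumed)."""
    rng = np.random.default_rng(seed)
    errors = np.asarray(errors, dtype=float)
    maes = [np.mean(np.abs(rng.choice(errors, size=len(errors), replace=True)))
            for _ in range(n_boot)]
    return float(np.mean(maes)), float(np.std(maes))
```

The AIC penalty term is what makes the Full Limited comparison meaningful: constraining the Full model to the Radial Only model's parameter count equalizes k across models.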
Figure 7Overview of the Competitive Dynamics neural model of MT-MSTd. (A) Model area MT contains direction and speed tuned units. Units in model area MSTd integrate motion signals from MT consistent with the preferred motion pattern and compete with one another. MSTd units are tuned to full and hemi-field motion in a pattern continuum: radial, CCW spirals, CCW centers, CW spirals, CW centers, and ground flow. (B) Example MT direction tuning curves. (C) Example MT speed tuning curves.
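The MT tuning curves in Figure 7B,C can be sketched with standard forms: von Mises tuning over direction and a Gaussian over log speed with a small offset to avoid the singularity at zero speed (consistent with the parameter table below). Both functional forms are assumptions; parameter values here are placeholders.

```python
import numpy as np

def direction_tuning(theta, theta_pref, kappa=3.0):
    """Von Mises direction tuning curve, normalized to peak at 1 when
    theta equals the preferred direction (bandwidth parameter kappa;
    cf. the direction tuning bandwidth entry in the parameter table)."""
    return np.exp(kappa * (np.cos(theta - theta_pref) - 1.0))

def speed_tuning(s, s_pref, sigma=1.0, s0=0.1):
    """Log-Gaussian speed tuning: Gaussian in log speed with offset s0
    preventing the singularity in the speed logarithm (assumed form,
    matching the offset parameter entry in the parameter table)."""
    return np.exp(-(np.log((s + s0) / (s_pref + s0)) ** 2) / (2 * sigma ** 2))
```

With 24 preferred directions and 5 preferred speeds (parameter table), each MT location hosts a 24 × 5 bank of such units.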
Parameters used in the neural model.
| Parameter | Description | Value |
|---|---|---|
| – | Number of preferred speeds | 5 |
| – | Speed tuning bandwidth | Normal( |
| – | Non-negative offset parameter to prevent singularity in speed logarithm | Exp( |
| – | Preferred speed of MT neuron | |
| – | Number of preferred directions | 24 |
| – | Shape of MT+ speed layers | 64 × 64 × 24 × 5 |
| – | Von Mises direction tuning curve bandwidth parameter | 3° |
| – | MT+ passive decay rate | 1 |
| – | Depressing synapse time constant | 0.1 |
| – | Rate at which the synaptic efficacy decreases | 10 |
| – | Number of units tuned to CoM positions for each motion pattern (radial, spiral, etc.), arranged in a regular grid across the visual field | 16 × 16 = 256 |
| – | Spirality of patterns to which spiral cells are tuned. Negative (positive) values produce CCW (CW) spirals | ± [0.0, 0.05, …, 1.0] |
| – | Number of CW and CCW spiral patterns to which units are tuned | 21 (CW) + 21 (CCW) = 42 |
| – | Indices of units in the pattern tuning continuum. Units at the endpoints exhibit tuning to radial expansion or CW/CCW center patterns | CCW full-field spiral (decreasing spirality): 1–21; CW full-field spiral (increasing spirality): 22–42; CW hemi-field spiral (decreasing spirality): 43–63; CCW hemi-field spiral (increasing spirality): 64–84 |
| – | Shape of each MSTd layer | 16 × 16 × 84 |
| – | Number of MT+ connections within each MSTd pattern template | 200 |
| – | Rate at which MT+ connection weights decrease with distance from the preferred CoM position | 1e−3 |
| – | Layer 1a unit passive decay rate | 1 |
| – | Layer 1b Gaussian 2D spatial smoothing kernel horizontal and vertical standard deviations | [5, 5] |
| – | Diameter of Layer 1b Gaussian 2D spatial smoothing kernel | 8 |
| – | Layer 1b Gaussian 1D pattern smoothing standard deviation | 1.5 |
| – | Diameter of Layer 1b Gaussian 1D pattern smoothing kernel | 7 |
| – | Layer 1b unit passive decay rate | 1 |
| – | Layer 2 net input adaptive threshold passive decay rate | 1 |
| – | Layer 2 unit passive decay rate | 10 |
| – | Layer 2 unit excitatory upper bound | 3 |