Otto Lappi, Jami Pekkanen, Paavo Rinkkala, Samuel Tuhkanen, Ari Tuononen, Juho-Pekka Virtanen.
Abstract
It is well established that visual stimuli and self-motion in laboratory conditions reliably elicit retinal-image-stabilizing compensatory eye movements (CEM). Their organization and roles in natural-task gaze strategies are much less understood: are CEM applied in active sampling of visual information in human locomotion in the wild? If so, how? And what are the implications for guidance? Here, we directly compare gaze behavior in the real world (driving a car) and in a fixed-base simulator steering task. A strong and quantifiable correspondence between self-rotation and CEM counter-rotation is found across a range of speeds. This gaze behavior is "optokinetic", i.e. optic flow is a sufficient stimulus to spontaneously elicit it in naïve subjects, and vestibular stimulation or stereopsis is not critical. Theoretically, the observed nystagmus behavior is consistent with tracking waypoints on the future path, and predicted by waypoint models of locomotor control – but inconsistent with travel point models, such as the popular tangent point model.
Year: 2020 PMID: 32144287 PMCID: PMC7060325 DOI: 10.1038/s41598-020-60531-3
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Schematic illustration of the egocentric visual flow field generated by different types of observer motion (A–C), with different steering points posited in the literature (D,E). (A) During linear translation on a plane, radial visual flow in the lower visual field (below the horizon) and a focus of expansion (FOE) coinciding with current heading. (B) During horizontal rotation, an observer experiences uniform horizontal flow. Counter-rotating gaze at a 1:1 ratio will stabilize the retinal image. (C) Combining translation and rotation generates a more complex flow. Note that it lacks a FOE, and that the flow is not uniform, meaning the whole flow field cannot be retinally stabilized by compensatory eye rotation. (D) Multiple waypoints (WP1–WP3) on the future path. Left: A waypoint follows the local visual flow. Tracking a waypoint requires counter-rotating gaze (an optokinetic pursuit eye movement). The vertical component of flow is proportional to the sine of vertical elevation for all points. The horizontal component varies across the visual field – but for all waypoints on the circular future path it is ½ the observer rotation. Right: This property follows from waypoints being fixed in the 3D scene through which the observer is moving. (E) Travel points on the future path and lane edge. Travel points are stable in the visual field. Fixating them implies no eye movement. The tangent point on the lane edge (TP) (cf. [10]) lies on a zero horizontal flow circle – a region of the visual flow field with no horizontal flow on a constant radius path [36]. FP is a future path travel point in the Far Zone beyond the tangent point (cf. [12,13]). Right: Travel points, as their name suggests, move in the 3D scene together with the observer.
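The ½-rotation property of waypoints on a circular path (panel D) follows from the tangent–chord angle theorem: the chord from the observer to any point fixed on the circle makes an angle with the local tangent (the heading) equal to half the intercepted arc, so the egocentric bearing of a waypoint rotates at half the yaw rate. A minimal numerical check (an illustrative Python sketch; the radius, yaw rate and waypoint position are arbitrary assumptions, not values from the paper):

```python
import numpy as np

# Observer travels a circular path of radius R at constant yaw rate omega.
# A waypoint is a point fixed on that same path, ahead of the observer.
R = 30.0          # path radius (m), arbitrary
omega = 0.2       # observer yaw rate (rad/s), arbitrary
wp_angle = 1.0    # waypoint's fixed angular position on the circle (rad)

def bearing_to_waypoint(t):
    """Egocentric bearing of the waypoint relative to heading at time t."""
    theta = omega * t                                   # observer's angular position
    pos = R * np.array([np.cos(theta), np.sin(theta)])  # observer position
    heading = theta + np.pi / 2                         # tangent direction (CCW travel)
    wp = R * np.array([np.cos(wp_angle), np.sin(wp_angle)])
    d = wp - pos
    # wrap the angle difference into (-pi, pi]
    return (np.arctan2(d[1], d[0]) - heading + np.pi) % (2 * np.pi) - np.pi

# Numerical derivative of the bearing: ~ -omega/2 at every instant,
# i.e. tracking the waypoint requires counter-rotation at half the yaw rate
dt = 1e-4
for t in (0.0, 1.0, 2.0):
    rate = (bearing_to_waypoint(t + dt) - bearing_to_waypoint(t)) / dt
    print(rate / omega)   # ~ -0.5 for all t
```

The same ratio holds regardless of which waypoint on the circle is tracked, which is why the prediction applies to WP1–WP3 alike.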
Figure 2. A view of the head-mounted eye tracker scene camera in the test track experiment (top) and the simulator experiment (bottom), showing the participant's view of the constant radius (circular) path. The red dot indicates the point of regard. The eye tracker calibrates eye images to the head frame of reference. As the participant rolls her head, the world horizontal axis is tilted. To estimate the horizontal axis of visual flow – i.e. the horizon of the plane of translation – the screen horizontal axis in the simulator, and the car's horizontal axis on the test track, were defined based on the visual markers seen on the windscreen, dashboard and screen image. (Text and arrows not visible during the experiment.)
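Undoing head roll to express gaze in the car/screen reference frame amounts to a 2D rotation of the head-referenced gaze coordinates by the measured roll angle. A hedged sketch of that correction (not the authors' code; the function and values are illustrative):

```python
import numpy as np

def rotate2d(points, deg):
    """Rotate 2D points counterclockwise by deg degrees."""
    r = np.deg2rad(deg)
    rot = np.array([[np.cos(r), -np.sin(r)],
                    [np.sin(r),  np.cos(r)]])
    return np.asarray(points) @ rot.T

# A world-horizontal gaze offset appears tilted in the head frame
# when the head is rolled by 10 degrees:
head_roll = 10.0
in_head_frame = rotate2d([[1.0, 0.0]], head_roll)
# Rotating back by the measured roll recovers the component along
# the car's (or screen's) horizontal axis:
in_car_frame = rotate2d(in_head_frame, -head_roll)
print(in_car_frame)   # ~ [[1., 0.]]
```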
Figure 3. Pursuit identification from gaze position data. Time series graph from an example clockwise run – i.e. turning right – in the simulator experiment. Horizontal (in the screen reference frame) gaze position is plotted on the y axis against time on the x axis. Positive y values indicate direction to the right. Red lines are linear segments fitted with the NSLR algorithm to identify the tracking fixations – i.e. to parse the raw data into pursuit segments. The linear segment slope at each time point was used as the horizontal gaze rotation estimate.
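NSLR itself is the authors' segmentation algorithm; as a simplified stand-in, the core idea – the slope of a linear segment fitted to gaze position serves as the gaze velocity estimate – can be illustrated with fixed-window least-squares fits (an illustrative Python sketch, not NSLR; window length and the synthetic data are assumptions):

```python
import numpy as np

# Simplified stand-in for NSLR segmentation: fit a straight line to gaze
# position within short windows and use the slope as the horizontal gaze
# velocity estimate for that window.
def windowed_slopes(t, gaze_x, window=0.5):
    """Least-squares slope (deg/s) of gaze position in consecutive windows."""
    slopes = []
    start = t[0]
    while start < t[-1]:
        m = (t >= start) & (t < start + window)
        if m.sum() >= 2:
            slope, _ = np.polyfit(t[m], gaze_x[m], 1)
            slopes.append(slope)
        start += window
    return slopes

# Synthetic pursuit: gaze drifting left at -3 deg/s with small noise
rng = np.random.default_rng(0)
t = np.arange(0, 2, 0.01)
gaze_x = -3.0 * t + rng.normal(0, 0.05, t.size)
print(windowed_slopes(t, gaze_x))   # values near -3.0
```

Unlike this fixed-window sketch, NSLR places segment boundaries adaptively, so saccades and pursuits end up in separate segments.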
Figure 4. Typical individual participant data from the test track experiment. Top. Horizontal gaze angle time series from one trial. Blue dots are raw gaze positions, red lines the linear segments fitted by the fixation identification algorithm. Middle. Pursuit slopes, i.e. gaze velocities (red dots), and the negative of half the yaw rate (green line) from the same trial. Bottom. All data from the individual participant. Blue dots are individual pursuits plotted against average yaw rate at the time of the pursuit. Red line is the orthogonal regression fitted to the data. Black diagonal indicates the ½:1 prediction.
Figure 5. Typical individual participant data from the simulator experiment. Top. Horizontal gaze angle time series from one trial. Blue dots are raw gaze positions, red lines the linear segments fitted by the fixation identification algorithm. Constant acceleration of the car causes the rate of rotation to increase, and the effect of this can be seen as steeper slopes for the pursuits later in the series. Middle. Pursuit slopes, i.e. gaze velocities (red dots), and the negative of half the yaw rate (green line) from the same trial. Bottom. All data from the individual participant. Blue dots are individual pursuits plotted against average yaw rate at the time of the pursuit. Red line is the orthogonal regression fitted to the data. Black diagonal indicates the ½:1 prediction.
Figure 6. Fixation-level horizontal gaze velocity (i.e. NSLR-identified pursuits) plotted against yaw rate. All participants' data. Left. Test track experiment (n = 4). Right. Simulator experiment (n = 15). The orthogonal regression line (red) is close to the ½:1 prediction (black diagonal). This is especially true for the simulator data; the test track data has fewer data points in the lower-speed region and overall shows more variance than the simulator data.
Horizontal gaze rotation rate vs. ½ vehicle rotation rate: orthogonal regression parameters for individual participants.
| Experiment | Participant | Regression slope | Regression intercept |
|---|---|---|---|
| Test track | 1 | 0.12 | |
| Test track | 2 | | |
| Test track | 3 | 1.3 | |
| Test track | 4 | 1.8 | |
| Test track | All | 0.66 | |
| Simulator | 1 | 1.18 | |
| Simulator | 2 | | |
| Simulator | 3 | | |
| Simulator | 4 | 0.17 | |
| Simulator | 5 | 0.52 | |
| Simulator | 6 | 0.94 | |
| Simulator | 7 | 0.19 | |
| Simulator | 8 | | |
| Simulator | 9 | 0.8 | |
| Simulator | 10 | 0.51 | |
| Simulator | 11 | 0.2 | |
| Simulator | 12 | 0.32 | |
| Simulator | 13 | 0.59 | |
| Simulator | 14 | | |
| Simulator | 15 | 1.46 | |
| Simulator | All | 0.32 | |
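The orthogonal regression reported above differs from ordinary least squares in that it minimizes perpendicular (not vertical) distances, which is appropriate when both gaze rate and yaw rate carry measurement noise. It can be sketched as total least squares via the principal axis of the centered data cloud (an illustrative Python sketch, not the authors' analysis code; the synthetic data are assumptions):

```python
import numpy as np

# Orthogonal (total least squares) regression of gaze rotation rate
# against 1/2 vehicle rotation rate, via the first principal axis.
def orthogonal_regression(x, y):
    """Return (slope, intercept) minimizing perpendicular distances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    data = np.column_stack([x - x.mean(), y - y.mean()])
    # First right singular vector = direction of maximal variance
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    dx, dy = vt[0]
    slope = dy / dx
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Noise-free check: gaze rate exactly matching 1/2 yaw rate should
# recover a slope of 1 (the prediction in the table above)
half_yaw = np.linspace(2, 15, 50)    # 1/2 yaw rate (deg/s), synthetic
gaze_rate = 1.0 * half_yaw           # ideal counter-rotation magnitude
print(orthogonal_regression(half_yaw, gaze_rate))   # ~ (1.0, 0.0)
```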
Test track experiment participants.
| N | Age | Driving experience, past 12 months | Driving experience, total |
|---|---|---|---|
| 4 (4 M) | Range 24–39 y, median 28 y | Range 15,000–30,000 km; median between 15,000–20,000 km | Range 100,000–500,000 km; median between 100,000–300,000 km |
Simulator experiment participants.
| N | Age | Driving experience, past 12 months | Driving experience, total |
|---|---|---|---|
| 15 (9 W, 6 M) | Range 24–46 y, median 28 y | Range 1,000–30,000 km; median between 5,000–10,000 km | Range 1,000–500,000 km; median between 30,000–100,000 km |
Figure 7. An example of typical locations of the calibration points in the driver's visual field (red crosses). Instead of multiple targets, we used a single target on a tripod about 7 m in front of the car (calibration point 1), and the participant was instructed to turn their head in a pre-determined sequence while holding fixation on the target, which brought the calibration target into different parts of the head-referenced visual field.