David M Elder, Stephen Grossberg, Ennio Mingolla.
Abstract
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a three-dimensional virtual reality environment to determine the position of objects on the basis of motion discontinuities and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles so that the goal acts as an attractor of heading and obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas middle temporal, medial superior temporal, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. PsycINFO Database Record (c) 2009 APA, all rights reserved.
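The attractor-repeller idea in the abstract can be illustrated with a minimal steering-dynamics sketch: the heading angle is drawn toward the goal direction and pushed away from each obstacle, with the obstacle's influence falling off with angular distance. This is an illustrative toy model only; the functions `heading_rate` and `simulate`, and all gain and decay parameters (`k_goal`, `k_obs`, `c`), are assumptions for exposition, not the neural circuitry or fitted parameters of the paper.

```python
import math

def heading_rate(heading, goal_dir, obstacle_dirs,
                 k_goal=2.0, k_obs=1.5, c=4.0):
    """Return d(heading)/dt: goal attracts heading, obstacles repel it.

    All angles are in radians; gains are illustrative assumptions.
    """
    # Goal acts as an attractor: turn toward the goal direction.
    rate = -k_goal * (heading - goal_dir)
    # Each obstacle acts as a repeller whose influence decays
    # exponentially with angular distance from the current heading.
    for obs in obstacle_dirs:
        delta = heading - obs
        rate += k_obs * delta * math.exp(-c * abs(delta))
    return rate

def simulate(heading, goal_dir, obstacle_dirs, dt=0.01, steps=2000):
    # Simple Euler integration of the heading dynamics.
    for _ in range(steps):
        heading += dt * heading_rate(heading, goal_dir, obstacle_dirs)
    return heading
```

With no obstacles the heading converges to the goal direction; placing an obstacle near the goal shifts the equilibrium heading slightly away from it, which is the qualitative route-selection behavior the model captures.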
Year: 2009 PMID: 19803653 DOI: 10.1037/a0016459
Source DB: PubMed Journal: J Exp Psychol Hum Percept Perform ISSN: 0096-1523 Impact factor: 3.332