
A Model of the Superior Colliculus Predicts Fixation Locations during Scene Viewing and Visual Search.

Hossein Adeli, Françoise Vitu, Gregory J Zelinsky.

Abstract

Modern computational models of attention predict fixations using saliency maps and target maps, which prioritize locations for fixation based on feature contrast and target goals, respectively. But whereas many such models are biologically plausible, none have looked to the oculomotor system for design constraints or parameter specification. Conversely, although most models of saccade programming are tightly coupled to underlying neurophysiology, none have been tested using real-world stimuli and tasks. We combined the strengths of these two approaches in MASC, a model of attention in the superior colliculus (SC) that captures known neurophysiological constraints on saccade programming. We show that MASC predicted the fixation locations of humans freely viewing naturalistic scenes and performing exemplar and categorical search tasks, a breadth achieved by no other existing model. Moreover, it did this as well or better than its more specialized state-of-the-art competitors. MASC's predictive success stems from its inclusion of high-level but core principles of SC organization: an over-representation of foveal information, size-invariant population codes, cascaded population averaging over distorted visual and motor maps, and competition between motor point images for saccade programming, all of which cause further modulation of priority (attention) after projection of saliency and target maps to the SC. Only by incorporating these organizing brain principles into our models can we fully understand the transformation of complex visual information into the saccade programs underlying movements of overt attention. 
With MASC, a theoretical footing now exists to generate and test computationally explicit predictions of behavioral and neural responses in visually complex real-world contexts.

SIGNIFICANCE STATEMENT: The superior colliculus (SC) performs a visual-to-motor transformation vital to overt attention, but existing SC models cannot predict saccades to visually complex real-world stimuli. We introduce a brain-inspired SC model that outperforms state-of-the-art image-based competitors in predicting the sequences of fixations made by humans performing a range of everyday tasks (scene viewing and exemplar and categorical search), making clear the value of looking to the brain for model design. This work is significant in that it will drive new research by making computationally explicit predictions of SC neural population activity in response to naturalistic stimuli and tasks. It will also serve as a blueprint for the construction of other brain-inspired models, helping to usher in the next generation of truly intelligent autonomous systems.
Copyright © 2017 the authors 0270-6474/17/371453-15$15.00/0.
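The abstract names two core SC organizing principles: a distorted (foveally magnified, logarithmic) visuomotor map and population averaging over motor point images. As an illustrative sketch only (the abstract does not give MASC's implementation), the snippet below combines the widely used logarithmic collicular mapping of Ottes et al. (1986) with a simple activity-weighted population average; the parameter values A, Bu, Bv are the commonly cited ones, assumed here for illustration.

```python
import numpy as np

# Assumed standard parameters of the Ottes et al. (1986) SC mapping,
# not values taken from the MASC paper itself.
A, BU, BV = 3.0, 1.4, 1.8  # A in deg; Bu, Bv in mm

def visual_to_sc(R, phi):
    """Map retinal eccentricity R (deg) and direction phi (rad) to SC
    surface coordinates (u, v) in mm. The logarithmic compression
    over-represents foveal space, as the abstract describes."""
    u = BU * np.log(np.sqrt(R**2 + 2 * A * R * np.cos(phi) + A**2) / A)
    v = BV * np.arctan2(R * np.sin(phi), R * np.cos(phi) + A)
    return u, v

def population_average(points, weights):
    """Read out a saccade endpoint as the activity-weighted average of
    motor-map point images (a toy stand-in for the cascaded averaging
    and competition the abstract refers to)."""
    w = np.asarray(weights, dtype=float)
    p = np.asarray(points, dtype=float)
    return (w[:, None] * p).sum(axis=0) / w.sum()

# Two equally active motor-map sites average to their midpoint -- the
# classic "global effect" of averaging saccades.
endpoint = population_average([[1.0, 0.0], [3.0, 0.0]], [1.0, 1.0])
```

Note how the log map alone yields foveal magnification: equal steps in eccentricity near the fovea span more collicular tissue than the same steps in the periphery, so any averaging performed on the map is automatically biased toward central vision.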

Keywords:  attention; computational models; eye movements; scene viewing; superior colliculus; visual search

Year:  2016        PMID: 28039373      PMCID: PMC6705681          DOI: 10.1523/JNEUROSCI.0825-16.2016

Source DB:  PubMed          Journal:  J Neurosci        ISSN: 0270-6474            Impact factor:   6.167


  8 in total

1.  Saccadic inhibition interrupts ongoing oculomotor activity to enable the rapid deployment of alternate movement plans.

Authors:  Emilio Salinas; Terrence R Stanford
Journal:  Sci Rep       Date:  2018-09-21       Impact factor: 4.379

2.  The effect of target salience and size in visual search within naturalistic scenes under degraded vision.

Authors:  Antje Nuthmann; Adam C Clayden; Robert B Fisher
Journal:  J Vis       Date:  2021-04-01       Impact factor: 2.240

3.  Semantic object-scene inconsistencies affect eye movements, but not in the way predicted by contextualized meaning maps.

Authors:  Marek A Pedziwiatr; Matthias Kümmerer; Thomas S A Wallis; Matthias Bethge; Christoph Teufel
Journal:  J Vis       Date:  2022-02-01       Impact factor: 2.240

4.  DeepGaze III: Modeling free-viewing human scanpaths with deep learning.

Authors:  Matthias Kümmerer; Matthias Bethge; Thomas S A Wallis
Journal:  J Vis       Date:  2022-04-06       Impact factor: 2.004

5.  Visual attention is not deployed at the endpoint of averaging saccades.

Authors:  Luca Wollenberg; Heiner Deubel; Martin Szinte
Journal:  PLoS Biol       Date:  2018-06-25       Impact factor: 8.029

6.  Occluded information is restored at preview but not during visual search.

Authors:  Robert G Alexander; Gregory J Zelinsky
Journal:  J Vis       Date:  2018-10-01       Impact factor: 2.240

7.  Autism Pathogenesis: The Superior Colliculus.

Authors:  Rubin Jure
Journal:  Front Neurosci       Date:  2019-01-09       Impact factor: 4.677

8.  Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty.

Authors:  Souradeep Chakraborty; Dimitris Samaras; Gregory J Zelinsky
Journal:  J Vis       Date:  2022-03-02       Impact factor: 2.004

