
Disentangling bottom-up versus top-down and low-level versus high-level influences on eye movements over time.

Heiko H Schütt, Lars O M Rothkegel, Hans A Trukenbrod, Ralf Engbert, Felix A Wichmann

Abstract

Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
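The abstract's central analysis is time-resolved: model predictions are scored separately for the 1st, 2nd, ..., n-th fixation of each trial to see when low-level, high-level, and top-down factors dominate. A minimal sketch of that kind of evaluation is below, assuming a pixel-based saliency map treated as a fixation density; the function name, toy central-bias map, and random scanpaths are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def loglik_by_fixation_index(saliency, fixations):
    """Mean log-likelihood of fixated pixels under a saliency density,
    grouped by fixation index (1st, 2nd, ... fixation within each trial).

    saliency  : 2D non-negative array; normalized to a probability density here.
    fixations : list of trials, each an array of (row, col) pixel coordinates
                in temporal order.
    """
    density = saliency / saliency.sum()   # normalize so the map sums to 1
    max_len = max(len(trial) for trial in fixations)
    sums = np.zeros(max_len)
    counts = np.zeros(max_len)
    for trial in fixations:
        for i, (r, c) in enumerate(trial):
            sums[i] += np.log(density[r, c])
            counts[i] += 1
    return sums / counts                  # one mean log-likelihood per index

# Toy example: a Gaussian central-bias map and uniformly random scanpaths.
rng = np.random.default_rng(0)
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
sal = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 12.0 ** 2))
trials = [rng.integers(0, h, size=(10, 2)) for _ in range(20)]
ll = loglik_by_fixation_index(sal, trials)
print(ll.shape)  # → (10,)
```

Comparing such per-index curves between models (e.g. a spatial-vision-based map versus a deep-network map) and between tasks (memorization versus search) is the kind of contrast that separates the three phases the abstract reports.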


Year: 2019    PMID: 30821809    DOI: 10.1167/19.3.1

Source DB: PubMed    Journal: J Vis    ISSN: 1534-7362    Impact factor: 2.240


Related articles: 9 in total

1.  Active vision in sight recovery individuals with a history of long-lasting congenital blindness.

Authors:  José P Ossandón; Paul Zerr; Idris Shareef; Ramesh Kekunnaya; Brigitte Röder
Journal:  eNeuro       Date:  2022-09-26

2.  Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation.

Authors:  Seren Zhu; Kaushik J Lakshminarasimhan; Nastaran Arfaei; Dora E Angelaki
Journal:  Elife       Date:  2022-05-03       Impact factor: 8.713

3.  Saliency-Aware Subtle Augmentation Improves Human Visual Search Performance in VR.

Authors:  Olga Lukashova-Sanz; Siegfried Wahl
Journal:  Brain Sci       Date:  2021-02-25

4.  Modeling the effects of perisaccadic attention on gaze statistics during scene viewing.

Authors:  Lisa Schwetlick; Lars Oliver Martin Rothkegel; Hans Arne Trukenbrod; Ralf Engbert
Journal:  Commun Biol       Date:  2020-12-01

5.  Potsdam Eye-Movement Corpus for Scene Memorization and Search With Color and Spatial-Frequency Filtering.

Authors:  Anke Cajar; Ralf Engbert; Jochen Laubrock
Journal:  Front Psychol       Date:  2022-02-23

6.  DeepGaze III: Modeling free-viewing human scanpaths with deep learning.

Authors:  Matthias Kümmerer; Matthias Bethge; Thomas S A Wallis
Journal:  J Vis       Date:  2022-04-06       Impact factor: 2.004

7.  Task-dependence in scene perception: Head unrestrained viewing using mobile eye-tracking.

Authors:  Daniel Backhaus; Ralf Engbert; Lars O M Rothkegel; Hans A Trukenbrod
Journal:  J Vis       Date:  2020-05-11       Impact factor: 2.240

8.  Age-related differences in visual encoding and response strategies contribute to spatial memory deficits.

Authors:  Vladislava Segen; Marios N Avraamides; Timothy J Slattery; Jan M Wiener
Journal:  Mem Cognit       Date:  2021-02

9.  Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty.

Authors:  Souradeep Chakraborty; Dimitris Samaras; Gregory J Zelinsky
Journal:  J Vis       Date:  2022-03-02       Impact factor: 2.004

