
Center bias outperforms image salience but not semantics in accounting for attention during scene viewing.

Taylor R Hayes, John M Henderson

Abstract

How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is 'pulled' to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743-747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R2) between scene fixation density and each image saliency model's center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.
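The abstract's core analysis is a map-level squared correlation: each predictor (a saliency model's center bias, the full saliency model, or a meaning map) is correlated with the observed scene fixation density, and the Pearson correlation is squared to give the variance explained. A minimal sketch of that computation is below; the function and array names are illustrative, not the authors' code.

```python
import numpy as np

def map_r_squared(fixation_density, prediction_map):
    """Squared linear correlation (R^2) between two spatial maps.

    Both maps are flattened to vectors, the Pearson correlation is
    computed, and the result is squared, yielding the proportion of
    variance in fixation density the prediction map accounts for.
    This is a sketch of the analysis described in the abstract, not
    the published implementation.
    """
    x = np.asarray(fixation_density, dtype=float).ravel()
    y = np.asarray(prediction_map, dtype=float).ravel()
    r = np.corrcoef(x, y)[0, 1]  # off-diagonal entry is Pearson's r
    return r ** 2
```

A perfectly linear predictor yields R² = 1 regardless of scale or offset, which is why this metric compares map *shape* rather than absolute values.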

Keywords:  Center bias; Meaning map; Saliency; Scene perception; Semantics

Year:  2020        PMID: 31456175     DOI: 10.3758/s13414-019-01849-7

Source DB:  PubMed          Journal:  Atten Percept Psychophys        ISSN: 1943-3921            Impact factor:   2.199


Related articles: 9 in total

1.  Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions.

Authors:  Gwendolyn Rehrig; Candace E Peacock; Taylor R Hayes; John M Henderson; Fernanda Ferreira
Journal:  J Exp Psychol Learn Mem Cogn       Date:  2020-04-09       Impact factor: 3.051

2.  Developmental changes in natural scene viewing in infancy.

Authors:  Katherine I Pomaranski; Taylor R Hayes; Mee-Kyoung Kwon; John M Henderson; Lisa M Oakes
Journal:  Dev Psychol       Date:  2021-07

3.  Look at what I can do: Object affordances guide visual attention while speakers describe potential actions.

Authors:  Gwendolyn Rehrig; Madison Barker; Candace E Peacock; Taylor R Hayes; John M Henderson; Fernanda Ferreira
Journal:  Atten Percept Psychophys       Date:  2022-04-28       Impact factor: 2.157

4.  Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes.

Authors:  Taylor R Hayes; John M Henderson
Journal:  Psychol Sci       Date:  2021-07-12

5.  Deep saliency models learn low-, mid-, and high-level features to predict scene attention.

Authors:  Taylor R Hayes; John M Henderson
Journal:  Sci Rep       Date:  2021-09-16       Impact factor: 4.379

6.  When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention.

Authors:  Gwendolyn Rehrig; Taylor R Hayes; John M Henderson; Fernanda Ferreira
Journal:  Mem Cognit       Date:  2020-10

7.  Center Bias Does Not Account for the Advantage of Meaning Over Salience in Attentional Guidance During Scene Viewing.

Authors:  Candace E Peacock; Taylor R Hayes; John M Henderson
Journal:  Front Psychol       Date:  2020-07-28

8.  Overt attentional correlates of memorability of scene images and their relationships to scene semantics.

Authors:  Muxuan Lyu; Kyoung Whan Choe; Omid Kardan; Hiroki P Kotabe; John M Henderson; Marc G Berman
Journal:  J Vis       Date:  2020-09-02       Impact factor: 2.240

9.  Weighting the factors affecting attention guidance during free viewing and visual search: The unexpected role of object recognition uncertainty.

Authors:  Souradeep Chakraborty; Dimitris Samaras; Gregory J Zelinsky
Journal:  J Vis       Date:  2022-03-02       Impact factor: 2.004
