
Meaning maps detect the removal of local semantic scene content but deep saliency models do not.

Taylor R Hayes; John M Henderson

Abstract

Meaning mapping uses human raters to estimate different semantic features in scenes and has been a useful tool in demonstrating the important role semantics play in guiding attention. However, recent work has argued that meaning maps do not capture semantic content but, like deep learning models of scene attention, represent only semantically neutral image features. In the present study, we directly tested this hypothesis using a diffeomorphic image transformation that is designed to remove the meaning of an image region while preserving its image features. Specifically, we tested whether meaning maps and three state-of-the-art deep learning models were sensitive to the loss of semantic content in this critical diffeomorphed scene region. The results were clear: meaning maps generated by human raters showed a large decrease in the diffeomorphed scene regions, while all three deep saliency models showed a moderate increase in the diffeomorphed scene regions. These results demonstrate that meaning maps reflect local semantic content in scenes while deep saliency models do something else. We conclude that the meaning mapping approach is an effective tool for estimating semantic content in scenes.
© 2022. The Psychonomic Society, Inc.
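
The comparison described in the abstract reduces to a simple region-level analysis: given an attention or meaning map for the original scene and for its diffeomorphed counterpart, compare the mean map value inside the critical transformed region. The sketch below is purely illustrative and is not the authors' analysis code; it assumes each map is a 2D NumPy array and the critical region is supplied as a boolean mask, with the array shapes and region coordinates invented for the example.

```python
import numpy as np

def mean_in_region(attn_map: np.ndarray, region_mask: np.ndarray) -> float:
    """Mean map value inside the critical (diffeomorphed) region."""
    return float(attn_map[region_mask].mean())

# Hypothetical stand-ins for a meaning map (or saliency map) of the
# original scene and of the scene with the critical region diffeomorphed.
rng = np.random.default_rng(0)
map_original = rng.random((600, 800))
map_diffeo = rng.random((600, 800))

# Boolean mask marking the critical region that was transformed.
region_mask = np.zeros((600, 800), dtype=bool)
region_mask[200:320, 350:470] = True

delta = mean_in_region(map_diffeo, region_mask) - mean_in_region(map_original, region_mask)
print(f"Change in mean map value within the critical region: {delta:+.3f}")
```

Under this kind of comparison, the abstract's finding corresponds to a negative change for human-generated meaning maps and a positive change for the deep saliency models.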

Keywords:  Deep learning; Image saliency; Meaning maps; Scene perception; Semantics

Year:  2022        PMID: 35138579     DOI: 10.3758/s13414-021-02395-x

Source DB:  PubMed          Journal:  Atten Percept Psychophys        ISSN: 1943-3921            Impact factor:   2.157


