When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention.

Gwendolyn Rehrig, Taylor R Hayes, John M Henderson, Fernanda Ferreira.

Abstract

The complexity of the visual world requires that we constrain visual attention and prioritize some regions of a scene over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in attentional guidance is modulated by verbal encoding, given that we often use language to process information. In two experiments, subjects (N1 = 30 and N2 = 60) studied scenes for 12 s each in preparation for a scene-recognition task. Half of the time, subjects engaged in a secondary articulatory suppression task concurrent with scene viewing. Meaning and saliency maps were quantified for each of the experimental scenes. In both experiments, meaning explained more of the variance in visual attention than image salience did, both with and without the suppression task, particularly when we controlled for the overlap between meaning and salience. Based on these results, verbal encoding processes do not appear to modulate the relationship between scene meaning and visual attention. Our findings suggest that semantic information in the scene steers the attentional ship, consistent with cognitive guidance theory.

Keywords:  Language; Meaning; Salience; Scene processing; Visual attention

Year:  2020        PMID: 32430889      PMCID: PMC8843103          DOI: 10.3758/s13421-020-01050-4

Source DB:  PubMed          Journal:  Mem Cognit        ISSN: 0090-502X


References (30 in total; first 10 shown)

1.  Visual correlates of fixation selection: effects of scale and time.

Authors:  Benjamin W Tatler; Roland J Baddeley; Iain D Gilchrist
Journal:  Vision Res       Date:  2005-03       Impact factor: 1.886

2.  Early activation of object names in visual search.

Authors:  Antje S Meyer; Eva Belke; Anna L Telling; Glyn W Humphreys
Journal:  Psychon Bull Rev       Date:  2007-08

3.  Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli.

Authors:  Wolfgang Einhäuser; Ueli Rutishauser; Christof Koch
Journal:  J Vis       Date:  2008-02-15       Impact factor: 2.240

4.  The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions.

Authors:  Benjamin W Tatler
Journal:  J Vis       Date:  2007-11-21       Impact factor: 2.240

5.  Activation of distractor names in the picture-picture interference paradigm.

Authors:  Antje S Meyer; Markus F Damian
Journal:  Mem Cognit       Date:  2007-04

6.  Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

Authors:  Franz Faul; Edgar Erdfelder; Axel Buchner; Albert-Georg Lang
Journal:  Behav Res Methods       Date:  2009-11

7.  A working memory account for spatial-numerical associations.

Authors:  Jean-Philippe van Dijck; Wim Fias
Journal:  Cognition       Date:  2011-01-22

8.  Scene semantics involuntarily guide attention during visual search.

Authors:  Taylor R Hayes; John M Henderson
Journal:  Psychon Bull Rev       Date:  2019-10

9.  Meaning guides attention during scene viewing, even when it is irrelevant.

Authors:  Candace E Peacock; Taylor R Hayes; John M Henderson
Journal:  Atten Percept Psychophys       Date:  2019-01       Impact factor: 2.199

Review 10.  Meaning and Attentional Guidance in Scenes: A Review of the Meaning Map Approach.

Authors:  John M Henderson; Taylor R Hayes; Candace E Peacock; Gwendolyn Rehrig
Journal:  Vision (Basel)       Date:  2019-05-10
Cited by (1 in total)

1.  Look at what I can do: Object affordances guide visual attention while speakers describe potential actions.

Authors:  Gwendolyn Rehrig; Madison Barker; Candace E Peacock; Taylor R Hayes; John M Henderson; Fernanda Ferreira
Journal:  Atten Percept Psychophys       Date:  2022-04-28       Impact factor: 2.157

