
Look at what I can do: Object affordances guide visual attention while speakers describe potential actions.

Gwendolyn Rehrig, Madison Barker, Candace E Peacock, Taylor R Hayes, John M Henderson, Fernanda Ferreira

Abstract

As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers describe or memorize scenes. In addition to meaning and grasp maps (which capture informativeness and grasping object affordances in scenes, respectively), we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of five eyetracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description for scenes that depicted reachable spaces only. Interact maps predicted fixated regions in description experiments alone. Our findings suggest observers allocate attention to scene regions that could be readily interacted with when talking about the scene, while general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.
© 2022. The Psychonomic Society, Inc.

Keywords:  Eye movements and visual attention; Object-based attention; Perception and action

Year: 2022    PMID: 35484443    PMCID: PMC9246959    DOI: 10.3758/s13414-022-02467-6

Source DB: PubMed    Journal: Atten Percept Psychophys    ISSN: 1943-3921    Impact factor: 2.157


References: 43 in total

1.  Eye movements in natural behavior.

Authors:  Mary Hayhoe; Dana Ballard
Journal:  Trends Cogn Sci       Date:  2005-04       Impact factor: 20.229

2.  The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions.

Authors:  Benjamin W Tatler
Journal:  J Vis       Date:  2007-11-21       Impact factor: 2.240

3.  Electrophysiological study of action-affordance priming between object names.

Authors:  Isabel M Feven-Parsons; Jeremy Goslin
Journal:  Brain Lang       Date:  2018-06-20       Impact factor: 2.381

4.  The time course of picture viewing.

Authors:  J R Antes
Journal:  J Exp Psychol       Date:  1974-07

5.  Meaning-based guidance of attention in scenes as revealed by meaning maps.

Authors:  John M Henderson; Taylor R Hayes
Journal:  Nat Hum Behav       Date:  2017-09-25

6.  Incremental interpretation at verbs: restricting the domain of subsequent reference.

Authors:  G T Altmann; Y Kamide
Journal:  Cognition       Date:  1999-12-17

7.  Meaning guides attention during scene viewing, even when it is irrelevant.

Authors:  Candace E Peacock; Taylor R Hayes; John M Henderson
Journal:  Atten Percept Psychophys       Date:  2019-01       Impact factor: 2.199

8.  Why do we retrace our visual steps? Semantic and episodic memory in gaze reinstatement.

Authors:  Michelle M Ramey; Andrew P Yonelinas; John M Henderson
Journal:  Learn Mem       Date:  2020-06-15       Impact factor: 2.460

9.  When more is more: redundant modifiers can facilitate visual search.

Authors:  Gwendolyn Rehrig; Reese A Cullimore; John M Henderson; Fernanda Ferreira
Journal:  Cogn Res Princ Implic       Date:  2021-02-17

10.  How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models.

Authors:  Antje Nuthmann; Wolfgang Einhäuser; Immo Schütz
Journal:  Front Hum Neurosci       Date:  2017-10-31       Impact factor: 3.169

