
Where the action could be: Speakers look at graspable objects and meaningful scene regions when describing potential actions.

Gwendolyn Rehrig, Candace E Peacock, Taylor R Hayes, John M Henderson, Fernanda Ferreira

Abstract

The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions objects in the scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In 3 eyetracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task-relevance. In 2 experiments using stimuli from a previous study, meaning explained visual attention better than graspability or salience did, and graspability explained attention better than salience. In a third experiment we quantified image salience, meaning, graspability, and reach-weighted graspability for scenes that depicted reachable spaces containing graspable objects. Graspability and meaning explained attention equally well in the third experiment, and both explained attention better than salience. We conclude that speakers use object graspability to allocate attention to plan descriptions when scenes depict graspable objects within reach, and otherwise rely more on general meaning. The results shed light on what aspects of meaning guide attention during scene viewing in language production tasks. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Year:  2020        PMID: 32271065      PMCID: PMC7483632          DOI: 10.1037/xlm0000837

Source DB:  PubMed          Journal:  J Exp Psychol Learn Mem Cogn        ISSN: 0278-7393            Impact factor:   3.051


References (36 in total)

1.  What the eyes say about speaking.

Authors:  Z M Griffin; K Bock
Journal:  Psychol Sci       Date:  2000-07

2.  Eye movements and lexical access in spoken-language comprehension: evaluating a linking hypothesis between fixations and linguistic processing.

Authors:  M K Tanenhaus; J S Magnuson; D Dahan; C Chambers
Journal:  J Psycholinguist Res       Date:  2000-11

3.  Object-based attention in real-world scenes.

Authors:  George L Malcolm; Sarah Shomstein
Journal:  J Exp Psychol Gen       Date:  2015-04

4.  Coding of navigational affordances in the human visual system.

Authors:  Michael F Bonner; Russell A Epstein
Journal:  Proc Natl Acad Sci U S A       Date:  2017-04-17       Impact factor: 11.205

5.  Electrophysiological study of action-affordance priming between object names.

Authors:  Isabel M Feven-Parsons; Jeremy Goslin
Journal:  Brain Lang       Date:  2018-06-20       Impact factor: 2.381

6.  Incremental interpretation at verbs: restricting the domain of subsequent reference.

Authors:  G T Altmann; Y Kamide
Journal:  Cognition       Date:  1999-12-17

7.  Objects predict fixations better than early saliency.

Authors:  Wolfgang Einhäuser; Merrielle Spain; Pietro Perona
Journal:  J Vis       Date:  2008-11-20       Impact factor: 2.240

8.  Meaning guides attention during scene viewing, even when it is irrelevant.

Authors:  Candace E Peacock; Taylor R Hayes; John M Henderson
Journal:  Atten Percept Psychophys       Date:  2019-01       Impact factor: 2.199

9.  Visual scenes are categorized by function.

Authors:  Michelle R Greene; Christopher Baldassano; Andre Esteva; Diane M Beck; Li Fei-Fei
Journal:  J Exp Psychol Gen       Date:  2016-01

10.  Meaning Guides Attention during Real-World Scene Description.

Authors:  John M Henderson; Taylor R Hayes; Gwendolyn Rehrig; Fernanda Ferreira
Journal:  Sci Rep       Date:  2018-09-10       Impact factor: 4.379

Cited by (6 in total)

1.  Meaning maps detect the removal of local semantic scene content but deep saliency models do not.

Authors:  Taylor R Hayes; John M Henderson
Journal:  Atten Percept Psychophys       Date:  2022-02-09       Impact factor: 2.157

2.  Rapid Extraction of the Spatial Distribution of Physical Saliency and Semantic Informativeness from Natural Scenes in the Human Brain.

Authors:  John E Kiat; Taylor R Hayes; John M Henderson; Steven J Luck
Journal:  J Neurosci       Date:  2021-11-08       Impact factor: 6.709

3.  Look at what I can do: Object affordances guide visual attention while speakers describe potential actions.

Authors:  Gwendolyn Rehrig; Madison Barker; Candace E Peacock; Taylor R Hayes; John M Henderson; Fernanda Ferreira
Journal:  Atten Percept Psychophys       Date:  2022-04-28       Impact factor: 2.157

4.  Semantic object-scene inconsistencies affect eye movements, but not in the way predicted by contextualized meaning maps.

Authors:  Marek A Pedziwiatr; Matthias Kümmerer; Thomas S A Wallis; Matthias Bethge; Christoph Teufel
Journal:  J Vis       Date:  2022-02-01       Impact factor: 2.240

5.  Center Bias Does Not Account for the Advantage of Meaning Over Salience in Attentional Guidance During Scene Viewing.

Authors:  Candace E Peacock; Taylor R Hayes; John M Henderson
Journal:  Front Psychol       Date:  2020-07-28

6.  Meaning and expected surfaces combine to guide attention during visual search in scenes.

Authors:  Candace E Peacock; Deborah A Cronin; Taylor R Hayes; John M Henderson
Journal:  J Vis       Date:  2021-10-05       Impact factor: 2.240

