
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.

Moreno I. Coco, Frank Keller, George L. Malcolm.

Abstract

The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.
Copyright © 2015 Cognitive Science Society, Inc.

Keywords:  Anticipation in language processing; Blank screen paradigm; Contextual guidance; Eye-tracking; Visual world

Year:  2015        PMID: 26519097     DOI: 10.1111/cogs.12313

Source DB:  PubMed          Journal:  Cogn Sci        ISSN: 0364-0213


Related articles: 7 in total

1.  Mouse-tracking evidence for parallel anticipatory option evaluation.

Authors:  Edward A Cranford; Jarrod Moss
Journal:  Cogn Process       Date:  2017-12-23

2.  Look at that: Spatial deixis reveals experience-related differences in prediction.

Authors:  Tracy Reuter; Mia Sullivan; Casey Lew-Williams
Journal:  Lang Acquis       Date:  2021-07-30

3.  Adults and children predict in complex and variable referential contexts.

Authors:  Tracy Reuter; Kavindya Dalawella; Casey Lew-Williams
Journal:  Lang Cogn Neurosci       Date:  2020-11-23       Impact factor: 2.331

4.  Language-driven anticipatory eye movements in virtual reality.

Authors:  Nicole Eichert; David Peeters; Peter Hagoort
Journal:  Behav Res Methods       Date:  2018-06

5.  Picture perfect: A stimulus set of 225 pairs of matched clipart and photographic images normed by Mechanical Turk and laboratory participants.

Authors:  Raheleh Saryazdi; Julie Bannon; Agatha Rodrigues; Chris Klammer; Craig G Chambers
Journal:  Behav Res Methods       Date:  2018-12

6.  Understanding Events by Eye and Ear: Agent and Verb Drive Non-anticipatory Eye Movements in Dynamic Scenes.

Authors:  Roberto G de Almeida; Julia Di Nardo; Caitlyn Antal; Michael W von Grünau
Journal:  Front Psychol       Date:  2019-10-10

7.  Using the Visual World Paradigm to Study Retrieval Interference in Spoken Language Comprehension.

Authors:  Irina A Sekerina; Luca Campanelli; Julie A Van Dyke
Journal:  Front Psychol       Date:  2016-06-14
