Jolien C Francken, Erik L Meijs, Odile M Ridderinkhof, Peter Hagoort, Floris P de Lange, Simon van Gaal.
Abstract
Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of processing these interactions occur. Here we aim to dissociate two competing models of language-perception interactions: a feed-forward and a feedback model. We capitalized on the fact that the models make different predictions about the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. "rise," "fall"), directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves feed-forward information processing from low- to high-level regions intact, whereas it abolishes subsequent feedback. Even when the words were masked, participants remained faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. This suggests that language-perception interactions are driven by the feed-forward convergence of linguistic and perceptual information at higher-level conceptual and decision stages.
Keywords: feed-forward processing; language; visual perception
Year: 2015 PMID: 30135740 PMCID: PMC6089086 DOI: 10.1093/nc/niv003
Source DB: PubMed Journal: Neurosci Conscious ISSN: 2057-2107
Figure 1. Models and task design
(A) In the feedback model of language-perception interactions (left), linguistic information is processed in language-specific regions and subsequently feeds back to the sensory system to modulate perceptual processing; visual stimuli are therefore influenced at the level of visual cortex. In the feed-forward model of language-perception interactions (right), linguistic information is likewise processed in language-specific regions, where it activates a conceptual representation. Crucially, in this case the visual information is also processed up to a conceptual level, and it is at this conceptual level that linguistic and visual information interact. (B) A congruent or incongruent motion word (upward or downward, e.g. "rise" or "fall") precedes every motion discrimination trial. All words are preceded by a forward mask; unaware words are additionally followed by two backward masks. The visual motion stimulus is presented in either the left or the right lower visual field, and the dots move upward or downward.
Figure 2. Results
(A) Mean reaction times (ms) in the unmasked (aware, left bars) and masked (unaware, right bars) conditions were faster for visual motion stimuli preceded by a congruent (green) motion word than for those preceded by an incongruent (red) motion word. (B) Mean error rates (%) in both the aware and the unaware condition were lower for congruent than for incongruent motion words. (C) Delta plot of the reaction-time (ms) congruency effect (incongruent minus congruent, CE). The aware (gray) condition showed the typical RT conflict-control profile, with the CE initially increasing over RT bins and then decreasing in the last bin. In the unaware (orange) condition, the CE was not affected by response time and did not decrease in the last bin. (D) Conditional accuracy functions for the error-rate (%) CE. Stronger response capture is associated with a higher percentage of fast errors; this pattern of decreasing CE across RT bins is present in the aware condition but not in the unaware condition. Error bars denote SEM. *P < 0.05, **P < 0.01, ***P < 0.001.
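For readers unfamiliar with the delta-plot analysis in panel (C), the sketch below illustrates the general technique: sort each condition's reaction times, split them into quantile bins, and plot the incongruent-minus-congruent difference (CE) per bin against mean RT. This is a minimal illustration of the method, not the authors' analysis code; all function names and the synthetic data are hypothetical.

```python
import numpy as np

def bin_means(rt, n_bins=5):
    """Mean RT within each quantile bin of one condition."""
    rt = np.sort(np.asarray(rt, dtype=float))
    return np.array([chunk.mean() for chunk in np.array_split(rt, n_bins)])

def delta_plot(rt_congruent, rt_incongruent, n_bins=5):
    """Return (mean RT per bin, congruency effect per bin).

    The congruency effect (CE) is incongruent minus congruent,
    computed at matched RT quantile bins.
    """
    m_con = bin_means(rt_congruent, n_bins)
    m_inc = bin_means(rt_incongruent, n_bins)
    x = (m_con + m_inc) / 2.0   # bin position on the RT axis
    ce = m_inc - m_con          # congruency effect per bin
    return x, ce

# Synthetic demo data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
rt_con = rng.gamma(shape=20, scale=25, size=400)       # ~500 ms
rt_inc = rng.gamma(shape=20, scale=25, size=400) + 30  # ~530 ms
x, ce = delta_plot(rt_con, rt_inc)
print(np.round(x), np.round(ce, 1))
```

A flat CE across bins, as in the unaware condition, would indicate that the congruency effect does not diminish for slower responses; a drop in the last bin, as in the aware condition, is the signature of conflict control.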