| Literature DB >> 26082736 |
Abstract
To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examined (1) whether purely semantic-based multisensory integration facilitates access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or no soundtrack (neutral video-only condition). Participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition than in the incongruent audiovisual and video-only conditions. However, this facilitatory influence of semantic auditory input was observed only when the audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input arises from audiovisual crossmodal processing rather than semantic priming, and that such processing may occur even when visual information is not available to visual awareness.
Keywords: continuous flash suppression; multisensory integration; semantic priming; semantic processing; visual awareness
Year: 2015 PMID: 26082736 PMCID: PMC4451233 DOI: 10.3389/fpsyg.2015.00722
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1. (A) Schematic diagram of stimulus presentation. (B) Changes in contrast of the suppressor (solid line) and the event stimulus (dashed line).
FIGURE 2. Results of Experiment 1. Response times (RTs: time of breakup of suppression) for the aggregated audiovisual soundtrack conditions in the 3AFC event video identification task when soundtracks were heard during event video viewing (***p ≤ 0.001). Error bars represent ±1 standard error.
FIGURE 3. Results of Experiments 2 and 3. (A) RTs for the aggregated audiovisual soundtrack conditions in the 3AFC identification task when soundtracks were heard prior to silent event video viewing. (B) RTs for the aggregated audiovisual soundtrack conditions in the 3AFC identification task when soundtracks were heard during static image viewing (*p ≤ 0.05). Error bars represent ±1 standard error.