Hongtao Yu, Aijun Wang, Ming Zhang, JiaJia Yang, Satoshi Takahashi, Yoshimichi Ejima, Jinglong Wu.
Abstract
Evidence has shown that multisensory integration benefits to unisensory perception performance are asymmetric and that auditory perception performance can receive more multisensory benefits, especially when the attention focus is directed toward a task-irrelevant visual stimulus. At present, whether the benefits of semantically (in)congruent multisensory integration with modal-based attention for subsequent unisensory short-term memory (STM) retrieval are also asymmetric remains unclear. Using a delayed matching-to-sample paradigm, the present study investigated this issue by manipulating the attention focus during multisensory memory encoding. The results revealed that both visual and auditory STM retrieval reaction times were faster under semantically congruent multisensory conditions than under unisensory memory encoding conditions. We suggest that coherent multisensory representation formation might be optimized by restricted multisensory encoding and can be rapidly triggered by subsequent unisensory memory retrieval demands. Crucially, auditory STM retrieval is exclusively accelerated by semantically congruent multisensory memory encoding, indicating that the less effective sensory modality of memory retrieval relies more on the coherent prior formation of a multisensory representation optimized by modal-based attention.
Keywords: Audiovisual integration; Modal-based attention; Semantic congruency; Short-term memory
Year: 2022 PMID: 35641858 DOI: 10.3758/s13414-021-02437-4
Source DB: PubMed Journal: Atten Percept Psychophys ISSN: 1943-3921 Impact factor: 2.199