
Top-down feature-based selection of matching features for audio-visual synchrony discrimination.

Waka Fujisaki, Shin'ya Nishida

Abstract

Our previous findings suggest that audio-visual synchrony perception is based on the matching of salient temporal features selected from each sensory modality through bottom-up segregation or by top-down attention to a specific spatial position. This study examined whether top-down attention to a specific feature value is also effective in the selection of cross-modal matching features. In the first experiment, the visual stimulus was a pulse train in which a flash randomly appeared with a probability of 6.25, 12.5, or 25% in every 6.25-ms bin. Four flash colors appeared at random with equal probability, and one of them was selected as the target color on each trial. The paired auditory stimulus was a single-pitch pip sequence that had the same temporal structure as the target-color flashes, presented in synchrony with the target flashes (synchronous stimulus) or with a 250-ms relative shift (asynchronous stimulus). The participants' task was synchrony-asynchrony discrimination, with the target color either indicated to the participant by a probe (with-probe condition) or not (without-probe condition). In a further control condition, there was no correlation between color and auditory signals (color-shuffled). In the second experiment, the roles of the visual and auditory stimuli were exchanged. The results show that synchrony-asynchrony discrimination performance was worst in the color/pitch-shuffled condition and best in the with-probe condition, where the observer knew beforehand which color/pitch should be matched with the signal of the other modality. This suggests that top-down, feature-based attention can aid feature selection for audio-visual synchrony discrimination even when bottom-up segmentation processes cannot uniquely determine salient features. The observed feature-based selection, however, is not as effective as position-based selection.
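The stimulus construction described above can be sketched in code. This is a minimal illustration, not the authors' actual experiment software; the function name, trial duration, color labels, and the use of a circular shift for the asynchronous condition are assumptions for the sketch:

```python
import random

def make_stimulus(p_flash=0.125, duration_ms=2000, bin_ms=6.25,
                  colors=("red", "green", "blue", "yellow"),
                  asynchronous=False, shift_ms=250, seed=None):
    """Sketch of the visual pulse train and matched auditory pip sequence.

    Each 6.25-ms bin independently contains a flash with probability
    p_flash (6.25%, 12.5%, or 25% in the paper); each flash takes one of
    four colors with equal probability. The pip sequence copies the
    temporal structure of the target-color flashes, optionally shifted
    by 250 ms for the asynchronous condition (circular shift assumed).
    """
    rng = random.Random(seed)
    n_bins = int(duration_ms / bin_ms)
    # One entry per bin: a color name if a flash occurs there, else None.
    flashes = [rng.choice(colors) if rng.random() < p_flash else None
               for _ in range(n_bins)]
    target = rng.choice(colors)  # target color, cued only in the with-probe condition
    # Auditory pips mirror the temporal structure of the target-color flashes.
    pips = [c == target for c in flashes]
    if asynchronous:
        shift_bins = int(shift_ms / bin_ms)  # 250 ms = 40 bins of 6.25 ms
        pips = pips[-shift_bins:] + pips[:-shift_bins]
    return flashes, target, pips
```

The color-shuffled control would instead derive the pip sequence from a color uncorrelated with any consistent target, removing the feature-based cue while preserving the overall temporal statistics.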


Year:  2008        PMID: 18281153     DOI: 10.1016/j.neulet.2008.01.031

Source DB:  PubMed          Journal:  Neurosci Lett        ISSN: 0304-3940            Impact factor:   3.046


Related articles: 7 in total

1.  A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities.

Authors:  Waka Fujisaki; Shin'ya Nishida
Journal:  Proc Biol Sci       Date:  2010-03-24       Impact factor: 5.349

2.  Audiovisual temporal capture underlies flash fusion.

Authors:  Takahiro Kawabe
Journal:  Exp Brain Res       Date:  2009-06-12       Impact factor: 1.972

3.  Attention regulates the plasticity of multisensory timing.

Authors:  James Heron; Neil W Roach; David Whitaker; James V M Hanson
Journal:  Eur J Neurosci       Date:  2010-05       Impact factor: 3.386

4.  Visual field differences in temporal synchrony processing for audio-visual stimuli.

Authors:  Yasuhiro Takeshima
Journal:  PLoS One       Date:  2021-12-16       Impact factor: 3.240

5.  Audio-tactile superiority over visuo-tactile and audio-visual combinations in the temporal resolution of synchrony perception.

Authors:  Waka Fujisaki; Shin'ya Nishida
Journal:  Exp Brain Res       Date:  2009-06-05       Impact factor: 1.972

6.  Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

Authors:  Christian Keitel; Matthias M Müller
Journal:  Exp Brain Res       Date:  2015-08-01       Impact factor: 1.972

7.  Temporal structure and complexity affect audio-visual correspondence detection.

Authors:  Rachel N Denison; Jon Driver; Christian C Ruff
Journal:  Front Psychol       Date:  2013-01-22

Beijing Coyote Bioscience Co., Ltd. (北京卡尤迪生物科技股份有限公司) © 2022-2023.