Boukje Habets, Patrick Bruns, Brigitte Röder.
Abstract
Bayesian models propose that multisensory integration depends on both sensory evidence (the likelihood) and priors indicating whether or not two inputs belong to the same event. The present study manipulated the prior for dynamic auditory and visual stimuli to co-occur and tested the predicted enhancement of multisensory binding as assessed with a simultaneity judgment task. In an initial learning phase, participants were exposed to a subset of auditory-visual combinations. In the test phase, the previously encountered audio-visual stimuli were presented together with new combinations of the auditory and visual stimuli from the learning phase, audio-visual stimuli containing one learned and one new sensory component, and audio-visual stimuli containing completely new auditory and visual material. Auditory-visual asynchrony was manipulated. A higher proportion of simultaneity judgments was observed for the learned cross-modal combinations than for new combinations of the same auditory and visual elements, as well as for all other conditions. This result suggests that prior exposure to certain auditory-visual combinations changed the expectation (i.e., the prior) that their elements belonged to the same event. As a result, multisensory binding became more likely despite unchanged sensory evidence of the auditory and visual elements.
Year: 2017 PMID: 28469137 PMCID: PMC5431144 DOI: 10.1038/s41598-017-01252-y
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
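The Bayesian account summarized in the abstract can be made concrete: the posterior probability that the auditory and visual signals share a common cause combines the likelihood of the observed asynchrony with a prior for co-occurrence. A minimal sketch, with all parameter values illustrative rather than taken from the study:

```python
import math

def posterior_common_cause(soa_ms, prior_common, sigma_ms=150.0):
    """Posterior probability that audio and video share a common cause.

    Likelihood under a common cause: the observed SOA is Gaussian sensory
    noise around a true asynchrony of 0 ms. Likelihood under independent
    causes: the SOA is roughly uniform over the tested range (+/- 400 ms).
    All numeric values here are illustrative, not from the study.
    """
    # Likelihood of the measured asynchrony if both signals came
    # from one event (zero true asynchrony plus sensory noise).
    like_common = math.exp(-0.5 * (soa_ms / sigma_ms) ** 2) / (
        sigma_ms * math.sqrt(2 * math.pi))
    # Likelihood if the signals are unrelated: flat over the 800 ms range.
    like_indep = 1.0 / 800.0
    # Bayes' rule: combine the likelihoods with the co-occurrence prior.
    num = like_common * prior_common
    return num / (num + like_indep * (1.0 - prior_common))

# A learned (high-prior) pairing yields a higher common-cause posterior
# than a new (low-prior) pairing at the same SOA, i.e. with identical
# sensory evidence -- mirroring the reported effect.
p_learned = posterior_common_cause(200.0, prior_common=0.7)
p_new = posterior_common_cause(200.0, prior_common=0.3)
```

Raising only the prior shifts the posterior toward a common cause even though the likelihood term is unchanged, which is the mechanism the study proposes for the learned audio-visual pairs.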
Figure 1. Mean proportion of 'perceived simultaneity' responses as a function of stimulus onset asynchrony (SOA) and first modality (negative values = auditory stimulus first, positive values = visual stimulus first) for audio-visual learned (L), audio-visual newly combined (NC), audio-visual visual-learned (V-l), audio-visual auditory-learned (A-l) and audio-visual new (N) stimuli.
Results of two-tailed t-tests for all conditions and SOAs.
| Conditions | Statistic | −400 | −300 | −200 | −100 | 100 | 200 | 300 | 400 |
|---|---|---|---|---|---|---|---|---|---|
| L – NC | *t* | −0.18 | 1.42 | −0.99 | 1.45 | — | 1.63 | — | 1.48 |
| | *p* | 0.862 | 0.183 | 0.344 | 0.175 | — | 0.132 | — | 0.166 |
| | Cohen's *d* | 0.05 | 0.41 | 0.29 | 0.42 | — | 0.47 | — | 0.43 |
| L – V-l | *t* | 0.33 | 1.41 | 1.13 | — | 2.15 | 1.40 | — | 0.16 |
| | *p* | 0.748 | 0.185 | 0.282 | — | 0.054 | 0.189 | — | 0.873 |
| | Cohen's *d* | 0.10 | 0.41 | 0.33 | — | 0.62 | 0.40 | — | 0.05 |
| L – A-l | *t* | 0.00 | 0.83 | −0.16 | — | — | 2.02 | — | 1.46 |
| | *p* | 1 | 0.423 | 0.873 | — | — | 0.069 | — | 0.173 |
| | Cohen's *d* | 0.00 | 0.24 | 0.05 | — | — | 0.58 | — | 0.42 |
| L – N | *t* | 1.39 | 1.50 | 1.46 | — | — | 0.98 | — | 1.61 |
| | *p* | 0.191 | 0.162 | 0.173 | — | — | 0.349 | — | 0.136 |
| | Cohen's *d* | 0.40 | 0.43 | 0.42 | — | — | 0.28 | — | 0.46 |
| NC – V-l | *t* | 0.67 | 0.15 | — | 1.97 | 0.00 | 0.00 | 0.52 | −0.53 |
| | *p* | 0.515 | 0.881 | — | 0.074 | 1 | 1 | 0.615 | 0.606 |
| | Cohen's *d* | 0.19 | 0.04 | — | 0.57 | 0.00 | 0.00 | 0.15 | 0.15 |
| NC – A-l | *t* | 0.23 | −1.00 | 1.88 | 2.18 | 1.07 | 0.56 | 0.70 | 0.73 |
| | *p* | 0.820 | 0.339 | 0.087 | 0.052 | 0.309 | 0.586 | 0.499 | 0.480 |
| | Cohen's *d* | 0.07 | 0.29 | 0.54 | 0.63 | 0.31 | 0.16 | 0.20 | 0.21 |
| NC – N | *t* | — | 0.39 | — | — | 1.59 | 0.00 | 1.34 | 0.54 |
| | *p* | — | 0.701 | — | — | 0.139 | 1 | 0.207 | 0.600 |
| | Cohen's *d* | — | 0.11 | — | — | 0.46 | 0.00 | 0.39 | 0.16 |
| V-l – A-l | *t* | −0.80 | −0.67 | −2.03 | 0.32 | 1.59 | 0.56 | 0.45 | 1.54 |
| | *p* | 0.438 | 0.516 | 0.067 | 0.755 | 0.139 | 0.586 | 0.660 | 0.152 |
| | Cohen's *d* | 0.23 | 0.19 | 0.59 | 0.09 | 0.46 | 0.16 | 0.13 | 0.44 |
| V-l – N | *t* | 1.30 | 0.32 | 0.61 | 1.15 | 1.77 | 0.00 | 1.39 | 0.99 |
| | *p* | 0.220 | 0.755 | 0.555 | 0.276 | 0.104 | 1 | 0.191 | 0.345 |
| | Cohen's *d* | 0.38 | 0.09 | 0.18 | 0.33 | 0.51 | 0.00 | 0.40 | 0.28 |
| A-l – N | *t* | 1.39 | 1.11 | — | 1.00 | 1.25 | −0.39 | 0.67 | −0.48 |
| | *p* | 0.191 | 0.293 | — | 0.339 | 0.236 | 0.701 | 0.515 | 0.638 |
| | Cohen's *d* | 0.40 | 0.32 | — | 0.29 | 0.36 | 0.11 | 0.19 | 0.14 |
Note. SOA = stimulus onset asynchrony in ms (negative values = auditory stimulus first, positive values = visual stimulus first); L = audio-visual learned; NC = audio-visual newly combined; V-l = audio-visual visual-learned; A-l = audio-visual auditory-learned; N = audio-visual new; — = value not available in this record.
*p < 0.05 (values of significant pairwise comparisons are in boldface).
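The effect sizes in the table are consistent with Cohen's d for a paired design, d = |t|/√n. A small sketch of that relation; the sample size n = 12 is an inference from the reported t/d pairs and is not stated in this record:

```python
import math

def cohens_d_from_t(t, n):
    """Effect size for a paired t-test: d = |t| / sqrt(n).

    The table reports absolute effect sizes, so the sign of t is dropped.
    """
    return abs(t) / math.sqrt(n)

# n = 12 reproduces the tabled t/d pairs, e.g. NC - A-l at SOA -100 ms:
# t = 2.18 gives d = 0.63. The sample size is an assumption inferred
# from the table, not stated in this record.
d_example = cohens_d_from_t(2.18, 12)
```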
Figure 2. Mean size of the temporal binding window (TBW) for audio-visual learned (L), audio-visual newly combined (NC), audio-visual visual-learned (V-l), audio-visual auditory-learned (A-l) and audio-visual new (N) stimuli. Error bars denote standard errors of the mean.
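Figure 2 reports TBW sizes per condition. One common way to derive such a window from simultaneity-judgment data is to measure the SOA range over which the response curve stays above a criterion; this is a sketch of that approach only, since the record does not state the authors' exact estimation procedure, and the 0.75 criterion and all data points below are hypothetical:

```python
def binding_window(soas, p_simul, criterion=0.75):
    """Width (in ms) of the SOA range over which the proportion of
    'simultaneous' responses is at or above a criterion level.

    One common operationalization of the temporal binding window (TBW);
    not necessarily the procedure used in the study.
    """
    crossings = []
    for i in range(len(soas) - 1):
        y0, y1 = p_simul[i], p_simul[i + 1]
        if (y0 - criterion) * (y1 - criterion) < 0:
            # Linear interpolation to the SOA where the response curve
            # crosses the criterion level.
            x0, x1 = soas[i], soas[i + 1]
            crossings.append(x0 + (criterion - y0) * (x1 - x0) / (y1 - y0))
    if len(crossings) < 2:
        return None  # curve never brackets the criterion on both sides
    return max(crossings) - min(crossings)

# Hypothetical simultaneity-judgment curve (illustrative values only,
# not data from the study):
soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
props = [0.10, 0.25, 0.50, 0.85, 0.95, 0.90, 0.60, 0.30, 0.15]
tbw = binding_window(soas, props)
```

A larger prior for co-occurrence (as in the learned condition) would widen this window: the curve stays above the criterion over a broader range of asynchronies.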