Armina Janyan1, Yury Shtyrov2, Ekaterina Andriushchenko3, Ekaterina Blinova3,4, Olga Shcherbakova3,4.
Abstract
One of the unresolved questions in multisensory research is whether consistent associations between sensory features from different modalities (e.g., high visual locations associated with high sound pitch) are automatic. We addressed this issue by examining a possible role of selective attention in the audiovisual correspondence effect. We orthogonally manipulated loudness and pitch, directing participants' attention to the auditory modality only and using pitch and loudness identification tasks. Visual stimuli in high, low, or central spatial locations appeared simultaneously with the sounds. If the correspondence effect is automatic, it should not be affected by task changes. The results, however, demonstrated a cross-modal pitch-verticality correspondence effect only when participants' attention was directed to pitch identification, not to loudness identification; moreover, the effect was present only in the upper location. These findings underscore the involvement of selective attention in cross-modal associations and support a top-down account of audiovisual correspondence effects.
Keywords: RT; audiovisual correspondence; selective attention
Year: 2022 PMID: 35646302 PMCID: PMC9134444 DOI: 10.1177/20416695221095884
Source DB: PubMed Journal: Iperception ISSN: 2041-6695
Figure 1. Examples of trials with different circle positions (down, centre, and up). Trial sequence: a fixation cross appeared for 500 ms; then auditory and visual stimuli were presented simultaneously for 100 ms, followed by a response window of 1 s or until a response was given. Participants were required to look at the screen, identify either the pitch (high/low) or the loudness (loud/soft) of the presented sound, and press the corresponding button.
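The trial sequence described in the Figure 1 caption can be sketched as a simple event timeline. This is an illustrative reconstruction only; the function name, default durations, and millisecond bookkeeping come from the caption, not from the authors' experiment code.

```python
def trial_timeline(fixation_ms=500, stimulus_ms=100, response_window_ms=1000):
    """Return (event, onset_ms, offset_ms) tuples for one trial,
    following the sequence in the Figure 1 caption: fixation cross,
    simultaneous audiovisual stimulus, then a response window."""
    t = 0
    events = []
    for name, duration in [("fixation cross", fixation_ms),
                           ("audiovisual stimulus", stimulus_ms),
                           ("response window", response_window_ms)]:
        events.append((name, t, t + duration))
        t += duration
    return events

for name, onset, offset in trial_timeline():
    print(f"{name}: {onset}-{offset} ms")
```

Note that in the actual experiment the response window closed early once a response was given; this sketch only lays out the maximal timeline.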
Reaction times: means (M) and standard deviations (SD) per condition, in ms. Within each cell, descriptive statistics for the two tasks are separated by a forward slash (Pitch task / Loudness task).
| Loudness | High pitch: Up | High pitch: Centre | High pitch: Down | Low pitch: Up | Low pitch: Centre | Low pitch: Down |
|---|---|---|---|---|---|---|
| Loud | 471 (81) / 557 (90) | 479 (75) / 545 (81) | 486 (81) / 560 (84) | 508 (78) / 542 (81) | 493 (87) / 538 (86) | 492 (95) / 537 (82) |
| Soft | 478 (85) / 557 (85) | 489 (85) / 550 (79) | 498 (83) / 546 (83) | 514 (74) / 566 (79) | 488 (70) / 566 (81) | 482 (79) / 563 (89) |
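The pattern reported in the abstract can be read directly off the table. As a minimal illustration, the sketch below takes the pitch-task means (the first number in each cell), averages over the Loud/Soft rows, and computes the low-minus-high pitch RT difference per visual position. Averaging over loudness here is a simplification for illustration, not the paper's statistical analysis.

```python
# Pitch-task mean RTs (ms) from the table above, as (Loud, Soft) pairs.
pitch_task_rt = {
    ("high", "up"): (471, 478), ("high", "centre"): (479, 489), ("high", "down"): (486, 498),
    ("low", "up"): (508, 514), ("low", "centre"): (493, 488), ("low", "down"): (492, 482),
}

def mean(values):
    return sum(values) / len(values)

# Low-minus-high pitch RT difference (ms) at each circle position.
diff_by_position = {
    pos: mean(pitch_task_rt[("low", pos)]) - mean(pitch_task_rt[("high", pos)])
    for pos in ("up", "centre", "down")
}
print(diff_by_position)
```

Only the "up" position shows a sizeable high-pitch advantage (about 36 ms), consistent with the correspondence effect being present only in the upper location.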
Figure 2. Descriptive visualisation of reaction times per condition, separately for each task (left panel: Loudness task; right panel: Pitch task). The upper part of the x-axis indicates pitch levels and the lower part loudness levels; dot position levels are indicated on the y-axis. Note: dots denote means, boxes mean ± SE, and vertical bars mean ± 95% CI.
Figure 3. Reaction times in the pitch and loudness tasks for different visual stimuli. Left panel: task × pitch × visuo-spatial position interaction. Identification of high pitch was significantly faster than identification of low pitch when the visually presented circle was in the upper position; a difference between RTs in the upper and central positions during low-pitch identification was also found. Right panel: task × pitch × loudness interaction. Identification of a soft sound was significantly impeded by low pitch. No other theoretically important difference was found. Note: dots denote means, boxes mean ± SE, and vertical bars mean ± 95% CI. *p < .05; **p < .01; ***p < .001.