| Literature DB >> 33355930 |
Thijs van Laarhoven, Jeroen J. Stekelenburg, Jean Vroomen.
Abstract
The amplitude of the auditory N1 component of the event-related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is primarily driven by the temporal characteristics, or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion with an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated in three different conditions in which sounds were either played in isolation, or in conjunction with a video that either reliably predicted the timing of the sound, the identity of the sound, or both the timing and identity. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and reduced when either the timing or identity of the sound was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.
Keywords: event-related potentials; predictive coding; visual-auditory
Year: 2020 PMID: 33355930 PMCID: PMC7900976 DOI: 10.1111/psyp.13749
Source DB: PubMed Journal: Psychophysiology ISSN: 0048-5772 Impact factor: 4.016
Experimental conditions
| Condition | Sound timing | Sound identity |
|---|---|---|
| Natural | Synchronized with video | Handclap |
| Random‐timing | Randomᵇ | Handclap |
| Random‐identity | Synchronized with video | Randomᵃ |
ᵃ The identity of the sound was randomly selected on each trial from 100 different environmental sounds (e.g., doorbell, dog bark, car horn) with equal rise and fall times, equal length, and matched amplitudes.
ᵇ The sound could either precede or follow the visual collision moment of the two hands at a randomly selected SOA of −250, −230, −210, −190, −170, 210, 240, 260, 290, or 320 ms (negative and positive values indicate the sound leading or following the natural synchrony point, respectively).
FIGURE 1: Time‐course of the video presented in audiovisual and visual trials.
FIGURE 2: Grand average auditory (A) and audiovisual−visual (AV−V) event‐related potential (ERP) waveforms and difference waveforms (A−(AV−V)) for the natural, random‐timing, and random‐identity conditions. Analyzed time windows are marked in gray on the relevant electrodes for the N1a, N1b, N1c, and P2 components.
FIGURE 3: Scalp potential maps of the grand average auditory (A) and audiovisual−visual (AV−V) modalities and the difference topographies (A−(AV−V)) in the analyzed N1a, N1b, N1c, and P2 time windows for the natural, random‐timing, and random‐identity conditions.
FIGURE 4: Average amplitude suppression in microvolts (µV) and as a percentage (%) of the auditory amplitude for the N1b (a, b) and P2 (c, d) components in the natural, random‐timing, and random‐identity conditions. Error bars represent ±1 standard error of the mean.