Anna-Katharina R. Bauer, Stefan Debener, Anna C. Nobre.
Abstract
At any given moment, we receive multiple signals from our different senses. Prior research has shown that signals in one sensory modality can influence neural activity and behavioural performance associated with another sensory modality. Recent human and nonhuman primate studies suggest that such cross-modal influences in sensory cortices are mediated by the synchronisation of ongoing neural oscillations. In this review, we consider two mechanisms proposed to facilitate cross-modal influences on sensory processing, namely cross-modal phase resetting and neural entrainment. We consider how top-down processes may further influence cross-modal processing in a flexible manner, and we highlight fruitful directions for further research.
Keywords: causal inference; cross-modal influence; multisensory; neural entrainment; neural oscillations; phase reset
Year: 2020 PMID: 32317142 PMCID: PMC7653674 DOI: 10.1016/j.tics.2020.03.003
Source DB: PubMed Journal: Trends Cogn Sci ISSN: 1364-6613 Impact factor: 20.229
Figure 1. Basic Principles of Phase Resetting (A) and Neural Entrainment (B) Mechanisms.
(A) Phase reset results from a single transient event (e.g., sound or flash of light) that ‘resets’ the phase of ongoing neural oscillations. Schematic representation of phase realignment of neural oscillations in the auditory cortex (blue) and visual cortex (red) due to a transient event. (B) Phase entrainment occurs as the result of a rhythmic stimulus gradually shifting the phase of the neural oscillation. Schematic representation of phase realignment of ongoing neural oscillations in the auditory cortex (blue) and visual cortex (red) due to external rhythmic stimulation. For both transient and rhythmic stimulation, the phase of ongoing neural oscillations aligns to the driving stimulus, thereby modulating the excitation-inhibition cycle of the neural oscillation.
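The two mechanisms in Figure 1 can be illustrated with a toy phase-oscillator simulation. The sketch below (an illustrative model, not taken from the article; all parameter values are assumptions) contrasts a transient event that instantaneously realigns the phase of an ongoing oscillation with a rhythmic driver that gradually pulls the phase into alignment via weak Kuramoto-style coupling.

```python
import numpy as np

def simulate_phase(duration=2.0, fs=1000, f_osc=10.0,
                   reset_times=(), drive_freq=None, coupling=0.0):
    """Toy simulation of the phase of a single neural oscillator.

    reset_times : times (s) at which a transient event (e.g., a sound or
                  flash) 'resets' the phase to zero (phase resetting).
    drive_freq  : frequency (Hz) of a rhythmic stimulus; with coupling > 0
                  the phase is gradually pulled toward the driver each
                  step (entrainment).
    """
    n = int(duration * fs)
    dt = 1.0 / fs
    phase = np.zeros(n)
    for i in range(1, n):
        t = i * dt
        dphi = 2 * np.pi * f_osc * dt          # intrinsic phase advance
        if drive_freq is not None:
            drive_phase = 2 * np.pi * drive_freq * t
            # Kuramoto-style coupling: nudge the phase toward the driver
            dphi += coupling * np.sin(drive_phase - phase[i - 1]) * dt
        phase[i] = phase[i - 1] + dphi
        # A transient event instantaneously realigns ('resets') the phase
        for rt in reset_times:
            if abs(t - rt) < dt / 2:
                phase[i] = 0.0
    return phase

# Phase reset: a single transient event at t = 1 s realigns the oscillation
reset = simulate_phase(reset_times=(1.0,))
# Entrainment: a 10 Hz rhythmic stream gradually captures a 9.5 Hz oscillator
entrained = simulate_phase(f_osc=9.5, drive_freq=10.0, coupling=20.0)
```

In both regimes the oscillator's phase ends up aligned to the stimulus, which is the shared consequence the caption emphasises: the stimulus comes to control where in the excitation-inhibition cycle subsequent input arrives.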
Empirical Reports of Cross-modal Phase Reset
| First author | Year | CM influence | Method | Species | Reset event | Affected oscillations | Perceptual consequence |
|---|---|---|---|---|---|---|---|
| Fiebelkorn | 2011 | A to V | Beh | Human | Short tone | Low frequency | Periodic modulations of target detection rate |
| Naue | 2011 | A to V | EEG | Human | White noise burst | Beta | None reported |
| Diederich | 2012 | A to V | Beh | Human | White noise burst | Beta, gamma | Periodic modulations of saccadic response times |
| Romei | 2012 | A to V | EEG-TMS | Human | Short tone | Alpha | Periodic modulation of TMS-induced phosphene perception |
| Fiebelkorn | 2013 | A to V | EEG | Human | Short tone | Delta to beta | Periodic modulations of target detection rate |
| Mercier | 2013 | A to V | ECoG | Human | Short tone | Theta to gamma | None reported |
| Diederich | 2014 | A to V | EEG | Human | Short tone | Theta, alpha | Periodic modulations of saccadic response times |
| Cecere | 2015 | A to V | EEG-tACS | Human | Short tone (sound-induced double-flash illusion) | Alpha | None reported |
| Keil | 2017 | A to V | EEG | Human | Short tone (sound-induced double-flash illusion) | Alpha | None reported |
| Plass | 2019 | A to V | ECoG | Human | Short tone | Theta, alpha, beta | None reported |
| Senkowski | 2005 | V to A | EEG | Human | AV grating/short tone | Gamma | Faster behavioural responses |
| Kayser | 2008 | V to A | LFP | Macaque | Naturalistic scenes | Alpha | None reported |
| Thorne | 2011 | V to A | EEG | Human | AV dash/tone streams | Theta, alpha | Faster behavioural responses |
| Mercier | 2015 | V to A | ECoG | Human | Red disk | Delta, theta | None reported |
| Perrodin | 2015 | V to A | LFP | Macaque | Naturalistic scenes | Theta | None reported |
| ten Oever | 2015 | V to A | EEG | Human | Circle | Delta, alpha | None reported |
| Lakatos | 2007 | T to A | CSD | Macaque | Median nerve stimulation | Delta, theta, gamma | None reported |
| Lakatos | 2009 | A to V | CSD | Macaque | Short tones, flicker | Theta, gamma | None reported |
Abbreviations: A, auditory; AV, audiovisual; Beh, behavioural; CSD, current source density; ECoG, electrocorticography; EEG, electroencephalography; LFP, local field potential; MEG, magnetoencephalography; T, tactile; tACS, transcranial alternating current stimulation; TMS, transcranial magnetic stimulation; V, visual.
Empirical Reports of Cross-modal Entrainment
| First author | Year | CM influence | Method | Entraining sequence | Affected oscillations | Perceptual consequence |
|---|---|---|---|---|---|---|
| Bolger | 2013 | A to V | Beh | Isochronous tone sequence (2 Hz) and classical music excerpts | – | Faster behavioural responses for salient metric positions |
| Brochard | 2013 | A to V | Beh | Syncopated rhythm (1.25 Hz) | – | Facilitated word recognition for on-beat times |
| Miller | 2013 | A to V | Beh | Isochronous tone sequence (1.67 Hz) | – | Faster saccadic responses for on-beat times |
| Escoffier | 2015 | A to V | EEG | Isochronous tone sequence (1.3 Hz) | Beta | None reported |
| Simon | 2017 | A to V | EEG | Amplitude-modulated white noise (3 Hz) | Delta, theta, alpha | Periodic modulation of target detection rate |
| Barnhart | 2018 | A to V | Beh | Isochronous tone sequence (0.67 Hz and 1.5 Hz) | – | Faster behavioural responses for on-beat times |
| Park | 2016 | V to A | MEG | AV speech | Delta, theta | None reported |
| Megevand | 2019 | A to V (V to A) | iEEG | AV speech | Delta, theta | None reported |
Abbreviations: V, visual; A, auditory; AV, audiovisual; Beh, behavioural; EEG, electroencephalography; iEEG, intracranial EEG; MEG, magnetoencephalography.
Figure I. Computational Modelling of Cross-modal Interactions.
(A–C) The first row depicts a schematic representation of different causal structures in the environment. SA, SV, and SAV represent sources of auditory, visual, or cross-modal stimuli, and XA and XV indicate the respective sensory representations (e.g., time or location). The bottom row depicts the probability distributions of these sensory representations derived from the Bayesian model. (A) Assuming separate sources (C=2) leads to independent estimates for auditory and visual stimuli, with the optimal value matching the most likely unimodal response. (B) Assuming a common source (C=1) leads to fusion of the two sensory signals. The optimal Bayesian estimate is the combination of both auditory and visual input, each weighted by its relative reliability. (C) In Bayesian Causal Inference, the two different hypotheses about the causal structure (e.g., one or two sources) are combined, each weighted by its inferred probability given the auditory and visual input, known as model averaging. The optimal stimulus estimate is a mixture of the unimodal and fused estimates. (D–F) Schematized temporal relations between two stimuli. (D) When stimuli are presented with large temporal discrepancy, they are typically perceived as independent events and are processed separately. (E) When auditory–visual stimuli are presented with no or little temporal discrepancy, they are typically perceived as originating from the same source and their spatial evidence is integrated (fused). (F) When the temporal discrepancy is intermediate, causal inference can result in partial integration: the perceived timings of the two stimuli are pulled towards each other but do not converge.
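The model-averaging scheme in the caption can be sketched numerically. The code below is an illustrative implementation of standard Bayesian causal inference over two noisy Gaussian measurements, not code from the article; the noise levels, the zero-mean Gaussian prior, and the prior probability of a common cause (`p_common`) are all assumed parameters, and the marginal likelihoods are computed by grid integration for transparency rather than in closed form.

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def causal_inference(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                     sigma_p=10.0, p_common=0.5):
    """Bayesian causal inference for auditory/visual measurements.

    Returns (P(C=1 | x_a, x_v), fused estimate, model-averaged
    auditory estimate), with a zero-mean Gaussian prior of width sigma_p
    over the latent stimulus value (e.g., location or time).
    """
    s = np.linspace(-50, 50, 5001)            # grid over the latent stimulus
    ds = s[1] - s[0]
    prior = gauss(s, 0.0, sigma_p ** 2)

    # C = 1: both measurements arise from a single latent source s
    like_c1 = np.sum(gauss(x_a, s, sigma_a ** 2)
                     * gauss(x_v, s, sigma_v ** 2) * prior) * ds
    # C = 2: each measurement has its own independent source
    like_a = np.sum(gauss(x_a, s, sigma_a ** 2) * prior) * ds
    like_v = np.sum(gauss(x_v, s, sigma_v ** 2) * prior) * ds
    like_c2 = like_a * like_v

    post_c1 = (like_c1 * p_common) / (like_c1 * p_common
                                      + like_c2 * (1 - p_common))

    # Fusion under C = 1: reliability-weighted combination (panel B)
    prec_sum = sigma_a ** -2 + sigma_v ** -2 + sigma_p ** -2
    s_fused = (x_a * sigma_a ** -2 + x_v * sigma_v ** -2) / prec_sum

    # Unimodal auditory estimate under C = 2 (shrunk toward the prior mean)
    s_a_alone = x_a * sigma_a ** -2 / (sigma_a ** -2 + sigma_p ** -2)

    # Model averaging (panel C): mix estimates by the posterior over C
    s_a_hat = post_c1 * s_fused + (1 - post_c1) * s_a_alone
    return post_c1, s_fused, s_a_hat

# Small discrepancy: a common source is likely and the estimates fuse
p_near, fused, avg = causal_inference(x_a=1.0, x_v=0.5)
# Large discrepancy: separate sources are inferred, little integration
p_far, _, _ = causal_inference(x_a=10.0, x_v=-10.0)
```

As in panels D–F, the posterior probability of a common cause falls off with the discrepancy between the two measurements, so the model-averaged estimate moves smoothly from full fusion to independent unimodal estimates, passing through partial integration at intermediate discrepancies.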