| Literature DB >> 36246513 |
Abstract
In pandemic times, when visual speech cues are masked, it becomes particularly evident how much we rely on them to communicate. Recent research points to a key role of neural oscillations for cross-modal predictions during speech perception. This article bridges several fields of research - neural oscillations, cross-modal speech perception and brain stimulation - to propose ways forward for research on human communication. Future research can test: (1) whether "speech is special" for oscillatory processes underlying cross-modal predictions; (2) whether "visual control" of oscillatory processes in the auditory system is strongest in moments of reduced acoustic regularity; and (3) whether providing information to the brain via electric stimulation can overcome deficits associated with cross-modal information processing in certain pathological conditions.
Keywords: Cross-modality; Neural entrainment; Neural oscillations; Prediction; Speech perception; Transcranial alternating current stimulation
Year: 2021 PMID: 36246513 PMCID: PMC9559900 DOI: 10.1016/j.crneur.2021.100015
Source DB: PubMed Journal: Curr Res Neurobiol ISSN: 2665-945X
Fig. 1A. Interpretation of experimental results reported by Biau et al. (in press). In typically produced human speech (left), lip movements (blue) precede the acoustic speech signal (green). Example signals have been band-pass filtered to illustrate fluctuations in the theta range (~4–8 Hz). In the receiver's brain (right), neural oscillations in visual regions (blue) align to lip movements and prepare oscillations in auditory regions (green) so that they become aligned with the acoustic signal (faint green). In the study described, the acoustic signal was not presented; instead, detection of a target tone was found to be modulated by the theta phase of lip movements, demonstrating that visual speech cues drive auditory perception. B. In scenarios in which such visual cues are unavailable, oscillations in auditory regions can still align to the speech signal. However, lacking preparatory signals from visual regions, they adjust less efficiently to upcoming input, potentially leading to impaired speech perception.