Maëva Michon, José Zamorano-Abramson, Francisco Aboitiz.
Abstract
While influential works since the 1970s have widely assumed that imitation is an innate skill in both human and non-human primate neonates, recent empirical studies and meta-analyses have challenged this view, indicating other forms of reward-based learning as relevant factors in the development of social behavior. The visual input translation into matching motor output that underlies imitation abilities instead seems to develop along with social interactions and sensorimotor experience during infancy and childhood. Recently, a new visual stream has been identified in both human and non-human primate brains, updating the dual visual stream model. This third pathway is thought to be specialized for dynamics aspects of social perceptions such as eye-gaze, facial expression and crucially for audio-visual integration of speech. Here, we review empirical studies addressing an understudied but crucial aspect of speech and communication, namely the processing of visual orofacial cues (i.e., the perception of a speaker's lips and tongue movements) and its integration with vocal auditory cues. Along this review, we offer new insights from our understanding of speech as the product of evolution and development of a rhythmic and multimodal organization of sensorimotor brain networks, supporting volitional motor control of the upper vocal tract and audio-visual voices-faces integration.Entities:
Keywords: audiovisual speech; face-voice integration; imitation; multimodal integration; primate social brain; speech development; speech evolution; visual speech
Year: 2022 PMID: 35432052 PMCID: PMC9007199 DOI: 10.3389/fpsyg.2022.829083
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1. (A) Human and (B) chimpanzee neonates imitating orofacial gestures (left panel: tongue protrusion; middle panel: mouth opening; right panel: lip protrusion) [(A) Reprinted with permission from Meltzoff and Moore (1977) and (B) Reprinted with permission from Myowa-Yamakoshi et al. (2004)]. (C) A twenty-eight-week gestational age fetus producing aerodigestive stereotypies. Reprinted with permission from Kurjak et al. (2004).
FIGURE 2. Updated version of the visual streams model: the ventral and dorsal pathways are represented by the blue and yellow arrows, respectively, and the third visual pathway proposed by Pitcher and Ungerleider (2020) is depicted in green. While these authors emphasized the role of the third pathway in the right hemisphere (left panel), in this article we focus on its functions in the left hemisphere (right panel).
FIGURE 3. Rhythmic properties of audiovisual speech and cortical oscillations. Visual and auditory speech cues, as well as neural oscillations, are depicted in green and yellow, respectively. The blue arrow in the bottom panel indicates feedforward modulation of auditory cortex responses via theta synchronization to the oscillatory rhythm of the visual cortex.