Tetsuaki Kawase, Keiichiro Yamaguchi, Takenori Ogawa, Ken-Ichi Suzuki, Maki Suzuki, Masatoshi Itoh, Toshimitsu Kobayashi, Toshikatsu Fujii.
Abstract
For fast and accurate recognition of external information, the human brain appears to integrate information from multiple sensory modalities. We used positron emission tomography (PET) to identify the brain areas related to auditory-visual speech perception. We measured the regional cerebral blood flow (rCBF) of young, normal volunteers during the presentation of dynamic facial movements during vocalization and during a visual control condition (visual noise), each under two different auditory conditions: normal and degraded speech sounds. The subjects were instructed to listen carefully to the presented speech sound while keeping their eyes open and to report what they heard. The PET data showed that the elevation of rCBF in the right fusiform gyrus (known as the "face area") was not significant when the subjects listened to normal speech sound accompanied by a dynamic image of the speaker's face, but was significant when degraded speech sound (filtered with a 500 Hz low-pass filter) was presented with the facial image. The results of the present study confirm the possible involvement of the fusiform face area (FFA) in auditory-visual speech perception, especially when auditory information is degraded, and suggest that visual information is interactively recruited to compensate for insufficient auditory information.
Year: 2005 PMID: 15925100 DOI: 10.1016/j.neulet.2005.03.050
Source DB: PubMed Journal: Neurosci Lett ISSN: 0304-3940 Impact factor: 3.046