
Recruitment of fusiform face area associated with listening to degraded speech sounds in auditory-visual speech perception: a PET study.

Tetsuaki Kawase, Keiichiro Yamaguchi, Takenori Ogawa, Ken-Ichi Suzuki, Maki Suzuki, Masatoshi Itoh, Toshimitsu Kobayashi, Toshikatsu Fujii.

Abstract

For fast and accurate cognition of external information, the human brain appears to integrate information from multiple sensory modalities. We used positron emission tomography (PET) to identify the brain areas involved in auditory-visual speech perception. We measured the regional cerebral blood flow (rCBF) of young, normal volunteers during the presentation of dynamic facial movements accompanying vocalization and during a visual control condition (visual noise), each under two auditory conditions: normal and degraded speech sounds. The subjects were instructed to listen carefully to the presented speech sound while keeping their eyes open and to report what they heard. The PET data showed that the elevation of rCBF in the right fusiform gyrus (known as the "face area") was not significant when subjects listened to normal speech accompanied by a dynamic image of the speaker's face, but was significant when degraded speech (low-pass filtered at 500 Hz) was presented with the facial image. These results confirm the possible involvement of the fusiform face area (FFA) in auditory-visual speech perception, especially when auditory information is degraded, and suggest that visual information is recruited interactively to compensate for insufficient auditory information.
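As context for the stimulus manipulation described in the abstract, the following is a minimal Python sketch of how a speech recording could be degraded with a 500 Hz low-pass filter. The abstract specifies only the cutoff frequency; the Butterworth design, filter order, file names, and function name below are illustrative assumptions, not the authors' actual procedure.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfiltfilt

    def degrade_speech(in_path, out_path, cutoff_hz=500.0, order=4):
        # Load the recording; wavfile.read returns (sample rate, samples).
        fs, x = wavfile.read(in_path)
        x = x.astype(np.float64)
        # Design a low-pass Butterworth filter. The order and filter family
        # are assumptions; only the 500 Hz cutoff comes from the paper.
        sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
        # Zero-phase filtering avoids shifting the speech in time.
        y = sosfiltfilt(sos, x, axis=0)
        # Normalize to avoid clipping, then write back as 16-bit PCM.
        y = 0.9 * y / np.max(np.abs(y))
        wavfile.write(out_path, fs, (y * 32767).astype(np.int16))

    # Hypothetical usage: degrade_speech("speech.wav", "speech_lp500.wav")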


Year:  2005        PMID: 15925100     DOI: 10.1016/j.neulet.2005.03.050

Source DB:  PubMed          Journal:  Neurosci Lett        ISSN: 0304-3940            Impact factor:   3.046


Related articles: 6 in total

1.  Emotional expressions in voice and music: same code, same effect?

Authors:  Nicolas Escoffier; Jidan Zhong; Annett Schirmer; Anqi Qiu
Journal:  Hum Brain Mapp       Date:  2012-04-16       Impact factor: 5.038

2.  Auditory cortical activation in severe-to-profound hearing-impaired patients monitored by SPET.

Authors:  W Di Nardo; D Di Giuda; E Scarano; P M Picciotti; S Galla; G De Rossi
Journal:  Acta Otorhinolaryngol Ital       Date:  2006-08       Impact factor: 2.124

3.  Speech comprehension aided by multiple modalities: behavioural and neural interactions.

Authors:  Carolyn McGettigan; Andrew Faulkner; Irene Altarelli; Jonas Obleser; Harriet Baverstock; Sophie K Scott
Journal:  Neuropsychologia       Date:  2012-01-17       Impact factor: 3.139

4.  Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech.

Authors:  Inga M Schepers; Daniel Yoshor; Michael S Beauchamp
Journal:  Cereb Cortex       Date:  2014-06-05       Impact factor: 5.357

5.  Toward Realigning Automatic Speaker Verification in the Era of COVID-19.

Authors:  Awais Khan; Ali Javed; Khalid Mahmood Malik; Muhammad Anas Raza; James Ryan; Abdul Khader Jilani Saudagar; Hafiz Malik
Journal:  Sensors (Basel)       Date:  2022-03-30       Impact factor: 3.576

6.  Strategies for tonal and atonal musical interpretation in blind and normally sighted children: an fMRI study.

Authors:  Coral Guerrero Arenas; Silvia S Hidalgo Tobón; Pilar Dies Suarez; Eduardo Barragán Pérez; Eduardo Castro Sierra; Julio García; Benito de Celis Alonso
Journal:  Brain Behav       Date:  2016-03-22       Impact factor: 2.708

