Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise.

Linda Drijvers1, Mircea van der Plas2, Asli Özyürek3, Ole Jensen2.   

Abstract

Listeners are often challenged by adverse listening conditions during language comprehension induced by external factors, such as noise, but also internal factors, such as being a non-native listener. Visible cues, such as semantic information conveyed by iconic gestures, can enhance language comprehension in such situations. Using magnetoencephalography (MEG), we investigated whether spatiotemporal oscillatory dynamics can predict a listener's benefit from iconic gestures during language comprehension in both internally (non-native versus native listeners) and externally (clear/degraded speech) induced adverse listening conditions. Proficient non-native speakers of Dutch were presented with videos in which an actress uttered a degraded or clear verb, accompanied by a gesture or not, and completed a cued-recall task after every video. The behavioral and oscillatory results obtained from non-native listeners were compared to those of an MEG study in which we presented the same stimuli to native listeners (Drijvers et al., 2018a). Non-native listeners demonstrated a gestural enhancement effect similar to that of native listeners, but responded significantly more slowly on the cued-recall task. In both native and non-native listeners, an alpha/beta power suppression revealed engagement of the extended language network, motor and visual regions during gestural enhancement of degraded speech comprehension, suggesting similar core processes that support unification and lexical access. An individual's alpha/beta power modulation predicted the gestural benefit a listener experienced during degraded speech comprehension. Importantly, however, non-native listeners showed less engagement of the mouth area of the primary somatosensory cortex, left insula (beta), LIFG and ATL (alpha) than native listeners, which suggests that non-native listeners might be hindered in processing the degraded phonological cues and coupling them to the semantic information conveyed by the gesture.
Native and non-native listeners thus demonstrated similar yet distinct spatiotemporal oscillatory dynamics when recruiting visual cues to disambiguate degraded speech.
Copyright © 2019 The Authors. Published by Elsevier Inc. All rights reserved.

Keywords:  Degraded speech; Gesture; Magnetoencephalography; Multimodal integration; Non-native language comprehension; Oscillations; Semantics

Year:  2019        PMID: 30905837     DOI: 10.1016/j.neuroimage.2019.03.032

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Related articles: 3 in total

1.  Degree of Language Experience Modulates Visual Attention to Visible Speech and Iconic Gestures During Clear and Degraded Speech Comprehension.

Authors:  Linda Drijvers; Julija Vaitonytė; Asli Özyürek
Journal:  Cogn Sci       Date:  2019-10

2.  Beat Gestures for Comprehension and Recall: Differential Effects of Language Learners and Native Listeners.

Authors:  Patrick Louis Rohrer; Elisabeth Delais-Roussarie; Pilar Prieto
Journal:  Front Psychol       Date:  2020-10-19

3.  Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information.

Authors:  Linda Drijvers; Ole Jensen; Eelke Spaak
Journal:  Hum Brain Mapp       Date:  2020-11-18       Impact factor: 5.399

