A kinematic study of critical and non-critical articulators in emotional speech production.

Jangwon Kim, Asterios Toutios, Sungbok Lee, Shrikanth S Narayanan.

Abstract

This study explores one aspect of the articulatory mechanism underlying emotional speech production: the behavior of linguistically critical and non-critical articulators in the encoding of emotional information. The hypothesis is that the possibly larger kinematic variability of non-critical articulators reveals the underlying emotional expression goal more explicitly than that of the critical articulators, which are strictly controlled in service of linguistic goals and therefore exhibit smaller kinematic variability. This hypothesis is examined through kinematic analysis of the movements of critical and non-critical speech articulators, recorded using electromagnetic articulography during spoken expressions of five categorical emotions. Analysis at the level of consonant-vowel-consonant segments reveals that the articulators critical for the consonants show more (less) peripheral articulations during production of the consonant-vowel-consonant syllables for high (low) arousal emotions, while the emotional variation in the positions of non-critical articulators is less sensitive to the linguistic gestures. Analysis at individual phonetic targets shows that, overall, between- and within-emotion variability in articulatory positions is larger for non-critical cases than for critical cases. Finally, the results of simulation experiments suggest that the emotion-dependent postural variation of non-critical articulators is significantly associated with the control of the critical articulators.

Year:  2015        PMID: 25786953     DOI: 10.1121/1.4908284

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


  2 in total

1.  Transforming an embodied conversational agent into an efficient talking head: from keyframe-based animation to multimodal concatenation synthesis.

Authors:  Guillaume Gibert; Kirk N Olsen; Yvonne Leung; Catherine J Stevens
Journal:  Comput Cogn Sci       Date:  2015-09-08

2.  Articulation constrained learning with application to speech emotion recognition.

Authors:  Mohit Shah; Ming Tu; Visar Berisha; Chaitali Chakrabarti; Andreas Spanias
Journal:  EURASIP J Audio Speech Music Process       Date:  2019-08-20
