
Synchronization in Interpersonal Speech.

Shahin Amiriparian1, Jing Han1, Maximilian Schmitt1, Alice Baird1, Adria Mallol-Ragolta1, Manuel Milling1, Maurice Gerczuk1, Björn Schuller1,2.   

Abstract

During both positive and negative dyadic exchanges, individuals often unconsciously imitate their partner. Substantial research has been conducted on this phenomenon, and such studies have shown that synchronization between communication partners can improve interpersonal relationships. Automatic computational approaches for recognizing synchrony, however, are still in their infancy. In this study, we extend previous work in which we applied a novel method utilizing hand-crafted low-level acoustic descriptors and autoencoders (AEs) to analyse synchrony in the speech domain. For this purpose, a database consisting of 394 in-the-wild speakers from six different cultures is used. For each speaker in the dyadic exchange, two AEs are implemented. After the training phase, the acoustic features of one speaker are tested using the AE trained on their dyadic partner. In the same way, we also explore the benefits that deep representations of audio may offer, implementing the state-of-the-art Deep Spectrum toolkit. For all speakers, at varied time points during their interaction, the reconstruction error from the AE trained on their respective dyadic partner is calculated. The results obtained from this acoustic analysis are then compared with linguistic experiments based on word counts and word embeddings generated by our word2vec approach. The results demonstrate that there is a degree of synchrony during all interactions. We also find that this degree varies across the six cultures found in the investigated database. These findings are further substantiated through the use of 4,096-dimensional Deep Spectrum features.
Copyright © 2019 Amiriparian, Han, Schmitt, Baird, Mallol-Ragolta, Milling, Gerczuk and Schuller.


Keywords:  autoencoders; computational paralinguistics; human-human interaction; machine learning; speech processing; speech synchronization

Year:  2019        PMID: 33501131      PMCID: PMC7806071          DOI: 10.3389/frobt.2019.00116

Source DB:  PubMed          Journal:  Front Robot AI        ISSN: 2296-9144
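
The cross-reconstruction idea summarized in the abstract (train an autoencoder per speaker, then score one speaker's acoustic features with the AE trained on the dyadic partner; lower reconstruction error suggests more similar, i.e. more synchronized, feature distributions) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual architecture or features: `TinyAE` is a hypothetical linear autoencoder trained with plain gradient descent, and the toy feature matrices stand in for real low-level acoustic descriptors.

```python
import numpy as np

class TinyAE:
    """Minimal linear autoencoder (illustrative stand-in for the paper's AEs)."""
    def __init__(self, n_in, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))  # encoder weights
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # decoder weights
        self.lr = lr

    def fit(self, X, epochs=200):
        for _ in range(epochs):
            H = X @ self.W1          # encode frames into the bottleneck
            R = H @ self.W2          # decode back to feature space
            E = R - X                # per-frame reconstruction residual
            # gradients of the mean squared reconstruction error
            gW2 = H.T @ E / len(X)
            gW1 = X.T @ (E @ self.W2.T) / len(X)
            self.W1 -= self.lr * gW1
            self.W2 -= self.lr * gW2
        return self

    def reconstruction_error(self, X):
        R = (X @ self.W1) @ self.W2
        return float(np.mean((R - X) ** 2))

# Toy acoustic feature matrices (frames x features) for two dyadic partners;
# speaker B's features are a noisy copy of A's, i.e. partly "synchronized".
rng = np.random.default_rng(1)
feats_a = rng.normal(size=(200, 8))
feats_b = feats_a + rng.normal(scale=0.3, size=(200, 8))

ae_a = TinyAE(8, 4).fit(feats_a)
ae_b = TinyAE(8, 4).fit(feats_b)

# Cross-reconstruction: score A's features with the AE trained on B.
# A low error relative to the self-reconstruction baseline indicates
# that the partners occupy similar acoustic feature spaces.
self_err = ae_a.reconstruction_error(feats_a)
cross_err = ae_b.reconstruction_error(feats_a)
print(self_err, cross_err)
```

In the study this error is computed at varied time points across the interaction, so synchrony can be tracked as it develops rather than summarized by a single score.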


