It's not what you say but the way you say it: matching faces and voices.

Karen Lander, Harold Hill, Miyuki Kamachi, Eric Vatikiotis-Bateson

Abstract

Recent studies have shown that the face and voice of an unfamiliar person can be matched for identity. Here the authors compare the relative effects of changing sentence content (what is said) and sentence manner (how it is said) on matching identity between faces and voices. A change between speaking a sentence as a statement and as a question disrupted matching performance, whereas changing the sentence itself did not. This was the case when the faces and voices were from the same race as participants and speaking a familiar language (English; Experiment 1) or from another race and speaking an unfamiliar language (Japanese; Experiment 2). Altering manner between conversational and clear speech (Experiment 3) or between conversational and casual speech (Experiment 4) was also disruptive. However, artificially slowing (Experiment 5) or speeding (Experiment 6) speech did not affect cross-modal matching performance. The results show that bimodal cues to identity are closely linked to manner but that content (what is said) and absolute tempo are not critical. Instead, prosodic variations in rhythmic structure and/or expressiveness may provide a bimodal, dynamic identity signature. © 2007 APA, all rights reserved.

Year:  2007        PMID: 17683236     DOI: 10.1037/0096-1523.33.4.905

Source DB:  PubMed          Journal:  J Exp Psychol Hum Percept Perform        ISSN: 0096-1523            Impact factor:   3.332


Related articles: 9 in total

1.  Rhesus macaques recognize unique multimodal face-voice relations of familiar individuals and not of unfamiliar ones.

Authors:  Holly M Habbershon; Sarah Z Ahmed; Yale E Cohen
Journal:  Brain Behav Evol       Date:  2013-06-14       Impact factor: 1.808

2.  Elastic facial movement influences part-based but not holistic processing.

Authors:  Naiqi G Xiao; Paul C Quinn; Liezhong Ge; Kang Lee
Journal:  J Exp Psychol Hum Percept Perform       Date:  2013-02-11       Impact factor: 3.332

3.  Explaining face-voice matching decisions: The contribution of mouth movements, stimulus effects and response biases.

Authors:  Nadine Lavan; Harriet Smith; Li Jiang; Carolyn McGettigan
Journal:  Atten Percept Psychophys       Date:  2021-04-01       Impact factor: 2.199

4.  Matching novel face and voice identity using static and dynamic facial images.

Authors:  Harriet M J Smith; Andrew K Dunn; Thom Baguley; Paula C Stacey
Journal:  Atten Percept Psychophys       Date:  2016-04       Impact factor: 2.199

5.  Cross-modal signatures in maternal speech and singing.

Authors:  Sandra E Trehub; Judy Plantinga; Jelena Brcic; Magda Nowicki
Journal:  Front Psychol       Date:  2013-11-01

6.  Hyper-realistic face masks: a new challenge in person identification.

Authors:  Jet Gabrielle Sanders; Yoshiyuki Ueda; Kazusa Minemoto; Eilidh Noyes; Sakiko Yoshikawa; Rob Jenkins
Journal:  Cogn Res Princ Implic       Date:  2017-10-25

7.  Identity information content depends on the type of facial movement.

Authors:  Katharina Dobs; Isabelle Bülthoff; Johannes Schultz
Journal:  Sci Rep       Date:  2016-09-29       Impact factor: 4.379

8.  Matching Unfamiliar Voices to Static and Dynamic Faces: No Evidence for a Dynamic Face Advantage in a Simultaneous Presentation Paradigm.

Authors:  Sujata M Huestegge
Journal:  Front Psychol       Date:  2019-08-23

9.  Unimodal and cross-modal identity judgements using an audio-visual sorting task: Evidence for independent processing of faces and voices.

Authors:  Nadine Lavan; Harriet M J Smith; Carolyn McGettigan
Journal:  Mem Cognit       Date:  2021-07-12
