
Matching voice and face identity from static images.

Lauren W Mavica, Elan Barenholtz.

Abstract

Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the pictured models, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli contained only cropped faces and the auditory stimuli were full sentences; (c) the visual stimuli were the same pictures as in Condition (a) but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition (a) of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.
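
For readers wanting to see what "better than chance" and the rating-agreement comparison amount to quantitatively, the following is a minimal illustrative sketch, not the authors' reported analysis: it assumes a binomial test of two-alternative forced-choice accuracy against the 50% chance level and a Pearson correlation between per-model matching accuracy and face-voice rating agreement, with entirely made-up numbers.

```python
# Illustrative sketch only; the abstract does not specify the exact statistical procedure.
from scipy.stats import binomtest, pearsonr

# Hypothetical data: correct responses out of total two-alternative trials,
# plus per-model matching accuracy and per-model face/voice rating agreement.
n_correct, n_trials = 68, 120                       # made-up trial counts
match_accuracy  = [0.55, 0.62, 0.48, 0.70, 0.58]    # hypothetical per-model accuracy
rating_agreement = [0.10, 0.35, 0.05, 0.20, 0.15]   # hypothetical per-model agreement scores

# Two-alternative forced choice: chance performance is 0.5.
above_chance = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p(accuracy > chance) = {above_chance.pvalue:.4f}")

# Does matching performance track agreement between face and voice ratings?
r, p = pearsonr(match_accuracy, rating_agreement)
print(f"r = {r:.2f}, p = {p:.4f}")
```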


Year:  2012        PMID: 23276114     DOI: 10.1037/a0030945

Source DB:  PubMed          Journal:  J Exp Psychol Hum Percept Perform        ISSN: 0096-1523            Impact factor:   3.332


Related articles: 14 in total

1.  Developmental Shifts in Detection and Attention for Auditory, Visual, and Audiovisual Speech.

Authors:  Susan Jerger; Markus F Damian; Cassandra Karl; Hervé Abdi
Journal:  J Speech Lang Hear Res       Date:  2018-12-10       Impact factor: 2.297

2.  Categorical congruence facilitates multisensory associative learning.

Authors:  Elan Barenholtz; David J Lewkowicz; Meredith Davidson; Lauren Mavica
Journal:  Psychon Bull Rev       Date:  2014-10

3.  Experience with a talker can transfer across modalities to facilitate lipreading.

Authors:  Kauyumari Sanchez; James W Dias; Lawrence D Rosenblum
Journal:  Atten Percept Psychophys       Date:  2013-10       Impact factor: 2.199

4.  Detection and Attention for Auditory, Visual, and Audiovisual Speech in Children with Hearing Loss.

Authors:  Susan Jerger; Markus F Damian; Cassandra Karl; Hervé Abdi
Journal:  Ear Hear       Date:  2020 May/Jun       Impact factor: 3.570

5.  Face-voice space: Integrating visual and auditory cues in judgments of person distinctiveness.

Authors:  Joshua R Tatz; Zehra F Peynircioğlu; William Brent
Journal:  Atten Percept Psychophys       Date:  2020-10       Impact factor: 2.199

6.  How the human brain exchanges information across sensory modalities to recognize other people.

Authors:  Helen Blank; Stefan J Kiebel; Katharina von Kriegstein
Journal:  Hum Brain Mapp       Date:  2014-09-13       Impact factor: 5.038

7.  Explaining face-voice matching decisions: The contribution of mouth movements, stimulus effects and response biases.

Authors:  Nadine Lavan; Harriet Smith; Li Jiang; Carolyn McGettigan
Journal:  Atten Percept Psychophys       Date:  2021-04-01       Impact factor: 2.199

8.  Matching novel face and voice identity using static and dynamic facial images.

Authors:  Harriet M J Smith; Andrew K Dunn; Thom Baguley; Paula C Stacey
Journal:  Atten Percept Psychophys       Date:  2016-04       Impact factor: 2.199

9.  Cross-modal signatures in maternal speech and singing.

Authors:  Sandra E Trehub; Judy Plantinga; Jelena Brcic; Magda Nowicki
Journal:  Front Psychol       Date:  2013-11-01

10.  Visual mechanisms for voice-identity recognition flexibly adjust to auditory noise level.

Authors:  Corrina Maguinness; Katharina von Kriegstein
Journal:  Hum Brain Mapp       Date:  2021-05-27       Impact factor: 5.038

