
Hearing a face: cross-modal speaker matching using isolated visible speech.

Lawrence D Rosenblum, Nicolas M Smith, Sarah M Nichols, Steven Hale, Joanne Lee.

Abstract

An experiment was performed to test whether cross-modal speaker matches could be made using isolated visible speech movement information. Visible speech movements were isolated using a point-light technique. In five conditions, subjects were asked to match a voice to one of two (unimodal) speaking point-light faces on the basis of speaker identity. Two of these conditions were designed to maintain the idiosyncratic speech dynamics of the speakers, whereas three of the conditions deleted or distorted the dynamics in various ways. Some of these conditions also equated video frames across the dynamically correct and distorted movements. The results revealed generally better matching performance in the conditions that maintained the correct speech dynamics than in those that did not, even when both types of conditions contained exactly the same video frames. The results suggest that visible speech movements themselves can support cross-modal speaker matching.

Year:  2006        PMID: 16617832     DOI: 10.3758/bf03193658

Source DB:  PubMed          Journal:  Percept Psychophys        ISSN: 0031-5117


Related articles: 7 in total

1.  Specification of cross-modal source information in isolated kinematic displays of speech.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Acoust Soc Am       Date:  2004-07       Impact factor: 1.840

2.  Crossmodal Source Identification in Speech Perception.

Authors:  Lorin Lachs; David B Pisoni
Journal:  Ecol Psychol       Date:  2004

3.  Experience with a talker can transfer across modalities to facilitate lipreading.

Authors:  Kauyumari Sanchez; James W Dias; Lawrence D Rosenblum
Journal:  Atten Percept Psychophys       Date:  2013-10       Impact factor: 2.199

4.  Explaining face-voice matching decisions: The contribution of mouth movements, stimulus effects and response biases.

Authors:  Nadine Lavan; Harriet Smith; Li Jiang; Carolyn McGettigan
Journal:  Atten Percept Psychophys       Date:  2021-04-01       Impact factor: 2.199

5.  The signer and the sign: cortical correlates of person identity and language processing from point-light displays.

Authors:  Ruth Campbell; Cheryl M Capek; Karine Gazarian; Mairéad MacSweeney; Bencie Woll; Anthony S David; Philip K McGuire; Michael J Brammer
Journal:  Neuropsychologia       Date:  2011-07-07       Impact factor: 3.139

6.  Matching novel face and voice identity using static and dynamic facial images.

Authors:  Harriet M J Smith; Andrew K Dunn; Thom Baguley; Paula C Stacey
Journal:  Atten Percept Psychophys       Date:  2016-04       Impact factor: 2.199

7.  Cross-modal signatures in maternal speech and singing.

Authors:  Sandra E Trehub; Judy Plantinga; Jelena Brcic; Magda Nowicki
Journal:  Front Psychol       Date:  2013-11-01
