
Visual speech information for face recognition.

Lawrence D Rosenblum, Deborah A Yakel, Naser Baseer, Anjani Panchal, Brynn C Nodarse, Ryan P Niehus.

Abstract

Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.

Year:  2002        PMID: 12013377     DOI: 10.3758/bf03195788

Source DB:  PubMed          Journal:  Percept Psychophys        ISSN: 0031-5117


Related articles: 14 in total (10 shown)

1.  Learning to recognize talkers from natural, sinewave, and reversed speech samples.

Authors:  Sonya M Sheffert; David B Pisoni; Jennifer M Fellowes; Robert E Remez
Journal:  J Exp Psychol Hum Percept Perform       Date:  2002-12       Impact factor: 3.332

2.  Specification of cross-modal source information in isolated kinematic displays of speech.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Acoust Soc Am       Date:  2004-07       Impact factor: 1.840

3.  Cross-modal source information and spoken word recognition.

Authors:  Lorin Lachs; David B Pisoni
Journal:  J Exp Psychol Hum Percept Perform       Date:  2004-04       Impact factor: 3.332

4.  A search advantage for faces learned in motion.

Authors:  Karin S Pilz; Ian M Thornton; Heinrich H Bülthoff
Journal:  Exp Brain Res       Date:  2005-12-06       Impact factor: 1.972

5.  Visual influences on interactive speech alignment.

Authors:  James W Dias; Lawrence D Rosenblum
Journal:  Perception       Date:  2011       Impact factor: 1.490

6.  Rigid facial motion influences featural, but not holistic, face processing.

Authors:  Naiqi G Xiao; Paul C Quinn; Liezhong Ge; Kang Lee
Journal:  Vision Res       Date:  2012-02-08       Impact factor: 1.886

7.  The Effects of 'Face' on Listening Comprehension: Evidence from Advanced Jordanian Speakers of English.

Authors:  Jihad M Hamdan; Rose Fowler Al-Hawamdeh
Journal:  J Psycholinguist Res       Date:  2018-10

8.  Experience with a talker can transfer across modalities to facilitate lipreading.

Authors:  Kauyumari Sanchez; James W Dias; Lawrence D Rosenblum
Journal:  Atten Percept Psychophys       Date:  2013-10       Impact factor: 2.199

9.  Individual differences and the effect of face configuration information in the McGurk effect.

Authors:  Yuta Ujiie; Tomohisa Asai; Akio Wakabayashi
Journal:  Exp Brain Res       Date:  2018-01-30       Impact factor: 1.972

10.  Perception of Multisensory Gender Coherence in 6- and 9-month-old Infants.

Authors:  Anne Hillairet de Boisferon; Eve Dupierrix; Paul C Quinn; Hélène Lœvenbruck; David J Lewkowicz; Kang Lee; Olivier Pascalis
Journal:  Infancy       Date:  2015-06-05
