| Literature DB >> 12013377 |
Lawrence D Rosenblum, Deborah A Yakel, Naser Baseer, Anjani Panchal, Brynn C Nodarse, Ryan P Niehus.
Abstract
Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.Entities:
Mesh:
Year: 2002 PMID: 12013377 DOI: 10.3758/bf03195788
Source DB: PubMed Journal: Percept Psychophys ISSN: 0031-5117