
Recognizing people from dynamic and static faces and bodies: dissecting identity with a fusion approach.

Alice J O'Toole, P Jonathon Phillips, Samuel Weimer, Dana A Roark, Julianne Ayyad, Robert Barwick, Joseph Dunlop

Abstract

The goal of this study was to evaluate human accuracy at identifying people from static and dynamic presentations of faces and bodies. Participants matched identity in pairs of videos depicting people in motion (walking or conversing) and in "best" static images extracted from the videos. The type of information presented to observers was varied to include the face and body, the face-only, and the body-only. Identification performance was best when people viewed the face and body in motion. There was an advantage for dynamic over static stimuli, but only for conditions that included the body. Control experiments with multiple-static images indicated that some of the motion advantages we obtained were due to seeing multiple images of the person, rather than to the motion, per se. To computationally assess the contribution of different types of information for identification, we fused the identity judgments from observers in different conditions using a statistical learning algorithm trained to optimize identification accuracy. This fusion achieved perfect performance. The condition weights that resulted suggest that static displays encourage reliance on the face for recognition, whereas dynamic displays seem to direct attention more equitably across the body and face.
Copyright © 2010 Elsevier Ltd. All rights reserved.
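The fusion step described in the abstract, combining observers' per-condition identity judgments with a statistical learning algorithm trained to optimize identification accuracy, can be sketched as follows. This is a minimal illustration only: the simulated scores, noise levels, and the choice of a logistic-regression fuser fit by gradient descent are all assumptions for demonstration, not the authors' actual method or data.

```python
import numpy as np

# Hypothetical setup: each trial yields a "same identity" score from three
# viewing conditions (face-only, body-only, face+body) plus 0/1 ground truth.
rng = np.random.default_rng(0)
n = 200
truth = rng.integers(0, 2, n)

# Simulated observer scores: noise level per condition is an assumption
# (face+body least noisy, body-only most noisy).
scores = np.column_stack(
    [truth + rng.normal(0, s, n) for s in (0.6, 1.0, 0.4)]
)

# Logistic-regression fusion trained by gradient descent: one plausible
# "statistical learning algorithm"; the abstract does not name the one used.
X = np.column_stack([np.ones(n), scores])  # prepend intercept column
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))       # fused probability of "same"
    w -= 0.1 * X.T @ (p - truth) / n       # gradient step on log loss

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
accuracy = (pred == truth).mean()

# The learned condition weights (w[1:]) play the role the abstract describes:
# they indicate how much each condition contributes to the fused decision.
print("condition weights:", w[1:], "fused accuracy:", accuracy)
```

In the paper, the analogous learned weights are what suggest that static displays push reliance onto the face while dynamic displays spread attention across face and body.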


Year:  2010        PMID: 20969886     DOI: 10.1016/j.visres.2010.09.035

Source DB:  PubMed          Journal:  Vision Res        ISSN: 0042-6989            Impact factor:   1.886


Related articles:  28 in total

1.  Whole-agent selectivity within the macaque face-processing system.

Authors:  Clark Fisher; Winrich A Freiwald
Journal:  Proc Natl Acad Sci U S A       Date:  2015-10-13       Impact factor: 11.205

2.  Rigid facial motion influences featural, but not holistic, face processing.

Authors:  Naiqi G Xiao; Paul C Quinn; Liezhong Ge; Kang Lee
Journal:  Vision Res       Date:  2012-02-08       Impact factor: 1.886

3.  The efficiency of dynamic and static facial expression recognition.

Authors:  Jason M Gold; Jarrett D Barker; Shawn Barr; Jennifer L Bittner; W Drew Bromfield; Nicole Chu; Roy A Goode; Doori Lee; Michael Simmons; Aparna Srinath
Journal:  J Vis       Date:  2013-04-25       Impact factor: 2.240

4.  Spatiotemporal information during unsupervised learning enhances viewpoint invariant object recognition.

Authors:  Moqian Tian; Kalanit Grill-Spector
Journal:  J Vis       Date:  2015       Impact factor: 2.240

5.  Independent contributions of the face, body, and gait to the representation of the whole person.

Authors:  Noa Simhi; Galit Yovel
Journal:  Atten Percept Psychophys       Date:  2021-01       Impact factor: 2.199

6.  Elastic facial movement influences part-based but not holistic processing.

Authors:  Naiqi G Xiao; Paul C Quinn; Liezhong Ge; Kang Lee
Journal:  J Exp Psychol Hum Percept Perform       Date:  2013-02-11       Impact factor: 3.332

7.  Perceptual-motor styles. [Review]

Authors:  Pierre-Paul Vidal; Francesco Lacquaniti
Journal:  Exp Brain Res       Date:  2021-03-06       Impact factor: 2.064

8.  Preference for orientations commonly viewed for one's own hand in the anterior intraparietal cortex.

Authors:  Regine Zopf; Mark A Williams
Journal:  PLoS One       Date:  2013-01-07       Impact factor: 3.240

9.  A quantitative meta-analysis of face recognition deficits in autism: 40 years of research.

Authors:  Jason W Griffin; Russell Bauer; K Suzanne Scherf
Journal:  Psychol Bull       Date:  2020-10-26       Impact factor: 17.737

10.  Separated and overlapping neural coding of face and body identity.

Authors:  Celia Foster; Mintao Zhao; Timo Bolkart; Michael J Black; Andreas Bartels; Isabelle Bülthoff
Journal:  Hum Brain Mapp       Date:  2021-05-25       Impact factor: 5.038

