Joshua C. Peterson, Stefan Uddenberg, Thomas L. Griffiths, Alexander Todorov, Jordan W. Suchow.
Abstract
The diversity of human faces and the contexts in which they appear give rise to an expansive stimulus space over which people infer psychological traits (e.g., trustworthiness or alertness) and other attributes (e.g., age or adiposity). Machine learning methods, in particular deep neural networks, provide expressive feature representations of face stimuli, but the correspondence between these representations and various human attribute inferences is difficult to determine because the former are high-dimensional vectors produced via black-box optimization algorithms. Here we combine deep generative image models with over 1 million judgments to model inferences of more than 30 attributes over a comprehensive latent face space. The predictive accuracy of our model approaches human interrater reliability, which simulations suggest would not have been possible with fewer faces, fewer judgments, or lower-dimensional feature representations. Our model can be used to predict and manipulate inferences with respect to arbitrary face photographs or to generate synthetic photorealistic face stimuli that evoke impressions tuned along the modeled attributes.
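The core modeling step described in the abstract (mapping high-dimensional latent face features to average attribute ratings) can be sketched as a regularized linear regression. This is a minimal illustration on synthetic data, not the authors' exact pipeline: the latent dimensionality, the ridge penalty, and the single-holdout evaluation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: latent feature vectors for faces and mean ratings
# of one attribute (e.g., perceived trustworthiness). Sizes are assumed.
n_faces, n_dims = 1000, 50
Z = rng.normal(size=(n_faces, n_dims))               # latent face features
w_true = rng.normal(size=n_dims)                     # unknown true mapping
y = Z @ w_true + rng.normal(scale=0.5, size=n_faces) # noisy mean ratings

# Ridge regression, closed form: w = (Z'Z + lam*I)^-1 Z'y
lam = 1.0
w = np.linalg.solve(Z.T @ Z + lam * np.eye(n_dims), Z.T @ y)

# Held-out fit quality (a single train/test split for brevity; the paper
# reports cross-validated R^2 per attribute, as in Figs. 2 and 3)
split = 800
w_tr = np.linalg.solve(Z[:split].T @ Z[:split] + lam * np.eye(n_dims),
                       Z[:split].T @ y[:split])
pred = Z[split:] @ w_tr
r2 = 1 - np.sum((y[split:] - pred) ** 2) / np.sum((y[split:] - y[split:].mean()) ** 2)
```

Under this kind of linear readout, model accuracy naturally depends on the number of faces, the number of ratings averaged per face, and the feature dimensionality, which is the trade-off the paper's simulations explore.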
Keywords: computational models; face perception; social traits
Year: 2022 PMID: 35446619 PMCID: PMC9169911 DOI: 10.1073/pnas.2115228119
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 12.779
Fig. 1. Correlation matrix for 34 average attribute ratings for each of 1,000 faces. Rows and columns are arranged according to a hierarchical clustering of the correlation values.
Fig. 2. Average cross-validated model performance (black bars) compared to intersubject reliability (red markers).
Fig. 3. Model performance (R2) for each attribute as a function of the number of face examples (Top), the number of participant ratings for each face example (Middle), and the number of image feature dimensions (Bottom). Attributes are ordered by the maximum model performance observed in the Top panel.
Fig. 4. (A) The faces judged on average to have the highest and lowest ratings along six sample perceived attribute dimensions. (B) Model-based manipulations of two sample base faces along the sample dimensions, demonstrating smooth and effective manipulations along each attribute.
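The manipulations shown in Fig. 4B can be sketched as shifting a face's latent vector along a learned attribute direction. This is a hedged illustration under a linear-model assumption; the attribute weights `w`, the base latent `z`, and the step size are all placeholders, and the real system decodes the shifted latent back to an image with a deep generative model.

```python
import numpy as np

def manipulate(z, w, alpha):
    """Shift latent vector z along attribute direction w by alpha units.

    Under a linear attribute model, the predicted rating of the result
    changes by exactly alpha * ||w|| relative to the base face.
    """
    direction = w / np.linalg.norm(w)
    return z + alpha * direction

rng = np.random.default_rng(1)
w = rng.normal(size=50)   # assumed learned attribute weights
z = rng.normal(size=50)   # assumed base-face latent vector
z_up = manipulate(z, w, 2.0)    # face nudged "up" the attribute
z_down = manipulate(z, w, -2.0) # face nudged "down" the attribute
```

Sweeping `alpha` over a range of values produces the smooth interpolation along each attribute that the figure demonstrates.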