| Literature DB >> 23087622 |
Jonathan B Freeman, Kerri L Johnson, Reginald B Adams, Nalini Ambady.
Abstract
Research is increasingly challenging the claim that distinct sources of social information, such as sex, race, and emotion, are processed in a discrete fashion. Instead, there appear to be functionally relevant interactions among them. In the present article, we describe research examining how cues conveyed by the human face, voice, and body interact to form the unified representations that guide our perceptions of and responses to other people. We explain how these information sources are often thrown into interaction through bottom-up forces (e.g., phenotypic cues) as well as top-down forces (e.g., stereotypes and prior knowledge). Such interactions point to a person perception process driven by an intimate interface between bottom-up perceptual and top-down social processes. Incorporating data from neuroimaging, event-related potentials (ERP), computational modeling, computer mouse-tracking, and other behavioral measures, we discuss the structure of this interface, and we consider its implications and adaptive purposes. We argue that an increased understanding of person perception will likely require a synthesis of insights and techniques, from social psychology to the cognitive, neural, and vision sciences.
Keywords: face perception; person perception; social categorization; visual perception
Year: 2012 PMID: 23087622 PMCID: PMC3474279 DOI: 10.3389/fnint.2012.00081
Source DB: PubMed Journal: Front Integr Neurosci ISSN: 1662-5145
Figure 1. A general diagram of the dynamic interactive model. Adapted from Freeman and Ambady (2011a).
Figure 2. An instantiation of the dynamic interactive model that gives rise to category interactions driven by top-down stereotypes. Adapted from Freeman and Ambady (2011a).
Figure 3. An instantiation of the dynamic interactive model that gives rise to category interactions driven by bottom-up perceptual cues. Adapted from Freeman and Ambady (2011a).