| Literature DB >> 28114490 |
Haojiang Ying¹, Hong Xu²
Abstract
How do we interpret the rapidly changing visual stimuli we encounter? How does our past visual experience shape our perception? Recent work has suggested that our visual system is able to interpret multiple faces presented temporally via integration or ensemble coding. Visual adaptation is widely used to probe such short-term plasticity. Here we use an adaptation paradigm to investigate whether integration or averaging of emotional faces occurs during a rapid serial visual presentation (RSVP). In four experiments, we tested whether the RSVP of distinct emotional faces could induce adaptation aftereffects and whether these aftereffects were of magnitudes similar to those of their statistically averaged face. Experiment 1 showed that the RSVP faces could generate significant facial expression aftereffects (FEAs) across happy and sad emotions. Experiment 2 revealed that the magnitudes of the FEAs from RSVP faces and their paired average faces were comparable and significantly correlated. Experiment 3 showed that the FEAs depended on the mean emotion of the face stream, regardless of variations in emotion or the temporal frequency of the stream. Experiment 4 further indicated that the emotion of the average face of the stream, but not the emotion of individual faces matched for identity to the test faces, determined the FEAs. Together, our results suggest that the visual system interprets rapidly presented faces by ensemble coding, which in turn implies the formation of a facial expression norm in face space.
Year: 2017 PMID: 28114490 DOI: 10.1167/17.1.15
Source DB: PubMed Journal: J Vis ISSN: 1534-7362 Impact factor: 2.240