| Literature DB >> 34229489 |
David Alais, Yiben Xu, Susan G. Wardle, Jessica Taubert.
Abstract
Facial expressions are vital for social communication, yet the underlying mechanisms are still being discovered. Illusory faces perceived in objects (face pareidolia) are errors of face detection that share some neural mechanisms with human face processing. However, it is unknown whether expression in illusory faces engages the same mechanisms as human faces. Here, using a serial dependence paradigm, we investigated whether illusory and human faces share a common expression mechanism. First, we found that images of face pareidolia are reliably rated for expression, within and between observers, despite varying greatly in visual features. Second, they exhibit positive serial dependence for perceived facial expression, meaning an illusory face (happy or angry) is perceived as more similar in expression to the preceding one, just as seen for human faces. This suggests illusory and human faces engage similar mechanisms of temporal continuity. Third, we found robust cross-domain serial dependence of perceived expression between illusory and human faces when they were interleaved, with serial effects larger when illusory faces preceded human faces than the reverse. Together, the results support a shared mechanism for facial expression between human faces and illusory faces and suggest that expression processing is not tightly bound to human facial features.
Keywords: emotion processing; facial expression; pareidolia; rapid adaptation; serial dependence
Year: 2021 PMID: 34229489 PMCID: PMC8261219 DOI: 10.1098/rspb.2021.0966
Source DB: PubMed Journal: Proc Biol Sci ISSN: 0962-8452 Impact factor: 5.349
Figure 1. (a) Example human face and illusory face stimuli used in Experiments 1 and 2. Stimuli of each face type (human, illusory) were categorized into four groups as ‘low’ or ‘high’ along the expression dimension of happy versus angry. (b) Example trial sequence for the serial dependence paradigm used in Experiments 1 and 2. On each trial, subjects rated the perceived expression of the presented face on a scale of ‘very angry’ to ‘very happy’ (with ‘neutral’ anchored at the centre) using a slider response bar. (Online version in colour.)
Figure 2. (a) Mean expression ratings averaged over the 15 participants of Experiment 1. Expression ratings were very consistent between observers and clustered into four levels, validating the four discrete expression levels of the face stimuli. Data points show group means with ±1 s.e.m. (b) Scatter plots of within-subject variability of expression ratings. Each data point shows the standard deviation of one participant's ratings of all images at a given expression level. Most points lie above the unity line, indicating more variable ratings for pareidolia images. Arrows indicate the mean standard deviation of ratings on each axis. (Online version in colour.)
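The within-subject variability analysis in figure 2b reduces to computing, per participant, the standard deviation of that participant's ratings at a given expression level, separately for human-face and pareidolia images, and comparing the two against the unity line. A minimal sketch on simulated ratings (the array shapes, means, and spreads are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ratings at one expression level: 15 participants x 10 images,
# with pareidolia ratings assumed noisier than human-face ratings.
face_ratings = rng.normal(70, 5, size=(15, 10))
pareidolia_ratings = rng.normal(70, 9, size=(15, 10))

# Within-subject variability: SD of each participant's ratings at this level.
sd_face = face_ratings.std(axis=1, ddof=1)
sd_pareidolia = pareidolia_ratings.std(axis=1, ddof=1)

# Figure 2b plots sd_pareidolia against sd_face; points above the unity
# line indicate more variable ratings for pareidolia images.
above_unity = np.mean(sd_pareidolia > sd_face)
print(f"{above_unity:.0%} of participants lie above the unity line")
```

Repeating this for each of the four expression levels yields one scatter point per participant per level, as plotted in the figure.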
Figure 3. (a) Expression ratings from Experiment 1 analysed for serial dependence between the current and the previous (i.e. one-back) trial. The data show a positive serial dependence between the current and previous trial for both face and pareidolia images. Data points show the group mean serial effect (n = 15) for face (red) and pareidolia (blue) images with error bars showing ±1 s.e.m. Continuous lines show the best-fitting DoG model (see equation (2.1)). The inset graphs plot the two parameters of the DoG model. Each column shows the mean parameter value produced from 10 000 iterations of bootstrapping, together with ±1 s.d. error bars. (b) Data from Experiment 1 analysed for serial dependence between the current and the two-back trial. Expression ratings show a smaller but still significant positive dependence on the two-back trial rating. (Online version in colour.)
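Equation (2.1) is not reproduced in this record, but serial dependence curves of this kind are conventionally fit with a first-derivative-of-Gaussian (DoG) model with two free parameters, an amplitude and a width, whose uncertainty is estimated by bootstrapping. The sketch below, on simulated data, shows how such a fit could be computed; the `dog` parameterization, parameter values, and iteration count (reduced from the paper's 10 000 for speed) are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import curve_fit

def dog(x, a, w):
    """First-derivative-of-Gaussian curve.

    x : expression difference between previous and current stimulus
    a : peak amplitude of the serial effect
    w : inverse-width parameter
    The constant c rescales the curve so that `a` equals the peak height.
    """
    c = np.sqrt(2) / np.exp(-0.5)
    return a * x * w * c * np.exp(-((w * x) ** 2))

rng = np.random.default_rng(0)

# Simulated serial effect: current-trial rating error as a function of the
# previous-minus-current expression difference (arbitrary rating units).
x = rng.uniform(-3, 3, 400)
y = dog(x, a=0.4, w=0.6) + rng.normal(0, 0.2, x.size)

# Fit the two-parameter DoG model to the pooled data.
(p_a, p_w), _ = curve_fit(dog, x, y, p0=[0.3, 0.5])

# Bootstrap the parameters: refit on resampled trials and collect estimates.
boot = []
for _ in range(500):
    idx = rng.integers(0, x.size, x.size)
    try:
        params, _ = curve_fit(dog, x[idx], y[idx], p0=[0.3, 0.5])
        boot.append(params)
    except RuntimeError:
        continue  # skip rare non-converging resamples
boot = np.array(boot)
print("amplitude: %.2f +/- %.2f" % (boot[:, 0].mean(), boot[:, 0].std()))
print("width:     %.2f +/- %.2f" % (boot[:, 1].mean(), boot[:, 1].std()))
```

The bootstrap mean ±1 s.d. for each parameter corresponds to the columns and error bars in the inset graphs of figure 3.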
Figure 4. Expression ratings from Experiment 2 analysed for serial dependence between the current and the previous (i.e. one-back) trial. Whereas images in Experiment 1 were blocked in separate conditions of all-face or all-pareidolia images, Experiment 2 used randomly interleaved sequences of both image categories. (a) The serial effect and best-fitting model calculated from consecutive images in the random sequence that were both faces (red) or both pareidolia (blue). The results closely replicate those in figure 3a from the blocked design. (b) The serial effect and model calculated from consecutive images that came from different categories, either a face followed by pareidolia (red) or pareidolia followed by a face (blue). Statistical tests comparing the amplitude and width of the best-fitting DoG model between same-category and cross-category image pairs show they do not differ. (Online version in colour.)