Chang Hong Liu, Wenfeng Chen, James Ward, Nozomi Takahashi.
Abstract
Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.Entities:
Mesh:
Year: 2016 PMID: 27499252 PMCID: PMC4976339 DOI: 10.1038/srep31001
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Face matching d′ results as a function of stimulus format and exposure in the learning session of Experiment 1. Error bars represent one standard error above the means. N = 78.
Criterion results of the matching task as a function of stimulus format and exposure level in Experiment 1.

| Learning condition | Dynamic (M) | Dynamic (SD) | Static (M) | Static (SD) |
|---|---|---|---|---|
| Multiple expression | 0.03 | 0.42 | 0.03 | 0.47 |
| Single expression | 0.01 | 0.31 | −0.27 | 0.37 |
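The figures and tables above report the two standard signal-detection measures, sensitivity (d′) and response criterion (c). As a minimal sketch of how such values are typically computed from raw response counts (the paper's exact procedure is not given here; the function name, counts, and the log-linear correction are illustrative assumptions):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and criterion (c) from raw counts.

    Uses a log-linear correction (add 0.5 to each cell) so that hit or
    false-alarm rates of exactly 0 or 1 do not yield infinite z-scores.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical example: 40 hits / 10 misses, 15 false alarms / 35 correct rejections
d, c = sdt_measures(40, 10, 15, 35)  # d ≈ 1.33, c ≈ −0.15
```

A negative criterion, as in several cells of the tables, indicates a liberal response bias (a tendency to respond "old"/"same"), while d′ indexes discrimination independently of that bias.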
Figure 2. Face recognition d′ results as a function of stimulus format and exposure in the test session of Experiment 1. Error bars represent one standard error above the means. N = 78.
Criterion results of the recognition task as a function of stimulus format and exposure level in Experiment 1.

| Learning condition | Dynamic (M) | Dynamic (SD) | Static (M) | Static (SD) |
|---|---|---|---|---|
| Multiple expression | −0.17 | 0.60 | −0.35 | 0.48 |
| Single expression | −0.20 | 0.62 | −0.22 | 0.46 |
Criterion results of the matching task as a function of stimulus format and exposure level in Experiment 2.

| Face view | Dynamic (M) | Dynamic (SD) | Static (M) | Static (SD) |
|---|---|---|---|---|
| Frontal | 0.10 | 0.33 | 0.14 | 0.42 |
| Three-quarter | 0.03 | 0.29 | 0.28 | 0.35 |
Figure 3. Results of d′ as a function of learn view and test view in Experiment 2. Error bars represent one standard error above the means. N = 101.
Criterion results of the recognition task as a function of stimulus format, learn view, and test view in Experiment 2.

| Learn view | Test view | Dynamic (M) | Dynamic (SD) | Static (M) | Static (SD) |
|---|---|---|---|---|---|
| Frontal | Frontal | −0.29 | 0.61 | −0.33 | 0.64 |
| Frontal | Three-quarter | 0.13 | 0.52 | 0.36 | 0.58 |
| Three-quarter | Frontal | −0.01 | 0.38 | 0.22 | 0.48 |
| Three-quarter | Three-quarter | −0.22 | 0.52 | −0.22 | 0.49 |