Jennifer X Haensel, Matthew Danvers, Mitsuhiko Ishikawa, Shoji Itakura, Raffaele Tucciarelli, Tim J Smith, Atsushi Senju.
Abstract
Recent studies have revealed significant cultural modulations on face scanning strategies, thereby challenging the notion of universality in face perception. Current findings are based on screen-based paradigms, which offer high degrees of experimental control, but lack critical characteristics common to social interactions (e.g., social presence, dynamic visual saliency), and complementary approaches are required. The current study used head-mounted eye tracking techniques to investigate the visual strategies for face scanning in British/Irish (in the UK) and Japanese adults (in Japan) who were engaged in dyadic social interactions with a local research assistant. We developed novel computational data pre-processing tools and data-driven analysis techniques based on Monte Carlo permutation testing. The results revealed significant cultural differences in face scanning during social interactions for the first time, with British/Irish participants showing increased mouth scanning and the Japanese group engaging in greater eye and central face looking. Both cultural groups further showed more face orienting during periods of listening relative to speaking, and during the introduction task compared to a storytelling game, thereby replicating previous studies testing Western populations. Altogether, these findings point to the significant role of postnatal social experience in specialised face perception and highlight the adaptive nature of the face processing system.Entities:
Mesh:
Year: 2020 PMID: 32029826 PMCID: PMC7005015 DOI: 10.1038/s41598-020-58802-0
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
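The abstract describes data-driven analysis based on Monte Carlo permutation testing of gaze maps. The sketch below illustrates the general technique under stated assumptions: it is not the authors' code, the function names and the t-threshold are hypothetical, and the "cluster mass" here crudely sums supra-threshold pixels rather than labelling spatially connected clusters as a full implementation would.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_permutation_test(group_a, group_b, n_perm=1000, t_thresh=2.0):
    """Illustrative cluster-based Monte Carlo permutation test on
    per-participant gaze density maps (arrays of shape participants x H x W).
    Names, statistic, and thresholds are assumptions for this sketch,
    not the published implementation."""
    n_a = len(group_a)
    pooled = np.concatenate([group_a, group_b])

    def cluster_mass(a, b):
        # Pixel-wise Welch-style t statistic between the two groups
        diff = a.mean(axis=0) - b.mean(axis=0)
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                     b.var(axis=0, ddof=1) / len(b))
        t = np.divide(diff, se, out=np.zeros_like(diff), where=se > 0)
        # Crude cluster mass: summed |t| over supra-threshold pixels
        # (a full implementation would label connected clusters)
        return np.abs(t[np.abs(t) > t_thresh]).sum()

    observed = cluster_mass(group_a, group_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))   # shuffle group labels
        null[i] = cluster_mass(pooled[idx[:n_a]], pooled[idx[n_a:]])
    # Monte Carlo p-value with the standard +1 correction
    p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return observed, p_value
```

The permutation step embodies the null hypothesis that group labels are exchangeable: if culture made no difference, relabelled groups should produce cluster masses as large as the observed one.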
Medians and interquartile ranges for face fixation time (in %).
| Task | Condition | Japanese | British/Irish |
|---|---|---|---|
| Introduction | Speaking | 43.95 (32.76) | 63.89 (30.32) |
| Introduction | Listening | 84.14 (18.64) | 91.02 (22.28) |
| Storytelling | Speaking | 31.05 (34.79) | 39.77 (42.79) |
| Storytelling | Listening | 81.46 (19.54) | 80.00 (24.79) |
Medians and interquartile ranges for upper face fixation time (in %).
| Task | Condition | Japanese | British/Irish |
|---|---|---|---|
| Introduction | Speaking | 79.40 (31.72) | 58.71 (61.12) |
| Introduction | Listening | 84.66 (25.53) | 53.10 (67.12) |
| Storytelling | Speaking | 69.33 (42.66) | 57.70 (46.03) |
| Storytelling | Listening | 78.25 (32.86) | 49.26 (56.62) |
Figure 1. Descriptive and statistical gaze density difference maps. Red and blue regions indicate significantly greater scanning in Japanese and British/Irish participants, respectively. (A) Descriptive difference map for face scanning during periods of listening and (B) during periods of speaking. (C) Uncorrected t-scores (p < 0.01) indicate several gaze clusters for periods of listening, (D) as well as periods of speaking. (E) Monte Carlo permutation testing revealed significant gaze clusters for periods of listening, (F) but not for periods of speaking.
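The descriptive difference maps in Figure 1 are differences between smoothed gaze density maps. A minimal sketch of how such a map can be built, assuming fixation coordinates normalised to [0, 1] and a Gaussian kernel (grid size and smoothing width are illustrative choices, not the paper's parameters):

```python
import numpy as np

def gaze_density_map(fix_x, fix_y, shape=(64, 64), sigma=2.0):
    """Hypothetical sketch: bin fixations into a 2D histogram, then
    blur with a separable Gaussian to obtain a smooth density map.
    Coordinates are assumed normalised to [0, 1]."""
    h, w = shape
    grid = np.zeros(shape)
    for x, y in zip(fix_x, fix_y):
        grid[int(y * (h - 1)), int(x * (w - 1))] += 1
    # Separable Gaussian blur (truncated at 3 sigma)
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, grid)
    blur = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blur)
    # Normalise to a probability map when any fixations were recorded
    total = blur.sum()
    return blur / total if total > 0 else blur
```

A group difference map is then simply the mean map of one group minus the mean map of the other, pixel by pixel.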
Spearman’s rho and corresponding p-values for the relationship between AQ/LSAS scores and (upper) face fixation time.
| Condition | Measure | Japanese AQ | Japanese LSAS | British/Irish AQ | British/Irish LSAS |
|---|---|---|---|---|---|
| Speaking | Face (Intro) | −0.268 | −0.480* | −0.101 | −0.244 |
| Speaking | Face (Story) | −0.277 | −0.078 | −0.071 | −0.312 |
| Speaking | Upper face | −0.171 | −0.312 | −0.431* | 0.134 |
| Listening | Face (Intro) | 0.001 | −0.025 | −0.119 | −0.326 |
| Listening | Face (Story) | −0.310 | −0.207 | −0.004 | −0.008 |
| Listening | Upper face | 0.221 | 0.067 | −0.195 | −0.067 |
*p < 0.05.
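The correlations in the table above are Spearman's rho: the Pearson correlation computed on ranks, which makes it robust to monotonic but non-linear relationships. A minimal NumPy sketch (average ranks for ties; in practice `scipy.stats.spearmanr` also returns the p-value):

```python
import numpy as np

def _ranks(v):
    """Rank values from 1..n, averaging ranks across tied values."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v)
    ranks = np.empty(len(v))
    ranks[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        tied = v == val
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

For example, a perfectly monotone increasing relationship yields rho = 1 and a perfectly monotone decreasing one yields rho = −1, regardless of the raw values' spacing.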
Figure 2. A participant’s view of the local research assistant. Snapshot taken from the head-mounted eye tracking footage during a dyadic interaction of a participant with the local research assistant in the UK (A) or in Japan (B).
Figure 3. Regions-of-interest coding for the upper and lower face. A randomly selected frame from the scene recording showing the manually coded face region based on pre-defined guidelines, and the division of the face area into an upper and lower region.