Cesco Willemse, Agnieszka Wykowska.
Abstract
Initiating joint attention by leading someone's gaze is a rewarding experience that facilitates social interaction. Here, we investigate this experience of leading an agent's gaze using a more realistic paradigm than traditional screen-based experiments. We used an embodied robot as our main stimulus and recorded participants' eye movements. Participants sat opposite a robot that had one of two 'identities': 'Jimmy' or 'Dylan'. Participants were asked to look at either of two objects presented on screens to the left and right of the robot. Jimmy then looked at the same object in 80% of the trials and at the other object in the remaining 20%; for Dylan, this proportion was reversed. Upon fixating on the object of their choice, participants were asked to look back at the robot's face. We found that return-to-face saccades were initiated earlier towards Jimmy when he followed the participant's gaze than when he did not; for Dylan, there was no such effect. Additional measures indicated that our participants also preferred Jimmy and liked him better. This study demonstrates (a) the potential of technological advances to examine joint attention where ecological validity meets experimental control, and (b) that social reorienting is enhanced when we initiate joint attention. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Keywords: gaze contingency; gaze leading; joint attention; mobile eyetracking; social robots
Year: 2019 PMID: 30852999 PMCID: PMC6452241 DOI: 10.1098/rstb.2018.0036
Source DB: PubMed Journal: Philos Trans R Soc Lond B Biol Sci ISSN: 0962-8436 Impact factor: 6.237
Figure 1.Trial sequence. Starting top-left: (a) The participants looked at the robot until they heard a beep. (b) They looked to the left or right object as quickly as possible. (c) iCub looked at an object (gaze-following example provided). (d) In their own time, the participants looked back at the robot's face (return-to-face saccade onset-time), upon which the robot looked at the participant again.
Figure 2.Example output of a K-means cluster classification of one participant's fixation locations in one block. Three distinct AOIs are clearly visible: left screen, iCub, right screen. All cluster outputs are available at https://osf.io/zxkwn. (Online version in colour.)
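The K-means classification of fixation locations into areas of interest (AOIs) described in the Figure 2 caption can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it is a generic pure-Python K-means over synthetic 2-D gaze coordinates, with illustrative cluster positions standing in for the left screen, the iCub face, and the right screen.

```python
import random

def kmeans(points, k=3, iters=50, seed=0):
    """Cluster 2-D fixation coordinates (x, y) into k AOIs with plain K-means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # random initial centroids
    for _ in range(iters):
        # assignment step: each fixation joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2
                              + (p[1] - centroids[c][1]) ** 2,
            )
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its cluster
        new_centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]            # keep old centroid if cluster empties
            for i, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:         # converged
            break
        centroids = new_centroids
    return centroids, clusters

# Synthetic fixations around three hypothetical AOI centres
# (left screen, robot face, right screen); jitter simulates gaze noise.
rng = random.Random(1)
aoi_centres = [(100, 300), (400, 200), (700, 300)]
fixations = [
    (cx + rng.gauss(0, 10), cy + rng.gauss(0, 10))
    for cx, cy in aoi_centres
    for _ in range(30)
]

centroids, clusters = kmeans(fixations, k=3)
```

In practice one would verify cluster quality (e.g. by inspecting the output as in Figure 2) rather than trusting a single random initialisation, since K-means can converge to poor local optima.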
Figure 3.Mean onset latencies for the return-to-face saccades for each identity and contingency in milliseconds. Error bars: ±1 s.e.m. (Online version in colour.)