Pauline Chevalier, Kyveli Kompatsiari, Francesca Ciardo, Agnieszka Wykowska.
Abstract
This article reviews methods to investigate joint attention and highlights the benefits of new methodological approaches that make use of the most recent technological developments, such as humanoid robots for studying social cognition. After reviewing classical approaches that address joint attention mechanisms with the use of controlled screen-based stimuli, we describe recent accounts that have proposed the need for more natural and interactive experimental protocols. Although the recent approaches allow for more ecological validity, they often face the challenges of experimental control in more natural social interaction protocols. In this context, we propose that the use of humanoid robots in interactive protocols is a particularly promising avenue for targeting the mechanisms of joint attention. Using humanoid robots to interact with humans in naturalistic experimental setups has the advantage of both excellent experimental control and ecological validity. In clinical applications, it offers new techniques for both diagnosis and therapy, especially for children with autism spectrum disorder. The review concludes with indications for future research, in the domains of healthcare applications and human-robot interaction in general.
Keywords: Autism; Healthy and clinical populations; Human–robot interaction; Joint attention; Review
Year: 2020 PMID: 31848909 PMCID: PMC7093354 DOI: 10.3758/s13423-019-01689-4
Source DB: PubMed Journal: Psychon Bull Rev ISSN: 1069-9384
Fig. 1Examples of classical and novel paradigms used to study joint attention. (a) A gaze-cueing paradigm with schematic faces for congruent (upper frame) and incongruent (lower frame) trials (Friesen & Kingstone, 1998). From Ciardo et al., 2018. (b) Experimental setup in a gaze-following task using avatar faces. From “Studying the Influence of Race on the Gaze Cueing Effect Using Eye Tracking Method,” by G. Y. Menshikova, A. I. Kovalev, and E. G. Luniakova, 2017, National Psychological Journal, 2, p. 50, Fig. 1. Copyright 2017 by Lomonosov Moscow State University and the Russian Psychological Society (Menshikova, Kovalev, & Luniakova, 2017). (c) Adapted gaze-cueing procedure for gaze cueing in a real-world experimental setup. From “Mental State Attribution and the Gaze Cueing Effect,” by G. G. Cole, D. T. Smith, and M. A. Atkinson, 2015, Attention, Perception, & Psychophysics, 77, Fig. 5. Copyright 2015 by the Psychonomic Society (Cole, Smith, & Atkinson, 2015). (d) Gaze-cueing task in human–robot interaction (paradigm of Kompatsiari, Ciardo, et al., 2018).
Summary of the studies examining joint attention in healthy populations, from classical to more naturalistic and recent approaches
| Agent | Authors | N | SOA (ms) | GCE Magnitude (ms) | Effect Size (Cohen's d) |
|---|---|---|---|---|---|
| Screen based / Schematic and human faces | Friesen & Kingstone (1998) | 24 | 105, 300, 600, 1,005 | 7.5 | 1.11 |
| | Schuller & Rossion | 14 | 500 | 19 | 2.26 |
| | Hietanen et al. (2006) | 52 | 200 | 19 | 0.90 |
| | Ciardo et al. (2018)a | 32 | 200 | 16 | 2.58 |
| | Dalmaso et al. | 19 | 200, 1,200 | 19 | 1.97 |
| Screen based / Avatars | Jones et al. (2010)a | 20 | 200 | 10 | 0.49 |
| | Pavan et al. (2011)b | 32 | 200 | 12 | 1.14 |
| Screen based / Robotic agent | Wiese et al. | 23 | 500 | 9 | 1.96 |
| | Wiese et al. | 46 | 500 | 9 | 1.71 |
| | Martini, Buzzell, & Wiese | 35 | 400–600 | 7 | 0.77 |
| Interactive setup / Human agent | Cole et al. | 16 | 600 | n/a | 2.94 |
| | Lachat et al. | 50 | 700–900 | 11 | 0.83 |
| Interactive setup / Robotic agent | Wykowska et al. | 34 | 600 | 13 | 1.32 |
| | Kompatsiari, Perez-Osorio, et al. (2018) | 21 | 500 | 15 | 0.73 |
| | Kompatsiari et al. | 33 | 1,000 | 18 | 1.02 |
For each study we report the sample size (N); the stimulus onset asynchrony (SOA; separated by commas when multiple SOAs were applied); the magnitude of the gaze-cueing effect (GCE; estimated as the difference in mean reaction times between invalid and valid trials; n/a = the authors did not report mean values for valid and invalid trials); and the effect size of the main validity effect (Cohen's d, estimated using the Practical Meta-Analysis Effect Size Calculator), when calculable. *We only report results of the identification task. a We only report results of Exp. 1. b We only report results of Exp. 2. c We only report results of Exp. 3.
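The two derived quantities in the table follow directly from trial-level reaction times: the GCE is the mean RT on invalid trials minus the mean RT on valid trials, and the effect size is Cohen's d. A minimal sketch of both computations (function names are illustrative; the pooled-SD form of d used here may differ slightly from the calculator cited in the note above):

```python
from statistics import mean, stdev

def gaze_cueing_effect(invalid_rts, valid_rts):
    """GCE magnitude (ms): mean RT on invalid trials minus mean RT on valid trials."""
    return mean(invalid_rts) - mean(valid_rts)

def cohens_d(invalid_rts, valid_rts):
    """Effect size of the validity effect, using the pooled sample SD."""
    n1, n2 = len(invalid_rts), len(valid_rts)
    pooled_sd = (((n1 - 1) * stdev(invalid_rts) ** 2 +
                  (n2 - 1) * stdev(valid_rts) ** 2) / (n1 + n2 - 2)) ** 0.5
    return gaze_cueing_effect(invalid_rts, valid_rts) / pooled_sd

# Hypothetical per-trial mean RTs (ms) on invalid vs. valid trials
invalid = [520, 535, 510, 540]
valid = [500, 515, 495, 520]
print(gaze_cueing_effect(invalid, valid))  # → 18.75
```

A positive GCE indicates faster responses on validly cued trials, i.e., that the agent's gaze shifted the participant's attention.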
Fig. 2 Examples of setups using robots to train and examine joint attention in children diagnosed with ASD. (a) Setup using the robot CuDDler. From “Robot-Assisted Training of Joint Attention Skills in Children Diagnosed With Autism,” by J. Kajopoulos et al., 2015, in A. Agah, J.-J. Cabibihan, A. M. Howard, M. A. Salichs, and H. He (Eds.), Social Robotics, Cham, Switzerland: Springer. Copyright 2015 by Springer International Publishing Switzerland. (b) Setup using the robot Nao. From the thesis “Impact of Sensory Preferences in Individuals With Autism Spectrum Disorder on Their Social Interaction With a Robot,” by P. Chevalier, 2016, Université Paris-Saclay. Copyright 2016 by the author.
Summary of the articles reviewed here
| Study | N | Age | Gender (M–F) | Robot | Study Design | Control With a Human Partner | Measures Used | Main Results on Joint Attention |
|---|---|---|---|---|---|---|---|---|
| Anzalone et al. | 16 ASD, 14 TD | 9.25 ASD, 8.06 TD | 13–5 ASD, 9–6 TD | Nao | Cross-sectional with control (single session) | All participants interacted first with the human and then with the robot partner | Task performance; behavioral observations; 3-D motion tracking system | The robot needs a higher level of prompting than the human partner |
| Anzalone et al., 2018, first study | 25 ASD, 12 TD | 7.94 ASD, 8.06 TD | 10–5 ASD, 10–6 TD | Nao | Cross-sectional without control (single session) | No | Behavioral metrics based on 3-D motion trackers | During joint attention with a robot, children with and without ASD show different motion and gaze patterns |
| Anzalone et al., 2018, second study | 8 ASD | 6.85 | 8–0 | Nao | Cross-sectional without control (single session) | No | Behavioral metrics based on 3-D motion trackers | After 6 months of (nonrobotic) joint attention training, the same setup as in the first study shows that the children’s motion and gaze behaviors are closer to those of typically developed children |
| Bekele et al., 2013* | 6 ASD, 6 TD | 4.7 ASD, 4.4 TD | 5–1 ASD, 4–2 TD | Nao | Cross-sectional with control (single session) | All participants interacted with the human and robot partners, with quasi-randomized order of presentation of the partner across participants | Task performance; behavioral observations | The robot partner successfully induces joint attention in children with and without ASD; the robot partner needs a higher level of prompting than a human partner |
| Boccanfuso et al. | 8 ASD | Between 3 and 6 | ? | CHARLIE | Longitudinal study (12 sessions) | Comparison between an experimental group (speech therapy + interaction with a robot) and a control group (speech therapy without robot) | Pre- and posttest with clinical questionnaires; behavioral observations | Similar patterns in the children’s joint attention performance whether trained by a robot or by a human partner; improvement of joint attention skills |
| Chevalier et al. | 11 ASD | 11.9 | 9–2 | Nao | Cross-sectional without control (single session) | No | Task performance; behavioral observations; sensory profiles | Joint attention response time seems to be driven by the sensory profiles of the participants |
| David et al. | 5 ASD | 4.2 | 4–1 | Nao | Longitudinal study (at least 16 sessions) | Each participant went through at least ~8 robot interventions and ~8 human interventions | Task performance; behavioral observation | Similar patterns in the children’s behaviors and joint attention performance whether trained by a robot or by a human partner; improvement of joint attention skills; the robot partner needs a higher level of prompting than a human partner |
| Kajopoulos et al. | 7 ASD | 4.6 | 4–3 | CuDDler | Longitudinal study (6 sessions) | No | Pre- and posttest with clinical questionnaire; task performance | Improvement of the response to joint attention; generalization of the trained skills from a robot to a human partner |
| Michaud et al.; Duquette et al. | 4 ASD | 5 | 3–1 | Tito | Longitudinal study (22 sessions) | Two participants with a human partner, two participants with a robot partner | Task performance; behavioral observation | Greater improvement in joint attention skills after training with a robot partner than with a human partner |
| Simut et al. | 30 ASD | 6.67 | 27–3 | Probo | Cross-sectional with control (single session) | All participants interacted with the human and robot partners, with randomized order of presentation of the partner across participants | Task performance; behavioral observations | Similar patterns in the children’s joint attention performance whether trained by a robot or by a human partner; improvement of joint attention skills |
| Taheri et al. | 6 ASD | 8.67 | 6–0 | Nao, Alice-R50 | Longitudinal study (12 sessions) | Yes | Pre- and posttest with clinical questionnaires; behavioral observations; task performance; interview with the participants’ parents; pre- and postintervention assessment by a clinical child psychologist | Too many games to isolate the specific effects of the therapy on joint attention or imitation, or the effect of the partner |
| Warren et al., 2013*; Zheng et al. | 6 ASD | 3.46 | 6–0 | Nao | Longitudinal study (4 sessions) | No | Task performance; behavioral observations | Improvement of joint attention skills |
| Zheng et al. | 14 ASD | 2.78 | 12–2 | Nao | Longitudinal study (4 sessions) | No | Task performance | Improvement of joint attention skills |
For each study, we report the number of participants who effectively took part in the study (N) and whether they were children diagnosed with autism spectrum disorder (ASD) or typically developed children (TD). *These studies were included in the systematic review by Pennisi et al. (2016).