| Literature DB >> 31856239 |
Josje Verhagen, Rianne van den Berghe, Ora Oudgenoeg-Paz, Aylin Küntay, Paul Leseman.
Abstract
Robots are used for language tutoring increasingly often, and commonly programmed to display non-verbal communicative cues such as eye gaze and pointing during robot-child interactions. With a human speaker, children rely more strongly on non-verbal cues (pointing) than on verbal cues (labeling) if these cues are in conflict. However, we do not know how children weigh the non-verbal cues of a robot. Here, we assessed whether four- to six-year-old children (i) differed in their weighing of non-verbal cues (pointing, eye gaze) and verbal cues provided by a robot versus a human; (ii) weighed non-verbal cues differently depending on whether these contrasted with a novel or familiar label; and (iii) relied differently on a robot's non-verbal cues depending on the degree to which they attributed human-like properties to the robot. The results showed that children generally followed pointing over labeling, in line with earlier research. Children did not rely more strongly on the non-verbal cues of a robot versus those of a human. Regarding pointing, children who perceived the robot as more human-like relied on pointing more strongly when it contrasted with a novel label versus a familiar label, but children who perceived the robot as less human-like did not show this difference. Regarding eye gaze, children relied more strongly on the gaze cue when it contrasted with a novel versus a familiar label, and no effect of anthropomorphism was found. Taken together, these results show no difference in the degree to which children rely on non-verbal cues of a robot versus those of a human and provide preliminary evidence that differences in anthropomorphism may interact with children's reliance on a robot's non-verbal behaviors.
Year: 2019 PMID: 31856239 PMCID: PMC6922398 DOI: 10.1371/journal.pone.0217833
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Child performing the task contrasting pointing and labeling with the robot and human experimenter.
(Written informed consent for publication has been obtained from the child’s parents).
Fig 2. Images used in the disambiguation task.
Novel objects in upper row; familiar objects in lower row.
Mean proportions and standard deviations for children’s point following after hearing a novel label or familiar label from the robot or human.
| Speaker | Novel label: M (SD) | Familiar label: M (SD) |
|---|---|---|
| Robot | 0.67 (0.47) | 0.69 (0.47) |
| Human | 0.64 (0.48) | 0.64 (0.48) |
Results of a linear mixed-effect model on children’s point following with speaker (robot vs. human) and label (novel vs. familiar) as fixed factors.
| Fixed effect | Estimate | SE | z | p |
|---|---|---|---|---|
| Intercept | 2.313 | 0.740 | 3.127 | .002 |
| Speaker | -0.782 | 0.696 | -1.124 | .261 |
| Label | -0.232 | 0.248 | -0.937 | .349 |
| Speaker × Label | -0.242 | 0.497 | -0.487 | .626 |
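As a rough, hypothetical illustration of how a model of this structure could be specified, the sketch below fits a comparable mixed-effects model in Python with statsmodels. The data frame layout and column names (child_id, follow_point, speaker, label, perception) are assumptions for illustration only, not the authors' analysis code; and because the trial-level outcome (following the point or not) is binary, a logistic mixed model would often be preferred over the linear specification shown here. The gaze-following models further below have the same structure, with gaze following as the outcome.

```python
# Hypothetical sketch (not the authors' code): a mixed-effects model of
# point following with speaker (robot vs. human) and label (novel vs.
# familiar) as fixed factors and a random intercept per child.
# Assumes a long-format data frame with one row per trial and the
# hypothetical columns: child_id, follow_point (0/1), speaker, label, perception.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("point_following_trials.csv")  # hypothetical trial-level data

model = smf.mixedlm(
    "follow_point ~ speaker * label",   # fixed effects and their interaction
    data=df,
    groups=df["child_id"],              # random intercept for each child
)
result = model.fit()
print(result.summary())
```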
Results of a linear mixed-effect model on children’s point following with speaker (robot vs. human), label (novel vs. familiar), and perception as fixed factors.
| Fixed effect | Estimate | SE | z | p |
|---|---|---|---|---|
| Intercept | 2.473 | 0.779 | 3.173 | .002 |
| Speaker | -1.020 | 0.748 | -1.362 | .173 |
| Label | -0.031 | 0.271 | -0.113 | .910 |
| Perception | 0.303 | 0.228 | 1.324 | .186 |
| Speaker × Label | -0.412 | 0.537 | -0.768 | .443 |
| Speaker × Perception | -0.200 | 0.185 | -1.077 | .281 |
| Label × Perception | 0.398 | 0.087 | 4.559 | < .001 |
| Speaker × Label × Perception | -0.295 | 0.173 | -1.706 | .088 |
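The extended model adds the perception (anthropomorphism) score and its two- and three-way interactions. Under the same hypothetical data layout as in the earlier sketch, this amounts to extending the fixed-effect formula:

```python
# Same hypothetical data frame as above; perception is the child's
# anthropomorphism score for the robot, entered with all interactions.
model_perception = smf.mixedlm(
    "follow_point ~ speaker * label * perception",
    data=df,
    groups=df["child_id"],
)
print(model_perception.fit().summary())
```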
Fig 3. Interaction effect between label and perception.
1 = low perception, scores between 0 and 8 (n = 30); 2 = high perception, scores between 9 and 12 (n = 30).
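Fig 3 splits children into low- and high-perception groups for display. A small, hypothetical sketch of that grouping, using the same assumed data frame and column names as above:

```python
# Hypothetical grouping used for plotting: perception scores 0-8 -> "low",
# 9-12 -> "high", then mean point following per group and label condition.
df["perception_group"] = pd.cut(
    df["perception"], bins=[-1, 8, 12], labels=["low", "high"]
)
print(df.groupby(["perception_group", "label"], observed=True)["follow_point"].mean())
```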
Fig 4. Child performing the task contrasting eye gaze and labeling with the robot and human experimenter.
(Written informed consent for publication has been obtained from the child’s parents).
Mean proportions and standard deviations for children’s gaze following after hearing a novel label or familiar label from a robot or a human.
| Speaker | Novel label: M (SD) | Familiar label: M (SD) |
|---|---|---|
| Robot | 0.21 (0.41) | 0.08 (0.28) |
| Human | 0.24 (0.43) | 0.11 (0.31) |
Results of a linear mixed-effect model on children’s gaze following with speaker (robot vs. human) and label (novel vs. familiar) as fixed factors.
| Fixed effect | Estimate | SE | z | p |
|---|---|---|---|---|
| Intercept | -5.067 | 1.089 | -4.655 | < .001 |
| Speaker | 1.783 | 1.504 | 1.186 | .236 |
| Label | 2.453 | 0.454 | 5.395 | < .001 |
| Speaker × Label | -0.351 | 0.916 | -0.383 | .702 |
Results of a linear mixed-effect model on children’s gaze following with speaker (robot vs. human), label (novel vs. familiar), and perception as fixed factors.
| Fixed effect | Estimate | SE | z | p |
|---|---|---|---|---|
| Intercept | -5.152 | 1.156 | -4.458 | < .001 |
| Speaker | 1.371 | 1.595 | 0.860 | .390 |
| Label | 2.582 | 0.497 | 5.200 | < .001 |
| Perception | -0.150 | 0.229 | -0.655 | .513 |
| Speaker × Label | -0.091 | 0.996 | -0.092 | .927 |
| Speaker × Perception | 0.156 | 0.230 | 0.677 | .499 |
| Label × Perception | -0.139 | 0.151 | -0.926 | .355 |
| Speaker × Label × Perception | -0.26 | 0.288 | -0.884 | .377 |