| Literature DB >> 33063181 |
Oliver Herbort, Lisa-Marie Krause, Wilfried Kunde.
Abstract
Pointing is a ubiquitous means of communication. Nevertheless, observers systematically misinterpret the location indicated by pointers. We examined whether these misunderstandings result from the typically different viewpoints of pointers and observers. Participants either pointed themselves or interpreted points while assuming the pointer's or a typical observer's perspective in a virtual reality environment. The perspective had a strong effect on the relationship between pointing gestures and referents, whereas the task had only a minor influence. This suggests that misunderstandings between pointers and observers primarily result from their typically different viewpoints.
Keywords: Deictic reference; Pointing gestures; Pointing production and interpretation; Virtual reality
Year: 2020 PMID: 33063181 PMCID: PMC8062365 DOI: 10.3758/s13423-020-01823-7
Source DB: PubMed Journal: Psychon Bull Rev ISSN: 1069-9384
Fig. 1(a) A pointer who intends to indicate position A is typically believed to point at position B when watched from the side. (b-d) The screenshots show an overview of the virtual reality environment (b), the pointer perspective (c), and the observer perspective (d). The red-and-white disk either served as a referent for pointing or was used by participants to mark the pointed-at position
Fig. 2Figures plotting the mean arm azimuths against the mean referent x-positions (a) and the mean arm elevations against the mean referent y-positions (b) for each condition. Positive values indicate rightward and upward arm orientations or positions. Error bars show ± 1 SEM and are sometimes shrouded by the markers. The model predictions were derived by averaging the trial-wise extrapolation of the eye-finger vector and shoulder-finger vector. The eye position was defined as the head-mounted display position in the pointer-perspective conditions and as the point between the virtual pointer’s eyes in the observer-perspective conditions
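The model predictions described in the caption rest on a simple geometric operation: extending the eye-finger and shoulder-finger vectors until they reach the plane of the referents, then averaging the two hit points. A minimal sketch of that extrapolation, with entirely hypothetical positions and plane depth (the paper does not report these coordinates):

```python
import numpy as np

def extrapolate_to_plane(origin, fingertip, plane_z):
    """Extend the origin->fingertip ray until it crosses the
    frontoparallel plane at depth plane_z; return the (x, y) hit point."""
    direction = fingertip - origin
    t = (plane_z - origin[2]) / direction[2]  # ray parameter at the plane
    hit = origin + t * direction
    return hit[:2]

# Hypothetical single-trial positions in meters (x, y, z)
eye = np.array([0.0, 1.6, 0.0])       # head-mounted display / eye position
shoulder = np.array([0.2, 1.4, 0.0])  # shoulder position
finger = np.array([0.3, 1.5, 0.5])    # fingertip position
wall_z = 3.0                          # assumed depth of the referent plane

# Average the trial-wise extrapolations of both vectors, as in the caption
pred = (extrapolate_to_plane(eye, finger, wall_z) +
        extrapolate_to_plane(shoulder, finger, wall_z)) / 2
```

With the numbers above, the eye-finger ray lands at (1.8, 1.0) and the shoulder-finger ray at (0.8, 2.0), so the averaged prediction is (1.3, 1.5).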
Fig. 3(a–d) Charts showing the intercepts (a, b) and slopes (c, d) of the linear regressions for the horizontal and vertical dimension. The colors of the bars are matched to Fig. 2. (e–f) Charts showing how well pointing gestures can be predicted by linear regression models fitted on the same task but the other perspective, or vice versa. Note that the y-axis is compressed for negative R2s. (g–h) The charts show the mean intraindividual variability in all conditions. Error bars show ± 1 SEM
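The compressed y-axis for negative R² values in panels (e–f) reflects a property of out-of-sample prediction: when a regression fitted on one condition is evaluated on another, R² can drop below zero, meaning the cross-fitted model predicts worse than simply using the mean. A sketch with made-up azimuth data (not the study's values):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination. For predictions from a model fitted
    elsewhere, this can be negative: the predictions then explain less
    variance than the mean of y_true."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

# Hypothetical mean arm azimuths (degrees) in one condition...
azimuth = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
# ...predicted by regressions fitted on other data
good_pred = np.array([-19.0, -11.0, 1.0, 9.0, 21.0])
bad_pred = np.array([20.0, 10.0, 0.0, -10.0, -20.0])  # systematically reversed

r_squared(azimuth, good_pred)  # close to 1
r_squared(azimuth, bad_pred)   # negative: worse than predicting the mean
```

This is why the figure needs a compressed axis below zero: unlike within-sample R², cross-condition R² is unbounded below.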
Results of ANOVA on regression parameters (F, p, and η²p for each effect)

| Parameter | Task F | p | η²p | Perspective F | p | η²p | Interaction F | p | η²p |
|---|---|---|---|---|---|---|---|---|---|
| X intercept | 17.8 | < .001 | .44* | 268.2 | < .001 | .92* | 28.0 | < .001 | .55* |
| X slope | 0.2 | .660 | .01 | 686.3 | < .001 | .97* | 0.3 | .592 | .01 |
| Y intercept | 6.4 | .019 | .22 | 479.0 | < .001 | .95* | 15.1 | .001 | .40* |
| Y slope | 7.1 | .014 | .24 | 10.4 | .004 | .31* | 0.1 | .790 | .00 |
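The η²p column follows directly from the F values and the ANOVA's degrees of freedom via η²p = F·df_effect / (F·df_effect + df_error). A sketch of that conversion, using an assumed df_effect = 1 and df_error = 23 (degrees of freedom consistent with the tabled values but not stated in this record):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from an ANOVA F value and its
    degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Assumed dfs: df_effect = 1, df_error = 23 (hypothetical, not from the record)
partial_eta_squared(17.8, 1, 23)   # X-intercept task effect, ≈ .44
partial_eta_squared(268.2, 1, 23)  # X-intercept perspective effect, ≈ .92
```

Under these assumed dfs the recovered values round to the tabled η²p entries.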