Jessica Podda, Caterina Ansuini, Roberta Vastano, Andrea Cavallo, Cristina Becchio.
Abstract
Observation of others' actions has been proposed to provide a shared experience of the properties of objects acted upon. We report results suggesting that a similar form of shared experience may be gleaned from the observation of pantomimed grasps, i.e., grasps aimed at pretended objects. In a weight judgment task, participants were asked to observe a hand reaching towards and grasping either a real or an imagined glass, and to predictively judge its weight. Results indicate that participants were able to discriminate whether the to-be-grasped glass was empty, and thus light, or full, and thus heavy. This finding, worthy of further investigation, suggests that by observing others' movements we can make predictions and form expectations about the characteristics of objects that exist only in others' minds.
Keywords: Action observation; Hand kinematics; Object representation; Prediction; Shared experience
Year: 2017 PMID: 28675815 PMCID: PMC5585416 DOI: 10.1016/j.cognition.2017.06.023
Source DB: PubMed Journal: Cognition ISSN: 0010-0277
Confusion matrix from LDAs for real and pantomimed reach-to-grasp movements directed at light and heavy objects. Bold values indicate cross-validated grouped cases that were correctly classified. Actual number of observations is shown in parentheses.
| | Real reach-to-grasp movements | | | Pantomimed reach-to-grasp movements | | |
|---|---|---|---|---|---|---|
| | Light object | Heavy object | Total | Light object | Heavy object | Total |
| Light object | **91.4% (288)** | 8.6% (27) | 100% (315) | **67% (221)** | 33% (109) | 100% (330) |
| Heavy object | 10.8% (35) | **89.2% (288)** | 100% (323) | 33.4% (108) | **66.6% (215)** | 100% (323) |
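The cross-validated LDA classification summarised in the confusion matrix above can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: the record does not list the trial-level kinematic predictors, so synthetic features stand in for them, and `scikit-learn` is assumed as the implementation:

```python
# Sketch of a cross-validated linear discriminant analysis (LDA) that produces
# a light-vs-heavy confusion matrix like the one above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in for reach-to-grasp kinematic features (e.g., grip aperture,
# wrist velocity); the real predictors are not included in this record.
n_per_class = 100
light = rng.normal(0.0, 1.0, size=(n_per_class, 4))
heavy = rng.normal(0.6, 1.0, size=(n_per_class, 4))
X = np.vstack([light, heavy])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = light, 1 = heavy

# Cross-validated predictions (10-fold here; the paper reports cross-validated
# grouped cases), then a row-wise confusion matrix as counts and percentages.
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
cm = confusion_matrix(y, pred)
row_pct = cm / cm.sum(axis=1, keepdims=True) * 100  # rows sum to 100%
print(cm)
print(np.round(row_pct, 1))
```

Row-normalising the counts reproduces the table's format, where each row (actual weight) sums to 100% across the predicted classes.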
Fig. 1. Illustration of an experimental trial. Each trial started with word cues for the two weights (light versus heavy), followed by a video clip of a reach-to-grasp movement. Participants were free to respond at any time after video onset, either during video presentation or during the subsequent 3000-ms response interval. After responding, participants rated how confident they felt in their decision on a 4-point scale (from 1 = least confident to 4 = most confident).
Results from one-sample t-tests on AUC, d′, and c values. M = mean; SE = standard error; t = t statistic; d = Cohen's d; 95% CI = 95% confidence interval of the difference from the test value (0.50 for AUC, 0 for d′ and c).
| | Real reach-to-grasp movements | | | | | Pantomimed reach-to-grasp movements | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | M ± 1SE | t | p value | d | 95% CI | M ± 1SE | t | p value | d | 95% CI |
| AUC | 0.55 ± 0.01 | 4.50 | <0.001 | 0.92 | 0.02, 0.07 | 0.53 ± 0.01 | 3.29 | 0.003 | 0.67 | 0.01, 0.05 |
| d′ | 0.16 ± 0.05 | 3.31 | 0.003 | 0.67 | 0.06, 0.25 | 0.13 ± 0.05 | 2.72 | 0.012 | 0.56 | 0.03, 0.22 |
| c | 0.00 ± 0.04 | 0.05 | >0.250 | 0.01 | −0.08, 0.09 | 0.01 ± 0.03 | 0.40 | >0.250 | 0.08 | −0.06, 0.08 |
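The sensitivity (d′) and criterion (c) values reported above come from standard signal detection theory. A minimal sketch, assuming the usual equal-variance Gaussian model with d′ = z(H) − z(F) and c = −(z(H) + z(F))/2, where H and F are the hit and false-alarm rates (e.g., "heavy" responses on heavy versus light trials); the function name is illustrative, not from the paper:

```python
# Signal-detection sensitivity (d') and response criterion (c) from hit and
# false-alarm rates, under the equal-variance Gaussian model.
from scipy.stats import norm

def dprime_and_c(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = zh - zf        # sensitivity: 0 = chance discrimination
    c = -(zh + zf) / 2.0     # criterion: 0 = no response bias
    return d_prime, c

print(dprime_and_c(0.56, 0.50))
```

A d′ near 0 means the observer cannot tell the trial types apart, and a c near 0 (as in the table) means neither response is systematically favoured.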
Fig. 2. Weight discrimination from observed real and pantomimed reach-to-grasp movements. Results for AUC for real (a) and pantomimed (b) reach-to-grasp movements at the individual (grey lines) and group (black line) level. Participants were able to correctly discriminate the weight of the to-be-grasped object from the observation of both real and pantomimed movements (p < 0.001 and p = 0.003, respectively). The dashed line indicates chance-level performance (AUC = 0.50).
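The AUC measure in Fig. 2 is the area under the ROC curve for discriminating heavy from light trials, with 0.50 corresponding to chance. A hedged sketch of how such a value can be computed from graded responses, with hypothetical trial data (the record contains no trial-level responses) and `scikit-learn` assumed:

```python
# Area under the ROC curve (AUC) for weight discrimination: 0.50 = chance,
# 1.0 = perfect discrimination of heavy from light trials.
from sklearn.metrics import roc_auc_score

true_weight = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = heavy trial, 0 = light trial

# Confidence-graded "heavy" judgments (e.g., response weighted by the 4-point
# confidence rating from Fig. 1); values here are made up for illustration.
judged_heavy = [4, 3, 3, 1, 2, 1, 1, 1]

auc = roc_auc_score(true_weight, judged_heavy)
print(round(auc, 3))
```

Unlike raw accuracy, AUC is insensitive to response bias, which is why it can be reported alongside the bias-free d′ and the criterion c in the table above.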