Rena Bayramova, Irene Valori, Phoebe E. McKenna-Plumley, Claudio Zandonella Callegher, Teresa Farroni.
Abstract
Past research on the advantages of multisensory input for remembering spatial information has mainly focused on memory for objects or surrounding environments. Less is known about the role of cue combination in memory for one's own body location in space. In a previous study, we investigated participants' accuracy in reproducing a rotation angle in a self-rotation task. Here, we focus on the memory aspect of the task. Participants had to rotate themselves back to a specified starting position in three different sensory conditions: a blind condition, a condition with disrupted proprioception, and a condition where both vision and proprioception were reliably available. To investigate the difference between the encoding and storage phases of remembering proprioceptive information, rotation amplitude and recall delay were manipulated. The task was completed in a real testing room and in immersive virtual reality (IVR) simulations of the same environment. We found that proprioceptive accuracy is lower when vision is not available and that performance is generally less accurate in IVR. In reality conditions, the degree of rotation affected accuracy only in the blind condition, whereas in IVR it caused more errors in the blind condition and, to a lesser degree, when proprioception was disrupted. These results indicate an improvement in encoding one's own body location when vision and proprioception are optimally integrated. No reliable effect of delay was found.
Keywords: Immersive virtual reality; Memory; Multisensory integration; Proprioception; Spatial cognition
Year: 2021 PMID: 34341941 PMCID: PMC8460581 DOI: 10.3758/s13414-021-02344-8
Source DB: PubMed Journal: Atten Percept Psychophys ISSN: 1943-3921 Impact factor: 2.199
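The outcome measure throughout is the self-turn error: how far the orientation a participant rotates back to deviates from the specified starting position. As a minimal sketch of one plausible way to compute such an angular error, assuming orientations are logged in degrees (the function name and wrapping convention are illustrative, not taken from the paper):

```python
def self_turn_error(start_deg: float, reproduced_deg: float) -> float:
    """Absolute angular deviation between the starting orientation and the
    orientation the participant rotated back to, wrapped into [0, 180]."""
    diff = (reproduced_deg - start_deg) % 360.0
    return min(diff, 360.0 - diff)

# A participant asked to return to 0 deg who stops at 350 deg is
# 10 deg off, not 350 deg off.
print(self_turn_error(0.0, 350.0))  # 10.0
```

Wrapping into [0, 180] keeps the error a pure magnitude; a signed variant would be needed to distinguish undershooting from overshooting.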
Fig. 1 Experimental room, interior. The swivel chair is in the center of the room, with a protractor and a camera videotaping the protractor located beneath it. A R_VP: the swivel chair in the visuo-proprioceptive real environment; B R_V: a participant wearing the black poncho in the ‘vision’ real environment, UV light on; C R_P: a participant in complete darkness in the ‘proprioception only’ condition. A Nikon KeyMission 360 camera was used to create 360° images of the room and to build the IVR; participants therefore saw the same environment in the IVR conditions
Fig. 2 Amplitude distribution of the actual passive rotations (i.e., task difficulty; N = 48 participants; n = 1723 trials)
Fig. 3 Frequencies of the observed self-turn errors (N = 48 participants; n = 1723 trials)
Descriptive statistics. Means and standard deviations of self-turn error (in degrees) by experimental condition
| Environment / Delay | Proprioception | | Vision | | Vision + Proprioception | | Total | |
|---|---|---|---|---|---|---|---|---|
| | Mean | SD | Mean | SD | Mean | SD | Mean | SD |
| Reality | | | | | | | | |
| 0 s | 16.6 | 9.8 | 6.3 | 5.2 | 5.0 | 4.1 | 9.3 | 4.0 |
| 3 s | 17.5 | 12.1 | 5.0 | 3.9 | 5.4 | 3.4 | 9.5 | 5.6 |
| 6 s | 17.4 | 11.1 | 5.4 | 3.4 | 5.8 | 5.5 | 9.5 | 4.6 |
| Total | 17.2 | 7.3 | 5.6 | 2.9 | 5.4 | 3.1 | 9.4 | 3.0 |
| IVR | | | | | | | | |
| 0 s | 19.1 | 10.4 | 14.7 | 7.8 | 7.4 | 5.3 | 13.7 | 4.5 |
| 3 s | 17.8 | 9.9 | 15.1 | 9.7 | 11.4 | 9.6 | 14.8 | 6.1 |
| 6 s | 20.8 | 12.3 | 13.0 | 8.5 | 9.5 | 8.3 | 14.5 | 6.1 |
| Total | 19.1 | 8.1 | 14.3 | 6.6 | 9.4 | 5.9 | 14.3 | 4.9 |
| Total | | | | | | | | |
| 0 s | 17.7 | 7.9 | 10.6 | 5.3 | 6.1 | 3.5 | 11.5 | 3.2 |
| 3 s | 18.2 | 10.8 | 10.1 | 5.3 | 8.5 | 5.2 | 12.1 | 4.8 |
| 6 s | 19.0 | 8.6 | 9.4 | 4.8 | 7.7 | 5.1 | 12.0 | 3.9 |
| Total | 18.3 | 6.5 | 10.0 | 3.8 | 7.4 | 3.4 | 11.9 | 3.3 |
Note: IVR = immersive virtual reality. N = 48 participants; n = 1723 trials
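Cell statistics like those above can be reproduced from trial-level data with a standard grouped aggregation. A hedged sketch, assuming a hypothetical pandas data frame whose column names are invented for illustration (the paper's actual data layout may differ):

```python
import pandas as pd

# Toy trial-level records; real data would have one row per trial.
trials = pd.DataFrame({
    "environment": ["Reality", "Reality", "IVR", "IVR"],
    "perception": ["Vision", "Proprioception", "Vision", "Proprioception"],
    "delay_s": [0, 3, 0, 6],
    "error_deg": [6.3, 17.5, 14.7, 20.8],
})

# Mean and SD of self-turn error per environment x delay x perception cell,
# mirroring the layout of the descriptive table above.
descriptives = (
    trials.groupby(["environment", "delay_s", "perception"])["error_deg"]
          .agg(["mean", "std"])
)
print(descriptives)
```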
Fig. 4 Predicted mean self-turn error according to Amplitude in reality and in IVR (N = 48 participants; n = 1723 trials). The line represents the mean value; the shaded area represents the 95% BCI (Bayesian credible interval)
Predicted self-turn error mean according to experimental conditions
| Environment | Amplitude | Perception | Mean error (°) | 95% BCI lower | 95% BCI upper |
|---|---|---|---|---|---|
| Reality | 90° | Proprioception | 11.99 | 10.42 | 13.79 |
| Reality | 90° | Vision | 5.38 | 4.56 | 6.29 |
| Reality | 90° | Vision + Proprioception | 5.32 | 4.48 | 6.26 |
| Reality | 180° | Proprioception | 26.40 | 22.25 | 31.19 |
| Reality | 180° | Vision | 5.67 | 4.60 | 6.90 |
| Reality | 180° | Vision + Proprioception | 5.37 | 4.01 | 6.35 |
| IVR | 90° | Proprioception | 11.89 | 10.24 | 13.74 |
| IVR | 90° | Vision | 8.57 | 7.35 | 9.93 |
| IVR | 90° | Vision + Proprioception | 7.08 | 6.01 | 8.29 |
| IVR | 180° | Proprioception | 29.17 | 24.81 | 34.09 |
| IVR | 180° | Vision | 21.44 | 18.33 | 24.93 |
| IVR | 180° | Vision + Proprioception | 11.30 | 9.64 | 13.23 |
Note: IVR = immersive virtual reality; BCI = Bayesian credible interval. N = 48 participants; n = 1723 trials
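A 95% BCI (Bayesian credible interval) is read off the posterior distribution of the quantity of interest, commonly as the central interval between its 2.5th and 97.5th percentiles. A minimal sketch of that computation, with simulated normal draws standing in for the paper's actual posterior samples:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in posterior draws for the predicted mean error in one cell
# (e.g., Reality / 90 deg / Proprioception); in the real analysis these
# would come from the fitted Bayesian model.
posterior_draws = rng.normal(loc=12.0, scale=0.85, size=4000)

mean = posterior_draws.mean()
lower, upper = np.percentile(posterior_draws, [2.5, 97.5])
print(f"mean = {mean:.2f}, 95% BCI = [{lower:.2f}, {upper:.2f}]")
```

Unlike a frequentist confidence interval, this interval can be read directly as "the predicted mean lies in this range with 95% posterior probability," conditional on the model's assumptions.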
Fig. 5 Distributions of the predicted means of self-turn error according to Perception and Delay (N = 48 participants; n = 1723 trials)
Predicted self-turn error mean according to perception and delay conditions
| Perception | Delay | Mean error (°) | 95% BCI lower | 95% BCI upper |
|---|---|---|---|---|
| Proprioception | 0 s | 21.39 | 18.45 | 24.66 |
| Proprioception | 3 s | 20.89 | 18.03 | 24.16 |
| Proprioception | 6 s | 22.75 | 19.61 | 26.24 |
| Vision | 0 s | 12.06 | 10.34 | 13.97 |
| Vision | 3 s | 10.74 | 9.22 | 12.45 |
| Vision | 6 s | 10.31 | 8.81 | 11.98 |
| Vision + Proprioception | 0 s | 6.28 | 5.39 | 7.30 |
| Vision + Proprioception | 3 s | 8.17 | 7.05 | 9.39 |
| Vision + Proprioception | 6 s | 7.90 | 6.81 | 9.15 |
Note: BCI = Bayesian credible interval. N = 48 participants; n = 1723 trials