Sandra Chiquet, Corinna S. Martarelli, Fred W. Mast.
Abstract
During recall of visual information, people tend to move their eyes even though there is nothing to see. Previous studies indicated that such eye movements are related to the spatial location of previously seen items on 2D screens, but they also showed that eye movement behavior varies significantly across individuals. The reason for these differences remains unclear. In the present study we used immersive virtual reality to investigate how individual tendencies to process and represent visual information contribute to eye fixation patterns in visual imagery of previously inspected objects in three-dimensional (3D) space. We show that participants also look back to relevant locations when they are free to move in 3D space. Furthermore, we found that looking back to relevant locations depends on individual differences in visual object imagery abilities. We suggest that object visualizers rely less on spatial information because they tend to process and represent visual information in terms of color and shape rather than in terms of spatial layout. This finding indicates that eye movements during imagery are subject to individual strategies, and the immersive setting in 3D space made individual differences more likely to unfold.
Year: 2022 PMID: 35986076 PMCID: PMC9391428 DOI: 10.1038/s41598-022-18080-4
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Schematic illustration of the virtual environment (A) during encoding and (B) during recall. During encoding, each trial started with the title, followed by the appearance of the object either in front of, to the right of, to the left of, or behind the participant. During recall, participants were cued by the title (1) to visualize the objects they had encoded before (image generation), and they evaluated a statement (true/false) (2) about visual details of the object (image inspection).
Mean (SD) and matrix of correlations [l-95% CI, u-95% CI].

| | Mean (SD) | (1) | (2) | (3) | (4) |
|---|---|---|---|---|---|
| (1) Accuracy | 0.72 (0.10) | 1 | [0.04, 0.38] | [0.05, 0.39] | −0.09 [−0.25, 0.09] |
| (2) RT | 1.74 (0.84) | | 1 | −0.13 [−0.29, 0.06] | −0.04 [−0.21, 0.14] |
| (3) Spatial | 2.75 (0.66) | | | 1 | −0.13 [−0.30, 0.04] |
| (4) Object | 3.33 (0.54) | | | | 1 |
(1) IST, Accuracy = Accuracy in the image-scanning task. (2) IST, RT = response time in the image-scanning task. (3) OSIQ, Spatial = mean score of the spatial imagery scale. (4) OSIQ, Object = mean score of the object imagery scale.
Estimates with credible intervals that do not include zero are in bold.
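The bracketed intervals in the table above are 95% confidence intervals on correlation coefficients. A standard way to obtain such intervals is the Fisher z-transform; below is a minimal Python sketch of that approach. The values `r = 0.5` and `n = 100` in the example are purely illustrative (the sample size is not reported in this excerpt), and the paper may have computed its intervals differently.

```python
import math

def fisher_ci(r, n, conf_z=1.96):
    """95% CI for a Pearson correlation via the Fisher z-transform.

    r: sample correlation, n: sample size (must exceed 3).
    Returns (lower, upper) on the correlation scale.
    """
    z = math.atanh(r)              # Fisher z-transform of r
    se = 1.0 / math.sqrt(n - 3)    # approximate standard error of z
    lo, hi = z - conf_z * se, z + conf_z * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to correlation scale

# Hypothetical example (r and n are illustrative, not taken from the paper):
lo, hi = fisher_ci(0.5, 100)
```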
Logit-transformed regression coefficients (posterior mean, standard error, 95% credible intervals) of the continuous fixation proportion as a function of area of interest, task, and object imagery scores.
| | Estimate | Est.Error | l-95% CI | u-95% CI |
|---|---|---|---|---|
| Trial (sd) | 0.04 | 0.02 | 0.00 | 0.09 |
| Participant (sd) | 0.27 | 0.03 | 0.22 | 0.33 |
| Intercept | −0.38 | 0.05 | −0.47 | −0.29 |
| phi_Intercept | 1.31 | 0.03 | 1.26 | 1.36 |
| zoi_Intercept | 0.65 | 0.02 | 0.60 | 0.69 |
| coi_Intercept | −3.53 | 0.08 | −3.69 | −3.38 |
| NC | 0.05 | | | |
| ImIn | 0.05 | | | |
| Object | 0.09 | | | |
| NC:ImIn | 0.07 | | | |
| NC:Object | 0.09 | | | |
| ImIn:Object | −0.11 | 0.11 | −0.31 | 0.09 |
| NC:ImIn:Object | −0.00 | 0.14 | −0.29 | 0.27 |
phi_Intercept = beta precision (dispersion) parameter (log transformed). zoi_Intercept = zero–one inflation. coi_Intercept = conditional one inflation. NC = non-corresponding AOI. ImIn = image inspection. Object = object imagery scores. Estimates with credible intervals not including zero are indicated in bold.
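Since the coefficients in the table above are on the logit scale (and the precision parameter on the log scale), they can be mapped back to interpretable scales with the inverse logit and the exponential. A minimal Python sketch, using the posterior-mean intercepts reported in the table:

```python
import math

def inv_logit(x):
    """Map a logit-scale value back to the (0, 1) probability scale."""
    return 1.0 / (1.0 + math.exp(-x))

# Posterior-mean intercepts from the table above:
mu  = inv_logit(-0.38)  # mean fixation proportion (beta part), ~0.41
zoi = inv_logit(0.65)   # probability of a proportion of exactly 0 or 1, ~0.66
coi = inv_logit(-3.53)  # probability that such an extreme value is 1, ~0.03
phi = math.exp(1.31)    # beta precision (dispersion) parameter, ~3.71
```

Note that these back-transformed intercepts describe the reference condition only; the predictor effects (NC, ImIn, Object, and their interactions) shift the linear predictor before the inverse-logit is applied.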
Figure 2. Posterior means and 95% credible intervals for the estimated fixation proportion per stimulus as a function of area of interest, task, and object imagery scores (centered around the grand mean).