| Literature DB >> 28973112 |
Ellen M Kok1,2, Avi M Aizenman2,3, Melissa L-H Võ4, Jeremy M Wolfe2.
Abstract
People know surprisingly little about their own visual behavior, which can be problematic when learning or executing complex visual tasks such as search of medical images. We investigated whether providing observers with online information about their eye position during search would help them recall their own fixations immediately afterwards. Seventeen observers searched for various objects in "Where's Waldo" images for 3 s. On two-thirds of trials, observers made target present/absent responses. On the other third (critical trials), they were asked to click twelve locations in the scene where they thought they had just fixated. On half of the trials, a gaze-contingent window showed observers their current eye position as a 7.5° diameter "spotlight." The spotlight "illuminated" everything fixated, while the rest of the display was still visible but dimmer. Performance was quantified as the overlap of circles centered on the actual fixations and centered on the reported fixations. Replicating prior work, this overlap was quite low (26%), far from ceiling (66%) and quite close to chance performance (21%). Performance was only slightly better in the spotlight condition (28%, p = 0.03). Giving observers information about their fixation locations by dimming the periphery improved memory for those fixations modestly, at best.Entities:
Mesh:
Year: 2017 PMID: 28973112 PMCID: PMC5627674 DOI: 10.1167/17.12.2
Source DB: PubMed Journal: J Vis ISSN: 1534-7362 Impact factor: 2.240
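The abstract's quantification step (overlap of circles centered on the actual fixations versus circles centered on the reported click locations) can be sketched in code. This is a minimal illustration, not the authors' analysis pipeline: the window radius, image dimensions, and the normalization (intersection area divided by fixation-window area) are all assumptions, since the abstract does not specify them.

```python
# Hedged sketch of a fixation/report overlap metric.
# Assumptions (not taken from the paper): coordinates are pixel (x, y)
# pairs, and "percent overlap" is the area covered by BOTH sets of
# windows, divided by the area covered by the fixation windows.
import numpy as np

def coverage_mask(points, radius, shape):
    """Boolean mask of pixels within `radius` of any (x, y) point."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for x, y in points:
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return mask

def percent_overlap(fixations, clicks, radius, shape):
    """Percentage of fixation-window area also covered by click windows."""
    fix = coverage_mask(fixations, radius, shape)
    rep = coverage_mask(clicks, radius, shape)
    return 100.0 * np.logical_and(fix, rep).sum() / fix.sum()
```

With this normalization, perfectly reported fixations yield 100%, and fully disjoint reports yield 0%; the paper's ceiling, chance, and other-observer baselines could be computed by swapping in the appropriate comparison point sets.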
Figure 1. Trial schematic: On search trials, observers were presented with a target word (in the example, “Bottle”) and then searched the image for the target for 3 s, after which they made a target present/absent response. On critical trials (one third of all trials), observers skipped the present/absent response and were instead asked to mark 12 locations they had fixated during the prior 3 s. At this point, the scene was presented without the gaze-contingent spotlight.
Figure 2. Data represent the percentage overlap between “windows” of varying radius (in degrees of visual angle) around each fixation and clicks indicating where observers thought they had looked in the image, for the spotlight (gaze-contingent) condition (dark blue) and the control condition (light blue, dotted). The overlap between one observer's fixations and where a different observer thought someone else might have looked in the same image is shown in dotted purple. The perfect-memory model represents ceiling performance (green), while the overlap with fixations from another image represents chance performance (red). Error bars represent SE.
Figure 3. Average percentage overlap between “windows” with a radius of 2.6° of visual angle around each fixation and clicks indicating where observers thought they had looked in the image, for the spotlight (gaze-contingent) condition (black and yellow) and the control condition (dark blue). The overlap between one observer's fixations and where a different observer thought someone else might have looked in the same image is shown in purple. The perfect-memory model represents ceiling performance (green), while the overlap with fixations from another image represents chance performance (red). Error bars represent standard deviations.