Jordana S. Wynn, Kelly Shen, Jennifer D. Ryan.
Abstract
Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging.
Keywords: aging; eye movements; eye tracking; gaze; memory; retrieval; vision
Year: 2019 PMID: 31735822 PMCID: PMC6802778 DOI: 10.3390/vision3020021
Source DB: PubMed Journal: Vision (Basel) ISSN: 2411-5150
Figure 1. Schematic of encoding-related eye movement effects. Adapted from Henderson, Williams, and Falk, 2005 [12]. Participants viewed, and were subsequently tested on their memory for, a series of faces. During encoding (left column), participants were presented with images of faces. In the free viewing condition (row 1, left), participants were able to move their eyes freely during learning, whereas in the fixed viewing condition (row 2, left), participants were required to maintain central fixation. During a recognition test (right), participants were presented with repeated and novel faces under free viewing conditions and were required to make an old/new recognition response. The mean percentage of correctly identified faces was significantly lower for faces encoded under the fixed viewing condition than for faces encoded under the free viewing condition, suggesting that eye movements facilitate the binding of stimulus features at encoding in support of subsequent memory.
Figure 2. Schematic comparing the predictions of the standard scanpath model (left) and the proposed gaze reinstatement model (right). Scanpath model (left): Row 1: a simplified scanpath enacted during the encoding of a line drawing of a scene. The same encoding scanpath is used to illustrate the predictions of both the scanpath model and the gaze reinstatement model (row 1, right). Row 2: the predictions of the standard scanpath model regarding retrieval-related viewing. In the present example, retrieval consists of visualization while “looking at nothing”; however, these predictions could similarly apply to repeated viewing of the stimulus. Early tests of scanpath theory used string similarity analyses to measure the similarity between encoding and retrieval fixation sequences [29,30,32]. These methods label fixations based on their location within predefined interest areas (often based on a grid, as shown here) and compute the number of transitions required to convert one scanpath into the other. Scanpath theory does not make any predictions regarding changes in scanpath reinstatement over time or with memory decline [27,28]. Row 3: the predictions of the standard scanpath model regarding the relationship between reinstatement and mnemonic performance. The scanpath model predicts that scanpath reinstatement will be positively correlated with mnemonic performance [27,28]. Gaze reinstatement model (right): Row 1: a simplified scanpath enacted during the encoding of a line drawing of a scene. This is the same scanpath that is used to illustrate the predictions of the scanpath model (top left). Row 2: the gaze reinstatement model proposes that retrieval-related viewing patterns broadly reinstate the temporal order and spatial locations of encoding-related fixations. In the present example, gaze reinstatement decreases across time.
This would be expected in the case of image recognition, wherein reinstatement declines once sufficient visual information has been gathered, e.g., [27,28,35,47], or in the case of image visualization, once the most salient parts of the image have been reinstated, e.g., [43,48]. The duration of gaze reinstatement would be expected to vary with the nature of the retrieval task (e.g., visual search, [37]). The gaze reinstatement model additionally predicts that reinstatement will be greater, and more extended in time, for older adults (OA) relative to younger adults (YA) [36,37]. Row 3: the gaze reinstatement model (right) predicts that the relationship between reinstatement and mnemonic performance is modulated by memory demands (i.e., memory for spatial, temporal, or object-object relations) and memory integrity (indexed here by age). When relational memory demands are low (A), older adults, and some low-performing younger adults, use gaze reinstatement to support mnemonic performance [36]. As demands on relational memory increase (B), the relationship between reinstatement and mnemonic performance plateaus in older adults, whereas younger adults use gaze reinstatement to support performance [36,43]. Based on findings from the compensation literature [49], we predict that once relational memory demands overwhelm older adults, gaze reinstatement will no longer be sufficient to support performance and will thus decline, whereas in younger adults the relationship between gaze reinstatement and mnemonic performance would plateau before eventually declining as well.
Figure 3. Schematic of “looking at nothing” behavior, whereby participants reinstate encoding-related eye movements during retrieval in the absence of visual input, across three task types. Row 1 depicts tasks in which participants are required to remember the relative locations of presented objects (left). During maintenance (whereby a representation is held in an active state in memory) or retrieval (right), participants’ eye movements reinstate the locations of, and spatial relations among, encoded objects, e.g., [36,43]. Row 2 depicts tasks in which participants are required to remember a complex scene presented either visually or auditorily (left). During retrieval (right), participants’ eye movements return to regions that were inspected during encoding, e.g., [48,56]. Row 3 depicts tasks in which participants are required to answer questions or make judgments about previously presented items (left). During retrieval (right), participants look at the region of the scene that previously contained the target item, even when successful task performance does not require retrieval of the previously observed spatial locations [42,45]. For within-item effects, see [40]; for words, see [46]; such effects persist even after a week-long delay [61].