Roger Johansson, Marcus Nyström, Richard Dewhurst, Mikael Johansson.
Abstract
When we bring to mind something we have seen before, our eyes spontaneously unfold in a sequential pattern strikingly similar to that made during the original encounter, even in the absence of supporting visual input. Oculomotor movements of the eye may then serve the opposite purpose of acquiring new visual information; they may serve as self-generated cues, pointing to stored memories. Over 50 years ago, Donald Hebb, the forefather of cognitive neuroscience, posited that such a sequential replay of eye movements supports our ability to mentally recreate visuospatial relations during episodic remembering. However, direct evidence for this influential claim is lacking. Here we isolate the sequential properties of spontaneous eye movements during encoding and retrieval in a pure recall memory task and capture their encoding-retrieval overlap. Critically, we show that the fidelity with which a series of consecutive eye movements from initial encoding is sequentially retained during subsequent retrieval predicts the quality of the recalled memory. Our findings provide direct evidence that such scanpaths are replayed to assemble and reconstruct spatio-temporal relations as we remember and further suggest that distinct scanpath properties differentially contribute depending on the nature of the goal-relevant memory.
Keywords: episodic memory; eye movements; reinstatement; replay; scanpaths
Year: 2022 PMID: 35703049 PMCID: PMC9198773 DOI: 10.1098/rspb.2022.0964
Source DB: PubMed Journal: Proc Biol Sci ISSN: 0962-8452 Impact factor: 5.530
Figure 1. Encoding and recollection of scenes and object arrangements. (a) Example of stimuli images (scenes: studio, waterfalls, city street, office; object arrangements: vegetables, cookies, bathroom things, Lego). (b) Experimental design of the encoding phase, recall phase and surprise test. (Online version in colour.)
Table 1. Mean values for the performance data during the recall phase and the surprise test, with standard deviations in brackets. The mnemonic content score represents the mean of the three subjective ratings (recollection strength, vividness, spatial accuracy). Only correct trials were considered for confidence, response time and gaze transitions between options (a measure of choice certainty [45]).
| | total | scenes | object arrangements |
|---|---|---|---|
| *performance data* | | | |
| *recall phase* | | | |
| mnemonic content score (%) | 58 (30) | 66 (26) | 49 (31) |
| *surprise test* | | | |
| accuracy (%) | 85 (36) | 95 (22) | 75 (43) |
| response time (ms) | 7699 (6746) | 5583 (5672) | 9815 (7065) |
| confidence (%) | 78 (29) | 90 (18) | 66 (33) |
| gaze transitions between options | 10.8 (7.4) | 8.5 (5.2) | 13.2 (8.4) |
Figure 2. Illustrations of the method to capture sequential encoding-recollection similarity. (a) Overview of the MultiMatch scanpath similarity analysis. In the first panel (i), the scanpaths from encoding and recall to be compared are shown, where fixations are represented as dots and saccades as arrows between the fixations, and where larger dots represent longer fixation durations. In the second panel (ii), the basic principle behind the temporal alignment of the two scanpaths is illustrated. In the matrix on the left, each saccadic vector during encoding (E1–E3) is compared to each saccadic vector during recall (R1–R3) according to their shape. Using the Dijkstra algorithm [50], the optimal temporal alignment between the two scanpaths is then computed as the minimal cost (shortest path) from the upper left corner to the bottom right corner of the comparison matrix. All possible paths, along with the cost for each transition (ω) between the matrix elements, are outlined in the figure on the right. The minimal-cost shortest path in this example (E1R1 to E2R2 to E3R3) is highlighted.
Finally, the temporally aligned scanpaths from encoding and recall are shown superimposed in Euclidean space (iii) to illustrate how sequential encoding-recollection similarity (SERS) can be calculated for each individual fixation and saccade pairing over the five MultiMatch dimensions. SERS for the complete scanpaths is then quantified as the average similarity over all the temporally aligned saccade (encoding: ES1–ES4; recall: RS1–RS4) or fixation (encoding: EF1–EF4; recall: RF1–RF4) pairings. (b) An illustration of the five MultiMatch dimensions of fixation position, fixation duration, saccade shape, saccade direction and saccade length for the temporally aligned fixation pair EF3–RF3 and saccade pair ES3–RS3. The numeric difference in SERS for each dimension is illustrated with a separate dotted line. The fixation dimension of position relies on spatial coordinates in absolute space and quantifies how similar temporally aligned fixations are with respect to Euclidean distances, thus representing a similarity measure of fixation order. In contrast, the saccade dimensions of shape, direction and length rely on differences in relative space. The shape dimension quantifies how similar temporally aligned saccadic vectors are in overall geometric shape. The direction dimension quantifies how similar temporally aligned saccadic vectors are in geometric angle, thus representing a similarity measure of the particular heading of eye movements. The length dimension quantifies how similar temporally aligned saccades are in their absolute amplitude, irrespective of shape and direction. The fixation dimension of duration does not rely on any spatial coordinates and quantifies how similar temporally aligned fixations are in their duration.
(c) Examples of varying SERS: (i) complete SERS with respect to all five MM dimensions; (ii) relatively high SERS in shape, but during recall there is a dislocation in absolute space, large dissimilarities in saccadic angles, overall shorter saccades and overall longer fixation durations, so the SERS in all other MM dimensions is relatively low; (iii) high SERS in direction, but during recall there are dislocations in absolute space, disproportional saccadic lengths and overall longer fixation durations, so the SERS in all other MM dimensions is relatively low; (iv) low SERS over all five MM dimensions. (Online version in colour.)
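The temporal alignment described for panel (a-ii) of figure 2 can be sketched in Python: build a comparison matrix of shape differences between the saccadic vectors of the two scanpaths, then run Dijkstra from the top-left to the bottom-right element. This is a minimal illustration, not the authors' implementation; the allowed transitions (right, down, diagonal) are assumed from the figure, and all names are illustrative.

```python
import heapq
import math

def vector_shape_diff(e, r):
    # Shape difference between two saccade vectors (dx, dy):
    # the length of their vector difference.
    return math.hypot(e[0] - r[0], e[1] - r[1])

def align_scanpaths(enc, rec):
    """Temporally align two saccade-vector sequences by finding the
    minimal-cost (shortest) path through the pairwise comparison
    matrix with Dijkstra, from the upper-left to the bottom-right
    corner. Returns the alignment path and its total cost."""
    m, n = len(enc), len(rec)
    cost = [[vector_shape_diff(e, r) for r in rec] for e in enc]
    # Assumed transitions: right (advance recall), down (advance
    # encoding), diagonal (advance both, pairing the saccades).
    moves = [(0, 1), (1, 0), (1, 1)]
    dist = {(0, 0): cost[0][0]}
    prev = {}
    heap = [(cost[0][0], (0, 0))]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == (m - 1, n - 1):
            break
        if d > dist[(i, j)]:
            continue  # stale queue entry
        for di, dj in moves:
            ni, nj = i + di, j + dj
            if ni < m and nj < n:
                nd = d + cost[ni][nj]
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(heap, (nd, (ni, nj)))
    # Walk back through the predecessors to recover the alignment.
    path, node = [], (m - 1, n - 1)
    while node != (0, 0):
        path.append(node)
        node = prev[node]
    path.append((0, 0))
    return path[::-1], dist[(m - 1, n - 1)]
```

For two identical scanpaths the shortest path runs along the diagonal at zero cost, matching the highlighted E1R1 to E2R2 to E3R3 example in the figure.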
Figure 3Scanpath replay over the five MM dimensions for the (a) scenes and (b) object arrangements. The measure of scanpath replay represents the difference between SERS and baseline similarity (a value greater than zero indicates scanpath replay). (c) The relationship between position replay and memory quality. (d) The relationship between shape replay and memory quality. (e) The relationship between direction replay and memory quality. Memory quality corresponds to the mnemonic content score during recall. Error bars and shaded areas denote 95% confidence intervals, *p < 0.05, **p < 0.01, ***p < 0.001. (Online version in colour.)
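The two quantities behind this figure, SERS as an average similarity over aligned pairings (figure 2) and scanpath replay as SERS minus baseline similarity, can be sketched as follows. The normalisation constants and the baseline construction here are illustrative assumptions, not the paper's actual parameters.

```python
from statistics import mean

# Hypothetical normalisation constants (not from the paper): screen
# diagonal in pixels for the spatial dimensions, a ceiling on fixation
# duration in ms, and 180 degrees as the maximal angular difference.
SCREEN_DIAG = 2203.0
MAX_DURATION = 1000.0
NORMS = {"position": SCREEN_DIAG, "shape": SCREEN_DIAG,
         "length": SCREEN_DIAG, "direction": 180.0,
         "duration": MAX_DURATION}

def sers(aligned_pairs):
    """SERS per MultiMatch dimension: the average similarity
    (1 = identical, 0 = maximally different) over all temporally
    aligned fixation/saccade pairings. Each element of
    `aligned_pairs` is a dict of per-pair raw differences."""
    return {dim: mean(1.0 - min(pair[dim] / norm, 1.0)
                      for pair in aligned_pairs)
            for dim, norm in NORMS.items()}

def scanpath_replay(sers_value, baseline_values):
    """Scanpath replay for one dimension: SERS minus a chance
    baseline (e.g. the mean similarity of the recall scanpath to
    encoding scanpaths from other trials, an assumed construction);
    values above zero indicate replay beyond chance."""
    return sers_value - mean(baseline_values)
```

A pairing with zero difference on every dimension yields SERS of 1.0 throughout, and a replay score above zero only arises when the trial's own encoding-recall similarity exceeds the baseline.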