
The eyes don't have it: Eye movements are unlikely to reflect refreshing in working memory.

Vanessa M. Loaiza, Alessandra S. Souza

Abstract

There is a growing interest in specifying the mechanisms underlying refreshing, i.e., the use of attention to keep working memory (WM) contents accessible. Here, we examined whether participants' visual fixations during the retention interval of a WM task indicate the current focus of internal attention, thereby serving as an online measure of refreshing. Eye movements were recorded while participants studied and maintained an array of colored dots followed by probed recall of one (Experiments 1A and 1B) or all (Experiment 2) of the memoranda via a continuous color wheel. Experiments 1A and 2 entailed an unfilled retention interval in which refreshing is assumed to occur spontaneously, and Experiment 1B entailed a retention interval embedded with cues prompting the sequential refreshment of a subset of the memoranda. During the retention interval, fixations revisited the locations occupied by the memoranda, consistent with a looking-at-nothing phenomenon in WM, but the pattern was only evident when placeholders were onscreen in Experiment 2, indicating that most of these fixations may largely reflect random gaze. Furthermore, spontaneous fixations did not predict recall precision (Experiments 1A and 2), even when ensuring that they did not reflect random gaze (Experiment 2). In Experiment 1B, refreshing cues increased fixations to the eventually tested target and predicted better recall precision, which interacted with an overall benefit of target fixations, such that the benefit of fixations decreased as the number of refreshing cues increased. Thus, fixations under spontaneous conditions had no credible effect on recall precision, whereas the beneficial effect of fixations under instructed refreshing conditions may indicate situations in which cues were disregarded. Consequently, we conclude that eye movements do not seem suitable as an online measure of refreshing.


Year:  2022        PMID: 35834590      PMCID: PMC9282440          DOI: 10.1371/journal.pone.0271116

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Working memory (WM) is the system that briefly holds and manipulates information in mind from moment to moment. Much work concerns the role of attention in prioritizing the most relevant contents of WM via refreshing [see 1 for a recent review]. Refreshing is considered a domain-general function that brings a representation into the focus of attention in WM, thereby improving its accessibility. A persistent quest is to find direct evidence that refreshing exists and has a functional role in attention-based maintenance in WM. One method of manipulating refreshing is to explicitly instruct participants to refresh [2-6]. For example, in their instructed-refreshing paradigm, Souza and colleagues presented a memory array (e.g., to-be-remembered colors) with one of the items later probed. During the retention interval (RI), a series of sequentially presented cues (i.e., arrows) pointed to the memory items between 0 and 2 times to prompt participants to “think of” (i.e., refresh) the cued items. A beneficial impact of refreshing was observed: items cued more often to be refreshed were better recalled from WM [4, 5], regardless of their modality (verbal or visuospatial) [7]. This paradigm reveals that instructed refreshing benefits WM. However, it does not elucidate how participants use the cues or whether the underlying process is similar to what participants spontaneously do in a typical WM paradigm without instruction. Furthermore, there is no means to measure whether all the cues were followed or how much attention they engaged. Thus, there may be substantial individual differences in instruction-following, and hence in the cues’ facilitative effect. In this paradigm, the only evidence that the cues were used was their beneficial effect on recall. If the cues did not help, it would be impossible to disentangle whether refreshing is ineffective or participants simply ignored the cues.
One means to address these issues is to consider an online yet unobtrusive measure, such as eye movements, of when and where participants focus their attention. Much work has demonstrated the link between eye movements and covert shifts of attention, especially the looking-at-nothing phenomenon [8-10]. It is often found that fixating at now-empty locations of previously presented memoranda facilitates retrieval from episodic memory [11, 12]. This supports the notion that covert shifts of attention, as reflected by fixations, have a functional purpose, perhaps because looking at the now-empty location reactivates the associated content [8]. Curiously, the looking-at-nothing phenomenon has been largely overlooked by researchers interested in refreshing despite the fact that the notion of reactivation through covert shifts of attention greatly resonates between the literatures. One recent exception investigating eye movements in visual WM has shown that participants spontaneously gazed toward the location of a tested item despite no instruction or incentive to do so [13]. Furthermore, presenting a cue after encoding to indicate which item would be tested (i.e., retro-cue) biased gaze toward the location of the retro-cued item, and the size of the gaze bias predicted response times during WM recall. These results cohere with those of the looking-at-nothing literature by suggesting that fixations may reflect covert shifts of attention. Still, it remains unclear whether fixations on now-empty locations of previously presented memoranda are functional in WM and hence could be linked to acts of refreshing, and whether spontaneous and instructed refreshing as measured by fixations yield similar WM improvements. In the current study, we investigated whether fixations indicate acts of refreshing in a classic visual WM task. If so, then we can make a more explicit link between spontaneous and instructed refreshing using fixations as an online measure of refreshing in WM. 
Participants maintained a memory array of colored dots for a probed recall test of one (Experiments 1A and 1B) or all of the dots (Experiment 2) along a continuous color wheel (Fig 1). The measure of performance was the distance between the tested and the reported color (i.e., recall error). Crucial to our question are the events during the RI. In Experiment 1A, participants simply maintained the colors during an unfilled 2.5-s or 4-s RI with no further instruction. In Experiment 1B, the same group of participants was instructed to “think of” a cued color 0, 1, or 2 times during the RI according to a series of refreshing cues (arrows) presented for either 0.5s or 1s, thereby varying cue frequency and duration, respectively. Thus, the aims of Experiments 1A and 1B were to investigate whether fixations indicate acts of refreshing in spontaneous and instructed conditions, respectively. As will become clearer further on, the results of Experiment 1B were relatively conclusive whereas those of Experiment 1A were not, and so we designed Experiment 2 to specifically address the lingering issues of Experiment 1A regarding fixations under spontaneous conditions. Specifically, during Experiment 2, a semi-circular array of 4 colored dots was displayed, followed by an RI wherein either the screen remained blank or placeholders of the to-be-remembered dots remained on screen. The design of these experiments allowed us to investigate three main research questions.
Fig 1

Example of the Task in Experiments 1A (Panel A), 1B (Panel B), and 2 (Panel C).

First, we considered the occurrence of fixations toward previously presented locations of the memoranda during the RI (i.e., looking-at-nothing). We assessed this by considering fixation rate (i.e., fixations per second) to the memoranda compared to other screen locations during encoding and the RI. Fixations during encoding are often associated with heightened attention to these items, gating their preferential encoding [14-16]. If attention continues to be engaged with the memoranda during maintenance, then fixations should be directed more often to memory locations during the RI as well [13, 17]. Given that the memoranda were evenly distributed across the array in Experiment 1A, it was difficult to clearly distinguish whether fixations in any direction reflected looking-at-nothing or just random looking. Thus, Experiment 2 more reliably associated fixations with a looking-at-nothing strategy by presenting memoranda in a randomly determined half of the screen. This allowed us to assess how often participants looked back to locations of previous memoranda versus previously empty locations. We also varied the presence of placeholders during retention to consider whether this could facilitate gaze-based rehearsal [18, 19]. If participants are biased toward looking at the locations of currently maintained items in memory, we should observe far more fixations toward locations of previously presented memoranda compared to other locations. Second, in Experiments 1A and 2 we examined whether spontaneous looking-at-nothing behavior during a blank RI predicts WM recall. If fixations reflect functional shifts of attention for refreshing, then increasing fixations toward a now-empty item location should improve its recall when this item is eventually tested.
Third, we examined whether fixations toward a cued location index the attention allocated to the cued item, thereby providing an independent predictor of cue use. Thus, in Experiment 1B, we recorded how fixations changed in relation to the presentation of refreshing cues during the RI. Consistent with previous work [4, 7], recall should improve with increasing refreshing cues. Critically, if fixations provide an online and unobtrusive measure of attentional allocation, then they should capture an additional source of individual variation in the use of the cues to guide attention to memoranda. Altogether, answering these three questions will indicate if fixations can provide a useful tool to study refreshing online, both when it occurs spontaneously as well as when instructed by the experimenter.

General method

Participants

In Experiments 1A and 1B, we collected data from 30 adults (Mage = 20.54, SD = 1.82) from the participant pool at the University of Essex who were compensated with partial course credit or £15. An additional five participants were excluded from analysis due to failure to complete the experiment (e.g., failing to pass the first eye calibration procedure after multiple attempts; n = 3) or due to experimenter error causing the loss of the eye-tracking data (n = 2). Sample size was determined based on our previous experience with these tasks and effects. In line with our pre-registration (see https://osf.io/vqd76/), we monitored the evidence for our hypotheses after the initial 30 datasets were available, with the plan to continue collecting to a maximum of 50 participants (due to resource limitations) if the evidence for the effects of interest up to that point was ambiguous. In the pre-registration of Experiment 1, we had assumed that we would use Bayes factors to decide whether the results were ambiguous. Given that we used a mixed effects approach described further on, however, we used credible intervals instead to draw inferences. Relying on Bayesian inference allows our analyses to be unbiased by changes in sampling plan after looking at the data [20]. In Experiment 2, we collected data from 19 adults (Mage = 22.95, SD = 2.76) from the participant pool at the University of Zurich who were compensated with partial course credit or 15 CHF. The data of one additional participant were excluded from analysis due to experiment failure. Experiment 2 was not pre-registered, so there is no sample size justification; our aim was to recruit a similar number of participants as Experiment 1. Due to the coronavirus pandemic, we were not able to continue with data collection. All participants provided written informed consent and were debriefed at the conclusion of the experiments. 
Experiment 1 was approved by the ethics committee at the University of Essex and Experiment 2 was conducted in accordance with University of Zurich regulations (automatic approval via an ethic self-check list). All participants self-reported normal or corrected-to-normal vision and normal color vision.

Materials

The materials for both experiments can be found on the Open Science Framework (OSF): https://osf.io/qkmtc/. Experiments 1A and 1B were programmed in Matlab using the Psychophysics and Eyelink Toolbox extensions [21–23; see http://psychtoolbox.org]. These experiments were presented on a 50x30 cm CRT monitor with a resolution of 1920x1080 at 75% illumination. Participants were seated at a distance of 80 cm from the computer screen and their heads were supported by a chinrest. Eye movements of the right eye were recorded using a desk-based SR Research Eyelink CL 1000/2000 eye-tracker sampling eye position at 1000 Hz. The eye-tracker was calibrated using a nine-point calibration procedure, which occurred before the instructions and approximately every 20 trials during the main tasks. Following successful calibration of the eye-tracker, participants received instructions for or continued with the task. In Experiment 2, the eye-tracker was a 150 Hz GazePoint device, which was connected with Matlab via the iMotions software. The experiment was presented on a 56.8 x 33.5 cm BenQ XL2430T LCD monitor with a resolution of 1920x1080 and a refresh rate of 144 Hz. The eye-tracker was connected to the same computer that presented the stimuli. Participants sat at a distance of 60 cm from the computer screen with their heads supported by a chinrest. In Experiment 2, eye movements of both eyes were tracked with a sampling rate of 150 Hz. The eye-tracker was calibrated with a nine-point calibration procedure at the beginning of the experiment only. The main task comprised remembering the colors of a set of dots and reproducing the color of a probed item using a continuous color wheel. To-be-remembered dots (dot radius = 32 pixels) were sampled randomly from a continuous color wheel of 360 possible colors forming a circle in the CIELAB color space (L = 70, a = 20, b = 38, radius = 60).
In Experiments 1A and 1B, the memory set consisted of a variable number of dots that were evenly spaced on an imaginary circle (radius = 200 pixels). In Experiment 2, the memory set consisted of four dots which were presented in four out of eight locations evenly spaced along an imaginary circle (radius = 250 pixels). The four locations were selected such that half of the screen was occupied with memoranda.
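The color-wheel sampling described above can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code (the experiments themselves ran in Matlab), and the function name `wheel_color` is ours; conversion from CIELAB coordinates to screen RGB is omitted.

```python
import math

def wheel_color(angle_deg, L=70.0, a_center=20.0, b_center=38.0, radius=60.0):
    """CIELAB coordinates of one of the 360 wheel colors.

    The wheel is a circle in the a*-b* plane at fixed lightness L,
    matching the parameters reported in the Materials section
    (L = 70, a = 20, b = 38, radius = 60).
    """
    theta = math.radians(angle_deg)
    return (L,
            a_center + radius * math.cos(theta),
            b_center + radius * math.sin(theta))

# All 360 candidate colors, one per degree of the wheel.
wheel = [wheel_color(d) for d in range(360)]
```

Keeping lightness and chroma constant along the circle means the colors differ only in hue, so the angular distance between a reported and a tested color is a clean measure of recall error.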

Procedure

Experiment 1A and 1B

Participants were tested individually in quiet booths with an experimenter present to ensure that the instructions were understood and followed and to ensure proper functioning of the eye-tracker. After completing a web-based color vision test, participants completed a set size calibration phase without the eye-tracker, followed by two sessions of a visual WM task with the eye-tracker: Experiment 1A, serving as the baseline session with no refreshing cues during the RI, and Experiment 1B in which refreshing cues were presented during the RI of the task. During the set size calibration task, participants completed 40 trials (with four practice trials) of a visual WM task wherein the set size (i.e., the number of presented to-be-remembered colored dots) was gradually adjusted to achieve a criterion level of 40° of recall error. To begin each trial, a fixation cross was displayed for 0.5s followed by a set of n colored dots presented for 1s. After a brief RI of 2.5s or 4s (half of either length, randomly intermixed), memory for one of the colored dots was tested by presenting a dark-grey disk at the location of one of the dots (i.e., probe) surrounded by a color wheel around the location of all the dots and a mouse cursor at the center of the screen. Participants moved the mouse around the color wheel to adjust the color of the probe disk to the color they remembered as being presented in that location. When they were satisfied with their answer, they pressed the left-mouse button to confirm their response, and a new trial began after a 1.5s inter-trial interval (ITI). The initial set size of the memory array was 6 items, and based on participants’ ongoing performance in the last 4 trials, the n in the following trial was adjusted. If mean recall error fell below 40°, then n was increased by 1; if it exceeded 40°, then n was decreased by 1. Set size could range between 3 and a maximum of 10 colored dots.
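The set-size staircase described above can be summarized as a single update rule. The following is an illustrative Python sketch (the task itself was programmed in Matlab), and the function name `next_set_size` is hypothetical.

```python
def next_set_size(current_n, recent_errors, criterion=40.0, n_min=3, n_max=10):
    """One step of the set-size staircase from the calibration phase.

    recent_errors: recall errors (in degrees) from the last 4 trials.
    If mean error falls below the 40-degree criterion, the task gets
    harder (n + 1); if it exceeds the criterion, easier (n - 1);
    n is clamped to the reported range of 3 to 10 dots.
    """
    mean_error = sum(recent_errors) / len(recent_errors)
    if mean_error < criterion:
        current_n += 1
    elif mean_error > criterion:
        current_n -= 1
    return max(n_min, min(n_max, current_n))
```

Converging toward the 40° criterion equates task difficulty across participants before the eye-tracked sessions begin.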
The average n in the last 20 trials was used to determine the set size used in Experiments 1A and 1B. For example, if the average n = 5.3 items, then 70% of the trials in Experiments 1A and 1B contained a set size of 5 items and 30% of the trials contained 6 items, randomly intermixed. The calibration phase was successful at adapting participants’ recall error close to 40° (M = 44.66, SD = 5.87) with a set size of about 5–6 items for most participants (M = 5.76, SD = 1.00, range = 4.15–8.40), similar to our previous work [24]. Participants next completed the baseline Experiment 1A. There were four practice trials and 100 critical trials (Fig 1A). Participants began each trial by fixating on a fixation cross; the trial started when the eye-tracker detected their fixation on the cross. If fixation was not detected within 5s, the experimenter re-positioned the camera if necessary and reminded the participants to fixate on the cross before re-starting the procedure. If it still failed, then calibration of the eye-tracker was reinstated by the experimenter. Participants were informed that they could freely move their eyes thereafter. The trials then progressed much like the set size calibration phase: a memory array of n colored dots was presented for 1s, with the n having been individually determined during the set size calibration phase. After a brief RI of 2.5s or 4s (approximately half the trials of either length, randomly intermixed), memory for one of the colored dots was tested by presenting a dark-grey probe disk in the original location of one of the dots. Participants moved the mouse cursor from the center of the screen around the continuous color wheel surrounding the location of all the dots to select the color they remembered as having been presented in that location. Note that we aimed to have 50 trials per design cell in Experiments 1A (i.e., RI) and 1B (i.e., cue duration and number of refreshings). 
However, it was not evident until late into data collection that the trials of both Experiments 1A and 1B were inadvertently unbalanced. Thus, the split of the trials across the designs was only approximately even, and the cell size varied across participants (e.g., 48 vs. 52 between two conditions). Although inconvenient, all of the principal analyses pertaining to the research questions were conducted at the trial level, and thus this posed no problem for drawing inferences. Following completion of Experiment 1A, participants were allowed a break before beginning Experiment 1B. After successfully completing the eye-tracking calibration procedure, participants received instructions to complete four practice trials and 300 critical trials (Fig 1B). As in Experiment 1A, a fixation cross appeared onscreen and required participants’ fixation before the trial progressed to the presentation of the memory array of n colored dots for 1s. Following a 0.5s interstimulus interval (ISI), three sequentially presented white arrows (i.e., refreshing cues) appeared at the center of the screen, each pointing to the location of one of the to-be-remembered dots. The refreshing cues were presented for either 0.5s or 1s, with trials of each duration randomly intermixed. Following the offset of the last refreshing cue, a final ISI of 0.5s preceded the test. Thus, the entire RI of Experiment 1B lasted either 2.5s or 4s depending on the cue duration, just as in Experiment 1A. Participants were instructed to think about the color of the dot that each arrow pointed to for as long as it was presented, with the instructions specifying that this was the most important task in the block. Participants were also explicitly informed that the cues did not predict which item would be tested.
There were two possible sequences of the refreshing cues: Assume ABC refers to three randomly selected colors from the memory array, then the refreshing cues could point to three different items (A-B-C) or two different items once and a second item twice (A-B-A). These were the only two possible sequences of refreshing cues; thus, the cues always alternated between different items, and no items were cued twice in succession. Thereafter, participants’ memory was tested as in the previous tasks, with a dark-grey probe disk in the location of one of the colors and surrounded by the continuous color wheel.
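The two admissible cue sequences can be generated as follows. This is an illustrative sketch, with `refreshing_cue_sequence` a hypothetical helper; it simply draws three distinct items at random and arranges them in the A-B-C or A-B-A pattern described above, so no item is ever cued twice in succession.

```python
import random

def refreshing_cue_sequence(memory_items, pattern):
    """Build a 3-cue sequence from randomly chosen memory items.

    pattern is "ABC" (three different items, each cued once) or
    "ABA" (item A cued twice, with B in between, so the same item
    is never cued twice in a row).
    """
    a, b, c = random.sample(memory_items, 3)
    return [a, b, c] if pattern == "ABC" else [a, b, a]
```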

Experiment 2

Participants in Experiment 2 were tested in groups of up to two at individual stations, with an experimenter present to answer questions and ensure that instructions were followed. Experiment 2 was similar in procedure to the previous experiments (Fig 1C): After receiving instructions and 12 practice trials, participants completed two blocks of 60 trials, with each trial beginning with a cross requiring the participants’ fixation before presenting a memory array of four colored dots for 1s. The fixation cross remained at the center of the screen for the duration of the trial. Four contiguous locations along an imaginary circle of 8 possible fixed spatial locations were selected such that the to-be-remembered dots were randomly clustered on the top, right, or left side of the screen (see Fig 1C). The two blocks of trials differed only in whether the memoranda disappeared entirely from the screen during the RI, as in Experiment 1A, or black outlines of the memoranda remained on the screen during the RI. The order of the blocks was counterbalanced across participants. After 4s, each of the memoranda was probed in a random order with an arrow pointing to each dot, prompting the participant to use the mouse to select the color from the continuous color wheel that they remembered for that probed location. Note that, due to experiment failure, eight participants did not fully complete the second block of trials. Once again, all of the primary analyses were conducted at the trial level, and so this issue is inconvenient but not problematic for drawing inferences.

Data pre-processing and analysis

The raw data and analysis scripts can be found on the OSF: https://osf.io/qkmtc/. The pre-processing of the raw data and the analyses were conducted in R [25]. Offline parsing of the raw eye-tracking data was conducted with the R package “saccades” [26], which applies Engbert and Kliegl’s [27] velocity-based classification algorithm. Instances where a timestamp was misprinted in the raw eye-tracking data were removed before parsing (0.00005% and 0.54% of the samples in Experiments 1 and 2, respectively). Practice trials were excluded from parsing and analysis. Although eye movements were recorded and parsed for the full trial sequences, henceforth we focus on the encoding and RI phases. Any events that were not classified as a fixation (i.e., blinks or events that were deemed “too short” or “too dispersed” artifacts) were excluded from analysis (approximately 23.1% and 19.7% of the detected events in Experiments 1 and 2, respectively). Finally, in Experiment 1 there were several instances where the program crashed and had to be re-started, resuming at the same block and trial where the crash had occurred. In these cases (0.05% of the trials), the trial where the crash occurred and the one following it were removed from analysis. During data processing, we defined a threshold radius around the area of the dots to qualify looking at the location of a given dot versus the center of the screen or anything else. This threshold was defined as a radius of 142 pixels, which is the radius around 4 dots (i.e., the minimum calibrated set size in Experiment 1 and the presented set size in Experiment 2) before touching the area around the next dot (allowing classification of 76%, 67%, and 59% of the detected fixations in Experiments 1A, 1B, and 2, respectively). We also considered a stricter threshold for both experiments, the analyses for which can be found on the OSF.
In the instances where a fixation was ambiguous (i.e., classified as fixating on two dots, 17.2% and 16.4% of fixations in Experiments 1 and 2, respectively), the dot with the minimum distance to the fixation was taken. The threshold for looking at the center of the screen was a radius of 57.5 pixels, which is the minimum radius before touching the aforementioned defined area surrounding the dots (reflecting 16%, 27%, and 23% of the detected fixations in Experiments 1A, 1B, and 2, respectively). Finally, fixations falling outside of these areas were labeled “other” (7%, 6%, and 18% for 1A, 1B and 2, respectively); in Experiment 2, these “other” areas could include stimulus locations not used during the current trial (e.g., when a trial showed the memoranda on the left, the unused center and right locations were “other” during that trial). In Experiment 1, only 14% of the total fixations were qualified as looking toward the eventually tested item during the RI. The fact that most of the fixations to the memoranda were essentially irrelevant to the eventually tested item prompted us to test all the presented items in Experiment 2 so as to have more data for analysis. We assessed the evidence for our hypotheses using Bayesian inference. Bayesian inference involves updating one’s prior beliefs about some parameters of interest in light of the observed data. For example, key parameters of interest in this work concern the effect of eye movements during the RI on WM performance and its interaction with other manipulated independent variables, such as the number of presented refreshing cues. The updated beliefs are the resulting posterior distributions of each of these parameters that serve as means to assess how confident we are that the parameters are credibly different from 0. For example, one could observe whether the interval covering 95% of the posterior distribution (i.e., the credibility interval, CI) for the effect of fixations includes 0. 
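The location-labeling rules just described (142-pixel dot radius, 57.5-pixel center radius, nearest-dot tie-break for ambiguous fixations) can be sketched as follows. This is an illustrative Python re-implementation of the logic, not the authors' R pipeline, and the coordinate conventions (origin at screen center) are our assumption.

```python
import math

def classify_fixation(fix_xy, dot_centers, center_xy=(0.0, 0.0),
                      dot_radius=142.0, center_radius=57.5):
    """Label a fixation as a dot index, "center", or "other".

    A fixation within 142 px of a dot location counts as looking at
    that dot; if it falls within the radius of two dots, the nearest
    dot wins. A fixation within 57.5 px of the screen center counts
    as "center"; everything else is "other".
    """
    dists = [math.dist(fix_xy, c) for c in dot_centers]
    hits = [i for i, d in enumerate(dists) if d <= dot_radius]
    if hits:
        return min(hits, key=lambda i: dists[i])  # nearest dot wins ties
    if math.dist(fix_xy, center_xy) <= center_radius:
        return "center"
    return "other"
```

A fixation landing just outside all thresholds falls through to "other", matching the residual category reported in the text.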
To this end, we used the brms package [28] to conduct Bayesian mixed effects models. The mixed effects approach was ideal because it allowed us to account for variability in fixations at both the participant level and the trial level, which was particularly useful for assessing the effect of fixations on WM recall. The brms package uses Stan [29] to estimate posterior distributions of the model’s parameter estimates using Monte-Carlo algorithms. An advantage of brms is that it offers different families of distributions (most relevantly, ex-Gaussian and von Mises) that can be fit to the data, as we explain further on. We applied uninformative/flat priors to the coefficients of the effects and the intercept for all the tested models. The posterior parameter estimates of all the tested models were sampled through four independent Markov chains, each comprising 2,000 iterations, with the first 1,000 warmup iterations excluded from analysis. We checked the chains for convergence via visual inspection as well as by verifying that the R̂s of the fitted models’ parameters were close to 1. Posterior predictive checks ensured appropriate model fit to the data. The analysis for our first research question concerned fixation rate (i.e., the number of detected fixations per second) to the memoranda, center, and other screen locations as a function of phase (Encoding vs. RI) and RI duration (Experiment 1; 2.5s vs. 4s) or placeholders (Experiment 2; absent vs. present). Each of these variables was manipulated within-subjects and treated as a fixed effect in the analysis, with the condition listed first in Table 1 serving as the reference. Additionally, fixation rate in Experiment 1B to either the cued locations or the eventually tested target was also considered as a function of cue duration (0.5s vs. 1s) and the number of times the cued location or eventually tested item was cued (0, 1, or 2), respectively.
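As a toy illustration of the interval-based decision rule (not of the brms/Stan estimation itself), an equal-tailed 95% credible interval can be read directly off sorted posterior draws; the function names here are ours.

```python
def credible_interval(samples, mass=0.95):
    """Equal-tailed credible interval from posterior draws.

    Returns (lower, upper) quantiles bracketing `mass` of the
    posterior; an effect is deemed credible when this interval
    excludes 0, mirroring the inference rule described in the text.
    """
    s = sorted(samples)
    lo_idx = int((1 - mass) / 2 * (len(s) - 1))
    hi_idx = int((1 - (1 - mass) / 2) * (len(s) - 1))
    return s[lo_idx], s[hi_idx]

def is_credible(samples, mass=0.95):
    lo, hi = credible_interval(samples, mass)
    return lo > 0 or hi < 0
```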
For analysis, fixation rate was aggregated across trials for each of these relevant variables for each participant. We also adjusted fixation rate according to the time allotted during the encoding (1s) and RI (2.5s or 4s) phases to ensure that the measure was comparable between these phases. Fixation rate was assumed to follow an ex-Gaussian distribution because it was positively skewed with a long right tail (see Fig 2, top panels). The effects of the factors were applied to the mean of the distribution; their effects on the additional parameters of sigma (i.e., the standard deviation of the Gaussian component) and beta (i.e., the scale of the exponential component) were not included.
Table 1

Research question 1: Mean posterior estimates [and 95% credibility intervals, CIs] of the effects of the predictors on fixation rate in each experiment.

The Condition Appearing First in the Parentheses was the Reference Variable.

Effect                                      | Experiment 1A         | Experiment 1B         | Experiment 2
Intercept                                   | 1.56 [1.46, 1.67]*    | 1.72 [1.49, 1.93]*    | 1.34 [1.13, 1.57]*
Phase (encoding vs. RI)                     | -1.00 [-1.11, -0.88]* | -0.14 [-0.34, 0.07]   | -0.50 [-0.71, -0.30]*
RI (2.5 s vs. 4 s)                          | 0.01 [-0.07, 0.08]    | 0.04 [-0.02, 0.11]    | -
Placeholders (absent vs. present)           | -                     | -                     | -0.06 [-0.20, 0.08]
Location (center vs. other)                 | -1.23 [-1.35, -1.12]* | -1.48 [-1.65, -1.30]* | -0.70 [-1.03, -0.38]*
Location (center vs. dots)                  | 4.59 [3.69, 5.52]*    | 3.40 [2.65, 4.12]*    | 1.59 [0.83, 2.34]*
Phase x RI                                  | 0.07 [-0.04, 0.19]    | -0.17 [-0.27, -0.08]* | -
Phase x Placeholders                        | -                     | -                     | -0.27 [-0.47, 0.06]
Phase x Location (center vs. other)         | 1.12 [0.98, 1.26]*    | 0.35 [0.19, 0.51]*    | 0.68 [0.39, 0.97]*
Phase x Location (center vs. dots)          | -1.45 [-2.00, -0.90]* | -1.21 [-1.75, -0.64]* | -1.11 [-1.54, -0.70]*
RI x Location (center vs. other)            | 0.01 [-0.09, 0.11]    | -0.05 [-0.13, 0.02]   | -
RI x Location (center vs. dots)             | -0.12 [-0.39, 0.15]   | 0.23 [0.03, 0.44]*    | -
Placeholders x Location (center vs. other)  | -                     | -                     | 0.09 [-0.11, 0.30]
Placeholders x Location (center vs. dots)   | -                     | -                     | 0.09 [-0.12, 0.31]
Three-way interaction (center vs. other)    | -0.09 [-0.25, 0.06]   | 0.16 [0.06, 0.27]*    | -0.07 [-0.35, 0.22]
Three-way interaction (center vs. dots)     | -0.42 [-0.71, -0.13]* | -0.62 [-0.83, -0.40]* | 1.06 [0.61, 1.49]*

Note. An asterisk denotes a credible effect (i.e., the CI does not overlap with 0). RI = retention interval. The three-way interaction included the factors of phase, location, and RI (Experiment 1) or placeholders (Experiment 2).

Fig 2

Density Plots of Fixation Rate (Fixations Per Second; Top Panels) and Recall Error (Transformed into Radians; Bottom Panels) in Experiments 1A (Panel A), 1B (Panel B), and 2 (Panel C).


The analyses concerning our second and third research questions addressed recall error as a function of fixation frequency to the eventually tested target(s) during the RI (i.e., the proportion of fixations to the target(s) out of the total detected fixations for each trial) and the independent variables of each experiment. Alternative analyses instead using proportion fixation duration (i.e., the proportion of time spent looking at the target(s) out of the total duration of detected fixations) yielded similar results to those reported. This is not surprising given that the measures were highly correlated (rs ranging from .80 to .92). These alternative analyses can be found on the OSF. The independent variable in Experiment 1A was RI (2.5s or 4s), and in Experiment 1B the number (0, 1, or 2) and duration (0.5s or 1s) of the presented cues to instruct refreshing. Experiment 2 manipulated the presence of placeholders of the memoranda during the RI (absent or present), and we also included an overall effect of test order (i.e., output position, 1 to 4) given that all of the memoranda were tested. Each of these within-subjects variables was entered as a fixed effect in the analysis, with the first level listed above serving as the reference, except the number of presented cues (Experiment 1B) and test order (Experiment 2), which, like fixation frequency during the RI (in all experiments), were treated as continuous predictors. The signed recall error (ranging from -180 to 180°) was transformed from degrees into radians in order to fit the data with a von Mises distribution, a circular normal distribution.
The effects of the variables were applied to the kappa parameter, which represents the precision of the von Mises distribution; the mean of the distribution was set to 0 given that responses were centered on the value of the tested item (see Fig 2, bottom panels). Thus, recall precision (i.e., kappa) was the outcome variable of interest for our second and third research questions. During the review process, we conducted two further alternative analyses that can be found on the OSF: First, we applied the effects of the variables to the mean of the von Mises distribution rather than to kappa, but given the clear mean of 0 shown in Fig 2, there were no credible effects on the mean. Second, we fit an ex-Gaussian distribution to recall error in degrees as the dependent variable, with the effects of the variables applied to the mean of the distribution. In most instances this model failed to converge (i.e., Ȓs greatly exceeding 1.06) even with many more iterations per chain; in the instances where the model did converge, the pattern of results was the same as with the von Mises distribution. Thus, we have presented the results of the original von Mises analysis.

Results and discussion

For the sake of brevity and coherence, we have organized the results and discussion according to our three principal research questions.

1. Is there evidence of looking-at-nothing during the RI of a visual WM task?

We first assessed whether looking-at-nothing behavior was evident by examining the fixation rate to the memoranda compared to the center or other locations during the encoding and RI phases (see Fig 3 and Table 1). As a reminder, Experiments 1 and 2 used different eye trackers that sampled at different rates (1000 Hz and 150 Hz, respectively), so the overall fixation rates should not be compared across experiments. We assessed looking-at-nothing during encoding and the RI for each individual experiment, and then considered fixations to the cued locations in Experiment 1B.
Fig 3

Research Question 1: Is There Evidence of Looking-at-Nothing During the Retention Interval (RI) of a Visual WM Task?

Fixation Rate (Fixations per Second) Was Highest for Memory Locations (Dots) Compared to Center and Other Locations During Encoding and the RI in Experiments 1A (Panel A) and 1B (Panel B). For Experiment 2, Dot Fixations During the RI Were Only Credibly Higher When Placeholders Were Present (Panel C). The Dark Lines Represent the Posterior Means and the Lighter Lines Represent Individual Participant Means. The Dots Represent the Posterior Mean and the Xs Represent the Mean of the Observed Values.

Experiments 1A and 1B

As explained previously, we separately conducted Phase x RI x Location mixed effects models on fixation rate for Experiments 1A and 1B (see Table 1, left and middle columns, and Fig 3A and 3B). Fixation rate toward the dots was greater than toward the center and other locations during both the encoding and RI phases, as reflected in the main effect of Location (center vs. dots). This pattern occurred regardless of the RI duration, although the fixation rate to the dots was greater during the 2.5s than the 4s RI (estimates = 0.47 [0.23, 0.73] and 0.52 [0.34, 0.71] for Experiments 1A and 1B, respectively), thus yielding credible three-way interactions (see Fig 3A and 3B, respectively). This is consistent with previous findings that fixation rate tends to decrease over the duration of the retention interval (Souza et al., 2020), possibly due to a reduction in novelty.

Given that fixations toward any direction on the screen could be classified as looking at the memoranda when using a circular array as in Experiment 1, Experiment 2 presented the memoranda in only half of the screen to more precisely associate fixations with the memoranda’s locations. Furthermore, to assess whether gaze support was necessary to observe looking-at-nothing, placeholders were either absent or present during the RI. Indeed, a credible three-way interaction between Phase, Placeholders, and Location indicated that fixation rate was sensitive to the placeholders during the RI. Fixation rate to the memoranda’s locations was only credibly greater than to the center (estimate = 1.63 [0.83, 2.41]) and other locations (estimate = 1.64 [0.88, 2.42]) when the placeholders were present during the RI; no credible differences between locations were observed when placeholders were absent (see Table 1, right column, and Fig 3C). 
Thus, although fixations toward the memoranda showed a similar pattern between the encoding and RI phases in Experiments 1A and 1B, this was not the case in Experiment 2, wherein the memoranda were presented in a specific region of the screen: there, the pattern held only when placeholders were presented. Since Experiments 1A and 1B did not include placeholders and their memoranda were distributed across the whole screen, fixations during the RI in Experiments 1A and 1B may largely reflect random gaze rather than looking-at-nothing.

Experiment 1B

We also investigated whether looking-at-nothing was sensitive to the instructions to refresh the memoranda in Experiment 1B (see Fig 4). We considered this in two ways. First, we examined whether fixations to the memoranda during the RI depended on the number of cues to their locations. This allowed us to assess whether participants’ fixations were directed to the cued location by the refreshing cues as they appeared during the RI. We counted fixations to the cued locations only if the cue had been presented up to that point during the RI, given the potential delay between cue onset and any subsequent change in fixations. Furthermore, we scaled fixation rate according to the number of items in the array that were not cued, cued once, or cued twice on each trial and for each participant, given that the task was individually calibrated. That is, given the A-B-C and A-B-A cue sequences, only one item can be cued twice (i.e., A in an A-B-A trial), whereas there could be one (A-B-A) or three (A-B-C) items cued once, with the remaining items of the array not cued. Scaling fixation rate according to the number of items falling in each cue-frequency category corrects for this imbalance. We observed a positive effect of cue frequency on fixation rate to the cued locations (estimate = 1.47 [1.03, 1.95]), such that fixations to the cued locations increased with increasing refreshing cues (see Fig 4A). Furthermore, there was a negative effect of cue duration that just overlapped with 0 (estimate = -0.33 [-0.66, 0.00]), such that fixation rate was lower overall for cues presented for 1s compared to 0.5s. The interaction was not credible (estimate = -0.03 [-0.35, 0.29]).
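The scaling step described above can be sketched as follows: count how many items fall into each cue-frequency category (cued 0, 1, or 2 times) on a trial, then divide each category's summed fixation rate by its item count. Function and variable names here are illustrative, not taken from the authors' analysis code.

```python
from collections import Counter

def cue_frequency_counts(cue_sequence, set_size):
    """Number of items cued 0, 1, or 2 times on a trial.

    cue_sequence: ordered labels of the cued items (e.g., ['A', 'B', 'A']).
    set_size: total number of items in the memory array.
    """
    per_item = Counter(cue_sequence)              # cues received per item
    counts = Counter(per_item.values())           # items per cue frequency
    n_cued = sum(counts.values())
    return {0: set_size - n_cued, 1: counts.get(1, 0), 2: counts.get(2, 0)}

def scaled_fixation_rate(raw_rate_by_category, cue_sequence, set_size):
    """Divide each category's raw fixation rate by its number of items.

    Categories with no items (e.g., twice-cued on an A-B-C trial) get None.
    """
    counts = cue_frequency_counts(cue_sequence, set_size)
    return {k: (raw_rate_by_category.get(k, 0.0) / counts[k]) if counts[k] else None
            for k in (0, 1, 2)}
```

For a set size of 4, an A-B-A trial has one twice-cued item, one once-cued item, and two uncued items, whereas an A-B-C trial has three once-cued items and one uncued item; without the scaling, raw rates would be inflated for the more populous categories.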
Fig 4

Research Question 1: Is Looking at Nothing Sensitive to Instructions to Refresh the Memoranda in Experiment 1B?

Fixations to the Memoranda Increased as the Presented Cues to Those Locations Within the RI Increased (Panel A). Likewise, Fixations to the Eventually Tested Target (i.e., the Predictor for Research Question 3) Increased as Cues to Its Location Increased (Panel B). The Dark Lines Represent the Posterior Means and the Lighter Lines Represent Individual Participant Means. The Xs Represent the Mean of the Observed Values.

Our second approach examined whether fixations to the eventually tested target increased with the number of times it was cued at any point during the RI. This allowed us to assess whether the fixations used to predict target recall in the next analyses correlated with the number of times the target was cued. Similar to the previous analysis, we observed a positive effect of cue frequency on fixation rate to the target (estimate = 1.64 [1.24, 2.04]), a clearer negative effect of cue duration (estimate = -0.59 [-1.01, -0.20]), and no credible interaction (estimate = 0.01 [-0.27, 0.30]; see Fig 4B).

Conclusions

The fixation rate analyses indicated that participants most often looked toward the locations of the dots during the RI across experiments, initially suggesting looking-at-nothing behavior. However, the fact that this looking behavior was specific to when the dot placeholders were presented in a specific region of the screen in Experiment 2 suggests that many of the fixations in Experiments 1A and 1B may reflect random gaze rather than looking-at-nothing. Notwithstanding, participants’ fixations during the RI of Experiment 1B tended to correlate with the number of refreshing cues to the cued and eventually tested locations. Overall, these results suggest that looking-at-nothing in a visual WM paradigm may occur more reliably when there is an on-screen scaffold to support it, such as placeholders (Experiment 2) or refreshing cues (Experiment 1B).

2. Does spontaneous looking-at-nothing during the RI of a visual WM task predict recall precision?

We next considered whether spontaneous looking-at-nothing behavior in Experiments 1A and 2 predicted recall precision (i.e., kappa; see Fig 5). Accordingly, we examined the effect of fixation frequency to the target(s) and its interaction with length of the RI (2.5s or 4s; Experiment 1A) or the presence of placeholders (absent or present; Experiment 2). Given that all the items were tested in Experiment 2, test order was also included as a covariate in its analysis.
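The predictor in these analyses is the per-trial proportion of detected fixations that land on the tested item's location. A minimal sketch of that computation, using a simple circular region of interest (the ROI radius and coordinate scheme are assumptions for illustration, not the authors' exact scoring criteria):

```python
import math

def fixation_proportion(fixations, target_xy, roi_radius):
    """Proportion of detected fixations landing within the target's ROI.

    fixations: list of (x, y) fixation coordinates detected during the RI.
    target_xy: (x, y) center of the eventually tested memorandum's location.
    roi_radius: distance counted as 'on target' -- an illustrative assumption.
    Returns 0.0 if no fixations were detected on the trial.
    """
    if not fixations:
        return 0.0
    on_target = sum(
        1 for (x, y) in fixations
        if math.hypot(x - target_xy[0], y - target_xy[1]) <= roi_radius
    )
    return on_target / len(fixations)
```

For example, two of three fixations falling within the ROI yields a proportion of 2/3 for that trial; this trial-level value is then entered as a continuous predictor of kappa.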
Fig 5

Research question 2: Does spontaneous looking-at-nothing during the retention interval (RI) predict recall precision?

Our results show no credible relation between fixations and recall precision. The Posterior Estimates of the Effects of the Predictors (Dot = Mean; Thin Line = 95% CI) in Experiment 1A (Panel A) Revealed No Credible Effect of Fixations; Model Predictions (Panel B) Showed No Credible Change in Kappa with Increases in Fixation Proportion or RI Duration. The Posterior Estimates (Panel C) Again Revealed No Credible Effect in Experiment 2; Accordingly, Model Predictions (Panel D) Reveal No Credible Increase in Kappa as a Function of Fixation Proportion and Placeholder Presence.

Experiment 1A

In Experiment 1A, there was no credible effect of length of RI on precision (estimate = 0.06 [-0.09, 0.21]), congruent with prior work showing no evidence of time-based forgetting in WM for colors [24, 30]. Most importantly, there was no credible effect of fixation frequency (estimate = 0.51 [-0.12, 1.15]), but there was a credible RI x fixations interaction (estimate = -0.77 [-1.46, -0.09]). Despite this credible interaction, follow-up analyses using the emmeans package [31] to assess the slope for each RI showed that the effect of fixations was not credible for either the 2.5s condition (estimate = 0.51 [-0.11, 1.16]) or the 4s condition (estimate = -0.26 [-0.89, 0.39]). As shown in Fig 5B, the interaction likely occurred due to a crossover of the RI conditions.

Experiment 2

In Experiment 2, there was a credible effect of test order (estimate = -0.11 [-0.16, -0.05]), congruent with prior work demonstrating detrimental effects of output interference [32]. There was no credible effect of placeholders (estimate = -0.14 [-0.36, 0.09]), and, most importantly, there was again no credible effect of fixations (estimate = 0.09 [-0.33, 0.53]) nor a credible placeholders x fixations interaction (estimate = 0.41 [-0.24, 1.10]). As evident in Fig 5D, kappa was largely similar regardless of the fixations to the tested item or placeholders during the RI. Overall, the fixations toward memory locations observed in Experiment 1A did not predict recall and hence do not seem functional. In Experiment 2, we included a control to separate random looking around from true looking-at-nothing, which only occurred credibly when placeholders were present (as shown in the analyses for Research Question 1). Even so, fixations on memoranda were not functional to memory with or without placeholders.
Furthermore, the relatively low level of fixations to the eventually tested target in Experiment 1A cannot explain these results given that all of the items were tested in Experiment 2, and still there was no impact of fixations on performance. Thus, these findings collectively suggest that fixations on memory locations during the RI are unlikely to indicate acts of spontaneous refreshing in WM.
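As a worked check on the Experiment 1A follow-up slopes: under treatment coding with the 2.5s RI as the reference (our reading of the reported coefficients, stated here as an assumption), the fixation slope at the 4s RI is simply the main effect plus the interaction, which recovers the reported -0.26.

```python
# Posterior-mean coefficients reported in the text for Experiment 1A,
# assuming treatment coding with the 2.5s RI as the reference level.
b_fixations = 0.51       # slope of fixations at the reference RI (2.5s)
b_interaction = -0.77    # RI(4s) x fixations interaction

slope_2p5s = b_fixations                  # 0.51, matching the 2.5s follow-up
slope_4s = b_fixations + b_interaction    # -0.26, matching the 4s follow-up
print(round(slope_2p5s, 2), round(slope_4s, 2))
```

The crossover visible in Fig 5B follows directly: the two conditional slopes have opposite signs, yet neither is credibly different from 0 on its own.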

3. Does looking-at-nothing during instructed refreshing in WM predict recall precision?

Experiment 1B was designed to address our final research question: whether looking-at-nothing under instructed refreshing conditions predicted recall precision from WM over and above the previously observed effects of refreshing cue frequency. Accordingly, we once again examined the effect of fixation frequency to the target in Experiment 1B, and further considered its interaction with the number of presented refreshing cues (0, 1, or 2) and their duration (0.5s or 1s; see Fig 6).
Fig 6

Research Question 3: Does Looking-at-Nothing During Instructed Refreshing Predict WM Recall Precision in Experiment 1B?

Panel A Shows Posterior Estimates for the Predictors (Dot = Mean; Thin Line = 95% CI) Indicating a Credible Effect of Number of Refreshing Cues, Cue Duration, Fixations, and Interaction of Number of Cues and Fixations. Panel B Presents Model Predictions for the Interaction of Fixation Frequency and Number of Refreshing Cues for Each Cue Duration on Kappa.

As evident in Fig 6, we replicated Souza and colleagues’ (2015, 2018) prior work showing a beneficial effect of refreshing cues, such that recall precision improved as the number of cues to refresh the tested item increased (estimate = 0.33 [0.17, 0.48]). Interestingly, there was a credible negative effect of cue duration (estimate = -0.16 [-0.28, -0.03]), such that presenting refreshing cues for a longer duration yielded worse precision. We note that this effect is unlikely to be due to time-based decay since the overall retention intervals match those used in Experiment 1A, wherein there was no credible effect of RI. Most importantly, there was a credible effect of fixation frequency (estimate = 0.59 [0.11, 1.11]) that interacted with the number of refreshing cues (estimate = -0.39 [-0.71, -0.07]). This interaction hinted at a decreasing sensitivity of kappa to fixation frequency as the number of refreshing cues increased. Follow-up analyses showed that the effect of fixations was credible for 0 refreshing cues (estimate = 0.51 [0.09, 0.96]), smaller but still credible for 1 refreshing cue (estimate = 0.31 [0.02, 0.59]), and not credible for 2 refreshing cues (estimate = 0.11 [-0.23, 0.42]). No remaining interactions were credible. In summary, these results indicate that looking-at-nothing behavior under instructed refreshing conditions is functional to WM and, interestingly, not redundant with the impact of refreshing cues. Fixations appeared to capture additional variance in recall precision that tended to be more pronounced with fewer refreshing cues.

Overall conclusions

Recording eye movements during the RI of a WM task allowed us to assess whether participants looked back to memory locations and whether this looking-at-nothing behavior was functionally related to memory recall, thereby indicating an act of refreshing. Although fixations to memory locations were evident in Experiments 1A and 1B, we could not ascertain to what extent these fixations reflected random gaze, given that the memoranda appeared in an evenly spaced circular array. Experiment 2 added a control condition: memoranda appeared in only half of the screen. In this case, fixations to memory locations were only credibly higher than to other locations if placeholders were onscreen. Although congruent with prior work showing that placeholders increase fixations [19], these findings challenge the notion that participants purposely look at nothing, particularly when literally nothing is on screen, in order to refresh the memoranda. Prior work concerning looking at nothing typically includes placeholders [9, 10], but given that most WM paradigms do not, we conclude that fixations under spontaneous settings do not reflect refreshing. In support of this disconnect between fixations and the intention to refresh memory representations, we found no association between fixations and recall precision when participants spontaneously looked at nothing (Experiments 1A and 2). When their attention was cued to refresh the memoranda (Experiment 1B), fixations occurred most frequently to cued locations. This is consistent with prior work showing that retro-cues biased gaze toward cued memory locations [13], and the current work extends this to refreshing cues, which, unlike retro-cues, do not reliably indicate the test target. Thus, fixations corresponded with direct instructions to refresh the memoranda, and fixations to the target also increased with refreshing cues. 
There was an overall benefit of fixations in Experiment 1B that was of similar magnitude to that in Experiment 1A, but this time it was credible, perhaps because the range of variability was smaller in Experiment 1B than in Experiment 1A. However, the benefit of these fixations decreased as the number of refreshing cues increased, suggesting that fixations may reflect something other than cue use. One possibility is that fixations to non-cued locations indicated some uninstructed refreshing that participants were sneaking in despite the instruction to refresh another item. In that sense, the effect of fixations on recall error in the 0-cue condition may be capturing situations in which participants consciously or unconsciously disregarded the cues. Therefore, the previously demonstrated benefit of cue frequency to WM recall [4, 7] cannot be replaced by participants’ visual fixations. One caveat is that we used centrally presented cues, and hence cue use could be associated with increases in center-screen fixations. Future studies should consider using peripheral cues to more properly assess the connection between cue use and fixations. Furthermore, the von Mises model that we used only allowed us to assess the effect of fixations on kappa (i.e., precision of recall), whereas other established models of visual working memory include additional parameters underlying recall [14, 33, 34]. It is thus possible that fixations under spontaneous or instructed refreshing conditions affect parameters other than precision; future work will be required to test this possibility. Overall, our results suggest that eye movements cannot serve as an online measure of refreshing in WM: Under spontaneous conditions (Experiments 1A and 2), participants did not clearly look at nothing unless placeholders were on screen, and even then their fixations were not functional to recall. 
Under instructed refreshing (Experiment 1B), participants tended to look toward cued locations, but the independent benefit of fixations to recall went in the opposite direction of the cues, suggesting that fixations were most beneficial in the relatively rarer instances in which participants did not look to the cued locations. These results thus suggest that eye movements are unlikely to indicate acts of refreshing in visual WM, yet they may serve as a control to assess cue use in instructed refreshing conditions.
PONE-D-22-00865
Tracking eye movements during spontaneous and instructed refreshing
PLOS ONE Dear Dr. Loaiza, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Apr 09 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, David K Sewell Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. 
The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf Dear Dr Loaiza, Thank you for submitting your manuscript to PLOS ONE (PONE-D-22-00865). I have now received two expert reviews on your submission, and I have also read through it myself. The first review is a joint review signed by Kyle Hardman and Nelson Cowan. The second reviewer has chosen to remain anonymous. As you’ll see, both reviewers found the manuscript to be clearly written and the studies to be well-designed. I fully agree with their assessments in this regard. Further, the work is on a timely and theoretically interesting topic. There are, however, serious concerns about the interpretation of some of the analyses, which I share. The main issues are as follows: 1 – Conclusions regarding Research Question 3 do not obviously follow from the data. Reviewer 2 highlights the nonmonotonic pattern of effects of fixations on precision (i.e., positive for 1 cue, but null for 2). Reviewer 1 notes there are some issues with coherence of effects across experiments, and identifies apparent discrepancies between the summaries of the analyses and the plots of the effects. 2 – It is sometimes difficult to fully the results due to how this section has been organized around research questions rather than individual experiments. Reviewer 1 notes that some of the analysis details are only implied in places. I agree that more specific information should be included throughout to improve readability. Reviewer 2 comments similarly that it is not always clear which experiment is being referred to with regards to Experiments 1a and 2 (i.e., are they being individuated or lumped together). 
While I appreciate the purpose of organizing the results as you did, I found it quite difficult to keep abreast of exactly what was being analyzed at any given moment. It may be somewhat less efficient to analyze the study results individually, I think it may be beneficial to assist the reader in following what was done. I am open to keeping the current structure, and it may be the case that more careful attention to detail will solve these issues, but my overall sense is that an experiment-by-experiment structure would be clearer. 3 – There’s a lack of clarity surrounding the data presented in Figure 3. Although you do attempt to address the apparent inconsistency in the results across the two panels in the figure, I didn’t find the explanation especially convincing. Reviewer 1 suggests a potential way to resolve this issue and more meaningfully present the fixation rate results. I think this is an excellent suggestion and encourage you to consider pursuing a “scaled” analysis of these data. In addition to these three main issues, I add a few smaller comments based on my reading of the manuscript below. 4 – For the ABA trials in Experiment 1b, did the cues always alternate between different items, or could the same item be cued twice in succession? That is, did the “ABA” trials include “AAB” and “BAA” sequences too? 5 – On page 15, you note that responses were centered on the target value. However, no data or analysis are presented to corroborate this. Can you include a plot of the aggregated data and/or could you verify the centering by fitting a von Mises with a freely estimated mean and show (for example) that the credible interval overlaps 0? 6 – The reporting of the analyses for Research Question 1 was ambiguous with regards to what parameter was being affected by the various factors outlined in Table 1. You mention that fixation rate was assumed to be ex-Gaussian distributed. 
Do the reported estimates and effects pertain to the mean, or are the variance and tau parameters also affected? This needs to be more clearly presented. 7 – This point is mentioned in detail by Reviewer 1, but I had similar thoughts regarding fixation number vs. duration. For example, are fixations to cued locations fewer in number, but longer-lasting? Analysis of fixation duration seems more important than just number of fixations for maters related to attention, as noted by Reviewer 1. There is also some precedent in the categorization literature using fixation duration (or proportion of trial time fixating an item/feature) as an assay of attention (e.g., Rehder & Hoffman, 2005, Cognitive Psychology). Rehder, B., & Hoffman, A. B. (2005). Eyetracking and selective attention in category learning. Cognitive Psychology, 51(1), 1–41. https://doi.org/10.1016/j.cogpsych.2004.11.001 Overall, I am rejecting the manuscript in its current form, but invite you to submit a revision that addresses (and/or rebuts) all of the points raised by the reviewers. Please take care when preparing your revision, as I would like to avoid a lengthy review process involving multiple rounds of revisions. I thank the reviewers for their insightful comments, and thank you for submitting your manuscript to PLOS ONE. I very much look forward to receiving your revision. Yours sincerely, David K Sewell [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly ********** 2. 
Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No Reviewer #2: I Don't Know ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: In this study, the authors examined whether fixations to memoranda locations during working memory maintenance could be interpreted as reflective of a memory refreshing process. The task was working memory for colors in which continuously-varying colors were presented at differentiated locations. 
Eye tracking was used during the task to measure eye fixations to memoranda locations. This study had three main questions and the evidence related to each question is evaluated below. The answers to two questions were reasonable, but there were serious issues with the answer to the third question. In addition, there were some general concerns with the study that are explained in sections below. Overall, the study appears to be well conducted and the method is suited to answering the research questions. The issues are mostly related to analysis. As such, the recommendation is that the authors submit a major revision that addresses the identified issues. Regards, Kyle O Hardman and Nelson Cowan Research Questions: Q1: Do participants look at locations of previously presented memoranda? (I.e. the looking-at-nothing effect.) For Q1, the authors found evidence that participants look at the locations of memoranda during the RI in Exp1A (Fig2). However, in Exp2 when location placeholders were absent, participants had little preference for fixating memoranda locations. In Exp2, fixating on memoranda locations required eye movement to a particular region of the screen, whereas in Exp1A, memoranda locations were in all directions from the initial fixation. Thus, memoranda location fixations in Exp2 are more meaningful than in Exp1A. Based on this evidence, the authors conclusion that participants do not purposefully look-at-nothing is reasonable. A subcomponent of Q1 is the question of whether cues to memoranda locations can cause fixations (cued looking-at-nothing; Exp1B). It seems like the cues do cause looking-at-nothing (Fig3), but the data in Fig3 could be analyzed better as explained in the section Experiment 1B Fixation Rate Scaling below. Spontaneous looking-at-nothing is a much better indication of a refreshing process than cued looking-at-nothing which may be more reflective of participants doing what they are told to do. 
The best answer to Q1 comes from Exp2, which provides a reasonable answer.

Q2: Does spontaneous looking-at-nothing affect recall precision? (Exp1A and Exp2)

For Q2, the authors found evidence against spontaneous looking-at-nothing providing a benefit to recall precision (Fig4). Other than an apparently spurious interaction, there was no clear evidence that looking at memoranda locations provided a benefit. The effect of fixations on precision in Exp1A neared significance, which could explain why other authors find such an effect (discussed more later). We assume that this analysis examined fixations during the retention interval only, and not fixations during encoding, but we could not find a clear statement to that effect. In revision, adding such a statement would be helpful. The authors provide a reasonable answer to Q2, but there is the limitation that fixation duration was not included in the analyses (discussed more later).

Q3: Does looking-at-nothing when cued to do so affect recall precision? (Exp1B)

More refreshing cues improved precision (Fig5B), but that is not reflective of looking-at-nothing per se: When cued to a particular memorandum, people may reallocate WM resources to the cued item without fixating the cued location. More specific to Q3, a main effect of fixations on precision was found and an interaction between the number of cues and fixations on precision was also found (Fig5A and C). There are two issues of different types with these results.

Issue 1: The main effect of fixations in Exp1B goes against the results of Exp1A, which found no effect of fixations. The effect in Exp1A is in the same direction as Exp1B and of similar magnitude, but the significance tests go in different directions (Exp1A not significant, Exp1B significant).
Maybe there is a true effect of fixations on accuracy that was not quite picked up in Exp1A, which would bring part of the answer to Q2 into doubt (although Exp2 provides the better evidence, so Q2 still has a reasonable answer). The authors might benefit from comparing the Exp1A and Exp1B results in terms of similar magnitudes rather than relying on hypothesis testing logic alone.

Issue 2: The interaction as plotted in Fig5C shows that fixations provide the most benefit to items that were cued less often. However, the figure seems inconsistent with estimates presented in the text (P20): “This interaction hinted at a decreasing sensitivity of kappa to fixation frequency as the number of refreshing cues increased: Follow-up analyses showed that the effect of fixations was not credible for 2 refreshing cues (estimate = 0.06 [-0.20, 0.33]), positive and credible for 1 refreshing cue (estimate = 0.50 [0.11, 0.90]), and positive but not credible for 0 refreshing cues (estimate = 0.27 [-0.22, 0.77]; see Figure 5C).” The sign for 2 cues disagrees between the text (small positive) and figure (small negative). In the text, the magnitude of the effect of fixations on kappa for 1 cue is larger than for 0 cues, but the figure shows the reverse (the slope for 0 cues is steeper than 1 cue). The analyses for this interaction need to be double-checked. The authors do not adequately answer Q3, but it may be possible to address the issues in revision.

Experiment 1B Fixation Rate Scaling

The pattern in Fig3A seems to be that cues cause participants to fixate 0-cued locations, but that doesn’t make sense and doesn’t agree with Fig3B. One way to address this issue is for the fixation rates in Fig3A to be scaled based on the proportion of items that are cued 0, 1, or 2 times. More explanation: One trial type is ABC and the other ABA. On ABC trials there are three single cued items and N-3 uncued items. On ABA trials there are 1 double cued item, 1 single cued item, and N-2 uncued items.
The authors say N is usually 5 or 6; let’s imagine N=6 for the sake of example. A combination of 1 ABC trial and 1 ABA trial contains 12 total items, and in terms of cues a total of 1 double cued, 4 single cued, and 7 uncued items. Fixation rate, averaged (by eye) across cue durations, is approximately 4 for double cued, 4.5 for single cued, and 5 for uncued. Taking the number of items of each cue type into account, double cued items had a fixation rate of 4 for 1 item (ratio=4), single cued had 4.5 for 4 items (ratio=1.1), and uncued had 5 for 7 items (ratio=0.7). Looking at the ratios shows that participants are fixating cued items at a rate higher than would be expected given the number/proportion of items of each type (vs the assumption of random fixations), but Fig3A seems to suggest the opposite. Is it possible to take the proportions of cued item types into account when making Panel A? Or make an additional panel taking proportions into account? Fig3B seems to show the real effect better because the proportions of cue types (double, single, uncued) are “equalized” (in a sense) because there is only 1 target. Even if there are more uncued items than other types, only 1 of those can be selected as a target, so fixations to other nontarget uncued items are not counted in Panel B. The authors attempt to explain the discrepancy between Fig3A and Fig3B on P18 in the paragraph starting “At first glance, the positive effect of cue frequency in this second analysis appears to conflict with the negative effect in the first analysis.” Taking the cue type proportions into account is a better explanation than the one given in this paragraph. Also in the discussion P21: “When their attention was cued to refresh the memoranda (Experiment 1B), fixations occurred most frequently to non-cued locations.” While this statement is true, it is misleading because it doesn’t take the cue type proportions into account.
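This scaling argument can be reproduced as a quick calculation. It is an illustrative sketch only: the fixation rates are the approximate values read off Fig3A by eye, and the item counts assume N=6 as in the example above.

```python
# Illustrative only: fixation rates are eyeballed from Fig3A; item counts
# assume N = 6, with one ABC trial (3 single-cued, 3 uncued items) plus
# one ABA trial (1 double-cued, 1 single-cued, 4 uncued items).
items = {"double": 1, "single": 4, "uncued": 7}        # items per cue type
rates = {"double": 4.0, "single": 4.5, "uncued": 5.0}  # approx. fixation rates

# Scale each raw rate by the number of items of that cue type.
ratios = {k: rates[k] / items[k] for k in items}
print(ratios)  # double: 4.0, single: 1.125, uncued: ~0.71

# Per item, cued locations are fixated far more often than the raw
# (unscaled) rates in Fig3A would suggest.
assert ratios["double"] > ratios["single"] > ratios["uncued"]
```

Scaled this way, the ordering reverses relative to the raw rates, which is the point being made about Fig3A.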
(To be clear: the analyses of recall precision, like in Fig5, should not be affected by the concerns about fixation rate scaling.)

Analyzing Fixation Duration

No estimate of the duration of a look was provided. The dependent variable for eye movements was “looking rate” defined as fixations per second, but fixation rate doesn’t take into account the total duration of looking at target versus non-target locations. For example, suppose each look to a target location lasts three times as long as each random look elsewhere. That would be an indication of preferential looking at a target location that is not picked up by the looking rate measure. It is not implausible that meaningful looks would last longer than non-meaningful ones. When it is stated that “fixation rate decreased with increasing cues to memoranda locations (estimate = -1.21 [-1.87,-0.54]), such that participants looked more often to never-cued locations compared to once or twice-cued locations (see Figure 3A)”, maybe that counterintuitive finding occurs because participants are looking at cued locations longer with each look. It is possible that fixation duration is reflective of resource allocation or trying to remember/reconstruct the item at a location. Analyzing fixation duration along with fixation rate could provide a more complete understanding of the data. In revision, the authors should examine fixation duration. If it is a poor measure, the reason why it is a poor measure should be given.

Model choice and purpose of modeling

It isn’t clear what working memory model is being used. It sounds like a precision-only model in that nothing is mentioned about any parameters other than precision (kappa). Is the used model like the Bays and Husain (2008) model?
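The model question can be made concrete with a small simulation. In the two-parameter mixture model of Zhang and Luck (2008), a probed item is in memory with probability Pm and recalled with von Mises precision kappa; otherwise the response is a uniform guess on the color wheel. The sketch below is illustrative only (arbitrary parameter values, not the authors' fitted model): it shows how raw response error can grow while kappa is held fixed, purely through a drop in Pm.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_errors(n_trials, pm, kappa):
    """Response errors (radians) under a two-parameter mixture model:
    with probability pm the item is in memory and the error is
    von Mises(0, kappa); otherwise the response is a uniform guess."""
    in_memory = rng.random(n_trials) < pm
    return np.where(
        in_memory,
        rng.vonmises(0.0, kappa, n_trials),    # precise recall around target
        rng.uniform(-np.pi, np.pi, n_trials),  # random guess on the wheel
    )

# Same kappa, different Pm: mean absolute error rises with forgetting,
# even though "precision" per se is unchanged.
errs_high_pm = simulate_errors(10_000, pm=0.95, kappa=8.0)
errs_low_pm = simulate_errors(10_000, pm=0.60, kappa=8.0)
print(np.abs(errs_high_pm).mean() < np.abs(errs_low_pm).mean())  # True
```

A precision-only model collapses both effects into kappa, which is why it matters how forgetting is accounted for.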
Given the array sizes that were used, it is possible that many participants were below capacity, so some of the observed effects on precision could have been due not to reduced precision per se, but fewer items being in working memory (e.g. changes in the Pm parameter in the Zhang and Luck 2008 model). Readers could interpret kappa as memory precision alone, with forgetting accounted for by other parameters in the model. The authors should address this point of confusion about the model. It seems like the model is used as a kind of data transformation and that the analyses could be performed on the raw data (degrees of response error) instead of kappa. The authors should provide a reason for using the model instead of raw data.

Bays, P.M., & Husain, M. (2008). Dynamic Shifts of Limited Working Memory Resources in Human Vision. Science, 321, 851-854.

Exp1B Minor Issues

The language Within Trials vs Across Trials in Fig3 is unclear. Presumably, the data are aggregated across all trials for both plots. Also, one trial does not affect another trial, so that is not what is meant by across trials. The Exp1B method should clearly state whether the cues were informative (increased probability of testing the cued locations vs random test location). In the discussion, random testing is suggested: “… but in the current work the refreshing cues of Experiment 1B did not clearly indicate the test target as retro-cues do.” About the Exp1B retention interval duration, the authors say on P20: “… presenting refreshing cues for a longer duration yielded worse precision.” Was the RI longer when cues were longer? The method isn’t clear on this point.

Misc Minor Issues

The explanation on P13 of the threshold radius for classifying fixations is confusing, possibly because the units are wrong.
The confusing quotes: “This threshold was defined as a radius of 142°, which is the radius around 4 dots (…) before touching the area around the next dot …” “The threshold for looking at the center of the screen was a radius of 57.5° …” Do you mean the center was defined as a circle with a radius of some length? If so, a length in degrees would make me think visual angle, but 57 degrees of visual angle would be too big. Are the radii pixels rather than degrees?

For figures showing parameter intervals (like Fig 5 Panel A): Can you make the vertical line at 0 a little thicker so it stands out better?

For Table 1, can you spell out what the factors are in the three-way interaction? The four factors are Phase, RI, Placeholders, and Location. RI is not present in Exp2 and Placeholders are not present in Exp1, so we guess that the 3-way is Phase and Location plus RI (Exp1) or Placeholders (Exp2). Is this right?

Bayes factors are mentioned on P7, but it seems like you use credible intervals and not Bayes factors.

The graphs of fixation rate should include something like “Rate (looks per second)” on the Y axis so the meaning of “rate” is immediately obvious.

Some relevant literature was omitted: e.g., Mall, J.T., Morey, C.C., Wolff, M.J. et al. Visual selective attention is equally functional for individuals with low and high working memory capacity: Evidence from accuracy and eye movements. Atten Percept Psychophys 76, 1998–2014 (2014). https://doi.org/10.3758/s13414-013-0610-2

Further Questions

Was there a tendency for locations fixated during encoding to be fixated again during retention? If this answer is yes: Might the benefit (if any) of fixating a location during retention be an artifact of attention returning to stimuli fixated during encoding, which are presumably encoded better than non-fixated stimuli?
Specifically, when modeling memory precision, if fixations to the target at encoding are included as a covariate, does that account for some of the effect of fixations during retention on precision? In Exp1B, can you make the effect of fixations on precision go away by including fixations at encoding as a covariate? These questions might be better answered by researchers who find a substantial looking-at-nothing benefit than the current authors, who found some benefit only in Exp1B. There is no need to answer these questions in revision, but we think they are good questions.

The Exp2 RI shows a much lower preference for fixating on memoranda locations when placeholders are absent vs the other experiments. Is this effect due to the memoranda being presented in a smaller segment of the screen in Exp2 than Exp1? Perhaps there is an effect of the eyes wandering away from the center of the screen that accounts for apparent looking-at-nothing in Exp1 that could be examined with Exp2 data. This could be done by dividing the screen into four sections: stimulus locations used on this trial, stimulus locations not used on this trial (i.e. the stimulus ring around the center, not including this trial’s quadrant), center, and other. Presumably, however, stimulus locations not used on this trial would count as “other” in Fig2. If there was an eyes-wandering-from-center effect, “other” should have a higher fixation rate during the RI than was observed. So there does not appear to be an eye wandering effect, but where do participants’ eyes go during retention in Exp2? It seems like fixation rate is just lower in Exp2 than Exp1A and 1B. Maybe this has to do with fixation rate vs fixation duration? There is no need to address these questions in revision, but it is a curious finding.

Reviewer #2: This study presents two experiments of which the goal is to verify whether eye movements can be used as an online measure of refreshing. Participants had to remember arrays of n colors.
Experiment 1 used spontaneous and instructed refreshing during the retention interval, testing one item in the end. Experiment 2 used trials with and without placeholders to guide fixations during the retention interval, and tested all of the memory items. The conclusion of the study is less clear to me; I have difficulty giving a clear take-home message. On the one hand the authors state that “fixations during instructed but not spontaneous refreshing conditions account for additional variance in recall precision.” But then also: “Eye movements, however, do not seem suitable as an online measure of refreshing.” IF (!) I understood everything correctly, the conclusion should just be that eye movements do not seem suitable as an online measure of refreshing (see comment below, p 20-21). Is that correct? The abstract is written very clearly (except for the conclusion). The research question is very relevant and immediately captured my interest! And even though the answer may be negative in the end, at least we have an answer to a pending question! I found it difficult to follow the manuscript at several moments. I wonder if it might be easier to split the experiments into Experiment 1 and Experiment 2, and for the analysis clearly split those for Experiment 1A and 1B. I was often confused, as Experiment 1A and 2 are often taken together for the analysis but this is not always clearly indicated, so this led to confusion for me at several points (whether the paragraph concerned only Experiment 1A or Experiment 1 as a whole). I think it may be easier to start with Experiment 1, show its shortcomings, and then state in what way Experiment 2 can answer those.

P 13: Only 14% of the fixations qualified as looking toward the eventually tested item. It is stated that for that reason Experiment 2 was done. Is there an explicit reason that the instructed refreshing condition is not replicated, as this one seemed potentially to lead to interesting results?
P 16: Is there a reason why the fixation rate would be greater during the 2.5 s than during the 4 s interval? (And would the fixation rate be similar during the first 2.5 seconds, with fewer fixations thereafter?)

P 20-21: “Follow-up analyses showed that the effect of fixations was not credible for 2 refreshing cues (estimate = 0.06 [-0.20, 0.33]), positive and credible for 1 refreshing cue (estimate = 0.50 [0.11, 0.90]), and positive but not credible for 0 refreshing cues (estimate = 0.27 [-0.22, 0.77]; see Figure 5C). All the remaining interactions were not credible. In summary, these results indicate that looking-at-nothing behavior under instructed refreshing conditions is functional to WM, and, interestingly, not redundant with the impact of refreshing cues. Fixations appeared to capture additional variance in recall precision that tended to be more pronounced with fewer refreshing cues.” I have my doubts about this latter conclusion. The authors say that the effect is positive for 1 refreshing cue. That seems good news and in line with the conclusion that looking-at-nothing behavior under instructed refreshing is functional to WM. However, the fact that the effect was not credible for 2 refreshing cues nullifies this conclusion to me. Having 2 refreshing cues should at least have the same effect as 1 refreshing cue (and then maybe more, but not necessarily). Can the authors elaborate more on their point of view as to why the evidence supports this conclusion?

In general, I think the manuscript has greater potential than it shows right now if it were rewritten in a more straightforward way and with a clear conclusion.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Kyle O Hardman Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

27 May 2022

Please see the attached cover letter. Submitted filename: coverletter_v2.0_response2reviewers.docx

24 Jun 2022

Tracking eye movements during spontaneous and instructed refreshing in working memory PONE-D-22-00865R1

Dear Dr. Loaiza,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards, David K Sewell, Academic Editor, PLOS ONE

Additional Editor Comments (optional):

Dear Dr Loaiza,

Thank you for submitting this careful revision of your manuscript. I have received reviews from the same two reviewers who assessed the original manuscript (Reviewer 1 is Kyle Hardman; Reviewer 2 opted to remain anonymous). Both of the reviewers indicate that all of their concerns have been addressed in your revision. I very much appreciate the care you have taken in revising the manuscript. The manuscript reads very clearly and the refinements to earlier analyses have greatly helped to clarify some of the results that were previously ambiguous. I do note that Figure 1 appears to be missing from the revised manuscript and urge you to take care when uploading the figures for the final submission. Setting aside the omitted figure, I am happy to accept the manuscript for publication. Congratulations!

Yours sincerely, David K Sewell

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors responded well to our concerns and have much improved the manuscript in this revision. Our concerns have been addressed in this revision and the article seems ready to publish from our perspective. We appreciate the extra figures that were included in the responses to our comments. Regards, Kyle O. Hardman (Nelson Cowan co-reviewed the original manuscript but not this revision.)

Minor notes: In Figure 4, it’s possible that panels A and B are not both needed. Cues are uninformative and participants don’t know what item will be tested, so there should not be a systematic difference in fixating target and nontarget items. There could be more noise in panel B because the target is only 1 of many items. You may only need to include panel A. Where did Figure 1 go? I assume it’s the same as the first submission. Page 18 last line: “associated” -> “associate”

Reviewer #2: All comments have been addressed appropriately; I have no further comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Kyle O.
Hardman Reviewer #2: No

**********

6 Jul 2022

PONE-D-22-00865R1 The eyes don’t have it: Eye movements are unlikely to reflect refreshing in working memory

Dear Dr. Loaiza:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards, PLOS ONE Editorial Office Staff, on behalf of Dr. David Keisuke Sewell, Academic Editor, PLOS ONE
Similar articles: 28 in total

1.  Second thoughts versus second looks: an age-related deficit in reflectively refreshing just-activated information.

Authors:  Marcia K Johnson; John A Reeder; Carol L Raye; Karen J Mitchell
Journal:  Psychol Sci       Date:  2002-01

2.  Eye movements and serial memory for visual-spatial information: does time spent fixating contribute to recall?

Authors:  Jean Saint-Aubin; Sébastien Tremblay; Annie Jalbert
Journal:  Exp Psychol       Date:  2007

3.  Look here, eye movements play a functional role in memory retrieval.

Authors:  Roger Johansson; Mikael Johansson
Journal:  Psychol Sci       Date:  2013-10-28

Review 4.  Taking a new look at looking at nothing.

Authors:  Fernanda Ferreira; Jens Apel; John M Henderson
Journal:  Trends Cogn Sci       Date:  2008-11       Impact factor: 20.229

5.  Hierarchical Bayesian measurement models for continuous reproduction of visual features from working memory.

Authors:  Klaus Oberauer; Colin Stoneking; Dominik Wabersich; Hsuan-Yu Lin
Journal:  J Vis       Date:  2017-05-01       Impact factor: 2.240

6.  Visual selective attention is equally functional for individuals with low and high working memory capacity: evidence from accuracy and eye movements.

Authors:  Jonathan T Mall; Candice C Morey; Michael J Wolff; Franziska Lehnert
Journal:  Atten Percept Psychophys       Date:  2014-10       Impact factor: 2.199

Review 7.  What is attentional refreshing in working memory?

Authors:  Valérie Camos; Matthew Johnson; Vanessa Loaiza; Sophie Portrat; Alessandra Souza; Evie Vergauwe
Journal:  Ann N Y Acad Sci       Date:  2018-03-15       Impact factor: 5.691

8.  Representation, space and Hollywood Squares: looking at things that aren't there anymore.

Authors:  D C Richardson; M J Spivey
Journal:  Cognition       Date:  2000-09-14

9.  Gaze-based and attention-based rehearsal in spatial working memory.

Authors:  Alessandra S Souza; Stefan Czoschke; Elke B Lange
Journal:  J Exp Psychol Learn Mem Cogn       Date:  2019-10-03       Impact factor: 3.051

10.  Covert shifts of attention can account for the functional role of "eye movements to nothing".

Authors:  Agnes Scholz; Anja Klichowicz; Josef F Krems
Journal:  Mem Cognit       Date:  2018-02
