
Stop and think: Additional time supports monitoring processes in young children.

Sophie Wacker, Claudia M. Roebers

Abstract

When children evaluate their certainty, monitoring is often inaccurate. Even though young children struggle to estimate their confidence, existing research shows that monitoring skills develop earlier than previously expected. Using a paired associates learning task with integrated monitoring, we implemented a time window to "Stop and Think" before children generated their answers and evaluated their confidence in the chosen response. Results show that kindergarten and second grade children in the "Stop and Think" condition achieved higher monitoring accuracy than the control group. Implementing a time window thus seems to support children in their evaluation of different certainty levels. Relating monitoring to individual differences in independently measured inhibitory control skills revealed a correlation between monitoring and inhibition for kindergarteners.


Year:  2022        PMID: 36107922      PMCID: PMC9477363          DOI: 10.1371/journal.pone.0274460

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Metacognition research consistently reveals that young children show inaccurate monitoring skills; that is, they are overly optimistic when evaluating their performance. Accurate monitoring is important for a wide range of cognitive domains, including academic achievement [1]. Although often overoptimistic, young children have been found to be able to monitor their performance accurately (for example, asking for clarification in everyday life when ambiguous information is provided, or hesitating in play situations when executing an ambiguous demand) [2–4]. The current approach tests the possibility that children’s inaccurate monitoring is, at least in part, due to young children not taking enough time to engage actively in monitoring processes. We will explore this question in two ways. For one, we will experimentally induce a time window during which children are asked to monitor and compare different responses regarding their likelihood of being the correct answer. For another, we will independently quantify participants’ inhibitory control skills and relate them to their monitoring ability.

Monitoring is a fundamental part of metacognition [5, 6], describing an individual’s capability to reflect on and supervise cognitive processes [4]. There are several methods to measure different aspects of monitoring. Monitoring processes measured before a memory test are called prospective judgments; these include, for example, judgments of learning or feelings of knowing [7]. The present study focused on retrospective monitoring processes, measured after a memory test and termed confidence judgments. Children, adolescents, and adults experience and report their confidence on different levels, ranging from very unsure to very sure. Kindergarten and primary school children have more difficulty estimating their performance accurately than older individuals [3, 8–10]; that is, they report being very sure, often independent of their response’s accuracy.
This overconfidence is partly due to imprecise monitoring skills [11–14]. Even for incorrect answers, young children often give high confidence judgments, suggesting that their ability to reflect on certainty is far from fully developed at this age. During the primary school years, monitoring becomes more sophisticated and differentiated and shows increasing congruency with actual performance [15, 16]. The theoretical background of the present approach is a broader conceptualization of higher-order cognitive self-regulation, entailing metacognition and executive functions [17, 18]. Self-regulatory skills are increasingly recognized to embrace executive functions (EF) and monitoring [19–23]. Executive functions are top-down regulating processes and include updating, shifting, and inhibition [24]. Inhibitory control skills in particular are needed for many everyday tasks, including learning and monitoring [25]. In younger children, inhibitory control and metacognition are assumed to be closely intertwined. For example, recent results reported by Kälin and Roebers [21] uncovered an association between monitoring and executive functions: better inhibition was related to longer latencies when giving confidence judgments in a paired associates learning and recognition paradigm. This suggests that children with better inhibitory skills took more time to report their confidence in the selected answer. Most interestingly, these longer latencies resulted in more accurate monitoring judgments. In other words, children’s monitoring accuracy was better when they took longer to generate a monitoring judgment, in line with the concept of "more time to think". With increasing age and experience, monitoring and EF are thought to differentiate and follow distinct developmental trajectories [18]. Based on their findings, Roebers et al. [22] postulated that well-developed EF are necessary to develop metacognitive skills.
Deficits in monitoring processes could result from immature executive functions, because a certain level of EF skills is needed to perform metacognition successfully. Therefore, the association might be stronger in younger than in older children. This assumption is corroborated by findings showing that monitoring is more closely related to inhibitory control skills in 5- than in 7-year-olds [19]. Recent longitudinal research addressing the interrelation of these constructs revealed that EF at an early age predicts self-regulated learning one year later, but not vice versa [26]. The present study included kindergarten children and second graders to confirm previous findings: we aimed to further explore whether inhibition is indeed more critical for younger than for older children’s monitoring accuracy, under the assumption that school attendance and academic tasks gradually train children’s monitoring skills [4]. From a neuropsychological perspective, inhibition might serve as a prerequisite for monitoring. To engage in monitoring, the responsible neural networks need time to loop signals from the anterior cingulate cortex (ACC) to frontal structures [27]. Prefrontal structures, especially the ACC, are considered a neurological correlate of monitoring and cognitive processing [28]. Therefore, neurological monitoring signals need time for transmission [29]. Consequently, immature inhibitory skills may not provide enough time for these signals to be strengthened and passed on to related neural structures [30]. Only if an individual takes enough time to process information can monitoring come into effect [17, 31, 32]. In other words, if individuals can inhibit their prepotent responses, hesitate, and ask themselves "Am I really sure about my answer?", this should benefit their monitoring accuracy [33]. However, for these processes, one must first develop and experience a feeling of uncertainty.
Engaging with uncertainty (carefully evaluating one’s own levels of certainty) may trigger metacognitive processing and can result in better performance due to a more differentiated and conscious evaluation [17]. Indirect evidence supporting the view that time may play a crucial role in monitoring accuracy stems from research on the delayed-JOL effect: delayed compared to immediate judgments of learning are typically more accurate, both in adults and in children [34, 35]. In cognitive tasks, for example in a memory test, experimentally inducing a delayed response by providing additional time before responding has repeatedly been found to be an efficient means of increasing the accuracy of children’s responses [25, 36–38]. Simpson et al. [39] showed that if a child must wait a set time before generating an answer, this answer is more likely to be correct than one given immediately. Poor task performance can result from a prepotent response; with additional time, reflective processing may result in better performance. None of these studies has yet applied this concept to monitoring. We build on these findings and explore the extent to which a "Stop and Think" instruction may positively affect children’s monitoring accuracy. Additionally, research focusing on the accuracy of a confidence judgment based on the prior answer showed that information processing does not end after the decision is made [40]. On the contrary, further information accumulates during the interval between an answer and the corresponding confidence judgment. This accumulation may also profit from more time, which is in line with our assumption. Inhibiting the prepotent response and allowing neurological signals to strengthen [41, 42] may also allow the accumulation of additional information, which may be guided by monitoring processes. To our knowledge, no study has yet explored the influence of increased time for monitoring on children’s monitoring accuracy.
In an experimental setting, we implemented a delay during which the child should "Stop and Think". In the present study, children solved a paired associates learning task. After studying several item pairs, participants had to choose one out of four answer alternatives that matched the corresponding stimulus picture (recognition phase). The "Stop and Think" delay was inserted after recognition and before the subsequent monitoring. Afterwards, children had to select a confidence judgment by rating how sure they were that they had chosen the correct item pair. We hypothesized that being "forced" to take more time to monitor, preventing fast and thus undifferentiated monitoring judgments, would positively affect children’s monitoring skills. More time until monitoring judgments are given may allow the individual to pause and reflect on the ongoing cognitive and metacognitive processes, ideally leading to better monitoring. Against the background of the above-mentioned findings [19, 43], we expected small benefits from additional time. To evaluate the impact of additional time on different aspects of monitoring accuracy, we analyzed a relative score (monitoring discrimination, i.e., the difference in confidence between correct and incorrect responses [44]) and an absolute score (overconfidence, i.e., the deviation of certainty from performance). We did not expect any effect on recognition, as the delay was inserted only after participants had chosen an alternative. From an individual differences perspective, and in parallel to the theoretical background outlined above on the relation between inhibition and monitoring, inhibition might be a candidate factor contributing to high confidence in children irrespective of performance [19]. One might expect that better inhibition allows the child to hesitate instead of jumping at an answer and reporting high confidence, to reflect on the likelihood of the different alternatives being correct, and thus to monitor more accurately.
However, more research is needed to understand the relation between monitoring and inhibition. Despite intensive research on metacognition and its development, relatively little attention has been paid to individual differences within homogeneous age groups. The preschool and kindergarten age represents a critical time window for executive function development [45, 46]. In cognitive tasks requiring inhibitory control, findings show that younger children benefit more from a delay than older ones [36, 47]. These results indicate that younger children need more support in inhibiting their impulsive behavior. Because of developmental maturation and still relatively immature inhibition functions [19, 48], we hypothesized that kindergarten children would benefit more from a "Stop and Think" instruction than second graders. To address the role of individual differences in monitoring beyond our experimental manipulation, we also assessed inhibitory control skills independently of metacognition. This allowed us to explore the relationship between inhibition and monitoring accuracy in the control condition, in which no "Stop and Think" instruction and no delayed monitoring judgments were induced. We expected that individual differences in inhibition would be weakly but positively related to monitoring accuracy in younger, but not necessarily in older (school) children in the control condition. In addition, we examined the relationship between inhibition and monitoring accuracy in the experimental group (only children with the "Stop and Think" instruction). Thus, we investigated whether there are individual differences in the extent to which a delay can contribute to improving monitoring accuracy. For example, children with poor inhibition might benefit more from a delay than children with already sophisticated inhibitory control skills.

Methods

Participants

Data stem from N = 393 children from rural and urban areas in a mid-European country. The sample comprised N = 202 kindergartners (44.6% female) between 4 and 6 years of age (M = 73.6 months, SD = 7.4 months) and N = 191 second graders (45.5% female) between 7 and 9 years of age (M = 94.2 months, SD = 7.1 months). Participants represent a sample of middle-class families, mostly of Caucasian descent. The Ethics Committee of the Faculty of Human Sciences at the University of Bern approved the study (Approval No: 2002–100005). The parents or legal guardians of all participating children signed an informed consent form. Further, all children were asked verbally whether they wanted to participate prior to testing, and it was explained to them that they could terminate the task at any time; no child ever did. Data are entirely anonymous. Due to technical problems, we excluded N = 2 participants. Additionally, N = 20 children had to quarantine due to COVID-19 during one test session; these children were also excluded from the analyses reported below because they did not solve the paired associates learning task, and due to the restrictions in place there was no opportunity to retest them. We also had to exclude N = 4 participants with an accuracy of 0% or 100% in the recognition block of the paired associates task, as they did not generate complete monitoring data. Because of COVID-19 restrictions and because several children had to be in quarantine, we could not collect inhibition data for all children; the Hearts and Flowers task analysis is therefore limited to N = 330.

Procedure and measures

Children performed two different computer-based tasks on tablets (Samsung Galaxy S6). Trained investigators were present during the study. Test sessions took place in a group setting in children’s schools, with each participant listening to pre-recorded instructions through headphones. Children solved a paired associates learning task with integrated monitoring (30 to 40 min), logging their answers by touching predefined areas on the screen with their index fingers. To assess the monitoring aspect, the task was embedded in a cover story about two children. Following a familiarization phase, the task comprised three phases. In the first, the learning phase, participants learned different numbers of item pairs (kindergarteners: 16 items, second graders: 22 items); each item pair was presented for 4 s. After the learning phase, participants solved a filler task for about one minute, followed by a recognition phase: participants were shown one constituent of an item pair and had to choose one out of four possible answers as the matching item. There was no time limit for choosing the matching item. After choosing an answer in the recognition phase, participants were immediately asked to provide a confidence judgment (CJ) for their selection in the final monitoring phase, indicating their certainty on a 7-point Likert scale (adapted from [27]). Children were randomly assigned to either the control group (CG; they solved the task as described above) or the experimental group (EG). Participants in the EG had to wait a set time before choosing a CJ. Research suggests that the diffusion of neurological signals to the prefrontal cortex takes about 200–250 ms [20, 30], and other studies found that implementing a delay of 4 seconds leads to a performance improvement [36].
Therefore, we chose an interval representing a reasonable pause, allowing enough time for diffusion as well as time for reflection: a fixed delay of 8 s before participants could choose a CJ. Throughout this 8 s delay, an animation was shown during which the pictures became gradually transparent and smaller. At the same time, one of the two protagonists appeared with a big speech bubble containing the thermometer. This sequence represented the protagonists taking time to think about their answers and their certainty, following the pattern "Stop and Think". All other procedures did not differ from the control group. We generated 12 item pairs of medium difficulty (difficulty index .57) for the kindergarteners and 15 items (difficulty index .60) for the second graders. Item pairs with very high difficulty (index below .32; kindergarteners: N = 2, second graders: N = 5) and very low difficulty (index above .77; N = 2 in each grade) served as anchor items and were not used for the analysis. To address relative aspects of monitoring, we calculated a discrimination score quantifying the ability to discriminate between CJs for correct and CJs for incorrect answers [49, 50]. Additionally, we used the bias index for absolute aspects of monitoring [50]. The bias index maps onto a continuous range between underestimation (-1), accurate estimation (0), and overestimation (+1). In another session (15 min), with a minimum delay of one week, each child solved the Hearts and Flowers task, capturing inhibition and cognitive flexibility [51, 52]. For this task, two external response buttons were connected to the computer and placed on the right and left sides of the screen. In the congruent condition (heart block; N = 24 trials), a heart appeared on the right or the left side of the screen, and children had to press the button on the same side as the heart.
In the subsequent incongruent condition (flower block; N = 36 trials), children were to press the button on the opposite side of where the flower appeared. In the final mixed block, congruent (heart) and incongruent (flower) trials were combined and appeared in pseudorandomized order (N = 60 trials). Stimuli were presented for 2500 ms, followed by an interstimulus interval of 500 ms. Dependent variables. We calculated the Rate Correct Score (RCS) for every block [53], reflecting the number of correctly solved items per second. For the Hearts and Flowers task, we excluded N = 34 participants because their overall accuracy was lower than .50 (below chance level). Reaction times under 200 ms were excluded, as they typically represent reflexes or second, corrective responses to the previous trial. Our primary interest lay in the RCS of the flower block, which is considered to mainly represent inhibition [51].
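The three scores described above can be sketched in Python, the language used for the analyses. The exact formulas are not spelled out in the text, so this is a minimal sketch under common assumptions from the cited literature: discrimination as the mean CJ for correct minus the mean CJ for incorrect answers, the bias index as mean confidence rescaled to 0–1 minus proportion correct, and the RCS as the number of correct responses divided by total response time. Function names are illustrative.

```python
import numpy as np

def discrimination_score(cj, correct):
    """Mean confidence for correct minus mean confidence for incorrect answers.
    cj: confidence judgments on the 7-point scale; correct: boolean per item."""
    cj, correct = np.asarray(cj, float), np.asarray(correct, bool)
    return cj[correct].mean() - cj[~correct].mean()

def bias_index(cj, correct):
    """Mean rescaled confidence minus proportion correct.
    Ranges from -1 (underestimation) through 0 (accurate) to +1 (overestimation)."""
    cj, correct = np.asarray(cj, float), np.asarray(correct, bool)
    scaled = (cj - 1) / 6  # map the 1..7 Likert scale onto 0..1
    return scaled.mean() - correct.mean()

def rate_correct_score(correct, rt_seconds):
    """Rate Correct Score: correctly solved items per second of total response time."""
    correct = np.asarray(correct, bool)
    return correct.sum() / np.asarray(rt_seconds, float).sum()
```

For example, a child who gives CJ = 7 to two correct answers and CJ = 1 to two incorrect ones has a discrimination score of 6 and a bias index of 0 (confidence matches performance exactly).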

Statistical analysis

Our study follows a 2 (control vs. experimental group) x 2 (kindergarteners vs. second graders) between-subject design. We used SciPy, NumPy, pandas, and statsmodels, running on Python, for data analysis, and seaborn and Matplotlib for data visualization. The monitoring-related dependent variables were examined with between-subject analyses of variance (ANOVAs). Effect sizes were estimated with partial eta squared (ηp2). To explore the relationship between inhibition and monitoring accuracy, we computed correlation analyses and report the corresponding coefficients (r).

Results

Preliminary analysis

We conducted a between-subject ANOVA to rule out that an improvement in monitoring accuracy might be driven by pre-existing differences in performance accuracy between the CG and EG. A significant main effect of age (F(1,392) = 5.08, p = .025, ηp2 = .013) revealed higher performance accuracy scores for second graders (M = .57, SD = .18), corresponding to 13 out of 22 correctly solved items, than for kindergartners (M = .53, SD = .19), corresponding to 9 out of 16 correctly solved items. Thus, there was a well-balanced database, with an approximately equal number of correct and incorrect answers and their confidence judgments, for the monitoring analyses reported below. The main effect of condition (F(1,392) = 3.08, p = .08, ηp2 = .008) and the interaction (F(1,392) = .121, p = .728, ηp2 = .00) did not reach significance. Therefore, we can assume that an improvement in monitoring accuracy observed in the EG is not an artifact of better performance accuracy. As a preliminary analysis of performance in the inhibition measure, we calculated a between-subject ANOVA. The main effect of age (F(1,325) = 91.03, p < .001, ηp2 = .219) was significant, with more correctly solved items per second for second graders (M = .45, SD = .12) than for kindergarteners (M = .33, SD = .11). The main effect of condition (F(1,325) = 3.34, p = .068, ηp2 = .01) and the interaction (F(1,325) = .68, p = .41, ηp2 = .002) did not reach significance. These results indicate that inhibitory control performance was comparable across the CG and EG.

Monitoring

To address relative monitoring accuracy, we evaluated the discrimination score. This score taps children’s ability to metacognitively discriminate in their confidence judgments between correctly and incorrectly recognized item pairs by giving substantially higher CJs for correct than for incorrect recognition. Results of the between-subject ANOVA revealed a significant main effect of age (F(1,389) = 14.43, p < .001, ηp2 = .036), with higher discrimination scores for second graders (M = 1.50, SD = 2.53) compared to kindergarteners (M = .48, SD = 2.65). In addition, a significant main effect of condition was identified (F(1,389) = 4.23, p = .04, ηp2 = .011), with participants in the EG (M = 1.2, SD = 2.6) achieving better discrimination between correct and incorrect items than the CG (M = .6, SD = 2.6), that is, more accurate monitoring (see Fig 1). The interaction did not reach significance (F(1,389) = .04, p = .842, ηp2 = .00); thus, the effect of the delay was similar in the two age groups.
Fig 1

Distribution of the discrimination score separated for condition and age.

Note. Boxplot for the dependent variable discrimination score, separated for Age (Kindergartners vs. Second Graders), and Condition (Control Group (CG) vs. Experimental Group (EG)). Whiskers represent 1.5 * interquartile range.

As the literature offers ample evidence for young children’s performance overestimation, we were also interested in an absolute score of monitoring, the bias index. This score can range from underestimation (negative values) through accurate estimation (values around zero) to overestimation (positive values). The ANOVA with age and experimental condition as between-subject factors revealed a significant main effect of age (F(1,389) = 13.71, p < .001, ηp2 = .034): kindergartners (M = .29, SD = .26) showed stronger overconfidence than second graders (M = .19, SD = .255). A main effect of condition (F(1,389) = 5.39, p = .021, ηp2 = .014) was also found, with participants in the EG (M = .21, SD = .26) showing less overconfidence than the CG (M = .28, SD = .26) (see Fig 2). Contrary to our hypothesis, this effect was about equally strong in both age groups, as the interaction did not reach significance (F(1,389) = .382, p = .537, ηp2 = .001).
Fig 2

Distribution of the bias index separated for condition and age.

Note. Boxplot for the dependent variable bias index, separated for Age (Kindergartners vs. Second Graders), and Condition (Control Group (CG) vs. Experimental Group (EG)). Whiskers represent 1.5 * interquartile range.


Individual differences in inhibition

We report the correlations separately for the two conditions. In the control group, we explored whether individual differences in inhibition are related to monitoring accuracy (discrimination score and bias index) independently of our "Stop and Think" manipulation; these results are therefore based only on participants in the CG. For this analysis, we related individual differences in the Rate Correct Score of the flower block of the Hearts and Flowers task to the monitoring measures (discrimination score and bias index). For kindergarteners, correlational analysis revealed a significant positive correlation between the discrimination score and the inhibition RCS (r = .225, p = .034, n = 88): higher values in the discrimination score (representing more accurate monitoring) were related to better performance in the inhibition RCS (more correctly solved items per second within the flower block). This correlation represents only a small effect. Regarding the bias index and the inhibition RCS, no significant correlation was observed (r = -.144, p = .178, n = 88). For second graders, no significant correlations were observed, neither for the discrimination score (r = .20, p = .104, n = 67) nor for the bias index (r = -.061, p = .626, n = 67). In the experimental condition, by correlating inhibition with our monitoring measures, we addressed whether inserting the delay between recognition and monitoring has a differential effect on participants depending on their inhibitory control skills. Correlational analysis addressing the discrimination score revealed no significant correlation for kindergarteners (r = .169, p = .108, n = 91) or second graders (r = .133, p = .233, n = 82). Likewise, no significant correlation was found for the bias index, neither for kindergarteners (r = -.119, p = .261, n = 91) nor for second graders (r = -.075, p = .502, n = 82).

Discussion

The present study sheds light on young children’s difficulties in accurately monitoring memory performance by combining an experimental approach with an individual differences approach relating monitoring to inhibition. For one, we induced a delay between recognition and reporting confidence; for another, we related performance in inhibition (measured with the Hearts and Flowers task) to our monitoring measures. In line with previous research, our results confirmed that second graders and kindergarteners already show indications of emerging monitoring skills [10, 11, 54]. Children were able to discriminate substantially between correct and incorrect responses, but their evaluation of incorrect item pairs was still highly overoptimistic [12]. This pattern of results underlines the still undifferentiated monitoring skills of young children [10, 55, 56]. For the relative (discrimination score) and the absolute (bias index) measure of monitoring accuracy, findings pointed in the same direction: second graders showed a more sophisticated discrimination between CJs for correct and CJs for incorrect items, and less overconfidence, than kindergartners. Thus, the age differences reported above concerning discrimination and overconfidence fit nicely into the existing literature [56–59]. As to our experimental manipulation, our results suggest that waiting and reflecting on certainty and uncertainty regarding the selected answer (participants in the "Stop and Think" condition) did indeed lead to better monitoring discrimination. Moreover, children who were forced to wait and reflect also showed less overconfidence. Implementing a time window thus seemed to support children in their evaluation of confidence and led to more accurate monitoring. Our findings indicate that a brief pause in which the child can "Stop and Think" can improve not only performance (as shown in previous studies: [38, 39, 60]) but also monitoring accuracy.
Especially children with difficulties inhibiting a prepotent response may benefit from more time [25]. Further, the finding that giving time enhances monitoring accuracy is in line with recent findings [40] indicating that information processing is not terminated when a decision is made. More information seems to accumulate between a memory decision and the corresponding monitoring judgment, supporting the idea that additional time may lead to more accurate evaluations due to the accumulation of information supervised by monitoring processes. Even though our experimentally induced manipulation is not identical to the methods used in previous delay research, the results appear to indicate that the underlying processes are related. In addition to the advantages of "more time to think" known from previous research, the present study found an impact on two conceptually different monitoring measures. It is of particular interest that the benefits were not limited to just one aspect of monitoring; instead, our findings hint at the possibility that monitoring processes overall were affected. This is promising for future research. Nevertheless, the present study revealed only small effects of the "Stop and Think" manipulation. Perhaps implementing an extended time window is insufficient to reduce this overoptimistic behavior entirely. Therefore, the possible negative side effects of overconfidence [61], such as ending a learning phase too early or not investing enough time in tasks with increased demands, cannot be completely overcome with such an approach [62, 63]. By giving children more time to consider their answers and their confidence, we could only enable one aspect: providing time for transmission and allowing the individual to be prepared, at least on a neurological level.
This delay may not be sufficient to profit fully from this neurological readiness; an individual must also experience the benefits of monitoring in an environment in which advantages can emerge (for example, asking for clarification when unsure to avoid errors). Contrary to our assumption, kindergarteners did not benefit disproportionately from our manipulation. It may be that second graders’ inhibition skills are not yet as far developed as monitoring processes require. Research based on neurological studies indicates that the maturation of inhibitory control skills continues until adulthood [64], and the ability to inhibit a prepotent response evolves until adolescence [65]. Therefore, it seems reasonable that second graders benefited equally from our induced support [66]; the range of individuals who can benefit from such a "Stop and Think" instruction may be much broader than expected. Regarding our individual differences approach and our attempt to better understand the role of inhibition in monitoring, the results revealed a significant positive correlation between inhibition and the monitoring discrimination score only for kindergarteners. This relation may indicate that better inhibition skills can indeed be associated with more accurate monitoring skills, especially at an earlier age. However, these findings were not confirmed for the bias index; the relation between inhibition and monitoring accuracy thus remains not fully understood. Although other research [21] suggests that accurate monitoring may result from better EF, our results do not reflect the strong interrelation we had expected. The results displayed a significant correlation only for kindergarteners, but the correlations of the two age groups were very close in their r values. The non-significant correlation for the second graders may be due to reduced statistical power owing to the somewhat smaller sample size in the older age group.
This non-significant correlation should therefore be interpreted with caution [67]. The present results question strong claims that accurate monitoring can be supported through a certain level of inhibition skills; rather, our findings indicate that inhibition is a necessary but not sufficient prerequisite for accurate monitoring in children. In her review, Roebers [18] noted that methodological differences likely contribute to the typically weak association between monitoring and inhibitory skills. Capturing subcomponents of both constructs in detail and with different methods would enable more fine-grained comparisons. For example, Kälin et al. [21] found a relation between inhibition and implicit, but not explicit, measures of monitoring. The association between metacognition and EF could thus vary as a function of the inhibition and monitoring measures used. In fact, a meta-analysis showed that tasks measuring inhibition are valid only within specific age ranges [68]: because the behavioral manifestation of inhibitory skills changes over time, the choice of measure must be adapted to the precise age range of interest. This finding highlights the complexity of choosing the right measure for the right age range when relating monitoring and inhibition to each other. Recent results suggest that metacognitive skills are far more present in young children (3 to 4 years old) than previously expected and that EF and metacognition are related, underscoring the importance of pursuing research in this direction [26, 43]. Correlational analysis within the experimental group revealed no significant relation between inhibition and monitoring accuracy, suggesting that our manipulation does not affect participants differently depending on their inhibitory control abilities.
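The power consideration above — that two correlations with nearly identical r values can differ in significance purely because of sample size — can be illustrated with a small sketch. It converts each Pearson r to its t statistic via t = r·sqrt(n−2)/sqrt(1−r²); the r values and sample sizes are taken from the study (r = .225, n = 88 for kindergarteners; r = .200, n = 67 for second graders), while the two-tailed critical value of roughly 2.0 at alpha = .05 for these degrees of freedom is an approximation, not an exact quantile.

```python
import math

def r_to_t(r: float, n: int) -> float:
    """Convert a Pearson correlation to its t statistic (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Values reported in the study.
t_kindergarten = r_to_t(0.225, 88)   # ~2.14
t_second_grade = r_to_t(0.200, 67)   # ~1.65

# Approximate two-tailed critical value at alpha = .05 for df around 65-86.
T_CRIT = 2.0

print(f"kindergarteners: t = {t_kindergarten:.2f}, significant: {t_kindergarten > T_CRIT}")
print(f"second graders:  t = {t_second_grade:.2f}, significant: {t_second_grade > T_CRIT}")
```

Despite r values differing by only .025, the larger sample's t statistic clears the critical value while the smaller sample's does not, matching the reported pattern (p < .05 vs. p = .104).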

Implications

Even though the results yielded only small effects, they shed light on monitoring processes that are not yet fully understood. Given the theoretical background [18, 69] and the effects observed in the present study, the evidence supports the idea that additional time for monitoring may indeed result in more accurate monitoring. From a neurological perspective, too, we would expect that neural signals generated in the ACC and transferred to frontal regions need time for transmission [27]; in other words, the metacognitive neurological signal has time to strengthen and influence monitoring processes. Accurate monitoring is highly relevant for everyday life. Not only young children but also adults benefit from more sophisticated monitoring skills: observing, reviewing, and evaluating ongoing cognitive processes are essential in school, in higher education, and in later careers [70, 71]. To the best of our knowledge, the present study is the first to extend the time window for children's monitoring, facilitating metacognitive transmission within an experimental design.

Limitations

No study is perfect, and this one is no exception. Naturally, we cannot be sure that children actively engaged in monitoring during the delay. It is possible that, despite our "Stop and Think" instruction, the processing of metacognitive signals was not increased. For some children, thinking profoundly about their answers may only be achieved if they have, for example, an intrinsic willingness to do so. For future research, a reminder implemented during the animation might trigger cognitive activation for monitoring [72]. Additionally, comparing our instruction with an uninstructed delay, which simply allows more time to reflect in an unguided way, would lead to a more differentiated understanding of the underlying processes. Based on our and previous findings, willingness together with additional time to reflect appears essential [36, 38].

Conclusion

The present results indicate that giving young children more time to "Stop and Think" can improve monitoring accuracy and reduce overconfidence. They further suggest that monitoring during this time window, in which children process and generate their answers and evaluate their confidence in the chosen answer, can be strengthened with external support. With a more profound understanding of the underlying processes and of how they can be supported, we may help facilitate learning activities for students and support teachers in the school setting.

Data monitoring.

(XLSX)

Data inhibition.

(XLSX)

19 Apr 2022
PONE-D-22-05420
Stop and think: Additional time supports monitoring processes in young children
PLOS ONE

Dear Dr. Wacker,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
All three reviewers agree that this research is of high quality, but each makes clear and succinct suggestions for how to improve the paper. The comments concern the literature review and conceptual motivation in the introduction, ways to clarify methods and analysis details, and how to improve the interpretation of the analyses. I agree with their suggestions. Please address every comment in the revision.
Please submit your revised manuscript by Jun 03 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Micah B. Goldwater, Ph.D
Academic Editor
PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.
You indicated that you had ethical approval for your study. In your Methods section, please ensure you have also stated whether you obtained consent from parents or guardians of the minors included in the study or whether the research ethics committee or IRB specifically waived the need for their consent. 3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide. 4. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. 
If consent was waived for your study, please include this information in your statement as well.

5. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Partly
Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available.
If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this manuscript Wacker and Roebers investigated the discrepancy between children's monitoring abilities in day-to-day life, which appear to be pretty good, and their monitoring abilities in research settings, which appear to be poor. They tested if giving children extra time before self-reporting their monitoring (i.e., confidence judgements) increased monitoring accuracy in a research setting and if these self-reports were related to inhibitory control. The manuscript was succinct and straightforward and made it clear that this study was motivated by previous research. Furthermore, the ideas/predictions were easy to follow. I only have a few clarifications or methodological/statistical related comments.

1. One major issue is that the data appear to be all between subjects (based on the methods section) and yet some of the information in the results section makes it seem like some data was being treated as within subjects.
I'm assuming this is a typo and that the correct test was run (degrees of freedom numbers in results section suggest a between subjects ANOVA), but it does need to be corrected in the text.

• Methods section text: "Children were randomly assigned to either the control group (CG); they solved the task as described above or the experimental group (EG)."
• Fig 1 and Fig 2 text: "Note. Discrimination Score, between factors Age (Kindergartners vs. Second Graders), within factors Condition (Control Group (CG) vs. Experimental Group (EG))"

Related to this issue, please indicate what type of ANOVA you are running (between-subjects, within-subjects, or mixed) when you describe your ANOVA based analyses (currently the manuscript only says that it was a 2x2 design).

2. The last sentence in the abstract, "The pattern of results revealed an interactive role between monitoring and inhibition only for children in kindergarten and not for second graders", is not fully supported by the results. There was a correlation done between monitoring and inhibitory control for each age group and so you could say that there was a correlation between monitoring and inhibition for kindergarteners. However, unless you directly test an interaction between monitoring and inhibitory control and include both age groups in this test, you cannot say that there was an interaction between monitoring and inhibitory control only for one age group.

• Nieuwenhuis, S., Forstmann, B. & Wagenmakers, EJ. Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci 14, 1105–1107 (2011). https://doi.org/10.1038/nn.2886

Moreover, the correlations between monitoring and inhibitory control were very close in their r values (0.225 for kindergarteners and 0.200 for second graders) despite them only being significant for kindergarteners (p<.05 vs p=.104).
An article by Tamar and Jean-Jacques (2019) suggests that a lack of significance can sometimes be due to the sample being underpowered and researchers should be cautious in over interpreting insignificant results. In their article, they do say that a small effect size means an effect is unlikely to be theoretically meaningful. However, I bring up their article because the r values are so close in size between these two groups and I noticed that the kindergarteners are a larger sample than the second graders (n=88 and n=67 respectively). It makes me wonder if the effect for the second graders was non-significant only because the group was smaller and therefore the statistical power was reduced compared to the kindergarten group.

• Makin, T. R., & de Xivry, J. J. O. (2019). Science Forum: Ten common statistical mistakes to watch out for when writing or reviewing a manuscript. Elife, 8, e48175. DOI: 10.7554/eLife.48175

3. I also have a minor suggestion concerning how the ACC is discussed in the introduction. "To engage in monitoring, the responsible neural networks need time to loop signals from the anterior cingulate cortex (ACC) to frontal structures [25] and to trigger a feeling of uncertainty which may lead the child to overthink his confidence judgments." When talking about neural networks and signals, it would be better to keep the description simple and not make conjecture about what those neural signals mean in terms of what the subject is thinking or feeling.

4. The text has minor grammatical errors throughout. Nothing major, but I recommend proofreading again before re-submission.

Reviewer #2: Overall, I think the study provides some interesting data concerning the effect of delaying CR on monitoring accuracy. That being said, this is a pretty well-worn effect in the non-developmental metacognition literature and that should be better acknowledged in the paper.
Introduction

There's a pretty large literature on the accuracy of delayed metacognitive ratings compared to immediate – see the delayed judgment of learning effect (Nelson & Dunlosky, 1991) - which hasn't been addressed in the introduction and (at least from a non-developmental perspective) might limit the novelty of the findings

The other literature that seems relevant but isn't included is the idea of confidence ratings being informed by post-decision accumulation of evidence (e.g., Navajas et al., 2016) – so rather than improved accuracy as a result of inhibiting the CR by a short time, the accuracy might improve because the evidence in favour or against the decision has had longer to accumulate

Results

Could the authors include the performance results as a function of experimental group, it would be good to rule out whether the 'stop-and-think' procedure improved performance (which might inflate the monitoring accuracy if kids tend to be overconfident). It seems to indicate in the discussion that stop-and-think can impact performance so better monitoring accuracy due to an artefact of better performance needs to be ruled out

Unless I misunderstood both groups completed the inhibition task, so why not include both groups and look at the interaction between group X inhibition predicting accuracy.
It would, for example, tell you whether the intervention was more effective for children with low inhibition

Discussion

A few of the points above should also be incorporated into the discussion, especially when trying to outline possible mechanisms why more time is helpful for metacognition

In terms of the limitation you raise, as a future study, it might be interesting to think about the stop-and-think vs some other delay – so rather than explicitly cueing more metacognitive thought simply allowing more time to reflect in an unguided way

Minor

Line 75 – I would remove the tail end of the sentence "according to the often used instruction 'stop and think'", it's not necessarily a familiar manipulation to people not in this sub-field

Report the exact p-value on line 233, line 248, line 264 etc (i.e. don't write p < .05)

For the figures, need a description of the what the boxes indicate. I would add a dotted horizontal line at the zero mark

The figure captions indicate that the error bars are standard errors of the mean. Is this correct? – they seem extremely wide from my cursory glance if the SDs reported in paper are correct, the SE should be smaller than the SD, if I remember correctly)

Reviewer #3: This paper investigates the role of inhibition on metacognitive monitoring, taking two approaches. The authors manipulated whether participants received a delay before making a confidence judgment, finding that children showed greater monitoring accuracy after the "Stop and Think" manipulation. In addition, the authors found that individual differences in inhibitory control were related to metacognitive monitoring accuracy for younger children but not older children. I believe this study addresses a theoretically important issue - understanding the relation between metacognitive monitoring and executive function, as well as the development of this relation. Overall, the paper is clear and well-written.
Below I outline some questions and suggestions to enhance the clarity of the paper:

The authors use binary pronouns throughout the paper, with multiple references to "him/his" without any other gender option. Use of binary pronouns can exclude those who are non-binary or who hold other gender identities. I would encourage the authors to use gender neutral language (they/them/theirs) throughout the paper. The APA supports the use of the singular they in academic writing.

In the method section, the authors mention that the Hearts and Flowers task was used to measure inhibition and cognitive flexibility. However, no description of this task is offered. I would encourage the authors to include a detailed description of the task protocol.

In the results section (line 206-207), the authors mention that participants were excluded if overall accuracy was lower than .50. This is presumably because that would show a response bias in the task. I would encourage the authors to spell that out for the reader.

On a theoretical level, it is possible that imposing a greater delay encourages further processing in the base-level task, rather than greater deployment of metacognitive processes. Further processing in the base-level task may make metacognition easier, perhaps reducing the effort required to monitor the base-level task. In other words, the delay may contribute to a stronger signal from the base-level task, which would require less metacognitive effort to detect. I would encourage the authors to consider this possibility in their revisions.

There were a couple of sentences where the writing/word choice was unclear:

On lines 90-92, the sentence ends with, "... which may lead the child to overthink his confidence judgments." The use of "overthink" often has negative connotations, but I think the authors are trying to describe a situation in which typical monitoring is occurring.
On lines 341-342, the authors write, "A meta-analysis showed that comparing different tasks for measuring inhibitory control is specific to a given age range." This is unclear and could be easily clarified by including more detail about this finding.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
3 Jun 2022 Response to Reviewer #1 My comments appear in blue Reviewer #1: In this manuscript Wacker and Roebers investigated the discrepancy between children’s monitoring abilities in day-to-day life, which appear to be pretty good, and their monitoring abilities in research settings, which appear to be poor. They tested if giving children extra time before self-reporting their monitoring (i.e., confidence judgements) increased monitoring accuracy in a research setting and if these self-reports were related to inhibitory control. The manuscript was succinct and straightforward and made it clear that this study was motivated by previous research. Furthermore, the ideas/predictions were easy to follow. I only have a few clarifications or methodological/statistical related comments. Thank you for your helpful comments and for giving me the opportunity to revise the manuscript. I have been able to understand your inputs and adapted them accordingly. You’ll find my answers to your comments in blue. 1. One major issue is that the data appear to be all between subjects (based on the methods section) and yet some of the information in the results section makes it seem like some data was being treated as within subjects. I’m assuming this is a typo and that the correct test was run (degrees of freedom numbers in results section suggest a between subjects ANOVA), but it does need to be corrected in the text. • Methods section text: “Children were randomly assigned to either the control group (CG); they solved the task as described above or the experimental group (EG).” • Fig 1 and Fig 2 text: “Note. Discrimination Score, between factors Age (Kindergartners vs. Second Graders), within factors Condition (Control Group (CG) vs. Experimental Group (EG))” Related to this issue, please indicate what type of ANOVA you are running (between-subjects, within-subjects, or mixed) when you describe your ANOVA based analyses (currently the manuscript only says that it was a 2x2 design). 
Thank you for your comments. I have added some more details about the design and the type of the measures (ANOVA) and adjusted the descriptions of the figures (statistical analysis and result section). 2. The last sentence in the abstract, “The pattern of results revealed an interactive role between monitoring and inhibition only for children in kindergarten and not for second graders”, is not fully supported by the results. There was a correlation done between monitoring and inhibitory control for each age group and so you could say that there was a correlation between monitoring and inhibition for kindergarteners. However, unless you directly test an interaction between monitoring and inhibitory control and include both age groups in this test, you cannot say that there was an interaction between monitoring and inhibitory control only for one age group. • Nieuwenhuis, S., Forstmann, B. & Wagenmakers, EJ. Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci 14, 1105–1107 (2011). https://doi.org/10.1038/nn.2886 Moreover, the correlations between monitoring and inhibitory control were very close in their r values (0.225 for kindergarteners and 0.200 for second graders) despite them only being significant for kindergarteners (p<.05 vs p=.104). An article by Tamar and Jean-Jacques (2019) suggests that a lack of significance can sometimes be due to the sample being underpowered and researchers should be cautious in over interpreting insignificant results. In their article, they do say that a small effect size means an effect is unlikely to be theoretically meaningful. However, I bring up their article because the r values are so close in size between these two groups and I noticed that the kindergarteners are a larger sample than the second graders (n=88 and n=67 respectively). 
It makes me wonder if the effect for the second graders was non-significant only because the group was smaller and therefore the statistical power was reduced compared to the kindergarten group. • Makin, T. R., & de Xivry, J. J. O. (2019). Science Forum: Ten common statistical mistakes to watch out for when writing or reviewing a manuscript. Elife, 8, e48175. DOI: 10.7554/eLife.48175 Because our data collection was carried out during Covid-19, due to children in quarantine and the corresponding dropout, we could not capture the H&F task from everyone. Therefore, these considerations addressing unequal sample size and the interpretation of insignificant results are interesting, and I have included these reflections in the discussion (lines: 392 - 396). Thank you for this input. I have also adjusted the abstract section (Lines: 34-36). 3. I also have a minor suggestion concerning how the ACC is discussed in the introduction. “To engage in monitoring, the responsible neural networks need time to loop signals from the anterior cingulate cortex (ACC) to frontal structures [25] and to trigger a feeling of uncertainty which may lead the child to overthink his confidence judgments.” When talking about neural networks and signals, it would be better to keep the description simple and not make conjecture about what those neural signals mean in terms of what the subject is thinking or feeling. Good point, thank you. Mainly because we could not measure cognitive signals during our task, we must be careful in interpreting such potential neurological signals and their relation to other cognitive concepts. Nevertheless, ample evidence shows that frontal structures and the ACC may be seen as a neurological correlate of monitoring. 
We also never tried to describe this relation and neurological transmission as a fixed process for the feeling of uncertainty; instead, we used the words like "may" to underline this possible neurological transmission under consideration of the neurological studies. However, I agree that the application and the description should be discussed more carefully, and this association may be expressed simpler. I have adjusted the corresponding part (Lines: 91-96). 4. The text has minor grammatical errors throughout. Nothing major, but I recommend proofreading again before re-submission. Thank you for your advice. The re-submission has now been proofread again. Response to Reviewer #2 My comments appear in blue Reviewer #2: Overall, I think the study provides some interesting data concerning the effect of delaying CR on monitoring accuracy. That being said, this is a pretty well-worn effect in the non-developmental metacognition literature and that should be better acknowledged in the paper. Thank you for your comments and the possibility to review this paper. Introduction There’s a pretty large literature on the accuracy of delayed metacognitive ratings compared to immediate – see the delayed judgment of learning effect (Nelson & Dunlosky, 1991) - which hasn’t been addressed in the introduction and (at least from a non-developmental perspective) might limit the novelty of the findings Even though Nelsons and Dunlosky's study differs from our design, their results support indirect evidence that time may play a crucial role in monitoring accuracy. I have included their results in the introduction section. Thank you for this comment. (Lines: 103 - 106). 
The other literature that seems relevant but isn't included is the idea of confidence ratings being informed by post-decision accumulation of evidence (e.g., Navajas et al., 2016) – so rather than improved accuracy as a result of inhibiting the CR by a short time, the accuracy might improve because the evidence in favour or against the decision has had longer to accumulate

The study from Navajas et al. (2016) revealed exciting insights into the interval between giving an answer and rating one's confidence in that answer. I would argue that there may also be an interactive role: while inhibition is needed to take more time to allow further processing and accumulation, monitoring processes may supervise these procedures. Monitoring processes have to be active to detect and make use of the information added during this interval; without active monitoring, this potential for further processing may not even be detected. I have integrated their findings in the introduction and the discussion section. (Lines: 114 – 121, 352 – 359)

Results

Could the authors include the performance results as a function of experimental group, it would be good to rule out whether the 'stop-and-think' procedure improved performance (which might inflate the monitoring accuracy if kids tend to be overconfident). It seems to indicate in the discussion that stop-and-think can impact performance so better monitoring accuracy due to an artefact of better performance needs to be ruled out

Thank you for this comment. In the result section, I added the corresponding analysis addressing performance accuracy as a post hoc test. (Lines: 252 – 259).

Unless I misunderstood both groups completed the inhibition task, so why not include both groups and look at the interaction between group X inhibition predicting accuracy. It would, for example, tell you whether the intervention was more effective for children with low inhibition

Both the EG and the CG have completed the H&F task.
In our comparison, however, we intentionally related only the CG's inhibition scores to the MC task because we were interested in exploring whether individual differences in inhibition are related to monitoring accuracy independently of our manipulation. However, I agree that the inclusion of the EG may reveal interesting additional information. This allows us to address whether inserting the delay between recognition and monitoring has a differential effect on participants depending on their inhibitory control skills. I included the analysis in the results section (Lines: 259 – 269, 319 – 325). Discussion A few of the points above should also be incorporated into the discussion, especially when trying to outline possible mechanisms why more time is helpful for metacognition Information added. In terms of the limitation you raise, as a future study, it might be interesting to think about the stop-and-think vs some other delay – so rather than explicitly cueing more metacognitive thought simply allowing more time to reflect in an unguided way Information added. (Lines: 434 – 436) Minor Line 75 – I would remove the tail end of the sentence “according to the often used instruction ‘stop and think’”, it’s not necessarily a familiar manipulation to people not in this sub-field I have adjusted the corresponding part. Report the exact p-value on line 233, line 248, line 264 etc (i.e. don’t write p < .05) The exact p values have been added. For the figures, need a description of what the boxes indicate. I would add a dotted horizontal line at the zero mark I have added the information. The figure captions indicate that the error bars are standard errors of the mean. Is this correct? – they seem extremely wide from my cursory glance (if the SDs reported in the paper are correct, the SE should be smaller than the SD, if I remember correctly). Thank you for bringing this to my attention. I have adjusted the description of the legend. 
The visualization displays a boxplot with corresponding whiskers. The whiskers indicate the minimum/maximum values that fall within 1.5× the interquartile range of the quartiles for the dependent variables. Data points outside this range are defined as outliers. Response to Reviewer #3 My comments appear in blue Reviewer #3: This paper investigates the role of inhibition on metacognitive monitoring, taking two approaches. The authors manipulated whether participants received a delay before making a confidence judgment, finding that children showed greater monitoring accuracy after the “Stop and Think” manipulation. In addition, the authors found that individual differences in inhibitory control were related to metacognitive monitoring accuracy for younger children but not older children. I believe this study addresses a theoretically important issue - understanding the relation between metacognitive monitoring and executive function, as well as the development of this relation. Thank you for your positive feedback addressing the importance of our study. Overall, the paper is clear and well-written. Below I outline some questions and suggestions to enhance the clarity of the paper: The authors use binary pronouns throughout the paper, with multiple references to “him/his” without any other gender option. Use of binary pronouns can exclude those who are non-binary or who hold other gender identities. I would encourage the authors to use gender neutral language (they/them/theirs) throughout the paper. The APA supports the use of the singular they in academic writing. Thank you for bringing this to my attention. I have changed the pronouns to gender-neutral language. In the method section, the authors mention that the Hearts and Flowers task was used to measure inhibition and cognitive flexibility. However, no description of this task is offered. I would encourage the authors to include a detailed description of the task protocol. 
Because our primary interest lies in the measurement of monitoring, and we created a new task capturing monitoring skills, we wanted to ensure that the procedure of this novel task was described in detail. The Hearts and Flowers task has already been applied several times in other studies; because of its frequent use in executive function research, we kept its description in our method section brief. For replication purposes, we cited the primary literature to provide the connection to the studies using the Hearts and Flowers task. However, I agree that it may confuse the reader why one task is deeply explained and the other is not. I therefore inserted a short description of the Hearts and Flowers task, especially with additional information about technical and methodological details. (Lines: 229 – 235). In the results section (line 206-207), the authors mention that participants were excluded if overall accuracy was lower than .50. This is presumably because that would show a response bias in the task. I would encourage the authors to spell that out for the reader. Information added. On a theoretical level, it is possible that imposing a greater delay encourages further processing in the base-level task, rather than greater deployment of metacognitive processes. Further processing in the base-level task may make metacognition easier, perhaps reducing the effort required to monitor the base-level task. In other words, the delay may contribute to a stronger signal from the base-level task, which would require less metacognitive effort to detect. I would encourage the authors to consider this possibility in their revisions. Thank you for this interesting comment. I have integrated your input in the introduction and discussion section. (Lines: 114-121, 352-359). There were a couple of sentences where the writing/word choice was unclear: On lines 90-92, the sentence ends with, “... 
which may lead the child to overthink his confidence judgments.” The use of “overthink” often has negative connotations, but I think the authors are trying to describe a situation in which typical monitoring is occurring. Good point! We did not intend "overthink" to carry a negative connotation. We wanted to describe how this "stop and think" condition may trigger a feeling of uncertainty in which children may hesitate and "overthink" or "reassess" their given confidence judgment. The term "overthink" describes monitoring processes in which children evaluate their answers and give them a second thought. On lines 341-342, the authors write, “A meta-analysis showed that comparing different tasks for measuring inhibitory control is specific to a given age range.” This is unclear and could be easily clarified by including more detail about this finding. I added some information for clarification. Submitted filename: Response_to_Reviewers.docx 15 Jul 2022
PONE-D-22-05420R1
Stop and think: Additional time supports monitoring processes in young children
PLOS ONE Dear Dr. Wacker, Thank you for submitting your manuscript to PLOS ONE. All the reviewers and I agree you have improved the manuscript from the first version. Two of three think the manuscript is ready to be accepted, but one reviewer has outlined how to further clarify your arguments. Therefore, we invite you to submit a revised version of the manuscript that addresses these points (see below). Please submit your revised manuscript by Aug 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Micah B. Goldwater, Ph.D Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: All comments have been addressed Reviewer #3: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. 
If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: All of my previous comments/suggestions have been addressed. Thank you to the authors for their edits and replies to my comments/suggestions. I believe the manuscript should be accepted for publication. Reviewer #2: The authors have addressed all of the comments from my earlier review. The paper should make an interesting contribution to the literature Reviewer #3: I think that the authors responded to all of the reviewers' comments from the original submission. The research is sound, but the manuscript still has some issues in terms of clarity and organization. Below, I outline some suggestions for improvement: In lines 52-56, the authors introduce the process of monitoring, and equate it with confidence and performance estimation. The authors should be clear that these are just 2 examples, and that there are other forms of monitoring (e.g., judgments of learning, feeling of knowing, difficulty estimations, etc.). In the paragraph that spans lines 91-121, the argument the authors are trying to make is unclear. 
They discuss neurological underpinnings of monitoring, indirect evidence of this, the role of increased time on performance and accuracy, and metacognitive processing post-decision. It seems that the paragraph should be organized to make a more direct argument, or it may be split into several paragraphs to make several different arguments In lines 99-101, the authors suggest that part of the monitoring process involves asking "Am I really sure about my answer?" Previous work has investigated both explicit and implicit forms of metacognitive monitoring, so the authors should clarify their viewpoint on the role of conscious processing in metacognitive monitoring in young children In addition, in the discussion of previous research on what happens during the "time to loop signals from the ACC to frontal structures", I suggest the authors clarify what is happening functionally during this process, and what the average processing time is, as these seem relevant to the manipulation in the current study The authors write, "The engagement with uncertainty may trigger cognitive processing." This is unclear, did the authors potentially mean "may trigger metacognitive processing"? Later in the paper, the authors write, "Inhibiting the prepotent response allowing neurological signals to spread out…" What is meant by "spread out"? I noticed that there were no citations next to this claim - what is the previous research that suggests this? In line 124, the authors write, "stop and think! About the answer profoundly". As they mention later in the manuscript, it is not guaranteed that participants thought about the answer profoundly, so I would encourage the authors to reconsider the wording here. 
In line 127, the authors should clarify whether they believe the manipulation will positively affect children's monitoring skills (implying the acquisition of new strategies and potentially stable improvement) or their monitoring performance (implying temporary, in the moment improvements). In line 134, the authors write, "We did not expect any effect on recognition". At this point in the manuscript the recognition task has not been described. I would suggest a slightly more detailed description of the base-level task in this paragraph. In lines 177-178, the authors write "Additionally, during one test session, N = 20 children had to quarantine due to COVID-19." I would encourage the authors to explain why these children needed to be excluded. Was it due to incomplete data? Which portions of the experiment did they complete or fail to complete? One interesting finding was that there was a correlation between monitoring and inhibition in kindergarteners only in the control condition. I suggest that the authors provide a potential explanation for this finding in the Discussion section. In lines 336-338, the authors write, "In absolute terms, children were able to discriminate substantially between correct and incorrect responses". Was there an analysis described in the Results section that directly tested this? Were the discrimination scores significantly different than 0? In the Implications paragraph, I think the authors could include a more detailed and compelling description of the real-world implications of "stopping to think." Where might this come into play for both children and adults? In addition, the sentences "Also, from a neurological perspective, we would expect that neural signals generated from the ACC transferred to frontal regions need time for transmission. In other words, the metacognitive neurological signal then has time to spread out and influence monitoring processes." 
do not seem to be an implication of the current study, but speculation about the neurological processes involved. While appropriate for the discussion section, I do not think this belongs in the implications section. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Kit Spenser Double Reviewer #3: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
10 Aug 2022 Thank you for the input on my paper, which helped clarify and improve the quality of the present study. My comments appear in blue. Response to Reviewer #1 and #2 Reviewer #1: All of my previous comments/suggestions have been addressed. Thank you to the authors for their edits and replies to my comments/suggestions. I believe the manuscript should be accepted for publication. Reviewer #2: The authors have addressed all of the comments from my earlier review. The paper should make an interesting contribution to the literature. Thank you for your positive feedback! Response to Reviewer #3 Reviewer #3: I think that the authors responded to all of the reviewers' comments from the original submission. The research is sound, but the manuscript still has some issues in terms of clarity and organization. Below, I outline some suggestions for improvement: Thank you for acknowledging my improvements and for your further recommendations to improve the quality of my paper. In lines 52-56, the authors introduce the process of monitoring, and equate it with confidence and performance estimation. The authors should be clear that these are just 2 examples, and that there are other forms of monitoring (e.g., judgments of learning, feeling of knowing, difficulty estimations, etc.). I have included a paragraph to draw the reader's attention to the fact that there are also some other measures of monitoring processes (lines: 53-57). In the paragraph that spans lines 91-121, the argument the authors are trying to make is unclear. They discuss neurological underpinnings of monitoring, indirect evidence of this, the role of increased time on performance and accuracy, and metacognitive processing post-decision. It seems that the paragraph should be organized to make a more direct argument, or it may be split into several paragraphs to make several different arguments Thank you for your comment. 
I organized and split the part into several paragraphs to enhance clarity (lines: 95 – 128). In lines 99-101, the authors suggest that part of the monitoring process involves asking "Am I really sure about my answer?" Previous work has investigated both explicit and implicit forms of metacognitive monitoring, so the authors should clarify their viewpoint on the role of conscious processing in metacognitive monitoring in young children I have added some more information (line: 106 - 109). In addition, in the discussion of previous research on what happens during the "time to loop signals from the ACC to frontal structures", I suggest the authors clarify what is happening functionally during this process, and what the average processing time is, as these seem relevant to the manipulation in the current study The procedure and measure section explains how long we set our delay (lines: 217 - 223). Because this is a novel approach and, to our knowledge, no studies addressing this issue exist so far, we do not have any additional references. Our delay builds on the idea of neurological diffusion (about 200-250 ms) as well as on studies implementing a delay in other cognitive tasks. We argue that, firstly, additional time for the diffusion of neurological signals and, secondly, time for monitoring processes result in a reasonable delay. Further research is needed to identify the best processing time range. Nevertheless, our study contributes a first insight regarding a delay for monitoring and may be a guiding reference point for further studies. The authors write, "The engagement with uncertainty may trigger cognitive processing." This is unclear, did the authors potentially mean "may trigger metacognitive processing"? I have added some information for clarification (lines: 108 – 109). Later in the paper, the authors write, "Inhibiting the prepotent response allowing neurological signals to spread out…" What is meant by "spread out"? 
I noticed that there were no citations next to this claim - what is the previous research that suggests this? By using the term "spread out" we are addressing the diffusion of neurological signals. These neurological signals emerge at a point x and subsequently diffuse or, as we call it, "spread out". I have changed the wording and I have also added citations (lines: 126 – 128). In line 124, the authors write, "stop and think! About the answer profoundly". As they mention later in the manuscript, it is not guaranteed that participants thought about the answer profoundly, so I would encourage the authors to reconsider the wording here. Thank you for your comment. I have adapted the corresponding part (line: 131). In line 127, the authors should clarify whether they believe the manipulation will positively affect children's monitoring skills (implying the acquisition of new strategies and potentially stable improvement) or their monitoring performance (implying temporary, in the moment improvements). I added that we are only expecting temporary and, therefore, in-the-moment improvements (line: 138). However, further research is needed to evaluate whether more practice with a "Stop and Think" condition would also lead to the acquisition of "Stop and Think" as a metacognitive strategy that is transferable to other domains. In line 134, the authors write, "We did not expect any effect on recognition". At this point in the manuscript the recognition task has not been described. I would suggest a slightly more detailed description of the base-level task in this paragraph. I have added some information regarding the task (lines: 131 – 136). In lines 177-178, the authors write "Additionally, during one test session, N = 20 children had to quarantine due to COVID-19." I would encourage the authors to explain why these children needed to be excluded. Was it due to incomplete data? Which portions of the experiment did they complete or fail to complete? 
These children did not solve the paired associate learning task. Due to the restrictions in the school setting, we were not allowed to retest them. I have added some information (lines: 189 – 191). One interesting finding was that there was a correlation between monitoring and inhibition in kindergarteners only in the control condition. I suggest that the authors provide a potential explanation for this finding in the Discussion section. These results are discussed in the discussion section (lines: 398 - 430). In lines 336-338, the authors write, "In absolute terms, children were able to discriminate substantially between correct and incorrect responses". Was there an analysis described in the Results section that directly tested this? Were the discrimination scores significantly different than 0? Yes, this marked sentence is one of our main findings regarding monitoring. The corresponding analysis is in the results section: an ANOVA with the dependent variables discrimination score (the discrimination of confidence judgments between correctly and incorrectly solved items) and bias index (under- and overestimation). In the Implications paragraph, I think the authors could include a more detailed and compelling description of the real-world implications of "stopping to think." Where might this come into play for both children and adults? In addition, the sentences "Also, from a neurological perspective, we would expect that neural signals generated from the ACC transferred to frontal regions need time for transmission. In other words, the metacognitive neurological signal then has time to spread out and influence monitoring processes." do not seem to be an implication of the current study, but speculation about the neurological processes involved. While appropriate for the discussion section, I do not think this belongs in the implications section. I have added some information (lines: 438 - 441). 
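To make the two monitoring indices named in the response above concrete, the sketch below computes a discrimination score and a bias index from confidence ratings. The 0–1 rating scale and all values are illustrative assumptions, not the study's data or response format.

```python
def discrimination_score(confidence, correct):
    """Mean confidence on correctly solved items minus mean confidence
    on incorrectly solved items; values above 0 indicate that children
    discriminate between correct and incorrect responses."""
    cj_right = [c for c, ok in zip(confidence, correct) if ok]
    cj_wrong = [c for c, ok in zip(confidence, correct) if not ok]
    return sum(cj_right) / len(cj_right) - sum(cj_wrong) / len(cj_wrong)

def bias_index(confidence, correct):
    """Mean confidence minus mean accuracy (both on a 0-1 scale):
    positive values indicate overestimation, negative underestimation."""
    return sum(confidence) / len(confidence) - sum(correct) / len(correct)

# Hypothetical confidence ratings (0-1) for six recognition items.
conf = [0.9, 0.8, 0.9, 0.4, 0.5, 0.3]
acc = [1, 1, 1, 0, 0, 0]

print(discrimination_score(conf, acc))  # > 0: confidence tracks accuracy
print(bias_index(conf, acc))            # > 0: slight overconfidence
```

Testing whether such discrimination scores differ significantly from zero (e.g., with a one-sample t-test across children) would directly address the reviewer's question.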
Submitted filename: Response to Reviewers.docx 30 Aug 2022 Stop and think: Additional time supports monitoring processes in young children PONE-D-22-05420R2 Dear Dr. Wacker, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Micah B. Goldwater, Ph.D Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. 
Peer review history

If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: A big thank you to the authors for addressing my comments. I believe the manuscript should be accepted for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

**********

5 Sep 2022

PONE-D-22-05420R2
Stop and think: Additional time supports monitoring processes in young children

Dear Dr. Wacker:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Micah B. Goldwater
Academic Editor
PLOS ONE
References (46 in total)

1.  Availability of related long-term memory during and after attention focus in working memory.

Authors:  Dan J Woltz; Christopher A Was
Journal:  Mem Cognit       Date:  2006-04

2.  I don't want to pick! Introspection on uncertainty supports early strategic behavior.

Authors:  Kristen E Lyons; Simona Ghetti
Journal:  Child Dev       Date:  2012-12-22

3.  N2 amplitude as a neural marker of executive function in young children: an ERP study of children who switch versus perseverate on the Dimensional Change Card Sort.

Authors:  Stacey D Espinet; Jacob E Anderson; Philip David Zelazo
Journal:  Dev Cogn Neurosci       Date:  2011-12-13

4.  The credibility of children's testimony: can children control the accuracy of their memory reports?

Authors:  A Koriat; M Goldsmith; W Schneider; M Nakash-Dura
Journal:  J Exp Child Psychol       Date:  2001-08

5.  Introspection on uncertainty and judicious help-seeking during the preschool years.

Authors:  Christine Coughlin; Emily Hembacher; Kristen E Lyons; Simona Ghetti
Journal:  Dev Sci       Date:  2014-12-07

6.  Metacognition in Later Adulthood: Spared Monitoring Can Benefit Older Adults' Self-regulation.

Authors:  Christopher Hertzog; John Dunlosky
Journal:  Curr Dir Psychol Sci       Date:  2011-06

7.  The development of inhibitory control: an averaged and single-trial Lateralized Readiness Potential study.

Authors:  Donna Bryce; Dénes Szũcs; Fruzsina Soltész; David Whitebread
Journal:  Neuroimage       Date:  2010-12-10

8.  Developmental Improvements and Persisting Difficulties in Children's Metacognitive Monitoring and Control Skills: Cross-Sectional and Longitudinal Perspectives.

Authors:  Natalie S Bayard; Mariëtte H van Loon; Martina Steiner; Claudia M Roebers
Journal:  Child Dev       Date:  2021-02-02

9.  The neural basis of metacognitive ability. (Review)

Authors:  Stephen M Fleming; Raymond J Dolan
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2012-05-19

10.  Relating introspective accuracy to individual differences in brain structure.

Authors:  Stephen M Fleming; Rimona S Weil; Zoltan Nagy; Raymond J Dolan; Geraint Rees
Journal:  Science       Date:  2010-09-17

