| Literature DB >> 25657629 |
Abstract
In recent years, cognitive scientists and commercial interests (e.g., Fit Brains, Lumosity) have focused research attention and financial resources on cognitive tasks, especially working memory tasks, to explore and exploit possible transfer effects to general cognitive abilities, such as fluid intelligence. The increased research attention has produced mixed findings, as well as contention about the disposition of the evidence base. To address this contention, Au et al. (2014) recently conducted a meta-analysis of extant controlled experimental studies of n-back task training transfer effects on measures of fluid intelligence in healthy adults; the results showed a small training transfer effect. Using several approaches, the current review evaluated and re-analyzed the meta-analytic data for the presence of two different forms of small-study effects: (1) publication bias in the presence of low power, and (2) low power in the absence of publication bias. The results of these approaches showed no evidence of selection bias in the working memory training literature, but did show evidence of small-study effects related to low power in the absence of publication bias. While the effect size estimate identified by Au et al. (2014) provided the most precise estimate to date, it should be interpreted in the context of a uniformly low-powered base of evidence. The present work concludes with a brief set of considerations for assessing the adequacy of a body of research findings for the application of meta-analytic techniques.
Keywords: cognitive training intervention; fluid intelligence; meta-analysis; small-study effects; statistical power; transfer effects
Year: 2015 PMID: 25657629 PMCID: PMC4302828 DOI: 10.3389/fpsyg.2014.01589
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Results from fixed effects, random effects, PET, and PEESE models.
| Fixed effects | Random effects | Q / I² | WLS PET intercept | WLS PET slope | WLS PEESE intercept | WLS PEESE slope |
|---|---|---|---|---|---|---|
| 0.24 (0.11, 0.37) | 0.24 (0.11, 0.38) | 24.69 / 6.8% | 0.11 (-0.56, 0.78) | 0.42 (-1.64, 2.48) | 0.17 (-0.16, 0.49) | 0.67 (-2.08, 3.42) |
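The PET and PEESE columns in the table come from weighted least-squares meta-regressions: PET regresses each study's effect size on its standard error, PEESE on its sampling variance, with inverse-variance weights in both cases; the intercept serves as the small-study-adjusted effect estimate. A minimal sketch of that computation, using NumPy and hypothetical effect sizes and standard errors (the function names and data are illustrative, not from the review):

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least-squares fit of y = b0 + b1 * x with weights w.
    Returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]

def pet_peese(effects, ses):
    """PET regresses effect size on its standard error; PEESE on the
    sampling variance (SE^2). Both use inverse-variance weights.
    The intercept is the adjusted effect estimate."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2
    pet = wls_fit(ses, effects, w)        # (intercept, slope)
    peese = wls_fit(ses**2, effects, w)   # (intercept, slope)
    return pet, peese

# Hypothetical data: observed effects g and their standard errors
g = [0.35, 0.10, 0.45, 0.20, 0.05, 0.30]
se = [0.20, 0.10, 0.25, 0.15, 0.08, 0.18]
(pet_b0, pet_b1), (peese_b0, peese_b1) = pet_peese(g, se)
print(f"PET:   intercept {pet_b0:.2f}, slope {pet_b1:.2f}")
print(f"PEESE: intercept {peese_b0:.2f}, slope {peese_b1:.2f}")
```

A positive PET/PEESE slope with an intercept near zero is the classic small-study-effects signature; in the table above, both intercepts remain positive but with wide confidence intervals, consistent with a low-powered evidence base.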
Recommendations for supplementary meta-analytic planning metrics.
| Additional tasks for evaluating literature suitability for meta-analytic treatment |
|---|
| ✓ Calculate |
| ✓ Conduct an |
| ✓ Assess the prevalence of direct replication attempts in the literature. |
| ✓ Assess the prevalence of null findings in the published literature. |
| ✓ Examine and quantify design and measurement quality and commensurability across studies (via expert coding rubric). |
| ✓ Survey the prevalence of study and/or data registration of the primary studies, especially for randomized trials and interventions. |