
Putting brain training to the test.

Adrian M Owen, Adam Hampshire, Jessica A Grahn, Robert Stenton, Said Dajani, Alistair S Burns, Robert J Howard, Clive G Ballard.

Abstract

'Brain training', or the goal of improved cognitive function through the regular use of computerized tests, is a multimillion-pound industry, yet in our view scientific evidence to support its efficacy is lacking. Modest effects have been reported in some studies of older individuals and preschool children, and video-game players outperform non-players on some tests of visual attention. However, the widely held belief that commercially available computerized brain-training programs improve general cognitive function in the wider population in our opinion lacks empirical support. The central question is not whether performance on cognitive tests can be improved by training, but rather, whether those benefits transfer to other untrained tasks or lead to any general improvement in the level of cognitive functioning. Here we report the results of a six-week online study in which 11,430 participants trained several times each week on cognitive tasks designed to improve reasoning, memory, planning, visuospatial skills and attention. Although improvements were observed in every one of the cognitive tasks that were trained, no evidence was found for transfer effects to untrained tasks, even when those tasks were cognitively closely related.


Year:  2010        PMID: 20407435      PMCID: PMC2884087          DOI: 10.1038/nature09042

Source DB:  PubMed          Journal:  Nature        ISSN: 0028-0836            Impact factor:   49.962


To investigate whether regular brain training leads to any improvement in cognitive function, viewers of the BBC popular science programme ‘Bang Goes The Theory’ participated in a six-week online study of brain training. An initial ‘benchmarking’ assessment included a broad neuropsychological battery of four tests that are sensitive to changes in cognitive function in health and disease6-12. Specifically, baseline measures of reasoning6, verbal short-term memory (VSTM)7,12, spatial working memory (SWM)8-10, and paired-associates learning (PAL)11,13, were acquired. Participants were then randomly assigned to one of two experimental groups or a third control group and logged on to the BBC website to practise six training tasks for a minimum of 10 minutes a day, three times a week. In Experimental group 1, the six training tasks emphasised reasoning, planning and problem-solving abilities. In Experimental group 2, a broader range of cognitive functions was trained using tests of short-term memory, attention, visuospatial processing and mathematics similar to those commonly found in commercially available brain training devices. The difficulty of the training tasks increased as the participants improved to continuously challenge their cognitive performance and maximise any benefits of training. The control group did not formally practise any specific cognitive tasks during their ‘training’ sessions, but answered obscure questions from six different categories using any available online resource. At six weeks, the benchmarking assessment was repeated and the pre- and post-training scores were compared. The difference in benchmarking scores provided the measure of generalised cognitive improvement resulting from training. Similarly, for each training task, the first and last scores were compared to give a measure of specific improvement on that task. 
Of 52,617 participants aged 18 to 60 who initially registered, 11,430 completed both benchmarking assessments and at least two full training sessions during the six-week period. On average, participants completed 24.47 (sd = 16.95) training sessions (range = 1 to 188 sessions). The three groups were well matched in age (39.14 [11.91], 39.65 [11.83] and 40.51 [11.79] years, respectively) and gender (F/M = 5.5:1, 5.6:1 and 4.3:1, respectively). Numerically, Experimental group 1 improved on four benchmarking tests and Experimental group 2 improved on three benchmarking tests (Figure 1), with standardised effect sizes varying from small (e.g. 0.35; 99% confidence interval [CI], 0.29 to 0.41) to very small (e.g. 0.01; 99% CI, −0.05 to 0.07). However, the control group also improved numerically on all four tests with similar effect sizes (Table 1). When the three groups were compared directly, effect sizes across all four benchmarking tests were very small (from 0.01; 99% CI, −0.05 to 0.07 up to 0.22; 99% CI, 0.15 to 0.28) (Table 2). In fact, for VSTM and PAL, the difference between benchmarking sessions was numerically greatest for the control group (Figure 1, Table 1 and Table 2). These results suggest an equivalent and marginal test-retest practice effect in all groups across all four tasks (Table 1). In contrast, the improvement on the tests that were actually trained was convincing across all tasks for both experimental groups. For example, for the tasks practised by Experimental group 1, differences were observed with large effect sizes of between 0.73 (99% CI, 0.68 to 0.79) and 1.63 (99% CI, 1.57 to 1.70) (Table 3 and Figure 2). Using Cohen's14 convention that 0.2 represents a small effect, 0.5 a medium effect and 0.8 a large effect, even the smallest of these improvements would be considered large. Similarly, for Experimental group 2, large improvements were observed on all training tasks, with effect sizes of between 0.72 (99% CI, 0.67 to 0.78) and 0.97 (99% CI, 0.91 to 1.03) (Table 3 and Figure 2).
Numerically, the control group also improved in their ability to answer obscure knowledge questions, although the effect size was small (0.33; 99% CI, 0.26 to 0.40) (Table 3 and Figure 2). In all three groups, whether these improvements reflected the simple effects of task repetition (i.e. practice), the adoption of new task strategies, or a combination of the two is unclear, but whatever process effected the change, it did not generalise to the untrained benchmarking tests.
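The standardised effect sizes quoted throughout are Cohen's d values (a pre/post mean difference divided by a standard deviation). The following is a minimal sketch with made-up illustrative numbers, not the study's raw data; the standard error used for the CI is a standard large-sample approximation and is an assumption, since the paper does not state its CI method:

```python
import math

def cohens_d_with_ci(mean_diff, sd, n, z=2.576):
    """Standardised effect size (Cohen's d) for a pre/post change,
    with an approximate confidence interval (z = 2.576 for 99%).
    SE of d uses the common large-sample approximation."""
    d = mean_diff / sd
    se = math.sqrt(1.0 / n + d ** 2 / (2.0 * n))
    return d, (d - z * se, d + z * se)

def cohen_label(d):
    """Cohen's conventional benchmarks: 0.2 small, 0.5 medium, 0.8 large."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"

# Illustrative inputs only: a mean pre/post difference of 1.73 points
# with standard deviation 5.0 in a hypothetical group of n = 3000.
d, ci = cohens_d_with_ci(1.73, 5.0, 3000)
```

With these invented inputs d is roughly 0.35, which Cohen's convention labels "small", the same order as the benchmarking (transfer) effects in Table 1, and well below the 0.7 to 1.6 range seen for the trained tasks themselves.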
Figure 1

Benchmarking scores at baseline and after six weeks of training across the three groups of participants. VSTM = Verbal short-term memory, SWM = Spatial working memory, PAL = Paired-associates learning. Bars represent standard deviations.

Table 1

Within-test standardised effect sizes for changes in performance between pre-training and post-training benchmarking sessions

                     Exp group 1      Exp group 2       Control group
Reasoning
  Mean difference    1.73             1.97              0.90
  Effect size        0.31             0.35              0.16
  99% CI             (0.26 to 0.36)   (0.29 to 0.41)    (0.09 to 0.23)
VSTM
  Mean difference    0.15             0.03              0.22
  Effect size        0.16             0.03              0.21
  99% CI             (0.11 to 0.21)   (−0.02 to 0.09)   (0.14 to 0.28)
SWM
  Mean difference    0.33             0.35              0.27
  Effect size        0.24             0.27              0.19
  99% CI             (0.19 to 0.29)   (0.21 to 0.33)    (0.12 to 0.26)
PAL
  Mean difference    0.06             −0.01             0.07
  Effect size        0.10             0.01              0.11
  99% CI             (0.05 to 0.16)   (−0.05 to 0.07)   (0.04 to 0.18)

VSTM = Verbal short-term memory, SWM = Spatial working memory, PAL = Paired-associates learning. CI = confidence interval.

Table 2

Between-group standardised effect sizes for differences in performance between pre-training and post-training benchmarking sessions

                     Exp group 1 vs    Exp group 1 vs     Exp group 2 vs
                     Exp group 2       Control group      Control group
Reasoning
  Mean difference    −0.231            0.831              1.062
  Effect size        0.05              0.17               0.22
  99% CI             (−0.01 to 0.10)   (0.10 to 0.23)     (0.15 to 0.28)
VSTM
  Mean difference    0.130             −0.056             −0.186
  Effect size        0.13              0.05               0.18
  99% CI             (0.07 to 0.18)    (−0.01 to 0.12)    (0.11 to 0.24)
SWM
  Mean difference    −0.028            0.057              0.085
  Effect size        0.02              0.04               0.06
  99% CI             (−0.04 to 0.07)   (−0.03 to 0.10)    (−0.01 to 0.12)
PAL
  Mean difference    0.117             −0.012             −0.129
  Effect size        0.10              0.01               0.11
  99% CI             (0.04 to 0.15)    (−0.05 to 0.07)    (0.04 to 0.17)

VSTM = Verbal short-term memory, SWM = Spatial working memory, PAL = Paired-associates learning. CI = confidence interval.

Table 3

Within-test standardised effect sizes for differences in performance between the first and the last training or control sessions

Test             Mean difference   Effect size   99% CI
Experimental group 1
  Reasoning 1    33.96             1.63          (1.57 to 1.70)
  Reasoning 2    13.45             1.03          (0.98 to 1.09)
  Reasoning 3    11.45             1.25          (1.19 to 1.31)
  Planning 1     15.17             1.28          (1.23 to 1.34)
  Planning 2     14.42             1.10          (1.05 to 1.16)
  Planning 3     10.41             0.73          (0.68 to 0.79)
Experimental group 2
  Maths          18.15             0.90          (0.84 to 0.96)
  Visuospatial    8.62             0.95          (0.89 to 1.02)
  Attention 1     9.71             0.93          (0.87 to 0.99)
  Attention 2     8.48             0.84          (0.78 to 0.90)
  Memory 1        7.29             0.72          (0.67 to 0.78)
  Memory 2        5.30             0.97          (0.91 to 1.03)
Control group
  Questions       3.62             0.33          (0.26 to 0.40)

For description of tests, see Methods.

Figure 2

First and last training scores for the six tests used to train Experimental group 1 and Experimental group 2. The first and last scores for the control group are also shown. Bars represent standard deviations.

The relationship between the number of training sessions and changes in benchmark performance was negligible in all groups for all tests (largest Spearman's rho = 0.059; Supplementary Figure 1). The effect of age was also negligible (largest Spearman's rho = −0.073). Only two tests showed a significant effect of gender (PAL in Experimental group 1 and VSTM in Experimental group 2), but the effect sizes were very small (0.09; 99% CI, −0.01 to 0.20 and 0.09; 99% CI, −0.03 to 0.20, respectively). These results provide no evidence for any generalised improvements in cognitive function following 'brain training' in a large sample of healthy adults. This was true both for the 'general cognitive training' group (Experimental group 2), who practised tests of memory, attention, visuospatial processing and mathematics similar to many of those found in commercial brain trainers, and for a more focused training group (Experimental group 1), who practised tests of reasoning, planning and problem solving. Indeed, both groups provided evidence that training-related improvements may not even generalise to other tasks that tap similar cognitive functions. For example, three of the tests practised by Experimental group 1 (Reasoning 1, 2 and 3) specifically emphasised abstract reasoning abilities, yet numerically larger changes on the benchmarking test that also required abstract reasoning were observed in Experimental group 2, who were not trained on any test that specifically emphasised reasoning. Similarly, of all the trained tasks, Memory 2 (based on the classic parlour game in which players have to remember the locations of objects on cards) is most closely related to the PAL benchmarking task (in which participants also have to remember the locations of objects), yet numerically, PAL performance actually deteriorated in the experimental group that trained on the Memory 2 task (Figure 1).
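The statistic used above, Spearman's rho, is simply the Pearson correlation computed on ranks. The paper does not publish its analysis code, so the following is a from-scratch sketch of the statistic itself, with tied values given their average rank:

```python
import math

def avg_ranks(xs):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with xs[order[i]]
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = avg_ranks(x), avg_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(var_x * var_y)
```

Applied to (sessions completed, benchmark change) pairs, a rho near 0.059, as reported above, indicates essentially no monotonic relationship between training dose and transfer.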
Could it be that no generalised effects of brain training were observed because the wrong types of cognitive tasks were used? This is unlikely because twelve different tests, covering a broad range of cognitive functions, were trained in this study. In addition, the six training tasks that emphasised abstract reasoning, planning and problem solving were included specifically because such tasks are known to correlate highly with measures of general fluid intelligence or 'g'15-17, and were therefore most likely to produce an improvement in the general level of cognitive functioning. Indeed, functional neuroimaging studies have revealed clear overlap in frontal and parietal regions between tests of reasoning and planning similar to those used here15,17-19 and tests that are specifically designed to measure 'g'15,20, while damage to the frontal lobe impairs performance on both types of task10,16,21.

Could it be that the benchmarking tests were insensitive to the generalised effects of brain training? This is also unlikely because the benchmarking tests were chosen for their known sensitivity to small changes in cognitive function in disease or following low-dose neuropharmacological interventions in healthy volunteers. For example, the SWM task is sensitive to damage to the frontal cortex10,22 and impairments are observed in patients with Parkinson's disease23; conversely, low-dose methylphenidate improves performance on the same task in healthy volunteers8,9. Similarly, the PAL task is highly sensitive to various neuropathological conditions, including Alzheimer's disease11, Parkinson's disease13 and schizophrenia24, while the alpha-2 agonists guanfacine and clonidine improve performance in healthy volunteers25.

Could it be that improvements in the experimental groups were 'masked' by the direct comparison with the control group, who were, arguably, also exercising attention, planning and visuospatial processes? This seems unlikely because there was a clear difference between the substantial improvements in both experimental groups across all trained tasks and the very modest improvement observed in the control group on their obscure knowledge test, suggesting that the experimental groups did benefit more from their training programmes, albeit only on the tasks that were actually being trained. In any case, in all three groups the standardised effect sizes of the transfer effects were, at best, small (Table 1), suggesting that any comparison (even with a control group who did nothing) would have yielded a negligible brain training effect in the experimental groups.

Could it be that the amount of practice was insufficient to produce a measurable transfer effect of brain training? Given the known sensitivity of the benchmarking tests8-11,13,22-26, it seems reasonable to expect that 25 training sessions would yield a measurable group effect if one was present. More directly, however, there was a negligible correlation between the number of training sessions and improvement in benchmarking scores (despite a strong correlation with improvement on the training tasks; Supplementary Figure 2), confirming that the amount of practice was unrelated to any generalised brain training effect. That said, the possibility that an even more extensive training regime might eventually have produced an effect cannot be excluded.

To illustrate the size of the transfer effects observed in this study, consider the following representative example from the data. The increase in the number of digits that could be remembered following training on tests designed, at least in part, to improve memory (e.g. in Experimental group 2) was three hundredths of a digit. Assuming a linear relationship between time spent training and improvement, it would take almost four years of training to remember one extra digit.
Moreover, the control group improved by two tenths of a digit, with no formal memory training at all. In short, these results provide no evidence to support the widely held belief that the regular use of computerised brain trainers improves general cognitive functioning in healthy participants beyond those tasks that are actually being trained. Although we cannot exclude the possibility that more focused approaches, such as face-to-face cognitive training2, may be beneficial in some circumstances, these results confirm that six weeks of regular computerised brain training confers no greater benefit than simply answering general knowledge questions using the internet.

Supplementary Figure 1

The strongest relationship between number of sessions spent training over six weeks and change in benchmarking performance was observed in Experimental group 2 for the Reasoning test (Spearman's rho = 0.059). This test is known to correlate with measures of general intelligence or g6, confirming that, at best, six weeks of brain training has a negligible transfer effect on general measures of cognitive function.

Supplementary Figure 2

An illustration of the relationship between number of sessions spent training over six weeks and change in performance (last training score − first training score) on one of the trained tasks (Reasoning 1) in Experimental group 1. A significant correlation was observed (Spearman's rho = 0.52), confirming that the degree of improvement on this test correlates highly with the number of sessions spent training.
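The digit-span extrapolation reported above (a gain of three hundredths of a digit over the six-week study) can be checked with a few lines of arithmetic. This is a sketch of the paper's own linearity assumption, not part of the original analysis:

```python
# Training improved digit recall by ~0.03 of a digit over six weeks.
# Assuming, as the text does, a linear relationship between time spent
# training and improvement, estimate the time needed to gain one digit.
gain_over_study = 0.03                       # digits gained in 6 weeks
weeks_per_digit = 6 / gain_over_study        # 200 weeks per extra digit
years_per_digit = weeks_per_digit / 52       # ~3.85 years
print(round(years_per_digit, 2))             # → 3.85
```

Two hundred weeks is just under four years, matching the "almost four years of training to remember one extra digit" figure in the text.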
References (25 in total)

1.  A neural basis for general intelligence.

Authors:  J Duncan; R J Seitz; J Kolodny; D Bor; H Herzog; A Ahmed; F N Newell; H Emslie
Journal:  Science       Date:  2000-07-21       Impact factor: 47.728

2.  Double dissociations of memory and executive functions in working memory tasks following frontal lobe excisions, temporal lobe excisions or amygdalo-hippocampectomy in man.

Authors:  A M Owen; R G Morris; B J Sahakian; C E Polkey; T W Robbins
Journal:  Brain       Date:  1996-10       Impact factor: 13.501

3.  Planning and spatial working memory following frontal lobe lesions in man.

Authors:  A M Owen; J J Downes; B J Sahakian; C E Polkey; T W Robbins
Journal:  Neuropsychologia       Date:  1990       Impact factor: 3.139

4.  Planning and spatial working memory: a positron emission tomography study in humans.

Authors:  A M Owen; J Doyon; M Petrides; A C Evans
Journal:  Eur J Neurosci       Date:  1996-02       Impact factor: 3.386

5.  Guanfacine and clonidine, alpha 2-agonists, improve paired associates learning, but not delayed matching to sample, in humans.

Authors:  P Jäkälä; J Sirviö; M Riekkinen; E Koivisto; K Kejonen; M Vanhanen; P Riekkinen
Journal:  Neuropsychopharmacology       Date:  1999-02       Impact factor: 7.853

6.  Fronto-striatal cognitive deficits at different stages of Parkinson's disease.

Authors:  A M Owen; M James; P N Leigh; B A Summers; C D Marsden; N P Quinn; K W Lange; T W Robbins
Journal:  Brain       Date:  1992-12       Impact factor: 13.501

7.  Immediate and delayed effects of cognitive interventions in healthy elderly: a review of current literature and future directions.

Authors:  Kathryn V Papp; Stephen J Walsh; Peter J Snyder
Journal:  Alzheimers Dement       Date:  2009-01       Impact factor: 21.566

8.  A comparative study of visuospatial memory and learning in Alzheimer-type dementia and Parkinson's disease.

Authors:  B J Sahakian; R G Morris; J L Evenden; A Heald; R Levy; M Philpot; T W Robbins
Journal:  Brain       Date:  1988-06       Impact factor: 13.501

9.  Frontal lobe involvement in spatial span: converging studies of normal and impaired function.

Authors:  Daniel Bor; John Duncan; Andy C H Lee; Alice Parr; Adrian M Owen
Journal:  Neuropsychologia       Date:  2005-06-23       Impact factor: 3.139

10.  Executive function and fluid intelligence after frontal lobe lesions.

Authors:  María Roca; Alice Parr; Russell Thompson; Alexandra Woolgar; Teresa Torralva; Nagui Antoun; Facundo Manes; John Duncan
Journal:  Brain       Date:  2009-11-10       Impact factor: 13.501

