
Practice and retest effects in longitudinal studies of cognitive functioning.

Richard N Jones1.   

Abstract


Year:  2015        PMID: 27239496      PMCID: PMC4876890          DOI: 10.1016/j.dadm.2015.02.002

Source DB:  PubMed          Journal:  Alzheimers Dement (Amst)        ISSN: 2352-8729


In this issue, Goldberg et al. [1] draw attention to a critically important aspect of studies of cognitive change: practice and retest effects in repeatedly observed cognitive test performance. Practice and retest effects are large, pervasive, and underappreciated. By large, I mean that average gains on repeat administration are often much greater than normative cognitive change over a similar interval [2]. By pervasive, I mean that practice and retest effects are seen for a wide variety of cognitive tests assessing different domains of functioning [3]. By underappreciated, I mean that despite a long-standing and enduring literature [4], [5], investigators continue to develop protocols that reveal a failure to consider the full range of impact of practice and retest effects [6]. Goldberg et al. focus their review on the potential impact of practice and retest effects in randomized controlled trials in preclinical Alzheimer's disease. Of the range of potential uses of serial cognitive assessment—including clinical practice and observational and natural history studies—one might assume that randomized controlled trials may be a context in which practice and retest effects are of the least concern. This is because, as the thinking goes, even if practice and retest effects are observed, they should be present in equal measure in our randomly assigned control and active treatment groups and cancel out in group comparisons of treatment effects. Goldberg et al. argue that the assumption that practice and retest effects are equivalent in treated and controls may not be justified, and moreover that practice and retest effects may result from cognitive processes that are potentially distinct from the one intended to be measured with a given test. This is potentially important for emerging studies of disease-modifying therapies for Alzheimer's disease. 
Perhaps one of the reasons why some studies fail to plan for practice and retest effects in design and/or analysis is the lack of consensus on best methods. Thorndike [4] suggested practice and retest effects could be eliminated with 10 minutes of practice (similar to the massed practice approach of Goldberg et al.), but this does not eliminate practice and retest effects from performance, only from the collected data. If practice and retest effects have inferential value (cf. [7]), then this approach is not useful. Goldberg et al. recommend the use of alternate forms but also note critical limitations of this approach. The literature suggests that alternate forms may attenuate but do not eliminate practice and retest effects [8]. If forms are not psychometrically equivalent and are not administered in a counterbalanced and randomized order, the use of alternate forms can introduce as much construct-irrelevant variance as practice and retest effects [6], [9]. As Goldberg et al. mention, reliable change indices (RCIs), with correction for practice [10], have a certain appeal. However, early proponents of RCIs now favor regression-based approaches [11], and not all authors have found practice-corrected RCIs to be useful in the context of randomized controlled trials [12]. Another approach to dealing with practice and retest effects, not mentioned by Goldberg et al., is statistical modeling in repeated-measures designs with linear mixed-effects or related data analysis approaches [13], [14]. Hoffman et al. [15] offer the important caveat that in typical fixed-interval designs involving age-heterogeneous samples, such approaches are in general not informative about retest effects because of the confounding of age differences and retest-related gains. These authors suggest instead a seldom-used approach to the assessment and modeling of cognitive performance data: the “measurement burst” design [16].
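The practice-corrected RCI mentioned above can be sketched briefly. A common (Chelune-style) variant subtracts the mean retest gain observed in a control or normative group from an individual's observed change, then scales by the standard error of the difference. The function name and example values below are illustrative only, and this is only one of several published RCI formulas:

```python
import math

def practice_adjusted_rci(x1, x2, control_m1, control_m2,
                          control_sd1, test_retest_r):
    """RCI with a constant practice correction: subtract the mean
    control-group retest gain, then divide by the standard error
    of a difference score (Chelune-style; illustrative sketch)."""
    # standard error of measurement at baseline
    sem = control_sd1 * math.sqrt(1.0 - test_retest_r)
    # standard error of a difference between two such scores
    se_diff = math.sqrt(2.0) * sem
    # mean retest gain in the control/normative sample
    practice = control_m2 - control_m1
    return ((x2 - x1) - practice) / se_diff

# Hypothetical example: a 4-point observed gain against a 3-point
# average control-group gain, SD 10, test-retest r = .80
rci = practice_adjusted_rci(50, 54, 50, 53, 10, 0.80)
```

Values of the index beyond roughly ±1.645 would conventionally be flagged as reliable change at the .10 level; the regression-based alternatives favored in [11] instead predict the retest score from baseline (and covariates) and standardize the residual.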
The goal of such designs is to model individual variability in repeat administration over an interval short enough to render aging effects negligible, and to model aging trends with repeated short-interval bursts of measurement over longer intervals. Salthouse [17] has used such approaches in the study of cognitive performance and revealed important and deeper complexities associated with practice and retest effects. Not only can we expect age differences in the magnitude of retest-related gains (which may be of clinical relevance [7]), but the retention of retest-related gains over longer intervals is also revealed as a potentially important individual difference factor, one that is confounded with, but not conceptually equivalent to, aging-related change in cognitive ability. Despite a century of research, there is no clear consensus on the best methods to address the (still-emerging) impact of practice and retest effects. Nevertheless, the recommendation by Goldberg et al. to use tests designed to minimize practice and retest effects without changing the fundamental construct of interest is a good one. Investigators should be wary of parallel-but-not-equivalent alternate forms and of designs that do not include counterbalancing [6]. The development of cognitive tests calibrated with item response theory methods and administered in an adaptive fashion, such as the NIH Toolbox [18], offers great potential in this regard. In computerized adaptive testing, psychometric equivalence and effective counterbalancing can be incorporated into the adaptive testing procedure. A plan for using design and analysis features (e.g., control or comparison groups, measurement burst design) to adjust for and/or model the impact of practice and retest effects is now something that reviewers should expect to see in research proposals.
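The logic of the measurement burst design can be illustrated with a minimal simulation. All parameter values are hypothetical, and, for simplicity, retest gains are assumed to reset between bursts (no retention), which the literature above suggests is unrealistic; the point is only that closely spaced administrations identify retest gains while widely spaced bursts identify the aging trend:

```python
import numpy as np

# Hypothetical measurement burst design: 200 people, 3 bursts of
# 4 administrations each, bursts 2 years apart. True scores decline
# with age across bursts but gain with each repeat within a burst.
rng = np.random.default_rng(0)
n_people, n_bursts, n_within = 200, 3, 4
aging_slope = -0.5   # points lost per year of aging (assumed)
retest_gain = 1.0    # points gained per administration within a burst (assumed)
burst_gap = 2.0      # years between bursts

person = rng.normal(100, 10, size=(n_people, 1, 1))   # stable person ability
burst = np.arange(n_bursts).reshape(1, n_bursts, 1)   # burst index 0..2
admin = np.arange(n_within).reshape(1, 1, n_within)   # administration 0..3
noise = rng.normal(0, 1, size=(n_people, n_bursts, n_within))
scores = person + aging_slope * burst_gap * burst + retest_gain * admin + noise

# Within-burst differences between consecutive administrations
# recover the retest effect; across-burst differences at the first
# administration recover the aging trend per year.
est_retest = (scores[:, :, 1:] - scores[:, :, :-1]).mean()
est_aging = (scores[:, 1:, 0] - scores[:, :-1, 0]).mean() / burst_gap
```

In a standard fixed-interval panel design these two quantities are collapsed into a single occasion index, which is exactly the confounding Hoffman et al. [15] describe; modeling the retained portion of retest gains across bursts would require the richer models discussed by Salthouse [17].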
References:  16 in total

1.  On the confounds among retest gains and age-cohort differences in the estimation of within-person change in longitudinal studies: a simulation study.

Authors:  Lesa Hoffman; Scott M Hofer; Martin J Sliwinski
Journal:  Psychol Aging       Date:  2011-05-30

2.  Practice and drop-out effects during a 17-year longitudinal study of cognitive aging.

Authors:  Patrick Rabbitt; Peter Diggle; Fiona Holland; Lynn McInnes
Journal:  J Gerontol B Psychol Sci Soc Sci       Date:  2004-03       Impact factor: 4.077

3.  Using regression equations built from summary data in the psychological assessment of the individual case: extension to multiple regression.

Authors:  John R Crawford; Paul H Garthwaite; Annie K Denham; Gordon J Chelune
Journal:  Psychol Assess       Date:  2012-03-26

4.  Cognitive decline in old age: separating retest effects from the effects of growing older.

Authors:  Robert S Wilson; Yan Li; Julia L Bienias; David A Bennett
Journal:  Psychol Aging       Date:  2006-12

5.  Evaluation and correction for a 'training effect' in the cognitive assessment of older adults.

Authors:  M Di Bari; M Pahor; M Barnard; N Gades; M Graney; L V Franse; B W J H Penninx; N Marchionni; W B Applegate
Journal:  Neuroepidemiology       Date:  2002 Mar-Apr       Impact factor: 3.282

6.  Parallel but not equivalent: challenges and solutions for repeated assessment of cognition over time.

Authors:  Alden L Gross; Sharon K Inouye; George W Rebok; Jason Brandt; Paul K Crane; Jeanine M Parisi; Doug Tommet; Karen Bandeen-Roche; Michelle C Carlson; Richard N Jones
Journal:  J Clin Exp Neuropsychol       Date:  2012-04-30       Impact factor: 2.475

7.  Practice effects and the use of alternate forms in serial neuropsychological testing.

Authors:  Leigh J Beglinger; Brenda Gaydos; Oranee Tangphao-Daniels; Kevin Duff; David A Kareken; Jane Crawford; Philip S Fastenau; Eric R Siemers
Journal:  Arch Clin Neuropsychol       Date:  2005-06       Impact factor: 2.813

8.  Interpreting change on the WAIS-III/WMS-III in clinical samples.

Authors:  G L Iverson
Journal:  Arch Clin Neuropsychol       Date:  2001-02       Impact factor: 2.813

9.  Cognition assessment using the NIH Toolbox.

Authors:  Sandra Weintraub; Sureyya S Dikmen; Robert K Heaton; David S Tulsky; Philip D Zelazo; Patricia J Bauer; Noelle E Carlozzi; Jerry Slotkin; David Blitz; Kathleen Wallner-Allen; Nathan A Fox; Jennifer L Beaumont; Dan Mungas; Cindy J Nowinski; Jennifer Richler; Joanne A Deocampo; Jacob E Anderson; Jennifer J Manly; Beth Borosh; Richard Havlik; Kevin Conway; Emmeline Edwards; Lisa Freund; Jonathan W King; Claudia Moy; Ellen Witt; Richard C Gershon
Journal:  Neurology       Date:  2013-03-12       Impact factor: 9.910

10.  Practice effects in healthy adults: a longitudinal study on frequent repetitive cognitive testing.

Authors:  Claudia Bartels; Martin Wegrzyn; Anne Wiedl; Verena Ackermann; Hannelore Ehrenreich
Journal:  BMC Neurosci       Date:  2010-09-16       Impact factor: 3.288

Cited by:  7 in total

1.  Cognition, Health, and Well-Being in a Rural Sub-Saharan African Population.

Authors:  Collin F Payne; Iliana V Kohler; Chiwoza Bandawe; Kathy Lawler; Hans-Peter Kohler
Journal:  Eur J Popul       Date:  2017-11-07

2.  You Say Tomato, I Say Radish: Can Brief Cognitive Assessments in the U.S. Health Retirement Study Be Harmonized With Its International Partner Studies?

Authors:  Lindsay C Kobayashi; Alden L Gross; Laura E Gibbons; Doug Tommet; R Elizabeth Sanders; Seo-Eun Choi; Shubhabrata Mukherjee; Maria Glymour; Jennifer J Manly; Lisa F Berkman; Paul K Crane; Dan M Mungas; Richard N Jones
Journal:  J Gerontol B Psychol Sci Soc Sci       Date:  2021-10-30       Impact factor: 4.942

3.  Parameterizing Practice in a Longitudinal Measurement Burst Design to Dissociate Retest Effects From Developmental Change: Implications for Aging Neuroscience.

Authors:  Nicholas Tamburri; Cynthia McDowell; Stuart W S MacDonald
Journal:  Front Aging Neurosci       Date:  2022-06-03       Impact factor: 5.702

4.  Practice effects in performance outcome measures in patients living with neurologic disorders - A systematic review.

Authors:  Sven P Holm; Arnaud M Wolfer; Grégoire H S Pointeau; Florian Lipsmeier; Michael Lindemann
Journal:  Heliyon       Date:  2022-08-17

5.  Accounting for retest effects in cognitive testing with the Bayesian double exponential model via intensive measurement burst designs.

Authors:  Zita Oravecz; Karra D Harrington; Jonathan G Hakun; Mindy J Katz; Cuiling Wang; Ruixue Zhaoyang; Martin J Sliwinski
Journal:  Front Aging Neurosci       Date:  2022-09-26       Impact factor: 5.702

6.  Repeated systemic inflammation was associated with cognitive deficits in older Britons.

Authors:  Gindo Tampubolon
Journal:  Alzheimers Dement (Amst)       Date:  2015-12-28

7.  Correction for retest effects across repeated measures of cognitive functioning: a longitudinal cohort study of postoperative delirium.

Authors:  Annie M Racine; Yun Gou; Tamara G Fong; Edward R Marcantonio; Eva M Schmitt; Thomas G Travison; Sharon K Inouye; Richard N Jones
Journal:  BMC Med Res Methodol       Date:  2018-07-03       Impact factor: 4.615

