January Durant1, Kevin Duff2, Justin B. Miller1. 1. Neuropsychology, Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, USA. 2. Department of Neurology, Center for Alzheimer's Care, Imaging, and Research, University of Utah, Salt Lake City, UT, USA.
Abstract
INTRODUCTION: Standardized regression-based (SRB) methods can be used to determine whether meaningful changes in performance on cognitive assessments occur over time. Both raw and standardized scores have been used in SRB models, but it is unclear which score metric is most appropriate for predicting follow-up performance. The aim of the present study was to examine differences in SRB prediction formulas using raw versus standardized scores on two memory tests commonly used in the assessment of older adults. METHOD: The sample consisted of 135 healthy older adults who underwent baseline and 1-year follow-up neuropsychological assessment including the Hopkins Verbal Learning Test-Revised and Brief Visuospatial Memory Test-Revised. Regression models were fit to predict Time 2 scores from Time 1 scores and demographic variables. Separate models were fit using raw scores and standardized scores. Akaike's information criterion (AIC) was used to determine whether models using raw or standardized scores resulted in the best fit. Pearson correlation and intraclass correlation coefficients were calculated between observed and predicted scores. Mean differences between observed and predicted scores were examined using paired t-tests. To investigate whether a similar pattern of results would be evident using prediction formulas for nonmemory tests, all analyses were also conducted for nonmemory tests. RESULTS: All regression models were significant, and R² values for memory test raw score models were larger than those generated by standardized score models. Memory test raw score models were also a better fit based on smaller AIC values. For nonmemory tests, raw score models did not consistently outperform standardized score models. All correlations between observed and predicted Time 2 scores were significant, and none of the predicted scores significantly differed from their respective observed scores.
CONCLUSION: For each memory measure, raw score models outperformed standardized score models. For nonmemory tests, neither score metric model consistently outperformed the other.
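The SRB approach described in the METHOD section can be sketched as follows: regress Time 2 scores on Time 1 scores plus demographics, compute the standard error of the estimate, express each participant's observed minus predicted Time 2 score as a z-score, and compare competing models (e.g., raw vs. standardized metrics) via AIC. This is a minimal illustrative sketch using synthetic data; the coefficients, predictor set, and score distributions are hypothetical stand-ins, not the study's actual formulas.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 135  # sample size matching the study

# Hypothetical synthetic data standing in for real test scores and demographics
age = rng.normal(70, 7, n)
educ = rng.normal(14, 3, n)
t1 = rng.normal(25, 5, n)  # Time 1 raw score (e.g., HVLT-R total recall)
t2 = 0.8 * t1 - 0.05 * age + 0.2 * educ + rng.normal(0, 2, n)  # Time 2 score

# Design matrix: intercept, Time 1 score, demographic predictors
X = np.column_stack([np.ones(n), t1, age, educ])
beta, *_ = np.linalg.lstsq(X, t2, rcond=None)

pred = X @ beta                 # predicted Time 2 scores
resid = t2 - pred
k = X.shape[1]                  # number of estimated parameters
see = np.sqrt(resid @ resid / (n - k))  # standard error of the estimate

# SRB change z-score: how far each observed Time 2 score falls
# from its regression-predicted value, in SEE units
z = (t2 - pred) / see

# Gaussian-likelihood AIC, usable to compare raw- vs standardized-score models
rss = resid @ resid
aic = n * np.log(rss / n) + 2 * k
```

A |z| beyond a chosen cutoff (commonly ±1.645) would flag a reliable change; the model with the smaller AIC would be preferred, mirroring the comparison reported in the RESULTS.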