Literature DB >> 35754618

Robustness of Adaptive Measurement of Change to Item Parameter Estimation Error.

Allison W Cooperman1, David J Weiss1, Chun Wang2.   

Abstract

Adaptive measurement of change (AMC) is a psychometric method for measuring intra-individual change on one or more latent traits across testing occasions. Three hypothesis tests (a Z test, a likelihood ratio test, and a score ratio index) have demonstrated desirable statistical properties in this context, including low false positive rates and high true positive rates. However, the extant AMC research has assumed that the item parameter values in the simulated item banks were devoid of estimation error. This assumption is unrealistic for applied testing settings, where item parameters are estimated from a calibration sample before test administration. Using Monte Carlo simulation, this study evaluated the robustness of the common AMC hypothesis tests to the presence of item parameter estimation error when measuring omnibus change across four testing occasions. Results indicated that item parameter estimation error had at most a small effect on false positive rates and latent trait change recovery, and these effects were largely explained by the computerized adaptive testing item bank information functions. Differences in AMC performance as a function of item parameter estimation error and choice of hypothesis test were generally limited to simulees with particularly low or high latent trait values, where the item bank provided relatively lower information. These simulations highlight how AMC can accurately measure intra-individual change in the presence of item parameter estimation error when paired with an informative item bank. Limitations and future directions for AMC research are discussed.
© The Author(s) 2021.
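The Z test named in the abstract can be illustrated for the simplest two-occasion, unidimensional case: the difference between the two adaptive-test trait estimates is divided by the standard error of that difference and compared against a standard normal reference. The sketch below is illustrative only (function and variable names are not from the paper, which studies omnibus change across four occasions and two additional tests):

```python
import math

def z_test_change(theta_1, se_1, theta_2, se_2, alpha=0.05):
    """Two-sided Z test for intra-individual change in a latent trait
    (theta) between two testing occasions, using the standard errors
    (SE) of the two trait estimates. Minimal sketch, not the paper's
    multi-occasion omnibus procedure."""
    z = (theta_2 - theta_1) / math.sqrt(se_1**2 + se_2**2)
    # Standard normal CDF via the error function:
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p, p < alpha

# Example: trait estimate rises from 0.0 to 1.0, SE = 0.3 on each occasion.
z, p, significant = z_test_change(0.0, 0.3, 1.0, 0.3)
```

Because the standard errors come from the item bank's information function, a less informative bank (e.g., at extreme trait values, as the abstract notes) inflates the SEs and weakens the test.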


Keywords:  adaptive measurement of change; computerized adaptive testing; item parameter estimation error

Year:  2021        PMID: 35754618      PMCID: PMC9228691          DOI: 10.1177/00131644211033902

Source DB:  PubMed          Journal:  Educ Psychol Meas        ISSN: 0013-1644            Impact factor:   3.088


  9 in total

1.  A New Stopping Rule for Computerized Adaptive Testing.

Authors:  Seung W Choi; Matthew W Grady; Barbara G Dodd
Journal:  Educ Psychol Meas       Date:  2010-12-01       Impact factor: 2.821

2.  Computerized adaptive testing: the capitalization on chance problem.

Authors:  Julio Olea; Juan Ramón Barrada; Francisco J Abad; Vicente Ponsoda; Lara Cuevas
Journal:  Span J Psychol       Date:  2012-03       Impact factor: 1.264

3.  Clinical significance: a statistical approach to defining meaningful change in psychotherapy research.

Authors:  N S Jacobson; P Truax
Journal:  J Consult Clin Psychol       Date:  1991-02

4.  The Impact of Fallible Item Parameter Estimates on Latent Trait Recovery.

Authors:  Ying Cheng; Ke-Hai Yuan
Journal:  Psychometrika       Date:  2010-06       Impact factor: 2.500

5.  Variable-Length Stopping Rules for Multidimensional Computerized Adaptive Testing.

Authors:  Chun Wang; David J Weiss; Zhuoran Shang
Journal:  Psychometrika       Date:  2018-12-03       Impact factor: 2.500

6.  (Review) Multivariate Hypothesis Testing Methods for Evaluating Significant Individual Change.

Authors:  Chun Wang; David J Weiss
Journal:  Appl Psychol Meas       Date:  2017-10-13

7.  Sources of Error in IRT Trait Estimation.

Authors:  Leah M Feuerstahler
Journal:  Appl Psychol Meas       Date:  2017-10-06

8.  Hypothesis Testing Methods for Multivariate Multi-Occasion Intra-Individual Change.

Authors:  Chun Wang; David J Weiss; King Yiu Suen
Journal:  Multivariate Behav Res       Date:  2020-03-03       Impact factor: 5.923

9.  The Impact of Item Calibration Error on Variable-Length Cognitive Diagnostic Computerized Adaptive Testing.

Authors:  Xiaojian Sun; Yanlou Liu; Tao Xin; Naiqing Song
Journal:  Front Psychol       Date:  2020-12-02
