
Comparing Two Algorithms for Calibrating the Restricted Non-Compensatory Multidimensional IRT Model.

Chun Wang, Steven W Nydick.

Abstract

The non-compensatory class of multidimensional item response theory (MIRT) models frequently represents the cognitive processes underlying a series of test items better than the compensatory class of MIRT models. Nevertheless, few researchers have used non-compensatory MIRT in modeling psychological data. One reason for this lack of use is that non-compensatory MIRT item parameters are notoriously difficult to estimate accurately. In this article, we propose methods to improve the estimability of a specific non-compensatory model. To initiate the discussion, we address the non-identifiability of the explored non-compensatory MIRT model by suggesting that practitioners use an item-dimension constraint matrix (namely, a Q-matrix) that results in model identifiability. We then compare two promising algorithms for high-dimensional model calibration, Markov chain Monte Carlo (MCMC) and Metropolis-Hastings Robbins-Monro (MH-RM), and discuss, via analytical demonstrations, the challenges in estimating model parameters. Based on simulation studies, we show that when the dimensions are not highly correlated, and when the Q-matrix displays appropriate structure, the non-compensatory MIRT model can be accurately calibrated (using the aforementioned methods) with as few as 1,000 people. Based on the simulations, we conclude that the MCMC algorithm is better able to estimate model parameters across a variety of conditions, whereas the MH-RM algorithm should be used with caution when a test displays complex structure and when the latent dimensions are highly correlated.
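To make the distinction concrete: in a non-compensatory MIRT model, an item's response probability is the product of per-dimension terms over the dimensions its Q-matrix row loads, so a deficit on any required dimension cannot be offset by strength on another. The sketch below illustrates this, assuming the common Sympson-style parameterization with per-dimension 2PL terms; the function name and parameter values are illustrative, not taken from the article.

```python
import numpy as np

def noncompensatory_prob(theta, a, b, q):
    """Correct-response probability under a non-compensatory MIRT model:
    the product of per-dimension 2PL terms over the dimensions that the
    item's Q-matrix row loads (q == 1); unloaded dimensions contribute 1.

    theta : (K,) latent traits for one examinee
    a, b  : (K,) per-dimension discrimination and difficulty
    q     : (K,) 0/1 Q-matrix row for the item
    """
    p_k = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return float(np.prod(np.where(q == 1, p_k, 1.0)))

# A two-dimensional item requiring both skills: the examinee is strong
# on dimension 1 but weak on dimension 2.
theta = np.array([1.0, -1.0])
a = np.array([1.2, 0.8])
b = np.array([0.0, 0.0])
p_both = noncompensatory_prob(theta, a, b, np.array([1, 1]))

# If the Q-matrix loads the item only on the strong dimension, the
# probability can only rise: the weak dimension no longer enters the product.
p_single = noncompensatory_prob(theta, a, b, np.array([1, 0]))
assert p_both < p_single
```

The non-compensatory character shows in the comparison: dropping the weak dimension from the Q-matrix row strictly increases the probability, whereas in a compensatory model the strong dimension could have offset the weak one within a single additive logit.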

Keywords:  MCMC; Metropolis–Hastings Robbins–Monro; multidimensional IRT

Year:  2014        PMID: 29880997      PMCID: PMC5978509          DOI: 10.1177/0146621614545983

Source DB:  PubMed          Journal:  Appl Psychol Meas        ISSN: 0146-6216


  4 in total

1.  Combining computer adaptive testing technology with cognitively diagnostic assessment.

Authors:  Meghan McGlohen; Hua-Hua Chang
Journal:  Behav Res Methods       Date:  2008-08

2.  A multicomponent latent trait model for diagnosis.

Authors:  Susan E Embretson; Xiangdong Yang
Journal:  Psychometrika       Date:  2012-12-06       Impact factor: 2.500

3.  Multidimensional Adaptive Testing with Optimal Design Criteria for Item Selection.

Authors:  Joris Mulder; Wim J van der Linden
Journal:  Psychometrika       Date:  2008-12-23       Impact factor: 2.500

4.  Data-Driven Learning of Q-Matrix.

Authors:  Jingchen Liu; Gongjun Xu; Zhiliang Ying
Journal:  Appl Psychol Meas       Date:  2012-10
  6 in total

1.  Correction for Item Response Theory Latent Trait Measurement Error in Linear Mixed Effects Models.

Authors:  Chun Wang; Gongjun Xu; Xue Zhang
Journal:  Psychometrika       Date:  2019-06-10       Impact factor: 2.500

2.  Using Penalized EM Algorithm to Infer Learning Trajectories in Latent Transition CDM.

Authors:  Chun Wang
Journal:  Psychometrika       Date:  2021-01-15       Impact factor: 2.500

3.  Measuring Response Style Stability Across Constructs With Item Response Trees.

Authors:  Allison J Ames
Journal:  Educ Psychol Meas       Date:  2021-06-02       Impact factor: 2.821

4.  Regularized Variational Estimation for Exploratory Item Factor Analysis.

Authors:  April E Cho; Jiaying Xiao; Chun Wang; Gongjun Xu
Journal:  Psychometrika       Date:  2022-07-13       Impact factor: 2.290

5.  Explaining Variability in Response Style Traits: A Covariate-Adjusted IRTree.

Authors:  Allison J Ames; Aaron J Myers
Journal:  Educ Psychol Meas       Date:  2020-11-04       Impact factor: 3.088

6.  Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model.

Authors:  Shengyu Jiang; Chun Wang; David J Weiss
Journal:  Front Psychol       Date:  2016-02-09
