
Assessing the Accuracy of Parameter Estimates in the Presence of Rapid Guessing Misclassifications.

Joseph A. Rios

Abstract

The presence of rapid guessing (RG) presents a challenge to practitioners in obtaining accurate estimates of measurement properties and examinee ability. In response to this concern, researchers have used response times as a proxy for RG and have attempted to improve parameter estimation accuracy by filtering RG responses with popular scoring approaches, such as the effort-moderated item response theory (EM-IRT) model. However, this approach assumes that RG can be correctly identified from an indirect proxy of examinee behavior. A failure to meet this assumption leads to the inclusion of distortive, psychometrically uninformative information in parameter estimates. To address this issue, a simulation study was conducted to examine how violations of the assumption of correct RG classification influence EM-IRT item and ability parameter estimation accuracy, and to compare these results with parameter estimates from the three-parameter logistic (3PL) model, which includes RG responses in scoring. Two RG misclassification factors were manipulated: type (underclassification vs. overclassification) and rate (10%, 30%, and 50%). Results indicated that the EM-IRT model provided improved item parameter estimation over the 3PL model regardless of misclassification type and rate. Furthermore, under most conditions, increased rates of RG underclassification were associated with the greatest bias in ability parameter estimates from the EM-IRT model. Despite this, the EM-IRT model with RG misclassifications demonstrated more accurate ability parameter estimation than the 3PL model when the mean ability of RG subgroups did not differ. This suggests that in certain situations it may be better for practitioners to (a) imperfectly identify RG than to ignore the presence of such invalid responses and (b) select liberal over conservative response time thresholds to mitigate bias from underclassified RG.
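The filtering idea the abstract describes can be illustrated with a minimal simulation sketch. This is not the study's code: the item parameters, response-time distributions, and the 3-second threshold below are illustrative assumptions. It contrasts scoring all responses under a 3PL model with EM-IRT-style scoring, where responses flagged as RG by a response-time threshold are treated as not administered.

```python
import numpy as np

rng = np.random.default_rng(7)
n_items = 20

def p_3pl(theta, a, b, c):
    """3PL item response function: P(correct | theta)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item bank and true ability (illustrative values only).
a = rng.uniform(0.8, 2.0, n_items)   # discrimination
b = rng.normal(0.0, 1.0, n_items)    # difficulty
c = np.full(n_items, 0.2)            # pseudo-guessing
theta_true = 0.5

# Simulate one examinee: most responses are effortful; some are rapid
# guesses (RG) answered at chance level with short response times.
is_rg = rng.random(n_items) < 0.3
p_effort = p_3pl(theta_true, a, b, c)
responses = np.where(is_rg,
                     rng.random(n_items) < 0.25,       # chance-level RG
                     rng.random(n_items) < p_effort).astype(int)
rt = np.where(is_rg,
              rng.uniform(0.5, 2.0, n_items),          # fast RG responses
              rng.uniform(5.0, 30.0, n_items))         # effortful times

# Response-time proxy: flag responses below a threshold as RG. A liberal
# (higher) threshold overclassifies; a conservative one underclassifies.
flagged = rt < 3.0

def mle_theta(resp, mask, a, b, c, grid=np.linspace(-4, 4, 801)):
    """Grid-search MLE of theta using only the items selected by `mask`."""
    p = p_3pl(grid[:, None], a[None, mask], b[None, mask], c[None, mask])
    ll = np.where(resp[mask] == 1, np.log(p), np.log(1.0 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

all_items = np.ones(n_items, dtype=bool)
theta_3pl = mle_theta(responses, all_items, a, b, c)  # 3PL: score everything
theta_em = mle_theta(responses, ~flagged, a, b, c)    # EM-IRT-style: drop flagged RG
print(f"true={theta_true:.2f}  3PL={theta_3pl:.2f}  EM-filtered={theta_em:.2f}")
```

Moving the threshold up or down in this sketch reproduces the study's overclassification/underclassification manipulation: a conservative threshold leaves chance-level RG responses in the scored set, pulling the ability estimate toward the distortion the abstract describes.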
© The Author(s) 2021.

Keywords:  IRT; item response theory; noneffortful responding; parameter estimation; rapid guessing; response times

Year:  2021        PMID: 34992309      PMCID: PMC8725050          DOI: 10.1177/00131644211003640

Source DB:  PubMed          Journal:  Educ Psychol Meas        ISSN: 0013-1644            Impact factor:   2.821


  6 in total

1.  A mixture hierarchical model for response times and response accuracy.

Authors:  Chun Wang; Gongjun Xu
Journal:  Br J Math Stat Psychol       Date:  2015-04-15       Impact factor: 3.380

2.  Random Responding from Participants is a Threat to the Validity of Social Science Research Results.

Authors:  Jason W Osborne; Margaret R Blanchard
Journal:  Front Psychol       Date:  2011-01-21

3.  Parameter Estimation Accuracy of the Effort-Moderated Item Response Theory Model Under Multiple Assumption Violations.

Authors:  Joseph A Rios; James Soland
Journal:  Educ Psychol Meas       Date:  2020-09-02       Impact factor: 3.088

4.  Modeling Test-Taking Non-effort in MIRT Models.

Authors:  Yue Liu; Zhen Li; Hongyun Liu; Fang Luo
Journal:  Front Psychol       Date:  2019-02-04

5.  An Overview of Models for Response Times and Processes in Cognitive Tests.

Authors:  Paul De Boeck; Minjeong Jeon
Journal:  Front Psychol       Date:  2019-02-06

6.  Measuring Ability, Speed, or Both? Challenges, Psychometric Solutions, and What Can Be Gained From Experimental Control.

Authors:  Frank Goldhammer
Journal:  Measurement (Mahwah, N.J.)       Date:  2015-12-07
  1 in total

1.  Investigating the Effect of Differential Rapid Guessing on Population Invariance in Equating.

Authors:  Jiayi Deng; Joseph A Rios
Journal:  Appl Psychol Meas       Date:  2022-06-16
