
A Comparison of Robust Likelihood Estimators to Mitigate Bias From Rapid Guessing.

Joseph A Rios

Abstract

Rapid guessing (RG) behavior can undermine measurement properties and score-based inferences. To mitigate this potential bias, practitioners have relied on response time information to identify and filter RG responses. However, response times may be unavailable in many testing contexts, such as paper-and-pencil administrations. When this is the case, self-report measures of effort and person-fit statistics have been used. These methods are limited in that inferences concerning motivation and aberrant responding are made at the examinee level. As test takers can engage in a mixture of solution and RG behavior throughout a test administration, there is a need to limit the influence of potentially aberrant responses at the item level. This can be done by employing robust estimation procedures. Since these estimators have received limited attention in the RG literature, the objective of this simulation study was to evaluate ability parameter estimation accuracy in the presence of RG by comparing maximum likelihood estimation (MLE) to two robust variants, the bisquare and Huber estimators. Two RG conditions were manipulated: RG percentage (10%, 20%, and 40%) and RG pattern (difficulty-based and changing state). Compared with the MLE procedure, both the bisquare and Huber estimators reduced bias in ability parameter estimates by as much as 94%. Given that the Huber estimator showed smaller standard deviations of error and performed as well as the bisquare approach under most conditions, it is recommended as a promising approach to mitigating bias from RG when response time information is unavailable.
© The Author(s) 2022.
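
Note: The abstract names the estimators but does not spell out the estimating equations. The sketch below illustrates the general idea under stated assumptions: a 2PL item response model, a weighted likelihood equation of the form sum_i w(r_i) a_i (x_i - P_i(theta)) = 0 with logit residual r_i = a_i (theta - b_i), as in the broader robust ability-estimation literature, and illustrative tuning constants (k = 1 for Huber, B = 4 for bisquare). All function names, constants, and the simulation setup are hypothetical and are not taken from the paper.

import numpy as np

def p2pl(theta, a, b):
    """2PL item response function P_i(theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def huber_weight(r, k=1.0):
    """Huber weight: 1 inside [-k, k], k/|r| in the tails.
    k = 1 is an illustrative tuning constant."""
    return k / np.maximum(np.abs(r), k)

def bisquare_weight(r, B=4.0):
    """Tukey bisquare weight: smooth downweighting, exactly 0 beyond B.
    B = 4 is an illustrative tuning constant."""
    return np.where(np.abs(r) <= B, (1.0 - (r / B) ** 2) ** 2, 0.0)

def weighted_mle(x, a, b, w, lo=-6.0, hi=6.0, tol=1e-8):
    """Bisection solve of sum_i w_i a_i (x_i - P_i(theta)) = 0.
    With fixed nonnegative weights the score is decreasing in theta."""
    def score(theta):
        return np.sum(w * a * (x - p2pl(theta, a, b)))
    if score(lo) <= 0:   # (weighted) all-incorrect pattern: clamp
        return lo
    if score(hi) >= 0:   # (weighted) all-correct pattern: clamp
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:   # root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def robust_theta(x, a, b, weight=None, n_iter=25):
    """Plain MLE when weight is None; otherwise iteratively reweighted
    estimation, recomputing w(r_i) from the logit residuals
    r_i = a_i (theta - b_i) at the current theta estimate."""
    theta = weighted_mle(x, a, b, np.ones_like(a))   # MLE start value
    if weight is None:
        return theta
    for _ in range(n_iter):
        w = weight(a * (theta - b))
        theta = weighted_mle(x, a, b, w)
    return theta

# Toy version of the difficulty-based RG condition: the 20% hardest
# items are answered at chance (p = .25, as for 4-option items).
rng = np.random.default_rng(0)
n_items = 40
a = rng.uniform(0.8, 2.0, n_items)   # discriminations
b = rng.normal(0.0, 1.0, n_items)    # difficulties
theta_true = 0.5
x = (rng.random(n_items) < p2pl(theta_true, a, b)).astype(float)
rg = np.argsort(b)[-n_items // 5:]   # indices of the hardest 20%
x[rg] = (rng.random(rg.size) < 0.25).astype(float)

print("true theta :", theta_true)
print("MLE        :", round(robust_theta(x, a, b), 3))
print("Huber      :", round(robust_theta(x, a, b, weight=huber_weight), 3))
print("bisquare   :", round(robust_theta(x, a, b, weight=bisquare_weight), 3))

Because the weights shrink the contribution of items far too hard (or too easy) for the current ability estimate, responses produced by rapid guessing on hard items pull the estimate less than under plain MLE. A single draw like this will not reproduce the study's 94% bias-reduction figure, which summarizes many replications across conditions.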


Keywords:  item response theory; low-stakes testing; noneffortful responding; rapid guessing; robust likelihood estimation; validity

Year:  2022        PMID: 35528268      PMCID: PMC9073634          DOI: 10.1177/01466216221084371

Source DB:  PubMed          Journal:  Appl Psychol Meas        ISSN: 0146-6216


Similar articles (6 in total)

1.  Identifying careless responses in survey data.

Authors:  Adam W Meade; S Bartholomew Craig
Journal:  Psychol Methods       Date:  2012-04-16

2.  A mixture hierarchical model for response times and response accuracy.

Authors:  Chun Wang; Gongjun Xu
Journal:  Br J Math Stat Psychol       Date:  2015-04-15       Impact factor: 3.380

3.  A change-point analysis procedure based on weighted residuals to detect back random responding.

Authors:  Xiaofeng Yu; Ying Cheng
Journal:  Psychol Methods       Date:  2019-02-14

4.  Is Differential Noneffortful Responding Associated With Type I Error in Measurement Invariance Testing?

Authors:  Joseph A Rios
Journal:  Educ Psychol Meas       Date:  2021-02-12       Impact factor: 3.088

5.  Random Responding from Participants is a Threat to the Validity of Social Science Research Results.

Authors:  Jason W Osborne; Margaret R Blanchard
Journal:  Front Psychol       Date:  2011-01-21

6.  Parameter Estimation Accuracy of the Effort-Moderated Item Response Theory Model Under Multiple Assumption Violations.

Authors:  Joseph A Rios; James Soland
Journal:  Educ Psychol Meas       Date:  2020-09-02       Impact factor: 3.088

