
How IRT can solve problems of ipsative data in forced-choice questionnaires.

Anna Brown, Alberto Maydeu-Olivares.

Abstract

In multidimensional forced-choice (MFC) questionnaires, items measuring different attributes are presented in blocks, and participants have to rank-order the items within each block (fully or partially). Such comparative formats can reduce the impact of numerous response biases that often affect single-stimulus items (also known as rating or Likert scales). However, if scored with traditional methodology, MFC instruments produce ipsative data, whereby all individuals have a common total test score. Ipsative scoring distorts individual profiles (it is impossible to achieve all high or all low scale scores), construct validity (covariances between scales must sum to zero), criterion-related validity (validity coefficients must sum to zero), and reliability estimates. We argue that these problems are caused by inadequate scoring of forced-choice items and advocate the use of item response theory (IRT) models based on an appropriate response process for comparative data, such as Thurstone's law of comparative judgment. We show that when Thurstonian IRT modeling is applied (Brown & Maydeu-Olivares, 2011), even existing forced-choice questionnaires with challenging features can be scored adequately, and that the IRT-estimated scores are free from the problems of ipsative data.
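The ipsative properties described in the abstract follow directly from the arithmetic of rank scoring: each person assigns the same set of ranks within every block, so totals are constant and the scale-score covariance matrix sums to zero. A minimal sketch of this (a hypothetical questionnaire with made-up block sizes, not the authors' data) can be simulated with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 5 respondents rank 4 items within each of 3 blocks.
# Traditional (ipsative) scoring assigns each item its within-block rank 1..4.
n_persons, n_blocks, block_size = 5, 3, 4
ranks = np.stack([
    np.stack([rng.permutation(block_size) + 1 for _ in range(n_blocks)])
    for _ in range(n_persons)
])  # shape: (persons, blocks, items per block)

# Every person's total test score is the same constant:
# the sum of ranks 1..4 per block, times the number of blocks.
totals = ranks.sum(axis=(1, 2))
print(totals)  # every entry equals 3 * (1 + 2 + 3 + 4) = 30

# If item j in each block measures attribute j, scale scores are the
# per-attribute rank sums. Because each person's scale scores sum to the
# same constant, the scale covariances must sum to zero.
scales = ranks.sum(axis=1)               # (persons, attributes)
cov = np.cov(scales, rowvar=False)       # 4 x 4 covariance matrix
print(round(cov.sum(), 10))              # 0.0
```

This is why an unusually high score on one scale forces lower scores elsewhere, regardless of the respondent's actual standing on the attributes; the Thurstonian IRT approach avoids this by modeling the comparative responses directly rather than summing ranks.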

Year:  2012        PMID: 23148475     DOI: 10.1037/a0030641

Source DB:  PubMed          Journal:  Psychol Methods        ISSN: 1082-989X


Related articles: 16 in total

1.  Comparing Traditional and IRT Scoring of Forced-Choice Tests.

Authors:  Pedro M Hontangas; Jimmy de la Torre; Vicente Ponsoda; Iwin Leenen; Daniel Morillo; Francisco J Abad
Journal:  Appl Psychol Meas       Date:  2015-05-19

2.  Item Response Models for Forced-Choice Questionnaires: A Common Framework.

Authors:  Anna Brown
Journal:  Psychometrika       Date:  2014-12-10       Impact factor: 2.500

3.  Linking Methods for the Zinnes-Griggs Pairwise Preference IRT Model.

Authors:  Philseok Lee; Seang-Hwane Joo; Stephen Stark
Journal:  Appl Psychol Meas       Date:  2016-11-04

4.  Advancing the Bayesian Approach for Multidimensional Polytomous and Nominal IRT Models: Model Formulations and Fit Measures.

Authors:  Jinsong Chen
Journal:  Appl Psychol Meas       Date:  2016-09-24

5.  Influence of Context on Item Parameters in Forced-Choice Personality Assessments.

Authors:  Yin Lin; Anna Brown
Journal:  Educ Psychol Meas       Date:  2016-04-28       Impact factor: 2.821

6.  [Review] Constructing validity: New developments in creating objective measuring instruments.

Authors:  Lee Anna Clark; David Watson
Journal:  Psychol Assess       Date:  2019-03-21

7.  Fit Indices for Measurement Invariance Tests in the Thurstonian IRT Model.

Authors:  HyeSun Lee; Weldon Z Smith
Journal:  Appl Psychol Meas       Date:  2019-12-26

8.  Assessment of Differential Statement Functioning in Ipsative Tests With Multidimensional Forced-Choice Items.

Authors:  Xue-Lan Qiu; Wen-Chung Wang
Journal:  Appl Psychol Meas       Date:  2020-10-21

9.  Study Protocol on Intentional Distortion in Personality Assessment: Relationship with Test Format, Culture, and Cognitive Ability.

Authors:  Eline Van Geert; Altan Orhon; Iulia A Cioca; Rui Mamede; Slobodan Golušin; Barbora Hubená; Daniel Morillo
Journal:  Front Psychol       Date:  2016-06-28

10.  Development and Validation of the Behavioral Tendencies Questionnaire.

Authors:  Nicholas T Van Dam; Anna Brown; Tom B Mole; Jake H Davis; Willoughby B Britton; Judson A Brewer
Journal:  PLoS One       Date:  2015-11-04       Impact factor: 3.240

