Evidence That Selecting an Appropriate Item Response Theory-Based Approach to Scoring Surveys Can Help Avoid Biased Treatment Effect Estimates.

James Soland.

Abstract

Considerable thought is often put into designing randomized control trials (RCTs). From power analyses and complex sampling designs implemented preintervention to nuanced quasi-experimental models used to estimate treatment effects postintervention, RCT design can be quite complicated. Yet when psychological constructs measured using survey scales are the outcome of interest, measurement is often an afterthought, even in RCTs. The purpose of this study is to examine how choices about scoring and calibration of survey item responses affect recovery of true treatment effects. Specifically, simulation and empirical studies are used to compare the performance of sum scores, which are frequently used in RCTs in psychology and education, to that of approaches rooted in item response theory (IRT) that better account for the longitudinal, multigroup nature of the data. The results from this study indicate that selecting an IRT model that matches the nature of the data can significantly reduce bias in treatment effect estimates and reduce standard errors.
© The Author(s) 2021.
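The abstract contrasts two ways of scoring a survey scale: unweighted sum scores versus latent-trait estimates from an IRT model. As a minimal sketch of that distinction (not the paper's actual longitudinal, multigroup models), the snippet below simulates binary item responses under a simple Rasch model and compares each respondent's sum score to a per-person maximum-likelihood ability estimate; `rasch_mle` is a hypothetical helper written for this illustration, not a function from any IRT package.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate binary responses from a Rasch model:
# P(endorse item j | person i) = 1 / (1 + exp(-(theta_i - b_j)))
n_persons, n_items = 200, 10
theta = rng.normal(0, 1, n_persons)      # latent trait ("ability")
b = np.linspace(-1.5, 1.5, n_items)      # item difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((n_persons, n_items)) < p).astype(int)

# Sum score: the unweighted count of endorsed items, as commonly used in RCTs.
sum_scores = responses.sum(axis=1)

def rasch_mle(resp_row, b, n_iter=25):
    """Newton-Raphson MLE of ability for one response vector.
    The MLE is undefined for all-0 or all-1 patterns, so return NaN there."""
    r = resp_row.sum()
    if r == 0 or r == len(b):
        return np.nan
    est = 0.0
    for _ in range(n_iter):
        p_i = 1 / (1 + np.exp(-(est - b)))
        grad = r - p_i.sum()           # score function (first derivative)
        info = (p_i * (1 - p_i)).sum() # Fisher information (negative second derivative)
        est += grad / info
    return est

# IRT-based score: per-person latent-trait estimate.
theta_hat = np.array([rasch_mle(row, b) for row in responses])
```

Under the Rasch model the sum score is a sufficient statistic, so the IRT estimate is a monotone (but nonlinear) transformation of it; the two scorings diverge more sharply once items get discrimination parameters or once measurement non-invariance across groups or time points enters, which is the situation the article studies.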

Keywords:  item response theory (IRT); measurement; randomized control trials; statistical power; survey scales

Year:  2021        PMID: 35185164      PMCID: PMC8850769          DOI: 10.1177/00131644211007551

Source DB:  PubMed          Journal:  Educ Psychol Meas        ISSN: 0013-1644            Impact factor:   2.821


Related articles (20 in total):

1.  The study designed by a committee: design of the Multisite Violence Prevention Project.

Authors:  David B Henry; Albert D Farrell
Journal:  Am J Prev Med       Date:  2004-01       Impact factor: 5.043

2.  A return potential measure of setting norms for aggression.

Authors:  David B Henry; Jennifer Cartland; Holly Ruchross; Kathleen Monahan
Journal:  Am J Community Psychol       Date:  2004-06

3.  A Hierarchical Multi-Unidimensional IRT Approach for Analyzing Sparse, Multi-Group Data for Integrative Data Analysis.

Authors:  Yan Huo; Jimmy de la Torre; Eun-Young Mun; Su-Young Kim; Anne E Ray; Yang Jiao; Helene R White
Journal:  Psychometrika       Date:  2014-09-30       Impact factor: 2.500

4.  Measurement model choice influenced randomized controlled trial results.

Authors:  Rosalie Gorter; Jean-Paul Fox; Adri Apeldoorn; Jos Twisk
Journal:  J Clin Epidemiol       Date:  2016-07-07       Impact factor: 6.437

5.  Specifying Ability Growth Models Using a Multidimensional Item Response Model for Repeated Measures Categorical Ordinal Item Response Data.

Authors:  Insu Paek; Zhen Li; Hyun-Jeong Park
Journal:  Multivariate Behav Res       Date:  2016-06-20       Impact factor: 5.923

6.  Risk and direct protective factors for youth violence: results from the Centers for Disease Control and Prevention's Multisite Violence Prevention Project.

Authors:  David B Henry; Patrick H Tolan; Deborah Gorman-Smith; Michael E Schoeny
Journal:  Am J Prev Med       Date:  2012-08       Impact factor: 5.043

7.  Supporting families in a high-risk setting: proximal effects of the SAFEChildren preventive intervention.

Authors:  Patrick Tolan; Deborah Gorman-Smith; David Henry
Journal:  J Consult Clin Psychol       Date:  2004-10

8.  Response shifts in mental health interventions: an illustration of longitudinal measurement invariance.

Authors:  Marjolein Fokkema; Niels Smits; Henk Kelderman; Pim Cuijpers
Journal:  Psychol Assess       Date:  2013-01-21

9.  The ecological effects of universal and selective violence prevention programs for middle school students: a randomized trial.

Authors: 
Journal:  J Consult Clin Psychol       Date:  2009-06

10.  [Review] Thinking twice about sum scores.

Authors:  Daniel McNeish; Melissa Gordon Wolf
Journal:  Behav Res Methods       Date:  2020-12
