
On the Performance of the Marginal Homogeneity Test to Detect Rater Drift.

Adrienne Sgammato, John R. Donoghue.

Abstract

When constructed response items are administered repeatedly, "trend scoring" can be used to test for rater drift. In trend scoring, raters rescore responses from the previous administration. Two simulation studies evaluated the utility of Stuart's Q measure of marginal homogeneity as a way of evaluating rater drift when monitoring trend scoring. In the first study, data were generated based on trend scoring tables obtained from an operational assessment. The second study tightly controlled table margins to disentangle certain features present in the empirical data. In addition to Q, the paired t test was included as a comparison, because of its widespread use in monitoring trend scoring. Sample size, number of score categories, interrater agreement, and symmetry/asymmetry of the margins were manipulated. For identical margins, both statistics had good Type I error control. For a unidirectional shift in margins, both statistics had good power. As expected, when shifts in the margins were balanced across categories, the t test had little power. Q demonstrated good power for all conditions and identified almost all items identified by the t test. Q shows substantial promise for monitoring of trend scoring.
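The statistic studied in the abstract, Stuart's Q (the Stuart-Maxwell test), checks whether the row and column margins of a square table of paired ratings (original score x rescore) are equal. A minimal sketch of that computation is below; the function name and table layout are assumptions for illustration, not code from the paper. For a k x k table, it forms the vector of margin differences for k - 1 categories, the corresponding covariance matrix, and a chi-square statistic with k - 1 degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

def stuart_maxwell_q(table):
    """Stuart's Q (Stuart-Maxwell) test of marginal homogeneity for a
    square k x k table of paired ratings (rows: time 1, cols: time 2)."""
    t = np.asarray(table, dtype=float)
    k = t.shape[0]
    row = t.sum(axis=1)                  # time-1 margin
    col = t.sum(axis=0)                  # time-2 margin
    d = (row - col)[:-1]                 # drop one category; margins both sum to n
    s = np.zeros((k - 1, k - 1))
    for i in range(k - 1):
        for j in range(k - 1):
            if i == j:
                s[i, j] = row[i] + col[i] - 2.0 * t[i, i]
            else:
                s[i, j] = -(t[i, j] + t[j, i])
    q = float(d @ np.linalg.solve(s, d))  # Q = d' S^{-1} d
    p = float(chi2.sf(q, k - 1))          # chi-square, k - 1 df
    return q, p
```

For a 2 x 2 table this reduces to McNemar's test (reference 1 below), Q = (n01 - n10)^2 / (n01 + n10), which is one way to sanity-check the implementation. Unlike the paired t test, Q is sensitive to offsetting shifts in the margins that leave the mean score unchanged, which is consistent with the power pattern the abstract reports.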

Keywords:  constructed response scoring; monitoring scoring; rater agreement; rater effects; trend scoring

Year:  2017        PMID: 29881127      PMCID: PMC5978607          DOI: 10.1177/0146621617730390

Source DB:  PubMed          Journal:  Appl Psychol Meas        ISSN: 0146-6216


  5 in total

1.  Note on the sampling error of the difference between correlated proportions or percentages.

Authors:  Q McNemar
Journal:  Psychometrika       Date:  1947-06       Impact factor: 2.500

2.  High agreement but low kappa: I. The problems of two paradoxes.

Authors:  A R Feinstein; D V Cicchetti
Journal:  J Clin Epidemiol       Date:  1990       Impact factor: 6.437

3.  Another look at interrater agreement.

Authors:  R Zwick
Journal:  Psychol Bull       Date:  1988-05       Impact factor: 17.737

4.  The measurement of observer agreement for categorical data.

Authors:  J R Landis; G G Koch
Journal:  Biometrics       Date:  1977-03       Impact factor: 2.571

5.  Bias, prevalence and kappa.

Authors:  T Byrt; J Bishop; J B Carlin
Journal:  J Clin Epidemiol       Date:  1993-05       Impact factor: 6.437