| Literature DB >> 29795880 |
Abstract
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling techniques. In this article, we propose a technique similar to the classical pairwise t test for means, which is based on a large-sample linear approximation of the agreement coefficient. We illustrate the use of this technique with several known agreement coefficients, including Cohen's kappa, Gwet's AC1, Fleiss's generalized kappa, Conger's generalized kappa, Krippendorff's alpha, and the Brennan-Prediger coefficient. The proposed method is very flexible, can accommodate several types of correlation structures between coefficients, and requires neither advanced statistical modeling skills nor considerable computer programming experience. The validity of this method is tested with a Monte Carlo simulation.
Keywords: Gwet’s AC1; agreement coefficients; correlated agreement coefficients; correlated kappas; kappa significance test; raters’ agreement; testing correlated kappas
Year: 2015 PMID: 29795880 PMCID: PMC5965565 DOI: 10.1177/0013164415596420
Source DB: PubMed Journal: Educ Psychol Meas ISSN: 0013-1644 Impact factor: 2.821
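The paired-comparison idea described in the abstract can be sketched most simply with the Brennan-Prediger coefficient, which is an exact mean of per-subject terms, so its "linearization" is trivial: testing the difference of two coefficients computed on the same subjects reduces to a paired t test on per-subject components. This is a minimal illustrative sketch, not code from the article; the data, function names, and the choice of coefficient are all hypothetical.

```python
# Compare two correlated Brennan-Prediger coefficients with a paired t test
# on per-subject linearized components.
# For q categories: BP = (pa - 1/q) / (1 - 1/q), where pa is the mean
# per-subject agreement indicator.
import math
from statistics import mean, stdev

def bp_components(ratings_a, ratings_b, q):
    """Per-subject linear components of the Brennan-Prediger coefficient."""
    return [(float(a == b) - 1.0 / q) / (1.0 - 1.0 / q)
            for a, b in zip(ratings_a, ratings_b)]

def paired_test(comp1, comp2):
    """Coefficients and t statistic for their difference on shared subjects."""
    d = [x - y for x, y in zip(comp1, comp2)]
    n = len(d)
    sd = stdev(d)  # sample standard deviation of paired differences
    t = mean(d) / (sd / math.sqrt(n)) if sd > 0 else float("inf")
    return mean(comp1), mean(comp2), t

# Toy data: a reference rater compared with two other raters on 10 subjects
ref = [1, 1, 2, 2, 3, 3, 1, 2, 3, 1]
r1  = [1, 1, 2, 2, 3, 1, 1, 2, 3, 1]   # disagrees on 1 subject
r2  = [1, 2, 2, 1, 3, 1, 1, 2, 3, 2]   # disagrees on 4 subjects

c1 = bp_components(ref, r1, q=3)
c2 = bp_components(ref, r2, q=3)
bp1, bp2, t = paired_test(c1, c2)
print(round(bp1, 3), round(bp2, 3), round(t, 3))  # → 0.85 0.4 1.964
```

Because the two coefficients share the reference rater and the subject sample, the paired differences absorb the between-coefficient correlation automatically, which is the intuition behind the article's more general treatment of other coefficients (Cohen's kappa, AC1, etc.) via large-sample linear approximations.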