
Modelling covariate effects in observer agreement studies: the case of nominal scale agreement.

P Graham.

Abstract

Correction for chance-expected agreement has become an accepted technique in the analysis of observer agreement data and may be particularly useful when the level of agreement achieved in different populations is compared. However, formal methods for making comparisons of chance-corrected agreement or, more generally, for studying the effects of covariates on chance-corrected agreement have not received much attention. For nominal scale agreement data we show how Tanner and Young's model for observer agreement can be applied to this problem. The models discussed can be fitted using existing software, and certain model parameters have interpretations in terms of positive and negative agreement odds ratios. The proposed methodology facilitates investigation of issues such as confounding of covariate effects and interaction between covariates in their effect on chance-corrected agreement. The methods outlined therefore allow observer agreement data to be analyzed in a manner strongly analogous to the logistic modelling of the association between disease and suspected risk factors. The methods are illustrated using data on the comparability of primary and proxy respondent reports of the primary respondent's participation in physically vigorous leisure-time activity.
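The chance-corrected agreement the abstract refers to is most familiar as Cohen's kappa for a nominal rating table. As a point of orientation only, here is a minimal sketch of that quantity for a two-observer table; the paper itself fits Tanner and Young's log-linear models rather than this direct estimator, and the counts below are hypothetical, not the study's data.

```python
# Minimal sketch of chance-corrected agreement (Cohen's kappa) for a
# nominal two-observer table. table[i][j] = number of subjects rated
# category i by observer 1 and category j by observer 2.

def cohens_kappa(table):
    n = sum(sum(row) for row in table)  # total number of subjects
    k = len(table)                      # number of nominal categories
    # Observed proportion of agreement: the diagonal cells.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance-expected agreement from the two marginal distributions.
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row_marg[i] * col_marg[i] for i in range(k))
    # Kappa: observed agreement in excess of chance, rescaled to [., 1].
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 primary-vs-proxy table (yes/no vigorous activity):
table = [[40, 10],
         [5, 45]]
print(round(cohens_kappa(table), 3))  # → 0.7
```

Modelling covariate effects, as the paper proposes, amounts to letting the agreement structure of such a table (expressed through agreement odds ratios in a log-linear model) vary with covariates, rather than comparing kappa estimates computed separately in each stratum.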


Year: 1995    PMID: 7724915    DOI: 10.1002/sim.4780140308

Source DB: PubMed    Journal: Stat Med    ISSN: 0277-6715    Impact factor: 2.373


  5 in total

1.  Inter-rater agreement for diagnoses of epilepsy in pregnant women.

Authors:  Shahram Khoshbin; Amy Herring; Gregory L Holmes; Donald Schomer; Daniel Hoch; Elizabeth C Dooling; Eileen P G Vining; Lewis B Holmes
Journal:  Epilepsy Behav       Date:  2013-02-15       Impact factor: 2.937

2.  Concordance and consistency of answers to the self-delivered ESPAD questionnaire on use of psychoactive substances.

Authors:  Sabrina Molinaro; Valeria Siciliano; Olivia Curzio; Francesca Denoth; Fabio Mariani
Journal:  Int J Methods Psychiatr Res       Date:  2012-02-23       Impact factor: 4.035

3.  Improving the reliability of diagnostic tests in population-based agreement studies.

Authors:  Kerrie P Nelson; Don Edwards
Journal:  Stat Med       Date:  2010-03-15       Impact factor: 2.373

4.  Statistical evaluations of the reproducibility and reliability of 3-tesla high resolution magnetization transfer brain images: a pilot study on healthy subjects.

Authors:  Kelly H Zou; Hongyan Du; Shawn Sidharthan; Lisa M Detora; Yunmei Chen; Ann B Ragin; Robert R Edelman; Ying Wu
Journal:  Int J Biomed Imaging       Date:  2010-02-09

5.  Cohen's Kappa Coefficient as a Measure to Assess Classification Improvement following the Addition of a New Marker to a Regression Model.

Authors:  Barbara Więckowska; Katarzyna B Kubiak; Paulina Jóźwiak; Wacław Moryson; Barbara Stawińska-Witoszyńska
Journal:  Int J Environ Res Public Health       Date:  2022-08-17       Impact factor: 4.614

