
Evaluating Random Error in Clinician-Administered Surveys: Theoretical Considerations and Clinical Applications of Interobserver Reliability and Agreement.

Rebecca J Bennett1,2, Dunay S Taljaard2,3, Michelle Olaithe4, Chris Brennan-Jones1,2, Robert H Eikelboom1,2,5.   

Abstract

PURPOSE: The purpose of this study is to raise awareness of interobserver concordance and the differences between interobserver reliability and agreement when evaluating the responsiveness of a clinician-administered survey and, specifically, to demonstrate the clinical implications of data type (nominal/categorical, ordinal, interval, or ratio) and statistical index selection (for example, Cohen's kappa, Krippendorff's alpha, or intraclass correlation).
METHODS: In this prospective cohort study, 3 clinical audiologists, who were masked to each other's scores, administered the Practical Hearing Aid Skills Test-Revised to 18 adult owners of hearing aids. Interobserver concordance was examined using a range of reliability and agreement statistical indices.
RESULTS: The importance of statistical index selection was demonstrated with a worked example, wherein the level of interobserver concordance achieved varied from "no agreement" to "almost perfect agreement" depending on the data type and statistical index selected.
CONCLUSIONS: This study demonstrates that the methodology used to evaluate survey score concordance can influence the statistical results obtained and thus affect clinical interpretations.
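The abstract's central point, that the choice of concordance index can change the clinical interpretation of the same rater data, can be illustrated with a minimal sketch. The example below compares raw percent agreement with unweighted Cohen's kappa on a small set of hypothetical two-rater ordinal scores (not the study's data); the two indices yield noticeably different values because kappa corrects for chance agreement.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters give identical scores."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa: chance-corrected agreement for nominal data."""
    n = len(r1)
    po = percent_agreement(r1, r2)          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement from each rater's marginal score frequencies.
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical item scores (0-2) from two masked raters.
rater_a = [2, 2, 1, 2, 0, 1, 2, 2, 1, 2]
rater_b = [2, 2, 2, 2, 0, 1, 2, 1, 1, 2]

print(f"percent agreement: {percent_agreement(rater_a, rater_b):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")       # 0.63
```

On these data, percent agreement (0.80) suggests strong concordance, while kappa (0.63) would be read as only "substantial" on the Landis-Koch scale, and treating the ordinal scores as interval data with an intraclass correlation would give yet another value, which is the interpretive hazard the study describes.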


Year:  2017        PMID: 28768319     DOI: 10.1044/2017_AJA-16-0100

Source DB:  PubMed          Journal:  Am J Audiol        ISSN: 1059-0889            Impact factor:   1.493


  1 in total

1.  A multilinguistic analysis of spelling among children with cochlear implants.

Authors:  Nancy Quick; Melody Harrison; Karen Erickson
Journal:  J Deaf Stud Deaf Educ       Date:  2019-01-01
