Pathologists should probably forget about kappa. Percent agreement, diagnostic specificity and related metrics provide more clinically applicable measures of interobserver variability.

Alberto M Marchevsky, Ann E Walts, Birgit I Lissenberg-Witte, Erik Thunnissen.

Abstract

Kappa statistics have been widely used in the pathology literature to compare interobserver diagnostic variability (IOV) among different pathologists, but there has been limited discussion about the clinical significance of kappa scores. Five representative and recent pathology papers were queried using clinically relevant, specific questions to learn how IOV was evaluated and how the clinical applicability of results was interpreted. The papers supported our anecdotal impression that pathologists usually assess IOV using Cohen's or Fleiss' kappa statistics and interpret the results using some variation of the scale proposed by Landis and Koch. The papers did not cite or propose specific guidelines to comment on the clinical applicability of results. The solutions proposed to decrease IOV included the development of better diagnostic criteria and additional educational efforts, but the possibility that the entities themselves represented a continuum of morphologic findings rather than distinct diagnostic categories was not considered in any of the studies. A dataset from a previous study of IOV reported by Thunnissen et al. was recalculated to estimate percent agreement among 19 international lung pathologists for the diagnosis of 74 challenging lung neuroendocrine neoplasms. Kappa scores and diagnostic sensitivity, specificity, positive and negative predictive values were calculated using the majority consensus diagnosis for each case as the gold reference diagnosis for that case. Diagnostic specificity estimates among multiple pathologists were > 90%, although kappa scores were considerably more variable. We explain why kappa scores are of limited clinical applicability in pathology and propose the use of positive and negative percent agreement and diagnostic specificity against a gold reference diagnosis to evaluate IOV among two and multiple raters, respectively.
Copyright © 2020 Elsevier Inc. All rights reserved.
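
To make the comparison concrete, the following Python sketch computes Cohen's kappa between a rater and a consensus diagnosis, alongside the percent-agreement metrics (positive/negative percent agreement, PPV, NPV) that the abstract advocates. The data, category labels, and function names are hypothetical illustrations only; they are not the Thunnissen et al. dataset or the authors' code.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: observed agreement corrected for chance agreement.
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_obs - p_exp) / (1 - p_exp)

def vs_gold(rater, gold, positive):
    # Percent agreement and predictive values against a majority-consensus
    # "gold" reference diagnosis, per the approach proposed in the abstract.
    tp = sum(r == positive and g == positive for r, g in zip(rater, gold))
    tn = sum(r != positive and g != positive for r, g in zip(rater, gold))
    fp = sum(r == positive and g != positive for r, g in zip(rater, gold))
    fn = sum(r != positive and g == positive for r, g in zip(rater, gold))
    return {
        "PPA (sensitivity)": tp / (tp + fn),
        "NPA (specificity)": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical two-category calls ("SCLC" vs "carcinoid") on 10 cases.
gold  = ["SCLC", "SCLC", "carcinoid", "SCLC", "carcinoid",
         "carcinoid", "SCLC", "carcinoid", "SCLC", "carcinoid"]
rater = ["SCLC", "carcinoid", "carcinoid", "SCLC", "carcinoid",
         "SCLC", "SCLC", "carcinoid", "SCLC", "carcinoid"]

print("kappa:", cohens_kappa(rater, gold))    # 0.60 for these data
print(vs_gold(rater, gold, positive="SCLC"))  # all four metrics are 0.80 here

Note how the two views can diverge: kappa depends on the marginal frequencies of each diagnosis, so the same raw agreement can yield very different kappa scores across case mixes, whereas percent agreement and specificity against a gold reference read directly as clinically interpretable proportions.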

Keywords:  Diagnostic accuracy; Evidence-based pathology; Interobserver variability; Kappa statistics

Year:  2020        PMID: 32623312     DOI: 10.1016/j.anndiagpath.2020.151561

Source DB:  PubMed          Journal:  Ann Diagn Pathol        ISSN: 1092-9134            Impact factor:   2.090


Related articles: 3 in total

1.  Accuracy of intraoral digital radiography in assessing maxillary sinus-root relationship compared to CBCT.

Authors:  Esraa Ahmed Eid; Fatma Mostafa El-Badawy; Walaa Mohamed Hamed
Journal:  Saudi Dent J       Date:  2022-04-28

2.  Reliability of histopathologic diagnosis of fibrotic interstitial lung disease: an international collaborative standardization project.

Authors:  Robert Camp; Maxwell L Smith; Brandon T Larsen; Anja C Roden; Carol Farver; Andre L Moreira; Richard Attanoos; Raghavendra Pillappa; Irene Sansano; Alexandre Todorovic Fabro; Robert J Homer
Journal:  BMC Pulm Med       Date:  2021-06-01       Impact factor: 3.317

3.  Histological interpretation of differentiated vulvar intraepithelial neoplasia (dVIN) remains challenging - observations from a bi-national ring-study.

Authors:  Elf de Jonge; Mieke R Van Bockstal; Luthy S M Wong-Alcala; Suzanne Wilhelmus; Lex A C F Makkus; Katrien Schelfout; Koen K Van de Vijver; Sander Smits; Etienne Marbaix; Shatavisha Dasgupta; Senada Koljenović; Folkert J van Kemenade; Patricia C Ewing-Graham
Journal:  Virchows Arch       Date:  2021-03-08       Impact factor: 4.064
