M H Stoler, M Schiffman. University of Virginia Health System, Division of Surgical Pathology and Cytopathology, Box 800214, Charlottesville, VA 22908, USA. mhs2e@virginia.edu
Abstract
CONTEXT: Despite a critical presumption of reliability, standards of interpathologist agreement have not been well defined for interpretation of cervical pathology specimens.

OBJECTIVE: To determine the reproducibility of cytologic, colposcopic histologic, and loop electrosurgical excision procedure (LEEP) histologic cervical specimen interpretations among multiple well-trained observers.

DESIGN AND SETTING: The Atypical Squamous Cells of Undetermined Significance-Low-grade Squamous Intraepithelial Lesion (ASCUS-LSIL) Triage Study (ALTS), an ongoing US multicenter clinical trial.

SUBJECTS: From women enrolled in ALTS during 1996-1998, 4948 monolayer cytologic slides, 2237 colposcopic biopsies, and 535 LEEP specimens were interpreted by 7 clinical center pathologists and 4 Pathology Quality Control Group (QC) pathologists.

MAIN OUTCOME MEASURES: Kappa values calculated for comparison of the original clinical center interpretation and the first QC reviewer's masked interpretation of specimens.

RESULTS: For all 3 specimen types, the clinical center pathologists rendered significantly more severe interpretations than did reviewing QC pathologists. The reproducibility of monolayer cytologic interpretations was moderate (kappa = 0.46; 95% confidence interval [CI], 0.44-0.48) and equivalent to the reproducibility of punch biopsy histopathologic interpretations (kappa = 0.46; 95% CI, 0.43-0.49) and LEEP histopathologic interpretations (kappa = 0.49; 95% CI, 0.44-0.55). The lack of reproducibility of histopathology was most evident for less severe interpretations.

CONCLUSIONS: Interpretive variability is substantial for all types of cervical specimens. Histopathology of cervical biopsies is not more reproducible than monolayer cytology, and even the interpretation of LEEP results is variable. Given the degree of irreproducibility that exists among well-trained pathologists, realistic performance expectations should guide use of their interpretations.
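The agreement statistic reported throughout the abstract is the kappa coefficient, which corrects observed rater agreement for the agreement expected by chance. The sketch below shows the standard Cohen's kappa calculation on a hypothetical confusion matrix of two pathologists' grades; the matrix values are invented for illustration and are not ALTS data.

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix of two raters' calls.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_observed is the fraction of cases on the diagonal (exact agreement)
    and p_expected is the chance agreement implied by the marginal totals.
    """
    total = sum(sum(row) for row in matrix)
    # Observed agreement: both raters assigned the same category.
    p_obs = sum(matrix[i][i] for i in range(len(matrix))) / total
    # Chance agreement from the row and column marginals.
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(col) for col in zip(*matrix)]
    p_exp = sum(r * c for r, c in zip(row_totals, col_totals)) / total ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: two raters grading 100 specimens into
# 3 ordered categories (e.g. negative / low-grade / high-grade).
ratings = [
    [40, 5, 0],
    [8, 20, 4],
    [2, 6, 15],
]
print(round(cohens_kappa(ratings), 2))  # prints 0.6
```

On a conventional scale, values near 0.4-0.6 (as reported here for all three specimen types) indicate only moderate agreement beyond chance.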