Emily Jones, Solomon Woldeyohannes, Fernanda Castillo-Alcala, Brandon N Lillie, Mee-Ja M Sula, Helen Owen, John Alawneh, Rachel Allavena.
Abstract
Inter-pathologist variation is widely recognized across human and veterinary pathology and is often compounded by missing animal or clinical information on pathology submission forms. Variation in pathologists' threshold levels for resident inflammatory cells in the tissue of interest can further decrease inter-pathologist agreement. This study applied a predictive modeling tool to bladder histology slides that were assessed by four pathologists: first without animal and clinical information, then with this information, and finally using the predictive tool. All three assessments were performed twice, using digital whole-slide images (WSI) and then glass slides. Results showed marked variation in pathologists' interpretation of bladder slides, with kappa agreement values of 7-37% without any animal or clinical information, 23-37% with animal signalment and history, and 31-42% when our predictive tool was applied, across digital WSI and glass slides. The concurrence of test pathologists to the reference diagnosis was 60% overall. This study provides a starting point for the use of predictive modeling in standardizing pathologist agreement in veterinary pathology. It also highlights the importance of high-quality whole-slide imaging to limit the effect of digitization on inter-pathologist agreement and the benefit of continued standardization of tissue assessment in veterinary pathology.
Keywords: bladder disease; canine; concurrence; feline; glass slides; inter-pathologist agreement; predictive modeling; veterinary pathology; whole-slide images
Year: 2022 PMID: 35878384 PMCID: PMC9323256 DOI: 10.3390/vetsci9070367
Source DB: PubMed Journal: Vet Sci ISSN: 2306-7381
Figure 1. Sequence of assessment of the slide set by each pathologist.
Histological criteria to be assessed by the pathologists in worksheets one and two (without and with signalment and clinical history).
| Column Heading | Potential Answers * |
|---|---|
| Slide code | Provided |
| Ulceration | Yes, No |
| SM_oedema | Yes, No |
| SM_haem | Yes, No |
| SM_inflamm | Yes, No |
| SM_inflamm_type | Lymphocytic |
| Det_inflamm | Yes, No |
| Det_inflamm_type | Lymphocytic |
| Organisms | Yes, No |
| Morphological diagnosis | Free form box |
| Etiological diagnosis | Normal |
| Comments | Free form box |
Det = detrusor muscle/muscularis; haem = hemorrhage; inflamm = inflammation; SM = submucosal. * Potential answers provided from a drop-down box; no free text allowed unless otherwise stated.
Histological criteria to be assessed by the pathologists in worksheets three and four (canine and feline), using the predictive tool.
| Column Heading | Potential Answers * |
|---|---|
| Slide code | Provided |
| Urothelial ulceration | Yes, No |
| Submucosal lymphoid aggregates | Yes, No |
| Neutrophilic submucosal inflammation | Yes, No |
| Urothelial inflammation | Yes, No |
| Amount of submucosal hemorrhage | Mild |
| Your diagnosis | Normal |
| Comments | Free form box |
* Potential answers provided from a drop-down box; no free text allowed unless otherwise stated.
Digital whole-slide image count data from all study pathologists, P1–P4.
| | | No Animal Information | | | | Signalment and History | | | | With Predictive Tool | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Diagnosis | Reference | P1 | P2 | P3 | P4 | P1 | P2 | P3 | P4 | P1 | P2 | P3 | P4 |
| Cystitis | 7 | 7 | 14 | 17 | 6 | 6 | 14 | 11 | 5 | 7 | 14 | 11 | 4 |
| Neoplasia | 6 | 4 | 4 | 3 | 3 | 4 | 4 | 3 | 3 | 4 | 4 | 3 | 3 |
| Urolithiasis | 6 | 9 | 0 | 1 | 6 | 9 | 0 | 8 | 5 | 7 | 0 | 8 | 3 |
| Normal | 6 | 5 | 2 | 2 | 1 | 6 | 2 | 2 | 5 | 5 | 2 | 2 | 5 |
| Other | 0 | 0 | 5 | 2 | 3 | 0 | 5 | 1 | 3 | 2 | 5 | 1 | 4 |
| Total | 25 | 25 | 25 | 25 | 19 * | 25 | 25 | 25 | 21 * | 25 | 25 | 25 | 17 * |
* Technical issues prevented P4 from viewing some slides.
Glass slide count data from all study pathologists, P1–P4.
| | | No Animal Information | | | | Signalment and History | | | | With Predictive Tool | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Diagnosis | Reference | P1 | P2 | P3 | P4 | P1 | P2 | P3 | P4 | P1 | P2 | P3 | P4 |
| Cystitis | 7 | 6 | 15 | 15 | 10 | 6 | 15 | 9 | 10 | 6 | 14 | 11 | 11 |
| Neoplasia | 6 | 3 | 3 | 2 | 3 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Urolithiasis | 6 | 8 | 0 | 1 | 3 | 9 | 0 | 4 | 3 | 8 | 0 | 4 | 1 |
| Normal | 6 | 5 | 2 | 4 | 4 | 4 | 0 | 4 | 4 | 5 | 2 | 4 | 4 |
| Other | 0 | 0 | 2 | 0 | 2 | 0 | 0 | 2 | 2 | 0 | 3 | 0 | 3 |
| Total | 25 * | 22 | 22 | 22 | 22 | 21 ** | 22 | 22 | 22 | 22 | 22 | 22 | 22 |
* Three blocks from the WSI part of the study were unavailable for sectioning glass slides. ** No data recorded for one case in the first spreadsheet.
Inter-pathologist agreement for the three slide-reading conditions, diagnosing digital whole-slide images of canine and feline bladder tissue.
| Condition / Diagnosis | Kappa | Z-Value | p-Value |
|---|---|---|---|
| No animal information | | | |
| overall | 0.074 | 1.5 | 0.134 |
| cystitis | 0.01 | 0.118 | 0.906 |
| neoplasia | 0.558 | 6.833 | <0.001 |
| normal | 0.204 | 2.501 | 0.012 |
| other | −0.02 | −0.25 | 0.803 |
| urolithiasis | −0.159 | −1.943 | 0.052 |
| Signalment and history | | | |
| overall | 0.227 | 4.668 | <0.001 |
| cystitis | 0.558 | 6.833 | <0.001 |
| neoplasia | 0.765 | 9.366 | <0.001 |
| normal | 0.268 | 3.278 | 0.001 |
| other | −0.01 | −0.124 | 0.902 |
| urolithiasis | 0.049 | 0.604 | 0.546 |
| Predictive tool probabilities | | | |
| overall | 0.311 | 6.873 | <0.001 |
| cystitis | 0.204 | 2.501 | 0.012 |
| neoplasia | 0.551 | 6.75 | <0.001 |
| normal | 0.391 | 4.788 | <0.001 |
| other | 0.054 | 0.666 | 0.505 |
| urolithiasis | 0.307 | 3.755 | <0.001 |
Fleiss kappa statistics; Z = standard normal score.
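The agreement values above are Fleiss kappa statistics, which extend Cohen's kappa to more than two raters. As a minimal illustrative sketch (not the authors' code or software), Fleiss kappa can be computed from a slides × diagnoses table of rating counts, where each cell holds how many of the raters assigned that diagnosis to that slide:

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for a subjects x categories table of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters.
    """
    N = len(counts)        # number of subjects (slides)
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of diagnosis categories

    # Mean observed agreement across subjects.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Expected chance agreement from marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example (not the study data): 4 pathologists classifying
# 3 slides into (cystitis, neoplasia, normal).
slides = [[4, 0, 0], [2, 1, 1], [0, 4, 0]]
print(round(fleiss_kappa(slides), 3))  # → 0.512
```

Values near 0 indicate chance-level agreement, so the 0.074 "no animal information" figure above reflects near-random concordance between raters.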
Figure 2. Bar plot showing the inter-pathologist agreement kappa statistics with 95% CI for the three slide-reading conditions, diagnosing digital slides of canine and feline bladder tissue.
Inter-pathologist agreement for the three slide-reading conditions, diagnosing glass slides of canine and feline bladder tissue.
| Condition / Diagnosis | Kappa | Z-Value | p-Value |
|---|---|---|---|
| No animal information | | | |
| overall | 0.369 | 6.813 | <0.001 |
| cystitis | 0.362 | 4.163 | <0.001 |
| neoplasia | 0.688 | 7.908 | <0.001 |
| normal | 0.604 | 6.935 | <0.001 |
| urolithiasis | −0.045 | −0.517 | 0.605 |
| Signalment and history | | | |
| overall | 0.371 | 6.901 | <0.001 |
| cystitis | 0.688 | 7.908 | <0.001 |
| neoplasia | 0.688 | 7.908 | <0.001 |
| normal | 0.545 | 6.257 | <0.001 |
| urolithiasis | 0.152 | 1.741 | 0.082 |
| Predictive tool probabilities | | | |
| overall | 0.419 | 7.84 | <0.001 |
| cystitis | 0.604 | 6.935 | <0.001 |
| neoplasia | 0.678 | 7.794 | <0.001 |
| normal | 0.652 | 7.488 | <0.001 |
| urolithiasis | 0.127 | 1.464 | 0.143 |
Fleiss kappa statistics; Z = standard normal score.
Figure 3. Bar plot showing the inter-pathologist agreement kappa statistics with 95% CI for the three slide-reading conditions, diagnosing glass slides of canine and feline bladder tissue.
Concurrence of the four pathologists’ diagnoses of canine and feline bladder tissues compared with the reference diagnosis.
| Condition | Concurrence | LCL | UCL | p-Value | Kappa | p-Value |
|---|---|---|---|---|---|---|
| All data | 0.604 | 0.562 | 0.645 | <0.001 | 0.460 | <0.001 |
| No animal information | 0.548 | 0.474 | 0.621 | <0.001 | 0.384 | <0.001 |
| Signalment and history | 0.610 | 0.536 | 0.680 | <0.001 | 0.470 | 0.002 |
| Predictive tool probabilities | 0.654 | 0.580 | 0.723 | <0.001 | 0.528 | 0.001 |
| Glass | 0.629 | 0.567 | 0.687 | <0.001 | 0.486 | <0.001 |
| Digital | 0.581 | 0.522 | 0.638 | <0.001 | 0.436 | <0.001 |
LCL = lower confidence limit; UCL = upper confidence limit.
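Concurrence here is the proportion of test-pathologist diagnoses that match the reference diagnosis; the accompanying Cohen's kappa corrects that proportion for chance agreement between the rater and the reference standard. A minimal sketch under that reading (the diagnosis lists below are hypothetical, not the study data):

```python
from collections import Counter

def concurrence(ref, test):
    """Proportion of test diagnoses matching the reference diagnosis."""
    return sum(r == t for r, t in zip(ref, test)) / len(ref)

def cohens_kappa(ref, test):
    """Cohen's kappa between one rater and the reference standard."""
    n = len(ref)
    p_o = concurrence(ref, test)                       # observed agreement
    ref_c, test_c = Counter(ref), Counter(test)
    cats = set(ref) | set(test)
    p_e = sum(ref_c[c] * test_c[c] for c in cats) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses for five slides.
ref = ["cystitis", "cystitis", "neoplasia", "normal", "urolithiasis"]
test = ["cystitis", "normal", "neoplasia", "normal", "cystitis"]
print(concurrence(ref, test))             # → 0.6
print(round(cohens_kappa(ref, test), 3))  # → 0.444
```

Note how kappa (0.444) sits below raw concurrence (0.6), mirroring the table above, where each kappa value is lower than its matching concurrence proportion.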
Figure 4. Concurrence of the four pathologists' diagnoses of canine and feline bladder tissues compared with the reference diagnosis.