Kazuhiro Tabata, Naohiro Uraoka, Jamal Benhamida, Matthew G Hanna, Sahussapont Joseph Sirintrapun, Brandon D Gallas, Qi Gong, Rania G Aly, Katsura Emoto, Kant M Matsuda, Meera R Hameed, David S Klimstra, Yukako Yagi.
Abstract
BACKGROUND: The establishment of whole-slide imaging (WSI) as a medical diagnostic device allows pathologists to evaluate mitotic activity with this new technology. Furthermore, image digitization provides an opportunity to develop algorithms for automatic quantification, ideally leading to improved reproducibility compared with naked-eye examination by pathologists. To implement such algorithms effectively, the accuracy of mitotic figure detection using WSI should be investigated. In this study, we aimed to measure pathologist performance in detecting mitotic figures (MFs) on multiple platforms (multiple scanners) and to compare the results with those obtained using a brightfield microscope.
Keywords: Microscopy; Mitotic cell quantification; Multiple whole slide scanner; Validation study; Whole slide imaging; eeDAP
Year: 2019 PMID: 31238983 PMCID: PMC6593538 DOI: 10.1186/s13000-019-0839-8
Source DB: PubMed Journal: Diagn Pathol ISSN: 1746-1596 Impact factor: 2.644
Enumeration of candidate mitotic figures (number and percentage of the 155 total candidate mitotic figures)
| | Scanner A | Scanner B | Scanner C | Scanner D | Microscope |
|---|---|---|---|---|---|
| Observer 1 | 43 (28%) | 57 (37%) | 35 (23%) | 42 (27%) | 41 (26%) |
| Observer 2 | 64 (41%) | 49 (32%) | 28 (18%) | 41 (26%) | 66 (43%) |
| Observer 3 | 55 (35%) | 39 (25%) | 36 (23%) | 60 (39%) | 51 (33%) |
| Observer 4 | 34 (22%) | 43 (28%) | 39 (25%) | 34 (22%) | 64 (41%) |
| Observer 5 | 35 (23%) | 48 (31%) | 39 (25%) | 46 (30%) | 60 (39%) |
Ground truth: 74 MFs (47.1% of the 155 candidates)
Fig. 1 Examples of ground truth in mitotic cell imaging. Three examples of ground truth for mitotic cell images are shown. Example 1: mitotic cells detected by all observers using all observation methods, including microscopy; example 2: mitotic cells that no participant detected using scanner C; example 3: mitotic cells that only one observer detected, upon microscopic examination
Inter-observer agreement in each observation method (Kappa coefficient)
| | Scanner A | Scanner B | Scanner C | Scanner D | Microscope |
|---|---|---|---|---|---|
| Observer 1 vs 2 | 0.677 | 0.770 | 0.814 | 0.815 | 0.735 |
| Observer 1 vs 3 | 0.775 | 0.790 | 0.872 | 0.745 | 0.833 |
| Observer 1 vs 4 | 0.885 | 0.788 | 0.852 | 0.865 | 0.779 |
| Observer 1 vs 5 | 0.864 | 0.799 | 0.879 | 0.820 | 0.781 |
| Observer 2 vs 3 | 0.621 | 0.807 | 0.834 | 0.667 | 0.742 |
| Observer 2 vs 4 | 0.693 | 0.763 | 0.840 | 0.791 | 0.723 |
| Observer 2 vs 5 | 0.699 | 0.802 | 0.840 | 0.786 | 0.666 |
| Observer 3 vs 4 | 0.773 | 0.850 | 0.818 | 0.819 | 0.831 |
| Observer 3 vs 5 | 0.765 | 0.862 | 0.845 | 0.785 | 0.789 |
| Observer 4 vs 5 | 0.886 | 0.833 | 0.864 | 0.905 | 0.743 |
Intra-observer agreement in each observation method (Kappa coefficient)
| | Observer 1 | Observer 2 | Observer 3 | Observer 4 | Observer 5 |
|---|---|---|---|---|---|
| Scanner A vs Microscope | 0.835 | 0.677 | 0.784 | 0.748 | 0.830 |
| Scanner B vs Microscope | 0.775 | 0.729 | 0.861 | 0.763 | 0.797 |
| Scanner C vs Microscope | 0.824 | 0.717 | 0.814 | 0.738 | 0.726 |
| Scanner D vs Microscope | 0.815 | 0.664 | 0.804 | 0.776 | 0.799 |
| Scanner A vs B | 0.816 | 0.745 | 0.777 | 0.858 | 0.808 |
| Scanner A vs C | 0.851 | 0.704 | 0.744 | 0.859 | 0.892 |
| Scanner A vs D | 0.856 | 0.594 | 0.728 | 0.880 | 0.742 |
| Scanner B vs C | 0.792 | 0.784 | 0.939 | 0.891 | 0.848 |
| Scanner B vs D | 0.810 | 0.709 | 0.852 | 0.871 | 0.761 |
| Scanner C vs D | 0.831 | 0.772 | 0.832 | 0.872 | 0.754 |
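The agreement statistic in the two tables above is Cohen's kappa, computed over paired binary calls ("mitosis" vs. "not mitosis") on the same set of candidate figures. A minimal sketch of the computation is below; the observer labels are invented for illustration, not taken from the study's data:

```python
# Cohen's kappa for two observers' paired binary calls on the same
# candidate mitotic figures. Labels here are illustrative only.

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement of two label lists."""
    assert len(a) == len(b)
    n = len(a)
    labels = sorted(set(a) | set(b))
    # Observed agreement: fraction of candidates with identical calls.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independent marginal label frequencies.
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical calls (1 = mitosis) for ten candidate figures:
obs1 = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
obs2 = [1, 0, 0, 0, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(obs1, obs2), 3))  # → 0.583
```

Kappa values above roughly 0.8, as in most cells of the tables, are conventionally read as almost perfect agreement.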
Fig. 2 Bland-Altman plots of within-reader differences in log (base 10) counts between each scanner (a, b, c, d) and the microscope. Each symbol corresponds to a different reader. The dotted line in each plot is b, the mean difference in the log counts. The dashed lines show the 95% MRMC confidence interval for b. The solid lines show the MRMC limits of agreement (LA). We map b and LA to ratios of counts with the inverse log (base 10) transformation, i.e., 10^b and 10^LA
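A minimal version of the Bland-Altman computation described in the caption, assuming plain per-reader counts and the usual 1.96-SD limits; the paper's MRMC variance components (reader and slide correlations) are not reproduced here, and the example counts are hypothetical:

```python
import math

def bland_altman_log10(scanner_counts, microscope_counts):
    """Mean within-reader difference in log10 counts (bias b) and the
    naive 95% limits of agreement, back-transformed to count ratios.
    Simplified: no MRMC variance components."""
    d = [math.log10(s) - math.log10(m)
         for s, m in zip(scanner_counts, microscope_counts)]
    n = len(d)
    b = sum(d) / n                                  # mean difference
    sd = math.sqrt(sum((x - b) ** 2 for x in d) / (n - 1))
    la = (b - 1.96 * sd, b + 1.96 * sd)             # limits of agreement
    # Inverse log10 transform maps b and LA to ratios of counts.
    return b, la, 10 ** b, (10 ** la[0], 10 ** la[1])

# Hypothetical per-reader MF counts on one scanner vs. the microscope:
b, la, ratio, ratio_la = bland_altman_log10([43, 64, 55, 34, 35],
                                            [41, 66, 51, 64, 60])
```

A ratio of 10^b below 1 would mean readers counted fewer MFs on the scanner than under the microscope, on average.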
Accuracy for all readers and observation methods
| | Scanner A | Scanner B | Scanner C | Scanner D | Microscope |
|---|---|---|---|---|---|
| Observer 1 | 0.713 | 0.743 | 0.685 | 0.706 | 0.764 |
| Observer 2 | 0.700 | 0.715 | 0.631 | 0.648 | 0.778 |
| Observer 3 | 0.704 | 0.738 | 0.717 | 0.802 | 0.806 |
| Observer 4 | 0.691 | 0.726 | 0.699 | 0.717 | 0.842 |
| Observer 5 | 0.698 | 0.754 | 0.738 | 0.785 | 0.802 |
| Average | 0.701 | 0.735 | 0.694 | 0.732 | 0.798 |
| SE | 0.021 | 0.023 | 0.028 | 0.035 | 0.021 |
| 95% CI | (0.659, 0.743) | (0.689, 0.780) | (0.636, 0.752) | (0.653, 0.810) | (0.754, 0.842) |
| p-value vs microscope | 0.001* | 0.009* | 0.001* | 0.062 | |
Accuracy refers to the average of sensitivity and specificity. SE, standard error; CI, confidence interval. The p-value corresponds to a two-sided hypothesis test comparing reader-averaged accuracy for each scanner viewing mode with the accuracy of the microscope. The p-values of the four hypotheses are evaluated following the sequentially rejective Bonferroni test with alpha = 0.05 [33]. Statistical significance is indicated with an asterisk (*). All analyses account for the correlations and variability arising from the readers reading the same ROIs, and for the correlations arising from MFs contained within the same slides
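The accuracy metric and the sequentially rejective Bonferroni (Holm) procedure from the note above can be sketched as follows. The counts passed to `balanced_accuracy` are hypothetical; the p-values are the four reported for scanners A-D versus the microscope:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Average of sensitivity and specificity, the accuracy metric
    used in the table above."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens + spec) / 2

def holm_bonferroni(p_values, alpha=0.05):
    """Sequentially rejective Bonferroni (Holm) test: compare the
    i-th smallest p-value to alpha / (m - i), stopping at the first
    non-rejection."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all larger p-values are also retained
    return reject

# Hypothetical confusion counts for one reader/mode:
print(balanced_accuracy(tp=8, fn=2, tn=9, fp=1))  # → 0.85

# Reported p-values for scanners A-D vs. the microscope:
print(holm_bonferroni([0.001, 0.009, 0.001, 0.062]))
# → [True, True, True, False]
```

The result reproduces the asterisk pattern in the table: scanners A, B, and C differ significantly from the microscope, while scanner D does not.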
Fig. 3 Accuracy (average of sensitivity and specificity) for each viewing mode, averaged over all readers, with 95% confidence intervals. Asterisks indicate that the difference in accuracy between the viewing mode and microscopy is statistically significant. All analyses account for the correlations and variability from the readers reading the same ROIs