Min Feng1,2,3, Yang Deng1, Libo Yang1,3, Qiuyang Jing2, Zhang Zhang3, Lian Xu2,3, Xiaoxia Wei1,3,4, Yanyan Zhou1, Diwei Wu1, Fei Xiang5, Yizhe Wang5, Ji Bao6, Hong Bu7,8.
Abstract
BACKGROUND: The scoring of Ki-67 is highly relevant for the diagnosis, classification, prognosis, and treatment of breast invasive ductal carcinoma (IDC). The traditional scoring method, Ki-67 staining followed by manual counting, is time-consuming and subject to inter-/intra-observer variability, which may limit its clinical value. Although an increasing number of algorithms and platforms have been developed for the assessment of Ki-67-stained images to improve scoring accuracy, most of them either lack accurate registration of immunohistochemical (IHC) images with their matched hematoxylin-eosin (HE) images, or do not accurately label each Ki-67-positive and Ki-67-negative cell on whole tissue sections (WTS). In view of this, we introduce an accurate image registration method and automatic Ki-67 identification and counting software based on WTS and deep learning.
Keywords: Automatic recognition; Breast invasive ductal carcinoma; Convolutional neural network; Ki-67 counting; Whole tissue sections
Year: 2020 PMID: 32471471 PMCID: PMC7257511 DOI: 10.1186/s13000-020-00957-5
Source DB: PubMed Journal: Diagn Pathol ISSN: 1746-1596 Impact factor: 2.644
Fig. 1 The flow chart of Ki-67 Automatic Counting Software in breast IDC on whole tissue sections
Results of segmentation and patch extraction based on labelling information from 1017 HE slides
| Type of set | Training set | Validation set | Test set | Total |
|---|---|---|---|---|
| WSI number | 677 | 153 | 187 | 1017 |
| Patch number | 11,628,208 | 2,973,384 | 2,419,032 | 17,020,624 |
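The split and patch counts above imply a straightforward tiling pipeline. The following is a minimal sketch, not the authors' released code, of how the 1017 WSIs might be partitioned 677/153/187 and tiled into fixed-size patches with OpenSlide; the patch size, non-overlapping stride, and file layout are assumptions.

```python
# Hypothetical WSI split and patch tiling; PATCH size and paths are assumed,
# not taken from the paper.
import random
from pathlib import Path

import openslide  # pip install openslide-python

PATCH = 256  # assumed patch edge length (pixels) at level 0


def split_slides(wsi_paths, seed=0):
    """Shuffle the slide list and split it 677/153/187 as in the table."""
    paths = list(wsi_paths)
    random.Random(seed).shuffle(paths)
    return paths[:677], paths[677:830], paths[830:]


def tile_slide(wsi_path, out_dir):
    """Tile one slide into non-overlapping PATCH x PATCH RGB images."""
    slide = openslide.OpenSlide(str(wsi_path))
    width, height = slide.dimensions  # level-0 size
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for y in range(0, height - PATCH + 1, PATCH):
        for x in range(0, width - PATCH + 1, PATCH):
            # read_region returns an RGBA PIL image; drop the alpha channel
            patch = slide.read_region((x, y), 0, (PATCH, PATCH)).convert("RGB")
            patch.save(out / f"x{x}_y{y}.png")
```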
Fig. 2 Comparative pathological analysis of breast tissue regions. Regions related to breast IDC (red), ductal carcinoma in situ (DCIS) (green), and normal breast tissue (blue) are shown
Fig. 3 Comparison of the test system and the gold standard. a, Black box with red fields indicates the heat map obtained by GoogLeNet Inception V1; the red lasso region marks the breast IDC region annotated by the pathology team (considered the "gold standard"). b, ROC curve of breast IDC identification based on WSI; the area under the curve is 0.959
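The AUC in Fig. 3b can be reproduced from per-WSI labels and classifier scores with standard tooling; below is a minimal sketch with scikit-learn, using toy values in place of the paper's data.

```python
# Illustrative ROC/AUC computation; y_true and y_score are toy stand-ins for
# the per-WSI IDC labels and predicted probabilities (not the paper's data).
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                     # 1 = IDC present
y_score = [0.10, 0.40, 0.35, 0.80, 0.92, 0.22, 0.71, 0.05]

fpr, tpr, thresholds = roc_curve(y_true, y_score)     # points of the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))         # the paper reports 0.959
```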
Test results of breast IDC identification based on whole-slide imaging (WSI)
| Test Indicators | Test Results |
|---|---|
| Sensitivity | 0.8505 |
| Specificity | 0.9523 |
| Balanced accuracy | 0.9014 |
| Accuracy | 0.8944 |
| Positive result likelihood ratio (PRLR) | 17.84 |
| Negative result likelihood ratio (NRLR) | 0.16 |
| Positive predictive values | 0.9592 |
| Negative predictive values | 0.8286 |
| Diagnostic index | 1.8028 |
| Youden index | 0.8028 |
| False positive rate | 0.0477 |
| False negative rate | 0.1495 |
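Every indicator in this table is a function of one 2×2 confusion matrix. Below is a minimal sketch of the standard definitions, assuming the raw counts TP/FP/TN/FN are available (argument names are illustrative).

```python
# Standard diagnostic indicators from confusion-matrix counts; covers every
# row of the table above (PRLR/NRLR are the likelihood ratios).
def diagnostic_indicators(tp, fp, tn, fn):
    sens = tp / (tp + fn)                  # sensitivity (true positive rate)
    spec = tn / (tn + fp)                  # specificity (true negative rate)
    return {
        "Sensitivity": sens,
        "Specificity": spec,
        "Balanced accuracy": (sens + spec) / 2,
        "Accuracy": (tp + tn) / (tp + fp + tn + fn),
        "PRLR": sens / (1 - spec),         # positive result likelihood ratio
        "NRLR": (1 - sens) / spec,         # negative result likelihood ratio
        "Positive predictive value": tp / (tp + fp),
        "Negative predictive value": tn / (tn + fn),
        "Diagnostic index": sens + spec,
        "Youden index": sens + spec - 1,
        "False positive rate": 1 - spec,
        "False negative rate": 1 - sens,
    }
```

With the reported sensitivity 0.8505 and specificity 0.9523, these definitions reproduce the table: balanced accuracy (0.8505 + 0.9523) / 2 = 0.9014, diagnostic index 1.8028, and Youden index 0.8028.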
Fig. 4 Ki-67 staining and corresponding registration results of IDC regions. The figure illustrates contiguous HE and Ki-67-stained slides that were well registered in most cases. a, Contiguous HE slides and Ki-67-stained slides. b, Registration process. c, Registration results of the IDC region on the Ki-67 slides
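A common baseline for the registration step in Fig. 4b is feature matching followed by a RANSAC homography. The sketch below uses OpenCV and is an assumption for illustration, not necessarily the authors' registration method; inputs are grayscale NumPy arrays of the HE and Ki-67 images.

```python
# Generic feature-based registration sketch (ORB + RANSAC homography) for
# aligning a Ki-67 (IHC) section with its contiguous HE section.
import cv2
import numpy as np


def register(he_gray, ihc_gray):
    """Estimate a homography mapping the IHC image onto the HE image."""
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(he_gray, None)
    kp2, des2 = orb.detectAndCompute(ihc_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    src = np.float32([kp2[m.queryIdx].pt for m in matches[:500]]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches[:500]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # warp the IHC image into the HE coordinate frame
    h, w = he_gray.shape
    return cv2.warpPerspective(ihc_gray, H, (w, h))
```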
Fig. 5 Manual labelling of "gold standard" results for Ki-67 positive cells. a, Selected regions of breast IDC on HE slides. b, Corresponding regions of breast IDC on Ki-67 stained slides. c, Tumour cells in IDC regions on Ki-67 stained slides (red for positive cells, green for negative cells)
Manual labelling of “gold standard” results for Ki-67 positive cells in the human-machine challenge
| No. | Positive nuclei count | Negative nuclei count | Total number of cells | Ki-67 index score (%) | Standard score |
|---|---|---|---|---|---|
| 1 | 29,961 | 299,428 | 329,389 | 9.10 | 9 or 10 |
| 2 | 50,073 | 270,593 | 320,666 | 15.62 | 15 or 16 |
| 3 | 31,119 | 73,719 | 104,838 | 29.68 | 29 or 30 |
| 4 | 20,272 | 109,013 | 129,285 | 15.68 | 15 or 16 |
| 5 | 9854 | 79,026 | 88,880 | 11.09 | 11 or 12 |
| 6 | 11,939 | 122,641 | 134,580 | 8.87 | 8 or 9 |
| 7 | 9332 | 100,608 | 109,940 | 8.49 | 8 or 9 |
| 8 | 232,515 | 86,582 | 319,097 | 72.87 | 72 or 73 |
| 9 | 30,003 | 270,266 | 300,269 | 9.99 | 9 or 10 |
| 10 | 85,036 | 69,088 | 154,124 | 55.17 | 55 or 56 |
| Sum | 510,104 | 1,480,964 | 1,991,068 | – | – |
| Average | 51,010 | 148,096 | 199,107 | 25.62 | – |
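The Ki-67 index in this table is the positive fraction of all counted tumour nuclei expressed as a percentage, and the "standard score" appears to bracket it by the two nearest integers (floor and floor + 1). A minimal sketch, with illustrative function names:

```python
import math


def ki67_index(positive, negative):
    """Ki-67 index (%) = positive nuclei / all counted nuclei * 100."""
    return 100.0 * positive / (positive + negative)


# Row 1 of the table: 29,961 positive and 299,428 negative nuclei.
index = ki67_index(29_961, 299_428)                       # -> 9.10
low, high = math.floor(index), math.floor(index) + 1
print(f"{index:.2f}% -> standard score {low} or {high}")  # 9.10% -> 9 or 10
```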
Detailed competition results for all contestants
| No. | Dr. No. 1 score | Dr. No. 2 score | Dr. No. 3 score | Dr. No. 4 score | Dr. No. 5 score | Dr. No. 6 score | Dr. No. 7 score | Dr. No. 8 score | Dr. No. 9 score | Dr. No. 10 score | Average score | AI score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 94 | 84 | 91 | 99 | 94 | 96 | 100 | 99 | 92 | 80 | 92.9 | 100 |
| 2 | 96 | 86 | 91 | 89 | 94 | 95 | 99 | 94 | 96 | 95 | 93.5 | 98 |
| 3 | 90 | 98 | 90 | 85 | 84 | 69 | 100 | 65 | 70 | 88 | 83.9 | 97 |
| 4 | 98 | 92 | 92 | 88 | 69 | 89 | 99 | 99 | 74 | 99 | 89.9 | 99 |
| 5 | 99 | 91 | 86 | 92 | 97 | 90 | 98 | 97 | 98 | 93 | 94.1 | 99 |
| 6 | 96 | 93 | 96 | 92 | 96 | 99 | 99 | 97 | 96 | 97 | 96.1 | 100 |
| 7 | 86 | 93 | 52 | 97 | 68 | 88 | 98 | 100 | 88 | 78 | 84.8 | 99 |
| 8 | 92 | 98 | 98 | 78 | 88 | 88 | 100 | 78 | 83 | 98 | 90.1 | 98 |
| 9 | 97 | 100 | 95 | 96 | 92 | 96 | 98 | 95 | 90 | 90 | 93.9 | 99 |
| 10 | 100 | 70 | 70 | 75 | 65 | 93 | 99 | 65 | 90 | 68 | 79.5 | 100 |
| Average score | 94.8 | 90.5 | 86.1 | 89.1 | 84.7 | 89.3 | 99.0 | 88.9 | 87.7 | 88.6 | 89.9 | 98.9 |
| Total time | 28′27″ | 25′54″ | 28′56″ | 28′25″ | 20′33″ | 23′40″ | 18′46″ | 20′23″ | 26′55″ | 27′42″ | 25′31″ | 23′19″ |