Young-Gon Kim, Gyuheon Choi, Heounjeong Go, Yongwon Cho, Hyunna Lee, A-Reum Lee, Beomhee Park, Namkug Kim.
Abstract
Pathologic diagnoses mainly depend on visual scoring by pathologists, a process that can be time-consuming, laborious, and susceptible to inter- and/or intra-observer variation. This study proposes a novel method to enhance pathologic scoring of renal allograft rejection. A fully automated system using a convolutional neural network (CNN) was developed to identify regions of interest (ROIs) and to detect C4d positive and negative peritubular capillaries (PTCs) in giga-pixel immunostained slides. The performance of faster R-CNN was evaluated using optimal parameters of the novel method for enlarging the labeled masks. Margins of 50 and 40 pixels showed the best performance in detecting C4d positive and negative PTCs, respectively. Additionally, the feasibility of using deep-learning-assisted labeling as an independent dataset to enhance detection in this model was evaluated. Based on these two CNN methods, a fully automated system for scoring renal allograft rejection was developed. This system was highly reliable, efficient, and effective, making it applicable to real clinical workflows.
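The system described above is a two-stage pipeline: a CNN classifier first filters a giga-pixel slide down to feasible ROIs, and a faster R-CNN detector then localizes C4d positive and negative PTCs within them. A minimal sketch of that control flow, where `classify_roi` and `detect_ptcs` are hypothetical stand-ins for the trained networks rather than the authors' code:

```python
def run_pipeline(rois, classify_roi, detect_ptcs):
    """Two-stage inference: keep feasible ROIs, then detect PTCs in them.

    classify_roi and detect_ptcs are hypothetical callables standing in
    for the trained classification and detection CNNs.
    """
    detections = []
    for roi in rois:
        if classify_roi(roi):                    # stage 1: feasible ROI?
            detections.extend(detect_ptcs(roi))  # stage 2: PTC detections
    return detections
```

Running the detector only on classifier-approved ROIs is what makes whole-slide inference tractable on giga-pixel images.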
Year: 2019 PMID: 30914690 PMCID: PMC6435691 DOI: 10.1038/s41598-019-41479-5
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Overall procedure of our proposed method.
Figure 2. Decision criteria for classifying feasible and non-feasible ROIs. (a) Feasible ROI; (b–d) non-feasible ROIs dominated by ambiguous regions, including scar, glomerulus, and vessels.
Parameters used for training the CNN classification and detection models.

| Parameter | Classification model | Parameter | Detection model |
|---|---|---|---|
| Optimizer | SGD | Optimizer | Adam |
| Learning rate | 1e-5 | Learning rate | 1e-5 |
| Weight decay | 1e-6 | Weight decay | 0.0 |
| Epochs | 2000 | Epochs | 150 |
| Momentum | 0.9 | Betas | 0.9, 0.999 |
|  |  | Epsilon | 1e-4 |
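For reference, the update rules behind the tabulated settings can be written out directly. The following is a framework-free sketch of SGD with momentum (classification model) and Adam (detection model); conventions such as where weight decay enters the update vary between frameworks, so this is one common formulation under stated assumptions, not the authors' implementation:

```python
import math

def sgd_momentum_step(w, grad, v, lr=1e-5, momentum=0.9, weight_decay=1e-6):
    # L2 weight decay folded into the gradient, then a classic momentum update
    g = grad + weight_decay * w
    v = momentum * v + g
    return w - lr * v, v

def adam_step(w, grad, m, s, t, lr=1e-5, betas=(0.9, 0.999), eps=1e-4):
    # exponential moving averages of the gradient and its square,
    # bias-corrected by the (1-indexed) step count t
    m = betas[0] * m + (1 - betas[0]) * grad
    s = betas[1] * s + (1 - betas[1]) * grad * grad
    m_hat = m / (1 - betas[0] ** t)
    s_hat = s / (1 - betas[1] ** t)
    return w - lr * m_hat / (math.sqrt(s_hat) + eps), m, s
```

The relatively large epsilon (1e-4) in the table damps Adam's effective step when the second-moment estimate is small.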
Figure 3. Gold standard examples of C4d negative and positive PTCs. Blue and red rectangles show the positive and negative PTCs in (a) and (b), respectively.
Figure 4. Examples of a labeled C4d positive PTC with various margin sizes: (a) 0, (b) 10, (c) 20, (d) 30, (e) 40 pixels.
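The margin enlargement illustrated in Figure 4 amounts to growing each labeled box outward by a fixed number of pixels, clipped to the image bounds. A plausible sketch of that step; the `(x1, y1, x2, y2)` box format and the function name are assumptions, not taken from the paper:

```python
def enlarge_box(box, margin, img_w, img_h):
    """Grow a labeled box by `margin` pixels on every side, clipped to the image."""
    x1, y1, x2, y2 = box
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(img_w, x2 + margin), min(img_h, y2 + margin))
```

Note that a box near the slide edge grows asymmetrically because of clipping, so the enlarged masks are not always centered on the original label.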
Figure 5. Sequence for deep-learning-assisted labeling. All slides are randomly divided 6:2:2 into training, test, and validation sets in subsets 1 and 2. (a) Training the classification model with feasible ROIs in subset 1. (b) Training the detection model with manually labeled masks in the feasible ROIs. (c) Extracting candidate feasible ROIs in subset 2 with the classification model. (d) Extracting candidate PTCs with the detection model and confirming the results as deep-learning-assisted labeling.
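The flow in Figure 5 can be sketched as a random 6:2:2 split followed by model-proposed labels that a reviewer confirms. In this sketch, `propose_labels` and `confirm` are hypothetical stand-ins for the trained detector and the human check:

```python
import random

def split_622(slides, seed=0):
    """Randomly divide slides 6:2:2 into training, test, and validation sets."""
    slides = list(slides)
    random.Random(seed).shuffle(slides)
    n = len(slides)
    a, b = int(0.6 * n), int(0.8 * n)
    return slides[:a], slides[a:b], slides[b:]

def assisted_label(subset2, propose_labels, confirm):
    """Keep only the model's candidate PTC labels that the reviewer confirms."""
    return {s: [p for p in propose_labels(s) if confirm(s, p)]
            for s in subset2}
```

The confirmed candidates then serve as an independent labeled dataset, which is far cheaper to produce than labeling subset 2 fully by hand.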
Figure 6. Feasible and non-feasible ROI classification results. Tissues containing feasible ROIs are colored red.
Figure 7. FROC comparisons at different margin sizes on manually labeled data. Results for detection of (a) C4d positive and (b) negative PTCs.
The sensitivities and FROC scores for faster R-CNN detection of C4d positive and negative PTCs with various margin sizes (0 to 70 pixels) at different mean numbers of false positives per feasible ROI.

**Margin size (pixels) for detection of C4d positive PTC**

| Mean of FPs | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 |
|---|---|---|---|---|---|---|---|---|
| 0.125 | 0.4767 | 0.4862 | 0.7627 | 0.7200 | 0.6575 | 0.8667 | 0.7571 | 0.8851 |
| 0.250 | 0.5397 | 0.5345 | 0.7740 | 0.9045 | 0.8883 | 0.9167 | 0.8136 | 0.8851 |
| 0.500 | 0.5587 | 0.8092 | 0.7910 | 0.9045 | 0.9106 | 0.9167 | 0.9148 | 0.8851 |
| 1.000 | 0.6854 | 0.8092 | 0.8192 | 0.9045 | 0.9274 | 0.9167 | 0.9148 | 0.8851 |
| 2.000 | 0.6854 | 0.8092 | 0.8192 | 0.9045 | 0.9385 | 0.9167 | 0.9148 | 0.8851 |
| 4.000 | 0.6854 | 0.8092 | 0.8192 | 0.9045 | 0.9385 | 0.9167 | 0.9148 | 0.8851 |
| 8.000 | 0.6854 | 0.8092 | 0.8192 | 0.9045 | 0.9385 | 0.9167 | 0.9148 | 0.8851 |
| Score | 0.6166 | 0.7238 | 0.8006 | 0.8781 | 0.8856 |  | 0.8778 | 0.8851 |

**Margin size (pixels) for detection of C4d negative PTC**

| Mean of FPs | 0 | 10 | 20 | 30 | 40 | 50 | 60 | 70 |
|---|---|---|---|---|---|---|---|---|
| 0.125 | 0.0919 | 0.0935 | 0.0976 | 0.1519 | 0.2306 | 0.0944 | 0.0918 | 0.4087 |
| 0.250 | 0.1258 | 0.2019 | 0.1453 | 0.1706 | 0.4663 | 0.4485 | 0.4248 | 0.5549 |
| 0.500 | 0.2733 | 0.2783 | 0.3541 | 0.3200 | 0.5388 | 0.5272 | 0.5465 | 0.5720 |
| 1.000 | 0.2952 | 0.4367 | 0.4978 | 0.5774 | 0.6789 | 0.6969 | 0.6571 | 0.7293 |
| 2.000 | 0.4655 | 0.5553 | 0.6792 | 0.7080 | 0.7920 | 0.7412 | 0.7611 | 0.7876 |
| 4.000 | 0.6217 | 0.6195 | 0.7743 | 0.7633 | 0.8274 | 0.7412 | 0.7633 | 0.7876 |
| 8.000 | 0.7257 | 0.6195 | 0.7743 | 0.7655 | 0.9004 | 0.7412 | 0.7633 | 0.7876 |
| Score | 0.3713 | 0.4006 | 0.4746 | 0.4938 |  | 0.5700 | 0.5725 | 0.6611 |
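The Score row in the table above appears to be the mean sensitivity over the seven reported false-positive levels (0.125 to 8 per feasible ROI): for example, averaging the margin-0 column for C4d positive PTCs gives about 0.6167, matching the listed 0.6166 up to truncation. A minimal sketch of that average; the function name is an assumption:

```python
def froc_score(sensitivities):
    """Mean sensitivity across the reported FP-per-ROI operating points."""
    return sum(sensitivities) / len(sensitivities)
```

For instance, `froc_score([0.4767, 0.5397, 0.5587, 0.6854, 0.6854, 0.6854, 0.6854])` reproduces the margin-0 positive-PTC score to within rounding.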
Figure 8. FROC comparisons for validating the feasibility of deep-learning-assisted labeling. (a,b) FROC comparisons showing inter- and intra-observer variation between different validation sets for detection of (a) C4d positive and (b) negative PTCs with the faster R-CNN detection algorithm. (c,d) FROC comparisons validating the effectiveness of deep-learning-assisted labeling for detection of (c) C4d positive and (d) negative PTCs with the faster R-CNN and YOLO v2 detection algorithms.
The sensitivities and FROC scores for faster R-CNN and YOLO v2 detection of C4d positive and negative PTCs with detection models trained on different datasets, at different mean numbers of false positives per feasible ROI (0 to 2 and 0 to 8 for detection of positive and negative PTCs, respectively). Model 1: trained on subset 1; Model 2: trained on subset 2; Model 3: trained on the fusion of subsets 1 and 2.

**Faster R-CNN**

| Mean of FPs | Positive, Model 1 | Positive, Model 2 | Positive, Model 3 | Negative, Model 1 | Negative, Model 2 | Negative, Model 3 |
|---|---|---|---|---|---|---|
| 0.125 | 0.6970 | 0.5495 | 0.5768 | 0.1387 | 0.0863 | 0.1148 |
| 0.250 | 0.7803 | 0.6923 | 0.7510 | 0.3333 | 0.1969 | 0.2405 |
| 0.500 | 0.7886 | 0.8791 | 0.8817 | 0.3966 | 0.4579 | 0.4343 |
| 1.000 | 0.9024 | 0.9451 | 0.9253 | 0.5615 | 0.6969 | 0.6644 |
| 2.000 | 0.9187 | 0.9478 | 0.9585 | 0.7082 | 0.8424 | 0.8131 |
| 4.000 | 0.9187 | 0.9478 | 0.9647 | 0.8075 | 0.9294 | 0.8887 |
| 8.000 | 0.9187 | 0.9478 | 0.9647 | 0.8563 | 0.9294 | 0.8910 |
| Score | 0.8463 | 0.8442 |  | 0.5431 |  | 0.5781 |

**YOLO v2**

| Mean of FPs | Positive, Model 1 | Positive, Model 2 | Positive, Model 3 | Negative, Model 1 | Negative, Model 2 | Negative, Model 3 |
|---|---|---|---|---|---|---|
| 0.125 | 0.3864 | 0.7009 | 0.6736 | 0.0058 | 0.2034 | 0.1928 |
| 0.250 | 0.6284 | 0.7479 | 0.6795 | 0.0032 | 0.3644 | 0.2240 |
| 0.500 | 0.6817 | 0.8333 | 0.7329 | 0.2945 | 0.5512 | 0.4494 |
| 1.000 | 0.7124 | 0.8462 | 0.7567 | 0.5394 | 0.6617 | 0.4761 |
| 2.000 | 0.7221 | 0.8547 | 0.7565 | 0.5423 | 0.7480 | 0.6121 |
| 4.000 | 0.7444 | 0.8761 | 0.7864 | 0.5423 | 0.8100 | 0.7106 |
| 8.000 | 0.7444 | 0.8846 | 0.7864 | 0.5423 | 0.8100 | 0.7345 |
| Score | 0.6599 |  | 0.7388 | 0.3528 |  | 0.4856 |
Figure 9. Relative sensitivity comparisons of detection models trained with different amounts of training data for detecting (a) C4d positive and (b) negative PTCs with the faster R-CNN detection algorithm.