| Literature DB >> 34868946 |
Teng Zuo1, Yanhua Zheng1, Lingfeng He2, Tao Chen3, Bin Zheng4, Song Zheng1, Jinghang You5, Xiaoyan Li6, Rong Liu1, Junjie Bai1, Shuxin Si1, Yingying Wang7, Shuyi Zhang8, Lili Wang4, Jianhui Chen1.
Abstract
OBJECTIVES: This study aimed to design and develop a deep learning (DL) framework that differentiates papillary renal cell carcinoma (PRCC) from chromophobe renal cell carcinoma (ChRCC) using convolutional neural networks (CNNs) on a small set of computed tomography (CT) images, and to provide a feasible method that can be deployed on lightweight devices.
Keywords: CNN (convolutional neural network); ChRCC (chromophobe renal cell carcinoma); PRCC (papillary renal cell carcinoma); cancer image classification
Year: 2021 PMID: 34868946 PMCID: PMC8637858 DOI: 10.3389/fonc.2021.746750
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 6.244
Figure 1 Flowchart of automated PRCC/ChRCC classification using computer vision.
Figure 2 An example of data augmentation. Geometric transformations (rotation and flipping) were combined with Gaussian blur, brightening, and darkening, achieving a 15× amplification of the dataset.
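The 15× figure is consistent with combining five geometric variants with three intensity transforms. The sketch below illustrates one such recipe; the exact combination and blur kernel used in the paper are not stated, so this only reproduces the 15× count, not the authors' pipeline.

```python
import numpy as np

def augment_15x(img: np.ndarray) -> list:
    """Hypothetical 15x augmentation: 5 geometric variants (3 rotations
    plus horizontal/vertical flips), each combined with 3 intensity
    transforms (a blur stand-in, brighter, darker)."""
    geometric = [
        np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3),  # rotations
        np.fliplr(img), np.flipud(img),                        # flips
    ]

    def blur(a):
        # Crude 3x3 box blur as a stand-in for the paper's Gaussian blur.
        p = np.pad(a.astype(float), 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0

    intensity = [
        blur,
        lambda a: np.clip(a * 1.2, 0, 255),  # brighter
        lambda a: a * 0.8,                   # darker
    ]
    return [f(g) for g in geometric for f in intensity]
```

Applied to each training slice, this turns every source image into 15 augmented variants before training.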
Training and validation results of the CNN-based classification networks, and the testing results of the models.
| Models | Parameters | Best validation accuracy | Testing results (case) |
|---|---|---|---|
| MobileNetV2 | Total: 2,261,827 | 96.8640% | Accuracy: 100% |
| ShuffleNet | Total: 1,272,859 | 97.3074% | Accuracy: 83.3334% |
| EfficientNet | Total: 4,053,414 | Did not converge | NA |
| ResNet-34 | Total: 21,829,058 | 93.6404% | Accuracy: 91.6667% |
| ResNet-50 | Total: 25,662,403 | Did not converge | NA |
| ResNet-101 | Total: 44,706,755 | Did not converge | NA |
NA, not available.
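A minimal sketch of reading this comparison programmatically, with parameter counts and accuracies copied from the table above (not recomputed). Note that the model with the best validation accuracy (ShuffleNet) is not the one that generalized best to the held-out test cases; MobileNetV2 reached 100% case-level test accuracy with under 2.3M parameters.

```python
# Values transcribed from the results table; None marks non-convergence.
results = {
    "MobileNetV2":  {"params": 2_261_827,  "val_acc": 0.968640},
    "ShuffleNet":   {"params": 1_272_859,  "val_acc": 0.973074},
    "EfficientNet": {"params": 4_053_414,  "val_acc": None},
    "ResNet-34":    {"params": 21_829_058, "val_acc": 0.936404},
    "ResNet-50":    {"params": 25_662_403, "val_acc": None},
    "ResNet-101":   {"params": 44_706_755, "val_acc": None},
}

# Keep only models that converged, then rank by validation accuracy.
converged = {k: v for k, v in results.items() if v["val_acc"] is not None}
best = max(converged, key=lambda k: converged[k]["val_acc"])
```

On these numbers `best` is ShuffleNet by validation accuracy alone, which is why the test-set comparison in the next table matters for the final model choice.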
Test set information, comparison of automated model predictions with manual predictions, and model performance in the validation dataset.
| Case | Source | Subtypes | Gender | Age | Sample | Automated prediction | Manual prediction |
|---|---|---|---|---|---|---|---|
| 1 | Union Hospital of FJMU | PRCC | Female | 60 | | Matched | Matched |
| 2 | Union Hospital of FJMU | PRCC | Male | 58 | | Matched | Matched |
| 3 | Union Hospital of FJMU | PRCC | Male | 57 | | Matched | Matched |
| 4 | Union Hospital of FJMU | ChRCC | Male | 62 | | Matched | Matched |
| 5 | Union Hospital of FJMU | ChRCC | Female | 41 | | Matched | Matched |
| 6 | Union Hospital of FJMU | ChRCC | Female | 62 | | Matched | Matched |
| 7 | TCGA-KIRP | PRCC | – | – | | Matched | Matched |
| 8 | TCGA-KIRP | PRCC | – | – | | Matched | Matched |
| 9 | TCGA-KICH | ChRCC | – | – | | Matched | Matched |
| 10 | TCGA-KICH | ChRCC | – | – | | Matched | Mismatched |

Model performance summary:

| Metric | Value |
|---|---|
| Validation accuracy | 96.8640% |
| Validation sensitivity | 99.3794% |
| Validation specificity | 94.0271% |
| Test accuracy (case) | 100% |
| Test sensitivity (case) | 100% |
| Test specificity (case) | 100% |
| Test accuracy (image) | 93.3333% |
| Test sensitivity (image) | 88.2353% |
| Test specificity (image) | 86.6667% |
| Manual accuracy | 85% (90% and 80%) |
| Manual sensitivity | 100% |
| Manual specificity | 70% (80% and 60%) |
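The accuracy, sensitivity, and specificity values above follow the standard confusion-matrix definitions. A minimal sketch (the choice of PRCC as the positive class is an assumption; the paper does not state its convention):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix
    counts: tp/fp/tn/fn = true/false positives and negatives."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Illustrative counts only, not taken from the paper.
m = binary_metrics(tp=9, fp=1, tn=9, fn=1)
```

With these illustrative counts all three metrics come out to 0.9; plugging in per-image or per-case counts reproduces figures of the kind reported in the summary above.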
The structure of MobileNetV2.
| Layer (functions) | Output shape | Stride | Filter shape |
|---|---|---|---|
| Input layer | None, 256, 256, 3 | / | / |
| Conv1 (Conv+BN+ReLU6) | None, 128, 128, 32 | 2 | 3 * 3 * 32 |
| Inverted_residual (linear) | None, 128, 128, 16 | 1 | 1 * 1 * 32 * 16 |
| Inverted_residual_1 (ReLU6) | None, 64, 64, 24 | 2 | 3 * 3 * 16 * 24 |
| Inverted_residual_2 (linear) | None, 64, 64, 24 | 1 | 1 * 1 * 24 |
| Inverted_residual_3 (ReLU6) | None, 32, 32, 32 | 2 | 3 * 3 * 24 * 32 |
| Inverted_residual_4 (linear) | None, 32, 32, 32 | 1 | 1 * 1 * 32 |
| Inverted_residual_5 (linear) | None, 32, 32, 32 | 1 | 1 * 1 * 32 |
| Inverted_residual_6 (ReLU6) | None, 16, 16, 64 | 2 | 3 * 3 * 32 * 64 |
| Inverted_residual_7 (linear) | None, 16, 16, 64 | 1 | 1 * 1 * 64 |
| Inverted_residual_8 (linear) | None, 16, 16, 64 | 1 | 1 * 1 * 64 |
| Inverted_residual_9 (linear) | None, 16, 16, 64 | 1 | 1 * 1 * 64 |
| Inverted_residual_10 (linear) | None, 16, 16, 96 | 1 | 1 * 1 * 64 * 96 |
| Inverted_residual_11 (linear) | None, 16, 16, 96 | 1 | 1 * 1 * 96 |
| Inverted_residual_12 (linear) | None, 16, 16, 96 | 1 | 1 * 1 * 96 |
| Inverted_residual_13 (ReLU6) | None, 8, 8, 160 | 2 | 3 * 3 * 96 * 160 |
| Inverted_residual_14 (linear) | None, 8, 8, 160 | 1 | 1 * 1 * 160 |
| Inverted_residual_15 (linear) | None, 8, 8, 160 | 1 | 1 * 1 * 160 |
| Inverted_residual_16 (linear) | None, 8, 8, 320 | 1 | 1 * 1 * 160 * 320 |
| Conv (ReLU6) | None, 8, 8, 1,280 | 1 | 1 * 1 * 320 * 1,280 |
| Global average pooling | None, 1,280 | 1 | Pool 8 * 8 |
| Dropout | None, 1,280 | 1 | Probability = 0.2 |
| Classifier (ReLU) | None, 2 | / | Classifier |
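The spatial sizes in the table follow from 'same'-padded convolutions: the 256×256 input is halved at each of the five stride-2 stages (Conv1 and the four strided inverted residual blocks), giving the 8×8 feature map that feeds global average pooling. A small sketch verifying that arithmetic (this checks shapes only, not the layers themselves):

```python
def conv_out(size: int, stride: int) -> int:
    """Spatial output size of a 'same'-padded convolution: ceil(size / stride)."""
    return -(-size // stride)  # ceiling division

# Stride-2 stages from the structure table: Conv1 and the four strided
# inverted residual blocks (indices 1, 3, 6, and 13).
strides = [2, 2, 2, 2, 2]
size = 256
for s in strides:
    size = conv_out(size, s)  # 256 -> 128 -> 64 -> 32 -> 16 -> 8
```

The result, 8, matches the 8×8 feature map listed immediately before global average pooling.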
Figure 3 The visual structure of MobileNetV2.
Figure 4 Performance measures of the model, including the ROC curve, AUC (0.949), and its confidence interval (0.849, 1.000).
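The AUC reported in Figure 4 equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal dependency-free sketch of that computation on hypothetical scores (the paper's own AUC was presumably derived from the model's ROC curve, not this formula applied by hand):

```python
def auc_from_scores(scores_pos, scores_neg):
    """AUC as the rank-order probability P(score_pos > score_neg),
    counting ties as 0.5; equivalent to the area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, `auc_from_scores([0.9, 0.8], [0.1, 0.2])` returns 1.0 (perfect separation), while fully overlapping scores give 0.5 (chance level).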
Figure 5 A demo of a DL-based radiomics workstation (to be presented in a subsequent study).