| Literature DB >> 36105290 |
Zhenglin Yi, Zhenyu Ou, Jiao Hu, Dongxu Qiu, Chao Quan, Belaydi Othmane, Yongjie Wang, Longxiang Wu.
Abstract
Objectives: To evaluate a new deep neural network (DNN)-based computer-aided diagnosis (CAD) method, namely, a prostate cancer localization network and an integrated multi-modal classification network, to automatically localize prostate cancer on multi-parametric magnetic resonance imaging (mp-MRI) and to classify prostate cancer and non-cancerous tissues. Materials and methods: The PROSTATEx database consists of a "training set" (330 suspected lesions from 204 cases) and a "test set" (208 suspected lesions from 104 cases). Sequences include T2-weighted, diffusion-weighted, Ktrans, and apparent diffusion coefficient (ADC) images. For the lesion localization task, inspired by V-Net, we designed a prostate cancer localization network that takes mp-MRI data as input and automatically localizes prostate cancer. Combining the ideas of multi-modal learning and ensemble learning, the integrated multi-modal classification network takes the combined mp-MRI data as input and distinguishes prostate cancer from non-cancerous tissue through a series of operations such as convolution and pooling. The performance of each network in predicting prostate cancer was examined using the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity (TPR), specificity (TNR), accuracy, and Dice similarity coefficient (DSC) were calculated.
Keywords: computer-aided diagnosis (CAD); deep neural networks (DNN); multi-parametric magnetic resonance imaging (MP-MRI); prostate cancer classification; prostate cancer localization
Year: 2022 PMID: 36105290 PMCID: PMC9465082 DOI: 10.3389/fphys.2022.918381
Source DB: PubMed Journal: Front Physiol ISSN: 1664-042X Impact factor: 4.755
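The abstract describes a two-stage pipeline: a localization network proposes suspicious regions on the aligned mp-MRI volumes, and a classification network then labels each region as cancerous or non-cancerous. Below is a minimal sketch of how such a pipeline could be wired together; the stub networks, tensor shapes, and function names are illustrative assumptions and are not the architectures from the paper.

```python
# Illustrative two-stage CAD pipeline: localize suspicious regions on mp-MRI,
# then classify the cropped region. The networks are placeholder stubs, NOT the
# V-Net-based localization or multi-modal classification networks of the paper.
import numpy as np
import torch
import torch.nn as nn

class TinyLocalizer(nn.Module):
    """Stand-in for the localization network: voxel-wise lesion probability."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, kernel_size=1),
        )

    def forward(self, x):                      # x: (B, 4, D, H, W)
        return torch.sigmoid(self.net(x))      # (B, 1, D, H, W)

class TinyClassifier(nn.Module):
    """Stand-in for the classification network: lesion patch -> cancer probability."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(8, 1)

    def forward(self, patch):                  # patch: (B, 4, d, h, w)
        return torch.sigmoid(self.fc(self.features(patch).flatten(1)))

def cad_pipeline(volume, localizer, classifier, threshold=0.5, patch=16):
    """volume: (1, 4, D, H, W) tensor of aligned T2w/DWI/ADC/Ktrans data."""
    prob_map = localizer(volume)[0, 0]                           # (D, H, W)
    if (prob_map > threshold).sum() == 0:
        return None                                              # nothing suspicious found
    # Crop a fixed-size patch around the most suspicious voxel.
    center = np.unravel_index(int(prob_map.argmax()), tuple(prob_map.shape))
    d0, h0, w0 = (max(0, min(c - patch // 2, s - patch))
                  for c, s in zip(center, prob_map.shape))
    roi = volume[:, :, d0:d0 + patch, h0:h0 + patch, w0:w0 + patch]
    return float(classifier(roi))                                # cancer probability

# Usage on synthetic data (untrained stub networks).
vol = torch.randn(1, 4, 32, 32, 32)
print(cad_pipeline(vol, TinyLocalizer(), TinyClassifier()))
```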
Details of PROSTATEx dataset.
| Category | PZ | TZ | AS | SV | Total |
|---|---|---|---|---|---|
| Training set | 191 | 82 | 55 | 2 | 330 |
| Test set | 113 | 59 | 34 | 2 | 208 |
AS, anterior fibromuscular stroma; PZ, peripheral zone; SV, seminal vesicle; TZ, transitional zone.
PROSTATEx database image classification, including Ktrans, ADC, and T2-weighted images.
FIGURE 1 Prostate MRI data preprocessing steps.
FIGURE 2 Prostate MR image alignment results: (A) ADC image, (B) T2-weighted image, and (C) overlap map after image alignment.
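Figure 2 shows the ADC and T2-weighted images brought into a common space before the sequences are combined. A minimal resampling sketch with SimpleITK is given below; it simply resamples the ADC volume onto the T2-weighted grid using the geometry stored in each image header, and the file names are placeholders (the authors' exact alignment procedure is not reproduced here).

```python
# Minimal sketch: resample an ADC volume onto the T2-weighted image grid so the
# sequences can be stacked voxel-for-voxel. File names are placeholders.
import SimpleITK as sitk

t2  = sitk.ReadImage("t2w.nii.gz")     # reference (fixed) image
adc = sitk.ReadImage("adc.nii.gz")     # image to align (moving)

# Identity transform: relies on the scanner geometry stored in the headers.
# A rigid or deformable registration step could be inserted here instead.
adc_on_t2 = sitk.Resample(
    adc,                      # moving image
    t2,                       # reference grid (size, spacing, origin, direction)
    sitk.Transform(),         # identity transform
    sitk.sitkLinear,          # interpolator
    0.0,                      # default value outside the moving image
    adc.GetPixelID(),
)

sitk.WriteImage(adc_on_t2, "adc_resampled_to_t2.nii.gz")
```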
FIGURE 3 Network structure of the V-Net-based prostate cancer anomaly localization system.
FIGURE 4 (A) Single-modal classification network structure. (B) Input tensor multi-modal classification network structure. (C) Integrated multi-modal classification network structure.
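Figure 4 contrasts single-modal networks, an input-tensor multi-modal network (modalities stacked as channels), and the integrated multi-modal classification network. A hedged sketch of the integrated idea, one small CNN branch per modality with the predicted probabilities averaged as an ensemble, follows; the branch depth, channel counts, and averaging fusion rule are illustrative assumptions rather than the published architecture.

```python
# Sketch of an "integrated" multi-modal classifier: one convolutional branch per
# MRI sequence, ensembled by averaging per-branch cancer probabilities.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Small 2D CNN scoring a single sequence (e.g. a Ktrans, ADC, or T2w patch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 1)

    def forward(self, x):                               # x: (B, 1, H, W)
        return self.fc(self.features(x).flatten(1))     # logit, shape (B, 1)

class IntegratedMultiModalClassifier(nn.Module):
    def __init__(self, modalities=("ktrans", "adc", "t2w")):
        super().__init__()
        self.branches = nn.ModuleDict({m: ModalityBranch() for m in modalities})

    def forward(self, inputs: dict):
        # inputs maps modality name -> (B, 1, H, W) patch for that sequence.
        probs = [torch.sigmoid(self.branches[m](x)) for m, x in inputs.items()]
        return torch.stack(probs, dim=0).mean(dim=0)    # ensemble average, (B, 1)

model = IntegratedMultiModalClassifier()
batch = {m: torch.randn(4, 1, 64, 64) for m in ("ktrans", "adc", "t2w")}
print(model(batch).shape)   # torch.Size([4, 1]) cancer probabilities
```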
Prediction results of prostate cancer localization network.
Table 3 shows the results for four different patients in the dataset. The first column shows the patient ID, and the second column shows the 2D Ktrans map, rendered with the "viridis" color map for better visualization. The third and fourth columns show two-dimensional grayscale images of the manual label and of the localization network's prediction after the Ktrans image was input; the prediction is visibly very close to the label image. The prostate is small, measuring on average about 40 × 30 × 20 mm. Numerically, the error between the predicted and labeled results for the four patients was less than 3 mm, with an average error of only 1.64 mm, corresponding to an error of only about 6% of the normal prostate volume. The prostate cancer localization network can therefore be considered to have excellent performance and accurate predictions, and the results could be further improved in the future with a larger database or better data preprocessing.
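A hedged sketch of how such a millimetre-scale localization error and a volume error relative to the prostate could be computed from a predicted mask and a manual label is shown below. The centre-of-mass error definition, the voxel spacing, and the box approximation of the 40 × 30 × 20 mm prostate are assumptions for illustration, not the authors' evaluation code.

```python
# Sketch: localization error in mm between a predicted lesion mask and the
# manual label, plus volume difference relative to a nominal prostate size.
import numpy as np

def centroid_mm(mask: np.ndarray, spacing_mm: tuple) -> np.ndarray:
    """Centre of mass of a binary mask, converted to millimetres."""
    coords = np.argwhere(mask > 0)                 # (N, 3) voxel indices
    return coords.mean(axis=0) * np.asarray(spacing_mm)

def localization_error_mm(pred, label, spacing_mm=(0.5, 0.5, 3.0)):
    # spacing_mm is an assumed in-plane resolution / slice thickness.
    return float(np.linalg.norm(centroid_mm(pred, spacing_mm) -
                                centroid_mm(label, spacing_mm)))

def volume_error_fraction(pred, label, spacing_mm=(0.5, 0.5, 3.0),
                          prostate_mm=(40.0, 30.0, 20.0)):
    """Absolute volume difference as a fraction of a nominal prostate volume
    (box approximation of the 40 x 30 x 20 mm figure quoted in the text)."""
    voxel_vol = float(np.prod(spacing_mm))
    diff_mm3 = abs(int(pred.sum()) - int(label.sum())) * voxel_vol
    return diff_mm3 / float(np.prod(prostate_mm))
```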
Performance of prostate cancer localization network compared with previous classical segmentation methods.
| Model | Sensitivity | Specificity | Jaccard index | PPV | NPV | DSC |
|---|---|---|---|---|---|---|
| U-Net | 0.80 | 0.83 | 0.79 | 0.76 | 0.80 | 0.74 |
| U-Net++ | 0.82 | 0.84 | 0.82 | 0.81 | 0.83 | 0.75 |
| DenseNet | 0.86 | 0.88 | 0.87 | 0.85 | 0.89 | 0.81 |
| FCN | 0.85 | 0.89 | 0.86 | 0.90 | 0.89 | 0.82 |
| SegNet | 0.91 | 0.87 | 0.87 | 0.86 | 0.90 | 0.78 |
| Our Method | | | | | | |
DSC, dice similarity coefficient; NPV, negative predictive value; PPV, positive predictive value. Best performance values are in bold.
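All of the overlap metrics in the table above (sensitivity, specificity, Jaccard index, PPV, NPV, DSC) can be derived from the voxel-wise confusion counts of a predicted mask against the reference label. The sketch below uses the standard definitions; it is not the evaluation code from the paper.

```python
# Voxel-wise overlap metrics from a predicted and a reference binary mask,
# using the standard definitions.
import numpy as np

def overlap_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)       # true positives
    tn = np.sum(~pred & ~ref)     # true negatives
    fp = np.sum(pred & ~ref)      # false positives
    fn = np.sum(~pred & ref)      # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "jaccard":     tp / (tp + fp + fn),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "dsc":         2 * tp / (2 * tp + fp + fn),
    }
```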
FIGURE 5 (A) Confusion matrix of five single-modal classification networks. (B) ROC curves of five single-modal classification networks.
FIGURE 6 (A) ROC curve of the input tensor multi-modal classification network. (B) Confusion matrix of the input tensor multi-modal classification network. (C) ROC curve of the integrated multi-modal classification network. (D) Confusion matrix of the integrated multi-modal classification network.
Indicators of integrated multi-modal classification network, input tensor multi-modal classification network, and five single-modal classification networks.
| Modality | TPR | TNR | F1-score | AUC | Accuracy |
|---|---|---|---|---|---|
| Integrated Multi-modal Classification Network | 0.82 | | | 0.912 | |
| Input Tensor Multi-modal Classification Network | 0.90 | 0.82 | 0.8654 | 0.900 | 0.86 |
| Ktrans | 0.90 | 0.80 | 0.8571 | 0.853 | 0.85 |
| ADC | 0.89 | 0.72 | 0.8203 | 0.826 | 0.805 |
| T2-Weighted COR | 0.85 | 0.68 | 0.7834 | 0.741 | 0.765 |
| T2-Weighted SAG | 0.64 | | 0.7636 | 0.735 | 0.74 |
| T2-Weighted TRA | 0.80 | 0.69 | 0.7583 | 0.775 | 0.745 |
ADC, apparent diffusion coefficient; AUC, area under curve; COR, coronal; TNR, true negative rate; TPR, true positive rate; SAG, sagittal; TRA, transverse. Best performance values are in bold.
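The classification indicators reported above (TPR, TNR, F1-score, AUC, and accuracy) follow directly from the per-lesion labels and predicted cancer probabilities. A short scikit-learn sketch of these standard computations is given below; it is generic code under that assumption, not the authors' implementation.

```python
# Per-lesion classification metrics from ground-truth labels (0/1) and
# predicted cancer probabilities, using the standard definitions.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, accuracy_score, confusion_matrix

def classification_metrics(y_true, y_prob, threshold=0.5):
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "TPR":      tp / (tp + fn),               # sensitivity
        "TNR":      tn / (tn + fp),               # specificity
        "F1-score": f1_score(y_true, y_pred),
        "AUC":      roc_auc_score(y_true, y_prob),
        "Accuracy": accuracy_score(y_true, y_pred),
    }
```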
Effect of the number of modalities on model performance.
| Modality | TPR | TNR | F1-score | AUC | Accuracy |
|---|---|---|---|---|---|
| Ktrans + ADC | 0.91 | 0.80 | 0.8575 | 0.864 | 0.851 |
| Ktrans + T2-Weighted | 0.89 | 0.81 | 0.8424 | 0.859 | 0.834 |
| ADC + T2-Weighted | 0.87 | 0.81 | 0.8281 | 0.853 | 0.842 |
| Ktrans + ADC + T2-Weighted | | | | | |
AUC, area under curve; TNR, true negative rate; TPR, true positive rate. Best performance values are in bold.
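The table above varies which sequences are fed to the input-tensor network. Assuming each modality is a co-registered 2D patch, building the input for a given modality subset amounts to stacking the chosen patches along the channel axis, as in the sketch below; the patch size and modality names are illustrative.

```python
# Sketch of the "input tensor" construction for a chosen subset of modalities:
# co-registered patches are stacked along the channel axis, so the same network
# can be fed 1, 2, or 3 sequences.
import numpy as np

def build_input_tensor(patches: dict, subset: tuple) -> np.ndarray:
    """patches: modality name -> (H, W) patch; returns (len(subset), H, W)."""
    return np.stack([patches[m] for m in subset], axis=0)

patches = {
    "ktrans": np.random.rand(64, 64),
    "adc":    np.random.rand(64, 64),
    "t2w":    np.random.rand(64, 64),
}
for subset in [("ktrans", "adc"), ("ktrans", "t2w"),
               ("adc", "t2w"), ("ktrans", "adc", "t2w")]:
    x = build_input_tensor(patches, subset)
    print(subset, x.shape)   # channel count equals the number of modalities
```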
Comparison between different classification networks, stratified by accuracy and 95% confidence interval.
| Model | Modality | Average accuracy, 95% confidence interval |
|---|---|---|
| Integrated Multi-modal Classification Network | - | |
| Input Tensor Multi-modal Classification Network | - | 0.86 [0.852, 0.868] |
| Single-modal Classification Network | Ktrans | 0.85 [0.84, 0.86] |
| | ADC | 0.805 [0.702, 0.818] |
| | T2-Weighted COR | 0.765 [0.75, 0.78] |
| | T2-Weighted SAG | 0.74 [0.721, 0.759] |
| | T2-Weighted TRA | 0.745 [0.727, 0.763] |
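Confidence intervals like those above can be obtained by resampling the test lesions. The bootstrap sketch below is a generic way to produce a 95% interval for accuracy; the authors' exact interval procedure is not specified here and may differ.

```python
# Bootstrap 95% confidence interval for classification accuracy over test lesions.
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample lesions with replacement
        accs[b] = np.mean(y_true[idx] == y_pred[idx])
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(y_true == y_pred)), (float(lo), float(hi))
```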
Comparison of the classification model proposed in this article with the results of previous classification models.
| Model | Author | AUC |
|---|---|---|
| Inception V3 | Quan Chen | 0.83 |
| VGG-16 | Quan Chen | 0.81 |
| XmasNet | Saifeng Liu | 0.84 |
| SVM | Jarrel C.Y. Seah | 0.84 |
| 3D Convolutional Neural Networks | Alireza Mehrtash | 0.80 |
| Single-modal Classification Network | - | 0.853 |
| Input Tensor Multi-modal Classification Network | - | 0.900 |
| Integrated Multi-modal Classification Network | - | 0.912 |