| Literature DB >> 35204380 |
Huanye Li1, Chau Hung Lee2, David Chia3, Zhiping Lin1, Weimin Huang4, Cher Heng Tan2,5.
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI, with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.Entities:
Keywords: PI-RADS; cancer; deep learning; detection; diagnosis; machine learning; prostate MRI; registration; segmentation; survey
Year: 2022 PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1 On MRI, the periprostatic venous plexus appears as serpiginous hyperintense structures with foci of signal void adjacent to the prostate (green outline), and can lie close to the prostate capsule (red outline). It may have a heterogeneous appearance similar to that of the peripheral zone; during manual segmentation, it can therefore be mistaken for part of the prostate by less experienced operators.
Machine learning-based segmentation methods for prostate MRI. Abbreviations are defined in footnote 1 below.
| Publication Year | Method | Prostate Zone | Input Image Dimension (Pixel/Voxel/mm) | Data Source | MRI Sequence(s) | Train | Val | Test | CV | Acc (%) | DSC (%) | Refs. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2008 | Nonrigid registration of prelabelled atlas images | WG | 512 × 512 × 90, 271 × 333 × 86 | Pv | T2w | 38 | - | 50 | No | - | 85 | [ |
| 2009 | Level set | WG | - | Pv | DWI | 10 | - | 10 | No | - | 91 | [ |
| 2012 | AAM | WG | 0.54 × 0.54 × 3 mm | Pv | T2w | 86 | - | 22 | 5-fold | - | 88 | [ |
| 2007 | Organ model-based, region-growing | WG | 3D | Pv | T1w, T2w | 15 | - | 24 | No | 94.75 | - | [ |
| 2014 | RF and graph cuts | WG | 512 × 512 or 320 × 320 | PRO12 | T2w | 50 | - | 30 | 10-fold | - | >91 (training), | [ |
| 2014 | Atlas-based AAM and SVM | WG | 512 × 512 | Pv | T2w | 100 | - | 40 | leave-one-out | 90 | 87 | [ |
| 2016 | Atlas and C-Means classifier | WG, PZ, TZ | Varying sizes | PRO12, Pv | T2w | 30 | - | 35 | No | - | 81 (WG), | [ |
| 2016 | Volumetric CNN | WG | 128 × 128 × 64 | PRO12 | T2w | 50 | - | 31 | No | - | 86.9 | [ |
| 2017 | FCN | WG, TZ | 0.625 × 0.625 × 1.5 mm | PRO12 | T2w | 50 | - | 30 | 10-fold | - | 89.43 | [ |
| 2021 | V-Net using bicubic interpolation | WG | 1024 × 1024 × 3 × 16 | PRO12, Pv | T2w | 106 | - | 30 | Y | - | 96.13 | [ |
| 2019 | Cascade dense-UNet | WG | 256 × 256 | PRO12 | T2w | 40 | - | 10 | 5-fold | - | 85.6 | [ |
| 2021 | 3D-2D UNet | WG | - | Pv | T2w | 299 | - | - | 5-fold | - | 89.8 | [ |
| 2020 | convLSTM and GGNN | WG | 28 × 28 × 128 | PRO12, ISBI13, Pv | T2w | 140 | - | 30 | No | - | 91.78 | [ |
| 2020 | Transfer learning, data augmentation, fine-tuning | WG, TZ | - | Pv | T2w | 684 | - | 406 | 10-fold | - | 91.5 (WG), 89.7 (TZ) | [ |
| 2021 | Federated learning with AutoML | WG | 160 × 160 × 32 | MSD-Pro, PRO12, ISBI13, PROx | T2w | 344 | 46 | 96 | No | - | 89.06 | [ |
| 2020 | Anisotropic 3D multi-stream CNN | WG | 144 × 144 × 144 | PRO12, Pv | T2w | 87 | 30 | 19 | 4-fold | - | 90.6 (base), 90.1 (apex) | [ |
| 2020 | MS-Net | WG | 384 × 384 | Pv | T2w | 63 | - | 16 | No | - | 91.66 | [ |
| 2017 | FCN | WG, TZ | 144 × 144 × 26 | PRO12, Pv | DWI | 141 | - | 13 | 4-fold | 97 | 93 (WG), 88 (TZ) | [ |
| 2020 | Transfer learning | WG, TZ | 1.46 × 1.46 × 3 mm | Pv | DWI | 291 | 97 | 145 | No | - | 65 (WG), | [ |
| 2019 | Cascaded U-Net | WG, PZ | 192 × 192 | Pv | DWI | 76 | 36 | 51 | No | - | 92.7 (WG), 79.3 (PZ) | [ |
| 2021 | Three 3D/2D UNet pipeline | WG, PZ, TZ | 256 × 256 × (3 mm) | Pv | T2w | 145 | 48 | 48 | No | - | 0.94 (WG), 0.914 (TZ), 0.776 (PZ) | [ |
| 2021 | U-Net, ENet, ERFNet | WG, PZ, TZ | 512 × 512 | PROx | T2w | 99 | - | 105 | 5-fold | - | ENet (best): | [ |
| 2021 | Transfer learning, aggregated learning, U-Net | WG, PZ, CG | 192 × 192, 192 × 192 × 192 | ISBI13 | T2w | 5–40 | - | 20 | 5-fold | - | 73 (PZ), | [ |
| 2018 | PSNet | WG | 320 × 320, 512 × 512 | PRO12, ISBI13 | T2w | 112 | - | 28 | 5-fold | - | 85 | [ |
1 Val = validation, CV = cross-validation, Acc = accuracy, DSC = Dice similarity coefficient, Refs. = references, - = not reported. For datasets, Pv = private, PRO12 = PROMISE12 [51], ISBI13 = NCI-ISBI 2013 Challenge [52], MSD-Pro = MSD Prostate [53], PROx = PROSTATEx Challenge [15]. For prostate zones, WG = whole gland, TZ = transition zone, PZ = peripheral zone.
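Most of the segmentation results in the table above are reported as a Dice similarity coefficient (DSC). As a minimal sketch (the masks below are toy arrays for illustration only, not data from any listed study), the DSC between a predicted and a reference prostate mask can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 8 x 8 masks flattened to lists: two overlapping 4 x 4 squares
pred  = [1 if 2 <= r < 6 and 2 <= c < 6 else 0 for r in range(8) for c in range(8)]
truth = [1 if 3 <= r < 7 and 3 <= c < 7 else 0 for r in range(8) for c in range(8)]
print(dice_coefficient(pred, truth))  # 2*9 / (16 + 16) = 0.5625
```

In practice, prostate segmentation studies compute this over 3D voxel masks, but the formula is identical.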
Figure 2 Multifocal prostate cancer seen as hypointense lesions on T2-weighted imaging (star), obscuring the boundary between the peripheral and transition zones and making zonal segmentation challenging.
Figure 3 Prostatitis typically appears as diffuse hypointensity in the peripheral zone on T2-weighted imaging (star), resulting in a signal almost identical to that of stromal nodules of benign prostatic hyperplasia in the transition zone (arrowhead). This can make differentiation between the peripheral and transition zones difficult, and zonal segmentation challenging.
Figure 4 Severe hypertrophy of the transition zone (segmented in red) compressing the peripheral zone, which appears as a thin sliver (white arrows). Reduced visualisation of the peripheral zone in this case can make zonal segmentation challenging.
Machine learning-based MR image registration methods. Abbreviations are defined in footnote 2 below.
| Publication Year | Approach | Registration Type | Registration Modalities | ML/DL Method | Auto-Seg | Sample Size | CV | TRE (mm) | DSC (%) | MSD (mm) | HD (mm) | Error (%) | Refs. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2002 | Knowledge-based | Deformable | MRI–TRUS | Homogeneous Mooney–Rivlin model | N | 25 simulations of TRUS | No | - | - | - | - | 26.7 | [ |
| 2011 | Knowledge-based | non-rigid, deformable | MRI–TRUS | PCA | N | 5 patients | Leave-one-out | 5.8 | - | - | - | - | [ |
| 2012 | Knowledge-based | non-rigid, deformable | MRI–TRUS | PCA | N | 8 patients | Leave-one-out | 2.4 | - | - | - | - | [ |
| 2016 | Knowledge-based | non-rigid, deformable | MRI–TRUS | PCA, surface point matching | N | 1 MRI dataset and 60 TRUS datasets | Leave-one-out | 1.44 | - | - | - | - | [ |
| 2018 | Weakly supervised | Deformable | MRI–TRUS | CNN | N | 111 pairs | 10-fold | 9.4 | 73 | - | - | - | [ |
| 2018 | Weakly supervised | non-rigid, deformable | MRI–TRUS | CNN | N | 76 patients | 12-fold | 3.6 | 87 | - | - | - | [ |
| 2018 | Unsupervised | Affine | MRI–TRUS | GAN, CNN | N | 763 pairs | No | 3.48 | - | - | - | - | [ |
| 2020 | Weakly supervised | Affine and nonrigid, deformable | MRI–TRUS | FCN, 3D UNet | Y | 36 pairs | Leave-one-out | 2.53 | 91 | 0.88 | 4.41 | - | [ |
| 2021 | Weakly supervised | Deformable | MRI–TRUS | 3D UNet | Y | 288 patients | No | - | 87 | - | 7.21 | - | [ |
| 2020 | Supervised | Rigid, Deformable | MRI–TRUS | UNet, CNN | Y | 12 patients | No | 2.99 | - | - | - | - | [ |
| 2018 | Knowledge-based and DL | Non-rigid, deformable | MRI–TRUS | 3D encoder-decoder | N | 108 pairs | 12-fold | 6.3 | 82 | - | - | - | [ |
| 2020 | Knowledge-based and DL | non-rigid, deformable | MRI–TRUS | CNN, 3D Point Cloud | Y | 50 patients | Leave-one-out | 1.57 | 94 | 0.90 | 2.96 | - | [ |
| 2019 | Supervised | Rigid, deformable | MRI–CT | RF based on an Auto-context model | N | 17 treatment plans from 10 patients | No | - | - | - | - | <1 | [ |
| 2020 | Knowledge-based | Rigid, Affine, and Deformable | MRI–histology images | - | N | 157 patients | No | - | 97 | - | 1.99 | - | [ |
| 2021 | Knowledge-based | Rigid, deformable | MRI–CBCT | CNN, 3D Point Cloud | Y | 50 patients | 5-fold | 2.68 | 93 | 1.66 | - | - | [ |
| 2021 | Unsupervised | Affine, Deformable | MRI–histology image | CNN | N | 99 patients (training), 53 patients (test) | No | - | 97.5, 96.1, 96.7 | - | 1.72, 1.98, 1.96 | - | [ |
| 2017 | Unsupervised | Rigid, affine, deformable | MRI–histology image | Multi-image super-resolution GAN | N | 533 patients | 5-fold | - | 95 (prostate), 68 (cancer) | - | - | - | [ |
2 Auto-Seg = auto-segmentation, CV = cross-validation, TRE = target registration error, DSC = Dice similarity coefficient, MSD = mean surface distance, HD = Hausdorff distance, Refs. = references, MRI–TRUS = MRI–transrectal ultrasound, - = not reported.
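The registration studies above are evaluated mainly by target registration error (TRE) and Hausdorff distance (HD). A minimal sketch of both metrics on hypothetical landmark coordinates (the points below are illustrative, not from any listed study):

```python
import math

def target_registration_error(moved, fixed):
    """Mean Euclidean distance (mm) between corresponding landmark pairs
    after registration; lower is better."""
    return sum(math.dist(m, f) for m, f in zip(moved, fixed)) / len(fixed)

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance: the largest distance from any point in
    one set to its nearest neighbour in the other set."""
    d_ab = max(min(math.dist(p, q) for q in b) for p in a)
    d_ba = max(min(math.dist(p, q) for q in a) for p in b)
    return max(d_ab, d_ba)

# Hypothetical landmark coordinates in mm
fixed = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
moved = [(1.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 2.0)]
print(target_registration_error(moved, fixed))  # (1 + 0 + 2) / 3 = 1.0
print(hausdorff_distance(moved, fixed))         # 2.0
```

TRE averages errors at matched anatomical landmarks, while HD reports the single worst surface disagreement, which is why the two can diverge for the same registration.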
Figure 5 MRI–US fusion technique for targeted prostate biopsy requires precise registration between pre-operative prostate MRI (bottom image) and real-time ultrasound (top image).
Machine learning methods for lesion detection and characterization. Abbreviations are defined in footnote 3 below.
| Publication Year | Application | Method | Serum PSA (ng/mL) | Prostate Zone | Data Source | MRI Sequence(s) | Train | Val | Test | CV | Ground-Truth | Non-MRI Data Features (If Any) | Acc, AUC (%) | Ssv, Spc (%) | Kap, DSC (%) | Refs. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2018 | Detecting csPCa in AS patients | MRMR, QDA, RF, SVM | 6.96 ± 5.8 | WG | Pv | T2w, ADC | 31 | - | 25 | 3-fold (training) | PI-RADS score and biopsy | - | 72, - | - | - | [ |
| 2019 | Differentiating csPCa and non-cs PCa | MRMR and LASSO algorithm | >10 | WG | Pv | T1w, T2w, DWI, ADC | 187 | - | 93 | 10-fold | Gleason Score | - | -, 82.3 | 84.1, 72.7 | - | [ |
| 2019 | Differentiating TZ PCa from BPH | Logistic Regression and SVM | - | TZ | Pv | T2w, ADC | 105 | - | - | No | - | - | -, 98.9 | 93.2, 98.4 | 84 (tumour), 87 (BPH) | [ |
| 2021 | Prediction of csPCa (PI-RADS ≥ 4) | Textured-DL and CNN | 4.7–8.7 | WG | Pv | T2w, ADC | 239 | 42 | 121 | No | PI-RADS score | - | -, 85 | -, 70 | - | [ |
| 2020 | Differentiating csPCa and non-csPCa | 3D CNN | - | WG | PROx | ADC, DWI, K-trans (from DCE) | 175 | - | 25 | 8-fold | PI-RADS score | Location of lesion center | -, 89.7 | 81.9, 86.1 | - | [ |
| 2017 | Differentiating csPCa and non-csPCa | Transfer learning, ImageNet | - | WG | PROx | T2w, DWI, ADC, DCE | 330 | - | 208 | k-fold | PI-RADS score | - | -, 83 | - | - | [ |
| 2019 | Classifying low-grade and high-grade PCa | Transfer learning, AlexNet NN | - | WG | Pv, PROx-2 | T2w, ADC | 110 | 66 | 44 | No | Gleason Score | - | 86.92, - | - | - | [ |
| 2019 | Prediction of csPCa (PI-RADS ≥ 4) | Transfer learning | 7.9 ± 2.5 | WG | Pv | T2w, ADC | 169 | 47 | - | No | PI-RADS score | Zonal information | 72.3, 72.6 | 63.6, 80 | - | [ |
| 2015 | PZ cancer detection | Regression, SVM | 4.9–8.6 | PZ | Pv | T2w, ADC | 56 | - | 56 | Yes | Prostatectomy | - | -, 91 | 97 | - | [ |
| 2018 | Predictive maps of epithelium and lumen density | Least square regression | - | WG | Pv | T2w, ADC, ECE | 20 | - | 19 | No | Prostatectomy | - | -, 77 (epithelium); 84 (lumen) | - | - | [ |
| 2021 | PCa detection and segmentation | Growcut, Zernike, KNN, SVM, MLP | - | PZ, TZ | Pv | T2w | 217 | - | 54 | No | Prostatectomy | Clinical and histopathological variables | 80.97, - | - | 79 | [ |
| 2020 | Pca detection and segmentation | 3D CNN | - | WG | Pv | T2w, DWI, ADC | 116 | - | 155 | 3-fold | biopsy | Location of lesion | -, 0.65–0.89 | 82–92, 43–76 | - | [ |
| 2021 | PCa differentiation and segmentation | SPCNet | 6.8–7.1 | WG | Pv | T2w, ADC | 102 | - | 332 | 5-fold | Prostatectomy | - | -, 0.75–0.85 | - | - | [ |
| 2021 | PCa detection and classification | Cascaded DL | 4.7–9.9 | WG | Pv, PROx | T2w, ADC | 1290 | - | 150 | 5-fold | PI-RADS score | - | 30.8, - | 56.1, - | 35.9 | [ |
| 2021 | PCa segmentation | Transfer learning, CNN, test-time augmentation | 2.1–18 | WG | Pv, PROx | T2w, DWI and DCE | 16, 16 | - | 16 | Leave-one-out | Prostatectomy | - | - | - | 59 | [ |
| 2018 | PCa segmentation | Encoder–decoder CNN | - | WG, PZ, CG | I2CVB | T2w | 1413 | 236 | 707 | 10-fold | Radiologist-segmented results | - | 89.4, - | - | - | [ |
| 2017 | Improve PI-RADS v2 | RBF-SVM, SVM-RFE | 12.5–56.1 | WG, TZ, PZ | Pv | T2w, DWI, DCE | 97 | - | - | Leave-one-out | PI-RADS scores | - | -, 98.3 (PZ); 96.8 (TZ) | 94.4 (PZ); 91.6 (TZ), 97.7 (PZ); 95.5 (TZ) | - | [ |
| 2020 | Prediction of PI-RADS v2 score | ResNet34 CNN | - | WG | Pv, PROx | T2w, DWI, ADC, DCE | 482 | 137 | 68 | No | PI-RADS score | - | - | - | 40, - | [ |
| 2019 | PCa detection, prediction of GGG score | UNet, batch normalization, ordinal regression | - | WG | PROx-2 | T2w, ADC | 99 | - | 63 | 5-fold | Gleason score | - | - | - | 32.1 | [ |
| 2019 | PCa segmentation, prediction of GS score | Multi-class CNN (DeepLab) | - | WG | Pv | T2w, DWI | 417 | - | - | 5-fold | Gleason score | - | -, 80.9 | 88.8, - | - | [ |
| 2021 | Prediction of GGG score | UNet, ordinal regression | - | WG, TZ, PZ | PROx-2 | T2w, DWI, ADC | 112 | - | 70 | 5-fold | GGG | Zonal information | - | - | 13, 37 | [ |
| 2019 | Prediction of GGG score | KNN | - | TZ, PZ | Pv | T2w, DCE, DWI, ADC | 112 | - | 70 | 3-fold | GGG | Texture features, zonal information | -, 92 (PZ); 87 (TZ) | - | - | [ |
| 2018 | Prediction of GGG score | Stacked sparse autoencoders | - | WG | PROx-2 | T2w, DWI, ADC | 112 | - | 70 | 3-fold | GGG | Hand-crafted texture features | 47.3, - | - | 27.72, - | [ |
| 2021 | Lesion detection and classification | Cascaded DL | 4.7–9.9 | WG | Pv, PROx | T2w, ADC | 1290 | - | 150 | 5-fold | PI-RADS score | - | 30.8, - | 56.1, - | -, 35.9 | [ |
3 Val = validation, CV = cross-validation, Acc = accuracy, AUC = area under ROC curve, Ssv = sensitivity, Spc = specificity, Kap = Kappa score, DSC = Dice similarity coefficient, Refs. = references, - = not reported. csPCa = clinically significant prostate cancer, GGG = Gleason grade group. For prostate zones, WG = whole gland, PZ = peripheral zone, TZ = transition zone. For data source, Pv = private, PROx = PROSTATEx Challenge [15], PROx-2 = PROSTATEx-2 Challenge [15], I2CVB = I2CVB benchmark dataset [108].
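The detection studies above report AUC, sensitivity and specificity. A small self-contained sketch of how these are computed from binary labels and classifier scores (the labels and scores below are made up for illustration):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """Area under the ROC curve via the Mann–Whitney U statistic: the
    probability that a random positive is scored above a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical csPCa labels and classifier scores (illustrative only)
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.3, 0.6, 0.2, 0.1]
print(auc(y, s))  # 8/9 ≈ 0.889
print(sensitivity_specificity(y, [1 if v >= 0.5 else 0 for v in s]))  # each 2/3
```

Note that AUC is threshold-free, whereas sensitivity and specificity depend on the chosen operating threshold (0.5 here), which is why studies often report both.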
Machine learning methods for aiding treatment. Abbreviations are defined in footnote 4 below.
| Publication Year | Application | Method | Input Feature | Sample Size | Ground-Truth | MRI Sequence(s) | CV | Acc (%) | AUC (%) | C-Index | Refs. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 2019 | EPE Detection | Bayesian Network, Texture analysis | Index lesions from biparametric MRI | 39 | Prostatectomy | T2w, ADC | No | 82 | 88 | - | [ |
| 2020 | ECE Prediction | LASSO regression | ROIs of T2w images | 119 | Prostatectomy | T2w, DWI, DCE | 10-fold | - | 82.1 | - | [ |
| 2020 | EPE Detection | LASSO regression | Radiomic features, patients’ clinical and pathological variables | 115 | Prostatectomy | T2w, ADC, DWI, DCE | No | 81.8 | 86.5 | - | [ |
| 2020 | EPE Prediction | Combination of RF model, radiology interpretation and clinical nomogram | MR radiomic features | 228 | Prostatectomy | T1w, T2w, DWI, DCE | 10-fold | - | 79 | - | [ |
| 2021 | EPE Detection | SVM | Radiomic features from MRI index lesions | 193 | Prostatectomy | T2w, ADC | 10-fold | 79 | - | - | [ |
| 2009 | BCR Prediction | Cox regression | GS and clinical variables | 610 | BCR defined by NCCN guideline | T2w, DWI, ADC, DCE | No | - | - | 0.776 (5-year), 0.788 (10-year) | [ |
| 2015 | BCR Prediction | Univariate and multivariate analyses using Cox’s proportional hazards model | PI-RADSv2 score, surgical parameters | 158 | Two consecutive PSA ≥ 0.2 ng/mL | T2w, DWI, DCE | No | - | - | - | [ |
| 2019 | pre-biopsy mpMRI to improve preoperative risk model | Cox regression | pre-biopsy mpMRI score | 372 | Two consecutive PSA ≥ 0.1 ng/mL | T1w, T2w | No | - | - | - | [ |
| 2010 | BCR Prediction | Univariate and multivariate analyses | Clinical variables and tumour ADC data | 158 | PSA ≥ 0.2 ng/mL | ADC, DWI | No | - | 75.5 | - | [ |
| 2019 | BCR and bRFS Prediction | Univariate and multivariate Cox regression | IBSI-compliant radiomic features | 107 | Two consecutive PSA ≥ 0.2 ng/mL | T2w, ADC | No | - | 76 | - | [ |
| 2016 | BCR Prediction | SVM | Clinicopathologic and bpMRI variables | 205 | PSA ≥ 0.2 ng/mL | T2w, DWI, DCE | 5-fold | 92.2 | - | - | [ |
| 2018 | Identify predictive radiomic features for BCR | SVM, linear discriminant analysis and RF | Radiomic features from pretreatment bpMRI | 120 | PSA > 0.2 ng/mL (post-RP) and PSA > 2 ng/mL (post-RT) | T2w, ADC | 3-fold | - | - | - | [ |
| 2021 | BCR Prediction | Radiomic-based DL | Quantitative features of MRI | 485 | PSA ≥ 0.2 ng/mL | T1w, T2w, DWI, ADC | No | - | - | 0.802 | [ |
| 2018 | Post-Prostatectomy Pathology prediction | RF | Demographics, PSA trends, and location-specific biopsy findings | 1560 | Prostatectomy | - | - | - | 75 (OCD), 73 (ECE), 64 (pN+) | - | [ |
| 2019 | IMRT response prediction | Univariate radiomic analysis, ML classification models | Pre-/post-IMRT mpMRI radiomic features | 33 | Change of ADC values before and after IMRT | T2w, ADC | 10-fold | - | 63.2 | - | [ |
| 2004 | BCR Prediction | ANN | MRI findings, PSA, biopsy Gleason score | 210 | PSA level ≥ 0.1 ng/mL | T2w, DWI, ADC, DCE | 5-fold | - | 89.7 | - | [ |
4 CV = cross-validation, Acc = accuracy, AUC = area under ROC curve, C-Index = concordance index [129], Refs. = references, - = not reported. BCR = biochemical recurrence, EPE = extraprostatic extension, ECE = extracapsular extension, bRFS = biochemical recurrence-free survival, RP = radical prostatectomy.
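Several of the BCR-prediction studies above report the concordance index (C-index) [129]. A minimal sketch of Harrell's C-index on hypothetical recurrence data (times, event indicators and predicted risks are illustrative only, not taken from any listed study):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (the earlier time must be an
    observed event, not a censored follow-up), the fraction in which the
    patient who recurred earlier was assigned the higher predicted risk.
    Tied risks count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:  # i recurred first: usable pair
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical data: months to BCR/censoring, event flag (1 = recurrence,
# 0 = censored), and model-predicted risk
times  = [6, 12, 18, 24]
events = [1, 1, 0, 1]
risks  = [0.9, 0.4, 0.6, 0.2]
print(concordance_index(times, events, risks))  # 4 of 5 comparable pairs = 0.8
```

A C-index of 0.5 corresponds to random risk ranking and 1.0 to perfect ranking, which is how values such as 0.776 and 0.802 in the table should be read.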