Henrik J Michaely1, Giacomo Aringhieri2,3, Dania Cioni2,3, Emanuele Neri2,3.
Abstract
Prostate cancer detection with magnetic resonance imaging is based on a standardized MRI protocol according to the PI-RADS guidelines, including morphologic imaging, diffusion-weighted imaging, and perfusion. To facilitate data acquisition and analysis, the contrast-enhanced perfusion is often omitted, resulting in a biparametric prostate MRI protocol. The intention of this review is to analyze the current value of biparametric prostate MRI in combination with methods of machine learning and deep learning in the detection, grading, and characterization of prostate cancer; where available, a direct comparison with human radiologist performance was made. PubMed was systematically queried, and 29 appropriate studies were identified and retrieved. The data show that detection of clinically significant prostate cancer and differentiation of prostate cancer from non-cancerous tissue using machine learning and deep learning is feasible, with promising results. Some machine-learning and deep-learning techniques currently appear to perform as well as human radiologists in classifying single lesions according to the PIRADS score.
Keywords: PIRADS; artificial intelligence; biparametric prostate MRI; cancer detection; deep-learning; multiparametric prostate MRI; prostate cancer; radiomics
Year: 2022 PMID: 35453847 PMCID: PMC9027206 DOI: 10.3390/diagnostics12040799
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Overview of the performance of mpMRI and bpMRI based on data from Woo et al. [33] and Alabousi et al. [25] demonstrating the near-equal performance of bpMRI to mpMRI (reprinted with permission from [17], Copyright 2020 Gland Surgery).
Figure 2. Hierarchical structure of AI techniques. Whereas ML requires human feature engineering as guidance for learning, DL is based on self-learning algorithms that can detect and process simple and complex image features.
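The ML branch of this hierarchy can be illustrated with a deliberately tiny sketch (not taken from the review, and all names and values below are hypothetical): a human-engineered feature, here simply the mean intensity of an image patch, is handed to a simple learner, whereas a DL model would instead learn its own features from the raw voxels.

```python
# Toy illustration of classical ML: human-chosen feature + simple learner.
# A DL approach would skip mean_feature() and learn features from raw data.

def mean_feature(patch):
    """Handcrafted feature: mean intensity of a lesion patch."""
    return sum(patch) / len(patch)

def train_perceptron(features, labels, lr=0.01, epochs=100):
    """Minimal perceptron on a single scalar feature."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x   # standard perceptron update
            b += lr * (y - pred)
    return w, b

# Hypothetical patches: low values mimic restricted diffusion (label 1),
# high values mimic benign tissue (label 0). Purely synthetic numbers.
patches = [[0.6, 0.7, 0.65], [0.55, 0.6, 0.62],
           [1.4, 1.5, 1.45], [1.3, 1.35, 1.4]]
labels = [1, 1, 0, 0]
feats = [mean_feature(p) for p in patches]
w, b = train_perceptron(feats, labels)
preds = [1 if w * f + b > 0 else 0 for f in feats]
```

The point of the sketch is only the division of labor: the feature definition is fixed by a human, and the learner merely finds a threshold on it.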
Figure 3. Sample radiomics workflow (reprinted with permission from [40], Copyright 2019 Springer Nature).
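The feature-extraction step of such a radiomics workflow can be sketched, under stated assumptions, in a few lines: after segmentation, first-order statistics (mean, spread, histogram entropy) are computed over the voxel intensities of the ROI. This is an illustrative toy, not the pipeline used by any of the included studies; in practice dedicated tools such as PyRadiomics compute dozens of standardized features. The ADC values below are hypothetical.

```python
import math
from statistics import mean, pstdev

def first_order_features(roi_values, bins=16):
    """Toy first-order radiomics-style features for a list of voxel
    intensities from a segmented ROI (illustrative only)."""
    lo, hi = min(roi_values), max(roi_values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in roi_values:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(roi_values)
    probs = [c / n for c in counts if c]  # non-empty histogram bins
    return {
        "mean": mean(roi_values),
        "std": pstdev(roi_values),
        "range": hi - lo,
        "entropy": -sum(p * math.log2(p) for p in probs),
    }

# Hypothetical ADC values (x10^-6 mm^2/s) inside a lesion ROI
roi = [650, 700, 720, 680, 640, 900, 950, 610, 630, 700]
feats = first_order_features(roi)
```

The resulting feature vector is what a downstream ML classifier (Figure 2) would consume.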
Figure 4. Workflow of standard radiology reporting compared to the AI-based methods of radiomics and DL. The full complexity of deep learning is only shown schematically: there is an abundance of different CNN architectures, which are beyond the scope of this study, and this figure demonstrates only a schematic CNN (reprinted under Creative Commons license 4.0 from [44], Copyright 2021 Springer Nature).
Figure 5. Literature selection workflow. ML–machine learning; DL–deep learning; up–uniparametric; bp–biparametric; mp–multiparametric.
List of included studies and relevant key information.
| Reference | Year | ML | DL | Field Strength | Target | Number of Patients | Age | SS/SP/Accuracy | AUC | Sequences Used |
|---|---|---|---|---|---|---|---|---|---|---|
| Abdollahi H. et al. | 2019 | 1 | 0 | 1.5 T | Gleason score prediction | 33 | 73 (51–82) | – | 0.739 | T2, ADC |
| Wu M. et al. | 2019 | 1 | 0 | 3 T | TZ PCA detection | 44 | 68 ± 7 | 93.2%/98.4% | 0.989 (LR) | T2, ADC |
| Varghese B. et al. | 2019 | 1 | 0 | 3 T | Grading prediction | 68 | – | 86%/72% | 0.71 | T2, ADC |
| Min X. et al. | 2019 | 1 | 0 | 3 T | ci/csPCA discrimination TZ and PZ | 280 | – | 84.1%/72.7% | 0.823 | T2, ADC, b1500 |
| Toivonen J. et al. | 2019 | 1 | 0 | 3 T | Gleason prediction TZ and PZ | 62 | 65 (45–73) | – | 0.88 | T2, b0–b2000, T2 mapping |
| Chen T. et al. | 2019 | 1 | 0 | 3 T | Tumor detection | 182 | 73 (55–90) | 98.6%/99.2%/98.9% (noPCA vs. PCA) | 0.999 (noPCA vs. PCA) | T2, ADC |
| Xu M. et al. | 2019 | 1 | 0 | 3 T | Tumor detection | 331 | 71 (46–94) | – | 0.92 (radiomics) | T2, ADC, DWI |
| Zhong X. et al. | 2019 | 0 | 1 | 3 T | ci/cs PCA discrimination | 140 | – | 63.6%/80.6%/72.3% | 0.726 (DL) | – |
| Yuan Y. et al. | 2019 | 0 | 1 | 3 T | ci/cs PCA discrimination (GS > 7) | 132 | – | –/–/86.9% | – | T2 ax and sag, ADC |
| Xu H. et al. | 2019 | 0 | 1 | 3 T | Detection of PIRADS ≥ 3 lesions | 346 | – | –/–/93.0% | 0.950 | T2, ADC, high b-value |
| Schelb P. et al. | 2019 | 0 | 1 | 3 T | DL vs. radiologist for lesion (PIRADS ≥ 3 and 4) detection and segmentation | 250 | 64 (58–71) | 98%/17% Rad, PIRADS ≥ 3 | – | T2, ADC, DWI |
| Montoya Perez I. et al. | 2020 | 1 | 0 | 3 T | Detection of csPCA with bpMRI, RNA and clinical data | 80 | 65 ± 7.1 | – | 0.92 | T2, DWI |
| Hou Y. et al. | 2020 | 1 | 0 | 3 T | csPCA identification in PIRADS 3 lesions in TZ and PZ | 263 | 66.8 ± 11.4 | – | 0.89 | T2, ADC, b1500 |
| Mehralivand S. et al. | 2020 | 1 | 0 | 3 T | Detection of csPCA in TZ and PZ | 236 | – | 50.8%/–/– (TZ, MRI) | 0.749 (MRI) | T2, b1500 |
| Gong L. et al. | 2020 | 1 | 0 | 3 T | ci/cs PCA discrimination | 326 | – | 73.8%/65.8%/69.9% | 0.788 | T2, ADC, b800 |
| Bleker J. et al. | 2020 | 1 | 0 | 3 T | ci/cs PCA discrimination in PZ | 206 | 66 (48–83) | – | 0.870 (mpMRI) | T2, ADC, DWI, (DCE) |
| Zong W. et al. | 2020 | 0 | 1 | 3 T | CNN optimization | 367 | – | 100%/92% | 0.840 | T2, ADC, b0 |
| Sanford T. et al. | 2020 | 0 | 1 | 3 T | Automated PIRADS classification compared to radiologist | 687 | 67 (46–89) | – | – | T2, ADC, high b-value |
| Brunese L. et al. | 2020 | 1 | 1 | 1.5 T | Gleason score prediction | 52 | – | –/–/98% | – | T2, DCE |
| Chen Y. et al. | 2020 | 0 | 1 | 3 T | Prostate and cancer segmentation | 136 | 68 (49–62) | 75.1%/99.9% | – | T2, ADC, b1200 |
| Winkel D.J. et al. | 2020 | 0 | 1 | 3 T | bpMRI PCA screening | 49 | 58 (45–75) | 87%/50% | – | T2, ADC, b2000 |
| Arif M. et al. | 2020 | 0 | 1 | 3 T | Detection of csPCA in AS | 292 | 68 (62–72) | 92%/76% | 0.89 | T2, ADC, b800 |
| He D. et al. | 2021 | 1 | 0 | 3 T | Tumor detection | 459 | 65 (30–89) | – | 0.863 | T2, ADC |
| Vente C. et al. | 2021 | 0 | 1 | 3 T | csPCA detection and grading | 99 | – | – | – | T2, ADC |
| Chen J. et al. | 2021 | 0 | 1 | 3 T | csPCA detection and grading | 25 | – | 89.6%/90.2%/92.1% | 0.964 | T2, T1 |
| Cao R. et al. | 2021 | 0 | 1 | 3 T | PCA detection and grading | 126 | 62.4 ± 6.4 | 98%/17% PIRADS ≥ 3 | – | T2, ADC |
| Hou Y. et al. | 2021 | 0 | 1 | 3 T | ECE prediction | 590 | 69.2 (42–86) | – | 0.857 | T2, ADC, b1500 |
| Yan Y. et al. | 2021 | 1 | 1 | 3 T | BCR prediction | 485 | 69.8 | – | 0.802 (C-index) | T2 |
| Schelb P. et al. | 2021 | 0 | 1 | 3 T | csPCA detection and grading | 284 | 64 (IQR 61–72) | 98%/17% PIRADS ≥ 3 | – | T2, ADC, b1500 |
Figure 6. “Examples of lesion detection. The left two columns show the input T2WI and ADC map, respectively. The right two columns show the FocalNet-predicted lesion probability map and detection points (green crosses) with reference lesion annotation (red contours), respectively. (a) Patient at age 66, with a prostate cancer (PCa) lesion at left anterior peripheral zone with Gleason Group 5 (Gleason Score 4 + 5). (b) Patient at age 68, with a PCa lesion at left posterolateral peripheral zone with Gleason Group 2 (Gleason Score 3 + 4). (c) Patient at age 69, with a PCa lesion at right posterolateral peripheral zone with Gleason Group 3 (Gleason Score 4 + 3). ADC = apparent diffusion coefficient; T2WI = T2-weighted imaging” (reprinted with permission from [72], Copyright 2021 John Wiley and Sons).
Display of study results comparing human and AI-based performance.
| Reference | Year | ML | DL | Metric | Human Radiologist | AI-Approach |
|---|---|---|---|---|---|---|
| Chen T. et al. | 2019 | 1 | 0 | AUC | 0.867 | 0.999 |
| Schelb P. et al. | 2019 | 0 | 1 | Sensitivity/Specificity | 98%/17% PIRADS ≥ 3 | 99%/25% PIRADS ≥ 3 |
| Mehralivand S. et al. | 2020 | 1 | 0 | AUC | 0.816 | 0.780 |
| Sanford T. et al. | 2020 | 0 | 1 | Cancer detection rates | 53% PIRADS 3 | 57% PIRADS 3 |
| Cao R. et al. | 2021 | 0 | 1 | Sensitivity/Specificity | 98%/17% PIRADS ≥ 3 | 100%/17% PIRADS ≥ 3 |
| Schelb P. et al. | 2021 | 0 | 1 | Sensitivity/Specificity | 98%/17% PIRADS ≥ 3 | 99%/24% PIRADS ≥ 3 |
Display of PRISMA items.
| PRISMA Item | Description |
|---|---|
| Title | Current Value of Biparametric Prostate MRI with Machine-Learning or Deep-Learning in the Detection, Grading and Characterization of Prostate Cancer: A Systematic Review |
| Main objective | Assessing the current value of deep-learning and machine-learning applied to biparametric MRI of the prostate |
| Inclusion and exclusion criteria | Inclusion criteria: study listed in PubMed; search terms “prostate” and “magnetic” and either “deep learning” or “machine learning” or “radiomics”; full-text access available through University of Heidelberg; paper type: original investigation/research; focus: detection or grading of prostate cancer with biparametric prostate MRI; language: English or German; year of publication 2019–2021. Exclusion criteria: no full-text access; wrong paper type (reviews, meta-analyses); wrong focus (e.g., prostate segmentation, radiation therapy planning); wrong technique (uniparametric or multiparametric prostate MRI) |
| Information source and access time | PubMed query in August 2021 |
| Methods to assess risk of bias in included studies | No structured program was used to assess bias in study selection. Internal review by the authors and critical appraisal of the data were performed. |
| Methods to present and synthesize results | Descriptive statistics, listing in tabular form |
| Number of studies and participants included | 29 publications included |
| Main outcomes | Very heterogeneous data did not allow for a general interpretation across all studies. |
| Limitations | No overall statistical analysis feasible due to the heterogeneity of methods and inclusion criteria reported; 7 out of 29 studies based on the same dataset (PROSTATEx, Radboud University Medical Center, Nijmegen, The Netherlands); heterogeneous studies with different inclusion criteria and ground truth (i.e., whether a given Gleason grade constitutes high-grade cancer or not); demographic and statistical data often lacking |
| General interpretation | Detection of clinically significant prostate cancer and differentiation of prostate cancer from non-cancerous tissue using machine learning and deep learning is feasible, with promising results. Some machine-learning and deep-learning techniques currently appear to perform as well as human radiologists in classifying single lesions according to the PIRADS score. |
| Primary source for funding | No general funding. Publication costs are covered by the University of Pisa, Pisa, Italy. |
| Register name and registration number | No registration |