Virginia Liberini1,2, Riccardo Laudicella3,4,5, Michele Balma6, Daniele G Nicolotti6, Ambra Buschiazzo6, Serena Grimaldi7, Leda Lorenzon8, Andrea Bianchi6, Simona Peano6, Tommaso Vincenzo Bartolotta9, Mohsen Farsad10, Sergio Baldari4, Irene A Burger3,11, Martin W Huellner3, Alberto Papaleo6, Désirée Deandreis7.
Abstract
In prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. Nuclear medicine therefore holds great promise for improving the quality of life of PCa patients: by managing and processing vast amounts of molecular imaging data within a multi-omics approach, it can refine patient risk stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these "big data" in both the diagnostic and theragnostic fields: from technical aspects (such as semi-automated tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, deepening the understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict outcome. This systematic review aims to describe the current literature on AI and radiomics applied to molecular imaging of prostate cancer.
Keywords: Artificial intelligence; Positron emission tomography; Prostate cancer; Radiomics; Theragnostics
Year: 2022 PMID: 35701671 PMCID: PMC9198151 DOI: 10.1186/s41747-022-00282-0
Source DB: PubMed Journal: Eur Radiol Exp ISSN: 2509-9280
Fig. 1 The workflow includes the steps required in a radiomic and artificial intelligence analysis in prostate cancer patients. The first step involves collecting clinical data on patient characteristics, histopathological data on tumor characteristics, and imaging data, with the extraction of radiomic features (such as shape, intensity, and texture features). Radiomic modeling involves three major aspects: feature selection, modeling methodology, and validation. The number of radiomic features that can be extracted from images is virtually unlimited. Once extracted, radiomic features must be selected; features that are redundant or not robust against sources of variability must be identified and eliminated through dimensionality reduction techniques to avoid overfitting. The choice of modeling methodology and the identification of optimal machine learning methods for radiomic applications is a crucial step in obtaining robust and clinically relevant results. The choice of a modeling methodology (supervised or unsupervised machine learning) depends on the setting of the data, the characteristics of the analyzed population, and the experience of the researchers. The chosen model affects prediction and performance in radiomics; hence, implementation of multiple modeling methodologies is highly desirable. Finally, validation techniques are useful tools for assessing model performance. An externally validated model has more credibility than an internally validated one because the external data are independent of the training set. Validation is essential to verify the repeatability and reproducibility of the model, demonstrating statistical consistency between the training and validation datasets
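The modeling steps in the caption (dimensionality reduction, supervised modeling, internal validation) can be sketched with a generic machine-learning stack. This is an illustrative assumption, not the pipeline of any study in this review: the feature matrix is synthetic, and the specific selector, classifier, and cross-validation scheme are arbitrary choices standing in for the many options discussed.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Toy stand-in for a radiomic feature matrix: 60 patients x 100 features
X = rng.normal(size=(60, 100))
X[:, 5] = 0.0  # a non-informative (constant) feature to be filtered out
# Synthetic binary outcome driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=60) > 0).astype(int)

# Dimensionality reduction (drop zero-variance features) -> scaling -> model
pipeline = make_pipeline(
    VarianceThreshold(threshold=0.0),
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)

# 5-fold cross-validation as a simple internal validation strategy;
# external validation would instead score the fitted model on an
# independent dataset from another center.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {scores.mean():.2f}")
```

Wrapping selection, scaling, and modeling in a single pipeline ensures that all steps are re-fit inside each cross-validation fold, avoiding the information leakage that inflates apparent performance when feature selection is done on the full dataset.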
Fig. 2Schematic representation of the performed literature search and the review strategy
Overview of retrospective studies on AI-based improvement of segmentation-based MR attenuation correction (MRAC)
| Author and publication year | Algorithm | Cohort (patients) | Ground truth | Performance |
|---|---|---|---|---|
| Bradshaw et al. (2018) | DL-based attenuation-correction method (deepMRAC) using 3D CNNs (DeepMedic). The network was trained to produce a discretized (air, water, fat, and bone) substitute computed tomography (CT) image (CTsub). Discretized (CTref discrete) and continuously valued (CTref) reference CT images served as ground truth for network training and attenuation correction, respectively | Eighteen female patients with cervical cancer, randomly split into 12 training and 6 testing subjects, with an [18F]FDG PET/MRI scan (T2 MRI, T1 LAVA Flex, and 2-point-Dixon-based MRAC images) followed by a PET/CT scan. No validation cohort | Reference CT (CTref) images were generated by combining different techniques per tissue type: bone = CT image + T2 MRI image followed by bone segmentation; fat and water = fat-fraction image generated from the 2-point Dixon acquisition; air (including bowel gas) = intensity threshold of the T2 image based on an ROI in muscle, with manual corrections | The Dice coefficient of the AI-produced CTsub versus CTref discrete was 0.79 for cortical bone, 0.98 for soft tissue, and 0.49 for bowel gas. The root-mean-square error (RMSE) of the whole PET image was 4.9% with deepMRAC versus 11.6% with the system MRAC |
| Leynes et al. (2018) | DL-based attenuation-correction method = U-net CNN with 13 layers. The model directly and fully automatically converts MRI images to synthetic CT images, the so-called zero-echo-time and Dixon deep pseudo-CT ("ZeDD-CT"), for PET image reconstruction (providing patient-specific continuous-valued attenuation coefficients in soft tissue and bone), and the impact on radiotracer uptake estimation was evaluated | Twenty-six patients with pelvic lesions (split into 10 training and 16 evaluation subjects) who underwent [18F]FDG or [68Ga]Ga-PSMA-11 PET/MRI. No validation cohort | Helical CT images of the patients were acquired and co-registered to the MRI images | Thirty bone and 60 soft tissue lesions were evaluated, and SUVmax was measured. Compared with ground-truth CTAC, the RMSE in PET quantification was reduced by a factor of 4 for bone lesions and 1.5 for soft tissue lesions |
| Mostafapour et al. (2021) | DL-based attenuation-correction method = residual deep learning model, taking non-attenuation-corrected PET images (PET-NAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference) | Three hundred ninety-nine whole-body [68Ga]Ga-PSMA-11 images were used as the training dataset. Forty-six whole-body [68Ga]Ga-PSMA-11 images were used as an independent validation dataset | CT from the corresponding PET-CTAC was used as reference (ground truth) | On the independent external validation dataset, the AI method achieved a mean absolute error (MAE), relative error (RE%), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) of 0.91 ± 0.29 (SUV), -2.46% ± 10.10%, 0.973 ± 0.034, and 48.171 ± 2.964, respectively |
| Jang et al. (2018) | DL-based attenuation-correction method = convolutional neural network pre-trained with T1-weighted MRI images. Ultrashort echo time (UTE) images were used as input to the network, which was trained using labels derived from co-registered CT images | Head PET/MRI of 8 human subjects. No validation cohort | A registered CT image was used as ground truth | Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76 ± 0.03, 0.96 ± 0.006, and 0.88 ± 0.01. In PET quantification, the proposed MRAC method produced relative PET errors of less than 1% within most brain regions |
| Torrado-Carvajal et al. (2020) | DL-based attenuation-correction method = Dixon-VIBE Deep Learning (DIVIDE), a deep-learning network that synthesizes pelvic pseudo-CT maps based only on standard Dixon volumetric interpolated breath-hold examination (Dixon-VIBE) images | Twenty-eight datasets obtained from 19 patients who underwent PET/CT and PET/MRI examinations were used to evaluate the proposed method. No validation cohort | CT from PET/CT | Absolute mean relative change values relative to CT-based AC were lower than 2% on average for the DIVIDE method in every ROI except bone, where they were lower than 4% |
| Pozaruk et al. (2021) | DL-based attenuation-correction method = augmented generative adversarial network (GAN), trained in a supervised manner to improve the accuracy of attenuation maps estimated from MRI Dixon contrast images | Twenty-eight prostate cancer patients: 18 patients (2,160 slices, augmented to 270,000 slices) were used for training the GAN, the remaining 10 for validation | CT images | The DL-based method generated pseudo-CT AC μ-maps 4.5% more accurately than standard MRI-based techniques; augmenting the training dataset improved the accuracy of the estimated μ-map and consequently the PET quantification compared with the state of the art |
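For illustration, the two metrics most often reported in the table, the Dice overlap between a predicted and a reference tissue mask and a percent RMSE between attenuation-corrected PET images, can be computed as follows. The masks and images here are toy stand-ins, not data from any of the cited studies:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def percent_rmse(test_img: np.ndarray, ref_img: np.ndarray) -> float:
    """Root-mean-square error, expressed relative to the reference mean (%)."""
    return 100.0 * np.sqrt(np.mean((test_img - ref_img) ** 2)) / ref_img.mean()

# Toy masks: reference is an 8x8 square; prediction is shifted by one column
ref = np.zeros((16, 16), dtype=bool)
ref[4:12, 4:12] = True
pred = np.zeros_like(ref)
pred[4:12, 5:13] = True
print(f"Dice: {dice_coefficient(pred, ref):.3f}")  # 2*56/(64+64) = 0.875

# Toy images: uniform reference vs. a 5% global overestimation
ref_img = np.full((16, 16), 2.0)
test_img = ref_img * 1.05
print(f"RMSE: {percent_rmse(test_img, ref_img):.1f}%")  # 5.0%
```

Note that studies may normalize RMSE differently (e.g., per-voxel relative error versus error relative to the reference mean), so absolute percentages are not directly comparable across rows of the table.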