| Literature DB >> 35448210 |
Maria Elena Laino, Pierandrea Cancian, Letterio Salvatore Politi, Matteo Giovanni Della Porta, Luca Saba, Victor Savevski.
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology, having demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative Adversarial Networks (GANs) have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a class of deep learning models that leverage adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found their application. In neuroradiology, indeed, GANs open unexplored scenarios, enabling new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression modeling, and brain decoding. In this narrative review, we provide an introduction to GANs in brain imaging, discussing the clinical potential of GANs, future clinical applications, as well as pitfalls that radiologists should be aware of.
Keywords: CT; MRI; PET; brain imaging; fMRI; generative adversarial networks
Year: 2022 PMID: 35448210 PMCID: PMC9028488 DOI: 10.3390/jimaging8040083
Source DB: PubMed Journal: J Imaging ISSN: 2313-433X
Figure 1. Flow diagram of the study search and inclusion process.
Articles included in the review focusing on image-to-image translation and cross-modality synthesis.
| Author | Year | Application | Population | Imaging Modality | ML Model | Results |
|---|---|---|---|---|---|---|
| Jin | 2019 | Image-to-Image translation and cross-modality synthesis | 202 patients | MRI from CT image | MR-GAN | MAE: 19.36 |
| Kazemifar | 2019 | Image-to-Image translation and cross-modality synthesis | 66 patients | CT from MRI | GAN | mean absolute difference |
| Dai | 2020 | Image-to-Image translation and cross-modality synthesis | 274 subjects (54 patients with low-grade glioma and 220 patients with high-grade glioma) | MRI | Unified generative adversarial network for multimodal MR image synthesis | NMAEs for the generated T1c, T2, and Flair: 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006 |
| Hamghalam | 2020 | Image-to-Image translation and cross-modality synthesis | Various datasets | MRI-HTC | Cycle-GAN | Dice similarity scores: |
| Maspero | 2020 | Image-to-Image translation and cross-modality synthesis | 60 pediatric patients | SynCT from T1-weighted MRI | cGANs | mean absolute error of 61 ± 14 HU |
| Sanders | 2020 | Image-to-Image translation and cross-modality synthesis | 109 brain tumor patients | relative cerebral blood volume | cGANs | Pearson correlation analysis showed strong correlation (ρ = 0.87) |
| Wang | 2020 | Image-to-Image translation and cross-modality synthesis | 20 patients | MRI-PET | cycleGANs | PSNR > 24.3 |
| Lan | 2021 | Image-to-Image translation and cross-modality synthesis | 265 subjects | PET-MRI | 3D self-attention conditional GAN | NRMSE: 0.076 ± 0.017 |
| Bourbonne | 2021 | Image-to-Image translation and cross-modality synthesis | 184 patients with brain metastases | CT-MRI | 2D-GAN (2D U-Net) | mean global gamma analysis passing rate: 99.7% |
| Cheng | 2021 | Image-to-Image translation and cross-modality synthesis | 17 adults | Two-dimensional fMRI images | BMT-GAN | MSE: 128.6233 |
| La Rosa | 2021 | Image-to-Image translation and cross-modality synthesis | 12 healthy controls and 44 patients diagnosed with Multiple Sclerosis | MRI (MP2RAGE uniform images, UNI) | GAN | PSNR: 31.39 ± 0.96 |
| Lin | 2021 | Image-to-Image translation and cross-modality synthesis | AD 362 subjects; 647 images | MRI-PET | Reversible Generative Adversarial Network (RevGAN) | Synthetic PET: |
| Liu | 2021 | Image-to-Image translation and cross-modality synthesis | 12 brain cancer patients | SynCT images from T1-weighted postgadolinium MR | GAN model with a residual network (ResNet) | Average gamma passing rates at 1%/1 mm and 2%/2 mm were 99.0 ± 1.5% |
| Tang | 2021 | Image-to-Image translation and cross-modality synthesis | 37 brain cancer patients | SynCT from T1-weighted MRI | GAN | Average gamma passing rates at 3%/3 mm and 2%/2 mm criteria were 99.76% and 97.25% |
| Uzunova | 2021 | Image-to-Image translation and cross-modality synthesis | Various datasets | MRI (T1/Flair to T2, healthy to pathological) | GAN | T1 → T2 |
| Yang | 2021 | Image-to-Image translation and cross-modality synthesis | 9 subjects | Multimodal MRI-CT registration into monomodal sCT-CT registration | CAE-GAN | MAE: 99.32 |
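Several of the translation models in the table above (e.g., Hamghalam 2020, Wang 2020, Liu 2020) are CycleGAN variants. The cycle-consistency term that distinguishes a CycleGAN from a plain GAN, that translating an image to the other modality and back should recover the original, can be sketched as follows. This is a minimal NumPy illustration; the two toy linear mappings standing in for the generators are hypothetical, not any model from these papers.

```python
import numpy as np

def cycle_consistency_loss(batch_a, batch_b, g_ab, g_ba):
    """L1 cycle loss: a -> g_ab -> g_ba should recover a, and vice versa."""
    forward = np.mean(np.abs(g_ba(g_ab(batch_a)) - batch_a))
    backward = np.mean(np.abs(g_ab(g_ba(batch_b)) - batch_b))
    return forward + backward

# Hypothetical "generators": exact inverses, so the cycle loss is zero.
g_ab = lambda x: 2.0 * x   # toy modality-A -> modality-B intensity mapping
g_ba = lambda x: 0.5 * x   # toy inverse mapping, B -> A

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0])
print(cycle_consistency_loss(a, b, g_ab, g_ba))  # → 0.0
```

In the real models this term is added to the usual adversarial losses of both generators, which is what lets a CycleGAN train on unpaired scans.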
Articles included in the review focusing on image reconstruction.
| Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
|---|---|---|---|---|---|---|
| Ouyang | 2019 | Image reconstruction | 39 participants | PET/MRI | GAN | MAE: 8/80 |
| Song | 2020 | Image reconstruction | 30 HRRT scans from the ADNI database. Validation dataset = 12 subjects | low-resolution PET and high-resolution MRI images | Self-supervised SR (SSSR) GAN | Various results |
| Shaul | 2020 | Image reconstruction | 490 3D brain MRI of a healthy human adult; 64 patients from Longitudinal MS Lesion Segmentation Challenge (T1, T2, PD, and FLAIR); 14 DCE-MRI acquisitions of Stroke and brain tumor | MRI | GAN | PSNR: 40.09 ± 3.24 |
| Zhao | 2020 | Image reconstruction | 109 patients | PET | S-CycleGAN | Average coincidence: 110 ± 23 |
| Zhang | 2021 | Image reconstruction | 581 healthy adults | MRI | noise-based super-resolution network (nESRGAN) | SSIM: 0.09710 ± 0.0022 |
| Sundar | 2021 | Image reconstruction | 10 healthy adults | PET/MRI | cGAN | AUC: 0.9 ± 0.7% |
| Zhou | 2021 | Image reconstruction | 151 patients with Alzheimer’s Disease | MRI | GAN | Image quality: 9.6% |
| Lv | 2021 | Image reconstruction | 17 participants with a brain tumor | MRI | PI-GAN | SSIM: 0.96 ± 0.01 |
| Delannoy | 2020 | Image reconstruction and segmentation | dHCP dataset = 40; Epirmex dataset = 1500 | MRI | SegSRGAN | Dice 0.050 |
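Most reconstruction studies above report PSNR or SSIM as their figure of merit. PSNR, the simpler of the two, follows directly from the mean squared error between the reconstructed and reference images; a minimal sketch of the standard formula:

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

ref = np.full((8, 8), 100.0)
rec = ref + 1.0  # every pixel off by exactly 1, so MSE = 1
print(round(psnr(ref, rec), 2))  # → 48.13, i.e. 20 * log10(255)
```

Note that PSNR depends on the assumed dynamic range (`max_value`), which is why reported values are only comparable across studies using the same intensity scaling.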
Articles included in the review focusing on image segmentation.
| Author | Year | Application | Population | Imaging Modality | ML Model | Results |
|---|---|---|---|---|---|---|
| Liu | 2020 | Image segmentation | 14 subjects | MRI | cycle-consistent generative adversarial network (CycleGAN) | Dice 75.5%; ASSD: 1.2 |
| Oh | 2020 | Image segmentation | 192 subjects | 18 F-FDG PET/CT and MRI | GAN | AUC-PR: 0.869 ± 0.021 |
| Yuan | 2020 | Image segmentation | 484 brain tumor scans | MRI | GAN | Dice: 42.35% |
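The Dice similarity coefficient reported by the segmentation studies above measures the overlap between a predicted and a reference binary mask, 2|A∩B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks (1 = perfect overlap)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred   = np.array([1, 1, 0, 0])
target = np.array([1, 0, 1, 0])
print(dice(pred, target))  # 2*1 / (2+2) ≈ 0.5
```

The small `eps` only guards against empty masks; for non-degenerate masks the value matches the textbook formula.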
Articles included in the review focusing on image synthesis.
| Author | Year | Application | Population (No. of Patients) | Imaging Modality | ML Model | Results |
|---|---|---|---|---|---|---|
| Kazuhiro | 2018 | Image synthesis | 30 healthy individuals and 33 patients with cerebrovascular accident | MRI | DCGAN | 45% and 71% were identified as real images by neuroradiologists. |
| Islam | 2020 | Image synthesis | 479 patients | PET | DCGAN | SSIM 77.48 |
| Kim | 2020 | Image synthesis | 139 patients with Alzheimer’s Disease and 347 Normal Cognitive participants | PET/CT | Boundary Equilibrium Generative Adversarial Network (BEGAN) | Accuracy: 94.82; Sensitivity: 92.11; Specificity: 97.45; AUC: 0.98 |
| Qingyun | 2020 | Image synthesis | 226 patients | MRI (FLAIR, T1, T1CE) | TumorGAN | Dice 0.725 |
| Barile | 2021 | Image synthesis | 29 relapsing-remitting and 19 secondary-progressive MS patients. | MRI | GAN AAE | F1 score 81% |
| Hirte | 2021 | Image synthesis | 2029 patients with normal brains | MRI | GAN | Data similarity 0.0487 |
| Kossen | 2021 | Image synthesis | 121 patients with Cerebrovascular disease | MRA | 3 GANs: | FID 37.01 |
Articles included in the review focusing on brain decoding.
| Authors | Year | Application | Population | Image Modality | ML Model | Results |
|---|---|---|---|---|---|---|
| Qiao | 2020 | Brain decoding | 1750 training samples and 120 testing samples | fMRI | GAN-based Bayesian visual reconstruction model (GAN-BVRM) | PSM: 0.381 ± 0.082 |
| Ren | 2021 | Brain decoding | Various datasets | MRI | Dual-Variational Autoencoder/Generative Adversarial Network (D-Vae/Gan) | Mean identification accuracy: 87% |
| Huang | 2021 | Brain decoding | Five volunteers | fMRI | CAE, LSTM, and conditional progressively growing GAN (C-PG-GAN) | Various results for each participant |
| Al-Tahan | 2021 | Brain decoding | 50 healthy right-handed participants | fMRI | Adversarial Autoencoder (AAE) framework | MAE 0.49 ± 0.024 |
Articles included in the review focusing on disease progression modeling.
| Author | Year | Application | Population | Imaging Modality | ML Model | Results |
|---|---|---|---|---|---|---|
| Elazab | 2020 | Disease progression modeling | 9 subjects | MRI | growth prediction GAN (GP-GAN) | Dice: 88.26 |
| Han | 2021 | Disease progression modeling | 408 subjects/1133 scans/57,834 slices | MRI | medical anomaly detection generative adversarial network (MADGAN) | Cognitive impairment: AUC: 0.727 |
Figure 2. Example of how a GAN works. The generator creates synthetic images from random noise, while the discriminator has to differentiate between real and synthetic images. The blue arrow shows the discriminator’s loss back-propagation; the red arrow shows the generator’s loss back-propagation.
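The two loss signals in the figure can be written out directly. A minimal NumPy sketch of the standard (non-saturating) GAN objectives, where `d_real` and `d_fake` stand for the discriminator's probability outputs on real and generated images; this is the textbook formulation, not the exact objective of any specific paper reviewed above:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Blue arrow: D is rewarded for pushing D(real) -> 1 and D(fake) -> 0."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Red arrow (non-saturating form): G is rewarded for fooling D, D(fake) -> 1."""
    return -np.mean(np.log(d_fake))

# A discriminator that cannot tell real from fake outputs 0.5 everywhere:
d_out = np.full(4, 0.5)
print(round(discriminator_loss(d_out, d_out), 4))  # → 1.3863, i.e. 2 * ln 2
print(round(generator_loss(d_out), 4))             # → 0.6931, i.e. ln 2
```

At this equilibrium point neither network can improve, which is the adversarial balance the figure depicts.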