Vasant Kearney, Benjamin P Ziemer, Alan Perry, Tianqi Wang, Jason W Chan, Lijun Ma, Olivier Morin, Sue S Yom, Timothy D Solberg
Abstract
PURPOSE: To propose an attention-aware, cycle-consistent generative adversarial network (A-CycleGAN) enhanced with variational autoencoding (VAE) as a superior alternative to current state-of-the-art MR-to-CT image translation methods.

MATERIALS AND METHODS: An attention-gating mechanism is incorporated into the discriminator network to encourage a more parsimonious use of network parameters, while VAE enhancement enables deeper discriminator architectures without inhibiting model convergence. Images from 60 patients with head, neck, and brain cancer were used to train and validate A-CycleGAN, and images from 30 patients formed the holdout test set, on which final evaluation metrics were reported using mean absolute error (MAE) and peak signal-to-noise ratio (PSNR).

RESULTS: A-CycleGAN achieved superior results compared with a U-Net, a generative adversarial network (GAN), and a cycle-consistent GAN. The A-CycleGAN means, 95% confidence intervals (CIs), and two-sided Wilcoxon signed-rank test statistics were: MAE, 19.61 (95% CI: 18.83, 20.39; P = .0104); structural similarity index measure (SSIM), 0.778 (95% CI: 0.758, 0.798; P = .0495); and PSNR, 62.35 (95% CI: 61.80, 62.90; P = .0571).

CONCLUSION: A-CycleGAN is a superior alternative to state-of-the-art MR-to-CT image translation methods.

© RSNA, 2020.
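The two evaluation metrics named above can be sketched generically as follows. This is a minimal illustration of MAE and PSNR between a synthetic image and a reference image, not the study's actual evaluation code; the intensity normalization and `data_range` value are assumptions, since the abstract does not specify them.

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error between two images of matching shape."""
    return float(np.mean(np.abs(pred.astype(np.float64) - ref.astype(np.float64))))

def psnr(pred, ref, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((pred.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10((data_range ** 2) / mse))

# Toy example: a synthetic image off from the reference by a constant 10 units.
ref = np.zeros((4, 4))
pred = ref + 10.0
print(mae(pred, ref))                      # 10.0
print(psnr(pred, ref, data_range=100.0))   # 20.0
```

Lower MAE and higher PSNR both indicate closer agreement with the reference CT; PSNR depends on the chosen `data_range`, so comparisons are only meaningful under a fixed intensity convention.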
Year: 2020
PMID: 33937817 PMCID: PMC8017410 DOI: 10.1148/ryai.2020190027
Source DB: PubMed Journal: Radiol Artif Intell ISSN: 2638-6100