Jiale Dong1,2, Caiwei Liu1,2, Panpan Man1,2, Guohua Zhao1,2, Yaping Wu3, Yusong Lin2,4,5.
Abstract
Medical image synthesis with generative adversarial networks (GANs) is an effective way to expand medical image samples. Two key issues in image synthesis are the structural consistency between the synthesized and real images, which indicates synthesis quality, and the region of interest (ROI) of the synthesized image, which determines its usability. In this paper, the fusion-ROI patch GAN (Fproi-GAN) model was constructed by incorporating a priori regional features into the two-stage cycle-consistency mechanism of cycleGAN. The model improves the tissue contrast of the ROI and achieves pairwise synthesis of high-quality medical images and their corresponding ROIs. Quantitative evaluation on two publicly available datasets, INbreast and BRATS 2017, shows that the synthesized ROI images have a Dice coefficient of 0.981 ± 0.11 and a Hausdorff distance of 4.21 ± 2.84 relative to the original images. Classification experiments show that the synthesized images can effectively assist in training machine learning models and improve the generalization of prediction models, raising classification accuracy by 4% and sensitivity by 5.3% compared with the cycleGAN method. Hence, the paired medical images synthesized by Fproi-GAN are of high quality and structurally consistent with real medical images.
Year: 2021 PMID: 34007428 PMCID: PMC8099524 DOI: 10.1155/2021/6678031
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1 (a) Cropping of the INbreast images. (b) Slices containing tumor regions were extracted from the 3D glioma images; × indicates that images without tumor regions were excluded.
Figure 2 (a) Regional feature extraction method. (b) The base model is a cycleGAN-like model consisting of two generators and two discriminators. (c) Synthesis of paired images: (i) synthesis process for medical images; (ii) synthesis process for ROI images.
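The base model in Figure 2(b) follows cycleGAN's cycle-consistency idea: each generator's output, when mapped back by the opposite generator, should reconstruct the original input. A minimal NumPy sketch of that L1 cycle loss follows; the generators `G` and `F` are hypothetical stand-ins for the paper's trained networks.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency loss as in cycleGAN:
    F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> G(x) -> F(G(x)) ~ x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) ~ y
    return forward + backward

# Toy check: with identity mappings the loss is exactly zero.
x = np.random.rand(4, 4)
y = np.random.rand(4, 4)
identity = lambda a: a
print(cycle_consistency_loss(x, y, identity, identity))  # 0.0
```

In training, this term is added to the adversarial losses so that the two generators stay mutually invertible rather than collapsing to arbitrary outputs.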
Figure 3 RFB architecture; the convolution process is enlarged in the box on the right, with the corresponding dimensions.
Figure 4 Generator architecture; the convolution details of the generator are enlarged in the boxes on both sides.
Figure 5 Discriminator architecture; Conv2D and LeakyReLU layers are applied in all Conv blocks.
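Each Conv block in the discriminator pairs a 2D convolution with a LeakyReLU activation. A minimal single-channel NumPy sketch of such a block is shown below; the kernel size, stride, and negative slope are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """LeakyReLU: pass positives through, scale negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def conv2d(image, kernel, stride=1):
    """Valid-mode 2D convolution (cross-correlation, as in DL frameworks)."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (patch * kernel).sum()
    return out

# One Conv block: 2x2 averaging kernel, stride 2, then LeakyReLU.
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2)) / 4.0
y = leaky_relu(conv2d(x, k, stride=2), alpha=0.2)
print(y.shape)  # (2, 2)
```

A strided convolution like this halves the spatial resolution at each block, which is how PatchGAN-style discriminators reduce an image to a grid of real/fake scores.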
Quantitative evaluation of the INbreast dataset (mean ± standard deviation). We compared the measurements of the different synthesis methods over the whole image domain and the tumor domain at a significance level of 0.05, and the underline indicates that Fproi-GAN is statistically significantly different from other methods.
| Region | Methods | PSNR | SSIM | MS-SSIM |
|---|---|---|---|---|
| Whole image | DCGAN | 16.834 ± 3.28 | 0.769 ± 0.15 | 0.879 ± 0.21 |
| | Pix2Pix | 17.398 ± 3.81 | 0.843 ± 0.13 | 0.923 ± 0.19 |
| | cycleGAN | 17.815 ± 5.18 | 0.829 ± 0.17 | 0.919 ± 0.18 |
| | Fproi-GAN | | | |
| Tumor region | DCGAN | 19.231 ± 7.43 | 0.872 ± 0.15 | 0.894 ± 0.23 |
| | Pix2Pix | 21.811 ± 6.98 | 0.915 ± 0.11 | 0.902 ± 0.22 |
| | cycleGAN | 20.485 ± 6.15 | 0.882 ± 0.07 | 0.904 ± 0.18 |
| | Fproi-GAN | | | |
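PSNR, one of the metrics reported above, measures synthesis fidelity as the log-scaled ratio of the peak intensity to the mean squared error between the real and synthesized images. A minimal sketch, assuming 8-bit images:

```python
import numpy as np

def psnr(reference, synthesized, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a real and a synthesized image."""
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((8, 8), 100.0)
b = np.full((8, 8), 110.0)   # constant error of 10 -> MSE = 100
print(round(psnr(a, b), 2))  # 10*log10(255^2/100) ≈ 28.13
```

SSIM and MS-SSIM additionally compare local luminance, contrast, and structure; library implementations (e.g. scikit-image's `structural_similarity`) are preferable to reimplementing them.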
Results of the quantitative evaluation of the ROI images of the INbreast dataset (mean ± standard deviation) with a significance level of 0.05; the underline indicates that the Fproi-GAN is statistically significantly different from other methods.
| Methods | Dice coefficient | Hausdorff distance |
|---|---|---|
| DCGAN | 0.827 ± 0.25 | 7.31 ± 4.95 |
| Pix2Pix | 0.945 ± 0.17 | 7.27 ± 4.18 |
| cycleGAN | 0.952 ± 0.13 | 6.83 ± 3.38 |
| Fproi-GAN | | |
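The Dice coefficient and Hausdorff distance reported for the ROI images can both be computed from a pair of binary masks. A brute-force NumPy sketch (adequate for small ROIs, not optimized):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient (overlap) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixels
    of two masks (brute-force pairwise distances)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

m1 = np.zeros((5, 5), int); m1[1:4, 1:4] = 1   # 3x3 square
m2 = np.zeros((5, 5), int); m2[1:4, 1:3] = 1   # 3x2 sub-rectangle
print(dice(m1, m2))       # 2*6/(9+6) = 0.8
print(hausdorff(m1, m2))  # farthest m1 pixel is one column away -> 1.0
```

Dice rewards overall overlap, while the Hausdorff distance penalizes the single worst boundary deviation, so the two metrics together cover both area and contour agreement.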
Figure 6 Comparison of Fproi-GAN with the three other synthesis methods on the INbreast dataset. (a) Input image. (b) DCGAN. (c) Pix2Pix. (d) cycleGAN. (e) Fproi-GAN.
Results of the quantitative evaluation of the BRATS 2017 dataset (mean ± standard deviation), where we compare the measurements of the different synthesis methods over the whole image domain and the tumor domain at a significance level of 0.05, and the underline indicates that Fproi-GAN is statistically significantly different from the other methods.
| Data | Region | Methods | PSNR | SSIM | MS-SSIM |
|---|---|---|---|---|---|
| HGG | Whole image | DCGAN | 25.749 ± 3.49 | 0.882 ± 0.04 | 0.890 ± 0.05 |
| | | Pix2Pix | 28.938 ± 4.68 | 0.952 ± 0.03 | 0.956 ± 0.05 |
| | | cycleGAN | 34.280 ± 4.85 | 0.984 ± 0.02 | 0.984 ± 0.05 |
| | | Fproi-GAN | | | |
| | Tumor region | DCGAN | 29.539 ± 5.05 | 0.903 ± 0.02 | 0.910 ± 0.05 |
| | | Pix2Pix | 33.031 ± 5.99 | 0.951 ± 0.02 | 0.952 ± 0.04 |
| | | cycleGAN | 35.652 ± 5.97 | 0.977 ± 0.03 | 0.970 ± 0.04 |
| | | Fproi-GAN | | | |
| LGG | Whole image | DCGAN | 23.093 ± 4.71 | 0.895 ± 0.11 | 0.908 ± 0.06 |
| | | Pix2Pix | 25.912 ± 4.95 | 0.933 ± 0.09 | 0.945 ± 0.07 |
| | | cycleGAN | 28.045 ± 4.47 | 0.958 ± 0.08 | 0.966 ± 0.03 |
| | | Fproi-GAN | | | |
| | Tumor region | DCGAN | 25.809 ± 4.39 | 0.892 ± 0.09 | 0.911 ± 0.07 |
| | | Pix2Pix | 30.228 ± 5.28 | 0.939 ± 0.08 | 0.948 ± 0.07 |
| | | cycleGAN | 29.192 ± 7.22 | 0.993 ± 0.01 | 0.983 ± 0.06 |
| | | Fproi-GAN | | | |
Results of the quantitative evaluation of the ROI images of the BRATS 2017 dataset (mean ± standard deviation) with a significance level of 0.05; underline indicates that the Fproi-GAN is statistically significantly different from other methods.
| Data | Methods | Dice coefficient | Hausdorff distance |
|---|---|---|---|
| HGG | DCGAN | 0.808 ± 0.29 | 8.36 ± 5.66 |
| | Pix2Pix | 0.876 ± 0.23 | 7.54 ± 5.90 |
| | cycleGAN | 0.931 ± 0.18 | 5.15 ± 3.03 |
| | Fproi-GAN | | |
| LGG | DCGAN | 0.889 ± 0.26 | 7.83 ± 4.84 |
| | Pix2Pix | 0.947 ± 0.23 | 6.25 ± 3.12 |
| | cycleGAN | 0.984 ± 0.21 | 4.66 ± 2.33 |
| | Fproi-GAN | | |
Figure 7 Comparison of Fproi-GAN with the three other synthesis methods on the BRATS 2017 dataset, where III is the visual performance in ITK-SNAP. (a) Input image (HGG). (b) DCGAN. (c) Pix2Pix. (d) cycleGAN. (e) Fproi-GAN. (f) Input image (LGG). (g) DCGAN. (h) Pix2Pix. (i) cycleGAN. (j) Fproi-GAN.
Figure 8 Image distribution results and grayscale trends of the four synthesis methods under HGG and LGG, where Fp(roi)-GAN represents Fproi-GAN. (a) HGG. (b) LGG.
Classification results.
| Data | Methods | AUC | Acc | Sen | Spe |
|---|---|---|---|---|---|
| BRATS2017 | Resnet + SVM | 0.872 | 0.789 | 0.823 | 0.778 |
| BRATS2017 + DCGAN | | 0.881 | 0.803 | 0.720 | 0.831 |
| BRATS2017 + Pix2Pix | | 0.894 | 0.815 | 0.857 | 0.855 |
| BRATS2017 + cycleGAN | | 0.928 | 0.855 | 0.910 | 0.843 |
| BRATS2017 + Fproi-GAN | | | | | |
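The Acc, Sen, and Spe columns follow directly from confusion-matrix counts on the test set. A minimal sketch with illustrative counts (not the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from
    confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sen = tp / (tp + fn)   # true-positive rate
    spe = tn / (tn + fp)   # true-negative rate
    return acc, sen, spe

# Toy example: 80 of 100 positives and 90 of 100 negatives classified correctly.
acc, sen, spe = classification_metrics(tp=80, fp=10, tn=90, fn=20)
print(acc, sen, spe)  # 0.85 0.8 0.9
```

AUC, by contrast, is threshold-free: it integrates the ROC curve (sensitivity vs. 1 - specificity) over all decision thresholds, which is why it can rank models differently than accuracy alone.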
Figure 9 ROC plot of the classification experiment. (a) BRATS 2017. (b) BRATS 2017 + DCGAN. (c) BRATS 2017 + Pix2Pix. (d) BRATS 2017 + cycleGAN. (e) BRATS 2017 + Fproi-GAN.