Eleftherios Fysikopoulos1,2, Maritina Rouchota1,2, Vasilis Eleftheriadis2, Christina-Anna Gatsiou2, Irinaios Pilatis2, Sophia Sarpaki2, George Loudos2, Spiros Kostopoulos1, Dimitrios Glotsos1.
Abstract
In the current work, a pix2pix conditional generative adversarial network is evaluated as a potential solution for generating adequately accurate synthesized morphological X-ray images by translating standard photographic images of mice. Such an approach would benefit 2D functional molecular imaging techniques, such as planar radioisotope and/or fluorescence/bioluminescence imaging, by providing high-resolution information for anatomical mapping, but not for diagnosis, using conventional photographic sensors. Planar functional imaging offers an efficient alternative to ex vivo biodistribution studies and/or high-end 3D molecular imaging systems, since it can be used to track new tracers and study their accumulation from the moment of injection. Superimposing functional information on an artificially produced X-ray image may enhance overall image information in such systems without added complexity and cost. The network was trained on 700 paired mouse images (photographic input/X-ray ground truth) and evaluated on a test dataset composed of 80 photographic images and 80 ground truth X-ray images. Performance metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) and Fréchet inception distance (FID) were used to quantitatively evaluate the proposed approach on the acquired dataset.
Keywords: PET; SPECT; X-ray; cGAN; deep learning; image-to-image translation; molecular preclinical imaging; pix2pix
Year: 2021 PMID: 34940729 PMCID: PMC8704599 DOI: 10.3390/jimaging7120262
Source DB: PubMed Journal: J Imaging ISSN: 2313-433X
Figure 1. Indicative optical image acquired using a conventional photographic sensor located in the BIOEMTECH eyes series radioisotope screening tools (a); indicative X-ray image acquired in a prototype PET/SPECT X-ray system (c), used as ground truth; aligned pair (b).
Figure 2. Aligned image pairs used for training. Photographic input image (left); corresponding X-ray scan used as ground truth (right).
Train and test dataset detailed characteristics.
| Mouse Color | Bed Color | Train | Test | Test/Train (%) |
|---|---|---|---|---|
| White | Black | 260 | 30 | 11.5 |
| White | White | 90 | 10 | 11.1 |
| Black | White | 260 | 30 | 11.5 |
| Black | Black | 90 | 10 | 11.1 |
Figure 3. The pix2pix generator's training layout. The generator creates an output image (y) from the input image (x) and a random noise vector (z), and improves its performance by receiving feedback from the discriminator on how fake the synthetic image (y) looks compared to the ground truth (r).
Figure 4. The cGAN discriminator's training layout. The discriminator compares the input (x)/ground truth (r) image pair and the input (x)/output (y) image pair and outputs its guess of how realistic each pair looks. The discriminator's weight vector is then updated based on the classification error on the input/output pair (D fake loss) and the input/target pair (D real loss).
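The training layouts in Figures 3 and 4 correspond to the standard pix2pix objective: an adversarial cross-entropy term plus a lambda-weighted L1 distance to the ground truth. A minimal numpy sketch of both objectives (an illustration, not the authors' implementation) looks like this:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross entropy, the cGAN adversarial loss."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(d_fake, fake_img, real_img, lam=100.0):
    """Generator objective: fool the discriminator (label 1 for the fake
    pair) plus a lambda-weighted L1 distance to the ground truth (r)."""
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = float(np.mean(np.abs(fake_img - real_img)))
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    """Discriminator objective: real pairs labelled 1, fake pairs labelled 0,
    averaged, matching the D real / D fake losses in Figure 4."""
    return 0.5 * (bce(d_real, np.ones_like(d_real))
                  + bce(d_fake, np.zeros_like(d_fake)))
```

The λ = 100 default matches the lambda value reported in the hyperparameter table; the MSE and Wasserstein variants tested in the paper would swap the `bce` term for a squared-error or critic-score term, respectively.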
Values of important adjustable training parameters and hyperparameters.
| Parameter | Value |
|---|---|
| Learning rate | 0.0002 |
| Beta 1 parameter for the optimizer (adam) | 0.5 |
| Beta 2 parameter for the optimizer (adam) | 0.999 |
| Maximum epochs | 200 |
| Lambda (λ) | 100 |
| Generator layers | 8 |
| Discriminator layers | 3 |
| Load size | 512 |
| Mini batch size | 1 |
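The optimizer settings in the table above (learning rate 0.0002, β1 = 0.5, β2 = 0.999) can be illustrated with a single bias-corrected Adam update; this is a generic sketch of the Adam rule under those hyperparameters, not the paper's training code:

```python
import numpy as np

# Hyperparameters taken from the table above; EPS is Adam's usual default.
LR, BETA1, BETA2, EPS = 2e-4, 0.5, 0.999, 1e-8

def adam_step(w, grad, m, v, t):
    """One bias-corrected Adam update for a single parameter (t >= 1)."""
    m = BETA1 * m + (1 - BETA1) * grad          # first-moment estimate
    v = BETA2 * v + (1 - BETA2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - BETA1 ** t)                # bias correction
    v_hat = v / (1 - BETA2 ** t)
    w = w - LR * m_hat / (np.sqrt(v_hat) + EPS)
    return w, m, v
```

The low β1 = 0.5 (versus the common 0.9) follows the original pix2pix recipe, which found that less momentum stabilizes adversarial training.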
Figure 5. Training loss curves of the cross-entropy, MSE and Wasserstein distance loss function models.
Figure 6. Indicative “fake” X-ray images from the pix2pix trained network using different loss functions: cross entropy (3rd column); MSE (4th column); Wasserstein distance (5th column). The input photographic images and the corresponding ground truth images are presented in the first two columns.
Metrics of the different cGAN loss functions tested.
| cGAN Loss Function | PSNR ↑ | SSIM ↑ | FID ↓ |
|---|---|---|---|
| Cross entropy | 21.923 | 0.771 | 85.428 |
| MSE | 21.954 | 0.770 | 90.824 |
| Wasserstein distance | 17.952 | 0.682 | 162.015 |
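For reference, the two pixel-level metrics in the tables can be sketched in a few lines of numpy. Note that published SSIM values are normally windowed local averages (e.g. scikit-image's implementation); the single-window global form below is a simplification for illustration:

```python
import numpy as np

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means the synthetic
    X-ray is pixel-wise closer to the ground truth."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=255.0):
    """Global (single-window) SSIM; higher is better, 1.0 for identical images."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```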
Metrics of the pix2pix Cross Entropy model on the different combinations of mouse and bed color.
| Mouse Color | Bed Color | PSNR ↑ | SSIM ↑ | FID ↓ |
|---|---|---|---|---|
| black | white | 22.808 | 0.791 | 112.948 |
| black | black | 22.894 | 0.794 | 151.006 |
| white | black | 21.196 | 0.750 | 109.116 |
| white | white | 20.270 | 0.743 | 163.056 |
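The FID values above are distribution-level distances computed on Inception-v3 features; the full metric involves a matrix square root of the covariance product. A simplified sketch assuming diagonal covariances (an illustration only, not the evaluation code used in the paper) captures the idea:

```python
import numpy as np

def fid_diagonal(feat_real, feat_fake):
    """Fréchet distance between two Gaussian fits to feature sets
    (rows = samples, columns = feature dimensions), simplified to
    diagonal covariances. Lower means more similar distributions."""
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    var1, var2 = feat_real.var(axis=0), feat_fake.var(axis=0)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
```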
Figure 7. 99mTc-MDP-labelled nuclear image of a healthy mouse fused with the optical image provided by the γ-eye scintigraphic system (left) and with the X-ray produced by the pix2pix trained network (right).
Figure 8. 18F-FDG nuclear image of a healthy mouse fused with the optical image provided by the β-eye planar coincidence imaging system (left) and with the X-ray produced by the pix2pix trained network (right).