| Literature DB >> 34428138 |
Shengye Hu, Baiying Lei, Shuqiang Wang, Yong Wang, Zhiguang Feng, Yanyan Shen.
Abstract
Fusing multi-modality medical images, such as magnetic resonance (MR) imaging and positron emission tomography (PET), can provide various anatomical and functional information about the human body. However, PET data are not always available, owing to high cost, radiation hazard, and other limitations. This paper proposes a 3D end-to-end synthesis network called Bidirectional Mapping Generative Adversarial Networks (BMGAN), in which image contexts and latent vectors are effectively used for brain MR-to-PET synthesis. Specifically, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into the high-dimensional latent space. Moreover, a 3D Dense-UNet generator architecture and hybrid loss functions are constructed to improve the visual quality of the cross-modality synthetic images. Notably, the proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms competitive methods in terms of quantitative measures, qualitative displays, and classification evaluation metrics.
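The abstract mentions "hybrid loss functions" for the generator but does not spell out the terms. A minimal sketch, assuming the common pairing of a non-saturating adversarial term with an L1 reconstruction term (the weights `lambda_adv` and `lambda_l1` and all function names below are illustrative assumptions, not the paper's actual formulation):

```python
import math

def l1_loss(pred, target):
    # Mean absolute voxel-wise error between synthetic and real PET intensities.
    assert len(pred) == len(target) and len(pred) > 0
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def adversarial_loss(disc_scores_on_fake):
    # Non-saturating generator loss: -mean(log D(G(x))).
    # disc_scores_on_fake are discriminator outputs in (0, 1] for synthetic images.
    return -sum(math.log(max(s, 1e-12)) for s in disc_scores_on_fake) / len(disc_scores_on_fake)

def hybrid_generator_loss(pred, target, disc_scores, lambda_adv=1.0, lambda_l1=100.0):
    # Weighted sum of the adversarial and reconstruction terms; weights are
    # hypothetical and would be tuned per dataset in practice.
    return lambda_adv * adversarial_loss(disc_scores) + lambda_l1 * l1_loss(pred, target)
```

In practice each term would operate on 3D tensors in a deep-learning framework; the pure-Python version above only shows how the terms combine.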
Year: 2021 PMID: 34428138 DOI: 10.1109/TMI.2021.3107013
Source DB: PubMed Journal: IEEE Trans Med Imaging ISSN: 0278-0062 Impact factor: 10.048