| Literature DB >> 30854517 |
Lei Xiang, Yang Li, Weili Lin, Qian Wang, Dinggang Shen.
Abstract
Cross-modality synthesis converts an input image of one modality to an output image of another modality, and is thus valuable for both scientific research and clinical applications. Most existing cross-modality synthesis methods require a large dataset of paired images for training, yet it is often non-trivial to acquire perfectly aligned images of different modalities for the same subject. Even tiny misalignment (e.g., due to patient/organ motion) between the cross-modality paired images may adversely impact training and corrupt the synthesized images. In this paper, we present a novel method for cross-modality image synthesis by training with unpaired data. Specifically, we adopt generative adversarial networks and conduct fast training in a cyclic way. A new structural dissimilarity loss, which captures detailed anatomies, is introduced to enhance the quality of the synthesized images. We validate the proposed algorithm on three popular image synthesis tasks, including brain MR-to-CT, prostate MR-to-CT, and brain 3T-to-7T. The experimental results demonstrate that the proposed method achieves good synthesis performance using unpaired data only.
Entities:
Year: 2018 PMID: 30854517 PMCID: PMC6407421 DOI: 10.1007/978-3-030-00889-5_18
Source DB: PubMed Journal: Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018)
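The combination of a cycle-consistency objective with a structural dissimilarity term described in the abstract could be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the single-window SSIM, the `(1 - SSIM)/2` dissimilarity form, and the loss weights `lam_cyc`/`lam_dssim` are all assumptions for exposition (practical SSIM uses local Gaussian windows, and the paper's exact loss weighting is not given in the abstract).

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (single-window) SSIM for images scaled to [0, 1].

    A simplification of the windowed SSIM used in practice; c1, c2
    are the standard stabilization constants for unit dynamic range.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def cycle_loss(real, reconstructed, lam_cyc=10.0, lam_dssim=1.0):
    """Cycle-consistency loss with a structural dissimilarity term.

    Combines an L1 reconstruction penalty (as in cycle-consistent GAN
    training) with DSSIM = (1 - SSIM) / 2, which penalizes loss of
    detailed anatomical structure. Weights are illustrative only.
    """
    l1 = np.abs(real - reconstructed).mean()
    dssim = (1.0 - ssim(real, reconstructed)) / 2.0
    return lam_cyc * l1 + lam_dssim * dssim
```

In a full cyclic setup, `reconstructed` would be the output of mapping an image to the other modality and back (e.g. MR → synthetic CT → reconstructed MR), so the loss can be computed without any paired cross-modality ground truth.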