Zhe Xu1,2, Jie Luo2,3, Jiangpeng Yan1, Ritvik Pulya2, Xiu Li1, William Wells2, Jayender Jagadeesan2.
Abstract
Deformable image registration between Computed Tomography (CT) and Magnetic Resonance (MR) images is essential for many image-guided therapies. In this paper, we propose a novel translation-based unsupervised deformable image registration method. Unlike other translation-based methods that convert the multimodal problem (e.g., CT-to-MR) into a unimodal one (e.g., MR-to-MR) via image-to-image translation, our method leverages the deformation fields estimated from both (i) the translated MR image and (ii) the original CT image in a dual-stream fashion, and automatically learns how to fuse them to achieve better registration performance. The multimodal registration network can be trained effectively with computationally efficient similarity metrics, without any ground-truth deformation. Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
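To make the dual-stream fusion idea concrete, the following minimal sketch shows one plausible way to combine the two estimated deformation fields with per-voxel learned weights. The function name, the softmax-weighted blending rule, and all array shapes are illustrative assumptions for exposition; the abstract does not specify the paper's actual fusion mechanism.

```python
import numpy as np

def fuse_deformation_fields(field_ct, field_trans_mr, weight_logits):
    """Blend two dense 2D deformation fields with per-voxel weights.

    field_ct:       (H, W, 2) displacement field from the original CT stream
    field_trans_mr: (H, W, 2) displacement field from the translated-MR stream
    weight_logits:  (H, W, 2) logits; a softmax over the last axis yields the
                    per-stream fusion weight at each voxel (an assumption,
                    not necessarily the paper's learned fusion rule)
    """
    # Softmax over the stream axis so the two weights sum to 1 per voxel.
    e = np.exp(weight_logits - weight_logits.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)
    # Broadcast each (H, W, 1) weight over the 2 displacement components.
    return w[..., :1] * field_ct + w[..., 1:] * field_trans_mr
```

In a learning-based registration network, `weight_logits` would be produced by a trainable fusion head and optimized end-to-end with an unsupervised similarity loss, in the spirit of the method described above.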
Keywords: Generative Adversarial Network; Multimodal Registration; Unsupervised Learning
Year: 2020 PMID: 33283210 PMCID: PMC7712495 DOI: 10.1007/978-3-030-59716-0_22
Source DB: PubMed Journal: Med Image Comput Comput Assist Interv