Zhe Xu, Jiangpeng Yan, Jie Luo, Xiu Li, Jayender Jagadeesan.
Abstract
Multimodal image registration (MIR) is a fundamental procedure in many image-guided therapies. Recently, unsupervised learning-based methods have demonstrated promising accuracy and efficiency in deformable image registration. However, the estimated deformation fields of existing methods rely entirely on the to-be-registered image pair. It is difficult for such networks to be aware of mismatched boundaries, resulting in unsatisfactory organ boundary alignment. In this paper, we propose a novel multimodal registration framework that leverages the deformation fields estimated from both (i) the original to-be-registered image pair and (ii) their corresponding gradient intensity maps, and adaptively fuses them with the proposed gated fusion module. With the help of auxiliary gradient-space guidance, the network can concentrate more on the spatial relationship of the organ boundary. Experimental results on two clinically acquired CT-MRI datasets demonstrate the effectiveness of the proposed approach.
Keywords: Multimodal image registration; gradient guidance; unsupervised registration
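The record does not include the paper's implementation of the gated fusion module, but the abstract's description — adaptively fusing a deformation field from the image pair with one from the gradient intensity maps — suggests a per-voxel convex combination controlled by a learned gate. A minimal illustrative sketch (the function names and the constant gate input are assumptions, not the authors' code; in the actual framework the gate would be predicted by a sub-network):

```python
import numpy as np

def sigmoid(x):
    """Numerically standard logistic function, mapping logits to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(phi_img, phi_grad, gate_logits):
    """Adaptively blend two dense 2-D deformation fields.

    phi_img:     field estimated from the original image pair, shape (H, W, 2)
    phi_grad:    field estimated from the gradient intensity maps, shape (H, W, 2)
    gate_logits: per-voxel gate logits, shape (H, W, 1); a stand-in for the
                 output of a learned gating sub-network.
    """
    g = sigmoid(gate_logits)              # gate weights in (0, 1)
    return g * phi_img + (1.0 - g) * phi_grad

# Usage: with zero logits the gate is 0.5, so the fused field is the
# voxel-wise average of the two input fields.
phi_a = np.ones((4, 4, 2))
phi_b = np.zeros((4, 4, 2))
fused = gated_fusion(phi_a, phi_b, np.zeros((4, 4, 1)))
```

With a learned gate, the network can weight the gradient-guided field more heavily near organ boundaries and the intensity-based field in homogeneous regions, which is consistent with the abstract's motivation.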
Year: 2021 PMID: 34366715 PMCID: PMC8340619 DOI: 10.1109/icassp39728.2021.9414320
Source DB: PubMed Journal: Proc IEEE Int Conf Acoust Speech Signal Process ISSN: 1520-6149