Zhe Xu1,2, Jie Luo2, Jiangpeng Yan1, Xiu Li3, Jagadeesan Jayender2. 1. Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China. 2. Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA. 3. Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China. li.xiu@sz.tsinghua.edu.cn.
Abstract
PURPOSE: Deformable image registration (DIR) is essential for many image-guided therapies. Recently, deep learning approaches have gained substantial popularity and success in DIR. Most deep learning approaches use the so-called mono-stream high-to-low, low-to-high network structure and can achieve satisfactory overall registration results. However, accurate alignment of some severely deformed local regions, which is crucial for pinpointing surgical targets, is often overlooked. Consequently, these approaches are not sensitive to some hard-to-align regions, e.g., intra-patient registration of deformed liver lobes. METHODS: We propose a novel unsupervised registration network, namely the full-resolution residual registration network (F3RNet), for deformable registration of severely deformed organs. The proposed method combines two parallel processing streams in a residual learning fashion. One stream takes advantage of the full-resolution information that facilitates accurate voxel-level registration. The other stream learns the deep multi-scale residual representations to obtain robust recognition. We also factorize the 3D convolution to reduce the training parameters and enhance network efficiency. RESULTS: We validate the proposed method on a clinically acquired intra-patient abdominal CT-MRI dataset and a public inspiratory and expiratory thorax CT dataset. Experiments on both multimodal and unimodal registration demonstrate promising results compared to state-of-the-art approaches. CONCLUSION: By combining the high-resolution information and multi-scale representations in a highly interactive residual learning fashion, the proposed F3RNet can achieve accurate overall and local registration. The run time for registering a pair of images is less than 3 s using a GPU. In future work, we will investigate how to cost-effectively process high-resolution information and fuse multi-scale representations.
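The abstract notes that factorizing the 3D convolution reduces the number of training parameters. As a hedged illustration (the exact factorization scheme in F3RNet is an assumption here), the sketch below compares the weight count of a standard k x k x k 3D convolution against a common factorized variant that replaces it with three orthogonal 1-D convolutions:

```python
# Minimal sketch: parameter savings from factorizing a 3D convolution
# into three sequential 1-D convolutions along depth, height, and width.
# Bias terms are ignored; this is an illustrative scheme, not necessarily
# the one used in the F3RNet paper.

def conv3d_params(c_in, c_out, k):
    """Weight count of a full k x k x k 3D convolution."""
    return c_in * c_out * k ** 3

def factorized_params(c_in, c_out, k):
    """Weight count of three stacked 1-D convolutions (k x 1 x 1,
    then 1 x k x 1, then 1 x 1 x k)."""
    return (c_in * c_out * k      # k x 1 x 1 along depth
            + c_out * c_out * k   # 1 x k x 1 along height
            + c_out * c_out * k)  # 1 x 1 x k along width

full = conv3d_params(32, 32, 3)        # 32 * 32 * 27 = 27648
fact = factorized_params(32, 32, 3)    # 3 * (32 * 32 * 3) = 9216
print(full, fact, full / fact)         # the factorized form is 3x smaller here
```

For a 3 x 3 x 3 kernel with equal input and output channels, this particular factorization cuts the weight count by a factor of k^2 / 3, i.e., 3x, at the cost of a restricted (rank-limited) set of representable filters.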
Keywords:
Deep learning; Deformable image registration; Image-guided therapy; Residual learning