
Rigid and non-rigid motion artifact reduction in X-ray CT using attention module.

Youngjun Ko, Seunghyuk Moon, Jongduk Baek, Hyunjung Shim.

Abstract

Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. They become considerably more severe when the imaging system requires a long scan time, as in dental CT or cone-beam CT (CBCT), where patients produce both rigid and non-rigid motions. To address this problem, we propose a new real-time technique for motion artifact reduction that uses a deep residual network with an attention module. The attention module is designed to increase model capacity by amplifying or attenuating residual features according to their importance. We trained and evaluated the network on four benchmark datasets, containing either rigid motions or both rigid and non-rigid motions, under a step-and-shoot fan-beam CT (FBCT) or CBCT geometry. Each dataset provides pairs of motion-corrupted CT images and their ground-truth counterparts. The strong modeling power of the proposed network allows it to handle motion artifacts from both CT systems under various motion scenarios in real time, yielding clear performance benefits. In addition, we compared our model against Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Extensive analysis and comparisons on the four benchmark datasets confirm that our model outperforms these competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.
Copyright © 2020 Elsevier B.V. All rights reserved.
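The abstract describes the attention module only at a high level: it rescales residual features by learned importance weights before the identity shortcut is added back. As a purely illustrative sketch (not the authors' implementation, whose details are in the linked repository), a squeeze-style channel gate over a residual branch could look like the following; the global-average "squeeze" and the sigmoid gating are assumptions here, and a real module would learn the mapping with a small network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(channels):
    """Gate each channel (a 2-D feature map) by a weight in (0, 1)
    derived from its global average activation. A trained module
    would replace this fixed mapping with a learned bottleneck."""
    gated = []
    for fmap in channels:
        mean = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        scale = sigmoid(mean)  # per-channel importance weight
        gated.append([[v * scale for v in row] for row in fmap])
    return gated

def attention_residual_block(x, residual_fn):
    """Residual block: the residual branch is attention-weighted
    (amplified or attenuated per channel), then added back to the
    input through an identity shortcut."""
    residual = channel_attention(residual_fn(x))
    return [[[xv + rv for xv, rv in zip(xrow, rrow)]
             for xrow, rrow in zip(xch, rch)]
            for xch, rch in zip(x, residual)]
```

Because each gate lies in (0, 1), the module can only attenuate or pass residual channels in this toy form; the learned version in the paper can both amplify and attenuate features according to their importance.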

Keywords:  Attention module; CT motion artifact reduction; Deep learning; Perceptual loss; Residual block

Year:  2020        PMID: 33166775     DOI: 10.1016/j.media.2020.101883

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Related articles:  5 in total

1.  Elimination of stripe artifacts in light sheet fluorescence microscopy using an attention-based residual neural network.

Authors:  Zechen Wei; Xiangjun Wu; Wei Tong; Suhui Zhang; Xin Yang; Jie Tian; Hui Hui
Journal:  Biomed Opt Express       Date:  2022-02-07       Impact factor: 3.732

2.  Reference-free learning-based similarity metric for motion compensation in cone-beam CT.

Authors:  H Huang; J H Siewerdsen; W Zbijewski; C R Weiss; M Unberath; T Ehtiati; A Sisniega
Journal:  Phys Med Biol       Date:  2022-06-16       Impact factor: 4.174

3.  Medical Image Segmentation Algorithm for Three-Dimensional Multimodal Using Deep Reinforcement Learning and Big Data Analytics.

Authors:  Weiwei Gao; Xiaofeng Li; Yanwei Wang; Yingjie Cai
Journal:  Front Public Health       Date:  2022-04-08

4.  Motion blur invariant for estimating motion parameters of medical ultrasound images.

Authors:  Barmak Honarvar Shakibaei Asli; Yifan Zhao; John Ahmet Erkoyuncu
Journal:  Sci Rep       Date:  2021-07-12       Impact factor: 4.996

5.  Prediction of an oxygen extraction fraction map by convolutional neural network: validation of input data among MR and PET images.

Authors:  Keisuke Matsubara; Masanobu Ibaraki; Yuki Shinohara; Noriyuki Takahashi; Hideto Toyoshima; Toshibumi Kinoshita
Journal:  Int J Comput Assist Radiol Surg       Date:  2021-04-05       Impact factor: 2.924
