
Synthesis of magnetic resonance images from computed tomography data using convolutional neural network with contextual loss function.

Zhaotong Li, Xinrui Huang, Zeru Zhang, Liangyou Liu, Fei Wang, Sha Li, Song Gao, Jun Xia.

Abstract

Background: Magnetic resonance imaging (MRI) images synthesized from computed tomography (CT) data can provide more detailed information on pathological structures than CT data alone; thus, MRI synthesis has received increased attention, especially in medical scenarios where only CT images are available. A novel convolutional neural network (CNN) combined with a contextual loss function was proposed for synthesizing T1- and T2-weighted images (T1WI and T2WI) from CT data.
Methods: A total of 5,053 T1WI slices and 5,081 T2WI slices were selected for the dataset of paired CT and MRI images. Affine registration, image denoising, and contrast enhancement were applied to this multi-modality brain image dataset comprising T1WI, T2WI, and CT images. A deep CNN was then proposed by modifying the ResNet structure to form the encoder and decoder of a U-Net, called double ResNet-U-Net (DRUNet). Three loss functions were used to optimize the parameters of the proposed models: mean squared error (MSE) loss, binary cross-entropy (BCE) loss, and contextual loss. Independent-sample t-tests were conducted to compare DRUNets with different loss functions and different numbers of network layers.
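The abstract does not give the authors' exact formulation, but the contextual loss they cite is commonly the one of Mechrez et al. (2018), which matches feature statistics between two images without requiring pixel alignment. The following is a minimal numpy sketch operating on precomputed feature vectors (e.g., flattened CNN feature-map columns); the bandwidth `h`, the epsilon, and the centering convention are assumptions, not details from the paper.

```python
import numpy as np

def contextual_loss(x_feats, y_feats, h=0.5, eps=1e-5):
    """Contextual loss between two sets of feature vectors, shape (N, C).

    Follows the recipe of Mechrez et al. (2018): cosine distances,
    row-wise min-normalization, soft matching, then -log of the score.
    """
    # Center both sets on the target mean, then L2-normalize.
    mu = y_feats.mean(axis=0, keepdims=True)
    x = x_feats - mu
    y = y_feats - mu
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)
    y = y / (np.linalg.norm(y, axis=1, keepdims=True) + eps)
    # Pairwise cosine distances between source rows and target rows.
    d = 1.0 - x @ y.T                              # shape (N, M)
    # Normalize each row by its minimum distance (relative distances).
    d_norm = d / (d.min(axis=1, keepdims=True) + eps)
    # Convert to similarities and soft-normalize each row.
    w = np.exp((1.0 - d_norm) / h)
    cx = w / w.sum(axis=1, keepdims=True)
    # For each target vector, take its best-matching source similarity.
    cx_score = cx.max(axis=0).mean()
    return -np.log(cx_score + eps)
```

Identical feature sets give a score near 1 and hence a loss near 0; mismatched sets give a strictly larger loss, which is what makes it usable as a training objective.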
Results: DRUNet-101 with contextual loss yielded the highest peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Tenengrad score (34.25±2.06, 0.97±0.03, and 17.03±2.75 for T1WI; 33.50±1.08, 0.98±0.05, and 19.76±3.54 for T2WI, respectively). The differences were statistically significant (P<0.001) with narrow confidence intervals, indicating the superiority of DRUNet-101 with contextual loss. In addition, both the zoomed images and the difference maps of the final synthetic MR images visually reflected the robustness of DRUNet-101 with contextual loss. Visualization of the convolution filters and feature maps showed that the proposed model can generate synthetic MR images with high-frequency information.
Conclusions: DRUNet-101 with the contextual loss function preserved more high-frequency information in synthetic MR images than the other two loss functions. The proposed DRUNet model has a distinct advantage over previous models in terms of PSNR, SSIM, and Tenengrad score. Overall, DRUNet-101 with contextual loss is recommended for synthesizing MR images from CT scans. 2022 Quantitative Imaging in Medicine and Surgery. All rights reserved.
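For reference, the three reported image-quality metrics can be sketched as below. This is an illustrative numpy version, not the authors' evaluation code: `global_ssim` is a simplified single-window SSIM (published evaluations typically use the windowed form), and the exact Tenengrad normalization (mean vs. sum of squared Sobel gradients) is an assumption.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def psnr(ref, test, data_range=255.0):
    # Peak signal-to-noise ratio in dB between reference and test images.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=255.0):
    # Simplified single-window SSIM; stabilizing constants c1, c2
    # follow Wang et al. (2004).
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x, y = ref.astype(np.float64), test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def tenengrad(img):
    # Sharpness score: mean squared magnitude of the Sobel gradient,
    # so images with more high-frequency detail score higher.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    win = sliding_window_view(img.astype(np.float64), (3, 3))
    gx = np.einsum("ijkl,kl->ij", win, kx)      # horizontal gradient
    gy = np.einsum("ijkl,kl->ij", win, kx.T)    # vertical gradient
    return float(np.mean(gx ** 2 + gy ** 2))
```

A higher Tenengrad score on the synthetic images is how the paper quantifies the extra high-frequency detail attributed to the contextual loss.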


Keywords:  ResNet; Synthesis of magnetic resonance imaging (synthesis of MRI); U-Net; contextual loss; radiotherapy treatment planning system (radiotherapy TPS)

Year:  2022        PMID: 35655819      PMCID: PMC9131350          DOI: 10.21037/qims-21-846

Source DB:  PubMed          Journal:  Quant Imaging Med Surg        ISSN: 2223-4306


