
MRI image synthesis with dual discriminator adversarial learning and difficulty-aware attention mechanism for hippocampal subfields segmentation.

Baoqiang Ma, Yan Zhao, Yujing Yang, Xiaohui Zhang, Xiaoxi Dong, Debin Zeng, Siyu Ma, Shuyu Li.

Abstract

BACKGROUND AND OBJECTIVE: Hippocampal subfields (HS) segmentation accuracy is higher on high-resolution (HR) MRI images than on low-resolution (LR) MRI images. However, HR MRI data collection is more expensive and time-consuming. We therefore aim to generate HR MRI images from the corresponding LR MRI images for HS segmentation.
METHODS AND RESULTS: To generate high-quality HR MRI images of the hippocampus region, we use a dual-discriminator adversarial learning model with a difficulty-aware attention mechanism in the hippocampus region (da-GAN). A local discriminator in da-GAN evaluates the visual quality of hippocampus-region voxels in the synthetic images, and the difficulty-aware attention mechanism built on this local discriminator better models the generation of hard-to-synthesize voxels in the hippocampus region. Additionally, we design a SemiDenseNet model with 3D Dense CRF post-processing and a U-Net-based model to perform HS segmentation. Experiments are conducted on the Kulaga-Yoskovitz dataset. Compared with the conditional generative adversarial network (c-GAN), the PSNR of the HR T2w images generated by our da-GAN improves by 0.406 and 0.347 in the left and right hippocampus regions, respectively. With both segmentation models, the DSC values achieved on the generated HR T1w and T2w images are higher than those achieved on the LR T1w images.
CONCLUSION: Experimental results show that the da-GAN model generates higher-quality MRI images, especially in the hippocampus region, and that the generated images improve HS segmentation accuracy.
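The difficulty-aware attention idea described above can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (the function name, the attention weight defined as one minus the local discriminator's per-voxel "realness" score, and its use to re-weight a voxel-wise L1 reconstruction loss inside a hippocampus mask are all assumptions for illustration), not the authors' implementation:

```python
import numpy as np

def difficulty_aware_l1_loss(gen_patch, real_patch, local_disc_scores, hippo_mask):
    """Re-weight a voxel-wise L1 loss by synthesis difficulty.

    Assumed convention: local_disc_scores in [0, 1] is the local
    discriminator's per-voxel realness estimate for the generated patch.
    Low scores mark hard-to-synthesize voxels, which receive larger
    attention weights (1 - score). hippo_mask restricts the loss to the
    hippocampus region, mirroring the local discriminator's focus.
    """
    attention = 1.0 - local_disc_scores                    # harder voxels -> larger weight
    weighted_err = attention * np.abs(gen_patch - real_patch) * hippo_mask
    return weighted_err.sum() / max(hippo_mask.sum(), 1)   # mean over masked voxels

# Toy 3D patch: generator output vs. ground-truth HR intensities.
rng = np.random.default_rng(0)
real = rng.random((4, 4, 4))
fake = real + 0.1                       # uniform 0.1 error everywhere
scores = np.full((4, 4, 4), 0.75)       # discriminator realness per voxel
mask = np.ones((4, 4, 4))               # whole patch inside hippocampus

loss = difficulty_aware_l1_loss(fake, real, scores, mask)
print(round(loss, 4))  # attention 0.25 * error 0.1 = 0.025
```

With a uniform error and uniform realness score the loss is simply the attention weight times the per-voxel error; in training, the attention map would vary across voxels, concentrating gradient signal on the regions the local discriminator still rejects.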
Copyright © 2020 Elsevier Ltd. All rights reserved.

Keywords:  Adversarial learning; Difficulty-aware attention; Dual discriminator; Hippocampal subfields segmentation; MRI image synthesis

Year:  2020        PMID: 33130416     DOI: 10.1016/j.compmedimag.2020.101800

Source DB:  PubMed          Journal:  Comput Med Imaging Graph        ISSN: 0895-6111            Impact factor:   4.790


  1 in total

1.  FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN).

Authors:  Farideh Bazangani; Frédéric J P Richard; Badih Ghattas; Eric Guedj
Journal:  Sensors (Basel)       Date:  2022-06-20       Impact factor: 3.847


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.