
BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer's disease diagnosis.

Jin Zhang1, Xiaohai He2, Linbo Qing1, Feng Gao3, Bin Wang1.   

Abstract

BACKGROUND AND OBJECTIVE: Multi-modal medical images, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), are widely used to diagnose brain disorders such as Alzheimer's disease (AD) because they provide complementary information. PET scans can detect cellular changes in organs and tissues earlier than MRI. Unlike MRI, however, PET data are difficult to acquire because of cost, radiation exposure, and other limitations, and PET scans are missing for many subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. To address this problem, a 3D end-to-end generative adversarial network (named BPGAN) is proposed to synthesize brain PET from MRI scans, which can serve as a potential data-completion scheme for multi-modal medical image research.
METHODS: We propose BPGAN, which learns an end-to-end mapping that transforms input MRI scans into their underlying PET scans. First, we design a 3D multiple convolution U-Net (MCU) generator architecture to improve the visual quality of the synthetic results while preserving the diverse brain structures of different subjects. By further employing a 3D gradient profile (GP) loss and a structural similarity index measure (SSIM) loss, the synthetic PET scans achieve higher similarity to the ground truth. In this study, we also explore alternative data-partitioning strategies to study their impact on the performance of the proposed method in different medical scenarios.
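The abstract does not give the loss formulas; as an illustrative sketch only (not the authors' implementation), a 3D gradient-matching term in the spirit of the GP loss can be approximated by comparing finite-difference gradients of the real and synthetic volumes along each spatial axis:

```python
import numpy as np

def gradient_profile_loss(real, synth):
    """Illustrative 3D gradient-matching loss (an assumption, not the
    paper's exact GP loss): mean L1 difference of finite-difference
    gradients along each of the three spatial axes."""
    loss = 0.0
    for axis in range(3):
        g_real = np.diff(real, axis=axis)   # gradient of ground-truth PET
        g_synth = np.diff(synth, axis=axis) # gradient of synthetic PET
        loss += float(np.mean(np.abs(g_real - g_synth)))
    return loss / 3.0
```

In a full training setup, such a term would typically be added to the adversarial and SSIM losses with scalar weights; the weighting here is left out because the abstract does not specify it.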
RESULTS: We conduct experiments on the publicly available ADNI database. The proposed BPGAN is evaluated by mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and SSIM, and it outperforms the compared models on these quantitative metrics. Qualitative evaluations also validate the effectiveness of our approach. Additionally, when MRI is combined with our synthetic PET scans, the accuracies of multi-class AD diagnosis on dataset-A and dataset-B are 85.00% and 56.47%, respectively, each an improvement of about 1% over stand-alone MRI.
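Two of the reported evaluation metrics, MAE and PSNR, have standard definitions that can be sketched directly (SSIM is omitted here because its windowed computation is more involved; `data_range=1.0` assumes intensities normalized to [0, 1]):

```python
import numpy as np

def mae(real, synth):
    """Mean absolute error between two image volumes."""
    return float(np.mean(np.abs(real - synth)))

def psnr(real, synth, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum
    possible intensity value (assumed 1.0 for normalized volumes)."""
    mse = float(np.mean((real - synth) ** 2))
    if mse == 0.0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Lower MAE and higher PSNR indicate that a synthetic PET volume is closer to its ground truth, which is the direction of improvement the paper reports for BPGAN.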
CONCLUSIONS: The quantitative measures, qualitative displays, and classification evaluation demonstrate that the PET images synthesized by BPGAN are reasonable and of high quality, providing complementary information that improves the performance of AD diagnosis. This work provides a valuable reference for multi-modal medical image analysis.
Copyright © 2022. Published by Elsevier B.V.

Keywords:  Alzheimer’s disease; Generative adversarial networks; MRI; Medical imaging synthesis; PET

Year:  2022        PMID: 35167997     DOI: 10.1016/j.cmpb.2022.106676

Source DB:  PubMed          Journal:  Comput Methods Programs Biomed        ISSN: 0169-2607            Impact factor:   5.428


  1 in total

1.  Associations of multiple visual rating scales based on structural magnetic resonance imaging with disease severity and cerebrospinal fluid biomarkers in patients with Alzheimer's disease.

Authors:  Mei-Dan Wan; Hui Liu; Xi-Xi Liu; Wei-Wei Zhang; Xue-Wen Xiao; Si-Zhe Zhang; Ya-Ling Jiang; Hui Zhou; Xin-Xin Liao; Ya-Fang Zhou; Bei-Sha Tang; Jun-Ling Wang; Ji-Feng Guo; Bin Jiao; Lu Shen
Journal:  Front Aging Neurosci       Date:  2022-07-29       Impact factor: 5.702
