| Literature DB >> 35125977 |
Haoliang Sun, Ronak Mehta, Hao H. Zhou, Zhichun Huang, Sterling C. Johnson, Vivek Prabhakaran, Vikas Singh.
Abstract
Positron emission tomography (PET) is an imaging modality used to diagnose a number of neurological diseases. In contrast to Magnetic Resonance Imaging (MRI), PET is costly and involves injecting a radioactive substance into the patient. Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data. We derive new flow-based generative models which we show perform well in this small sample size regime (much smaller than dataset sizes available in standard vision tasks). Our formulation, DUAL-GLOW, is based on two invertible networks and a relation network that maps the latent spaces to each other. We discuss how, given the prior distribution, learning the conditional distribution of PET given the MRI image reduces to obtaining the conditional distribution between the two latent codes w.r.t. the two image types. We also extend our framework to leverage "side" information (or attributes) when available. By controlling the PET generation through "conditioning" on age, our model is also able to capture brain FDG-PET (hypometabolism) changes as a function of age. We present experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with 826 subjects, and obtain good performance in PET image synthesis, qualitatively and quantitatively better than recent works.
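The core idea of the formulation — encode the MRI with one invertible network, map its latent code to the PET latent space with a relation network, then decode through the inverse of the PET-side flow — can be sketched minimally. This is a hypothetical toy illustration, not the paper's implementation: the flows here are simple elementwise affine maps on small vectors (real Glow-style networks stack coupling layers over 3-D volumes), and the relation network is reduced to a single linear layer predicting the conditional mean.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy feature dimension (assumption; real inputs are 3-D image volumes)

class AffineFlow:
    """Minimal elementwise invertible map z = exp(log_s) * x + b.
    A stand-in for a Glow-style invertible network (simplification)."""
    def __init__(self, dim, rng):
        self.log_s = 0.1 * rng.normal(size=dim)
        self.b = 0.1 * rng.normal(size=dim)

    def forward(self, x):
        return np.exp(self.log_s) * x + self.b

    def inverse(self, z):
        return (z - self.b) * np.exp(-self.log_s)

# Two invertible networks, one per modality, as in the DUAL-GLOW formulation.
f_mri = AffineFlow(D, rng)
f_pet = AffineFlow(D, rng)

# "Relation network": maps the MRI latent code to the mean of a conditional
# Gaussian over the PET latent code (here one linear map; an assumption).
W = 0.1 * rng.normal(size=(D, D))

def synthesize_pet(x_mri):
    z_mri = f_mri.forward(x_mri)      # encode MRI into its latent space
    z_pet_mean = W @ z_mri            # conditional mean of the PET latent code
    return f_pet.inverse(z_pet_mean)  # decode via the PET flow's inverse

x = rng.normal(size=D)
x_pet_hat = synthesize_pet(x)

# Invertibility sanity check: each flow reconstructs its input exactly.
assert np.allclose(f_mri.inverse(f_mri.forward(x)), x)
assert np.allclose(f_pet.inverse(f_pet.forward(x)), x)
```

Because both networks are exactly invertible, sampling or perturbing the PET latent code (e.g., shifting it along an age-conditioned direction) yields a valid image through the same inverse pass, which is what makes attribute conditioning in latent space straightforward.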
Year: 2020 PMID: 35125977 PMCID: PMC8813086 DOI: 10.1109/iccv.2019.01071
Source DB: PubMed Journal: Proc IEEE Int Conf Comput Vis ISSN: 1550-5499