Seisaku Komori1,2, Donna J Cross3, Megan Mills1, Yasuomi Ouchi4, Sadahiko Nishizawa5, Hiroyuki Okada5,6, Takashi Norikane7, Tanyaluck Thientunyakit8, Yoshimi Anzai1, Satoshi Minoshima1. 1. Department of Radiology and Imaging Sciences, University of Utah, 30 N. 1900 E. #1A71, Salt Lake City, UT, 84132-2140, USA. 2. Future Design Lab, New Concept Design, Global Strategic Challenge Center, Hamamatsu Photonics K.K., 5000, Hirakuchi, Hamakita-ku, Hamamatsu-City, 434-8601, Japan. 3. Department of Radiology and Imaging Sciences, University of Utah, 30 N. 1900 E. #1A71, Salt Lake City, UT, 84132-2140, USA. d.cross@utah.edu. 4. Department of Biofunctional Imaging, Hamamatsu University School of Medicine, Hamamatsu City, Japan. 5. Hamamatsu Medical Photonics Foundation, Hamamatsu, Japan. 6. Global Strategic Challenge Center, Hamamatsu Photonics K.K., Hamamatsu City, Japan. 7. Department of Radiology, Faculty of Medicine, Kagawa University, Takamatsu, Japan. 8. Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand.
Abstract
OBJECTIVE: While the use of biomarkers for the detection of early and preclinical Alzheimer's disease has become essential, the need to wait over an hour after injection to obtain sufficient image quality can be challenging for patients with suspected dementia and their caregivers. This study aimed to develop an image-based deep-learning technique to generate delayed uptake patterns of amyloid positron emission tomography (PET) images using only early-phase images obtained 0-20 min after radiotracer injection. METHODS: We prepared pairs of early and delayed [11C]PiB dynamic images from 253 patients (cognitively normal n = 32, frontotemporal dementia n = 39, mild cognitive impairment n = 19, Alzheimer's disease n = 163) as a training dataset. The neural network was trained with the early images as the input and the corresponding delayed images as the output. A U-Net convolutional neural network (CNN) and a conditional generative adversarial network (cGAN) were used as the deep-learning architecture and the data augmentation method, respectively. An independent test dataset of early-phase amyloid PET images (n = 19) was then used to generate the corresponding delayed images with the trained network. Two nuclear medicine physicians interpreted the actual and predicted delayed images for amyloid positivity. In addition, the concordance between the actual and predicted delayed images was assessed statistically. RESULTS: The concordance of amyloid positivity between the actual and AI-predicted delayed images was 79% (κ = 0.60) and 79% (κ = 0.59) for the two physicians, respectively. When the two physicians interpreted the same images, their agreement rate was 89% (κ = 0.79). Moreover, the actual and AI-predicted delayed images were not readily distinguishable (correct answer rate, 55% and 47% for each physician, respectively).
Statistical comparison of the actual versus predicted delayed images yielded a peak signal-to-noise ratio (PSNR) of 21.8 ± 2.2 dB and a structural similarity index (SSIM) of 0.45 ± 0.04. CONCLUSION: This study demonstrates the feasibility of an image-based deep-learning framework to predict delayed patterns of amyloid PET uptake using only early-phase images. This AI-based image-generation method has the potential to shorten amyloid PET scan time and increase patient throughput without sacrificing diagnostic accuracy for amyloid positivity.
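The abstract reports reader concordance as Cohen's κ and image agreement as PSNR and SSIM. The following is an illustrative sketch, not the authors' code, of how these metrics can be computed with NumPy alone; the function names, the toy reader ratings, and the simulated image pair are all hypothetical, and the SSIM here is a single-window (global) variant rather than the locally windowed SSIM typically used in practice.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two paired ratings (e.g., binary amyloid positivity calls)."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    pe = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (po - pe) / (1 - pe)

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=None):
    """Global (single-window) SSIM; standard SSIM averages this over local windows."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if data_range is None:
        data_range = x.max() - x.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Hypothetical amyloid-positivity calls by two readers on 19 test cases
reader1 = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1])
reader2 = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1])
print(f"kappa = {cohens_kappa(reader1, reader2):.2f}")

# Simulated "actual" vs "predicted" delayed images (toy data, not PET)
rng = np.random.default_rng(0)
actual = rng.random((64, 64))
predicted = actual + 0.05 * rng.standard_normal((64, 64))
print(f"PSNR = {psnr(actual, predicted):.1f} dB, SSIM = {ssim_global(actual, predicted):.2f}")
```

In practice, libraries such as scikit-image (`skimage.metrics`) and scikit-learn (`sklearn.metrics.cohen_kappa_score`) provide validated implementations of these metrics; the sketch above only makes the definitions behind the reported numbers explicit.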