F. Reith, M.E. Koran, G. Davidzon, G. Zaharchuk. From the Departments of Radiology (F.R., M.E.K., G.D., G.Z.) and Nuclear Medicine (M.E.K., G.D.), Stanford University, Stanford, California. gregz@stanford.edu.
Abstract
BACKGROUND AND PURPOSE: Cortical amyloid quantification on PET using the standardized uptake value ratio is valuable for research studies and clinical trials in Alzheimer disease. However, it is resource intensive, requiring coregistered MR imaging data and specialized segmentation software. We investigated the use of deep learning to automatically quantify the standardized uptake value ratio and to use it for classification. MATERIALS AND METHODS: Using the Alzheimer's Disease Neuroimaging Initiative dataset, we identified 2582 18F-florbetapir PET scans, which were separated into positive and negative cases with a standardized uptake value ratio threshold of 1.1. We trained convolutional neural networks (ResNet-50 and ResNet-152) to predict the standardized uptake value ratio and classify amyloid status. We assessed performance as a function of network depth, number of PET input slices, and use of ImageNet pretraining. We also assessed human performance with 3 readers on a subset of 100 randomly selected cases. RESULTS: Forty-eight percent of cases were amyloid positive. The best performance was achieved by ResNet-50 using regression before classification, 3 input PET slices, and pretraining, with a standardized uptake value ratio root-mean-square error of 0.054, corresponding to 95.1% correct amyloid status prediction. Using more than 3 slices did not improve performance, but ImageNet initialization did. The best trained network was more accurate than human readers (96% versus a mean of 88%). CONCLUSIONS: Deep learning algorithms can estimate the standardized uptake value ratio and use it to classify 18F-florbetapir PET scans. Such methods promise to automate this laborious calculation, enabling rapid quantitative measurements in settings without extensive image processing manpower and expertise.
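The pipeline described in the abstract regresses a standardized uptake value ratio (SUVR) per scan and then classifies amyloid status by thresholding at 1.1, scoring regression accuracy with root-mean-square error. A minimal sketch of that thresholding and scoring step is shown below; the function names and the SUVR values are hypothetical illustrations, not data or code from the study.

```python
# Hypothetical sketch of the abstract's post-regression step:
# threshold predicted SUVR at 1.1 for amyloid status, and score
# predictions against reference SUVRs with root-mean-square error.
import math

SUVR_THRESHOLD = 1.1  # cutoff separating amyloid-negative from amyloid-positive


def classify_amyloid(suvr: float) -> bool:
    """Return True if a scan is amyloid positive (SUVR above 1.1)."""
    return suvr > SUVR_THRESHOLD


def rmse(predicted, reference):
    """Root-mean-square error between predicted and reference SUVRs."""
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted)
    )


# Toy SUVR values for illustration only (not from the study)
pred = [1.05, 1.32, 0.98, 1.18]
ref = [1.02, 1.28, 1.01, 1.15]
print([classify_amyloid(s) for s in pred])  # [False, True, False, True]
print(round(rmse(pred, ref), 3))
```

In the study itself the predicted SUVR comes from a convolutional network (ResNet-50); only the final threshold-and-score logic is sketched here.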