Tiarnan D Keenan1, Shazia Dharssi2, Yifan Peng3, Qingyu Chen3, Elvira Agrón1, Wai T Wong4, Zhiyong Lu5, Emily Y Chew6. 1. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland. 2. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland. 3. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland. 4. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland; Unit on Microglia, National Eye Institute, National Institutes of Health, Bethesda, Maryland. 5. National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland. Electronic address: zhiyong.lu@nih.gov. 6. Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland. Electronic address: echew@nei.nih.gov.
Abstract
PURPOSE: To assess the utility of deep learning in the detection of geographic atrophy (GA) from color fundus photographs and to explore potential utility in detecting central GA (CGA). DESIGN: A deep learning model was developed to detect the presence of GA in color fundus photographs, and 2 additional models were developed to detect CGA in different scenarios. PARTICIPANTS: A total of 59 812 color fundus photographs from longitudinal follow-up of 4582 participants in the Age-Related Eye Disease Study (AREDS) dataset. Gold standard labels were provided by human expert reading center graders using a standardized protocol. METHODS: A deep learning model was trained to use color fundus photographs to predict GA presence from a population of eyes ranging from no AMD to advanced AMD. A second model was trained to predict CGA presence from the same population. A third model was trained to predict CGA presence from the subset of eyes with GA. For training and testing, 5-fold cross-validation was used. For comparison with human clinician performance, model performance was compared with that of 88 retinal specialists. MAIN OUTCOME MEASURES: Area under the curve (AUC), accuracy, sensitivity, specificity, and precision. RESULTS: The deep learning models (GA detection, CGA detection from all eyes, and centrality detection from GA eyes) had AUCs of 0.933-0.976, 0.939-0.976, and 0.827-0.888, respectively. The GA detection model had accuracy, sensitivity, specificity, and precision of 0.965 (95% confidence interval [CI], 0.959-0.971), 0.692 (0.560-0.825), 0.978 (0.970-0.985), and 0.584 (0.491-0.676), respectively, compared with 0.975 (0.971-0.980), 0.588 (0.468-0.707), 0.982 (0.978-0.985), and 0.368 (0.230-0.505) for the retinal specialists. The CGA detection model had values of 0.966 (0.957-0.975), 0.763 (0.641-0.885), 0.971 (0.960-0.982), and 0.394 (0.341-0.448).
The centrality detection model had values of 0.762 (0.725-0.799), 0.782 (0.618-0.945), 0.729 (0.543-0.916), and 0.799 (0.710-0.888). CONCLUSIONS: A deep learning model demonstrated high accuracy for the automated detection of GA. The AUC was noninferior to that of human retinal specialists. Deep learning approaches may also be applied to the identification of CGA. The code and pretrained models are publicly available at https://github.com/ncbi-nlp/DeepSeeNet. Published by Elsevier Inc.
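The main outcome measures above (AUC, accuracy, sensitivity, specificity, precision, with 95% CIs) can be illustrated with a short, self-contained sketch. The labels and scores below are synthetic, not AREDS data, and the `auc_score` helper is a plain Mann-Whitney implementation written for illustration, not code from the DeepSeeNet repository; the bootstrap CI is a standard percentile interval, one plausible way such CIs are obtained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: 1 = GA present, 0 = GA absent.
y_true = rng.integers(0, 2, size=1000)
# Scores correlate with labels, mimicking an imperfect classifier.
y_score = np.clip(0.35 * y_true + rng.normal(0.35, 0.2, size=1000), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)

def auc_score(y_true, y_score):
    # AUC = P(random positive scores above random negative), ties count half.
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Confusion-matrix cells.
tp = int(((y_pred == 1) & (y_true == 1)).sum())
tn = int(((y_pred == 0) & (y_true == 0)).sum())
fp = int(((y_pred == 1) & (y_true == 0)).sum())
fn = int(((y_pred == 0) & (y_true == 1)).sum())

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)   # recall on GA-positive eyes
specificity = tn / (tn + fp)   # recall on GA-negative eyes
precision = tp / (tp + fp)     # positive predictive value
auc = auc_score(y_true, y_score)

# Bootstrap 95% CI for accuracy (percentile method).
boot = []
for _ in range(200):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    boot.append((y_pred[idx] == y_true[idx]).mean())
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

Sensitivity and precision diverge when positives are rare, which is why the abstract reports both: GA prevalence is low in the AREDS population, so a model can reach high accuracy and specificity while sensitivity and precision remain the more discriminating measures.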