Hassan Muhammad1,2, Thomas J Fuchs1,2,3,4, Nicole De Cuir5,6, Carlos G De Moraes7, Dana M Blumberg7, Jeffrey M Liebmann7, Robert Ritch8, Donald C Hood5,7. 1. Department of Physiology, Biophysics, and Systems Biology, Weill Cornell Medicine. 2. Department of Medical Physics, 3. Department of Computational Biology, and 4. Department of Pathology, Memorial Sloan Kettering Cancer Center. 5. Department of Psychology, Columbia University. 6. College of Physicians and Surgeons, Columbia University. 7. Department of Ophthalmology, Columbia University. 8. Einhorn Clinical Research Center, New York Eye and Ear Infirmary of Mount Sinai, New York, NY.
Abstract
PURPOSE: Existing summary statistics based upon optical coherence tomographic (OCT) scans and/or visual fields (VFs) are suboptimal for distinguishing between healthy and glaucomatous eyes in the clinic. This study evaluates the extent to which a hybrid deep learning method (HDLM), combined with a single wide-field OCT protocol, can distinguish eyes previously classified as either healthy/suspect or mildly glaucomatous.

METHODS: In total, 102 eyes from 102 patients, with open-angle glaucoma or suspected of having glaucoma, had previously been classified by 2 glaucoma experts as either glaucomatous (57 eyes) or healthy/suspect (45 eyes). The HDLM had access only to information from a single, wide-field (9×12 mm) swept-source OCT scan per patient. Convolutional neural networks were used to extract rich features from maps derived from these scans. A random forest classifier was then trained on these features to predict the presence of glaucomatous damage. The algorithm was compared against traditional OCT and VF metrics.

RESULTS: The accuracy of the HDLM ranged from 63.7% to 93.1%, depending upon the input map. The retinal nerve fiber layer probability map had the best accuracy (93.1%), with 4 false positives and 3 false negatives. In comparison, the accuracy of the OCT and 24-2 and 10-2 VF metrics ranged from 66.7% to 87.3%. The OCT quadrants analysis had the best accuracy (87.3%) of these metrics, with 4 false positives and 9 false negatives.

CONCLUSIONS: The HDLM protocol outperforms standard OCT and VF clinical metrics in distinguishing healthy/suspect eyes from eyes with early glaucoma. It should be possible to improve this algorithm further, and with such improvement it may prove useful for screening.
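The two-stage pipeline described in the METHODS (a convolutional network extracting features from OCT-derived maps, followed by a random forest classifier predicting glaucomatous damage) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the CNN stage is stubbed with a fixed linear projection, and the data, map dimensions, feature count, and cross-validation protocol are all assumptions made for the sake of a self-contained example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative dimensions: 102 eyes (as in the study), with map size
# and feature count chosen arbitrarily for this sketch.
N_EYES, MAP_H, MAP_W, N_FEATURES = 102, 64, 64, 128

# Synthetic stand-ins for OCT-derived maps (e.g., RNFL probability
# maps) and the expert labels (1 = glaucomatous, 0 = healthy/suspect).
maps = rng.normal(size=(N_EYES, MAP_H, MAP_W))
labels = rng.integers(0, 2, size=N_EYES)

# Stub for the CNN feature extractor: a fixed random projection of the
# flattened map. In the real method this would be a trained network.
projection = rng.normal(size=(MAP_H * MAP_W, N_FEATURES))

def extract_features(one_map):
    """Placeholder for CNN feature extraction from a single map."""
    return one_map.reshape(-1) @ projection

X = np.stack([extract_features(m) for m in maps])  # (102, 128)

# Stage 2: random forest classifier trained on the extracted features,
# evaluated here with 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

With random maps and labels, the cross-validated accuracy hovers near chance; the point of the sketch is only the shape of the hybrid pipeline, where a learned feature extractor feeds a classical classifier.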