Literature DB >> 31233461

Evaluation of a Deep Learning System for Identifying Glaucomatous Optic Neuropathy Based on Color Fundus Photographs.

Lama A Al-Aswad1, Rahul Kapoor1, Chia-Kai Chu1, Stephen Walters1, Dan Gong1, Aakriti Garg1, Kalashree Gopal1, Vipul Patel1, Sameer Trikha2,3, Thomas W Rogers2, Nicolas Jaccard2, C Gustavo De Moraes1, Golnaz Moazami1.

Abstract

PRECIS: Pegasus outperformed 5 of the 6 ophthalmologists in terms of diagnostic performance, and there was no statistically significant difference between the deep learning system and the "best case" consensus among the ophthalmologists. The agreement between Pegasus and the gold standard was 0.715, whereas the highest ophthalmologist agreement with the gold standard was 0.613. Furthermore, the high sensitivity of Pegasus makes it a valuable tool for screening patients for glaucomatous optic neuropathy.
PURPOSE: The purpose of this study was to evaluate the performance of a deep learning system for the identification of glaucomatous optic neuropathy.
MATERIALS AND METHODS: Six ophthalmologists and the deep learning system, Pegasus, graded 110 color fundus photographs in this retrospective single-center study. Patient images were randomly sampled from the Singapore Malay Eye Study. Ophthalmologists and Pegasus were compared with each other and with the original clinical diagnosis given by the Singapore Malay Eye Study, which was defined as the gold standard. Pegasus' performance was also compared with the "best case" consensus scenario, that is, the combination of ophthalmologists whose consensus opinion most closely matched the gold standard. The performance of the ophthalmologists and Pegasus at the binary classification of nonglaucoma versus glaucoma from fundus photographs was assessed in terms of sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC), and the intraobserver and interobserver agreements were determined.
RESULTS: Pegasus achieved an AUROC of 92.6%, compared with ophthalmologist AUROCs that ranged from 69.6% to 84.9% and the "best case" consensus scenario AUROC of 89.1%. Pegasus had a sensitivity of 83.7% and a specificity of 88.2%, whereas the ophthalmologists' sensitivity ranged from 61.3% to 81.6% and specificity ranged from 80.0% to 94.1%. The agreement between Pegasus and the gold standard was 0.715, whereas the highest ophthalmologist agreement with the gold standard was 0.613. Intraobserver agreement ranged from 0.62 to 0.97 for the ophthalmologists and was perfect (1.00) for Pegasus. The deep learning system took approximately 10% of the time taken by the ophthalmologists to determine a classification.
CONCLUSIONS: Pegasus outperformed 5 of the 6 ophthalmologists in terms of diagnostic performance, and there was no statistically significant difference between the deep learning system and the "best case" consensus among the ophthalmologists. The high sensitivity of Pegasus makes it a valuable tool for screening patients for glaucomatous optic neuropathy. Future work will extend this study to a larger sample of patients.
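
The abstract reports sensitivity, specificity, AUROC, and observer agreement (values such as 0.715 are consistent with Cohen's kappa, though the abstract does not name the statistic). As a minimal pure-Python sketch of how these binary-classification metrics are computed, assuming kappa for the agreement measure; the example labels and scores below are hypothetical, not study data:

```python
# Sketch of the evaluation metrics reported in the abstract: sensitivity,
# specificity, observer agreement (assumed Cohen's kappa), and AUROC.
# Labels: 1 = glaucoma, 0 = nonglaucoma. Example data are hypothetical.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between two binary raters."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    n = tp + tn + fp + fn
    p_observed = (tp + tn) / n
    # Expected agreement by chance, from each rater's marginal rates.
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

def auroc(y_true, scores):
    """Probability a random positive outscores a random negative (ties count half)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical grading of 8 fundus photographs:
truth  = [1, 1, 1, 0, 0, 0, 1, 0]
grades = [1, 1, 0, 0, 0, 1, 1, 0]
print(sensitivity_specificity(truth, grades))  # → (0.75, 0.75)
print(cohens_kappa(truth, grades))             # → 0.5
print(auroc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```

The pairwise-comparison form of AUROC used here is equivalent to the area under the ROC curve for a finite sample; a screening-oriented operating point, as discussed for Pegasus, would favor the high-sensitivity region of that curve.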


Year:  2019        PMID: 31233461     DOI: 10.1097/IJG.0000000000001319

Source DB:  PubMed          Journal:  J Glaucoma        ISSN: 1057-0829            Impact factor:   2.290


Citing articles:  8 in total

Review 1.  The use of deep learning technology for the detection of optic neuropathy.

Authors:  Mei Li; Chao Wan
Journal:  Quant Imaging Med Surg       Date:  2022-03

2.  Artificial Intelligence for Glaucoma: Creating and Implementing Artificial Intelligence for Disease Detection and Progression.

Authors:  Lama A Al-Aswad; Rithambara Ramachandran; Joel S Schuman; Felipe Medeiros; Malvina B Eydelman
Journal:  Ophthalmol Glaucoma       Date:  2022-02-24

3.  Modeling and mitigating human annotations to design processing systems with human-in-the-loop machine learning for glaucomatous defects: The future in artificial intelligence.

Authors:  Prasanna V Ramesh; Shruthy V Ramesh; K Aji; Prajnya Ray; S Tamilselvan; Sathyan Parthasarathi; Meena Kumari Ramesh; Ramesh Rajasekaran
Journal:  Indian J Ophthalmol       Date:  2021-10       Impact factor: 2.969

Review 4.  Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis.

Authors:  Ravi Aggarwal; Viknesh Sounderajah; Guy Martin; Daniel S W Ting; Alan Karthikesalingam; Dominic King; Hutan Ashrafian; Ara Darzi
Journal:  NPJ Digit Med       Date:  2021-04-07

5.  Real-Time Mobile Teleophthalmology for the Detection of Eye Disease in Minorities and Low Socioeconomics At-Risk Populations.

Authors:  Lama A Al-Aswad; Cansu Yuksel Elgin; Vipul Patel; Deborah Popplewell; Kalashree Gopal; Dan Gong; Zach Thomas; Devon Joiner; Cha-Kai Chu; Stephen Walters; Maya Ramachandran; Rahul Kapoor; Maribel Rodriguez; Jennifer Alcantara-Castillo; Gladys E Maestre; Joseph H Lee; Golnaz Moazami
Journal:  Asia Pac J Ophthalmol (Phila)       Date:  2021 Sep-Oct 01

6.  Utilizing human intelligence in artificial intelligence for detecting glaucomatous fundus images using human-in-the-loop machine learning.

Authors:  Prasanna Venkatesh Ramesh; Tamilselvan Subramaniam; Prajnya Ray; Aji Kunnath Devadas; Shruthy Vaishali Ramesh; Sheik Mohamed Ansar; Meena Kumari Ramesh; Ramesh Rajasekaran; Sathyan Parthasarathi
Journal:  Indian J Ophthalmol       Date:  2022-04       Impact factor: 2.969

7.  A deep learning approach in diagnosing fungal keratitis based on corneal photographs.

Authors:  Ming-Tse Kuo; Benny Wei-Yun Hsu; Yu-Kai Yin; Po-Chiung Fang; Hung-Yin Lai; Alexander Chen; Meng-Shan Yu; Vincent S Tseng
Journal:  Sci Rep       Date:  2020-09-02       Impact factor: 4.379

8.  Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation.

Authors:  Sejong Oh; Yuli Park; Kyong Jin Cho; Seong Jae Kim
Journal:  Diagnostics (Basel)       Date:  2021-03-13
