Yuelin Wang1,2, Miao Yu3, Bojie Hu4, Xuemin Jin5, Yibin Li6,7, Xiao Zhang1,2, Yongpeng Zhang7, Di Gong8, Chan Wu1,2, Bilei Zhang1,2, Jingyuan Yang1,2, Bing Li1,2, Mingzhen Yuan1,2, Bin Mo7, Qijie Wei9, Jianchun Zhao9, Dayong Ding9, Jingyun Yang10, Xirong Li11, Weihong Yu1,2, Youxin Chen1,2. 1. Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China. 2. Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China. 3. Department of Endocrinology, Key Laboratory of Endocrinology, National Health Commission, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China. 4. Department of Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin, China. 5. Department of Ophthalmology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China. 6. Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China. 7. Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China. 8. Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China. 9. Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China. 10. Department of Neurological Sciences, Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA. 11. Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing, China.
Abstract
AIMS: To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning-based lesion detection and stage grading. MATERIALS AND METHODS: A set of 12,252 eligible fundus images of diabetic patients was manually annotated by 45 licenced ophthalmologists and randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as an external test set. For automated referable DR identification, four deep learning models were built, differing in whether each of two information sources was included: DR-related lesions and DR stages. Sensitivity, specificity and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection. RESULTS: Adding lesion information to the five-stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%) and specificity (80.7% vs. 78.5%) of the model for identifying referable DR in the internal test set. Adding stage information to the lesion-based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) of the model for identifying referable DR in the internal test set. Similar trends were also seen in the external test set. DR lesion types with high precision were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots and fibrous proliferation. CONCLUSIONS: The automated model described herein employed DR lesion and stage information to identify referable DR and displayed better diagnostic value than models built without this information.
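The abstract reports sensitivity, specificity, precision, recall and AUC for the binary task of flagging referable DR. The following is a minimal illustrative sketch (not the authors' evaluation code) of how these metrics are defined for binary labels, with AUC computed via its rank interpretation (the probability that a randomly chosen referable image is scored higher than a randomly chosen non-referable one); all example data are hypothetical.

```python
# Illustrative metric definitions for binary referable-DR classification.
# Convention: 1 = referable DR, 0 = non-referable. Example data are hypothetical.

def confusion_counts(y_true, y_pred):
    """Return (TP, FP, TN, FN) for binary ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    # Recall for the referable class: fraction of referable eyes caught.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of non-referable eyes correctly passed over.
    return tn / (tn + fp)

def precision(tp, fp):
    # Fraction of referral flags that were truly referable.
    return tp / (tp + fp)

def auc(scores_pos, scores_neg):
    # Rank-based AUC: share of (positive, negative) pairs where the
    # positive scores higher; ties count as half a win.
    wins = sum(1.0 if sp > sn else 0.5 if sp == sn else 0.0
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

In this framing, the lesion-only model's high sensitivity deficit (76.7% vs. 90.6%) corresponds to a larger FN count, while the specificity gain from adding lesion information (80.7% vs. 78.5%) corresponds to fewer FP referrals.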