Qiong Chen1, Wei-Hong Yu2, Song Lin1, Bo-Shi Liu1, Yong Wang1, Qi-Jie Wei3, Xi-Xi He3, Fei Ding3,4, Gang Yang4, You-Xin Chen2, Xiao-Rong Li1, Bo-Jie Hu1. 1. Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin 300384, China. 2. Key Lab of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100032, China. 3. Vistel AI Lab, Visionary Intelligence Ltd, Beijing 100081, China. 4. School of Information, Renmin University of China, Beijing 100081, China.
Abstract
AIM: To develop artificial intelligence (AI) methods based on deep learning (DL) to assist with retinal vein occlusion (RVO) screening, easing the workload on ophthalmologists and allowing RVO to be detected and treated as early as possible. METHODS: A total of 8600 color fundus photographs (CFPs) were included for training, validation, and testing of the disease recognition and lesion segmentation models. Four disease recognition models and four lesion segmentation models were established and compared, and the best-performing model of each type was selected. Additionally, 224 CFPs from 130 patients were included as an external test set to assess the two selected models. RESULTS: With the Inception-v3 model for disease recognition, the mean sensitivity, specificity, and F1 score across the three disease types and normal CFPs were 0.93, 0.99, and 0.95, respectively, and the mean area under the curve (AUC) was 0.99. With the DeepLab-v3 model for lesion segmentation, the mean sensitivity, specificity, and F1 score across the four lesion types (abnormally dilated and tortuous blood vessels, cotton-wool spots, flame-shaped hemorrhages, and hard exudates) were 0.74, 0.97, and 0.83, respectively. CONCLUSION: DL models perform well in recognizing RVO and identifying its lesions on CFPs. Given the increasing number of RVO patients and the growing demand for trained ophthalmologists, DL models will be helpful for diagnosing RVO at an early stage and reducing vision impairment.
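To make the reported setup concrete, the sketch below shows one plausible way to configure an Inception-v3 classifier for four-class CFP recognition and to compute the per-class sensitivity, specificity, F1, and AUC cited in the abstract. This is not the authors' code: the class labels, pretrained weights, and metric implementation are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the published pipeline): a 4-class
# Inception-v3 classifier for color fundus photographs plus the per-class
# metrics reported in the abstract (sensitivity, specificity, F1, AUC).
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Assumed class set: the paper reports three disease types plus normal CFPs.
CLASSES = ["normal", "disease_1", "disease_2", "disease_3"]

# Inception-v3 expects 299x299 inputs; replace the final layers for 4 classes.
model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, len(CLASSES))


def per_class_metrics(y_true, y_prob):
    """One-vs-rest sensitivity, specificity, F1, and AUC for each class.

    y_true: NumPy array of integer labels, shape (n_samples,)
    y_prob: NumPy array of predicted probabilities, shape (n_samples, n_classes)
    """
    y_pred = y_prob.argmax(axis=1)
    results = {}
    for c, name in enumerate(CLASSES):
        tp = ((y_pred == c) & (y_true == c)).sum()
        fp = ((y_pred == c) & (y_true != c)).sum()
        fn = ((y_pred != c) & (y_true == c)).sum()
        tn = ((y_pred != c) & (y_true != c)).sum()
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        f1 = 2 * prec * sens / (prec + sens) if (prec + sens) else 0.0
        auc = roc_auc_score((y_true == c).astype(int), y_prob[:, c])
        results[name] = {"sensitivity": sens, "specificity": spec,
                         "F1": f1, "AUC": auc}
    return results
```

Averaging the per-class values from `per_class_metrics` over the four classes would yield mean figures comparable to those quoted in the RESULTS section.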