Sang Phan1, Shin'ichi Satoh1, Yoshioki Yoda2, Kenji Kashiwagi3, Tetsuro Oshika4. 1. Research Center for Medical Bigdata (RCMB), National Institute of Informatics, Tokyo, Japan. 2. Yamanashi Koseiren Health Care Center, Kofu, Japan. 3. Department of Ophthalmology, Faculty of Medicine, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, Japan. kenjik@yamanashi.ac.jp. 4. Department of Ophthalmology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan.
Abstract
PURPOSE: To investigate the performance of deep convolutional neural networks (DCNNs) for glaucoma discrimination using color fundus images.
STUDY DESIGN: A retrospective study.
PATIENTS AND METHODS: To investigate the discriminative ability of 3 DCNNs, we used a total of 3312 images: 369 images from glaucoma-confirmed eyes, 256 images from glaucoma-suspected eyes diagnosed by a glaucoma expert, and 2687 images judged to be from nonglaucomatous eyes by a glaucoma expert. We also investigated the effect of image size on the discriminative ability and performed heatmap analysis to determine which parts of the image contribute to the discrimination. Additionally, we used 465 poor-quality images to investigate the effect of poor image quality on the discriminative ability.
RESULTS: All three DCNNs showed areas under the curve (AUCs) of 0.9 or higher. The AUC of the DCNN discriminating glaucoma-confirmed eyes from nonglaucomatous eyes was higher than that discriminating glaucoma-suspected eyes from nonglaucomatous eyes by approximately 0.1. Image size did not affect the discriminative ability. Heatmap analysis showed that the optic disc area was the most important area for the discrimination of glaucoma. Image quality affected the discriminative ability: including poor-quality images in the analysis reduced the AUC by 0.1 to 0.2.
CONCLUSIONS: DCNNs may be a useful tool for detecting glaucomatous or glaucoma-suspected eyes from color fundus images. Proper preprocessing and collection of high-quality images are essential to improving the discriminative ability.
Keywords:
Artificial intelligence; Deep convolutional neural network; Deep learning; Glaucoma; Ocular fundus color image