| Literature DB >> 33294301 |
Cassie A Ludwig1,2, Chandrashan Perera1, David Myung1,3,4, Margaret A Greven5, Stephen J Smith1,4, Robert T Chang1, Theodore Leng1.
Abstract
Purpose: To evaluate the performance of a deep learning algorithm in the detection of referral-warranted diabetic retinopathy (RDR) on low-resolution fundus images acquired with a smartphone and indirect ophthalmoscope lens adapter.
Keywords: artificial intelligence; deep learning; diabetic retinopathy; fundus photography; mobile technology
Year: 2020 PMID: 33294301 PMCID: PMC7718806 DOI: 10.1167/tvst.9.2.60
Source DB: PubMed Journal: Transl Vis Sci Technol ISSN: 2164-2591 Impact factor: 3.283
Figure 1. Flow diagram for algorithm training, validation, testing, and sensitivity analysis.
Figure 2. Example mydriatic fundus photographs from the test data set, taken from screenshots of live video fundus examinations performed using the EyeGo adapter on an iPhone 5S (Apple Inc.) (left) and from an FF 450 plus Fundus Camera with VISUPAC Digital Imaging System (Carl Zeiss Meditec Inc., Oberkochen, Germany) (right). The deep learning algorithm demonstrated high sensitivity and specificity despite glare artifact (A), image warping (B), and lens artifact (C).
Figure 3. Image preprocessing allowed standardization of the data set. We used an algorithm on the original image (A) to crop the fundus photo and reduce background noise (B). We then resized the image to a standard resolution of 224 × 224 pixels (C) to match the input size of our chosen model architecture.
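The paper does not publish its exact preprocessing code, but the steps described for Figure 3 (crop the fundus disc from the dark background, then resize to 224 × 224 pixels) can be sketched as follows. This is an illustrative assumption: a simple intensity-threshold crop is used here, which may differ from the authors' actual algorithm; the function name `preprocess_fundus` and the threshold value are hypothetical.

```python
import numpy as np
from PIL import Image

def preprocess_fundus(img: Image.Image, threshold: int = 10) -> Image.Image:
    """Crop the bright fundus disc from a dark background, then resize to 224x224."""
    arr = np.asarray(img.convert("L"))   # grayscale copy used only for masking
    mask = arr > threshold               # foreground = pixels brighter than background
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    # Bounding box of the foreground region
    top, bottom = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    left, right = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
    cropped = img.crop((left, top, right, bottom))
    # Match the fixed input resolution of the model architecture
    return cropped.resize((224, 224), Image.BILINEAR)
```

In practice a circle-fitting or connected-component crop is often more robust to glare outside the fundus disc, but the bounding-box version above conveys the idea.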
Average AUC, F-Score, Sensitivity, and Specificity of Nonreferable Diabetic Retinopathy Versus Referable Diabetic Retinopathy Using the EyeGo Smartphone Data Set and a Publicly Available Data Set for Validation
| Dataset | No. With RDR | No. Without RDR | AUC (95% CI) | F1 (95% CI) | Sensitivity (95% CI) | Specificity (95% CI) |
|---|---|---|---|---|---|---|
| EyeGo (ground truth EyeGo photo) | 27 | 76 | 0.89 (0.83–0.95) | 0.85 (0.80–0.90) | 0.89 (0.81–1.0) | 0.83 (0.77–0.89) |
| EyeGo (ground truth fundus photo) | 52 | 25 | 0.82 (0.73–0.90) | 0.82 (0.75–0.89) | 0.83 (0.78–0.91) | 0.76 (0.63–0.88) |
| Messidor-2 (validation) | 383 | 675 | 0.92 (0.91–0.94) | 0.83 (0.81–0.85) | 0.87 (0.84–0.90) | 0.80 (0.78–0.82) |
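The metrics in the table above follow from standard definitions on a binary RDR classification. As a minimal sketch (the labels and scores below are made-up toy data, not the study's; the helper `binary_metrics` is hypothetical), sensitivity and specificity come from the confusion matrix at a fixed threshold, while AUC is threshold-free and equals the probability that a random RDR eye scores higher than a random non-RDR eye:

```python
import numpy as np

def binary_metrics(y_true, y_score, threshold=0.5):
    """Compute AUC, F1, sensitivity, and specificity for binary predictions."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)     # true-positive rate among RDR eyes
    specificity = tn / (tn + fp)     # true-negative rate among non-RDR eyes
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # AUC via the Mann-Whitney U statistic: P(random positive outranks random negative)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    return {"auc": auc, "f1": f1,
            "sensitivity": sensitivity, "specificity": specificity}
```

The 95% confidence intervals reported in the table would additionally require a resampling procedure such as bootstrapping over eyes, which is omitted here.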
Figure 4. Receiver operating characteristic (ROC) curves for the EyeGo smartphone data set using EyeGo images as ground truth (left), the EyeGo smartphone data set using fundus photos as ground truth (middle), and the Messidor-2 data set (right), demonstrating high reliability of a deep learning algorithm used to screen heterogeneous fundus photos.