| Literature DB >> 28842613 |
Hideharu Ohsugi, Hitoshi Tabuchi, Hiroki Enno, Naofumi Ishitobi.
Abstract
Rhegmatogenous retinal detachment (RRD) is a serious condition that can lead to blindness; however, it is highly treatable with timely and appropriate intervention. Early diagnosis and treatment of RRD are therefore crucial. In this study, we applied deep learning, a machine-learning technology, to detect RRD in ultra-wide-field fundus images and evaluated its performance. In total, 411 images (329 for training and 82 for grading) from 407 RRD patients and 420 images (336 for training and 84 for grading) from 238 non-RRD patients were used in this study. The deep learning model demonstrated a high sensitivity of 97.6% [95% confidence interval (CI), 94.2-100%] and a high specificity of 96.5% (95% CI, 90.2-100%), and the area under the curve (AUC) was 0.988 (95% CI, 0.981-0.995). By enabling accurate diagnosis of RRD from ultra-wide-field fundus ophthalmoscopy, this model can improve medical care in remote areas where eye clinics are not available. Early diagnosis of RRD can prevent blindness.
Year: 2017 PMID: 28842613 PMCID: PMC5573327 DOI: 10.1038/s41598-017-09891-x
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
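The sensitivity and specificity reported in the abstract can be illustrated with a short sketch. The confusion-matrix counts below are hypothetical, chosen only to be roughly consistent with the reported grading-set sizes (82 RRD and 84 non-RRD images); the paper's exact confusion matrix is not given here.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions.

    Sensitivity: fraction of diseased (RRD) images correctly flagged.
    Specificity: fraction of non-diseased images correctly cleared.
    """
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

# Hypothetical counts: 80 of 82 RRD images detected, 81 of 84 non-RRD
# images correctly classified (close to the reported 97.6% / 96.5%).
sens, spec = sensitivity_specificity(tp=80, fn=2, tn=81, fp=3)
print(round(sens, 3), round(spec, 3))  # → 0.976 0.964
```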
Figure 1. Representative fundus images obtained by ultra-wide-field scanning laser ophthalmoscopy. Ultra-wide-field fundus images of a right eye without rhegmatogenous retinal detachment (RRD) (a) and with RRD (b). The arrow indicates the retinal break, and the arrowheads indicate the areas of RRD.
Figure 2. Representative receiver operating characteristic (ROC) curves of the deep learning model and the support vector machine (SVM) model. The area under the curve (AUC) of the deep learning model was 0.988 (95% CI, 0.981–0.995), and that of the SVM model was 0.976 (95% CI, 0.957–0.996); the AUC was higher for the deep learning model.
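The AUC values compared in Figure 2 can be computed from classifier scores via the Mann-Whitney rank formulation: AUC equals the probability that a randomly chosen positive (RRD) image receives a higher score than a randomly chosen negative one, with ties counted as one half. This is a generic sketch, not the authors' code; the scores are toy values.

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via pairwise score comparisons."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # tie counts as half a win
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example with hypothetical classifier scores: every RRD score
# exceeds every non-RRD score, so separation is perfect.
print(auc([0.9, 0.8, 0.7], [0.6, 0.4]))  # → 1.0
```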
Figure 3. Overall architecture of the model. The retinal fundus image input (96 × 96 pixels) is labelled as Input. Each convolutional layer (Conv1–3) is followed by a rectified linear unit (ReLU) activation and a max-pooling layer (MP1–3); these are followed by two fully connected layers (FC1, FC2). The final output layer performs binary classification using a softmax function.
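The shape flow through the Conv1-3 / MP1-3 pipeline in Figure 3 can be traced with the standard output-size formulas. The kernel sizes, padding, and strides below are assumptions for illustration; the caption specifies only the 96 × 96 input, three convolution + ReLU + pooling stages, two fully connected layers, and a softmax output.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution on a square input."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride):
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

size = 96                                      # 96 x 96 input image
for _ in range(3):                             # Conv1-3 + MP1-3
    size = conv_out(size, kernel=3, pad=1)     # assumed 3x3 'same' conv
    size = pool_out(size, kernel=2, stride=2)  # assumed 2x2 pool, halves size
print(size)  # → 12, i.e. 96 -> 48 -> 24 -> 12 before flattening into FC1
```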