| Literature DB >> 31552296 |
Zdenka Haskova1,2, Marco Prunotto3,4,5, Filippo Arcadu6,3, Fethallah Benmansour6,3, Andreas Maunz6,3, Jeff Willis1,2.
Abstract
The global burden of diabetic retinopathy (DR) continues to worsen, and DR remains a leading cause of vision loss worldwide. Here, we describe an algorithm to predict DR progression by means of deep learning (DL), using as input color fundus photographs (CFPs) acquired at a single visit from a patient with DR. The proposed DL models were designed to predict future DR progression, defined as 2-step worsening on the Early Treatment Diabetic Retinopathy Study Diabetic Retinopathy Severity Scale, and were trained against DR severity scores assessed 6, 12, and 24 months after the baseline visit by masked, well-trained, human reading center graders. One of these models (prediction at month 12) achieved an area under the curve of 0.79. Interestingly, our results highlight the importance of the predictive signal located in the peripheral retinal fields, which are not routinely collected for DR assessments, and the importance of microvascular abnormalities. Our findings show the feasibility of predicting future DR progression by leveraging CFPs of a patient acquired at a single visit. Upon further development on larger and more diverse datasets, such an algorithm could enable early diagnosis and referral to a retina specialist for more frequent monitoring, and even consideration of early intervention. Moreover, it could also improve patient recruitment for clinical trials targeting DR.
Keywords: Macular degeneration; Predictive markers; Vision disorders
Year: 2019 PMID: 31552296 PMCID: PMC6754451 DOI: 10.1038/s41746-019-0172-3
Source DB: PubMed Journal: NPJ Digit Med ISSN: 2398-6352
Fig. 1 An overview of retinal imaging features analyzed to assess diabetic retinopathy (DR) severity and a schematic of the study design. a Example fovea-centered color fundus photographs (CFPs) of a patient without DR (left) and a patient with signs of DR (right). In the CFP of the patient with signs of DR (right), one example each of a hemorrhage, an exudate, and a microaneurysm is highlighted. Both examples were selected from the Kaggle DR dataset.[47] b Schematic of the Diabetic Retinopathy Severity Scale (DRSS) established by the Early Treatment Diabetic Retinopathy Study (ETDRS) group to measure DR worsening over time. c Schematic of the two-phase modeling to detect two-step or more DRSS worsening over time. In phase I, field-specific Inception-v3 deep convolutional neural networks (DCNNs), called “field-specific DCNNs” or “pillars,” are trained by means of transfer learning to predict whether the patient will progress by two or more ETDRS DRSS steps. In phase II, the probabilities independently generated by the field-specific DCNNs are aggregated by means of a random forest.
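The two-phase design described in the caption (per-field DCNN "pillars" producing progression probabilities, then a random forest aggregating them per patient) can be sketched as follows. This is an illustrative sketch, not the authors' code: the simulated `field_probs` array stands in for the seven Inception-v3 pillar outputs, and the labels are synthetic.

```python
# Minimal sketch of the phase II aggregation, assuming phase I has already
# produced one progression probability per retinal field per patient.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_patients, n_fields = 200, 7  # 7-field CFP protocol

# Simulated phase-I outputs: stand-ins for the field-specific DCNN pillars.
field_probs = rng.uniform(0.0, 1.0, size=(n_patients, n_fields))
# Synthetic labels: 2-step-or-more DRSS worsening (1) vs. no progression (0).
labels = (field_probs.mean(axis=1)
          + rng.normal(0, 0.1, n_patients) > 0.5).astype(int)

# Phase II: a random forest aggregates the 7 field-level probabilities
# into a single patient-level progression probability.
aggregator = RandomForestClassifier(n_estimators=100, random_state=0)
aggregator.fit(field_probs, labels)
patient_level = aggregator.predict_proba(field_probs)[:, 1]
```

The aggregation step is what lets the model weigh fields differently, which is how the contribution of the peripheral fields can later be inspected.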
Fig. 2 Summary of the results for the prediction of two-step or more diabetic retinopathy progression at months 6, 12, and 24 using 7-field color fundus photographs of patients at baseline. AUC area under the curve, CI confidence interval, CV cross-validation, ROC receiver operating characteristic, SD standard deviation, SENS sensitivity, SPEC specificity
Performance of the individual field-specific DCNNs in terms of AUC
| Month | F1 | F2 | F3 | F4 | F5 | F6 | F7 |
|---|---|---|---|---|---|---|---|
| 6 | 0.65 ± 0.12 | 0.65 ± 0.11 | 0.63 ± 0.09 | 0.59 ± 0.08 | 0.72 ± 0.11 | 0.66 ± 0.14 | 0.69 ± 0.12 |
| 12 | 0.68 ± 0.04 | 0.62 ± 0.07 | 0.67 ± 0.05 | 0.75 ± 0.06 | 0.70 ± 0.04 | 0.72 ± 0.05 | 0.74 ± 0.03 |
| 24 | 0.69 ± 0.07 | 0.61 ± 0.06 | 0.67 ± 0.04 | 0.68 ± 0.05 | 0.70 ± 0.03 | 0.65 ± 0.05 | 0.74 ± 0.04 |
The associated errors are the standard deviation over the AUC values of 25 DCNNs (five repetitions × five folds, n = 25) trained for each field
AUC area under the curve, DCNN deep convolutional neural network
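The "mean ± SD over 25 DCNNs (five repetitions × five folds)" scheme behind the table can be reproduced generically. The sketch below uses synthetic data and a logistic regression stand-in for a DCNN; only the cross-validation bookkeeping mirrors the paper's setup.

```python
# Sketch of the reported metric: mean ± SD of AUC over 5 repetitions of
# 5-fold cross-validation (25 models), on synthetic data.
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + rng.normal(0, 1, 300) > 0).astype(int)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
aucs = []
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx],
                              clf.predict_proba(X[test_idx])[:, 1]))

print(f"AUC = {np.mean(aucs):.2f} ± {np.std(aucs):.2f} (n = {len(aucs)})")
```

Repeating the fold split five times, as done here, is what yields n = 25 AUC values per field in the table above.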
Fig. 3 SHAP plots summarizing the pointwise and average contribution of each deep convolutional neural network (DCNN) to the random forest aggregation. In this example, the SHAP analysis related to the five folds used for the prediction of DR progression at month 24 is shown. The DCNNs are ordered by importance from top to bottom. The naming convention of the DCNNs identifies the field (‘f1,’ ‘f2,’ etc.) and repetition (‘rep00,’ ‘rep01,’ etc.)
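The paper ranks the field-specific DCNN outputs by their SHAP contribution to the random forest. As a dependency-light stand-in, the sketch below ranks the same kind of inputs with permutation importance, a different but related global-importance technique; the data are simulated, with two fields deliberately made informative.

```python
# Illustrative importance ranking of simulated field-level probabilities
# feeding a random forest (permutation importance as a SHAP stand-in).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n, n_fields = 300, 7
field_probs = rng.uniform(size=(n, n_fields))
# Make fields 4 and 7 (indices 3 and 6) informative in this toy example.
y = (0.6 * field_probs[:, 3] + 0.6 * field_probs[:, 6]
     + rng.normal(0, 0.15, n) > 0.6).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(field_probs, y)
imp = permutation_importance(rf, field_probs, y,
                             n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]  # most important first
```

A ranking like `ranking` above corresponds to the top-to-bottom ordering of DCNNs in the SHAP plots.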
Fig. 4 Examples of attribution maps placed side by side with the original test color fundus images. In each set, the original image is on the left and the attribution map is on the right. The attribution of the deep convolutional neural networks focuses mainly on microaneurysms, hemorrhages, and exudates. a Two examples of attribution maps for the model predicting diabetic retinopathy (DR) progression at month 6. b Two examples of attribution maps for the model predicting DR progression at month 12. c Two examples of attribution maps for the model predicting DR progression at month 24
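The caption does not specify how the attribution maps were computed; a generic way to produce such a map is occlusion sensitivity: slide a patch over the image, mask it, and record how much the model's progression probability drops. The sketch below uses a toy grayscale "image" and a toy scoring function standing in for a DCNN that responds to a bright lesion.

```python
# Generic occlusion-sensitivity attribution sketch (not the paper's method).
import numpy as np

def occlusion_map(image, predict, patch=8, baseline=0.0):
    """Coarse attribution: probability drop when each patch is masked."""
    h, w = image.shape
    p0 = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = p0 - predict(masked)
    return heat

# Toy "model": score proportional to brightness in the top-left corner,
# a stand-in for a DCNN responding to a bright exudate located there.
predict = lambda img: img[:16, :16].mean()
img = np.zeros((32, 32))
img[:16, :16] = 1.0  # the "lesion"

heat = occlusion_map(img, predict, patch=16)
```

High values in `heat` mark regions the model relies on, analogous to the lesion-focused highlights described in the caption.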