Saif Aldeen Alryalat, Mohammad Al-Antary, Yasmine Arafa, Babak Azad, Cornelia Boldyreff, Tasneem Ghnaimat, Nada Al-Antary, Safa Alfegi, Mutasem Elfalah, Mohammed Abu-Ameerh.
Abstract
Diabetic macular edema (DME) is the most common cause of visual impairment among patients with diabetes mellitus. Anti-vascular endothelial growth factors (Anti-VEGFs) are considered the first line in its management. The aim of this research has been to develop a deep learning (DL) model for predicting response to intravitreal anti-VEGF injections among DME patients. The research included treatment naive DME patients who were treated with anti-VEGF. Patient's pre-treatment and post-treatment clinical and macular optical coherence tomography (OCT) were assessed by retina specialists, who annotated pre-treatment images for five prognostic features. Patients were also classified based on their response to treatment in their post-treatment OCT into either good responder, defined as a reduction of thickness by >25% or 50 µm by 3 months, or poor responder. A novel modified U-net DL model for image segmentation, and another DL EfficientNet-B3 model for response classification were developed and implemented for predicting response to anti-VEGF injections among patients with DME. Finally, the classification DL model was compared with different levels of ophthalmology residents and specialists regarding response classification accuracy. The segmentation deep learning model resulted in segmentation accuracy of 95.9%, with a specificity of 98.9%, and a sensitivity of 87.9%. The classification accuracy of classifying patients' images into good and poor responders reached 75%. Upon comparing the model's performance with practicing ophthalmology residents, ophthalmologists and retina specialists, the model's accuracy is comparable to ophthalmologist's accuracy. The developed DL models can segment and predict response to anti-VEGF treatment among DME patients with comparable accuracy to general ophthalmologists. Further training on a larger dataset is nonetheless needed to yield more accurate response predictions.Entities:
Keywords: anti-VEGF; artificial intelligence; deep learning; diabetic retinopathy; macular edema
Year: 2022 PMID: 35204404 PMCID: PMC8870773 DOI: 10.3390/diagnostics12020312
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
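The abstract's responder criterion (a reduction in central macular thickness of >25% of baseline or of 50 µm at 3 months) can be expressed as a simple rule. A minimal sketch, assuming the reduction thresholds as stated in the abstract; the function name and the ≥ vs. > boundary handling are illustrative assumptions:

```python
def classify_response(cmt_pre_um: float, cmt_post_um: float) -> str:
    """Label a DME patient's anti-VEGF response from central macular
    thickness (CMT, micrometres) before and 3 months after treatment.

    Per the paper's criterion, a good responder shows a reduction of
    >25% of baseline thickness or of at least 50 um (the inclusive
    boundary on the 50 um cut-off is an assumption here).
    """
    reduction_um = cmt_pre_um - cmt_post_um
    if reduction_um > 0.25 * cmt_pre_um or reduction_um >= 50:
        return "good responder"
    return "poor responder"
```

Applied to the sample means from the clinical table below (475 µm pre-treatment, 382 µm post-treatment), the 93 µm reduction meets the 50 µm cut-off even though it falls short of 25% of baseline.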
Annotation classes and corresponding mask RGB values for our dataset.
| Classes | RGB Colour | RGB Value | Class Number |
|---|---|---|---|
| Background | Black | 0,0,0 | 0 |
| Inner intra-retinal fluid | Soft Blue | 51,221,255 | 1 |
| Disrupted ellipsoid zone | Strong Blue | 53,15,247 | 2 |
| Sub-retinal fluid | Orange | 245,147,49 | 3 |
| Hyper-reflective dot | Yellow | 250,250,55 | 4 |
| Outer intra-retinal fluid | Red | 255,53,94 | 5 |
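In practice, colour-coded masks like those in the table above are converted to integer class maps before training a segmentation model. A minimal NumPy sketch using the table's RGB values; the exact preprocessing in the paper is not specified, so this is an illustrative assumption:

```python
import numpy as np

# RGB values and class numbers from the annotation table above.
RGB_TO_CLASS = {
    (0, 0, 0): 0,        # Background
    (51, 221, 255): 1,   # Inner intra-retinal fluid
    (53, 15, 247): 2,    # Disrupted ellipsoid zone
    (245, 147, 49): 3,   # Sub-retinal fluid
    (250, 250, 55): 4,   # Hyper-reflective dot
    (255, 53, 94): 5,    # Outer intra-retinal fluid
}

def rgb_mask_to_classes(mask_rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) uint8 colour mask to an (H, W) class-index map."""
    out = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)
    for rgb, cls in RGB_TO_CLASS.items():
        match = np.all(mask_rgb == np.array(rgb, dtype=np.uint8), axis=-1)
        out[match] = cls
    return out
```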
Figure 1. Some instances of the dataset: input images are shown in the first row and the corresponding annotations in the second row. The first two samples belong to patients with good responses, while the next two samples show poor responses to anti-VEGF treatment.
Figure 2. Overview of the proposed segmentation model's architecture. The Inception-Squeeze-Excitation (ISE) module is included in the decoding path to extract hierarchical semantic representations. Furthermore, a multi-level attention mechanism is utilized to extract multi-scale representations.
Figure 3. Our Inception Squeeze Excitation (ISE) block. This module scales the encoded feature F and then utilizes the inception module to generate a transformed feature map.
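The "squeeze-excitation" part of the ISE block follows the standard recipe: global average pooling over spatial dimensions, a two-layer bottleneck with ReLU and sigmoid, and per-channel rescaling. A NumPy sketch of that channel recalibration step, assuming caller-supplied weight matrices (the paper's reduction ratio and the inception branches are not reproduced here):

```python
import numpy as np

def squeeze_excite(feature: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation channel recalibration on a (C, H, W) feature map.

    squeeze: global average pool over H, W            -> (C,)
    excite:  FC + ReLU then FC + sigmoid              -> per-channel gates
    scale:   multiply each channel by its gate in (0, 1)
    """
    z = feature.mean(axis=(1, 2))               # squeeze: (C,)
    s = np.maximum(z @ w1, 0.0)                 # bottleneck FC + ReLU
    gates = 1.0 / (1.0 + np.exp(-(s @ w2)))     # expansion FC + sigmoid: (C,)
    return feature * gates[:, None, None]       # channel-wise rescaling
```

With zero weights, every gate is sigmoid(0) = 0.5, so the output is the input halved; trained weights learn which channels to emphasize.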
Figure 4. Classification model architecture. The classification model receives the predicted mask alongside the input image to apply an initial attention mechanism.
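One common way to use a predicted mask as an initial attention signal is to up-weight the lesion regions of the input image before it enters the classifier. The paper does not spell out its fusion step, so the following is a hedged sketch; the `floor` parameter, which preserves some background context rather than zeroing it out, is an illustrative choice:

```python
import numpy as np

def mask_attention(image: np.ndarray, mask: np.ndarray, floor: float = 0.2) -> np.ndarray:
    """Weight an (H, W) image by a predicted lesion mask with values in [0, 1].

    Pixels under the mask keep full weight; background pixels are
    attenuated to `floor` (an assumed hyperparameter, not from the paper).
    """
    weights = floor + (1.0 - floor) * np.clip(mask, 0.0, 1.0)
    return image * weights
```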
Clinical characteristics of the included sample.
| Characteristic | Category | Mean | Standard Deviation | Count | Column N % |
|---|---|---|---|---|---|
| Age (years) | | 63.34 | 10.11 | | |
| Gender | Female | | | 38 | 37.6% |
| | Male | | | 63 | 62.4% |
| Eye laterality | Left | | | 44 | 43.6% |
| | Right | | | 57 | 56.4% |
| Severity of DR | Mild non-proliferative diabetic retinopathy | | | 12 | 11.9% |
| | Moderate non-proliferative diabetic retinopathy | | | 28 | 27.7% |
| | Severe non-proliferative diabetic retinopathy | | | 19 | 18.8% |
| | Proliferative diabetic retinopathy | | | 42 | 41.6% |
| Central macular thickness pre-treatment (μm) | | 475 | 146 | | |
| Central macular thickness post-treatment (μm) | | 382 | 149 | | |
| Best corrected visual acuity pre-treatment | | 0.258 | 0.205 | | |
| Best corrected visual acuity post-treatment | | 0.334 | 0.211 | | |
| Functional outcome | Worsened | | | 7 | |
| | Stable | | | 57 | |
| | Improved | | | 37 | |
| Prior history of argon laser | No | | | 58 | |
| | Yes | | | 43 | |
| Prior history of anti-VEGF | No | | | 38 | |
| | Yes | | | 63 | |
| Prior steroid injections | | | | 4 | |
| Phakic status | Phakic | | | 69 | 68.3% |
| | Pseudo-phakic | | | 32 | 31.7% |
Performance comparison on DME dataset for different approaches.
| Methods | AUC | Accuracy | Specificity | Sensitivity | Precision | F1 Score | Dice Score |
|---|---|---|---|---|---|---|---|
| Baseline (U-Net) | 0.904 | 0.925 | 0.973 | 0.836 | 0.772 | 0.802 | 0.802 |
| U-Net + SE | 0.912 | 0.937 | 0.981 | 0.844 | 0.786 | 0.812 | 0.812 |
| U-Net + ISE | 0.921 | 0.951 | 0.985 | 0.846 | 0.788 | 0.817 | 0.817 |
| Proposed Method (U-Net + ISE + Attention) | 0.934 | 0.959 | 0.989 | 0.879 | 0.807 | 0.839 | 0.839 |
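The metrics reported in the table follow the standard definitions over pixel-wise confusion counts; note that for binary segmentation the Dice score equals the F1 score, which is consistent with the identical F1 and Dice columns above. A sketch for the binary case (the paper's multi-class averaging scheme is not specified, so this is an assumption):

```python
def segmentation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard pixel-wise metrics from binary confusion counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)                  # recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    dice        = 2 * tp / (2 * tp + fp + fn)     # equals F1 in the binary case
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "dice": dice}
```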
Figure 5. Sample of segmentation results, including the input image, predicted mask, and true mask.
Classification performance of different models for predicting the effectiveness of anti-VEGF treatment.
| Methods | Accuracy % | Precision % | F1 Score % | AUC % | Sensitivity % | Specificity % |
|---|---|---|---|---|---|---|
| VGG | 65 | 60 | 70 | 70.93 | 70.77 | 76 |
| ResNet | 70 | 65 | 75 | 76 | 75.82 | 78 |
| DenseNet | 70 | 65 | 75 | 76.01 | 75.82 | 78 |
| EfficientNet-B3 (image) | 70 | 65 | 75 | 76.03 | 75.84 | 78 |
| EfficientNet-B3 (image + mask) | 75 | 70 | 80 | 81.07 | 80.88 | 84 |
Figure 6. Classification accuracy of the deep learning model compared to different levels of ophthalmology trainees and specialists.