Gehad A. Saleh, Nihal M. Batouty, Sayed Haggag, Ahmed Elnakib, Fahmi Khalifa, Fatma Taher, Mohamed Abdelazim Mohamed, Rania Farag, Harpal Sandhu, Ashraf Sewelam, Ayman El-Baz.
Abstract
Traditional dilated ophthalmoscopy can reveal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, and DR is recognized as a worldwide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the various imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, we survey the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases. Furthermore, current works are summarized and discussed. Finally, projected future trends are outlined. The work surveyed indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions that hold promise for clinical applications will be presented.
Keywords: artificial intelligence; diabetic retinopathy; macular degeneration; modalities; retinal diseases
Year: 2022 PMID: 36004891 PMCID: PMC9405367 DOI: 10.3390/bioengineering9080366
Source DB: PubMed Journal: Bioengineering (Basel) ISSN: 2306-5354
Figure 1Common retinal diseases.
Figure 2Analysis of retinal images.
Figure 3Medical image modalities for the detection, diagnosis, and staging of DR and AMD.
Figure 4Components of artificial intelligence (AI).
Figure 5Summary of traditional ML methods for DR detection, diagnosis, and/or staging.
Traditional ML methods for early detection, diagnosis, and grading of DR.
| Study | Goal | Features | Classifier | Database Size | Performance |
|---|---|---|---|---|---|
| Welikala et al. [ | Detection of new vessels from fundus images as an indication of PDR | Local morphology features + genetic feature selection algorithm | SVM | 60 images from | |
| Prasad et al. [ | Detection of DR (two classes: non-DR vs. DR) using fundus images | 41 statistical and texture features + Haar wavelet transform for feature selection + PCA for feature reduction | Back-propagation neural network and one-rule classifier | 89 images from DIARETDB1 [ | |
| Mahendran et al. [ | Classification of the data into normal vs. abnormal followed by classification of abnormal into moderate NPDR or severe NPDR using fundus images | Statistical and texture features using GLCM extracted from segmented images | SVM and neural network | 1200 images from MESSIDOR database | |
| Bhatkar et al. [ | Detect DR using fundus images | Discrete Cosine transform and statistical features | Multi-layer perceptron neural network | 130 images from DIARETDB0 database | |
| Labhade et al. [ | Classification of the data into four classes: normal, mild NPDR, severe NPDR, and PDR using fundus images | 40 statistical and GLCM texture features | SVM, | 1200 images from MESSIDOR database | Best |
| Rahim et al. [ | Classification of the data into five classes: no DR, mild NPDR, moderate NPDR, severe NPDR, and PDR using fundus images | Three features (area, mean, and standard deviation) of two extracted regions using fuzzy techniques (retina and exudates) | SVM with RBF kernel | 600 images from 300 patients collected at the Hospital Melaka, Malaysia | |
| Islam et al. [ | Discriminate between normal and DR using fundus images | Speeded up robust features | k-means, a bag of words approach, and SVM | 180 fundus images | |
| Carrera et al. [ | Classifying nonproliferative DR into 4 grades using fundus images | Features extracted from isolated blood vessels, microaneurysms, and hard exudates | SVM | 400 images | |
| Somasundaram and Alli [ | Differentiate between NPDR and PDR | Extraction of the candidate objects (blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness, and variance) | Bagging ensemble classifier | 89 color fundus images | |
| Eladawi et al. [ | Detecting early DR using OCTA | Density, appearance of the retinal blood vessels, and distance map of the foveal avascular zone | SVM | 105 subjects | |
| Costa et al. [ | Grading DR using fundus images | Joint optimization of the instance encoding and the image classification stages | Weakly supervised multiple instance learning framework | 1200 | |
| Alam et al. [ | Early detection of DR using OCTA images | Blood vessel tortuosity, blood vascular caliber, vessel perimeter index, blood vessel density, foveal avascular zone area, and foveal avascular zone contour irregularity | SVM | 120 images | |
| Sandhu et al. [ | Diagnosis of NPDR using OCT and OCTA | Curvature, reflectivity, and thickness of retinal layers (OCT), | Random forest | 111 patients | |
| Sharafeldeen et al. [ | Detecting DR using OCT | Thickness, tortuosity, and reflectivity of 12 extracted retinal layers | Two-level neural networks | 260 images from 130 patients | |
| Liu et al. [ | Detecting DR using OCTA | A discrete wavelet transform was applied to extract texture features from each image | Logistic regression, logistic regression regularized with the elastic net penalty, SVM, and the gradient boosting tree | 114 DR images + 132 control images | |
| Wang et al. [ | Grading DR using OCT images | Foveal avascular zone (FAZ) metrics, Vessel density, extrafoveal avascular area and vessel morphology metrics | Multivariate regression analysis was used to identify the most discriminative features | 105 eyes from 105 patients | |
| Abdelsalam et al. [ | Diagnosis of early | Multifractal geometry | SVM | 170 eye images | |
| Elsharkawy et al. [ | Detection of DR using OCT | Gibbs energy extracted from 12 retinal layers | Majority voting using an ensemble of Neural networks | 188 3D-OCT subjects |
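Most traditional ML entries above share one pipeline: extract hand-crafted features per image, then train a kernel SVM (the most common classifier in the table). Below is a minimal scikit-learn sketch; the feature vectors are synthetic stand-ins (their dimensionality and class separation are illustrative assumptions, not values from any cited study).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-image feature vectors (e.g., GLCM texture
# statistics); a real pipeline would compute these from fundus/OCT images.
n_per_class = 100
X = np.vstack([
    rng.normal(loc=1.0, scale=1.0, size=(n_per_class, 3)),   # "DR" features
    rng.normal(loc=-1.0, scale=1.0, size=(n_per_class, 3)),  # "no DR" features
])
y = np.array([1] * n_per_class + [0] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# RBF-kernel SVM after feature standardization, as in several table entries.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With real data, a feature-extraction step (GLCM statistics, wavelet coefficients, layer thickness/reflectivity, etc.) replaces the synthetic `X`.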
Figure 6Summary of deep learning methods for DR detection, diagnosis, and/or staging.
Deep learning methods for early detection, diagnosis, and grading of DR.
| Study | Goal | Deep Network | Other Features | Database Size | Performance |
|---|---|---|---|---|---|
| Gulshan et al. [ | Grading of DR and DME using fundus images | Ensemble of 10 CNN networks | Final decision was computed as the linear average of the predictions of the ensemble | 128,175 + 9963 from EyePACS-1 +1748 from MESSIDOR-2 | |
| Colas et al. [ | Grading of DR using fundus images | Deep CNN network | Their technique provides the location of the detected anomalies | 70,000 images (training) + 10,000 (test) | |
| Ghosh et al. [ | Grading of DR using fundus images | 28-layer CNN | Data augmentation, normalization, and denoising were applied before the CNN | 30,000 Kaggle images | |
| Eltanboly et al. [ | DR detection using OCT images | Deep fusion classifier using auto-encoders | Features are: reflectivity, curvature, and thickness of twelve segmented retinal layers | 52 scans | |
| Takahashi et al. [ | Differentiate between NPDR, severe NPDR, and PDR using fundus images | Modified GoogLeNet | Fundus scans are the inputs to the modified GoogLeNet | 9939 scans from 2740 patients | |
| Quellec et al. [ | Grading DR using fundus images | 26-layer ConvNets | An ensemble of ConvNet was used | 88,702 scans (Kaggle) +107,799 images (e-optha) | |
| Ting et al. [ | Identifying DR and related eye diseases using fundus images | Adapted VGGNet architecture | An ensemble of two networks for detecting referable DR | 494,661 images | |
| Wang et al. [ | Diagnosing DR and identifying suspicious regions using fundus images | Zoom-in-Net | Inception-Resnet for the backbone network | 35k/11k/43k for train/val/test (EyePACS) and 1.2k (Messidor) | |
| Dutta et al. [ | Differentiate between mild NPDR, moderate NPDR, severe NPDR, and PDR | Back propagation NN, Deep NN, and CNN | CNN used VGG16 model | 35,000 training and 15,000 test images | |
| Eltanboly et al. [ | Grading of nonproliferative DR using OCT images | Two-stage deep fusion classifier using autoencoder | Features are: reflectivity, curvature, and thickness of twelve segmented retinal layers | 74 OCT images | |
| Zhang et al. [ | Diagnose the severity of diabetic retinopathy (DR) | DR-Net with an adaptive cross-entropy loss | Data augmentation is applied | 88,702 images from EyePACS dataset | |
| Chakrabarty et al. [ | DR detection using fundus images | 9-layer CNN | Resized grey-level Fundus scans are the inputs to the CNN | 300 images | |
| Kwasigroch et al. [ | DR detection and staging using fundus images | VGGNet | Fundus scans are the inputs to the CNN | 88,000 images | |
| Li et al. [ | Detection of referral DR using fundus images | Inception-v3 | Enhanced contrast scans are the inputs to the CNN, Transfer learning is applied | 19,233 images from 5278 patients | |
| Nagasawa et al. [ | Differentiate between nonPDR and PDR using ultrawide-field fundus images | Inception-v3 | Transfer learning is applied | 378 scans | |
| Metan et al. [ | DR staging using fundus images | ResNet | Color fundus images are the inputs to the CNN | 88,702 | |
| Qummar et al. [ | DR staging using fundus images | Five CNNs: ResNet50, Inception-v3, Xception, DenseNet-121, and DenseNet-169 | Ensemble of five CNNs | 88,702 | |
| Sayres et al. [ | DR staging using fundus images | Inception-v4 | Fundus images are the inputs to the CNN | 1769 images from 1612 patients | |
| Sengupta et al. [ | DR staging using fundus images | Inception-v3 | Data preprocessing is applied | Kaggle EYEPACS and Messidor datasets | |
| Hathwar et al. [ | DR detection and staging using fundus images | Xception | Transfer learning is applied | 35,124 images (EyePACS) 413 images (IDRiD) | |
| Li et al. [ | Early detection of DR using OCT images | OCTD_Net | Data augmentation is applied | 4168 OCT images | |
| Heisler et al. [ | Classifying DR Using OCTA images | Four fine-tuned VGG19 | Ensemble training is applied based on majority voting or stacking | 463 volumes from 360 eyes | |
| Zang et al. [ | Classifying DR Using OCT and OCTA images | DcardNet | Data augmentation is applied | 303 eyes from 250 participants | |
| Ghazal et al. [ | Early detection of NPDR using OCT images | AlexNet | SVM was used for classification | 52 subjects | |
| Narayanan et al. [ | Detect and grade DR using fundus images | AlexNet, VGG16, ResNet, Inception-v3, NASNet, DenseNet, GoogLeNet | Transfer learning is applied for each network | 3661 images | |
| Shankar et al. [ | DR grading using fundus images | Synergic deep learning | Histogram-based segmentation was applied to extract the details of the fundus image | 1200 images | |
| Ryu et al. [ | Early detection of DR using OCTA | ResNet101 | OCTA images are the inputs to the CNN | 496 eyes | |
| He et al. [ | Grading DR using fundus images | CABNet with DenseNet-121 as a backbone network | CABNet is an attention module with global attention block | 1200 images | |
| Saeed et al. [ | Grading DR using fundus images | Two pretrained CNNs | Transfer Learning is applied | 1200 images | |
| Wang et al. [ | Grading DR using fundus images | Inception-v3 + lesionNet | Transfer Learning is applied | 12,252 images + 565 (external test set) | |
| Hsieh et al. [ | Grading DR using fundus images | VeriSee™ software | Modified Inception-v4 model as backbone network | 7524 images | |
| Khan et al. [ | Grading DR using fundus images | VGG-NiN model | VGG16, spatial pyramid pooling layer and network-in-network are stacked to form VGG-NiN model | 25,810 images | |
| Gao et al. [ | Grading DR using fundus fluorescein angiography images | VGG16, ResNet50, DenseNet | Images are the inputs to the CNNs | 11,214 images from 705 patients | |
| Zia et al. [ | Grading DR using fundus images | VGGNet and Inception-v3 | Applied a feature fusion and selection steps | 35,126 Kaggle dataset | |
| Das et al. [ | Detecting and classifying DR using fundus images | A CNN is used with several layers that is optimized using a genetic algorithm | SVM was used for classification | 1200 images (Messidor dataset) | |
| Tsai et al. [ | Grading DR using fundus images | Inception-v3, ResNet101, and DenseNet121 | Transfer Learning is applied | 88,702 images (EyePACS) 4038 images |
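Several deep-learning entries fuse multiple CNNs: Gulshan et al. take the linear average of the predictions of a ten-network ensemble, while Qummar et al. and Heisler et al. use majority voting (or stacking). Both fusion rules reduce to a few lines of NumPy; the sketch below assumes each model already produced class probabilities or hard labels (the array shapes and values are illustrative).

```python
import numpy as np

def ensemble_average(prob_list):
    """Fuse per-model class-probability arrays by a linear average."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def majority_vote(label_list):
    """Fuse per-model hard labels by a per-image majority vote."""
    labels = np.stack(label_list, axis=0)  # shape: (n_models, n_images)
    fused = []
    for per_image in labels.T:
        values, counts = np.unique(per_image, return_counts=True)
        fused.append(values[np.argmax(counts)])
    return np.array(fused)

# Two models' softmax outputs for one image (classes: [no DR, DR]):
avg = ensemble_average([np.array([[0.2, 0.8]]), np.array([[0.6, 0.4]])])
# Three models' hard labels for two images:
vote = majority_vote([np.array([1, 0]), np.array([1, 1]), np.array([0, 1])])
```

Averaging preserves a probability distribution per image, while voting only needs each model's final label; stacking would instead train a small meta-classifier on the models' outputs.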
Figure 7Summary of traditional ML methods for AMD detection, diagnosis, and/or staging.
Traditional ML methods for early detection, diagnosis, and grading of AMD.
| Study | Goal | Features | Classifier | Database Size | Performance |
|---|---|---|---|---|---|
| Liu et al. [ | Identify normal and three retinal diseases using OCT images: AMD, macular hole, and macular edema | Spatial and shape features | SVM | Train: 326 scans from 136 subjects (193 eyes) | |
| Srinivasan et al. [ | Identify normal and two retinal diseases using SD-OCT: dry AMD and diabetic macular edema (DME) | Multiscale histograms of oriented gradient descriptors | SVM | 45 subjects: 15 normal, 15 with dry AMD, and 15 with DME | |
| Fraccaro et al. [ | To diagnose AMD using OCT images | Patient age, gender, and clinical binary attributes | White boxes (e.g., logistic regression & decision tree) and black boxes (e.g., SVM & random forest) | 487 patients (912 eyes): 50 bootstrap tests | |
| García-Floriano et al. [ | To differentiate normal from AMD with drusen using color fundus images | Invariant moments extracted from contrast-enhanced, morphologically processed images | SVM | 70 images: 37 healthy and 33 AMD with drusen |
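García-Floriano et al. classify fundus images with invariant moments; the first Hu invariant, φ1 = η20 + η02, illustrates the idea. The sketch below computes φ1 with NumPy and checks translation invariance on a toy binary mask (the mask and the shift amounts are illustrative assumptions).

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment invariant, phi1 = eta20 + eta02, computed from
    normalized central moments; invariant to translation of the pattern."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    # eta_pq = mu_pq / m00 ** ((p + q) / 2 + 1); for p + q = 2 the exponent is 2
    return (mu20 + mu02) / m00 ** 2

# Toy binary "drusen" mask and a translated copy (no wrap-around).
mask = np.zeros((32, 32))
mask[5:12, 6:15] = 1.0
shifted = np.roll(np.roll(mask, 8, axis=0), 9, axis=1)
```

Because central moments are taken about the pattern's centroid and normalized by the zeroth moment, φ1 is unchanged when the drusen-like blob moves within the image, which is why such moments suit lesion shape description.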
Figure 8Summary of deep learning methods for AMD detection, diagnosis, and/or staging.
Deep learning methods for early detection, diagnosis, and grading of AMD.
| Study | Goal | CNN | Other Features | Database Size | Performance |
|---|---|---|---|---|---|
| Lee et al. [ | To differentiate between normal and AMD cases using OCT | Modified VGG19 | A modified VGG19 DCNN with changing the last fully connected layer with a two-nodes layer | 80,839 images for training and 20,163 images for test | |
| Ting et al. [ | Identify three retinal diseases: DR, glaucoma, AMD using color fundus images | Adapted VGGNet model | An ensemble of two networks is used for the classification of each eye disease | Validation dataset of 71,896 images from 14,880 patients | |
| Burlina et al. [ | Identify no or early AMD from intermediate or advanced AMD using fundus images | AlexNet | Solving two-class problem | 130,000 images from 4613 patients | |
| Treder et al. [ | Detect exudative AMD from normal subjects using SD-OCT | Inception-v3 | Transfer learning | 1012 SD-OCT scans | |
| Tan et al. [ | Early detection of AMD using fundus images | 14-layer CNN model | Data augmentation | 402 normal eyes, 583 eyes with early/intermediate AMD or GA, and 125 wet AMD eyes | |
| Hassan et al. [ | Diagnosis of three retinal diseases (i.e., macular edema, central serous chorioretinopathy, and AMD) using OCT | SegNet followed by an AlexNet | Segmenting nine retinal layers | 41,921 retinal OCT scans for testing and 4992 for training | |
| An et al. [ | Two classifiers: AMD vs. normal and AMD with fluid vs. AMD without fluid | Two VGG16 models | A model to distinguish AMD from normal followed by a model to distinguish AMD with from AMD without fluid | 1234 training data and 391 test data | |
| Motozawa et al. [ | Two classifiers: AMD vs. normal and AMD with exudative changes vs. AMD without exudative changes using SD-OCT images | Two 18-layer CNN | A model to distinguish AMD from normal followed by a model to distinguish AMD with from AMD without exudative changes | 1621 images | |
| Hwang et al. [ | Distinguish between normal, Dry (drusen), active wet, and inactive wet AMD | ResNet50, Inception-v3, and VGG16 | A cloud computing website [ | 35,900 images | |
| Li et al. [ | Distinguish between normal, AMD, and diabetic macular edema using OCT images | VGG-16 | Transfer learning | 207,130 images |
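An et al. and Motozawa et al. both use a two-stage cascade: one model separates normal from AMD, and a second model, trained only on AMD cases, separates those with fluid (or exudative changes) from those without. A minimal sketch of the cascade logic, with logistic regression standing in for the CNN stages and synthetic 2-D features whose class geometry is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 150  # samples per class

# Synthetic 2-D "image features"; labels: 0 = normal, 1 = AMD without
# fluid, 2 = AMD with fluid. Real systems derive features with CNNs.
means = {0: (-3.0, 0.0), 1: (3.0, -2.0), 2: (3.0, 2.0)}
X = np.vstack([rng.normal(means[c], 0.7, size=(n, 2)) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], n)

# Stage 1: normal vs. any AMD.
stage1 = LogisticRegression().fit(X, (y > 0).astype(int))
# Stage 2: fluid vs. no fluid, trained only on AMD cases.
amd = y > 0
stage2 = LogisticRegression().fit(X[amd], (y[amd] == 2).astype(int))

def cascade_predict(X_query):
    """Route samples through stage 1; only predicted-AMD samples reach stage 2."""
    is_amd = stage1.predict(X_query).astype(bool)
    out = np.zeros(len(X_query), dtype=int)
    if is_amd.any():
        out[is_amd] = np.where(stage2.predict(X_query[is_amd]) == 1, 2, 1)
    return out

pred = cascade_predict(X)
```

Training the second stage only on AMD cases lets it specialize in the harder fluid/no-fluid distinction, which is the rationale behind these two-model designs.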