Shradha Dubey, Manish Dixit.
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body cannot use its insulin properly. Diabetic retinopathy (DR) is one of its signs and the most prevalent complication of diabetes; if left unaddressed, it can affect all diabetics, become very serious, and raise the chances of blindness. It affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that if diabetic individuals are diagnosed early enough, the condition can be averted in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels of the retina, and this blood-vessel damage is usually visible on fundus images. Therefore, this study reviews several traditional as well as deep learning-based approaches for the detection and classification of diabetic retinopathy and describes the advantages of one approach over another. Along with the approaches, the datasets and the evaluation metrics useful for DR detection and classification are also discussed. The main aim of this study is to make researchers aware of the different challenges that occur while detecting diabetic retinopathy with computer vision and deep learning techniques. This review therefore sums up the major aspects of DR detection: lesion identification, classification and segmentation, security attacks on deep learning models, and the proper categorization of datasets and evaluation metrics. As deep learning models are quite expensive and prone to security attacks, it is advisable to develop refined, reliable, and robust models that overcome these issues commonly found when designing deep learning models.
Keywords: Diabetic retinopathy; Exudate; Hemorrhages; Microaneurysms; Optic disc/ cup; Retinal blood vessel
Year: 2022 PMID: 36185322 PMCID: PMC9510498 DOI: 10.1007/s11042-022-13841-9
Source DB: PubMed Journal: Multimed Tools Appl ISSN: 1380-7501 Impact factor: 2.577
Fig. 1 a Healthy Retina Vision b DR Vision (http://www.vision-and-eye-health.com/diabetic-retinopathy.html)
Fig. 2 a Non-Proliferative DR b Proliferative DR c Normal Retina (https://www.retinamd.com/diseases-and-treatments/retinal-conditions-and-diseases/diabetic-retinopathy/)
Fig. 3 Diabetic Retinopathy (DR) Stages [185]
Diabetic Retinopathy Stages
| DR level | Retinal Findings |
|---|---|
| Mild NPDR | MAs only |
| Moderate NPDR | MAs plus one or more hemorrhages, or any of the following: cotton wool spots, retinal hemorrhages, hard exudates, venous beading |
| Moderately Severe NPDR | One or more of the following: mild intraretinal microvascular abnormalities in 4 quadrants; severe retinal hemorrhages in 2–3 quadrants; venous beading in one or more quadrants |
| Severe NPDR | No signs of PDR and any of the following (4-2-1 rule): severe intraretinal hemorrhages and microaneurysms in each of the 4 quadrants; distinct venous beading in 2 or more quadrants; considerable intraretinal microvascular abnormalities in 1 or more quadrants |
| PDR | One or both of the following: neovascularization; vitreous/preretinal hemorrhage |
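As a minimal illustration of how the staging criteria above can be encoded, the sketch below maps simplified lesion indicators to a severity label. It is a toy rule engine, not a clinical grader: the quadrant counts are scalar simplifications, and the "Moderately Severe" grade is folded into "Moderate" for brevity.

```python
def grade_dr(ma_only, hemorrhage_quadrants, venous_beading_quadrants,
             irma_quadrants, neovascularization, vitreous_hemorrhage):
    """Toy DR grading following the staging table above.

    Inputs are simplified booleans/quadrant counts; real graders work on
    clinician-confirmed lesion maps, not scalar summaries.
    """
    if neovascularization or vitreous_hemorrhage:
        return "PDR"
    # 4-2-1 rule: severe hemorrhages in all 4 quadrants, venous beading in
    # >= 2 quadrants, or IRMA in >= 1 quadrant indicates Severe NPDR.
    if (hemorrhage_quadrants >= 4 or venous_beading_quadrants >= 2
            or irma_quadrants >= 1):
        return "Severe NPDR"
    if hemorrhage_quadrants > 0 or venous_beading_quadrants > 0:
        return "Moderate NPDR"
    if ma_only:
        return "Mild NPDR"
    return "No DR"
```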
Fig. 4 Taxonomy of the Review Paper
DR Dataset Specification
| Ref. | Dataset | No. of Images | Resolution | Format | Training Sets | Test Sets | Task |
|---|---|---|---|---|---|---|---|
| Hoover, et al., 2000 | STARE | 402 images | 605 × 700 | .ppm | – | – | Vessel extraction, Optic nerve |
| Staal, et al., 2004 | DRIVE | 40 images 33 normal 7 abnormal | 584 × 565 | .jpeg | 20 | 20 | Vessel extraction |
| Budai, et al., 2013 | HRF | 45 images 15- healthy eyes 15- DR eyes 15- glaucomatous eyes | 3304 × 2336 | .jpg | 22 | 23 | Retinal vessel segmentation |
| Carmona, et al., 2008 | DRIONS-DB | 110 images | 600 × 400 | .png | – | – | OD |
| Fraz, et al., 2012 | CHASE_DB1 | 28 images | 1280 × 960 | .jpeg .png | – | – | Vessel segmentation |
| Al Diri et al., 2008 | REVIEW | 16 images | 3584 × 2438 1360 × 1024 2160 × 1440 3300 × 2600 | – | – | – | Vessel extraction |
| Kaggle | Kaggle APTOS | 88,702 images | Different image resolutions | .jpeg | 35,126 | 53,576 | -No DR -Mild -Moderate -Severe -PDR |
| Kauppi, et al., 2006 | DIARETDB0 | 130 images 20- normal 110- abnormal | Different image resolutions | .txt | – | – | -MAs -SE -HE -HMs -Neovascularization |
| Kälviäinen, et al., 2007 | DIARETDB1 | 89 images No DR- 27 Mild- 7 Moderate & Severe- 28 PDR- 27 | 1500 × 1152 pixels | Image masks GT: .png | 28 | 61 | -MAs -SE -HE -HMs |
| Decencière, Etienne, et al., 2014 | MESSIDOR | 1200 images 800- with pupil dilation 400- without dilation | 1440 × 960, 2240 × 1488 or 2304 × 1536 pixels | .tiff Diagnosis- excel | – | – | -DR grading -Risk of macular edema |
| MESSIDOR, 2015 | MESSIDOR2 | 1748 images | Different image resolutions | 1052 images- .png 690 images- .jpg | – | – | -DR grading -Risk of macular edema |
| Porwal, et al., 2018 | IDRiD | 516 images | 4288 × 2848 | .jpg | 413 | 103 | -DR grading -Macular edema |
| Li Tao, et al., 2019 | DDR | 13,673 images 6266- healthy 6257- DR | 512 × 512 pixels | – | 6835 | 4105 | -MAs -SE -HE -HMs |
| Niemeijer, et al., 2009 | ROC | 100 images | Different image resolutions | .jpeg | 50 | 50 | MAs |
| Decenciere, et al., 2013 | e-optha | e-optha MA- 381 images 233- normal 148- MAs 47- EXs e-optha EX- 82 images | 2544 × 1696 1440 × 960 1504 × 1000 2048 × 1360 | Images- .jpeg GT: .png | – | – | -MAs -EXs |
| Giancardo, et al., 2012 | HEI-MED | 169 images | 2196 × 1958 pixels | .jpeg | – | – | -DR -EXs |
| Zhang, et al., 2010 | ORIGA | 650 images 482- normal 168- glaucomatous | 720 × 576 | – | – | – | -OD -OC -Cup-to-disc ratio (CDR) |
| Sivaswamy, et al., 2014 | DRISHTI-GS | 101 images 31- normal 70- glaucomatous | 2896 × 1944 | .png | 50 | 51 | -OD segmentation -OC segmentation |
| Fumero, et al., 2011 | RIM-ONE | 169 images | 2144 × 1424 | .bmp | – | – | Optic nerve |
| Derwin, et al., 2020 | AGAR300 | 28 images | 2448 × 3264 | .jpeg | – | – | MAs |
| Tariq Kan, et al., 2020 | ONHSD | 99 fundus images | 640 × 480 | – | – | – | Optic nerve head |
| Li Ding, et al., 2020 | PRIME-FP20 | 15 images | 400 × 400 | .tiff .png | – | – | Retinal vessel segmentation |
| Jing Tian, et al., 2016 | University of Miami OCT | 50 OCT scans of 10 different patients | 768 × 496 | – | – | – | Mild, non-proliferative diabetic retinopathy |
Performance Metrics
| References | Performance Measure | Formulae | Description |
|---|---|---|---|
| Hao, et al., 2020 | Error Rate | (FP + FN) / (TP + TN + FP + FN) | The proportion of predictions in which the model assigns the wrong class. |
| Vakili et al., 2020 | Accuracy (ACC) | (TP + TN) / (TP + TN + FP + FN) | Rate of correct classifications. |
| Powers et al., 2011 & Goutte et al., 2005 | Sensitivity/ True Positive Rate/ Recall | TP / (TP + FN) | The proportion of actual positives that are correctly identified. |
| Hao, et al., 2020 | Specificity/ True Negative Rate | TN / (TN + FP) | The proportion of actual negatives that are correctly identified. |
| Powers et al., 2011 & Goutte et al., 2005 | Precision/ Positive Predictive Value | TP / (TP + FP) | The proportion of positive predictions that actually belong to the positive class. |
| Vakili et al., 2020 | Area Under Curve (AUC) | – | A summary of the ROC curve that assesses a classifier's ability to discriminate between classes. |
| Hao, et al., 2020 | False Positive Rate | FP / (FP + TN) | The proportion of actual negatives mistakenly predicted as positive by the model. |
| Powers et al., 2011 | Correlation Coefficient | – | Measures the strength of the association between two variables. |
| Goutte et al., 2005 | F-Score | 2 × Precision × Recall / (Precision + Recall) | The harmonic mean of precision and recall; a criterion for evaluating binary classifiers that label examples as either "positive" or "negative". |
| Furnkranz, et al., 2010 | Mean-Squared Error (MSE) | (1/n) Σ (yᵢ − ŷᵢ)² | The mean of the squared differences between the expected and observed values. |
| Korhonen et al., 2012 | Peak Signal-To-Noise Ratio (PSNR) | 10 log₁₀(MAX² / MSE) | Used to measure the quality of picture and video restoration after lossy compression. |
| Tang, et al., 2015 | Kappa Score | OA = (A + D)/(A + B + C + D); EA = ((A + B)(A + C) + (C + D)(B + D))/(A + B + C + D)²; Kappa = (OA − EA)/(1 − EA) | Measures how well the instances categorized by the ML classifier match the ground-truth labels while adjusting for the agreement expected from a random classifier (A, B, C, D are the confusion-matrix counts). |
| Afroz, et al., 2014 & Thada et al., 2013 | Dice Similarity Coefficient (DSC) | 2TP / (2TP + FP + FN) | A similarity quotient with a value between 0 and 1. |
| Rezatofighi, et al., 2019 | Intersection over Union (IoU) | TP / (TP + FP + FN) | A standard semantic-segmentation metric: the IoU is computed for every semantic class and then averaged over all classes. |
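For reference, the confusion-matrix-based metrics in the table above can all be computed from the four counts TP, FP, FN, and TN. The following sketch collects them in one place for the binary case:

```python
def binary_metrics(tp, fp, fn, tn):
    """Confusion-matrix-based metrics from the table above (binary case)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                      # sensitivity / recall / TPR
    spec = tn / (tn + fp)                      # specificity / TNR
    prec = tp / (tp + fp)                      # precision / PPV
    oa = (tp + tn) / n                         # observed agreement = accuracy
    # expected agreement of a random classifier with the same marginals
    ea = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return {
        "error_rate": (fp + fn) / n,
        "accuracy": oa,
        "sensitivity": sens,
        "specificity": spec,
        "precision": prec,
        "fpr": fp / (fp + tn),
        "f_score": 2 * prec * sens / (prec + sens),
        "kappa": (oa - ea) / (1 - ea),
        "dice": 2 * tp / (2 * tp + fp + fn),   # equals F-score for binary masks
        "iou": tp / (tp + fp + fn),
    }
```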
Fig. 5 DR Screening methods based on retinal features
Fig. 6 Identification of different types of lesions [36, 147]
Different methods for microaneurysms detection
| Literature | Year | Database | Methods | Sensitivity | Specificity | Accuracy | AUC |
|---|---|---|---|---|---|---|---|
| P.R. Wankhede, K. B. Khanchandani | 2020 | DIARETDB1 E-optha MA | Pixel Intensity Rank Transform | DIARETDB1 98.79%, E-optha MA 94.59% | DIARETDB1 83.33% E-optha MA 96.56% | DIARETDB1 97.75% E-optha MA 95.80% | – |
| D. Jeba Derwin et al. | 2020 | ROC, Single real-time, AGAR300 | Local Neighborhood Differential Coherence Pattern (LNDCP) | FROC scores of 0.481 and 0.442 for ROC and AGAR300, respectively | – | – | – |
| Tania Melo et al. | 2020 | ROC e-ophtha SCREEN-DR Messidor | Sliding band filter (SBF) | At the lesion level, e-ophtha- 64% SCREEN-DR- 81% | – | – | Train dataset ROC- 0.716 e-ophtha MA- 0.792 SCREEN-DR- 0.831 |
| Shengchun Long et al. | 2019 | e-ophtha MA and DIARETDB1 | Directional Local Contrast (DLC) | FROC score e-ophtha MA 0.374 DIARETDB1 0.210 | – | – | e-ophtha MA 0.87 and DIARETDB1 0.86 |
| Amrita Roy Chowdhury et al. | 2019 | DIARETDB1, Teleoptha, Messidor | Naïve Bayes classifier, Random Forest classifier, K-means clustering | – | – | Random Forest classifier- 93.58% Naïve Bayes classifier- 83.63% | – |
| Shailesh Kumar, Basant Kumar | 2018 | DIARETDB1 | PCA, CLAHE, Averaging filter, SVM | 96% | 92% | – | – |
| Jose Ignacio Orlando et al. | 2018 | DIARETDB1, e-optha, Messidor | CNN combined with hand-crafted features, Random Forest classifier | 97.2% | – | – | 93.4% |
| Diana Veiga et al. | 2018 | LaTIM, e-ophtha, ROC | SVM | For an average of ten false positives per image: LaTIM- 62%, e-ophtha- 66%, ROC- 32% | – | – | – |
| Baisheng Dai et al. | 2016 | ROC DIARETDB1 | Gradient Vector Analysis and Class Imbalance Classification | ROC- 0.433 at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per image DIARETDB1- 0.321 | – | – | – |
| Ruchir Srivastava et al. | 2015 | DIARETDB1 | Frangi-based filters | – | – | – | 97% (area under ROC) |
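Many of the traditional detectors above start from the same observation: microaneurysms appear as small dark dots on the green channel. The sketch below illustrates only that first candidate-extraction step with a plain morphological top-hat; the window size and threshold are illustrative values, not taken from any cited method, and real pipelines add vessel removal and a classification stage.

```python
import numpy as np

def ma_candidates(green, size=3, thresh=30):
    """Toy MA candidate detector: invert the green channel so MAs become
    small bright blobs, apply a white top-hat (image minus its morphological
    opening), and threshold. Illustrative only."""
    inv = 255 - green.astype(np.int32)
    pad = size // 2

    def sliding(op, img):
        # min/max over a size x size square window (grayscale erosion/dilation)
        stacks = [img[i:i + inv.shape[0], j:j + inv.shape[1]]
                  for i in range(size) for j in range(size)]
        return op(np.stack(stacks), axis=0)

    eroded = sliding(np.min, np.pad(inv, pad, mode="edge"))
    opened = sliding(np.max, np.pad(eroded, pad, mode="edge"))
    return (inv - opened) > thresh  # small dark dots survive; vessels do not
```

A single dark pixel is flagged, while a vessel-like dark stripe wider than the structuring element survives the opening and is suppressed.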
Fig. 7 a Normal Image b Image with MAs [119]
Fig. 8 a Hard Exudate and b Soft Exudate in a DR-affected eye [87]
Different methods for diagnosis of exudate
| Literature | Year | Database | Methods | Performance |
|---|---|---|---|---|
| Hui Wang et al. | 2020 | e-optha, HEI-MED | Multi-feature joint representation, DCNN | e-optha- 0.9644, HEI-MED- 0.9323 e-optha- 0.8929, HEI-MED- 0.9326 |
| Nipon Theera-Umpon et al. | 2019 | DiaRetDB1 | Multilayer perceptron network (MLP), SVM, Hierarchical adaptive neuro-fuzzy inference system, CNN | AUC- 0.998 |
| S. Karkuzhali, D. Manimegalai | 2019 | DIARETDB0, DIARETDB1, MESSIDOR, DRIVE, STARE and Bejan Singh Eye Hospital | Inverse Surface Adaptive Thresholding Algorithm | Sensitivity 97.43%, 98.87%, 99.12%, 97.21%, 98.72%, and 96.63%; Specificity 91.56%, 92.31%, 90.21%, 90.14%, 89.58%, 92.56%; Accuracy 99.34%, 99.67%, 98.34%, 98.87%, 99.13%, 98.34% for the DIARETDB0, DIARETDB1, MESSIDOR, DRIVE, STARE, and Bejan Singh Eye Hospital databases, respectively |
| Anoop Balakrishnan Kadan and Perumal Sankar Subbian | 2019 | DIARETDB1 and DRIVE | Evolutionary Feature Selection, KNN | Accuracy- 99.34% |
| Juan Mo et al. | 2018 | HEI-MED E-Ophtha EX | Cascaded Deep Residual Networks | HEI-MED Sensitivity- 0.9255 PPV- 0.8212 F-Score- 0.8499 E-Ophtha EX Sensitivity- 0.9227 PPV- 0.9100 F-Score- 0.9053 |
| Shuang Yu et al. | 2017 | E-Ophtha EX | DCNN | Accuracy 96.21%, Sensitivity 94.28%, Specificity 98.06%, F-Score 96.05% |
| M. Moazam Fraz | 2017 | DIARETDB1, e-Ophtha EX, HEI-MED and Messidor | Ensemble Classifier of Bootstrapped Decision Trees | Accuracy- 0.8772, 0.8925, 0.9577, and 0.9836 and Area Under ROC- 0.9310, 0.9403, 0.9842, and 0.9961 for DIARETDB1, e-Ophtha EX, HEI-MED and Messidor, respectively |
| Imani and Pourreza | 2016 | DIARETDB1 | Dynamic thresholding, morphological processing, false-positive removal | Sensitivity- 89.01% Specificity- 99.93% |
| Xiwei Zhang et al. | 2014 | e-ophtha EX, Messidor, DiaRetDB1 v2, and HEI-MED | Mathematical Morphology, random forest algorithm | e-ophtha EX- 0.95 DiaRetDB1 v2- 0.95, Messidor- 0.93 HEI-MED- 0.94 |
| Carla Agurto et al. | 2014 | UTHSC SA, Messidor | Partial Least Squares (PLS) Multiscale Optimization | AUC UTHSC SA + Messidor- 0.962 UTHSC SA- 0.970 Messidor- 0.973 |
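The thresholding idea behind several of the exudate detectors above can be illustrated in a few lines: exudates are bright, high-contrast lesions, so pixels far above the image's own intensity statistics become candidates. The factor k below is a hypothetical choice, and the published methods refine such candidates with morphology and classifiers.

```python
import numpy as np

def exudate_candidates(green, k=2.0):
    """Sketch of intensity-based exudate candidate extraction: threshold the
    green channel at mean + k * std of the image itself, so the threshold
    adapts to each image's brightness. Illustrative only."""
    g = green.astype(np.float64)
    t = g.mean() + k * g.std()
    return g > t
```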
Fig. 9 Retinal Hemorrhages Associated with High Altitude [112]
Methods to detect hemorrhages
| Literature | Year | Database | Methods | Performance |
|---|---|---|---|---|
| Jun Wu et al. | 2019 | DIARETDB | Two-dimensional Gaussian fitting, Watershed segmentation | Sensitivity- 100% Specificity- 82% Accuracy- 95.42% Sensitivity- 90.30% Positive Predictive Value- 94.01% |
| N. Shobha Rani et al. | 2019 | IDRiD | Connected-object and Sobel edge detection, Laplacian filtering followed by morphological bridging operation | Accuracy- 92.31% |
| Amrita Roy Chowdhury et al. | 2019 | DIARETDB0 DIARETDB1 MESSIDOR Tele Optha | Random Forest | Accuracy- 93.58% |
| Sonali S. Gaikwad, Ramesh R. Manza | 2017 | DIARETDB0 DIARETDB1 | Template Matching, Morphological Opening, Intensity Transformation | Average Accuracy- 98.7% Sensitivity- 80% |
| Di Xiao et al. | 2017 | DiaRetDB1 Local Database | Rule-Based, Random Forest | Sensitivity DiaRetDB1- 93.3% Local Database- 88% Specificity DiaRetDB1- 91.9% Local Database- 85.6% |
| Nishigandha G. Kurale and M.V. Vaidya | 2017 | Messidor | Splat segmentation, watershed transform, SVM | Accuracy- 88% AUC- 0.89 |
| Mark J. J. P. van Grinsven et al. | 2016 | Kaggle MESSIDOR | CNN, Selective Data Sampling | Kaggle- 0.894 MESSIDOR- 0.972 |
| Priyakshi Bharali et al. | 2015 | HRF, DIARETDB0, DIARETDB1, MESSIDOR and local databases | Region Growing, Morphological Operations, Modified NICK's Local Threshold Algorithm | Sensitivity- 97.3% Specificity- 98.92% Accuracy- 98.22% |
| Liye Guo et al. | 2015 | Real-World Database | Multiclass discriminant analysis | Accuracy- 90.9% |
| Li Tang et al. | 2013 | MESSIDOR | Splat Feature Classification | Splat Level- 0.96 Image Level- 0.87 |
Different techniques for categorization of DR
| Literature | Year | Database | Methods | Performance | Classification |
|---|---|---|---|---|---|
| Borys Tymchenko | 2020 | Kaggle, APTOS | CNN | APTOS dataset: Quadratic Weighted Kappa score- 0.925, sensitivity and specificity- 0.99 | Multiclass (5 classes) |
| Alexandr Pak et al. | 2020 | APTOS-2019 | Comparison of DenseNet and ResNet with EfficientNet | Ordinal regression: DenseNet- 0.690, ResNet 150- 0.708, ResNet 101- 0.734, EfficientNet- 0.790 | Multiclass (5 classes) |
| Parshva Vora and Sudhir Shrestha | 2020 | 88,000 labeled images from Kaggle/EyePACS | CNN with k-fold cross-validation | Caffe: 75.6% accuracy and 98% specificity | Multiclass |
| Nour Eldeen M. Khalifa et al. | 2019 | APTOS-2019 | Deep Transfer Learning Models | Among AlexNet, VGG16, ResNet 18, SqueezeNet, VGG19, GoogLeNet: Highest Accuracy- AlexNet 97.9%; Highest Recall- VGG16 96.02%; Highest Precision- AlexNet 96.23%; Highest F1 Score- AlexNet 95.82% | Multiclass (5 classes) |
| Yung-Hui Li et al. | 2019 | Kaggle | DCNN | Accuracy 5-class- 86.17% Accuracy binary class- 91.05% | Multiclass (5 classes) and binary class |
| Muhammad Mateen et al. | 2018 | Kaggle | VGG-19 Architecture with PCA and SVD | Accuracy 92.21%, 98.34%, 97.96%, and 98.13% for FC7-PCA, FC7-SVD, FC8-PCA, and FC8-SVD, respectively | Multiclass (5 classes) |
| Suvajit Dutta et al. | 2018 | Kaggle | Back Propagation NN, DNN and CNN, Fuzzy C-means | Testing accuracy: BNN- 42%, CNN (VGG16)- 78.3%, DNN- 86.3% | Multiclass |
| Zhiguang Wang et al. | 2017 | Kaggle | DCNN for Discriminative Localization and Visual Explanation | On the validation set, Kappa scores- 0.70 for 256-pixel images, 0.80 for 512-pixel images and 0.81 for 768-pixel images | Multiclass (4 classes) |
| Ramon Pires et al. | 2019 | Training dataset- Kaggle Testing dataset- Messidor | Convolutional Neural Networks (CNN) | Area under the ROC curve- 98.2% | Binary classification |
| M. T. Esfahan et al. | 2018 | Kaggle | CNN based on ResNet34 | Accuracy- 85% Sensitivity- 86% F1 Score- 85% | Binary classification |
| Gabriel Tozatto Zago et al. | 2020 | Standard Diabetic Retinopathy Database, DIARETDB0, DIARETDB1, Kaggle, Messidor, Messidor1, IDRiD, DDR | Fully Patch-based CNN using ResNet34 | AUC, Sensitivity: Messidor- 0.912, 0.940; Kaggle- 0.764, 0.911; IDRiD- 0.818, 0.841; DDR- 0.848, 0.891; DIARETDB0- 0.786, 0.821, respectively | Binary classification |
| Sudipta Dandapat et al. | 2021 | Own dataset | SVM, CNN, k-NN | Best results (SVM): Accuracy- 96.6%, Sensitivity- 0.66, Specificity- 0.95 | Binary classification |
| Quang H Nguyen et al. | 2020 | – | CNN, VGG-16, VGG-19 | Sensitivity- 80%, accuracy- 82%, specificity- 82%, AUC- 0.904 | Multiclass (4 classes) |
| Supriya Mishra et al. | 2020 | Kaggle (APTOS) | DL DenseNet Architecture | Accuracy- 0.9611 Kappa Score- 0.8981 | Multiclass (5 classes) |
| Víctor Vives-Boix and Daniel Ruiz-Fernández | 2021 | Small diabetic retinopathy dataset of 3662 images from Kaggle | Synaptic metaplasticity in CNN | Accuracy- 95.56%, F1-score- 94.24%, Precision- 98.9%, and Recall- 90% | Binary classification |
| Deepthi K Prasad | 2015 | DIARETDB1 | Morphological operations, Haar wavelet transformation, back-propagation neural network and one-rule classifier | BPNN: sensitivity- 93.3%, accuracy- 93.8%, specificity- 95.23%; One-rule: sensitivity- 97.8%, accuracy- 97.75%, specificity- 97.5% | Binary classification |
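Several of the 5-class systems above report the quadratic weighted kappa (e.g. Tymchenko's 0.925), which penalizes a prediction by the squared distance between the predicted and true grade, so confusing "No DR" with "PDR" costs far more than confusing adjacent grades. A small reference implementation:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic weighted kappa for ordinal grading (e.g. 5-class DR)."""
    observed = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        observed[t, p] += 1
    # quadratic penalty: (i - j)^2 normalized to [0, 1]
    weights = np.array([[(i - j) ** 2 for j in range(n_classes)]
                        for i in range(n_classes)], dtype=float)
    weights /= (n_classes - 1) ** 2
    # expected matrix under chance agreement with the same marginals
    expected = np.outer(observed.sum(axis=1),
                        observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```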
Fig. 10 Cropped region of interest (b) from original fundus image (a) [169]
Different methods for detection of optic disc and optic cup
| Literature | Year | Database | Methods | Performance |
|---|---|---|---|---|
| Xuesheng Bian et al. | 2020 | REFUGE Origa650 | Generative Adversarial Learning, Anatomy-Guided Cascade Network | Dice Score IoU- 0.8763 Dice Score IoU- 0.7914 |
| Lei Wang et al. | 2019 | CFI DIARETDB0 DIARETDB1 DRIONS-DB DRIVE MESSIDOR ORIGA | Coarse-to-fine deep learning framework, U-Net model | Average Intersection over Union (IoU)- 89.1% Dice Similarity Coefficient (DSC)- 93.9% |
| Qing Liu et al. | 2019 | ORIGA and DRISHTI | Spatial-Aware Joint Segmentation | OD DRISHTI- 0.98 OC DRISHTI- 0.89 |
| Mohammad A.U. Khan et al. | 2019 | MESSIDOR | Vessel Convergence, Elliptical Symmetry | Accuracy- 93.5% |
| Sangita Bharkad | 2017 | DRIVE, DIARETDB0, DIARETDB1 and DRIONS | Grayscale Morphological Dilation, Equiripple low-pass FIR filter | DRIVE, DRIONS- 100%, DIARETDB0- 96.92%, DIARETDB1- 98.98% |
| Arunava Chakravarty, Jayanthi Sivaswamy | 2017 | INSPIRE DRISHTI-GS1 Dataset-1 RIM-ONE v2 DRIONS MESSIDOR | Depth reconstruction, Conditional Random Field, Coupled sparse dictionary | – |
| Di Niu et al. | 2017 | ORIGA MESSIDOR | Cascading Localization Method, CNN, saliency map | ORIGA- 99.87% MESSIDOR- 99.01% ORIGA + MESSIDOR- 99.44% ORIGA- 99.33% MESSIDOR- 98.75% ORIGA + MESSIDOR- 99.04% |
| Hanan S. Alghamdi et al. | 2016 | DRIVE, DIARETDB1, STARE, MESSIDOR, HAPIEE, KENYA and PAMDI | Cascade Classifiers, CNN | DRIVE- 100%, DIARETDB1- 98.88%, STARE- 86.71%, MESSIDOR- 99.20%, HAPIEE- 98.36%, KENYA- 99.53%, PAMDI- 98.13% |
| Sa'ed Abed | 2016 | DRIVE, DiaRetDB1, DMED and STARE | Background Subtraction-based Optic Disc Detection (BSODD), Swarm Intelligence Techniques | DRIVE, DiaRetDB1- 100%, DMED- 98.82% and STARE- 95% |
| M. Partha Sarathi et al. | 2016 | DRIVE, MESSIDOR | In-painting, region growing, spline interpolation | Average Overlapping Ratio DRIVE- 87%, MESSIDOR- 89% Average OD Segmentation Accuracy- 91% |
| Ngan-Meng Tan et al. | 2015 | DRIONS-DB, RIM-ONE v.3, DRISHTI-GS | U-Net CNN | – |
| Sohini Roychowdhury et al. | 2015 | DRIVE, DIARETDB1, DIARETDB0, CHASE_DB1, MESSIDOR and STARE | Gaussian Mixture Model classifier | OD segmentation success range 98.8–100% OD segmentation overlap score range 72–84% |
| Balazs Harangi, Andras Hajdu | 2015 | DRIVE, DIARETDB1, DIARETDB0, MESSIDOR | Ensemble-based system, Naïve Bayes (NB) | DRIVE- 100%, DIARETDB0- 96.15%, DIARETDB1- 96.63%, MESSIDOR- 97.65% DRIVE- 100%, DIARETDB0- 98.46%, DIARETDB1- 98.88%, MESSIDOR- 98.33% |
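A recurring baseline behind the OD-localization methods above is that the optic disc is usually the brightest compact region of the fundus. The sketch below implements only that naive cue, box-blurring the image via an integral image and returning the argmax; it is illustrative only, as the cited methods add vessel-convergence and shape constraints to handle bright exudates that would fool this rule.

```python
import numpy as np

def locate_od(gray, win=5):
    """Naive optic-disc localization: return the center of the brightest
    win x win window, found with an integral image (O(1) per window)."""
    pad = win // 2
    p = np.pad(gray.astype(np.float64), pad, mode="edge")
    # integral image: ii[a, b] = sum of p[:a, :b]
    ii = np.cumsum(np.cumsum(p, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = gray.shape
    # window sums for every output pixel via the four-corner identity
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    i, j = np.unravel_index(np.argmax(s), s.shape)
    return int(i), int(j)
```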
Fig. 11 a Retinal Image b Blood Vessel Segmentation [161]
Methods for blood vessel segmentation
| Literature | Year | Database | Methods | Sensitivity | Specificity | Accuracy | AUC |
|---|---|---|---|---|---|---|---|
| T. Jemima Jebaseeli et al. | 2019 | STARE, DRIVE, HRF, REVIEW, and DRIONS | Tandem Pulse Coupled Neural Network Model and Deep Learning Based SVM | 80.61% | 99.54% | 99.49% | – |
| Changlu Guo et al. | 2020 | DRIVE, CHASE_DB1 | Spatial Attention U-Net | DRIVE- 0.8212 CHASE_DB1- 0.8573 | – | DRIVE- 0.9698 CHASE_DB1- 0.9755 | DRIVE- 0.9864 CHASE_DB1- 0.9905 |
| Nasser Tamim, et al. | 2020 | DRIVE STARE CHASE_DB1 | Hybrid Features and Multi-Layer Perceptron Neural Networks | DRIVE 0.7542 STARE 0.7806 CHASE_DB1 0.7585 | DRIVE 0.9843 STARE 0.9825 CHASE_DB1 0.9846 | DRIVE 0.9607 STARE 0.9632 CHASE_DB1 0.9577 | – |
| Juntang Zhuang | 2018 | DRIVE, CHASE_DB1 | LadderNet: Multi-Path Networks Based on U-Net | DRIVE- 0.7856 CHASE_DB1- 0.7978 | DRIVE- 0.9810 CHASE_DB1- 0.9818 | DRIVE- 0.9561 CHASE_DB1- 0.9656 | DRIVE- 0.9793 CHASE_DB1- 0.9839 |
| Maison et al. | 2018 | DRIVE | Gaussian Filter | 96.90% | 82.10% | 95.72% | – |
| F. Orujov et al. | 2020 | DRIVE STARE CHASE_DB1 | Fuzzy-Based Image Edge Detection Algorithm | DRIVE 0.838 STARE 0.8342 CHASE_DB1 0.880 | DRIVE 0.957 STARE 0.8806 CHASE_DB1 0.968 | DRIVE 0.939 STARE 0.865 CHASE_DB1 0.950 | – |
| Benzhi Chen et al. | 2020 | 3150 collected normal retinal images | Multi-Scale Sparse Coding Based Learning (MSSCL) Algorithm | – | – | – | Mean AUC- 0.9918 (std 0.0028), Mean MAP- 0.8711 (std 0.0155) |
| Ibrahim Atli, Osman Serdar Gedik | 2021 | STARE, CHASE_DB1 and DRIVE | Sine-Net: A fully convolutional deep learning architecture | DRIVE 0.8260 STARE 0.6776 CHASE_DB1 0.7856 | DRIVE 0.9824 STARE 0.9946 CHASE_DB1 0.7856 | DRIVE 0.9685 STARE 0.9711 CHASE_DB1 0.9676 | DRIVE 0.9852 STARE 0.9807 CHASE_DB1 0.982 |
| Sonali Dasha, Manas Ranjan Senapati | 2020 | DRIVE | Combined approach of DWT, Tyler Coye and gamma correction | [DWT (db1, sym1, coif1) + Tyler Coye] 0.7314 [DWT (db1, sym1, coif1) + gamma (.5, .6, .7, .8, .9, 1) + Tyler Coye] 0.7403 | [DWT (db1, sym1, coif1) + Tyler Coye] 0.9891 [DWT (db1, sym1, coif1) + gamma (.5, .6, .7, .8, .9, 1) + Tyler Coye] 0.9905 | [DWT (db1, sym1, coif1) + Tyler Coye] 0.949 [DWT (db1, sym1, coif1) + gamma (.5, .6, .7, .8, .9, 1) + Tyler Coye] 0.9661 | – |
| Zhexin Jiang et al. | 2018 | DRIVE, STARE, CHASE_DB1 and HRF | Fully Convolutional Network with Transfer Learning | Single Database Set DRIVE 0.7540, STARE 0.8352, CHASE_DB1 0.8640 Cross Database Set DRIVE 0.7121, STARE 0.7820, CHASE_DB1 0.7217 | Single Database Set DRIVE 0.9825, STARE 0.9846, CHASE_DB1 0.9745 Cross Database Set DRIVE 0.9832, STARE 0.9798, CHASE_DB1 0.9770 | Single Database Set DRIVE 0.9624, STARE 0.9734, CHASE_DB1 0.9968 Cross Database Set DRIVE 0.9593, STARE 0.9653, CHASE_DB1 0.9591 | Single Database Set DRIVE 0.9810, STARE 0.9900, CHASE_DB1 0.9810 Cross Database Set DRIVE 0.9680, STARE 0.9870, CHASE_DB1 0.9580 |
Different types of feature selection and fusion methods
| Reference | Method | Technique | Dataset | Performance | Advantage | Disadvantage |
|---|---|---|---|---|---|---|
| K Yazhini et al., 2020 | Fusion-based feature extraction | Gray-level co-occurrence matrix and VGG-19, known as FM-GLCM-VGG19 | Kaggle | Accuracy- 71.30% Sensitivity- 50.43% Specificity- 80.19% | This fusion-based approach takes less time. | The model is validated on only one image file format; a combination of image types should be used to check its robustness. |
| S. Gayathri et al., 2020 | Feature extraction and feature selection | SURF and BRISK feature extraction, MR-MR feature selection and ranking, SVM, MLP, Naïve Bayes | IDRiD, MESSIDOR, DIARETDB0 | Average Accuracy- 98.13% Average Accuracy- 95.13% | Robust and reliable; trained and tested on different types of images; suitable for both binary and multiclass classification. | Novel deep learning techniques should be considered for better results. |
| Zun Shen et al., 2021 | Ensemble Learning | XGBoost, Stacking | Biochemical and physical dataset | Average Accuracy- 83.95% | Prevents feature redundancy in the data. | No appropriate technique for dimensionality reduction; prediction accuracy is not up to the mark. |
| Farrukh Zia et al., 2021 | Feature Selection and Feature Fusion | VGG and Inception V3 | Kaggle | Accuracy- 96.4% | Comparatively takes less time; different classification methods are applied for dataset testing. | No fitness function is used during feature selection. |
| Muhammad Mateen et al., 2020 | Convolutional Neural Network | – | e-Ophtha DIARETDB1 | Accuracy- 98.43% Accuracy- 98.91% | Pretrained models with transfer learning improve accuracy; a combination of images is used for better training. | Detection of other lesions such as hemorrhages and microaneurysms is missing from this study. |
| Lakshmana Kumar Ramasamy et al., 2021 | Feature Fusion | Ridgelet Transform, Sequential Minimal Optimization | DIARETDB1 KAGGLE | Accuracy- 97.05% Sensitivity- 98.87% Specificity- 95.24% | A large number of images is taken into account; the use of optimization improves results. | Computational time complexity is high; deep features are missing. |
| Zhuang Ai et al., 2021 | Deep Ensemble Learning | DR-IIXRN | Kaggle | AUC- 95% Accuracy- 92% Recall- 92% | Can provide specialists with more accurate disease grading to help select the most appropriate treatment options. | Broader testing of the model's disease grading is still missing. |
| Jyostna Devi Bodapati et al., 2020 | Blended Learning | Deep Neural Network (DNN), ConvNet | Kaggle | Accuracy- 97.41% Kappa score- 94.82% Kappa score- 71.1% | Faster, as it uses blended features. | Heterogeneous image types are not considered. |
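The fusion strategies in the table above share a common mechanical step: features from different extractors are normalized independently and concatenated before the classifier, so neither modality dominates by scale. A minimal sketch of that step (the L2 normalization and the example vectors are illustrative choices, not any cited paper's exact scheme):

```python
import numpy as np

def fuse_features(handcrafted, deep):
    """Fuse a handcrafted descriptor (e.g. GLCM statistics) with a deep CNN
    feature vector: L2-normalize each modality separately, then concatenate."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(handcrafted), l2(deep)])
```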
Fig. 12 Example of adversarial attack on an authentic image [186]
Adversarial attack for image classification
| Reference | Attack Type | Approach | Description | Pros | Cons |
|---|---|---|---|---|---|
| Hongchen Cao et al., 2021 | Black Box | DL | Presents a black-box strategy for fooling the deep learning networks inside mobile apps by training substitute models. The technique is evaluated by mounting black-box adversarial attacks on ten real-world deep-learning apps from Google Play. | • Considers a large dataset. • Real-world apps are taken into consideration. • Improves the success rate by up to 38.97%. | • Not suitable for obfuscated apps. • The experiment covers only computer-vision-based apps, so it is not widely applicable. |
| Sensen Guo, et al., 2021 | Black Box | ML | Proposes a machine-learning-based abnormal flow detector. A substitute model is trained with a comparable decision boundary, an algorithm creates adversarial instances, and the work then examines whether these examples can evade the target model's detection. | • Heterogeneous datasets are used. • The method has a high chance of bypassing the target model's detection. | • Experiments on real networks are missing. |
| Nicolas Papernot et al., 2017 | Black Box | ML | Synthetic data generation is used to craft misclassified examples. The work is a substantial step toward relaxing earlier attackers' rigid assumptions about adversarial capabilities. | • Uses real-world models, such as those hosted by two widely used services, Google and Amazon. | • Does not consider other combinations of adversarial examples. |
| Hongying Liu et al., 2020 | White Box | DL | Introduces ADV-ReLU, a universal adversarial-example generation framework that can be incorporated into gradient-based white-box algorithms. | • Robust framework. | • Other datasets should also be used to test the framework. |
| Yixiang Wang et al. | White Box | DL | Presents DIAA, an interpretable white-box adversarial-example attack that uses deep Taylor decomposition to identify the most significant input features. | • Successfully attacks both non-robust and robust systems with minimal perturbation. • Tested on different datasets. | • Not suitable for a small number of images. |
| Linfeng Ye | White Box | DL | A one-step, first-order method for attacking neural networks. The TVM framework is used to speed up forward and backward propagation. | • High success rate compared with second-order or multiclass first-order methods. • Faster than other proposed attacks. | • Not very robust. • Does not work well when multi-step attacks are taken into account. |
| Gege Qi et al., 2021 | Stabilized Medical Image | DL | Proposes an image-based medical adversarial attack that uses loss-deviation and loss-stabilization functions. | • Investigations on a variety of medical computer-vision benchmarks show that the suggested technique is stable. • Recent research (COVID-19) is taken into account. | • Focuses only on medical imaging datasets. |
| Jing Lin et al., 2022 | Secure ML | ML | A distributed adversarial retraining approach. The proposed framework uses soft labels and exploits transferability, which reduces its time complexity. | • Robust against adversarial attacks. | • Not suitable for black-box attacks. |
| Sheeba Lal et al. | DR detection | ML, DL | A defensive model against noise, using a fusion technique on retinal DR images. | • Different features are taken into consideration to improve the model's robustness. | • Limited to a particular application. |
| Jiawang Bai et al., 2021 | Targeted Attack | DL | To achieve stealthiness, the aim is to misclassify a certain instance into a target class without modifying it and without significantly reducing predictive performance on other samples. Because the parameters are stored as bits (i.e., 0 and 1), the problem is formulated as binary integer programming. | • Numerous tests show the strategy is superior when it comes to attacking DNNs. • The technique is resistant to different parameter settings. | • Not superior to other methods in terms of time complexity. |
| Pradeep Rathore et al., 2020 | Targeted, Untargeted, and Universal | DL | These attacks are studied on 54 multiclass UCR time-series datasets. | • Puts real-world deployments that rely on deep learning models in jeopardy. | • Experiments on multiple adversarial attacks are missing. |
| Hyun Kwon et al., 2018 | Untargeted Attack | DL | Uses random untargeted adversarial examples. | • Applicable to steganography. • Overcomes the problem of pattern vulnerability. | • Experiments on medical imaging datasets are missing. |
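Most white-box entries above build on gradient-sign perturbations. As a self-contained illustration, the sketch below applies the classic fast gradient sign method (FGSM) idea to a toy logistic-regression "model"; the weights and epsilon are hypothetical, and attacks on CNNs backpropagate through the network instead of using this closed-form gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM on a logistic-regression classifier (toy example).

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack moves x by eps in the
    sign of that gradient, increasing the loss with a bounded
    per-feature perturbation."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)
```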
Traditional and DL methods comparison based on their retinal features
| Approach | Method | Feature | Database | Performance (%) |
|---|---|---|---|---|
| Traditional Approach | Morphological operations, local coarse segmentation | Retinal Vessel | DRIVE | Accuracy = 87 |
| Deep Learning | Spatial Attention U-Net | | | Accuracy = 96.98 |
| Traditional Approach | Ensemble Classifier of Bootstrapped Decision Trees | Exudate | DIARETDB1 | Accuracy = 87.72 |
| Deep Learning | CNN, ResNet | | | Accuracy = 98 |
| Traditional Approach | Directional Local Contrast (DLC) | Microaneurysms | DIARETDB1 | AUC = 86 |
| Deep Learning | Stacked Sparse Autoencoder (SSAE) | | | AUC = 96.2 |
| Traditional Approach | Random Forest | Retinal Hemorrhages | DIARETDB0 DIARETDB1 MESSIDOR, HRF, Teleoptha | Accuracy = 93.58 |
| Deep Learning | 3D CNN | | | Accuracy = 97.71 |
| Traditional Approach | In-painting, region growing, spline interpolation | Optic Disc | MESSIDOR | Accuracy = 89 |
| Deep Learning | Cascade Classifiers, CNN | | | Accuracy = 99.20 |
Fig. 13 Blood Vessel Segmentation performance of traditional and DL-based methods using the DRIVE and CHASE datasets
Fig. 14 Performance of lesion-based techniques in traditional and DL-based methods using the DIARETDB1 dataset