Hassan Tariq1, Muhammad Rashid2, Asfa Javed1, Eeman Zafar1, Saud S Alotaibi3, Muhammad Yousuf Irfan Zia4.
Abstract
Diabetic retinopathy (DR) is an eye disease that affects people suffering from diabetes, damaging the retina and ultimately causing vision loss. It is treatable; however, diagnosis takes a long time and may require many eye examinations, so early detection of DR may prevent or delay vision loss. A robust, automatic, computer-based diagnosis of DR is therefore essential. Deep neural networks are currently being utilized in numerous medical areas to diagnose various diseases; consequently, deep transfer learning is utilized in this article. We employ five convolutional-neural-network-based architectures (AlexNet, GoogleNet, Inception V4, Inception ResNet V2 and ResNeXt-50). A collection of DR images is created, and the created collections are labeled with an appropriate treatment approach. This automates the diagnosis and assists patients through subsequent therapies. Furthermore, to identify the severity of DR from retina images, we train deep convolutional neural networks (CNNs) on our own dataset. Experimental results reveal that, of all the pre-trained models, Se-ResNeXt-50 obtains the best classification accuracy of 97.53% on our dataset. Moreover, we perform five different experiments on each CNN architecture; as a result, a minimum accuracy of 84.01% is achieved for the five-class (severity-grade) classification.
Keywords: automatic detection; convolutional neural network; deep learning; deep transfer learning; diabetic retinopathy
Year: 2021 PMID: 35009747 PMCID: PMC8749542 DOI: 10.3390/s22010205
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
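The transfer-learning setup described in the abstract — a pre-trained CNN backbone whose weights are kept fixed while a new final classification layer is trained for the five DR severity grades — can be sketched as follows. This is a minimal PyTorch illustration, not the authors' code: the toy backbone below stands in for AlexNet/GoogleNet/Inception/ResNeXt, whose ImageNet pre-trained weights would be loaded in practice.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained backbone (in practice, a model such as
# ResNeXt-50 with ImageNet weights would be loaded here instead).
backbone = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Freeze the "pre-trained" weights; only the new head stays learnable,
# mirroring the pre-trained vs. learnable split shown in Figure 4.
for p in backbone.parameters():
    p.requires_grad = False

# New classification head for the five DR severity grades.
head = nn.Linear(16, 5)
model = nn.Sequential(backbone, head)

logits = model(torch.randn(2, 3, 224, 224))  # output shape: (batch, 5)
```

Only the head's parameters (16 × 5 weights + 5 biases) receive gradient updates; the frozen backbone acts as a fixed feature extractor.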
Different stages of diabetic retinopathy over time [9].
| Stage | Normal | Non-Proliferative | Non-Proliferative | Non-Proliferative | Proliferative |
|---|---|---|---|---|---|
| Years | 0 | 3–5 | 5–10 | 10–15 | >15 |
| Type of DR | N/A | Mild | Moderate | Severe | High-risk |
| Condition of retina | Healthy | A few tiny bulges in the blood vessels | Small lumps in the veins, with noticeable spots of blood leakage that deposit cholesterol | Larger areas of blood leakage; irregular venous beading; formation of new blood vessels at the optic disc; vein occlusion | Heavy bleeding and the formation of new blood vessels elsewhere in the retina; complete blindness |
Figure 1. Proposed framework for the detection of DR.
Figure 2. Schematics of the CNN model for the detection of different DR stages.
Figure 3. The adopted DTL process.
Figure 4. The DTL with pre-trained and learnable weights.
Figure 5. The pre-trained architecture of AlexNet.
Figure 6. The pre-trained architecture of GoogleNet.
Figure 7. The pre-trained architecture of Inception V4.
Figure 8. The pre-trained architecture of Inception ResNet V2.
Figure 9. The pre-trained architecture of ResNeXt-50.
Parameter settings for the CNN architectures.
| Parameters | AlexNet | GoogleNet | Inception V4 | Inception ResNet V2 | ResNeXt-50 |
|---|---|---|---|---|---|
| Optimizer | ADAM | ADAM | ADAM | ADAM | ADAM |
| Base learning rate | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| Learning decay rate | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| Momentum | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| RMSprop | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| Dropout rate | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 |
| # of epochs | 30 | 30 | 30 | 30 | 30 |
| Train batch size | 32 | 32 | 32 | 32 | 32 |
| Test batch size | 8 | 8 | 8 | 8 | 8 |
| Total number of parameters | 60 M | 4 M | 43 M | 56 M | 27.56 M |
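In this table, the "Momentum" (0.9) and "RMSprop" (0.999) rows correspond to ADAM's first- and second-moment decay factors, commonly written β1 and β2. A pure-Python sketch of one standard ADAM update step under these settings (an illustration of the textbook update rule, not the authors' training code; `eps` is the usual numerical-stability constant, assumed here):

```python
def adam_step(w, grad, m, v, t, lr=1e-5, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update for a scalar weight w at step t (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment (RMSprop) estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)   # parameter update
    return w, m, v
```

On the very first step the bias correction makes `m_hat` equal the raw gradient, so the update is approximately `lr * sign(grad)`, i.e., about 1e-5 with the base learning rate above.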
Results and performance obtained using pre-trained CNN architectures.
| Classifier | Folds | TP | TN | FP | FN | Accuracy (%) | Specificity (%) | Precision (%) | Recall (%) | F-score (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| AlexNet | F1 | 37 | 210 | 35 | 12 | 84.01 | 85.71 | 51.38 | 75.51 | 61.15 |
| | F2 | 38 | 210 | 37 | 12 | 83.50 | 85.02 | 50.66 | 76.00 | 60.80 |
| | F3 | 38 | 214 | 27 | 8 | 87.80 | 88.79 | 58.46 | 82.60 | 68.46 |
| | F4 | 37 | 216 | 27 | 8 | 87.84 | 88.88 | 57.81 | 82.22 | 67.89 |
| | F5 | 37 | 216 | 27 | 8 | 87.84 | 88.88 | 57.81 | 82.22 | 67.89 |
| GoogleNet | F1 | 38 | 219 | 22 | 7 | 89.86 | 90.87 | 63.33 | 84.44 | 72.38 |
| | F2 | 40 | 222 | 19 | 7 | 90.97 | 92.11 | 67.79 | 85.10 | 75.47 |
| | F3 | 38 | 221 | 18 | 8 | 90.87 | 92.46 | 67.85 | 82.61 | 74.51 |
| | F4 | 37 | 220 | 18 | 8 | 90.81 | 92.43 | 67.27 | 82.22 | 74.00 |
| | F5 | 38 | 220 | 18 | 7 | 91.16 | 92.43 | 67.85 | 84.44 | 75.24 |
| Inception V4 | F1 | 39 | 224 | 21 | 7 | 90.37 | 91.42 | 65.00 | 84.78 | 73.58 |
| | F2 | 39 | 224 | 17 | 8 | 91.32 | 92.94 | 69.64 | 82.97 | 75.72 |
| | F3 | 39 | 225 | 16 | 8 | 91.66 | 93.36 | 70.90 | 82.97 | 76.47 |
| | F4 | 39 | 226 | 18 | 8 | 91.06 | 92.62 | 68.42 | 82.98 | 75.00 |
| | F5 | 39 | 222 | 20 | 8 | 90.31 | 91.73 | 66.10 | 82.98 | 73.58 |
| Inception ResNet V2 | F1 | 40 | 220 | 18 | 6 | 91.55 | 92.44 | 68.96 | 86.96 | 76.92 |
| | F2 | 40 | 221 | 14 | 6 | 92.88 | 94.04 | 74.07 | 86.96 | 80.00 |
| | F3 | 40 | 227 | 14 | 7 | 92.71 | 94.19 | 74.07 | 85.11 | 79.21 |
| | F4 | 41 | 226 | 13 | 5 | 93.68 | 94.56 | 75.92 | 89.13 | 82.00 |
| | F5 | 39 | 223 | 18 | 6 | 91.61 | 92.53 | 68.42 | 86.67 | 76.47 |
| ResNeXt-50 | F1 | 41 | 233 | 8 | 5 | 95.47 | 96.68 | 83.67 | 89.13 | 86.31 |
| | F2 | 41 | 234 | 7 | 5 | 95.82 | 97.09 | 85.41 | 89.13 | 87.23 |
| | F3 | 42 | 234 | 6 | 4 | 96.50 | 97.50 | 87.50 | 91.30 | 89.36 |
| | F4 | 42 | 236 | 5 | 3 | 97.20 | 97.92 | 89.36 | 93.33 | 91.30 |
| | F5 | 41 | 236 | 5 | 2 | 97.53 | 97.92 | 89.13 | 95.35 | 92.13 |
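The metric columns above follow the standard confusion-matrix definitions. A small pure-Python sketch (not the authors' code) that reproduces the best fold in the table (TP = 41, TN = 236, FP = 5, FN = 2) to within rounding of the last decimal:

```python
def metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics, returned as percentages."""
    accuracy = 100 * (tp + tn) / (tp + tn + fp + fn)
    specificity = 100 * tn / (tn + fp)   # true-negative rate
    precision = 100 * tp / (tp + fp)
    recall = 100 * tp / (tp + fn)        # sensitivity / true-positive rate
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, specificity, precision, recall, f_score

# Best fold of the last classifier group (TP=41, TN=236, FP=5, FN=2);
# values match the final table row up to rounding.
acc, spec, prec, rec, f1 = metrics(41, 236, 5, 2)
```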
Comparison with state-of-the-art classifiers.
| Classifiers | AlexNet Acc (%) | AlexNet Pre (%) | AlexNet Rec (%) | Inception V4 Acc (%) | Inception V4 Pre (%) | Inception V4 Rec (%) | ResNet/ResNeXt-50 Acc (%) | ResNet/ResNeXt-50 Pre (%) | ResNet/ResNeXt-50 Rec (%) |
|---|---|---|---|---|---|---|---|---|---|
| Our Work | 87.84 | 57.81 | 82.22 | 90.31 | 66.10 | 82.98 | 97.53 | 89.13 | 95.35 |
| S. Kumar et al. [ | 60.10 | – | – | – | – | – | 55.70 | – | – |
| Z. Gao et al. [ | – | – | – | 88.72 | 95.77 | 94.84 | 87.61 | 95.76 | 95.52 |
Acc: Accuracy, Pre: Precision, Rec: Recall.