| Literature DB >> 35463234 |
Malliga Subramanian1, M Sandeep Kumar2, V E Sathishkumar3, Jayagopal Prabhu2, Alagar Karthick4, S Sankar Ganesh5, Mahseena Akter Meem6.
Abstract
Retinal abnormalities have emerged as a serious public health concern in recent years; they can develop gradually and without warning, affect any part of the retina, and cause vision impairment and, in extreme cases, blindness. This necessitates automated approaches that detect retinal diseases more precisely and, preferably, earlier. In this paper, we apply transfer learning with pretrained convolutional neural networks (CNNs) to detect retinal diseases from Optical Coherence Tomography (OCT) images. Pretrained CNN models, namely VGG16, DenseNet201, InceptionV3, and Xception, are used to classify seven retinal diseases from a dataset of images with and without retinal diseases. In addition, Bayesian optimization is applied to choose optimal hyperparameter values, and image augmentation is used to improve the generalization of the developed models. The paper also compares and analyzes the proposed models. DenseNet201 achieves more than 99% accuracy on the Retinal OCT Image dataset, comparing favorably with other approaches, which detect only a small number of retinal diseases.
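The transfer-learning setup the abstract describes (freeze a pretrained backbone, train a small classification head, and later fine-tune) can be sketched in miniature. The following dependency-free Python sketch uses a fixed random projection as a stand-in for the frozen CNN backbone and trains only a logistic-regression head; the data, dimensions, and training loop are toy assumptions for illustration, not the paper's actual pipeline.

```python
import math
import random

random.seed(0)

# Stand-in for a frozen pretrained backbone: a fixed random projection whose
# weights are created once and never updated while the head is trained.
DIM_IN, DIM_FEAT = 8, 4
backbone = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_FEAT)]

def extract_features(x):
    """Frozen feature extractor: ReLU of a fixed linear map."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in backbone]

# Toy binary "dataset": the label is decided by one of the frozen features,
# so it is learnable by a linear head on top of the extractor.
xs = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(200)]
ys = [1.0 if extract_features(x)[0] > 0.5 else 0.0 for x in xs]

# Trainable head: logistic regression on the frozen features.
head_w, head_b = [0.0] * DIM_FEAT, 0.0

def predict_prob(x):
    z = sum(w * f for w, f in zip(head_w, extract_features(x))) + head_b
    z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# Plain SGD on the logistic loss; only the head's parameters are updated.
lr = 0.1
for _ in range(300):
    for x, y in zip(xs, ys):
        g = predict_prob(x) - y  # gradient of log loss w.r.t. the logit
        feats = extract_features(x)
        for j in range(DIM_FEAT):
            head_w[j] -= lr * g * feats[j]
        head_b -= lr * g

accuracy = sum((predict_prob(x) > 0.5) == (y == 1.0) for x, y in zip(xs, ys)) / len(xs)
print(f"training accuracy of the head: {accuracy:.2f}")
```

In the paper, the same two-stage idea is applied at full scale: the convolutional base of VGG16, DenseNet201, InceptionV3, or Xception is first kept frozen (the "feature extractor" scenario) and later partially unfrozen and retrained (the "fine-tuner" scenario).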
Year: 2022 PMID: 35463234 PMCID: PMC9033334 DOI: 10.1155/2022/8014979
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1Sample OCT images. (a) AMD. (b) CNV. (c) CSR. (d) DME. (e) DR. (f) Drusen. (g) MH. (h) Normal.
Figure 2VGG16 architecture.
Figure 3A 5-layer dense block [43].
Figure 4Workflow for the proposed classifiers.
Experimental platform.
| Item name | Specifications |
|---|---|
| GPU | DELL EMC 740 |
| RAM | 128 GB |
| GPU RAM | 32 GB |
| DISK | 4 TB |
| OS | Ubuntu |
| Language | Python |
| IDE | Jupyter notebook environment |
Hyperparameters and their search space.
| Parameter | Search space | Description |
|---|---|---|
| Optimizer | Adam, RMSProp, SGD, AdaDelta | Updates the network weights by comparing predictions against the loss function |
| Learning rate | 0.001, 0.0001, 0.00001 | Step size at each iteration while minimizing the loss function |
| Activation function | ReLU, ELU, Tanh, Leaky ReLU | Introduces nonlinearity into the output of neurons |
| Number of neurons in customized layers | 64, 128, 256, 512, 1024 | Width of the added fully connected layers |
| Batch size | 32, 64, 128 | Number of training examples used in one iteration |
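The paper selects hyperparameters with Bayesian optimization over the search space above. As a simplified, dependency-free stand-in, the sketch below runs a plain random search over the same space; `score_config` is a toy scoring function standing in for one training-plus-validation run, and the learning-rate candidates are assumed from the tuned values reported below.

```python
import random

random.seed(42)

# Search space from the table above (learning-rate values are an assumption
# inferred from the tuned values reported later in the record).
search_space = {
    "optimizer": ["Adam", "RMSProp", "SGD", "AdaDelta"],
    "learning_rate": [0.001, 0.0001, 0.00001],
    "activation": ["relu", "elu", "tanh", "leaky_relu"],
    "neurons": [64, 128, 256, 512, 1024],
    "batch_size": [32, 64, 128],
}

def score_config(cfg):
    """Toy stand-in for validation accuracy after one training run."""
    score = 0.80
    if cfg["optimizer"] == "Adam":
        score += 0.05
    if cfg["learning_rate"] == 0.0001:
        score += 0.05
    if cfg["batch_size"] == 32:
        score += 0.02
    return score

# Random search: sample configurations and keep the best-scoring one.
best_cfg, best_score = None, -1.0
for _ in range(50):
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    s = score_config(cfg)
    if s > best_score:
        best_cfg, best_score = cfg, s

print(best_cfg, round(best_score, 2))
```

Bayesian optimization improves on this by fitting a surrogate model to past (configuration, score) pairs and proposing the next configuration where expected improvement is highest, so far fewer training runs are needed than with random search.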
Modification of the classification block.
| Pretrained models | Number of layers added |
|---|---|
| VGG16 | 2 fully connected + 1 softmax |
| DenseNet201 | 2 fully connected + 1 softmax |
| InceptionV3 | 1 fully connected + 1 softmax |
| Xception | 1 softmax |
Hyperparameters with tuned values.
| Hyperparameter | VGG16: feature extractor | VGG16: fine-tuner | DenseNet201: feature extractor | DenseNet201: fine-tuner | InceptionV3: feature extractor | InceptionV3: fine-tuner | Xception: feature extractor | Xception: fine-tuner |
|---|---|---|---|---|---|---|---|---|
| Optimizer | Adam | Adam | Adam | Adam | RMSProp | RMSProp | Adam | Adam |
| Learning rate | 0.0001 | 0.0001 | 0.00001 | 0.00001 | 0.00001 | 0.0001 | 0.001 | 0.0001 |
| Activation | tanh | tanh | elu | relu | tanh | relu | elu | relu |
| No. of neurons | 256 | 512 | 128 | 128 | 64 | 512 | 128 | 256 |
| Batch size | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 32 |
Performance of VGG16.
| Class labels | Precision: feature extractor (%) | Precision: fine-tuner (%) | Recall: feature extractor (%) | Recall: fine-tuner (%) | F1-score: feature extractor (%) | F1-score: fine-tuner (%) |
|---|---|---|---|---|---|---|
| AMD | 99.41 | 100 | 96 | 99.71 | 97.67 | 99.86 |
| CNV | 79.74 | 87.24 | 69.71 | 95.71 | 74.39 | 91.28 |
| CSR | 83.05 | 97.77 | 96.57 | 100 | 89.3 | 98.87 |
| DME | 73.83 | 93.49 | 62.86 | 90.29 | 67.90 | 91.86 |
| DR | 81.44 | 99.14 | 84 | 99.14 | 82.7 | 99.14 |
| Drusen | 67.19 | 95.72 | 60.86 | 83.14 | 63.89 | 88.99 |
| MH | 95.41 | 100 | 77.14 | 98.29 | 85.31 | 99.14 |
| Normal | 62.65 | 89.81 | 87.71 | 95.71 | 73.1 | 96.67 |
| Macro average | 80.34 | 95.4 | 79.36 | 95.25 | 79.28 | 95.23 |
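The macro averages in these tables are unweighted means of the per-class scores (a weighted average would instead weight each class by its number of test samples). Recomputing the fine-tuner precision macro average from the eight per-class VGG16 precisions reproduces the reported 95.4%:

```python
# Fine-tuner per-class precisions from the VGG16 table above.
fine_tuner_precision = {
    "AMD": 100.0, "CNV": 87.24, "CSR": 97.77, "DME": 93.49,
    "DR": 99.14, "Drusen": 95.72, "MH": 100.0, "Normal": 89.81,
}

# Macro average: simple mean, every class counted equally.
macro_precision = sum(fine_tuner_precision.values()) / len(fine_tuner_precision)
print(round(macro_precision, 2))  # 95.4, matching the table
```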
Performance of DenseNet201.
| Class labels | Precision: feature extractor (%) | Precision: fine-tuner (%) | Recall: feature extractor (%) | Recall: fine-tuner (%) | F1-score: feature extractor (%) | F1-score: fine-tuner (%) |
|---|---|---|---|---|---|---|
| AMD | 97.34 | 100.00 | 94.82 | 100.00 | 95.12 | 99.12 |
| CNV | 91.08 | 99.82 | 92.45 | 98.12 | 92.91 | 89.66 |
| DME | 94.26 | 99.55 | 89.12 | 99.34 | 90.72 | 99.61 |
| CSR | 97.18 | 99.11 | 93.29 | 97.56 | 94.61 | 98.55 |
| DR | 89.99 | 99.79 | 96.51 | 98.69 | 94.95 | 97.99 |
| Drusen | 91.73 | 99.51 | 93.21 | 98.88 | 92.91 | 97.99 |
| MH | 91.00 | 100.00 | 96.57 | 99.11 | 93.98 | 98.44 |
| Normal | 92.09 | 99.91 | 94.12 | 98.99 | 94.01 | 98.03 |
| Macro average | 93.08 | 99.71 | 93.76 | 98.84 | 93.65 | 97.42 |
| Weighted average | 93.08 | 99.71 | 93.76 | 98.84 | 93.65 | 97.42 |
Performance of InceptionV3.
| Class labels | Precision: feature extractor (%) | Precision: fine-tuner (%) | Recall: feature extractor (%) | Recall: fine-tuner (%) | F1-score: feature extractor (%) | F1-score: fine-tuner (%) |
|---|---|---|---|---|---|---|
| AMD | 90.12 | 95.62 | 91.81 | 92.31 | 90.60 | 93.27 |
| CNV | 88.15 | 96.71 | 90.81 | 95.91 | 87.96 | 90.06 |
| DME | 90.31 | 93.98 | 87.48 | 93.27 | 87.72 | 91.72 |
| CSR | 88.18 | 95.91 | 91.02 | 91.78 | 90.34 | 89.45 |
| DR | 87.99 | 93.11 | 90.18 | 93.91 | 90.06 | 91.62 |
| Drusen | 89.41 | 96.81 | 89.38 | 89.95 | 89.43 | 93.02 |
| MH | 90.01 | 96.21 | 91.64 | 96.81 | 89.23 | 91.62 |
| Normal | 88.62 | 95.47 | 89.74 | 95.99 | 90.82 | 93.49 |
| Macro average | 89.10 | 95.48 | 90.26 | 93.74 | 89.52 | 91.78 |
Performance of Xception.
| Class labels | Precision: feature extractor (%) | Precision: fine-tuner (%) | Recall: feature extractor (%) | Recall: fine-tuner (%) | F1-score: feature extractor (%) | F1-score: fine-tuner (%) |
|---|---|---|---|---|---|---|
| AMD | 92.17 | 98.23 | 89.23 | 98.56 | 90.12 | 96.99 |
| CNV | 90.56 | 98.18 | 91.10 | 97.34 | 90.06 | 95.61 |
| DME | 89.91 | 96.99 | 88.97 | 98.13 | 89.97 | 97.03 |
| CSR | 88.65 | 97.26 | 90.45 | 96.99 | 91.22 | 98.41 |
| DR | 92.11 | 98.61 | 89.34 | 95.09 | 90.23 | 96.21 |
| Drusen | 91.99 | 97.49 | 92.31 | 95.01 | 91.24 | 95.24 |
| MH | 92.05 | 98.12 | 91.81 | 96.38 | 90.03 | 95.02 |
| Normal | 90.51 | 94.78 | 89.45 | 95.73 | 91.45 | 96.12 |
| Macro average | 90.99 | 97.46 | 90.33 | 96.65 | 90.54 | 96.33 |
Validation and testing accuracy of the proposed models.
| Experiment scenario | VGG16: valid (%) | VGG16: test (%) | DenseNet201: valid (%) | DenseNet201: test (%) | InceptionV3: valid (%) | InceptionV3: test (%) | Xception: valid (%) | Xception: test (%) |
|---|---|---|---|---|---|---|---|---|
| Feature extractor | 80.64 | 79.36 | 94.57 | 93.81 | 91.63 | 89.73 | 92.11 | 90.99 |
| Fine-tuner | 95.21 | 95.25 | 99.23 | 99.71 | 96.92 | 96.78 | 98.12 | 97.92 |
Figure 5Confusion matrix. (a) VGG16 (feature extractor). (b) VGG16 (fine tuner).
Comparison of proposed models with other deep learning models.
| Models | Retinal diseases | Classification accuracy (%) |
|---|---|---|
| OctNET [ | DME, CNV, and Drusen | 99.7 |
| Layer guided CNN [ | DME, CNV, and Drusen | 89.9 |
| GAN [ | DME, CNV, MH and Drusen | 93.9 |
| Deep CNN [ | DMD and DME | 95.7 |
| CenterNet [ | DR | 98.1 |
| AlexNet, ResNet-18, GoogleNet [ | CSR | 99.6 |
| Capsule network [ | DME, Drusen, and CNV | 99.6 |
| CNN [ | DMD, DME, and CNV | 97.0 |
| Deep CNN [ | CSR | 93.8 |
| VGG16 (as a feature extractor) | AMD, CNV, CSR, DME, DR, Drusen, MH | 79.36 |
| VGG16 (as a fine tuner) | | 95.25 |
| DenseNet201 (as a feature extractor) | | 93.81 |
| DenseNet201 (as a fine tuner) | | 99.71 |
| InceptionV3 (as a feature extractor) | | 89.73 |
| InceptionV3 (as a fine tuner) | | 96.78 |
| Xception (as a feature extractor) | | 90.99 |
| Xception (as a fine tuner) | | 97.92 |
Trainable parameters in proposed models.
| Model | Parameters retrained: feature extractor (M) | Parameters retrained: fine-tuner (M) |
|---|---|---|
| VGG16 | 4.7 | 5.5 |
| Xception | 8.3 | 14.4 |
| InceptionV3 | 1.1 | 2.3 |
| DenseNet201 | 2.31 | 3.9 |
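In the feature-extractor scenario, the retrained parameters come mostly from the added classification block: a fully connected layer with `n_in` inputs and `n_out` outputs retrains `(n_in + 1) * n_out` parameters (weights plus one bias per output unit). The layer sizes below are hypothetical, chosen for illustration; the exact head dimensions are not listed in the tables above.

```python
def dense_params(n_in, n_out):
    # Weights (n_in * n_out) plus one bias per output unit.
    return (n_in + 1) * n_out

# Hypothetical head: a 512-unit fully connected layer on 1024 flattened
# features, followed by an 8-way softmax (seven diseases plus normal).
fc = dense_params(1024, 512)
softmax = dense_params(512, 8)
print(fc, softmax, fc + softmax)  # 524800 4104 528904
```

In the fine-tuner scenario the counts grow because some of the backbone's top convolutional layers are unfrozen and retrained along with the head.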
Figure 6Error analysis. (a) An OCT image with CNV disease. (b) An OCT normal image.