| Literature DB >> 35204552 |
Mohamed Elsharkawy1, Ahmed Sharafeldeen1, Ahmed Soliman1, Fahmi Khalifa1, Mohammed Ghazal2, Eman El-Daydamony3, Ahmed Atwan3, Harpal Singh Sandhu1, Ayman El-Baz1.
Abstract
Early diagnosis of diabetic retinopathy (DR) is critical to preventing severe retinal damage and/or vision loss. In this study, an optical coherence tomography (OCT)-based computer-aided diagnosis (CAD) method is proposed to detect DR early using structural 3D retinal scans. The system uses prior shape knowledge to automatically segment all retinal layers of the 3D-OCT scans with an adaptive, appearance-based method. After the segmentation step, novel texture features are extracted from the segmented layers of the OCT B-scan volume for DR diagnosis. For every layer, a Markov-Gibbs random field (MGRF) model is used to extract the 2nd-order reflectivity. The extracted image-derived features are then represented using cumulative distribution function (CDF) descriptors. For layer-wise classification in the 3D volume, an artificial neural network (ANN) is fed the extracted Gibbs energy feature for every layer. Finally, the classification outputs of all twelve layers are fused using a majority voting scheme for a global subject diagnosis. A cohort of 188 3D-OCT subjects is used for system evaluation with different k-fold validation techniques and different validation metrics. Accuracies of 90.56%, 93.11%, and 96.88% are achieved using 4-, 5-, and 10-fold cross-validation, respectively. An additional comparison with state-of-the-art deep learning networks documented the promise of our system's ability to diagnose DR early.
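The layer-wise classification and majority-vote fusion described in the abstract can be sketched as follows; the label encoding (1 = DR, 0 = normal) and the tie-breaking rule are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter

def fuse_layer_predictions(layer_preds):
    """Fuse per-layer binary labels (0 = normal, 1 = DR) by majority vote.

    `layer_preds` holds one label per retinal layer (twelve layers in the
    paper). Ties are broken toward the DR class here, a conservative
    assumption not specified in the abstract.
    """
    votes = Counter(layer_preds)
    return 1 if votes[1] >= votes[0] else 0

# Hypothetical per-layer ANN outputs for one subject (12 layers):
preds = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(fuse_layer_predictions(preds))  # -> 1 (majority of layers flag DR)
```

In the paper's pipeline each of the twelve labels would come from a per-layer ANN trained on that layer's Gibbs-energy features; the fusion step itself is just this vote.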
Keywords: 3-D optical coherence tomography (3-D OCT); Markov–Gibbs random field model (MGRF); computer-aided diagnosis (CAD); diabetic retinopathy (DR); majority voting; neural network (NN)
Year: 2022 PMID: 35204552 PMCID: PMC8871295 DOI: 10.3390/diagnostics12020461
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1 The framework of the proposed CAD system for DR diagnosis using 3-D OCT images.
Figure 2 A 3D-OCT visualization of the layer segmentation (left). On the right, a description of the twelve layers, starting from layer one (the nerve fiber layer (NFL)) and ending with layer 12 (the retinal pigment epithelium (RPE)).
Figure 3 An illustration of the segmentation process, showing how the 3D-OCT B-scans are segmented, beginning at the macula mid-slice (i) and moving in two directions (A–C).
Figure 4 A graphic representation of the 26 neighborhood voxels (left panel) for the higher-order 3D-MGRF model, and examples of different-order cliques of a center voxel (blue) and its neighbors in the same plane (middle panel) and adjacent planes (right panel).
Figure 5 A color-coded illustration of the higher-order reflectivity feature (Gibbs energy) extracted from a segmented layer (ONL) for a healthy case (left panel) versus a DR subject (right panel).
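The CDF descriptors mentioned in the abstract summarize a layer's Gibbs-energy values as a fixed-length vector. A minimal sketch, assuming evenly spaced sample thresholds between the minimum and maximum energy (the actual sampling scheme is not specified in this record):

```python
def cdf_descriptor(values, n_points=10):
    """Empirical CDF of a feature map, sampled at evenly spaced thresholds.

    Returns the fraction of values at or below each threshold; the number
    and placement of sample points are assumptions for illustration.
    """
    vals = sorted(values)
    lo, hi = vals[0], vals[-1]
    n = len(vals)
    desc = []
    for i in range(1, n_points + 1):
        t = lo + (hi - lo) * i / n_points
        desc.append(sum(1 for v in vals if v <= t) / n)
    return desc

# Toy Gibbs-energy values from one segmented layer:
print(cdf_descriptor([1, 2, 3, 4], n_points=4))  # -> [0.25, 0.5, 0.75, 1.0]
```

A fixed-length descriptor like this lets layers of different sizes feed the same ANN input dimension.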
The evaluation of the proposed CAD system for DR diagnosis. Note that Acc.: Accuracy, Sens.: Sensitivity, Spec.: Specificity.
| Layer | Four Fold | | | Five Fold | | | Ten Fold | | | Test Set | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. |
| Layer 1 (NFL) | 91.25% | 90.82% | 92.95% | 93.89% | 94.24% | 93.94% | 95.69% | 94.55% | 96.69% | 91% | 89.45% | 91.30% |
| Layer 2 | 79.88% | 75.93% | 88.47% | 86.45% | 86.23% | 85.37% | 89.69% | 97.48% | 81.93% | 80.35% | 88.23% | 68.18% |
| Layer 3 | 80.81% | 74.96% | 87.98% | 82.90% | 81.51% | 82.15% | 88.78% | 90.42% | 87.70% | 69.64% | 70.58% | 68.18% |
| Layer 4 | 87.96% | 85.70% | 90.19% | 90.75% | 89.73% | 90.98% | 89.60% | 87.15% | 87.98% | 85.71% | 85.29% | 86.36% |
| Layer 5 | 84.60% | 84.19% | 83.69% | 92.36% | 94.89% | 87.96% | 87.87% | 91.83% | 87.94% | 80.35% | 76.47% | 86.36% |
| Layer 6 | 84% | 82.99% | 82.64% | 74.80% | 71.97% | 80.19% | 91.93% | 90.84% | 90.80% | 82.14% | 91.17% | 68.18% |
| Layer 7 | 80.70% | 78.36% | 80.63% | 84.87% | 80.96% | 87.60% | 80.75% | 81.69% | 81.96% | 76.78% | 73.52% | 81.81% |
| Layer 8 | 73.21% | 74.97% | 69.87% | 74.60% | 76.40% | 71.60% | 76.47% | 78.41% | 70.94% | 66.07% | 70.58% | 59.09% |
| Layer 9 | 77.59% | 77.32% | 78.56% | 77.60% | 78.90% | 76.85% | 77.11% | 76.56% | 78.13% | 71.42% | 76.47% | 63.63% |
| Layer 10 | 75.12% | 81.32% | 69.74% | 77.98% | 78.87% | 73.61% | 76.80% | 73.72% | 84.68% | 58.92% | 94.11% | 63.60% |
| Layer 11 | 75.56% | 73.89% | 73.77% | 72.87% | 80.41% | 67.35% | 86.24% | 82.95% | 90.55% | 67.85% | 76.47% | 54.54% |
| Layer 12 (RPE) | 83.98% | 80.69% | 88.63% | 87.12% | 87.54% | 86.90% | 90.21% | 86.65% | 92.99% | 75% | 85.29% | 59.09% |
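The accuracy, sensitivity, and specificity figures reported in the tables follow the standard confusion-matrix definitions for a binary DR-vs-normal task. A minimal sketch (function name and label encoding are illustrative, not from the paper):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR), and specificity (TNR) for binary labels,
    with 1 = DR and 0 = normal."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return acc, sens, spec

# Toy example: two DR and two normal subjects, one error of each kind.
print(diagnostic_metrics([1, 1, 0, 0], [1, 0, 0, 1]))  # -> (0.5, 0.5, 0.5)
```

Under k-fold cross-validation these metrics would be computed on each held-out fold and averaged, which is how the per-fold columns above are typically produced.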
Comparison between the proposed system and other ML-based classification. Note that Acc.: Accuracy, Sens.: Sensitivity, Spec.: Specificity.
| Classifiers | Four Fold | | | Five Fold | | | Ten Fold | | |
|---|---|---|---|---|---|---|---|---|---|
| | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. |
| | 77.66% | 91% | 62.50% | 76.59% | 92% | 59.09% | 77.45% | 91.25% | 64.12% |
| | 75.53% | 88% | 61.40% | 76.06% | 89% | 61.30% | 76.06% | 88% | 62.50% |
| | 72.87% | 86% | 57.95% | 78.19% | 88% | 67.05% | 81.91% | 91% | 71.59% |
| | 78.72% | 74.59% | 86.36% | 71.27% | 92% | 47.72% | 72.96% | 92.68% | 52.48% |
| | 70.74% | 92% | 46.60% | 77.12% | 90% | 62.50% | 78.16% | 91.97% | 62.10% |
| | 78.19% | 88% | 67% | 77.65% | 89% | 64.77% | 79.54% | 88.36% | 66.17% |
| | 72.87% | 86% | 57.95% | 78.19% | 88% | 67.05% | 81.91% | 91% | 71.59% |
Comparison between the proposed system and other state-of-the-art deep learning approaches. Note that Acc.: Accuracy, Sens.: Sensitivity, Spec.: Specificity.
| Classifiers | Four Fold | | | Five Fold | | | Ten Fold | | |
|---|---|---|---|---|---|---|---|---|---|
| | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. | Acc. | Sens. | Spec. |
| | 89.12% | 97.75% | 85.10% | 90.91% | 95.63% | 85.14% | 94.90% | 92.8% | 90.21% |
| | 88.87% | 95.74% | 88.37% | 89.65% | 93% | 85.90% | 92.33% | 93.54% | 91.90% |
| | 89% | 96.34% | 89.30% | 90.53% | 94.67% | 84.30% | 95.60% | 96.33% | 94.99% |
Figure 6 Illustrative ROC curves for the classification of the 12 layers by the proposed system in comparison with other machine learning approaches using ten-fold cross-validation.