Roberto Romero-Oraá, Jorge Jiménez-García, María García, María I. López-Gálvez, Javier Oraá-Pérez, Roberto Hornero.
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in the working-age population of developed countries. Digital color fundus images can be analyzed to detect lesions for large-scale screening, so automated systems can be helpful in the diagnosis of this disease. The aim of this study was to develop a method to automatically detect red lesions (RLs), including hemorrhages and microaneurysms, in retinal images. These signs are the earliest indicators of DR. Firstly, we performed a novel preprocessing stage to normalize the inter-image and intra-image appearance and enhance the retinal structures. Secondly, the Entropy Rate Superpixel method was used to segment the potential RL candidates. Then, we reduced the superpixel candidates by combining inaccurately fragmented regions within structures. Finally, we classified the superpixels using a multilayer perceptron neural network. The database used contained 564 fundus images and was randomly divided into a training set and a test set. Results on the test set were measured using two different criteria. With a pixel-based criterion, we obtained a sensitivity of 81.43% and a positive predictive value of 86.59%. Using an image-based criterion, we reached 84.04% sensitivity, 85.00% specificity and 84.45% accuracy. The algorithm was also evaluated on the DiaretDB1 database. The proposed method could help specialists detect RLs in diabetic patients.
Keywords: diabetic retinopathy; entropy rate superpixel segmentation; multilayer perceptron; red lesion; retinal imaging
Year: 2019 PMID: 33267131 PMCID: PMC7514906 DOI: 10.3390/e21040417
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
Figure 1. Block diagram of the proposed approach.
Figure 2. Preprocessing stage: (a) original image; (b) image after bright border artifact removal; (c) effect of background extension; (d) result after illumination and color equalization; (e) final preprocessed image with contrast enhancement; (f) zoom of the final preprocessed image.
Figure 3. Preprocessing of fundus images with different illumination, contrast and color. (a) Example 1; (b) example 2.
Figure 4. Candidate segmentation. (a) Original image with some RLs; (b) dark pixel detection computed using the multiscale algorithm; (c) superpixels segmented using the Entropy Rate Superpixel method; (d) reduced candidates; (e) combined, reduced candidates; (f) zoom of the combined, reduced candidates.
Figure 5. Entropy Rate Superpixel segmentation for nine different parameter settings, shown in panels (a)–(i).
Extracted features. The last column indicates the selected features.
| Feature Number | Description | Selected |
|---|---|---|
| 1 | Area of the region. | 1 |
| 2 | Width of the bounding box (smallest rectangle containing the region). | - |
| 3 | Height of the bounding box. | - |
| 4 | Area of the smallest convex hull (smallest convex polygon that can contain the region). | - |
| 5 | Eccentricity of the ellipse that has the same second-moments as the region. | 5 |
| 6 | Number of holes in the region. | 6 |
| 7 | Ratio of pixels in the region to pixels in the total bounding box. | 7 |
| 8 | Length of the major axis of the ellipse with the same normalized second central moments as the region. | 8 |
| 9 | Length of the minor axis of the ellipse with the same normalized second central moments as the region. | - |
| 10 | Distance around the boundary of the region (perimeter length). | - |
| 11 | Proportion of the pixels in the convex hull that are also in the region (solidity). | 11 |
| 12–14 | Mean of the pixels inside the region computed in the RGB channels of the image | 12,13 |
| 15–17 | Median of the pixels inside the region computed in the RGB channels of the image | - |
| 18–20 | Standard deviation of the pixels inside the region computed in the RGB channels of the image | 19 |
| 21–23 | Entropy of the pixels inside the region computed in the RGB channels of the image | 21 |
| 24–26 | Mean of the pixels inside the region computed in the RGB channels of the image | - |
| 27–29 | Median of the pixels inside the region computed in the RGB channels of the image | 27,28 |
| 30–32 | Standard deviation of the pixels inside the region computed in the RGB channels of the image | - |
| 33–35 | Entropy of the pixels inside the region computed in the RGB channels of the image | - |
| 36 | Mean of the pixels on the border of the region after applying the Prewitt operator to the image | 36 |
| 37 | Mean of the pixels inside the region calculated in the result of applying multiscale line operator filters [ | 37 |
| 38 | Distance to the center of the optic disc, calculated using [ | 38 |
| 39 | Distance to the center of the fovea, calculated using [ | 39 |
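Most of the shape features in the table (features 1–11) are standard region descriptors. A sketch of computing them with scikit-image's `regionprops` (the property names below are skimage's, not the paper's):

```python
import numpy as np
from skimage.measure import label, regionprops


def shape_features(mask):
    """Compute the table's shape descriptors for each region in a binary mask."""
    rows = []
    for p in regionprops(label(mask)):
        rows.append({
            "area": p.area,                        # feature 1
            "bbox_width": p.bbox[3] - p.bbox[1],   # feature 2
            "bbox_height": p.bbox[2] - p.bbox[0],  # feature 3
            "convex_area": p.convex_area,          # feature 4
            "eccentricity": p.eccentricity,        # feature 5
            "holes": 1 - p.euler_number,           # feature 6 (Euler number = 1 - holes)
            "extent": p.extent,                    # feature 7: area / bounding-box area
            "major_axis": p.major_axis_length,     # feature 8
            "minor_axis": p.minor_axis_length,     # feature 9
            "perimeter": p.perimeter,              # feature 10
            "solidity": p.solidity,                # feature 11: area / convex area
        })
    return rows
```

The remaining features (12–39) are intensity statistics per color channel, edge responses, and distances to the optic disc and fovea, which require the color image and the landmark detectors cited in the table.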
Values of the parameters of the proposed method.

| Parameter | Value | Section |
|---|---|---|
| Denoising filter size | 3 pixels | Preprocessing: denoising |
| — | — | Dark pixel detection |
| — | — | Dark pixel detection |
| — | 2000 | Entropy Rate Superpixel segmentation |
| — | 0.08 | Entropy Rate Superpixel segmentation |
| — | 2 | Entropy Rate Superpixel segmentation |
| Threshold | 0.3 | Candidate reduction |
| Color distance | 0.24 | Candidate reduction |
| — | 30 | MLP configuration |
| — | 0.6 | MLP configuration |
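The parameter symbols for the MLP rows were lost from the table. Assuming, as is common, that "30" denotes the hidden-layer size and "0.6" a training hyperparameter such as the momentum (both are assumptions, not confirmed by the source), a comparable scikit-learn configuration could be:

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical mapping of the table's MLP entries onto scikit-learn:
mlp = MLPClassifier(
    hidden_layer_sizes=(30,),  # single hidden layer of 30 neurons (assumed)
    solver="sgd",              # gradient-descent training (assumed)
    momentum=0.6,              # assumed meaning of the table's 0.6 entry
    max_iter=500,
    random_state=0,
)
```

In use, `mlp.fit(features, labels)` is trained on labeled superpixel feature vectors and `mlp.predict` marks each candidate superpixel as lesion or background.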
Figure 6. Accuracy curves obtained from MLP training.
Results on the test set.

| Database | Pixel-Based Criterion | | Image-Based Criterion | | |
|---|---|---|---|---|---|
| | SE | PPV | SE | SP | ACC |
| Private | 81.43% | 86.59% | 84.04% | 85.00% | 84.45% |
| DiaretDB1 | 88.10% | 93.10% | 84.00% | 88.89% | 86.89% |
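The sensitivity (SE), specificity (SP), positive predictive value (PPV) and accuracy (ACC) values reported above follow the standard confusion-matrix definitions:

```python
def metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    return se, sp, ppv, acc
```

Under the pixel-based criterion the counts are tallied over lesion pixels, whereas under the image-based criterion each whole image counts once as pathological or healthy.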
Figure 7. RLs definitively detected after the classification stage. (a) Example; (b) zoom of the previous example.
Performance comparison of methods for the detection of RLs in fundus images according to the image-oriented criterion.

| Authors | Method | Database | Nb. Images | SE | SP |
|---|---|---|---|---|---|
| Seoud et al. 2015 | Flooding | Messidor | 1200 | 93.90% | 50.00% |
| Seoud et al. 2015 | Flooding | Erlangen | 45 | 93.30% | 93.03% |
| Seoud et al. 2015 | Flooding | CARA1006 | 1006 | 96.10% | 50.00% |
| García et al. 2010 | NNs | Private | 115 | 100% | 56.00% |
| Zhou et al. 2017 | SLIC Superpixel | DiaretDB1 | 89 | 83.30% | 97.30% |
| Orlando et al. 2018 | Deep Learning | Messidor | 1200 | 91.10% | 50.00% |
| Roychowdhury et al. 2012 | Gaussian Mixture Models | DiaretDB1 | 89 | 75.50% | 93.73% |
| Sánchez et al. 2011 | Gaussian filter bank | Messidor | 1200 | 92.20% | 50.00% |
| Niemeijer et al. 2005 | Pixel classification | Private | 100 | 100% | 87.00% |
| Grisan and Ruggeri 2005 | Bayesian classification | Private | 260 | 71.00% | 99.00% |
| Proposed method | ERS + MLP | Private | 564 | 84.04% | 85.00% |
| Proposed method | ERS + MLP | DiaretDB1 | 89 | 84.00% | 88.89% |
Figure 8. Fundus image example corresponding to a retina with microaneurysms (MAs). (a) Original image; (b) detected RLs over the original image.
Figure 9. Fundus image example corresponding to a healthy retina. (a) Original image; (b) wrongly detected RLs over the original image.