| Literature DB >> 25825666 |
Djibril Kaba1, Chuang Wang1, Yongmin Li1, Ana Salazar-Gonzalez1, Xiaohui Liu1, Ahmed Serag2.
Abstract
The analysis of retinal blood vessels plays an important role in detecting and treating retinal diseases. In this review, we present an automated method to segment blood vessels of fundus retinal image. The proposed method could be used to support a non-intrusive diagnosis in modern ophthalmology for early detection of retinal diseases, treatment evaluation or clinical study. This study combines the bias correction and an adaptive histogram equalisation to enhance the appearance of the blood vessels. Then the blood vessels are extracted using probabilistic modelling that is optimised by the expectation maximisation algorithm. The method is evaluated on fundus retinal images of STARE and DRIVE datasets. The experimental results are compared with some recently published methods of retinal blood vessels segmentation. The experimental results show that our method achieved the best overall performance and it is comparable to the performance of human experts.Entities:
Keywords: Expectation maximisation; Retinal images; Vessel segmentation
Year: 2014 PMID: 25825666 PMCID: PMC4376494 DOI: 10.1186/2047-2501-2-2
Source DB: PubMed Journal: Health Inf Sci Syst ISSN: 2047-2501
Figure 1Bias correction results. (a) STARE image with intensity inhomogeneity. (b) Bias field. (c) Bias corrected image. (d) DRIVE image with intensity inhomogeneity. (e) Bias field. (f) Bias corrected image.
Figure 2Adaptive histogram equalisation results. (a) r=3, h=45. (b) r=6, 45. (c) r=3, h=81. (d) r=6, h=81.
Figure 3Distance map images. (a) STARE image. (b) STARE distance map. (c) DRIVE image. (d) DRIVE distance map.
Figure 4The EM algorithm summary. The steps of EM algorithm.
Figure 5The EM algorithm and length filter results. (a) Fundus retinal image. (b) The EM algorithm output image. (c) the Length filter output image.
The performance comparisons on STARE dataset (Healthy and unhealthy retinal images)
| Method | TPR | FPR | Accuracy |
|---|---|---|---|
| 2 | 0.8949 | 0.0610 | 0.9354 |
| Mendonca [ | 0.6996 | 0.0270 | 0.9440 |
| Staal [ | 0.6970 | 0.0190 | 0.9516 |
| Chaudhuri [ | 0.6134 | 0.0245 | 0.9384 |
| Maritiner-Perez [ | 0.7506 | 0.0431 | 0.9410 |
| Hoover [ | 0.6751 | 0.0433 | 0.9267 |
| Zhang [ | 0.7177 | 0.027 | 0.9484 |
| Our segmentation method |
|
|
|
The performance comparisons on STARE dataset (Healthy vs Unhealthy retinal images)
| Method | TPR | FPR | Accuracy |
|---|---|---|---|
|
| |||
| 2 | 0.8252 | 0.0456 | 0.9425 |
| Mendonca [ | 0.6733 | 0.0331 | 0.9388 |
| Hoover [ | 0.6736 | 0.0528 | 0.9211 |
| Chaudhuri [ | 0.5881 | 0.0384 | 0.9276 |
| Zhang [ | 0.7166 | 0.0327 | 0.9439 |
| Our segmentation method |
|
|
|
|
| |||
| 2 | 0.9646 | 0.0764 | 0.9283 |
| Mendonca [ | 0.7258 | 0.0209 | 0.9492 |
| Hoover [ | 0.6766 | 0.0338 | 0.9324 |
| Chaudhuri [ | 0.7335 | 0.0218 | 0.9486 |
| Zhang [ | 0.7526 | 0.0221 | 0.9510 |
| Our segmentation method |
|
|
|
The performance comparisons on DRIVE dataset
| Method | TPR | FPR | Accuracy |
|---|---|---|---|
| 2 | 0.7761 | 0.0275 | 0.9473 |
| Mendonca [ | 0.7344 | 0.0236 | 0.9452 |
| Staal [ | 0.7194 | 0.0227 | 0.9442 |
| Chaudhuri [ | 0.6168 | 0.0259 | 0.9284 |
| Maritiner-Perez [ | 0.7246 | 0.0345 | 0.9344 |
| Jiang [ | - | - | 0.9112 |
| Perfetti [ | - | - | 0.9261 |
| Zana [ | - | - | 0.9377 |
| Garq [ | - | - | 0.9361 |
| Marin [ | - | - | 0.9452 |
| Al-Rawi [ | - | - | 0.9510 |
| Cinsdikici [ | - | - | 0.9293 |
| Zhang [ | 0.7120 | 0.0276 | 0.9382 |
| Our segmentation method |
|
|
|
Figure 6The sample results of our method. (a) STRARE fundus image. (b) Our method result. (c) Ground truth. (d) DRIVE fundus image. (e) Our method result. (f) Ground truth.