
A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

Khan Bahadar Khan1, Amir A Khaliq1, Muhammad Shahid2.

Abstract

Diabetic Retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally inexpensive, unsupervised, automated technique with promising results for detection of retinal vasculature, using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and for removal of low-frequency noise and geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, and obtain the final segmented image. The proposed technique has been evaluated on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases, along with ground truth data precisely marked by experts.


Year:  2016        PMID: 27441646      PMCID: PMC4956315          DOI: 10.1371/journal.pone.0158996

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

DR is a severe disease and one of the main sources of visual impairment among adults aged 20–74 years in the United States [1]. The most common indications of DR include dilated retinal veins, hemorrhages, hard exudates and cotton wool spots [2]. Variations in the features of the retinal vasculature are associated with serious diseases such as stroke, diabetes and cardiovascular disease [3]. Consequently, an investigation of retinal vessel features can help recognize these changes and permit the patient to take action while the sickness is still in its initial stage. Automated investigation of the retinal blood vessels has become a dynamic research area because of its diagnostic and prognostic value in the field of medical imaging. Segmentation and review of retinal vasculature characteristics, for example tortuosity, normal or abnormal branching, shading and diameter, as well as optic disk morphology, permits eye care experts and ophthalmologists to perform mass vision screening exams for early discovery of retinal ailments and treatment assessment. This could forestall and decrease vision impairments, age-related diseases and numerous cardiovascular ailments, and in addition diminish the expense of screening [4, 5]. In manual assessment, segmentation and estimation accuracy fluctuates depending on the quality of the retinal images and the grader's ability and experience. Moreover, manual segmentation and estimation can take up to an hour for the assessment of just a single eye. A completely automated framework for extracting the vessel structures in retinal images could therefore surely diminish the workload of eye clinicians. This work presents automated segmentation of the vasculature of retinal images, which can be used in the diagnosis of various eye diseases. Different retinal vessel segmentation methodologies have been published and assessed in the literature, but they still need improvement.
Existing systems suffer from at least one of the following drawbacks. Firstly, a lack of adaptive capability under varying image conditions may cause poor segmentation quality, such as under- and over-segmentation. Secondly, the complex preprocessing and postprocessing operations used in different methods for extracting retinal vessels cause high computational cost. Thirdly, human participation is required to choose the area of interest, which shows that the systems are not fully automatic. Finally, the segmentation and assessment procedures themselves require large computational effort.

Related Works

In the literature, many retinal segmentation methods are derived from line detection methods, as vessel segmentation depends on line detection [6]. Generally, vessel segmentation methods consist of two steps: vessel enhancement and vessel classification. Some techniques skip the first step and apply the second directly. In the first step vessels are enhanced, and noise and geometrical objects are removed. Chaudhuri et al. [7] first proposed a matched filter to enhance and segment retinal vessels. Further improvements and similar techniques were proposed later by various authors using a threshold probing technique [8], double-sided thresholding [9] and the first-order derivative of the Gaussian image [10]. Matched-filter applications for segmentation produced high-quality results at the cost of long computation time; the quality of the segmentation results mainly depends on the quality and size of the vessel profile database used. In [11], retinal blood vessels were enhanced using a Gabor filter; this methodology performs well on normal retinal images. Lam and Yan [12] used the Laplacian operator and gradient vector fields to extract vessels. Staal et al. [13] proposed a framework that depends on the extraction of image ridges, which correspond roughly to vessel centerlines. Zana and Klein [14] and Mendonça and Campilho [15] used morphological filters to enhance vessels; their methods showed better results than most existing techniques on the pathological retina. Martínez-Pérez et al. [16, 17] were also based on the Hessian matrix, extracting multiscale features for vessel detection. In [18], a vessel enhancement filter was designed on the basis of the Hessian matrix. After vessel enhancement, the second step is the classification of pixels into vessel and non-vessel pixels; this step is also known as vessel tracking and tracing [19]. Classification based on pixel intensities is used to find a suitable threshold.
Jiang and Mojon [20] performed adaptive local thresholding to extract vessels. In [21], a Support Vector Machine (SVM) is used along with adaptive local thresholding to classify vessel and non-vessel pixels. Martínez-Pérez et al. [17] extract information about vessel topology using the first and second spatial derivatives of the intensity image. The method of Zhou et al. [22] is based on prior knowledge of retinal vessel characteristics coupled with a matched filtering technique to detect the vessel structure. Al-Diri [23] utilized two pairs of contours to detect vessel boundaries and measure vessel width. Fraz et al. [24] used a first-order derivative of Gaussian filter to extract retinal vessel centerlines before applying mathematical morphology to quantify the vessels. Generally, all vessel extraction methods can be classified into supervised segmentation [11–13, 25–31] and unsupervised segmentation [7, 9, 14–16, 23–24, 32–41] with reference to the overall system design and structure. The rest of the paper is arranged as follows. Section II illustrates our suggested segmentation technique in detail: the preprocessing steps, consisting of CLAHE and morphological filters used for vessel enhancement and illumination correction, are discussed; the Hessian matrix and eigenvalue transformation are used in a modified form to compute the second derivative of the image at two different scales, for wide- and thin-vessel enhancement separately; Otsu thresholding is applied to classify vessel and non-vessel pixels; and, finally, pixel-count-based thresholding is applied to eliminate background noise, unwanted segments and erroneously detected vessel pixels. Section III defines the performance evaluation criteria. Section IV discusses the experimental results. Finally, the proposed method is concluded in Section V.

Proposed Technique

Fig 1 shows the block diagram of our proposed segmentation framework. We extract the green channel from the input RGB retinal image for further processing.
Fig 1

Flow chart of the proposed segmentation framework.

The green band of the input image is further analyzed using the following noteworthy steps. CLAHE and morphological filters have been used for vessel contrast enhancement and for removal of low-frequency noise and geometrical objects, respectively. The Hessian matrix and eigenvalue transformation have been applied in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Global and local Otsu thresholding have been utilized in a modified way to classify vessel and non-vessel pixels from the wide- and thin-vessel enhanced images, respectively. Postprocessing steps have been used to eliminate background noise, undesired segments and erroneously detected vessel pixels. We label vessel pixels '1' and non-vessel pixels '0' to obtain the final binary image.

Contrast Limited Adaptive Histogram Equalization (CLAHE)

We have used the green channel of the retinal image for analysis and segmentation of the vessel structure. Fig 2 shows that in the green channel, blood vessels appear more differentiated from the background than in the red or blue channel. Images from the DRIVE and STARE datasets are used for the analysis and experiments of the proposed method.
Fig 2

Retinal RGB image and its channels visual inspection.

(a) RGB input image. (b) Red channel. (c) Green channel. (d) Blue channel.

Generally, histogram equalization techniques achieve contrast improvement by stretching the gray-level values of a low-contrast image. We used the CLAHE operator to obtain a locally contrast-enhanced retinal image. CLAHE uses a user-defined value called the clip limit to constrain enhancement by clipping the histogram [42]. The clipping level specifies the noise level to be smoothed and the contrast level to be enhanced in the histogram. In our case, the clip limit is set between 0 and 0.01.
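As a minimal sketch of this preprocessing step, the following Python code extracts the green channel and applies CLAHE via scikit-image, standing in for the authors' MATLAB implementation. The clip limit of 0.01 follows the text; the tile (kernel) size is left at scikit-image's default of one-eighth of the image dimensions, which is an assumption.

```python
import numpy as np
from skimage import exposure

def enhance_green_channel(rgb, clip_limit=0.01):
    """Extract the green channel of an RGB retinal image and apply CLAHE.

    clip_limit=0.01 follows the paper's stated setting; the tile size
    is scikit-image's default (an assumption, not stated in the paper).
    """
    green = rgb[..., 1].astype(float) / 255.0   # normalize to [0, 1]
    return exposure.equalize_adapthist(green, clip_limit=clip_limit)
```

The returned image is a float array in [0, 1] with locally stretched contrast, ready for the morphological filtering described next.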

Morphological filters

Vessel enhancement based on morphological filters has already been published in the literature [14]. Vessel structures appear with more prominent contrast than the background intensity variations. However, a more local investigation of vessel intensities can reveal noteworthy changes that can adversely influence the whole vessel extraction process. To overcome such changes, we propose a morphological filter known as the modified top-hat transform, applied to the normalized green-channel image. The thickest vessel width is taken as a reference; vessel widths normally vary between 1 and 8 pixels, covering all diameter ranges for both image databases used in our proposed scheme. The vessel diameter scale can be adapted to image resolution variations. We utilize the morphological top-hat transformation to find the difference between the input and the opened image, with a closing applied before the opening. The opening top-hat operator of an image I with structuring element So is given by

T_open(I) = I − (I ∘ So)    (1)

The closing top-hat operator of an image I with structuring element Sc is given by

T_close(I) = (I ⋅ Sc) − I    (2)

The standard top-hat transformation suffers from noise sensitivity: pixel values in an opened image are always less than or equal to the input ones, so the subtracted image retains small intensity variations present in the data. To solve this problem we adapted from [15] a modified top-hat transform, introducing two new steps, a closing operator followed by the opening, without using any minimum operator or comparison:

I_mth = (I ⋅ Sc) − ((I ⋅ Sc) ∘ So)    (3)

Eq 3 shows our modified top-hat transform, in which I is the input green-channel image, while Sc and So stand for the structuring elements of the closing (⋅) and opening (∘) operators, respectively. In our case, we select disk-type structuring elements of radius eight pixels for both the opening and closing operators.
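The closing-then-opening construction above can be sketched with SciPy's grey-scale morphology. This is a hedged illustration, not the authors' code: it assumes equal disk radii (8 pixels) for both structuring elements, as the text suggests.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    # Disk-shaped structuring element (boolean footprint)
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def modified_top_hat(img, radius=8):
    """Modified top-hat: grey closing followed by grey opening, then
    subtraction, mirroring Eq 3. Disk radius 8 follows the paper;
    using the same radius for both operators is an assumption.
    """
    se = disk(radius)
    closed = ndimage.grey_closing(img, footprint=se)
    opened = ndimage.grey_opening(closed, footprint=se)
    return closed - opened   # non-negative, since opening is anti-extensive
```

Because the opening of the closed image can never exceed the closed image itself, the output is non-negative and highlights structures thinner than the structuring element.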

Hessian Matrix and Eigenvalues based approach

We have applied the Hessian matrix and eigenvalue transformation in a new way after the morphological filter to obtain enhanced images of wide and thin vessels. We compute the second derivative of the image at two different scales for wide- and thin-vessel enhancement separately; this isolation of wide and thin vessels is achieved using the Hessian matrix and eigenvalue-based approach. Vessels of variable width are highlighted based on the analysis of the second-order derivative at the two scales. The eigenvalues of the Hessian matrix, and the difference between them, are used for further contrast enhancement and suppression of non-vasculature structures. The Hessian matrix of the image at scale σ is determined as [43]

H(x, y) = [ Ixx  Ixy ;  Iyx  Iyy ]

where Ixx, Ixy (= Iyx) and Iyy are the second-order partial derivatives of the image computed at scale σ. We apply the eigenvalue transformation to the Hessian matrix to obtain eigenvalues λ1 and λ2, while σ defines the scale of vessel enhancement. The filter response is optimal when the scale σ matches the size of the vessel; in our case, σ ranges from 1 to 2.5 for vessel enhancement. Our method reduces complexity and computation by taking only the difference of λ1 and λ2, as opposed to other existing methods using Frangi's filter [18]. Difference images are obtained as I = λ2 − λ1 at two different scales, with σ values of 1 and 2.5. We tested the setting of parameter σ on different scales, as shown in Fig 3, which clearly indicates that a smaller scale increases the False Positive Rate (FPR) by considering background pixels as vessel pixels, whereas a larger σ fails to detect thin vessel pixels, decreasing the sensitivity of the proposed method.
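The eigenvalue-difference response described above can be sketched as follows, using Gaussian derivative filters to build the Hessian entries and the closed-form eigenvalues of a symmetric 2×2 matrix. The ordering λ2 ≥ λ1 is an assumption (the paper does not state its convention), so the response here is non-negative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def eig_diff(img, sigma):
    """Hessian eigenvalue difference I = lambda2 - lambda1 at scale sigma.

    Second derivatives are taken as Gaussian derivative filters;
    lambda2 >= lambda1 is assumed, giving a non-negative ridge response.
    """
    # Second-order Gaussian derivatives (the Hessian entries)
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian
    half_trace = (Ixx + Iyy) / 2.0
    root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    lam1, lam2 = half_trace - root, half_trace + root   # lam2 >= lam1
    return lam2 - lam1
```

Running this at two scales (e.g., σ = 1 and σ = 2.5, as in the text) yields thin- and wide-vessel enhanced images, respectively.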
Fig 3

Comparison of the setting of parameter σ on different scales.

(a) Thin vessel enhanced image. (b) Thin binary Image. (c) Thick vessel enhanced image. (d) Thick binary image.

Comparison of the setting of parameter σ on different scales.

(a) Thin vessel enhanced image. (b) Thin binary Image. (c) Thick vessel enhanced image. (d) Thick binary image.

Otsu Method for Vessel Segmentation

We have applied Otsu's approach [44] in a modified way to suppress unwanted noise and geometrical objects based on the vessel structure. Usually, Otsu's approach is applied locally or globally on the entire image to find a threshold for classifying vessel and non-vessel pixels. Applying an Otsu threshold on the whole image at once does not give fruitful results, so we apply it separately on the wide- and thin-vessel images. We use a global threshold on the wide-vessel enhanced image and fuse its resultant image into the thin-vessel enhanced image; in this way both thin and thick vessels become more prominent. We thus obtain a single enhanced image on which further local thresholding is applied. In local thresholding, we use vessel-based thresholding, which depends upon the vessel locality to define a new threshold. We add an offset to the global threshold to suppress noise more effectively for vessels in the neighborhood of wide vessels. For regions away from wide vessels, we set a threshold lower than the global one by subtracting an offset from it, to extract small or thin vessels from the low-intensity background. Further postprocessing steps are then applied to obtain the final segmented image. Otsu's approach [44] divides the pixels of an image into two segments S0 and S1 (e.g., objects and background) at intensity level t, i.e., S0 = {0, 1, 2, …, t} and S1 = {t + 1, t + 2, …, L − 1}. As mentioned in [44], let σw², σb² and σt² denote the within-class variance, between-class variance and total variance, respectively; they are related by

σt² = σw² + σb²

We minimize σw² (equivalently, maximize σb²) to obtain the optimal threshold, defined as

t* = argmax over 0 ≤ t < L of σb²(t)

where ni is the number of pixels with grey level i and N = Σ ni is the total number of pixels in the given image. The probability of grey level i is defined as Pi = ni / N.
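A minimal sketch of the global Otsu threshold used above follows, searching for the bin that maximizes the between-class variance σb²(t) = (μT·ω0 − μ(t))² / (ω0·(1 − ω0)). This illustrates only the standard global criterion; the paper's region-based offsets and wide/thin fusion are not reproduced here.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global Otsu threshold via maximization of between-class variance."""
    hist, bin_edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()              # P_i = n_i / N
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    w0 = np.cumsum(p)                                # class S0 weight
    w1 = 1.0 - w0                                    # class S1 weight
    mu = np.cumsum(p * centers)                      # cumulative first moment
    mu_t = mu[-1]                                    # total mean
    valid = (w0 > 0) & (w1 > 0)                      # avoid division by zero
    sigma_b2 = np.zeros_like(w0)
    sigma_b2[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b2)]
```

The paper's local variant would then add or subtract an offset from this global value depending on proximity to wide vessels.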

Postprocessing Steps

We have used pixel/area-based thresholding to eliminate unconnected non-vessel pixels. The segmentation results usually contain some small isolated regions caused by noise, which are sometimes wrongly detected as vessels. Based on the connectivity of the retinal vessels, we removed connected regions of 30 pixels or fewer, considering them non-vessel pixels or part of the background noise.
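This cleanup can be sketched with connected-component labelling; the 8-connectivity choice is an assumption, while the 30-pixel cutoff follows the text.

```python
import numpy as np
from scipy import ndimage

def remove_small_segments(binary, min_size=31):
    """Drop connected components smaller than min_size pixels.

    min_size=31 keeps components of more than 30 pixels, matching the
    paper's rule of removing regions of 30 pixels or fewer; the use of
    8-connectivity is an assumption.
    """
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))
    sizes = ndimage.sum(binary, labels, range(1, n + 1))   # per-component area
    keep = np.zeros(n + 1, dtype=bool)                     # label 0 = background
    keep[1:] = sizes >= min_size
    return keep[labels]
```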

Performance Evaluation Criteria

We have processed retinal images from two publicly available datasets, DRIVE [45] and STARE [8], for the performance evaluation of the proposed segmentation framework. These datasets include retinal images manually segmented by experts, considered the gold standard for comparison. Accuracy (Acc), Sensitivity (Sn), Specificity (Sp), and the area under a Receiver Operating Characteristic (ROC) curve, also known as the Area Under the Curve (AUC), are four commonly used parameters for comparing the performance of competing techniques [38]. Accuracy shows the overall segmentation performance; sensitivity indicates effectiveness in detecting pixels with positive values; specificity measures the detection of pixels with negative values. These metrics are defined as follows:

Acc = (TP + TN) / (TP + TN + FP + FN)
Sn = TP / (TP + FN)
Sp = TN / (TN + FP)

where TP, TN, FP and FN represent the True Positive, True Negative, False Positive and False Negative pixels, respectively.
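The three metrics follow directly from comparing the predicted binary mask against the gold standard, as this short sketch shows:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Accuracy, sensitivity and specificity from two boolean masks,
    following the TP/TN/FP/FN definitions in the text."""
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    return acc, sn, sp
```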

Experimental Results

In this section, we analyze the performance of retinal vessel segmentation methods on the DRIVE [45] and STARE [8] databases. The manually segmented images provided in these databases are used for evaluation of the proposed framework. The DRIVE and STARE datasets consist of 40 and 20 retinal images, respectively, divided into a training set and a test set. For performance evaluation, the proposed framework has been applied to the 20 test images of the DRIVE and STARE datasets. All experiments were executed using MATLAB 2013a on a DELL Vostro 1540 (2.53 GHz Intel Core i3 processor, 4 GB RAM). Visual inspection of retinal blood vessel segmentation with the major processing stages of our proposed framework, using the DRIVE and STARE datasets, is depicted in Figs 4 and 5, respectively.
Fig 4

Proposed method main processing steps for retinal blood vessel segmentation.

(a) RGB image from DRIVE database. (b) Green Channel. (c) CLAHE. (d) Morphological filters. (e) Thin vessel enhanced image. (f) Wide vessel enhanced image. (g) Otsu global thresholding output image. (h) Fused image of thin enhanced image and Otsu global thresholding output image. (i) Otsu local thresholding to enhance thin vessels (j) Postprocessed final binary image.

Fig 5

Proposed method main processing steps for retinal blood vessel segmentation.

(a) RGB image from STARE database. (b) Green Channel. (c) CLAHE. (d) Morphological filters. (e) Thin vessel enhanced image. (f) Wide vessel enhanced image. (g) Otsu global thresholding output image. (h) Fused image of thin enhanced image and Otsu global thresholding output image. (i) Otsu local thresholding to enhance thin vessels (j) Postprocessed final binary image.

We have compared our visual results with Bankhead et al. [30] (S1 Link), Azzopardi et al. [35] (S2 Link), Dai et al. [40] (S3 Link), and Vlachos and Dermatas [41] (S4 Link) by running their source codes on the DRIVE and STARE datasets, as shown in Figs 6 and 7, respectively. Image results of Martinez-Perez et al. [17] (S5 Link) were obtained from their website. To determine whether a vessel is detected correctly or not, the final binary image has been compared to the corresponding manually segmented image.
Fig 6

Visual inspection of different vessel segmentation methods using DRIVE database.

(a) RGB input image. (b) Manual segmented image. (c) Proposed method final image. (d) Dai et al. [40]. (e) Azzopardi et al. [35]. (f) Bankhead et al. [30]. (g)Vlachos and Dermatas [41]. (h) Martinez-Perez et al. [17].

Fig 7

Visual inspection of different vessel segmentation methods using STARE database.

(a) RGB input image. (b) Manual segmented image. (c) Proposed method final image. (d) Dai et al. [40]. (e) Azzopardi et al. [35]. (f) Bankhead et al. [30]. (g)Vlachos and Dermatas [41]. (h) Martinez-Perez et al. [17].


Average accuracy of the proposed method

First, we calculated the average accuracy over the 20 test images of the DRIVE dataset and 20 random images of the STARE dataset. The average accuracy indicates the degree to which the extracted binary image matches the ground-truth vessel image. The accuracy is estimated as the number of correctly classified foreground and background pixels divided by the total number of pixels in the image. According to the results shown in Table 1, the average accuracy is 0.96075 for the DRIVE dataset and 0.94585 for the STARE dataset.
Table 1

Accuracy (Acc), Sensitivity (Sn) and Specificity (Sp) results of proposed method for 20 retinal images of the DRIVE and the STARE datasets.

Image      DRIVE                          STARE
           Acc      Sn       Sp          Acc      Sn       Sp
1          0.966    0.719    0.986       0.942    0.714    0.961
2          0.963    0.784    0.980       0.943    0.601    0.968
3          0.965    0.735    0.990       0.916    0.833    0.922
4          0.959    0.704    0.984       0.813    0.868    0.810
5          0.954    0.697    0.978       0.952    0.750    0.972
6          0.970    0.720    0.992       0.962    0.806    0.974
7          0.956    0.704    0.982       0.955    0.840    0.966
8          0.962    0.782    0.975       0.961    0.796    0.974
9          0.955    0.800    0.966       0.955    0.829    0.966
10         0.963    0.716    0.984       0.961    0.775    0.977
11         0.968    0.753    0.985       0.964    0.650    0.988
12         0.961    0.729    0.982       0.971    0.834    0.982
13         0.961    0.776    0.978       0.957    0.701    0.982
14         0.959    0.718    0.978       0.958    0.715    0.983
15         0.959    0.821    0.970       0.954    0.770    0.972
16         0.958    0.726    0.977       0.941    0.645    0.974
17         0.965    0.754    0.985       0.967    0.806    0.983
18         0.953    0.808    0.964       0.976    0.604    0.996
19         0.956    0.760    0.977       0.924    0.858    0.943
20         0.962    0.718    0.989       0.945    0.766    0.961
Average    0.96075  0.7462   0.9801      0.94585  0.75805  0.9627

Proposed Otsu algorithm comparison with different techniques

We have compared the proposed Otsu approach [44] with current thresholding algorithms, Technique of Iterative Local Thresholding (TILT) [46], K-means [47], Moment-preserving thresholding [48], Niblack local thresholding [49] and fuzzy ISODATA algorithms [50]. The pictorial results on the DRIVE dataset have been displayed in Fig 8. Their performance in the term of accuracy, sensitivity, specificity and AUC has been tabulated in Table 2.
Fig 8

Visual results of different thresholding techniques.

(a) Proposed Otsu method. (b) TILT. (c) K-means. (d) Moment-preserving thresholding. (e) Niblack local thresholding. (f) Fuzzy ISODATA algorithms.

Table 2

Performance evaluation of different thresholding techniques with proposed Otsu method.

Method (DRIVE)                          AUC      Acc      Sn       Sp
Proposed Otsu [44]                      0.882    0.963    0.784    0.980
TILT [46]                               0.849    0.957    0.714    0.985
K-means [47]                            0.812    0.956    0.631    0.993
Moment-preserving thresholding [48]     0.891    0.943    0.826    0.956
Niblack local thresholding [49]         0.888    0.946    0.815    0.961
Fuzzy ISODATA algorithms [50]           0.812    0.956    0.630    0.993


Comparison with other techniques

To assess the efficiency of our proposed technique, we compared it with other existing vessel segmentation techniques on two commonly used databases: DRIVE [45] and STARE [8]. We selected five recent supervised techniques [11–13, 27, 31] and fourteen unsupervised techniques [7, 9, 14–16, 23–24, 30, 33, 35, 37–40]. The results are shown in Table 3, which clearly indicates that our proposed framework is more efficient than many other methods.
Table 3

Performance evaluation of different retinal vessel segmentation techniques.

Method Type    Method                        DRIVE                       STARE
                                             Acc     Sn      Sp         Acc     Sn      Sp
Unsupervised   Second observer               0.947   0.776   0.972      0.935   0.895   0.939
               Mendonça et al. [15]          0.945   0.734   0.976      0.944   0.699   0.973
               Martinez-Perez et al. [17]    0.934   0.725   0.965      0.941   0.751   0.955
               Palomera-Perez et al. [33]    0.922   0.660   0.961      0.924   0.779   0.940
               Zhang et al. [9]              0.938   0.712   0.973      0.948   0.717   0.975
               Fraz et al. [24]              0.943   0.715   0.976      0.944   0.731   0.968
               Bankhead et al. [30]          0.937   0.703   0.971      0.932   0.758   0.950
               Chaudhuri et al. [7]          0.877   0.336   —          —       —       —
               Zana and Klein [14]           0.938   0.697   —          —       —       —
               Al-Diri et al. [23]           —       0.728   0.955      —       0.752   0.968
               Azzopardi et al. [35]         0.944   0.766   0.970      0.949   0.772   0.970
               Mapayi et al. [37]            0.946   0.763   0.963      0.951   0.763   0.966
               Zhao et al. [38]              0.953   0.744   0.978      0.951   0.786   0.975
               Asad et al. [39]              0.934   0.748   0.954      —       —       —
               Dai et al. [40]               0.942   0.736   0.972      0.936   0.777   0.955
               Proposed                      0.961   0.746   0.980      0.946   0.758   0.963
Supervised     Niemeijer et al. [31]         0.942   0.714   —          —       —       —
               Staal et al. [13]             0.944   0.719   0.977      0.952   0.697   0.981
               Ricci and Perfetti [25]       0.9595  —       —          0.9646  —       —
               Soares et al. [11]            0.946   0.723   0.976      0.946   0.723   0.976
               Lam et al. [12]               0.947   —       —          0.957   —       —
               Marin et al. [27]             0.945   0.706   0.980      0.952   0.694   0.981

“—” Shows that this content was not available.

The proposed framework shows the highest results on the DRIVE images among both supervised and unsupervised methods, with Acc = 0.961, Sn = 0.746 and Sp = 0.980. On the STARE dataset, our technique showed high efficiency in terms of sensitivity and accuracy among the unsupervised techniques; its specificity, Sp = 0.963, is higher than that of most unsupervised methods, though behind the unsupervised techniques [9, 15, 24, 19, 35]. Compared with supervised methods, its accuracy is 0.006 lower than [21, 27], 0.0186 lower than [25] and 0.011 behind [17]; its sensitivity is the highest of all, while its specificity is 0.018 behind the supervised methods [21, 27] and 0.013 less than [11]. One important feature of our proposed framework is its ability to scale down undesired segments, non-vessel pixels, erroneously detected segments and the background noise that frequently appears in abnormal retinal images. For such pathological cases, we compared the performance of the proposed technique with various methods on the normal and abnormal images of the DRIVE and STARE databases, shown in Table 4, which evidently shows that for abnormal cases the proposed method achieves much better efficiency than Chaudhuri et al. [7], Mendonça and Campilho [15] and Hoover et al. [8], and records slightly better results than Soares et al. [11]. Figs 9 and 10 show the visual appearance of abnormal retinal images of the DRIVE and STARE databases, respectively.
Table 4

Segmentation results comparison for normal versus abnormal cases of our proposed technique with different segmentation techniques.

Image Type    Method (DRIVE)           TPR      FPR      Acc
Normal        Second observer          0.965    0.076    0.928
              Bankhead et al. [30]     0.752    0.047    0.942
              Mendonça et al. [15]     0.725    0.020    0.949
              Azzopardi et al. [35]    0.746    0.028    0.958
              Dai et al. [40]          0.738    0.036    0.953
              Proposed                 0.855    0.019    0.968
Abnormal      Second observer          0.825    0.045    0.942
              Bankhead et al. [30]     0.754    0.043    0.939
              Mendonca et al. [15]     0.673    0.033    0.939
              Azzopardi et al. [35]    0.753    0.032    0.951
              Dai et al. [40]          0.782    0.035    0.948
              Proposed                 0.742    0.021    0.959
Fig 9

Pictorial results of different retinal blood vessel segmentation techniques on pathological image of DRIVE dataset.

(a) RGB input image. (b) Manual segmented image. (c) Proposed method. (d) Azzopardi et al. [35]. (e) Dai et al. [40]. (f) Bankhead et al. [30].

Fig 10

Pictorial results of different retinal blood vessel segmentation techniques on pathological image of STARE dataset.

(a) RGB input image. (b) Manual segmented image. (c) Proposed method. (d) Azzopardi et al. [35]. (e) Dai et al. [40]. (f) Bankhead et al. [30].

Another important measure is the area under the ROC curve (AUC), which reflects the trade-off between sensitivity and specificity. Note that an AUC of 0.50 means the classification is equivalent to a purely random guess, while an AUC of 1.0 means the classifier distinguishes the classes perfectly; an optimal system therefore has an AUC of 1. The AUC achieved by the proposed method is contrasted with existing segmentation techniques on the DRIVE and STARE databases in Table 5. On the DRIVE dataset, the proposed framework's AUC is higher than the others, except Mapayi et al. [37] and Ricci and Perfetti [25], which are 0.001 and 0.1003 better than ours, respectively. On the STARE dataset, the AUC of the proposed method is among the highest; only the methods of [25], [38], [37] and [35] are 0.107, 0.02, 0.004 and 0.001 better than ours, respectively. Table 6 presents a computation-time comparison of various retinal vessel segmentation techniques. The computation times of Bankhead et al. [30], Azzopardi et al. [35], Dai et al. [40], and Vlachos and Dermatas [41] were measured by running their source codes on our PC (HP Intel Core i3 CPU, 2.53 GHz, 4 GB RAM), while the computation times of Zhao et al. [38], Nguyen et al. [34], Mapayi et al. [37] and Asad et al. [39] were collected from their published papers. The proposed framework is computationally faster and more efficient than the other published methods.
Table 5

Performance comparison of AUC with existing techniques.

Method                          AUC (DRIVE)    AUC (STARE)
Second observer                 0.874          0.917
Azzopardi et al. [35]           0.862          0.862
Martinez-Perez et al. [17]      0.845          0.853
Palomera-Perez et al. [33]      0.811          0.860
Bankhead et al. [30]            0.837          0.854
Fraz et al. [24]                0.846          0.850
Ricci and Perfetti [25]         0.963          0.968
Zhao et al. [38]                0.861          0.881
Al-Diri et al. [23]             0.842          0.860
Marin et al. [27]               0.843          0.838
Mapayi et al. [37]              0.864          0.865
Proposed                        0.863          0.861
Table 6

Computation time comparison of various techniques.

Method                    Processing Time    Computer Specifications                      Software
Proposed                  1.56 sec           HP Intel Core i3 CPU, 2.53 GHz, 4 GB RAM     MATLAB
Bankhead et al. [30]      22.45 sec                                                       MATLAB
Azzopardi et al. [35]     11.83 sec                                                       MATLAB
Dai et al. [40]           1 min 46 sec                                                    MATLAB
Vlachos et al. [41]       9.3 sec                                                         MATLAB
Zhao et al. [38]          4.6 sec            HP Intel Core i3 CPU, 3.1 GHz, 8 GB RAM      MATLAB & C++
Mapayi et al. [37]        1.9 to 2.6 sec     Intel Core i5 CPU, 2.30 GHz, 4 GB RAM        MATLAB
Asad et al. [39]          2 min 45 sec       Intel Core i3 CPU, 2.53 GHz, 3 GB RAM        MATLAB

Conclusion

The automatic segmentation of blood vessels in retinal images is an important step in diagnosing causes of visual impairment. In the proposed framework, CLAHE and morphological filtering have been used for vessel enhancement and low-frequency noise/object removal, and a Hessian matrix and eigenvalue transformation classifies the retinal image into wide-vessel and thin-vessel enhanced images. Otsu thresholding has been utilized to extract vessel attributes, and region-properties-based thresholding has been used to set the optimal threshold value for segregating vessel and non-vessel pixels. The proposed method has been applied to the DRIVE and STARE databases and assessed with performance measures such as sensitivity, specificity and accuracy, and it has been compared with existing techniques to evaluate its efficiency and reliability. The proposed framework performs efficiently in the presence of noise, extracts thin vessels, and is robust and computationally efficient.
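As a rough illustration of the thresholding step, classical Otsu's method selects the gray level that maximizes the between-class variance of the intensity histogram. The sketch below is plain Python on a toy 8-bit intensity list — classical Otsu only, not the authors' region-based variant:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level t maximizing between-class variance,
    treating intensities <= t as background and > t as foreground."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]            # background pixel count so far
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # remaining foreground pixels
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal data: dark background near 20, bright structures near 200.
pixels = [18, 20, 22, 25, 19, 198, 200, 205, 202]
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]  # vessel / non-vessel classification
```

On a vessel-enhanced image the same idea is applied per region in the paper, with region properties guiding the final threshold choice.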

Data used to test the algorithm.


MATLAB implementation of Bankhead et al. [30] available at: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0032435.


MATLAB source code of Azzopardi et al. [35] available at: http://www.mathworks.com/matlabcentral/fileexchange/49172-trainable-cosfire-filters-for-vessel-delineation-with-application-to-retinal-images.


MATLAB implementation of Dai et al. [40] available at: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127748.


MATLAB source code of Vlachos and Dermatas [41] available at: https://matlabfreecode.wordpress.com/2013/02/27/detection-of-vessels-in-eye-retina-using-line-tracking-algorithm-with-matlab-code.


Images of Martinez-Perez et al. [17] are available at: http://turing.iimas.unam.mx/~elena/Projects/segmenta/VesselSegment.html.

References (28 in total)

1.  Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response.

Authors:  A Hoover; V Kouznetsova; M Goldbaum
Journal:  IEEE Trans Med Imaging       Date:  2000-03       Impact factor: 10.048

2.  Ridge-based vessel segmentation in color images of the retina.

Authors:  Joes Staal; Michael D Abràmoff; Meindert Niemeijer; Max A Viergever; Bram van Ginneken
Journal:  IEEE Trans Med Imaging       Date:  2004-04       Impact factor: 10.048

3.  A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features.

Authors:  Diego Marin; Arturo Aquino; Manuel Emilio Gegundez-Arias; José Manuel Bravo
Journal:  IEEE Trans Med Imaging       Date:  2010-08-09       Impact factor: 10.048

4.  Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification.

Authors:  João V B Soares; Jorge J G Leandro; Roberto M Cesar Júnior; Herbert F Jelinek; Michael J Cree
Journal:  IEEE Trans Med Imaging       Date:  2006-09       Impact factor: 10.048

5.  Multi-scale retinal vessel segmentation using line tracking.

Authors:  Marios Vlachos; Evangelos Dermatas
Journal:  Comput Med Imaging Graph       Date:  2009-11-04       Impact factor: 4.790

6.  Retinal vessel extraction by matched filter with first-order derivative of Gaussian.

Authors:  Bob Zhang; Lin Zhang; Lei Zhang; Fakhri Karray
Journal:  Comput Biol Med       Date:  2010-03-03       Impact factor: 4.589

7.  Learning fully-connected CRFs for blood vessel segmentation in retinal images.

Authors:  José Ignacio Orlando; Matthew Blaschko
Journal:  Med Image Comput Comput Assist Interv       Date:  2014

8.  A novel method for blood vessel detection from retinal images.

Authors:  Lili Xu; Shuqian Luo
Journal:  Biomed Eng Online       Date:  2010-02-28       Impact factor: 2.819

9.  A modified matched filter with double-sided thresholding for screening proliferative diabetic retinopathy.

Authors:  Lei Zhang; Qin Li; Jane You; David Zhang
Journal:  IEEE Trans Inf Technol Biomed       Date:  2009-04-21

10.  The detection and quantification of retinopathy using digital angiograms.

Authors:  L Zhou; M S Rzeszotarski; L J Singerman; J M Chokreff
Journal:  IEEE Trans Med Imaging       Date:  1994       Impact factor: 10.048

Cited by (14 in total)

1.  Hessian filter-assisted full diameter at half maximum (FDHM) segmentation and quantification method for optical-resolution photoacoustic microscopy.

Authors:  Dong Zhang; Ran Li; Xin Lou; Jianwen Luo
Journal:  Biomed Opt Express       Date:  2022-08-09       Impact factor: 3.562

2.  Correction: A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

Authors:  Khan Bahadar Khan; Amir A Khaliq; Muhammad Shahid
Journal:  PLoS One       Date:  2016-09-01       Impact factor: 3.240

3.  Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

Authors:  Nogol Memari; Abd Rahman Ramli; M Iqbal Bin Saripan; Syamsiah Mashohor; Mehrdad Moghbel
Journal:  PLoS One       Date:  2017-12-11       Impact factor: 3.240

4.  A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

Authors:  Khan Bahadar Khan; Amir A Khaliq; Abdul Jalil; Muhammad Shahid
Journal:  PLoS One       Date:  2018-02-12       Impact factor: 3.240

5.  Enhanced Clean-In-Place Monitoring Using Ultraviolet Induced Fluorescence and Neural Networks.

Authors:  Alessandro Simeone; Bin Deng; Nicholas Watson; Elliot Woolley
Journal:  Sensors (Basel)       Date:  2018-11-02       Impact factor: 3.576

6.  Reproducibility of Macular Vessel Density Calculations Via Imaging With Two Different Swept-Source Optical Coherence Tomography Angiography Systems.

Authors:  Takuhei Shoji; Yuji Yoshikawa; Junji Kanno; Hirokazu Ishii; Hisashi Ibuki; Kimitake Ozaki; Itaru Kimura; Kei Shinoda
Journal:  Transl Vis Sci Technol       Date:  2018-12-21       Impact factor: 3.283

7.  A Hybrid Unsupervised Approach for Retinal Vessel Segmentation.

Authors:  Khan Bahadar Khan; Muhammad Shahbaz Siddique; Muhammad Ahmad; Manuel Mazzara
Journal:  Biomed Res Int       Date:  2020-12-10       Impact factor: 3.411

8.  Towards Automated Eye Diagnosis: An Improved Retinal Vessel Segmentation Framework Using Ensemble Block Matching 3D Filter.

Authors:  Khuram Naveed; Faizan Abdullah; Hussain Ahmad Madni; Mohammad A U Khan; Tariq M Khan; Syed Saud Naqvi
Journal:  Diagnostics (Basel)       Date:  2021-01-12

9.  Corneal Vibrations during Intraocular Pressure Measurement with an Air-Puff Method. (Review)

Authors:  Robert Koprowski; Sławomir Wilczyński
Journal:  J Healthc Eng       Date:  2018-02-11       Impact factor: 2.682

10.  Automated thresholding algorithms outperform manual thresholding in macular optical coherence tomography angiography image analysis.

Authors:  Jan Henrik Terheyden; Maximilian W M Wintergerst; Peyman Falahat; Moritz Berger; Frank G Holz; Robert P Finger
Journal:  PLoS One       Date:  2020-03-20       Impact factor: 3.240

