
Retinal image graph-cut segmentation algorithm using multiscale Hessian-enhancement-based nonlocal mean filter.

Jian Zheng, Pei-Rong Lu, Dehui Xiang, Ya-Kang Dai, Zhao-Bang Liu, Duo-Jie Kuai, Hui Xue, Yue-Tao Yang.

Abstract

We propose a new method to enhance and extract retinal vessels. First, we employ a multiscale Hessian-based filter to compute the maximum response of a vessel likeness function for each pixel. In this step, blood vessels of different widths are significantly enhanced. Then, we adopt a nonlocal mean filter to suppress the noise of the enhanced image while maintaining the vessel information. After that, a radial gradient symmetry transformation is adopted to suppress nonvessel structures. Finally, an accurate graph-cut segmentation step is performed using the result of the symmetry transformation as an initialization. We test the proposed approach on the publicly available DRIVE database. The experimental results show that our method is quite effective.

Year:  2013        PMID: 23662164      PMCID: PMC3639648          DOI: 10.1155/2013/927285

Source DB:  PubMed          Journal:  Comput Math Methods Med        ISSN: 1748-670X            Impact factor:   2.238


1. Introduction

The retina is the only tissue in the human body from which information about blood vessels can be obtained directly in vivo. Retinal vessel information plays an important role in the diagnosis and treatment of various diseases such as glaucoma [1], age-related macular degeneration [2], degenerative myopia, and diabetic retinopathy [3]. Recently, it has also been found that detecting vascular geometric changes may help to judge whether a person has high blood pressure or cardiovascular disease [4]. Retinal vessel extraction is therefore essential for ophthalmologists to diagnose various eye diseases. Moreover, accurately segmented vessels can be very helpful for feature-based retinal image registration.

Generally, existing retinal vessel segmentation methods can be roughly divided into two categories. The first category is supervised learning-based methods. Such methods require tremendous manual segmentations of vasculature to train the classifier. Staal et al. [5] first adopted a ridge-extraction method to separate the image into numerous patches. For each pixel in a patch, a feature vector containing profile information such as width, height, and edge strength was computed. The feature vectors were then classified using a k-nearest-neighbors classifier with a sequential forward feature selection strategy. Soares et al. [6] selected the pixel value and multiscale 2D Gabor wavelet coefficients to construct the feature vector. A Bayesian classifier based on class-conditional probability density functions was adopted to perform fast classification and model complex decision surfaces. Marín et al. [7] proposed a new supervised method to segment retinal vessels. For each pixel, 5 gray-level descriptors and 2 moment-invariant-based features of a square window were computed as the feature vector. A multilayer neural network was then adopted to classify each pixel. The performance of such methods depends on the correlation between the training data and the test data.
If these two datasets are quite different, the segmentation results may be less than ideal. Besides that, the manual segmentation step brings an additional burden to ophthalmologists. Therefore, such methods have not yet been widely used in the clinic. The second category is rule-based methods. Such methods usually extract neighborhood information for each pixel, which is then used to label the pixel according to preset rules. Matched filters [8], model-based methods [9], and morphology-based methods [10] all belong to this category. In [8], Sofka and Stewart proposed multiscale Gaussian and Gaussian-derivative profile kernels to detect vessels of a variety of widths. Lam et al. [9] proposed a multiconcavity modeling approach to handle unhealthy retinal images. In their approach, a line-shape concavity measure was used to remove dark lesions and a locally normalized concavity measure was designed to remove spherical intensity variation. The two concavity measures were combined according to their statistical distributions to detect vessels in general retinal images. Mendonça and Campilho [10] proposed a region growing algorithm on a morphologically processed image to extract the vascular centerline. First, a four-direction differential operator was adopted to detect vascular centerlines. These centerlines were then merged into complete vascular centerlines and selected as seed points. Finally, a region growing step was performed to reconstruct the vasculature. These methods do not require a training step, and the interaction with doctors is also minimized; thus, rule-based methods have been widely used in clinical applications. However, ideal automatic retinal vessel segmentation remains difficult, mainly for two reasons. One is that the contrast of some tiny vessels can be quite low, especially in pathology-affected images.
The other reason is that image noise such as edge blurring may disturb the final segmentation results. Recently, graph-cut methods [11-15] have become very popular in image segmentation, because they can reach the global optimum of a predefined energy function and the required user interaction is very simple. Several graph-cut-based methods have been proposed for retinal image segmentation. Chen et al. [16] proposed a 3D graph-search-graph-cut method to segment multiple layers of 3D OCT retinal images; the retinal layers and the symptomatic exudate-associated derangements (SEAD) were successfully segmented. Inspired by the superior performance of graph-cut methods, we propose a new method to extract the blood vessels in retinal images following our previous work [17]. First, we perform a novel multiscale Hessian-based filtering to compute the maximum response of a vessel likeness function for each pixel, which enhances the blood vessels of gray-level retinal images. Then, we adopt a nonlocal mean filter to suppress the noise of the enhanced image. After that, a radial gradient symmetry transformation is adopted to improve the detection of vessel structures and suppress nonvessel structures. Finally, an accurate graph-cut segmentation is performed using the result of the symmetry transformation as an initialization.

2. Methods

2.1. Multiscale Hessian-Based Enhancement

In order to improve the contrast of retinal vasculature of different widths, we propose a multiscale Hessian-based enhancement. Frangi et al. [18] proposed a method to detect tubular structures based on the eigenvalues of the Hessian matrix. We denote by λ1 and λ2 (with |λ1| ≤ |λ2|) the eigenvalues of the scale-related Hessian matrix, whose entries are the second-order Gaussian derivatives I(X, s) of the input image at scale s. For an ideal tubular structure, the eigenvalues generally satisfy |λ1| ≈ 0 and |λ1| ≪ |λ2| [18]. We then define a new scale-related vessel likeness function V(s), in which c is an empirically set parameter. The value of V(s) indicates the saliency of a tubular structure at each pixel. We search the scale range [s_min, s_max] to find the maximum response of the vessel likeness function. Figure 2 shows the optimal scale property of the input image; the pixel value stands for the scale corresponding to the maximal value of V(s). As seen from Figure 2, the positive correlation between vessel width and optimal scale is obvious: primary vessels have a large optimal scale while tiny vessels have a small one.
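As an illustration of this step, the sketch below computes a Frangi-style multiscale vesselness response with NumPy/SciPy. The scale set and the values of beta and c are illustrative assumptions, not the paper's settings, and the sign convention assumes dark vessels on a bright background (λ2 > 0).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, scales=(1, 2, 3, 4), beta=0.5, c=15.0):
    """Maximum Frangi-style vesselness over several scales.

    Returns the per-pixel maximum response and the scale at which it
    occurs (the 'optimal scale' image). beta and c are illustrative."""
    img = np.asarray(img, dtype=float)
    best_v = np.zeros(img.shape, dtype=float)
    best_s = np.zeros(img.shape, dtype=float)
    for s in scales:
        # scale-normalized second-order Gaussian derivatives (Hessian entries)
        ixx = s ** 2 * gaussian_filter(img, s, order=(0, 2))
        iyy = s ** 2 * gaussian_filter(img, s, order=(2, 0))
        ixy = s ** 2 * gaussian_filter(img, s, order=(1, 1))
        # eigenvalues of the 2x2 symmetric Hessian, sorted so |l1| <= |l2|
        root = np.sqrt((ixx - iyy) ** 2 + 4.0 * ixy ** 2)
        l1 = 0.5 * (ixx + iyy - root)
        l2 = 0.5 * (ixx + iyy + root)
        swap = np.abs(l1) > np.abs(l2)
        l1[swap], l2[swap] = l2[swap], l1[swap]
        rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line ratio
        st = np.sqrt(l1 ** 2 + l2 ** 2)          # second-order structureness
        v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-st ** 2 / (2 * c ** 2)))
        v[l2 < 0] = 0.0  # keep dark-vessel responses only (l2 > 0)
        better = v > best_v
        best_v[better] = v[better]
        best_s[better] = s
    return best_v, best_s
```

The second returned array plays the role of the optimal scale image of Figure 2: each pixel stores the scale that maximized its response.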
Figure 2

An example of the scale image of Figure 1; the pixel value stands for the scale corresponding to the maximal value of V(s).

Figure 3 gives an example of the multiscale Hessian-based enhancement, in which the pixel value stands for the vessel likeness function value. As shown in Figure 3, the whole retinal vasculature is prominently enhanced. However, some nonvessel structures are also enhanced, including the optic disk, yellow spots, and speckles. These nonvessel structures need to be removed in the following steps.
Figure 3

An example of the multiscale Hessian-based enhancement of Figure 1; the pixel value stands for the vessel likeness function value.

2.2. Nonlocal Mean Filtering

The effect of the multiscale Hessian-based enhancement is obvious: as shown in Figure 3, the whole vasculature is significantly strengthened. However, some nonvessel structures are also enhanced, which makes the enhanced image noisy. In order to suppress image noise while maintaining the structural information, we employ a nonlocal mean filtering [19] step. We denote the enhanced image by VL(X) and the filtered image by NL(X). The filtering step computes NL(i) = Σ_{j∈N} w(i, j) VL(j), where N denotes the filtering neighborhood and w(i, j) is a weighting factor proportional to exp(−‖v(N_i) − v(N_j)‖² / h²), normalized so that the weights sum to one. Here v(N_i) denotes a feature vector composed of the pixel values of the nearby area (patch) around pixel i, and h is a filtering parameter. The sizes of the filtering neighborhood and the nearby area need to be selected suitably so as to achieve a balance between filtering performance and computational cost. Figure 4 shows a result of nonlocal mean filtering: the noise of the enhanced image is suppressed effectively while the vasculature is maintained.
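A minimal, naive implementation of this filter can be sketched as follows; the window size, patch size, and h are illustrative values, not the paper's settings.

```python
import numpy as np

def nlm_filter(img, search=5, patch=3, h=0.1):
    """Naive nonlocal mean: each output pixel is a weighted average over a
    search window, with weights exp(-||patch_i - patch_j||^2 / h^2)
    normalized to sum to one."""
    img = np.asarray(img, dtype=float)
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    height, width = img.shape
    for y in range(height):
        for x in range(width):
            cy, cx = y + pad, x + pad
            ref = padded[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1]  # patch at i
            acc, wsum = 0.0, 0.0
            for dy in range(-sr, sr + 1):
                for dx in range(-sr, sr + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = padded[ny - pr:ny + pr + 1, nx - pr:nx + pr + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    acc += w * padded[ny, nx]
                    wsum += w
            out[y, x] = acc / wsum  # weights normalized to sum to one
    return out
```

The quadratic loops make this form slow; in practice integral-image or FFT-based accelerations are commonly used, but the weighting logic is the same.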
Figure 4

An output of the nonlocal mean filter of Figure 3, in which the image noise is effectively suppressed while the vasculature is maintained.

2.3. Radial Gradient Symmetry Transform

In order to remove the nonvessel structures, we propose a radial gradient symmetry transform based on Loy's work [20]. An ideal vessel structure is shown in Figure 5: the gradient vectors on the two sides of the vessel are symmetric in both magnitude and direction, whereas nonvessel structures lack this property. We therefore define a symmetric vessel likeness function VLSym(X) = VL(X) · Flag(X), where VL(X) is a vessel likeness function of a given point X along the direction of its gradient vector G(X), and Flag(X) is an indicator function of whether point X has the gradient symmetry property. The computation of VL(X) and Flag(X) consists of the following steps.
Figure 5

An example of the radial gradient symmetry transform; pixels in the vessel area are generally located between two symmetric gradient vectors.

For each pixel p in the filtered image shown in Figure 4, we compute its vessel contribution along the gradient direction. The normalized gradient vector is denoted by g(X) = G(X)/‖G(X)‖. The coordinates of the pixels affected by pixel p are c(p) = p + r · g(p), where r = 0, Δr, …, 2 · s(p) and s(p) is the optimal scale parameter of pixel p, as shown in Figure 2. For these affected pixels c(p), we accumulate the gradient magnitudes into a vessel accumulation image A. Then VL(X) is obtained by normalizing A with a factor M and raising the result to a radial strictness parameter q. For each pixel p, we search its neighborhood [p − Δp, p + Δp] along the gradient direction with Δp = 2 · s(p) · g(p). As shown in Figure 5, if there exist points X1 and X2 whose gradient vectors have approximately equal magnitudes and opposite directions, then Flag(p) = 1; otherwise, Flag(p) = 0. For the symmetric vessel likeness function VLSym(X), we set a vessel likeness threshold T to extract a coarse vasculature. After that, we take an erosion step to get a narrowed retinal vasculature. We then count the pixels of each connected vessel component; if the count is smaller than a preset threshold T_num, the component is considered background noise and is removed.
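The accumulation step above can be sketched as below: pixels vote their gradient magnitude at positions along their gradient direction, so symmetric vessel cross-sections collect votes at the centerline. The fixed scale (instead of the per-pixel optimal scale s(p)) and the gradient-magnitude threshold are simplifying assumptions.

```python
import numpy as np

def radial_vote(vl, scale=3):
    """Each pixel with a significant gradient votes its gradient magnitude
    at positions r = 1 .. 2*scale along its normalized gradient direction.
    On the enhanced (bright-vessel) image, gradients on both vessel edges
    point toward the centerline, so votes accumulate there."""
    vl = np.asarray(vl, dtype=float)
    gy, gx = np.gradient(vl)
    mag = np.hypot(gx, gy)
    thr = 0.1 * mag.max() if mag.max() > 0 else 0.0  # illustrative threshold
    h, w = vl.shape
    acc = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            m = mag[y, x]
            if m <= thr:
                continue
            ux, uy = gx[y, x] / m, gy[y, x] / m  # unit gradient direction
            for r in range(1, 2 * scale + 1):
                vx = int(round(x + r * ux))
                vy = int(round(y + r * uy))
                if 0 <= vx < w and 0 <= vy < h:
                    acc[vy, vx] += m  # vote weighted by gradient magnitude
    return acc
```

Normalizing this accumulation image and raising it to a strictness power q would then give VL(X); the symmetry check Flag(X) is a separate pass and is not reproduced here.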

2.4. Graph-Cut Step

Through the previous processing steps, we have obtained a coarsely extracted vasculature. The final step is to accurately segment the vessels starting from this result. This can be described as a pixel labeling problem formulated with the energy function E(L) = Σ_p U_p(l_p) + k · Σ_{(p,q)∈N} V_{p,q}(l_p, l_q), where L is a labeling set, U_p(l_p) is the data prior energy, which measures the cost of giving a label l_p ∈ L to a pixel p according to prior information, V_{p,q}(l_p, l_q) is the potential energy, which measures the smoothness over a neighboring pixel system N, and k is a weighting parameter. We adopt the graph-cut algorithm to optimize this energy function. The centerline is used as a shape prior to guide the extraction process. In our framework, a graph G = ⟨V, ε⟩ is created with nodes corresponding to pixels p ∈ P of a retinal image, where V is the set of all nodes and ε is the set of all links connecting neighboring nodes. The neighboring pixel system is constructed with eight neighboring pixels. The terminal nodes are defined as the source S and the sink T. As an initialization, we extract the centerline of the previously extracted vasculature. The pixels on the centerline are considered definite foreground F_d, and the pixels with VLSym(X) above the vessel likeness threshold are classified as candidate foreground F_c. The pixels with VLSym(X) below a lower threshold are classified as definite background B_d, and the remaining pixels are classified as candidate background B_c. That is, L = {f_d, f_c, b_d, b_c} and V = {S, T} ∪ {F_d, B_d} ∪ {F_c, B_c}. For each pixel p ∈ {F_c, B_c}, we compute its minimum distances to S and T according to the literature [22], denoted d_S(p) and d_T(p), respectively. The cost of the t-links is computed from the normalized distances D_S(p) = d_S(p)/(d_S(p) + d_T(p)) and D_T(p) = 1 − D_S(p), scaled by weights w1 > 1 > w2 > 0. The nodes in F_d and B_d are definitely labeled as f_d and b_d, respectively. The weight of an n-link describes the labeling coherence of a pixel with its neighbors; we utilize the pixel value information as the neighborhood penalty term.
The cost of the n-links is inversely related to the intensity difference between neighboring pixels, where a tiny number η is added to avoid division by 0. With these weights, candidate foreground pixels are encouraged to be labeled as foreground and candidate background pixels are encouraged to be labeled as background.
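The distance-based t-link terms can be sketched as below. Using Euclidean distance transforms to the seed sets as a stand-in for the minimum distances of [22] is an assumption, and the w1/w2 scaling of the paper is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def tlink_weights(fg_seeds, bg_seeds):
    """Normalized distance terms D_S(p) = d_S(p) / (d_S(p) + d_T(p)) and
    D_T(p) = 1 - D_S(p), where d_S/d_T are distances to the definite
    foreground (centerline) and definite background seed sets."""
    # distance_transform_edt measures distance to the nearest zero, so the
    # seed masks are inverted: the result is 0 on the seeds themselves.
    d_s = distance_transform_edt(~fg_seeds)
    d_t = distance_transform_edt(~bg_seeds)
    denom = d_s + d_t
    denom[denom == 0] = 1.0  # guard: pixel marked as both seed types
    big_d_s = d_s / denom
    return big_d_s, 1.0 - big_d_s
```

A candidate pixel near the centerline has a small D_S and a large D_T, which (after scaling by w1 > 1 > w2 > 0) makes cutting its link to the source expensive, biasing the min-cut toward labeling it foreground.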

3. Experiments and Conclusion

We test our method on the publicly available DRIVE database [5]. Three measures, sensitivity (SE), specificity (SP), and accuracy (AC), are used to evaluate the performance of our method within the image field of view. They are defined as SE = N_correct_vessel / N_vessel, SP = N_correct_nonvessel / N_nonvessel, and AC = N_correct_total / N_total, where N_correct_vessel is the number of correctly classified vessel pixels and N_vessel is the number of vessel pixels in the ground truth; N_correct_nonvessel is the number of correctly classified nonvessel pixels and N_nonvessel is the number of nonvessel pixels; and N_correct_total is the total number of correctly classified pixels and N_total is the total number of pixels. We test the proposed method on 40 images and compare it with the methods developed by Staal et al. [5], Mendonça and Campilho [10], Wang et al. [21], and Marín et al. [7]. The mean values and standard deviations (sd.) of these methods are shown in Table 1. Values that cannot be obtained from the literature are denoted by NA.
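The three measures can be computed directly from binary masks; the sketch below assumes the segmentation, ground truth, and field-of-view masks are boolean arrays of the same shape.

```python
import numpy as np

def evaluate(segmented, truth, fov):
    """SE, SP, and AC computed inside the field of view (boolean mask)."""
    seg = np.asarray(segmented, dtype=bool)[fov]
    gt = np.asarray(truth, dtype=bool)[fov]
    se = np.sum(seg & gt) / np.sum(gt)     # N_correct_vessel / N_vessel
    sp = np.sum(~seg & ~gt) / np.sum(~gt)  # N_correct_nonvessel / N_nonvessel
    ac = np.mean(seg == gt)                # N_correct_total / N_total
    return se, sp, ac
```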
Table 1

Comparison of different segmentation methods.

Method | SE (mean/sd.) | SP (mean/sd.) | AC (mean/sd.)
Staal et al. [5] | 0.7194/0.0694 | 0.9773/0.0087 | 0.9441/0.0065
Mendonça and Campilho [10] | 0.7315/NA | 0.9781/NA | 0.9463/NA
Wang et al. [21] | 0.7810/0.0340 | 0.9770/0.0071 | NA
Marín et al. [7] | 0.7067/0.0628 | 0.9801/0.0104 | 0.9452/0.0064
Ours | 0.9074/0.0332 | 0.9119/0.0320 | 0.9113/0.0280
From Table 1 we can see a prominent improvement in sensitivity, which means that our method is quite effective at extracting tiny vessels. On the other hand, our method does not perform as well as the others in specificity and accuracy. For clinical application, the accuracy should be better than 90%; our method exceeds this standard, but there is still much room for improvement. Some of the extraction results are shown in Figures 6 and 7. As can be seen from Figure 6, the majority of the vessels in the retina are finely extracted; however, some nonvessel structures are also extracted and some tiny vessels are not correctly segmented. This is mainly because our method focuses on tiny vessel extraction, which is not easy even for ophthalmologists to perform manually; as a result, some boundaries between the retinal region and the background are mistakenly recognized as vessels. Besides that, the optic disk also cannot be fully excluded from the final result. Figure 7 shows two segmentation results for ill-conditioned retinal images, where some speckles are hard to differentiate from tiny vessels. Our method fails to extract an accurate vasculature in these cases, with AC values of 0.8720 and 0.8814, respectively. Furthermore, we exchanged the order of the nonlocal mean filtering and the multiscale Hessian-based enhancement; experimental results show little difference in performance, which also indicates that the nonlocal mean filter is very robust and could be widely used in the image processing field.
Figure 6

An example of 3 extraction results: the first row shows 3 input retinal images and the second row shows the segmentation results of the retinal vessels.

Figure 7

Two segmentation results of ill-conditioned retinal images; the speckles greatly reduce the performance of our algorithm.

In summary, this paper first presents a multiscale Hessian-based enhancement for retinal images. Next, we adopt an effective nonlocal mean filtering step to suppress the noise of the enhanced image. Then, we propose a radial gradient symmetry transform to suppress nonvessel artifacts. Finally, a graph-cut step is taken to accurately segment the retinal vessels. Experiments show that our method achieves high sensitivity in vessel segmentation, but the extraction of some tiny vessels and the exclusion of speckles still need to be improved. Improving these aspects will be the focus of our future work.
References (10 of 15 shown)

1. Staal J; Abràmoff MD; Niemeijer M; Viergever MA; van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging, 2004.
2. Xiang D; Tian J; Deng K; Zhang X; Yang F; Wan X. Retinal vessel extraction by combining radial symmetry transform and iterated graph cuts. Conf Proc IEEE Eng Med Biol Soc, 2011.
3. Marin D; Aquino A; Gegundez-Arias ME; Bravo JM. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Trans Med Imaging, 2010.
4. Lam BSY; Gao Y; Liew AWC. General retinal vessel segmentation using regularization-based multiconcavity modeling. IEEE Trans Med Imaging, 2010.
5. Boykov Y; Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans Pattern Anal Mach Intell, 2004.
6. Mendonça AM; Campilho A. Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Trans Med Imaging, 2006.
7. Chen X; Udupa JK; Bagci U; Zhuge Y; Yao J. Medical image segmentation by combining graph cuts and oriented active appearance models. IEEE Trans Image Process, 2012.
8. Chen X; Niemeijer M; Zhang L; Lee K; Abramoff MD; Sonka M. Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut. IEEE Trans Med Imaging, 2012.
9. Sanchez-Galeana C; Bowd C; Blumenthal EZ; Gokhale PA; Zangwill LM; Weinreb RN. Using optical imaging summary data to detect glaucoma. Ophthalmology, 2001.
10. Garhöfer G; Zawinka C; Resch H; Kothy P; Schmetterer L; Dorner GT. Reduced response of retinal vessel diameters to flicker stimulation in patients with diabetes. Br J Ophthalmol, 2004.
