Wenjing Li, Yalong Xiao, Hangyu Hu, Chengzhang Zhu, Han Wang, Zixi Liu, Arun Kumar Sangaiah.
Abstract
Retinal vessel extraction plays an important role in the diagnosis of several medical pathologies, such as diabetic retinopathy and glaucoma. In this article, we propose an efficient method based on the B-COSFIRE filter to tackle two challenging problems in fundus vessel segmentation: (i) the difficulty of improving segmentation performance and time efficiency together and (ii) the difficulty of distinguishing thin vessels from vessel-like noise. In the proposed method, we first used contrast limited adaptive histogram equalization (CLAHE) for contrast enhancement, then extracted the region of interest (ROI) by thresholding the luminosity plane of the CIELab version of the original RGB image. We employed a set of B-COSFIRE filters to detect vessels and morphological filters to remove noise. Binary thresholding was used for vessel segmentation. Finally, a post-processing method based on connected domains was used to eliminate unconnected non-vessel pixels and to obtain the final vessel image. Based on the binary vessel maps obtained, we evaluated the performance of the proposed algorithm on three publicly available databases (DRIVE, STARE, and CHASEDB1) of manually labeled images. The proposed method requires little processing time (around 12 s per image) and achieves average accuracy, sensitivity, and specificity of 0.9604, 0.7339, and 0.9847 on the DRIVE database and 0.9558, 0.8003, and 0.9705 on the STARE database, respectively. The results demonstrate that the proposed method has potential for use in computer-aided diagnosis.
Keywords: COSFIRE; computer-aided diagnosis; medical image segmentation; postprocess; retinal vessel segmentation
Year: 2022 PMID: 36159307 PMCID: PMC9500397 DOI: 10.3389/fpubh.2022.914973
Source DB: PubMed Journal: Front Public Health ISSN: 2296-2565
Figure 1. (A) Color fundus image from the DRIVE database. (B) Corresponding manually segmented image.
Related work on retinal vessel segmentation methods.

| Category | Study | Year | Method | Techniques |
|---|---|---|---|---|
| Supervised | Soares et al. | 2006 | Combining two-dimensional Gabor wavelet multi-scale analysis with a Bayesian classifier | 2-D Gabor wavelet, Bayesian classifier with Gaussian mixtures |
| Supervised | Marin et al. | 2011 | A multi-layer neural network classifying pixels based on moment-invariant features | Multi-layer feedforward network |
| Supervised | Fraz et al. | 2012 | A classification scheme based on an ensemble of boosted and bagged decision trees | Boosted and bagged decision trees |
| Supervised | Aslani and Sarnel | 2016 | A random forest classifier based on a hybrid feature vector for pixel characterization | Random forest classifier, morphological top-hat, B-COSFIRE filter, multi-scale |
| Supervised | Strisciuglio et al. | 2016 | Transforming and re-scaling the responses of a bank of selected B-COSFIRE filters to train a support vector machine classifier | B-COSFIRE filters, SVM classifier, GMLVQ, genetic algorithm |
| Supervised | Zhu et al. | 2016 | Extracting a 39-dimensional feature vector to train an extreme learning machine classifier | Classification and regression tree (CART) |
| Unsupervised | Al-Diri et al. | 2009 | An active contour model for retinal vessel measurement and segmentation | Active contour model, growing algorithm, junction resolution algorithm |
| Unsupervised | Liu and Sun | 1993 | A multi-concavity modeling approach with a differentiable concavity measure | Adaptive tracking algorithm |
| Unsupervised | Azzopardi et al. | 2015 | Proposing the B-COSFIRE approach for retinal vessel segmentation | B-COSFIRE filters, CLAHE, masking |
| Unsupervised | Khan et al. | 2016 | A vasculature segmentation method applying a pixel-wise AND operation between the vessel location map and the B-COSFIRE segmentation image | CLAHE, morphological filters, low-pass difference image, adaptive thresholding, VLM, B-COSFIRE filters |
| Unsupervised | Bahadar Khan et al. | 2016 | A morphological Hessian-based approach with region-based Otsu thresholding | CLAHE, morphological filters, Hessian eigenvalue transformation, Otsu thresholding |
| Unsupervised | Khan et al. | 2016 | Proposing the Modified Iterative Self-Organizing Data Analysis Technique (MISODATA) for vessel segmentation | CLAHE, MISODATA algorithm, post-processing |
| Unsupervised | Khan et al. | 2020 | A framework applying MISODATA and B-COSFIRE filters with fast execution and competitive results | CLAHE, MISODATA algorithm, B-COSFIRE filters |
| Unsupervised | Ooi et al. | 2021 | Applying edge detection based on the Canny algorithm | CLAHE, Canny algorithm |
| Deep learning | Melinščak et al. | 2015 | Using a deep neural network to segment retinal vessels | Deep max-pooling convolutional neural networks |
| Deep learning | Khalaf et al. | 2016 | Using convolutional neural networks for deep feature learning | Deep convolutional neural networks |
| Deep learning | Fu et al. | 2016 | A deep learning network with fully-connected conditional random fields for retinal vessel segmentation | FCN, conditional random fields (CRFs) |
| Deep learning | Yue et al. | 2019 | An improved U-Net with a multi-scale input layer and dense blocks | U-Net |
| Deep learning | Cheng et al. | 2020 | Adding dense blocks to the U-Net network | U-Net |
| Deep learning | Li et al. | 2021 | A scheme combining U-Net and DenseNet | U-Net, DenseNet, CLAHE |
| Deep learning | Feng et al. | 2022 | An encoder-decoder structure | Inception, multiple pyramid pooling |
Figure 2. Flowchart of the proposed method: first, we use contrast limited adaptive histogram equalization (CLAHE) for contrast enhancement. Second, we threshold the luminosity plane of the CIELab version of the original RGB image to produce a mask. Third, we apply B-COSFIRE filters and morphological filters to detect blood vessels and remove noise. Fourth, binary thresholding is used for vessel segmentation. Finally, unconnected non-vessel pixels are eliminated by post-processing to obtain the final segmentation map.
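The first two stages of this flowchart (CLAHE enhancement and ROI masking) can be sketched in Python with OpenCV as follows. This is a minimal illustration, not the authors' released code; the CLAHE parameters (clipLimit=2.0, tileGridSize=(8, 8)) and the luminosity threshold of 10 are assumed values.

```python
import cv2
import numpy as np

def preprocess(rgb):
    """CLAHE on the green channel plus a CIELab luminosity mask.

    rgb: H x W x 3 uint8 fundus image in RGB channel order.
    Sketch of the first two flowchart stages; parameters assumed.
    """
    green = rgb[:, :, 1]  # green channel offers the best vessel contrast
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)  # contrast-limited adaptive histogram equalization
    lab = cv2.cvtColor(rgb, cv2.COLOR_RGB2LAB)
    luminosity = lab[:, :, 0]  # L plane of the CIELab image
    mask = (luminosity > 10).astype(np.uint8)  # threshold gives field-of-view mask
    return cv2.bitwise_and(enhanced, enhanced, mask=mask), mask
```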
Performance measures of vessel segmentation.
| Measure | Definition |
|---|---|
| Sensitivity (Se) | TP / (TP + FN) |
| Specificity (Sp) | TN / (TN + FP) |
| Accuracy (Acc) | (TP + TN) / (TP + FP + TN + FN) |
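Given a binary prediction and a manually labeled ground truth, these measures follow directly from the confusion-matrix counts. Below is a minimal NumPy sketch, assuming evaluation is restricted to the field-of-view mask (a common convention for these databases; the paper's exact protocol is not stated here).

```python
import numpy as np

def segmentation_metrics(pred, gt, fov_mask):
    """Se, Sp, and Acc over pixels inside the field of view."""
    p = pred[fov_mask > 0].astype(bool)    # predicted vessel pixels
    g = gt[fov_mask > 0].astype(bool)      # manually labeled vessel pixels
    tp = np.sum(p & g)                     # true positives
    tn = np.sum(~p & ~g)                   # true negatives
    fp = np.sum(p & ~g)                    # false positives
    fn = np.sum(~p & g)                    # false negatives
    se = tp / (tp + fn)                    # Sensitivity
    sp = tn / (tn + fp)                    # Specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # Accuracy
    return se, sp, acc
```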
Performance of proposed method (DRIVE).
| Statistic | Time (s) | Se | Sp | Acc |
|---|---|---|---|---|
| Average | 9.1852 | 0.7339 | 0.9847 | 0.9604 |
| Maximum | 10.1094 | 0.8455 | 0.9965 | 0.9696 |
| Minimum | 8.2813 | 0.5006 | 0.9737 | 0.9543 |
Performance of proposed method (STARE).
| Statistic | Time (s) | Se | Sp | Acc |
|---|---|---|---|---|
| Average | 12.2773 | 0.8003 | 0.9705 | 0.9558 |
| Maximum | 15.5938 | 0.9250 | 0.9900 | 0.9764 |
| Minimum | 10.7118 | 0.6426 | 0.9313 | 0.9245 |
Performance of proposed method (CHASEDB1).
| Statistic | Time (s) | Se | Sp | Acc | |
|---|---|---|---|---|---|
| Average | 9.3600 | 0.5921 | 0.9842 | 0.9606 | 0.5886 |
| Maximum | 10.4271 | 0.7864 | 0.9959 | 0.9717 | 0.7571 |
| Minimum | 7.7188 | 0.4959 | 0.9313 | 0.9554 | 0.3970 |
Figure 3. Stepwise illustration of the proposed method. (A) Color retinal fundus image from the DRIVE database. (B) Green-channel image. (C) Result after CLAHE. (D) Result after B-COSFIRE filtering. (E) Result after morphological filtering. (F) Result after binary thresholding. (G) Final segmented image.
Figure 5. Stepwise illustration of the proposed method. (A) Color retinal fundus image from the CHASEDB1 database. (B) Green-channel image. (C) Result after CLAHE. (D) Result after B-COSFIRE filtering. (E) Result after morphological filtering. (F) Result after binary thresholding. (G) Final segmented image.
Comparison with other methods (DRIVE).
| Type | Study | Year | Time | Se | Sp | Acc |
|---|---|---|---|---|---|---|
| Supervised | Soares et al. | 2006 | 3 min | 0.7332 | 0.9782 | 0.9614 |
| Supervised | Marin et al. | 2011 | 1.5 min | 0.7067 | 0.9801 | 0.9588 |
| Supervised | Fraz et al. | 2012 | 2 min | 0.7406 | 0.9807 | 0.9747 |
| Supervised | Aslani and Sarnel | 2016 | - | 0.7545 | 0.9801 | 0.9513 |
| Supervised | Strisciuglio et al. | 2016 | 1.5 min | 0.7731 | 0.9708 | 0.9453 |
| Supervised | Zhu et al. | 2016 | 51 s | 0.7462 | 0.9838 | 0.9618 |
| Unsupervised | Al-Diri et al. | 2009 | 11 min | 0.7282 | 0.9551 | - |
| Unsupervised | Liu and Sun | 2010 | 13 min | - | - | 0.9472 |
| Unsupervised | Azzopardi et al. | 2015 | 10 s | 0.7655 | 0.9704 | 0.9442 |
| Unsupervised | Khan et al. | 2016 | 10.6 s | 0.7155 | 0.9805 | 0.9579 |
| Unsupervised | Bahadar Khan et al. | 2016 | 1.56 s | 0.776 | 0.972 | 0.947 |
| Unsupervised | Khan et al. | 2016 | - | 0.780 | 0.972 | 0.952 |
| Unsupervised | Khan et al. | 2016 | 6.1 s | 0.747 | 0.980 | 0.960 |
| Unsupervised | Khan et al. | 2020 | 5.5 s | 0.766 | 0.972 | 0.954 |
| Deep learning | Melinščak et al. | 2015 | - | 0.7276 | - | 0.9466 |
| Deep learning | Khalaf et al. | 2016 | - | 0.8467 | 0.9494 | 0.9403 |
| Deep learning | Fu et al. | 2016 | - | 0.7294 | - | 0.9470 |
| Deep learning | Cheng et al. | 2020 | - | 0.7676 | 0.9834 | 0.9559 |
| Unsupervised | Proposed method | 2022 | 9.19 s | 0.7339 | 0.9847 | 0.9604 |
Comparison with other methods (STARE).
| Type | Study | Year | Time | Se | Sp | Acc |
|---|---|---|---|---|---|---|
| Supervised | Soares et al. | 2006 | 3 min | 0.7207 | 0.9747 | 0.9480 |
| Supervised | Marin et al. | 2011 | 1.5 min | 0.6944 | 0.9819 | 0.9526 |
| Supervised | Fraz et al. | 2012 | 2 min | 0.7548 | 0.9763 | 0.9534 |
| Supervised | Aslani and Sarnel | 2016 | - | 0.7556 | 0.9837 | 0.9789 |
| Supervised | Strisciuglio et al. | 2016 | 2.5 min | 0.7668 | 0.9711 | 0.9545 |
| Supervised | Zhu et al. | 2016 | 51 s | - | - | - |
| Unsupervised | Al-Diri et al. | 2009 | 11 min | 0.7251 | 0.9681 | - |
| Unsupervised | Liu and Sun | 2010 | 13 min | - | - | 0.9567 |
| Unsupervised | Azzopardi et al. | 2015 | 10 s | 0.7716 | 0.9701 | 0.9497 |
| Unsupervised | Khan et al. | 2016 | 10.6 s | 0.7728 | 0.9649 | 0.9518 |
| Unsupervised | Bahadar Khan et al. | 2016 | 1.56 s | 0.895 | 0.939 | 0.935 |
| Unsupervised | Khan et al. | 2016 | - | 0.745 | 0.74 | 0.957 |
| Unsupervised | Khan et al. | 2016 | 6.1 s | 0.778 | 0.966 | 0.951 |
| Unsupervised | Khan et al. | 2020 | 5.5 s | 0.792 | 0.997 | 0.996 |
| Deep learning | Melinščak et al. | 2015 | - | 0.7276 | - | 0.9466 |
| Deep learning | Khalaf et al. | 2016 | - | 0.8467 | 0.9494 | 0.9403 |
| Deep learning | Fu et al. | 2016 | - | 0.7294 | - | 0.9470 |
| Deep learning | Cheng et al. | 2020 | - | - | - | - |
| Unsupervised | Proposed method | 2022 | 12.28 s | 0.8003 | 0.9705 | 0.9558 |
B-COSFIRE vessel segmentation algorithm.
1. Extract the green-channel image G from the original RGB retinal image.
2. Convert the original RGB retinal image to CIELab to obtain image L.
3. Apply the CLAHE algorithm to the green-channel image G.
4. Threshold the luminosity plane of L to produce the mask M.
5. Mask G with M to produce the input I for the subsequent vessel segmentation steps.
6. Apply B-COSFIRE filters to I to produce I1.
7. Apply morphological (top-hat) filters to I1 to produce I2.
8. Apply binary thresholding to I2 to produce I3.
9. Post-process I3 to eliminate unconnected non-vessel pixels.
10. Return: final segmented vessel map.
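A hedged end-to-end sketch of steps 6-10 in Python/scikit-image is shown below. A Frangi vesselness filter stands in for the B-COSFIRE filter bank (the filter actually used in the paper), and the top-hat radius, thresholding rule, and minimum component size are assumptions for illustration.

```python
import numpy as np
from skimage import filters, morphology

def segment_vessels(enhanced, fov_mask):
    """Steps 6-10 of the algorithm above (parameters assumed)."""
    # Step 6: vessel enhancement; Frangi vesselness is a stand-in
    # for the B-COSFIRE filter bank.
    response = filters.frangi(enhanced.astype(float) / 255.0)
    # Step 7: top-hat filtering to suppress slowly varying background
    # (disk radius of 8 px is an assumed value).
    tophat = morphology.white_tophat(response, morphology.disk(8))
    # Step 8: binary thresholding (Otsu over in-FOV responses, assumed).
    binary = tophat > filters.threshold_otsu(tophat[fov_mask > 0])
    binary &= fov_mask.astype(bool)
    # Step 9: connected-domain post-processing; drop unconnected blobs
    # smaller than an assumed 50 px.
    cleaned = morphology.remove_small_objects(binary, min_size=50)
    # Step 10: return the final segmented vessel map.
    return cleaned.astype(np.uint8)
```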