Dense U-net Based on Patch-Based Learning Strategy for Retinal Vessel Segmentation
Chang Wang, Zongya Zhao, Qiongqiong Ren, Yongtao Xu, Yi Yu.
Abstract
Various retinal vessel segmentation methods based on convolutional neural networks have been proposed recently, and Dense U-net, a new semantic segmentation network, has been successfully applied to scene segmentation. Retinal vessels are tiny, and their features can be learned effectively by a patch-based learning strategy. In this study, we proposed a new retinal vessel segmentation framework based on Dense U-net and a patch-based learning strategy. During training, patches were obtained by a random extraction strategy, Dense U-net was adopted as the training network, and random transformations were used for data augmentation. During testing, each test image was divided into patches, the patches were predicted by the trained model, and the segmentation result was reconstructed by an overlapping-patches sequential reconstruction strategy. The proposed method was applied to the public DRIVE and STARE datasets to perform retinal vessel segmentation. Sensitivity (Se), specificity (Sp), accuracy (Acc), and area under the ROC curve (AUC) were adopted as evaluation metrics to verify the effectiveness of the proposed method. Compared with state-of-the-art methods, including unsupervised, supervised, and convolutional neural network (CNN) methods, the results demonstrate that our approach is competitive on these metrics. The method can obtain better segmentation results than human specialists and has clinical application value.
Keywords: Dense U-net; retinal vessel segmentation; convolutional neural network; data augmentation; patch-based learning strategy
Year: 2019 PMID: 33266884 PMCID: PMC7514650 DOI: 10.3390/e21020168
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
Figure 1. Overview of the proposed method.
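As a rough sketch of the training pipeline summarized in the abstract, the following Python (PyTorch) loop trains a patch-based segmentation model. Here `model` and `patch_loader` are assumed placeholders, and the optimizer, learning rate, and epoch count are illustrative choices rather than the paper's reported settings; this excerpt does not state which framework the authors used.

```python
import torch
import torch.nn as nn

def train(model, patch_loader, epochs=20, lr=1e-3, device="cpu"):
    """Patch-based training loop (illustrative hyperparameters).

    `model` is assumed to be a Dense U-net producing one logit per pixel;
    `patch_loader` is assumed to yield (patch, ground_truth) batches.
    """
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    # Cross-entropy variant for binary per-pixel labels; the paper also evaluates dice loss.
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for patches, truths in patch_loader:
            patches = patches.to(device)
            truths = truths.to(device).float()
            opt.zero_grad()
            loss = loss_fn(model(patches), truths)
            loss.backward()
            opt.step()
    return model
```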
Figure 2. Patch extraction. (a) Patch extraction strategy; (b) patch extraction result.
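The random extraction strategy can be sketched as below; the 48×48 patch size is an assumed value for illustration, as this excerpt does not state the size used.

```python
import numpy as np

def extract_random_patches(image, mask, n_patches, patch_size=48, rng=None):
    """Randomly extract paired (image, ground-truth) training patches.

    patch_size=48 is an assumed value; the paper's exact size may differ.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    patches, labels = [], []
    for _ in range(n_patches):
        # Top-left corner chosen uniformly so the patch stays inside the image.
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patches.append(image[y:y + patch_size, x:x + patch_size])
        labels.append(mask[y:y + patch_size, x:x + patch_size])
    return np.stack(patches), np.stack(labels)
```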
Figure 3. Patch-based training for semantic segmentation. (a) Dense U-net architecture; (b) dense block; (c) transition down.
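A minimal PyTorch sketch of the dense block and transition-down modules named in the caption, following common DenseNet conventions (BN, ReLU, 3×3 convolution, channel concatenation); the growth rate and layer count are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv; output is concatenated onto the input."""
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_ch)
        self.conv = nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv(torch.relu(self.bn(x)))
        return torch.cat([x, out], dim=1)  # dense connectivity

class DenseBlock(nn.Module):
    """Stack of dense layers; channels grow by growth_rate per layer."""
    def __init__(self, in_ch, growth_rate=12, n_layers=4):
        super().__init__()
        layers = [DenseLayer(in_ch + i * growth_rate, growth_rate)
                  for i in range(n_layers)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class TransitionDown(nn.Module):
    """1x1 conv followed by 2x2 max-pooling to halve spatial resolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.conv(x))
```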
Figure 4. Data augmentation results. (a) Image patches; (b) corresponding ground truths.
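The random-transformation augmentation can be illustrated by applying identical geometric transforms to a patch and its ground truth; the particular transforms shown (90° rotations and a horizontal flip) are assumptions for illustration.

```python
import numpy as np

def random_transform(patch, label, rng=None):
    """Apply the same random flip/rotation to a patch and its ground truth."""
    rng = rng or np.random.default_rng()
    k = rng.integers(0, 4)               # rotate by 0/90/180/270 degrees
    patch, label = np.rot90(patch, k), np.rot90(label, k)
    if rng.random() < 0.5:               # horizontal flip half the time
        patch, label = np.fliplr(patch), np.fliplr(label)
    return patch.copy(), label.copy()    # copy to make arrays contiguous
```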
Figure 5. Segmentation results on DRIVE by the proposed method. (a) Color fundus image; (b) ground truth; (c) probability map; (d) binarization result.
Figure 6. Segmentation results on STARE by the proposed method. (a) Color fundus image; (b) ground truth; (c) probability map; (d) binarization result.
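The overlapping-patches sequential reconstruction that produces the probability maps and binarization results above can be sketched as follows; the stride and the 0.5 binarization threshold are illustrative assumptions, and full border coverage would require padding the image first (not shown).

```python
import numpy as np

def reconstruct(image, predict_fn, patch_size=48, stride=16, threshold=0.5):
    """Overlapping-patch prediction and sequential reconstruction.

    `predict_fn` is assumed to map a (patch_size, patch_size) patch to a
    per-pixel vessel-probability map of the same shape.
    """
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float64)
    count = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            p = predict_fn(image[y:y + patch_size, x:x + patch_size])
            prob[y:y + patch_size, x:x + patch_size] += p
            count[y:y + patch_size, x:x + patch_size] += 1
    prob /= np.maximum(count, 1)         # average overlapping predictions
    return (prob >= threshold).astype(np.uint8), prob
```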
Performance of the proposed method on DRIVE and STARE.

| Method | Se (DRIVE) | Sp (DRIVE) | Acc (DRIVE) | AUC (DRIVE) | Se (STARE) | Sp (STARE) | Acc (STARE) | AUC (STARE) |
|---|---|---|---|---|---|---|---|---|
| Second human observer | 0.7760 | 0.9724 | 0.9472 | | 0.8952 | 0.9384 | 0.9349 | |
| 40,000 real | 0.7886 | 0.9716 | 0.9483 | 0.9686 | 0.7904 | 0.9716 | 0.9508 | 0.9684 |
| 40,000 real | 0.7986 | 0.9736 | 0.9511 | 0.9740 | 0.7914 | 0.9722 | 0.9538 | 0.9704 |

Se: sensitivity; Sp: specificity; Acc: accuracy; AUC: area under the ROC curve.
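For reference, the four reported metrics can be computed per pixel from a probability map and its binary ground truth, for example with NumPy and scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(prob, truth, threshold=0.5):
    """Compute Se, Sp, Acc, and AUC over all pixels of a fundus image."""
    pred = prob.ravel() >= threshold
    t = truth.ravel().astype(bool)
    tp = np.sum(pred & t)
    tn = np.sum(~pred & ~t)
    fp = np.sum(pred & ~t)
    fn = np.sum(~pred & t)
    se = tp / (tp + fn)                      # sensitivity (recall on vessels)
    sp = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)    # pixel accuracy
    auc = roc_auc_score(t, prob.ravel())     # area under the ROC curve
    return se, sp, acc, auc
```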
Figure 7. Comparison of segmentation results by Dense U-net and U-net. (a) Local region of a fundus image; (b) ground truth; (c) binarization result by Dense U-net with the dice loss function; (d) binarization result by U-net with the dice loss function.
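A common formulation of the dice loss referenced above, sketched in PyTorch; the smoothing constant is an illustrative choice and the paper's exact formulation may differ.

```python
import torch

def dice_loss(logits, target, eps=1.0):
    """Soft dice loss for binary vessel segmentation.

    `logits`: raw network output; `target`: {0,1} ground truth;
    `eps` is a smoothing constant (an illustrative choice).
    """
    prob = torch.sigmoid(logits).reshape(logits.shape[0], -1)
    tgt = target.reshape(target.shape[0], -1).float()
    inter = (prob * tgt).sum(dim=1)
    union = prob.sum(dim=1) + tgt.sum(dim=1)
    dice = (2 * inter + eps) / (union + eps)  # per-sample dice coefficient
    return 1 - dice.mean()
```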
Performance of U-net and Dense U-net with dice and cross-entropy loss functions on DRIVE and STARE.

| Method | Se (DRIVE) | Sp (DRIVE) | Acc (DRIVE) | AUC (DRIVE) | Se (STARE) | Sp (STARE) | Acc (STARE) | AUC (STARE) |
|---|---|---|---|---|---|---|---|---|
| Second human observer | 0.7760 | 0.9724 | 0.9472 | | 0.8952 | 0.9384 | 0.9349 | |
| U-net (dice loss) | 0.7937 | 0.9747 | 0.9517 | 0.9745 | 0.7882 | 0.9729 | 0.9547 | 0.9740 |
| U-net (cross-entropy) | 0.7758 | 0.9755 | 0.9500 | 0.9742 | 0.7838 | 0.9780 | 0.9535 | 0.9673 |
| Dense U-net (dice loss) | 0.7986 | 0.9736 | 0.9511 | 0.9740 | 0.7914 | 0.9722 | 0.9538 | 0.9704 |
| Dense U-net (cross-entropy) | 0.7886 | 0.9736 | 0.9483 | 0.9716 | 0.7896 | 0.9734 | 0.9475 | 0.9682 |
Comparison of the proposed method with state-of-the-art methods on DRIVE and STARE.

| Type | Method | Se (DRIVE) | Sp (DRIVE) | Acc (DRIVE) | AUC (DRIVE) | Se (STARE) | Sp (STARE) | Acc (STARE) | AUC (STARE) |
|---|---|---|---|---|---|---|---|---|---|
| | Second human observer | 0.7760 | 0.9724 | 0.9472 | | 0.8952 | 0.9384 | 0.9349 | |
| Unsupervised | Zhao [ ] | 0.7420 | 0.9820 | 0.9540 | 0.8620 | 0.7800 | 0.9780 | 0.9560 | 0.9673 |
| | Azzopardi [ ] | 0.7655 | 0.9704 | 0.9442 | 0.9614 | 0.7716 | 0.9701 | 0.9497 | 0.9563 |
| | Zhang [ ] | 0.7743 | 0.9725 | 0.9476 | 0.9636 | 0.7791 | 0.9758 | 0.9554 | 0.9748 |
| Supervised | Orlando [ ] | 0.7897 | 0.9684 | 0.9454 | 0.9506 | 0.7680 | 0.9738 | 0.9519 | 0.9570 |
| | Zhang [ ] | 0.7861 | 0.9712 | 0.9466 | 0.9703 | 0.7882 | 0.9729 | 0.9547 | 0.9740 |
| Deep learning | Hu [ ] | 0.7772 | 0.9793 | 0.9533 | 0.9759 | 0.7543 | 0.9814 | 0.9632 | 0.9751 |
| | Guo [ ] | 0.8990 | 0.9283 | 0.9199 | 0.9652 | | | | |
| | U-net | 0.7937 | 0.9747 | 0.9517 | 0.9745 | 0.7882 | 0.9729 | 0.9547 | 0.9740 |
| | Our proposed | 0.7986 | 0.9736 | 0.9511 | 0.9740 | 0.7914 | 0.9722 | 0.9538 | 0.9704 |