Jixun Gao, Quanzhen Huang, Zhendong Gao, Suxia Chen.
Abstract
Aiming at the problem of insufficient detail in retinal blood vessel segmentation by current methods, this paper proposes a multiscale feature fusion residual network based on dual attention. Specifically, a feature fusion residual module with adaptive calibration of weight features is designed, which avoids gradient dispersion and network degradation while effectively extracting image details. The spatial attention (SA) module and the efficient channel attention (ECA) module are applied repeatedly in the backbone feature extraction network to adaptively select salient positions and generate more discriminative feature representations; at the same time, information from different levels of the network is fused, exploiting both long-range and short-range features. By aggregating low-level and high-level feature information, the method effectively improves segmentation performance. Experimental results show that the proposed method achieves classification accuracies of 0.9795 and 0.9785 on the STARE and DRIVE datasets, respectively, outperforming current mainstream methods.
Year: 2022 PMID: 35844462 PMCID: PMC9279073 DOI: 10.1155/2022/8111883
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.809
Figure 1The training and testing process of the proposed network.
Figure 2Spatial channel attention network (SA).
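The SA module in Figure 2 gates each spatial location of the feature map. The paper's exact layout is not given here, so this is a minimal NumPy sketch assuming the common design of channel-wise mean/max pooling followed by a sigmoid gate; a small learned convolution would normally fuse the two pooled maps, and a fixed average stands in for it.

```python
import numpy as np

def spatial_attention(x):
    """Spatial attention sketch. x: feature map of shape (C, H, W)."""
    # Channel-wise average and max descriptors, each of shape (H, W).
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    # A learned conv would fuse these two maps; a fixed average stands in here.
    s = 0.5 * (avg + mx)
    gate = 1.0 / (1.0 + np.exp(-s))   # sigmoid -> per-pixel weight in (0, 1)
    # Every channel is rescaled by the same spatial gate.
    return x * gate[None, :, :]
```

Because the gate lies in (0, 1), the module can only suppress locations, never amplify them, which is what lets the network "select the focus position".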
Figure 3Lightweight attention network (ECA-Net).
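The ECA module of Figure 3 follows the published ECA-Net idea: global average pooling produces one descriptor per channel, and a 1-D convolution across neighbouring channels (with no dimensionality reduction) produces the channel weights. A minimal NumPy sketch; the uniform kernel here is fixed for illustration, whereas in the network it is learned.

```python
import numpy as np

def eca(x, k=3):
    """Efficient Channel Attention sketch. x: feature map of shape (C, H, W)."""
    C = x.shape[0]
    # Global average pooling: one descriptor per channel.
    y = x.mean(axis=(1, 2))                       # shape (C,)
    # 1-D convolution of size k across neighbouring channels.
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    kernel = np.full(k, 1.0 / k)                  # illustrative fixed kernel
    w = np.array([np.dot(yp[i:i + k], kernel) for i in range(C)])
    w = 1.0 / (1.0 + np.exp(-w))                  # sigmoid gate per channel
    # Reweight each channel of the feature map.
    return x * w[:, None, None]
```

The appeal of ECA is its cost: only k extra weights per module, which is why it suits repeated use along the backbone.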
Figure 4Residual networks with lightweight attention modules.
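The residual structure of Figure 4 is what counters the gradient dispersion and network degradation mentioned in the abstract: the identity shortcut keeps a direct gradient path around the attention-weighted branch. A minimal sketch with a hypothetical scalar calibration gate (the paper's adaptive calibration weights are learned; a sigmoid of the branch's mean activation stands in here).

```python
import numpy as np

def fused_residual(x, fx):
    """Residual fusion sketch. x: identity branch, fx: transformed branch (same shape)."""
    # Hypothetical adaptive calibration weight in (0, 1).
    alpha = 1.0 / (1.0 + np.exp(-fx.mean()))
    # Identity shortcut: gradients always flow through the `x` term.
    return x + alpha * fx
```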
Evaluation metrics of different methods on the DRIVE and STARE datasets.
| Methods | DRIVE | | | STARE | | |
|---|---|---|---|---|---|---|
| | Acc | Sen | Spe | Acc | Sen | Spe |
| Meng et al. [ ] | 0.9383 | 0.7871 | 0.9664 | 0.8871 | 0.7372 | 0.9391 |
| Zhou et al. [ ] | 0.9469 | 0.8078 | 0.9674 | 0.9585 | 0.8065 | 0.9761 |
| Jiang et al. [ ] | 0.9642 | 0.8201 | 0.9843 | 0.9667 | 0.7991 | 0.9854 |
| Jiang et al. [ ] | 0.9608 | 0.8274 | 0.9775 | 0.9771 | | 0.9878 |
| Li et al. [ ] | 0.9678 | 0.7921 | 0.9810 | 0.9678 | 0.8392 | 0.9823 |
| Proposed | 0.9785 | | | 0.9795 | 0.8368 | |
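The Acc/Sen/Spe columns are the standard pixel-wise confusion-matrix metrics for vessel segmentation (vessel pixels as the positive class). They can be computed as:

```python
def acc_sen_spe(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from pixel-level confusion counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)  # overall fraction classified correctly
    sen = tp / (tp + fn)                   # fraction of vessel pixels detected
    spe = tn / (tn + fp)                   # fraction of background pixels kept
    return acc, sen, spe
```

Sensitivity is the hardest of the three for thin vessels, which is why the fine-detail focus of the proposed network targets it.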
Figure 5Visualization results of different methods on DRIVE and STARE datasets: (a) original image; (b) ground truth; (c) literature [29]; (d) literature [30]; (e) literature [31]; (f) literature [32]; (g) literature [33]; (h) proposed.