Hai Xie, Xianlu Zeng, Haijun Lei, Jie Du, Jiantao Wang, Guoming Zhang, Jiuwen Cao, Tianfu Wang, Baiying Lei.
Abstract
Fundus disease classification is vital for human health. However, most existing methods detect diseases from single-angle fundus images, which lack pathological information. To address this limitation, this paper proposes a novel deep learning method for different fundus disease classification tasks using ultra-wide-field scanning laser ophthalmoscopy (SLO) images, which cover a field of view of 180-200°. The proposed deep model consists of a multi-branch network, an atrous spatial pyramid pooling (ASPP) module, a cross-attention module, and a depth-wise attention module. Specifically, the multi-branch network employs the ResNet-34 model as the backbone to extract feature information, and the two-branch ResNet-34 model is followed by the ASPP module, which extracts multi-scale spatial contextual features by setting different dilation rates. The depth-wise attention module provides a global attention map from the multi-branch network, which enables the network to focus on the salient targets of interest. The cross-attention module adopts a cross-fusion mode to fuse the channel and spatial attention maps from the two-branch ResNet-34 model, which enhances the representation ability of the disease-specific features. Extensive experiments on our collected SLO images and two publicly available datasets demonstrate that the proposed method outperforms state-of-the-art methods and achieves promising classification performance on fundus diseases.
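The ASPP idea described in the abstract, namely running parallel convolutions with different dilation rates over the same feature map and fusing the results to capture multi-scale context, can be sketched in miniature. The following is a hedged, pure-Python 1-D toy (the paper applies 2-D ASPP to CNN feature maps inside a two-branch ResNet-34); the kernel values, dilation rates, and averaging fusion are illustrative assumptions, not the authors' implementation.

```python
# Toy 1-D sketch of atrous spatial pyramid pooling (ASPP): the same kernel
# is applied at several dilation rates, so each branch "sees" a different
# receptive field, and the branch outputs are fused. All constants below
# are illustrative assumptions, not values from the paper.

def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D convolution whose taps are spaced `dilation` apart."""
    k = len(kernel)
    half = (k // 2) * dilation  # zero-padding so output length == input length
    padded = [0.0] * half + list(signal) + [0.0] * half
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j in range(k):
            acc += kernel[j] * padded[i + j * dilation]
        out.append(acc)
    return out

def aspp_1d(signal, kernel, dilations=(1, 2, 4)):
    """Run the kernel at several dilation rates and fuse by averaging.
    (Real ASPP concatenates the branches and projects with a 1x1 conv;
    averaging keeps this toy simple.)"""
    branches = [dilated_conv1d(signal, kernel, d) for d in dilations]
    return [sum(vals) / len(vals) for vals in zip(*branches)]

feature_row = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
smooth_kernel = [0.25, 0.5, 0.25]  # simple averaging kernel (assumption)
fused = aspp_1d(feature_row, smooth_kernel)
```

A larger dilation rate widens the receptive field without adding parameters, which is why ASPP branches with several dilation rates can capture both fine lesion detail and broader retinal context from the same backbone features.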
Keywords: ASPP; Cross-attention; Depth-wise attention; Fundus diseases classification; Multi-branch network; SLO
Year: 2021 PMID: 33798993 DOI: 10.1016/j.media.2021.102031
Source DB: PubMed Journal: Med Image Anal ISSN: 1361-8415 Impact factor: 8.545