Cross-attention multi-branch network for fundus diseases classification using SLO images.

Hai Xie1, Xianlu Zeng2, Haijun Lei3, Jie Du1, Jiantao Wang2, Guoming Zhang4, Jiuwen Cao5, Tianfu Wang1, Baiying Lei6.   

Abstract

Fundus disease classification is vital to human health. However, most existing methods detect diseases from single-angle fundus images, which lack pathological information. To address this limitation, this paper proposes a novel deep learning method for fundus disease classification using ultra-wide-field scanning laser ophthalmoscopy (SLO) images, which offer an ultra-wide field of view of 180–200°. The proposed deep model consists of a multi-branch network, an atrous spatial pyramid pooling (ASPP) module, a cross-attention module, and a depth-wise attention module. Specifically, the multi-branch network employs the ResNet-34 model as the backbone to extract feature information, where the two-branch ResNet-34 model is followed by the ASPP module, which extracts multi-scale spatial contextual features by setting different dilation rates. The depth-wise attention module provides a global attention map from the multi-branch network, which enables the network to focus on salient targets of interest. The cross-attention module adopts a cross-fusion mode to fuse the channel and spatial attention maps from the two-branch ResNet-34 model, which enhances the representation of disease-specific features. Extensive experiments on our collected SLO images and two publicly available datasets demonstrate that the proposed method outperforms state-of-the-art methods and achieves promising classification performance on fundus diseases.
Copyright © 2021 Elsevier B.V. All rights reserved.
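The ASPP module described in the abstract applies several dilated (atrous) convolutions in parallel, each with a different dilation rate, and combines their outputs to capture multi-scale spatial context. A minimal single-channel sketch of this idea in plain numpy is shown below; it is an illustrative toy implementation under assumed shapes and kernels, not the authors' actual network code (which uses ResNet-34 feature maps and learned filters).

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2D convolution with dilation (atrous convolution).

    x      : (H, W) single-channel feature map
    kernel : (k, k) filter weights
    rate   : dilation rate (rate=1 is an ordinary convolution)
    """
    k = kernel.shape[0]
    eff = rate * (k - 1) + 1          # effective receptive-field size
    pad = eff // 2                    # zero-padding for 'same' output size
    xp = np.pad(x, pad)
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for a in range(k):
                for b in range(k):
                    # taps are spaced `rate` pixels apart: that is the dilation
                    acc += kernel[a, b] * xp[i + a * rate, j + b * rate]
            out[i, j] = acc
    return out

def aspp(x, kernels, rates):
    """ASPP-style parallel branch: same input, different dilation rates,
    outputs stacked along a new channel axis for later fusion."""
    return np.stack([dilated_conv2d(x, k, r) for k, r in zip(kernels, rates)])
```

Because each branch shares the input but samples it at a different spacing, larger rates see wider context at no extra parameter cost, which is why the paper sets several dilation rates in the ASPP module after the two-branch backbone.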

Keywords:  ASPP; Cross-attention; Depth-wise attention; Fundus diseases classification; Multi-branch network; SLO

Year:  2021        PMID: 33798993     DOI: 10.1016/j.media.2021.102031

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Related articles: 2 in total

1.  Multi-Label Fundus Image Classification Using Attention Mechanisms and Feature Fusion.

Authors:  Zhenwei Li; Mengying Xu; Xiaoli Yang; Yanqi Han
Journal:  Micromachines (Basel)       Date:  2022-06-15       Impact factor: 3.523

2.  Predicting Optical Coherence Tomography-Derived High Myopia Grades From Fundus Photographs Using Deep Learning.

Authors:  Zhenquan Wu; Wenjia Cai; Hai Xie; Shida Chen; Yanbing Wang; Baiying Lei; Yingfeng Zheng; Lin Lu
Journal:  Front Med (Lausanne)       Date:  2022-03-03
