
DSAL: Deeply Supervised Active Learning From Strong and Weak Labelers for Biomedical Image Segmentation.

Ziyuan Zhao, Zeng Zeng, Kaixin Xu, Cen Chen, Cuntai Guan.   

Abstract

Image segmentation is one of the most essential biomedical image processing problems for different imaging modalities, including microscopy and X-ray in the Internet-of-Medical-Things (IoMT) domain. However, annotating biomedical images is knowledge-driven, time-consuming, and labor-intensive, making it difficult to obtain abundant labels at limited cost. Active learning strategies ease the burden of human annotation by querying only a subset of the training data for labeling. Despite receiving attention, most active learning methods still incur high computational costs and use unlabeled data inefficiently. They also tend to ignore the intermediate knowledge within networks. In this work, we propose a deep active semi-supervised learning framework, DSAL, combining active learning and semi-supervised learning strategies. In DSAL, a new criterion based on the deep supervision mechanism is proposed to select informative samples with high and low uncertainties for strong and weak labelers, respectively. The internal criterion leverages the disagreement of intermediate features within the deep learning network for active sample selection, which subsequently reduces the computational costs. We use the proposed criteria to select samples for strong and weak labelers to produce oracle labels and pseudo labels simultaneously at each active learning iteration in an ensemble learning manner, and the framework can be deployed and examined on an IoMT platform. Extensive experiments on multiple medical image datasets demonstrate the superiority of the proposed method over state-of-the-art active learning methods.
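The selection criterion described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each unlabeled image has already been scored by D deep-supervision heads, and uses the per-pixel variance across heads as the disagreement measure. The function name `select_samples` and the array shapes are hypothetical.

```python
import numpy as np

def select_samples(head_probs, k_strong, k_weak):
    """Rank unlabeled samples by disagreement among deep-supervision heads.

    head_probs: array of shape (N, D, H, W) -- foreground probabilities
    predicted by D auxiliary heads for N unlabeled images (hypothetical).
    Returns indices routed to the strong (oracle) and weak (pseudo-label)
    labelers, respectively.
    """
    # Disagreement score: mean per-pixel variance across the D heads.
    disagreement = head_probs.var(axis=1).mean(axis=(1, 2))  # shape (N,)
    order = np.argsort(disagreement)
    strong_idx = order[-k_strong:]   # most uncertain -> human annotation
    weak_idx = order[:k_weak]        # most confident -> pseudo labels
    return strong_idx, weak_idx

# Toy run: 10 images, 4 supervision heads, 8x8 probability maps.
rng = np.random.default_rng(0)
probs = rng.random((10, 4, 8, 8))
strong, weak = select_samples(probs, k_strong=2, k_weak=3)
```

Because the two sets are drawn from opposite ends of the same ranking, no forward pass through an external ensemble is needed, which is how the internal criterion avoids the extra computational cost the abstract mentions.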


Year:  2021        PMID: 33460386     DOI: 10.1109/JBHI.2021.3052320

Source DB:  PubMed          Journal:  IEEE J Biomed Health Inform        ISSN: 2168-2194            Impact factor:   5.772


  2 in total

1.  HARNU-Net: Hierarchical Attention Residual Nested U-Net for Change Detection in Remote Sensing Images.

Authors:  Haojin Li; Liejun Wang; Shuli Cheng
Journal:  Sensors (Basel)       Date:  2022-06-19       Impact factor: 3.847

2.  Active deep learning from a noisy teacher for semi-supervised 3D image segmentation: Application to COVID-19 pneumonia infection in CT.

Authors:  Mohammad Arafat Hussain; Zahra Mirikharaji; Mohammad Momeny; Mahmoud Marhamati; Ali Asghar Neshat; Rafeef Garbi; Ghassan Hamarneh
Journal:  Comput Med Imaging Graph       Date:  2022-10-07       Impact factor: 7.422


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.