
Pairwise learning for medical image segmentation.

Renzhen Wang, Shilei Cao, Kai Ma, Yefeng Zheng, Deyu Meng

Abstract

Fully convolutional networks (FCNs) trained with abundant labeled data have proven to be a powerful and efficient solution for medical image segmentation. However, FCNs often fail to achieve satisfactory results due to the scarcity of labeled data and the significant variability of appearance in medical imaging. To address this challenge, this paper proposes a conjugate fully convolutional network (CFCN), in which pairwise samples are fed as input to capture a rich context representation and guide each other through a fusion module. To avoid the overfitting introduced by intra-class heterogeneity and boundary ambiguity when training samples are few, we propose to explicitly exploit prior information from the label space, termed proxy supervision. We further extend CFCN to a compact conjugate fully convolutional network (C2FCN), which has only one head fitting the proxy supervision and thus avoids the two additional decoder branches that CFCN requires to fit the ground truth of the input pair. In the test phase, the segmentation probability is inferred from the logical relation implied in the proxy supervision. Quantitative evaluation on the Liver Tumor Segmentation (LiTS) and Combined (CT-MR) Healthy Abdominal Organ Segmentation (CHAOS) datasets shows that the proposed framework achieves a significant performance improvement on both binary and multi-category segmentation, especially with a limited amount of training data. The source code is available at https://github.com/renzhenwang/pairwise_segmentation.
Copyright © 2020 Elsevier B.V. All rights reserved.
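The proxy supervision described above can be illustrated with a minimal sketch. This is an assumption-laden reading of the abstract, not the authors' implementation: for binary segmentation, suppose the joint ground truth of an input pair is encoded as a single 4-class proxy label per pixel, so one head can be trained against it, and each input's mask can be recovered afterwards by the inverse mapping. The function names `encode_proxy_label` and `decode_proxy_label` are hypothetical.

```python
import numpy as np

def encode_proxy_label(mask_a, mask_b):
    """Encode a pair of binary masks into one 4-class proxy label map.

    Hypothetical encoding: class index = 2 * mask_a + mask_b, i.e.
      0: (bg, bg), 1: (bg, fg), 2: (fg, bg), 3: (fg, fg).
    """
    return 2 * mask_a.astype(np.int64) + mask_b.astype(np.int64)

def decode_proxy_label(proxy, which="a"):
    """Recover either input's binary mask from the joint proxy label."""
    if which == "a":
        # classes 2 and 3 are the ones where mask_a is foreground
        return (proxy >= 2).astype(np.int64)
    # classes 1 and 3 are the ones where mask_b is foreground
    return (proxy % 2).astype(np.int64)

# Toy 2x2 example: encode a pair of masks, then invert the encoding.
a = np.array([[0, 1], [1, 0]])
b = np.array([[0, 0], [1, 1]])
proxy = encode_proxy_label(a, b)        # [[0, 2], [3, 1]]
assert (decode_proxy_label(proxy, "a") == a).all()
assert (decode_proxy_label(proxy, "b") == b).all()
```

Under this reading, "inferring segmentation by the learned logical relation" at test time could mean decoding the predicted joint classes back to per-image masks, e.g. after pairing a test image with another sample (or with itself).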

Keywords:  Conjugate fully convolutional network; Medical image segmentation; Pairwise segmentation; Proxy supervision

Year:  2020        PMID: 33197863     DOI: 10.1016/j.media.2020.101876

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


  2 in total

1.  O-Net: A Novel Framework With Deep Fusion of CNN and Transformer for Simultaneous Segmentation and Classification.

Authors:  Tao Wang; Junlin Lan; Zixin Han; Ziwei Hu; Yuxiu Huang; Yanglin Deng; Hejun Zhang; Jianchao Wang; Musheng Chen; Haiyan Jiang; Ren-Guey Lee; Qinquan Gao; Ming Du; Tong Tong; Gang Chen
Journal:  Front Neurosci       Date:  2022-06-02       Impact factor: 5.152

2.  Fully-automated root image analysis (faRIA).

Authors:  Narendra Narisetti; Michael Henke; Christiane Seiler; Astrid Junker; Jörn Ostermann; Thomas Altmann; Evgeny Gladilin
Journal:  Sci Rep       Date:  2021-08-06       Impact factor: 4.379

