
Recurrent feature fusion learning for multi-modality PET-CT tumor segmentation.

Lei Bi, Michael Fulham, Nan Li, Qiufang Liu, Shaoli Song, David Dagan Feng, Jinman Kim.

Abstract

BACKGROUND AND OBJECTIVE: [18F]-fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis in computer-aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state of the art in tumor segmentation. Few FCN-based methods support multi-modality images, and current methods have primarily focused on fusing multi-modality image features at various stages: early fusion, where the multi-modality image features are fused prior to the FCN; late fusion, where the resultant features are fused; and hyper fusion, where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherently limited freedom to fuse complementary multi-modality image features. Hyper-fusion methods learn different image features across different image feature scales, which can result in inaccurate segmentations, in particular where tumors have heterogeneous textures.
METHODS: We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases that progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at the individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these results, which minimizes the risk of inconsistent feature learning.
RESULTS: We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early fusion, late fusion and hyper fusion) and to state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D-UNet). Our results show that the RFN provides more accurate segmentation than the existing methods and generalizes to different datasets.
CONCLUSIONS: We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features, which refines the tumor segmentation results. We also find that our RFN produces consistent segmentation results across different network architectures.
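The core idea of the abstract, iteratively fusing complementary PET and CT features around an intermediary segmentation mask, can be sketched with a toy example. This is purely an illustration under assumed inputs (precomputed 2D feature maps, a fixed fusion weighting, and a simple threshold), not the authors' RFN implementation:

```python
import numpy as np

def recurrent_fusion_sketch(pet, ct, n_phases=3, threshold=0.5):
    """Toy recurrent fusion: each phase fuses PET (metabolic) and CT
    (anatomical) feature maps, focused around the current intermediary
    mask, then refines the mask for the next phase.
    Hypothetical illustration only; the RFN learns these steps with FCNs."""
    mask = np.ones_like(pet)  # phase 0: attend everywhere
    for _ in range(n_phases):
        # fuse the complementary modalities, weighted by the current mask
        # (the 0.6/0.4 weighting is an arbitrary placeholder)
        fused = mask * (0.6 * pet + 0.4 * ct)
        # derive the intermediary segmentation result for this phase
        mask = (fused > threshold).astype(float)
    return mask
```

In the real method the fusion and refinement are learned convolutional operations and the intermediary results steer where features are learned; here the mask simply converges onto regions where both modalities agree.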
Copyright © 2021. Published by Elsevier B.V.

Keywords:  Fully convolutional networks (FCNs); Positron emission tomography–computed tomography (PET-CT); Segmentation

Year:  2021        PMID: 33744750     DOI: 10.1016/j.cmpb.2021.106043

Source DB:  PubMed          Journal:  Comput Methods Programs Biomed        ISSN: 0169-2607            Impact factor:   5.428


Related articles:  2 in total

1.  A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions.

Authors:  Sergios Gatidis; Tobias Hepp; Marcel Früh; Christian La Fougère; Konstantin Nikolaou; Christina Pfannenberg; Bernhard Schölkopf; Thomas Küstner; Clemens Cyran; Daniel Rubin
Journal:  Sci Data       Date:  2022-10-04       Impact factor: 8.501

2.  Multi-Focus Image Fusion Based on Convolution Neural Network for Parkinson's Disease Image Classification.

Authors:  Yin Dai; Yumeng Song; Weibin Liu; Wenhe Bai; Yifan Gao; Xinyang Dong; Wenbo Lv
Journal:  Diagnostics (Basel)       Date:  2021-12-17
