Literature DB >> 34274689

Incorporating the hybrid deformable model for improving the performance of abdominal CT segmentation via multi-scale feature fusion network.

Xiaokun Liang, Na Li, Zhicheng Zhang, Jing Xiong, Shoujun Zhou, Yaoqin Xie.

Abstract

Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis, and improve the efficiency of many clinical workflows. 3-D Convolutional Neural Networks (CNNs) have recently attained state-of-the-art accuracy, but they typically rely on supervised training with large amounts of manually annotated data. Many methods use data augmentation with rigid or affine spatial transformations to alleviate over-fitting and improve network robustness. However, rigid or affine transformations fail to capture the complex voxel-level deformations of the abdomen, which is filled with soft organs. To tackle this issue, we developed a novel Hybrid Deformable Model (HDM), which combines inter- and intra-patient deformations for more effective data augmentation. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were generated with random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To reach a better solution and achieve faster convergence during training, we fused pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method with previous techniques on datasets from several centers via cross-validation. The proposed method achieved an average Dice Similarity Coefficient (DSC) of 0.852, outperforming other state-of-the-art methods on multi-organ abdominal CT segmentation.
Copyright © 2021. Published by Elsevier B.V.
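The Dice Similarity Coefficient (DSC) reported above measures voxel-wise overlap between a predicted organ mask and the manual annotation: DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of this metric (not the authors' implementation; the toy masks below are illustrative only):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks.

    Ranges from 0 (no overlap) to 1 (perfect overlap); `eps`
    guards against division by zero for two empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 3-D volumes standing in for a predicted and a manual organ mask
pred = np.zeros((4, 4, 4), dtype=bool)
target = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True    # 8 voxels
target[1:3, 1:3, 2:4] = True  # 8 voxels, half of them overlapping pred
print(round(dice_coefficient(pred, target), 3))  # → 0.5
```

In practice the per-organ DSC values would be averaged over all labels and test cases to obtain a summary figure such as the 0.852 reported in the abstract.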


Keywords:  Abdominal CT; Attention U-net; Data augmentation; Segmentation

Year:  2021        PMID: 34274689     DOI: 10.1016/j.media.2021.102156

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


  2 in total

1.  Human-level comparable control volume mapping with a deep unsupervised-learning model for image-guided radiation therapy.

Authors:  Xiaokun Liang; Maxime Bassenne; Dimitre H Hristov; Md Tauhidul Islam; Wei Zhao; Mengyu Jia; Zhicheng Zhang; Michael Gensheimer; Beth Beadle; Quynh Le; Lei Xing
Journal:  Comput Biol Med       Date:  2021-12-17       Impact factor: 4.589

2.  Side channel analysis based on feature fusion network.

Authors:  Feng Ni; Junnian Wang; Jialin Tang; Wenjun Yu; Ruihan Xu
Journal:  PLoS One       Date:  2022-10-17       Impact factor: 3.752

