
United adversarial learning for liver tumor segmentation and detection of multi-modality non-contrast MRI.

Jianfeng Zhao1, Dengwang Li2, Xiaojiao Xiao3, Fabio Accorsi4, Harry Marshall4, Tyler Cossetto4, Dongkeun Kim4, Daniel McCarthy4, Cameron Dawson4, Stefan Knezevic4, Bo Chen5, Shuo Li6.   

Abstract

Simultaneous segmentation and detection of liver tumors (hemangioma and hepatocellular carcinoma (HCC)) using multi-modality non-contrast magnetic resonance imaging (NCMRI) is crucial for clinical diagnosis. However, it remains a challenging task because: (1) insufficient HCC information on NCMRI makes liver tumor feature extraction difficult; (2) the diverse imaging characteristics of multi-modality NCMRI make feature fusion and selection difficult; (3) the lack of information distinguishing hemangioma from HCC on NCMRI makes liver tumor detection difficult. In this study, we propose a united adversarial learning framework (UAL) for simultaneous liver tumor segmentation and detection using multi-modality NCMRI. UAL first uses a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection. In this encoder, a novel edge dissimilarity feature pyramid module facilitates complementary multi-modality feature extraction. Second, a newly designed fusion and selection channel fuses the multi-modality features and decides which features to select. Third, the proposed coordinate-sharing-with-padding mechanism integrates the segmentation and detection tasks, enabling united adversarial learning in a single discriminator. Lastly, an innovative multi-phase radiomics guided discriminator exploits clear and specific tumor information to improve multi-task performance via an adversarial learning strategy. UAL is validated on corresponding multi-modality NCMRI (i.e., T1FS pre-contrast MRI, T2FS MRI, and DWI) and three-phase contrast-enhanced MRI from 255 clinical subjects.
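The core idea of the framework, training a segmenter against a discriminator that judges predicted masks, can be illustrated with a generic adversarial-segmentation training step. This is a minimal sketch of the general technique only, not the authors' UAL implementation: the network architectures, the loss weighting, and the `adversarial_step` helper are all assumptions, and the paper's radiomics guidance and coordinate sharing are omitted.

```python
import torch
import torch.nn.functional as F

def adversarial_step(segmenter, discriminator, image, gt_mask, opt_seg, opt_disc):
    """One generic adversarial training step (a sketch, not the paper's UAL).

    The discriminator learns to separate ground-truth masks from predicted
    ones; the segmenter is trained with a supervised loss plus an adversarial
    term that rewards masks the discriminator scores as 'real'.
    """
    # --- discriminator update: real masks -> 1, predicted masks -> 0 ---
    pred_mask = segmenter(image).detach()  # detach: no gradient to segmenter here
    real_score = discriminator(gt_mask)
    fake_score = discriminator(pred_mask)
    d_loss = (F.binary_cross_entropy(real_score, torch.ones_like(real_score))
              + F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # --- segmenter update: supervised loss + adversarial term ---
    pred_mask = segmenter(image)
    seg_loss = F.binary_cross_entropy(pred_mask, gt_mask)
    adv_score = discriminator(pred_mask)
    adv_loss = F.binary_cross_entropy(adv_score, torch.ones_like(adv_score))
    g_loss = seg_loss + 0.1 * adv_loss  # the 0.1 weight is an assumed hyperparameter
    opt_seg.zero_grad()
    g_loss.backward()
    opt_seg.step()
    return d_loss.item(), g_loss.item()
```

With both cross-entropy terms strictly positive, alternating these two updates implements the usual minimax game: the discriminator pressure pushes the segmenter toward masks with realistic global structure, which plain per-pixel losses do not enforce.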
The experiments show that UAL achieves high performance, with a dice similarity coefficient of 83.63%, pixel accuracy of 97.75%, intersection-over-union of 81.30%, sensitivity of 92.13%, specificity of 93.75%, and detection accuracy of 92.94%, demonstrating that UAL has great potential for the clinical diagnosis of liver tumors.
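The segmentation metrics reported above are standard confusion-matrix quantities over binary masks. The following helper shows one common way to compute them (`segmentation_metrics` and the epsilon guard are illustrative choices, not code from the paper):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute common binary-segmentation metrics from 0/1 masks.

    pred, target: numpy arrays of identical shape with values in {0, 1}.
    Returns dice, pixel accuracy, IoU, sensitivity, and specificity.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()          # true positives
    tn = np.logical_and(~pred, ~target).sum()        # true negatives
    fp = np.logical_and(pred, ~target).sum()         # false positives
    fn = np.logical_and(~pred, target).sum()         # false negatives
    eps = 1e-8  # guard against division by zero for empty masks
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "pixel_accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),   # true positive rate
        "specificity": tn / (tn + fp + eps),   # true negative rate
    }
```

For example, a prediction `[[1, 1], [0, 0]]` against ground truth `[[1, 0], [0, 0]]` has one true positive, one false positive, no false negatives, and two true negatives, giving a dice of 2/3 and an IoU of 1/2.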
Copyright © 2021. Published by Elsevier B.V.

Keywords:  Liver tumors segmentation and detection; Multi-modality NCMRI; Multi-phase radiomics feature; United adversarial learning

Year:  2021        PMID: 34280670     DOI: 10.1016/j.media.2021.102154

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


  3 in total

1.  Improving automatic liver tumor segmentation in late-phase MRI using multi-model training and 3D convolutional neural networks.

Authors:  Annika Hänsch; Grzegorz Chlebus; Hans Meine; Felix Thielke; Farina Kock; Tobias Paulus; Nasreddin Abolmaali; Andrea Schenk
Journal:  Sci Rep       Date:  2022-07-18       Impact factor: 4.996

2. (Review) Deep Learning With Radiomics for Disease Diagnosis and Treatment: Challenges and Potential.

Authors:  Xingping Zhang; Yanchun Zhang; Guijuan Zhang; Xingting Qiu; Wenjun Tan; Xiaoxia Yin; Liefa Liao
Journal:  Front Oncol       Date:  2022-02-17       Impact factor: 6.244

3.  Edge Constraint and Location Mapping for Liver Tumor Segmentation from Nonenhanced Images.

Authors:  Jina Zhang; Shichao Luo; Yan Qiang; Yuling Tian; Xiaojiao Xiao; Keqin Li; Xingxu Li
Journal:  Comput Math Methods Med       Date:  2022-03-09       Impact factor: 2.238

