Wenguang Yuan, Jia Wei, Jiabing Wang, Qianli Ma, Tolga Tasdizen.
Abstract
To fully define the target objects of interest in clinical diagnosis, many deep convolutional neural networks (CNNs) use multimodal paired registered images as inputs for segmentation tasks. However, these paired images are difficult to obtain in some cases. Furthermore, a CNN trained on one specific modality may fail on others, since images are acquired with different imaging protocols and scanners. Therefore, developing a unified model that can segment the target objects from unpaired multiple modalities is significant for many clinical applications. In this work, we propose a 3D unified generative adversarial network, which unifies any-to-any modality translation and multimodal segmentation in a single network. Since the anatomical structure is preserved during modality translation, the auxiliary translation task is used to extract modality-invariant features and to implicitly generate additional training data. To fully utilize the segmentation-related features, we add a cross-task skip connection with feature recalibration from the translation decoder to the segmentation decoder. Experiments on abdominal organ segmentation and brain tumor segmentation indicate that our method outperforms existing unified methods.
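The cross-task skip connection with feature recalibration mentioned in the abstract can be illustrated with a small sketch. Below is a minimal, hypothetical PyTorch reading of that idea, assuming a squeeze-and-excitation-style channel gate on the translation-decoder features before fusion into the segmentation decoder; the module name `CrossTaskRecalibration`, the `reduction` parameter, and the 1x1x1 fusion convolution are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CrossTaskRecalibration(nn.Module):
    """Hypothetical cross-task skip connection: translation-decoder features
    are channel-recalibrated (squeeze-and-excitation style, one plausible
    reading of "feature recalibration") and fused into the segmentation
    decoder's features at the same resolution."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),  # squeeze: one scalar of global context per channel
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )
        # 1x1x1 convolution to fuse the concatenated features back to `channels`
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, seg_feat: torch.Tensor, trans_feat: torch.Tensor) -> torch.Tensor:
        # Reweight translation features so segmentation-relevant channels dominate,
        # then fuse with the segmentation decoder's own features.
        recalibrated = trans_feat * self.gate(trans_feat)
        return self.fuse(torch.cat([seg_feat, recalibrated], dim=1))

# Usage on 3D feature maps of shape (batch, channels, depth, height, width):
skip = CrossTaskRecalibration(channels=64)
seg = torch.randn(1, 64, 8, 32, 32)
trans = torch.randn(1, 64, 8, 32, 32)
out = skip(seg, trans)  # -> (1, 64, 8, 32, 32)
```

Gating the translation-decoder features before fusion would let the segmentation decoder emphasize anatomy-related channels while suppressing appearance channels specific to the translation task, which is one plausible motivation for recalibrating rather than concatenating the raw features.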
Keywords: Generative adversarial network; Multimodal segmentation; Multitask learning; Unpaired medical image
Year: 2020 | PMID: 32544841 | DOI: 10.1016/j.media.2020.101731
Source DB: PubMed | Journal: Med Image Anal | ISSN: 1361-8415 | Impact factor: 8.545