
Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer.

Ashnil Kumar, Michael Fulham, Dagan Feng, Jinman Kim.   

Abstract

The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB) techniques, and multichannel (MC) techniques) and segmentation. 
Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
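The fusion mechanism described above can be illustrated with a small sketch. In the paper, a CNN learns the spatially varying fusion map end-to-end; here, purely as a stand-in for that learned map, a per-pixel softmax over the two modality responses produces weights that sum to 1 at every location and are then multiplied with the modality-specific features. The function name and the softmax weighting are illustrative assumptions, not the authors' implementation.

```python
import math

def colearn_fuse(pet_feat, ct_feat):
    """Illustrative sketch of spatially varying two-modality fusion.

    pet_feat, ct_feat: H x W lists of scalar feature responses.
    Returns (fused, fmap), where fmap[i][j] = (w_pet, w_ct) is the
    per-pixel fusion weight pair and fused[i][j] is the weighted sum.
    NOTE: the paper learns the fusion map with a CNN; the softmax
    below is only a hand-crafted stand-in for that learned map.
    """
    fused, fmap = [], []
    for prow, crow in zip(pet_feat, ct_feat):
        frow, mrow = [], []
        for p, c in zip(prow, crow):
            ep, ec = math.exp(p), math.exp(c)
            w_pet = ep / (ep + ec)          # PET weight at this pixel
            w_ct = 1.0 - w_pet              # CT weight (weights sum to 1)
            mrow.append((w_pet, w_ct))
            frow.append(w_pet * p + w_ct * c)  # fused feature value
        fmap.append(mrow)
        fused.append(frow)
    return fused, fmap
```

At a pixel where the PET response dominates (e.g., abnormal uptake in the lungs), the map assigns PET the larger weight; where the two responses are equal, each modality contributes 0.5, mirroring the abstract's point that the relative importance of each modality varies by location.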

Year:  2019        PMID: 31217099     DOI: 10.1109/TMI.2019.2923601

Source DB:  PubMed          Journal:  IEEE Trans Med Imaging        ISSN: 0278-0062            Impact factor:   10.048


Related articles:  9 in total

1.  (Review)  Applications of artificial intelligence in nuclear medicine image generation.

Authors:  Zhibiao Cheng; Junhai Wen; Gang Huang; Jianhua Yan
Journal:  Quant Imaging Med Surg       Date:  2021-06

2.  Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks.

Authors:  Manjit Kaur; Dilbag Singh
Journal:  J Ambient Intell Humaniz Comput       Date:  2020-08-08

3.  A fine-grained network for human identification using panoramic dental images.

Authors:  Hu Chen; Che Sun; Peixi Liao; Yancun Lai; Fei Fan; Yi Lin; Zhenhua Deng; Yi Zhang
Journal:  Patterns (N Y)       Date:  2022-04-01

4.  Anatomically aided PET image reconstruction using deep neural networks.

Authors:  Zhaoheng Xie; Tiantian Li; Xuezhu Zhang; Wenyuan Qi; Evren Asma; Jinyi Qi
Journal:  Med Phys       Date:  2021-07-28       Impact factor: 4.506

5.  Intelligent Labeling of Tumor Lesions Based on Positron Emission Tomography/Computed Tomography.

Authors:  Shiping Ye; Chaoxiang Chen; Zhican Bai; Jinming Wang; Xiaoxaio Yao; Olga Nedzvedz
Journal:  Sensors (Basel)       Date:  2022-07-10       Impact factor: 3.847

6.  Anomaly detection in chest 18F-FDG PET/CT by Bayesian deep learning.

Authors:  Takahiro Nakao; Shouhei Hanaoka; Yukihiro Nomura; Naoto Hayashi; Osamu Abe
Journal:  Jpn J Radiol       Date:  2022-01-30       Impact factor: 2.701

7.  An End-to-End Recurrent Neural Network for Radial MR Image Reconstruction.

Authors:  Changheun Oh; Jun-Young Chung; Yeji Han
Journal:  Sensors (Basel)       Date:  2022-09-26       Impact factor: 3.847

8.  Automatic segmentation of lung tumors on CT images based on a 2D & 3D hybrid convolutional neural network.

Authors:  Wutian Gan; Hao Wang; Hengle Gu; Yanhua Duan; Yan Shao; Hua Chen; Aihui Feng; Ying Huang; Xiaolong Fu; Yanchen Ying; Hong Quan; Zhiyong Xu
Journal:  Br J Radiol       Date:  2021-08-04       Impact factor: 3.629

9.  Two-Stage Deep Learning Framework for Discrimination between COVID-19 and Community-Acquired Pneumonia from Chest CT scans.

Authors:  Mohamed Abdel-Basset; Hossam Hawash; Nour Moustafa; Osama M Elkomy
Journal:  Pattern Recognit Lett       Date:  2021-10-29       Impact factor: 4.757

