
Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space.

Kai Guo, Xiongfei Li, Hongrui Zang, Tiehu Fan.

Abstract

To extract the physiological information and key features of the source images as fully as possible, improve the visual quality and clarity of the fused image, and reduce computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source images. In the fusion layer, the weight of each feature map in the required fusion coefficient is calculated from the trace of the feature map. Finally, the filtered medical information is concatenated and decoded to reconstruct the required fused image. In the encoding and image reconstruction networks, a mixed loss function of cross entropy and structural similarity is adopted to greatly reduce the information loss during image fusion. To assess performance, we conducted three sets of experiments on grayscale and color medical images. Experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual quality and time complexity compared with other algorithms.
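Two pieces of the abstract can be made concrete with a short sketch. The RGB-to-YIQ conversion is the standard NTSC transform (color fusion methods of this kind typically fuse only the luminance channel Y and carry the chrominance channels I and Q through unchanged). The trace-based fusion weighting below is only an illustrative guess at the idea described in the abstract: the function names (`rgb_to_yiq`, `trace_weights`, `fuse`) and the normalization of each pair of traces to weights summing to 1 are assumptions, not the paper's exact formulation.

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix (rows produce Y, I, Q).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def rgb_to_yiq(img):
    """Convert an H x W x 3 RGB image (values in [0, 1]) to YIQ."""
    return img @ RGB_TO_YIQ.T

def trace_weights(maps_a, maps_b):
    """Hypothetical trace-based weighting (illustration only).

    maps_a, maps_b: lists of square feature maps (one pair per channel)
    extracted by the encoder from the two source images. Each pair is
    weighted by the relative magnitude of its traces, so the per-pair
    weights sum to 1.
    """
    wa, wb = [], []
    for fa, fb in zip(maps_a, maps_b):
        ta, tb = abs(np.trace(fa)), abs(np.trace(fb))
        s = ta + tb if ta + tb > 0 else 1.0   # avoid division by zero
        wa.append(ta / s)
        wb.append(tb / s)
    return wa, wb

def fuse(maps_a, maps_b):
    """Weighted sum of each pair of feature maps before decoding."""
    wa, wb = trace_weights(maps_a, maps_b)
    return [a * w1 + b * w2
            for a, b, w1, w2 in zip(maps_a, maps_b, wa, wb)]
```

For a white RGB pixel the transform yields Y = 1 and I = Q = 0, which is a quick sanity check that the matrix is oriented correctly.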

Keywords:  SeLU activation function; YIQ color space; capture image details network; image entropy and cross entropy; intuitive fuzzy processing; trace of a feature map

Year:  2020        PMID: 33348893      PMCID: PMC7766984          DOI: 10.3390/e22121423

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


References: 7 in total

1.  Image quality assessment: from error visibility to structural similarity.

Authors:  Zhou Wang; Alan Conrad Bovik; Hamid Rahim Sheikh; Eero P Simoncelli
Journal:  IEEE Trans Image Process       Date:  2004-04       Impact factor: 10.856

2.  DenseFuse: A Fusion Approach to Infrared and Visible Images.

Authors:  Hui Li; Xiao-Jun Wu
Journal:  IEEE Trans Image Process       Date:  2018-12-18       Impact factor: 10.856

3.  Image fusion with guided filtering.

Authors:  Shutao Li; Xudong Kang; Jianwen Hu
Journal:  IEEE Trans Image Process       Date:  2013-01-30       Impact factor: 10.856

4.  DDcGAN: A Dual-discriminator Conditional Generative Adversarial Network for Multi-resolution Image Fusion.

Authors:  Jiayi Ma; Han Xu; Junjun Jiang; Xiaoguang Mei; Xiao-Ping Zhang
Journal:  IEEE Trans Image Process       Date:  2020-03-10       Impact factor: 10.856

5.  Intratumor heterogeneity characterized by textural features on baseline 18F-FDG PET images predicts response to concomitant radiochemotherapy in esophageal cancer.

Authors:  Florent Tixier; Catherine Cheze Le Rest; Mathieu Hatt; Nidal Albarghach; Olivier Pradier; Jean-Philippe Metges; Laurent Corcos; Dimitris Visvikis
Journal:  J Nucl Med       Date:  2011-02-14       Impact factor: 10.057

6.  Exploring feature-based approaches in PET images for predicting cancer treatment outcomes.

Authors:  I El Naqa; P Grigsby; A Apte; E Kidd; E Donnelly; D Khullar; S Chaudhari; D Yang; M Schmitt; Richard Laforest; W Thorstad; J O Deasy
Journal:  Pattern Recognit       Date:  2009-06-01       Impact factor: 7.740

7.  Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

Authors:  Zhen Chao; Dohyeon Kim; Hee-Joung Kim
Journal:  Phys Med       Date:  2018-03-21       Impact factor: 2.685

Cited by: 1 in total

1.  A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation.

Authors:  Jilei Hou; Dazhi Zhang; Wei Wu; Jiayi Ma; Huabing Zhou
Journal:  Entropy (Basel)       Date:  2021-03-21       Impact factor: 2.524

