R Nandhini Abirami1, P M Durai Raj Vincent1, Kathiravan Srinivasan2, K Suresh Manic3, Chuan-Yu Chang4,5.
Abstract
Multimodal medical image fusion combines images from the same or different modalities to enrich the visual content of the image for further operations such as image segmentation. Biomedical research and medical image analysis rely on image fusion to support higher-level analysis, and fused images help medical practitioners visualize internal organs and tissues. In brain imaging, multimodal fusion lets practitioners simultaneously visualize hard structures such as the skull and soft structures such as tissue. Brain tumor segmentation can be performed more accurately on a fused image: the tumor can be located precisely using information from both Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) combined in a single image, which improves diagnostic accuracy and reduces the time needed to diagnose and localize the tumor. PET carries the functional information of the brain, while MRI captures the anatomy of the brain tissue; a robust multimodal medical image fusion model therefore yields both spatial characteristics and functional information in a single image. The proposed approach uses a generative adversarial network (GAN) to fuse PET and MRI into a single image, and the resulting fused images can support further medical analysis, tumor localization, and surgical planning. The performance of the GAN-based model is evaluated using two metrics, namely, structural similarity index and mutual information. The proposed approach achieved a structural similarity index of 0.8551 and a mutual information of 2.8059.
Year: 2022 PMID: 35464043 PMCID: PMC9023223 DOI: 10.1155/2022/6878783
Source DB: PubMed Journal: Behav Neurol ISSN: 0953-4180 Impact factor: 3.112
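The two evaluation metrics named in the abstract can be sketched in plain NumPy. This is an illustrative, simplified version, not the paper's evaluation code: it uses a single global SSIM window rather than the usual sliding-window SSIM, and a joint-histogram estimate of mutual information; the function names are hypothetical.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information (bits) between two images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint probability
    px = pxy.sum(axis=1)                   # marginal of a
    py = pxy.sum(axis=0)                   # marginal of b
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def global_ssim(a, b, L=1.0):
    """Simplified SSIM computed over the whole image as one window.

    L is the dynamic range of pixel values (1.0 for images scaled to [0, 1]).
    """
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))
```

In a fusion setting these would be computed between the fused image and each source image, then averaged, which is the usual way the table's "average structural similarity" figures are obtained.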
Figure 1. Medical image modalities.
Figure 2. Methods adopted to perform image fusion.
Medical image fusion—existing approaches.
| Modality | Work | Organ | Fusion technique | Approach |
|---|---|---|---|---|
| PET and MRI | [ | Brain | 2D Intensity Hue Saturation and Hilbert transform | PET and MRI image fusion using the 2D Hilbert transform and Intensity Hue Saturation; the Fourier transform is calculated for each input signal. |
| PET and MRI | [ | Brain | Dual ripplet-II transform | The approach uses the dual ripplet-II transform, which builds on the complex wavelet transform. The color and spatial information of the images are preserved using a weight matrix. |
| PET and MRI | [ | Brain | Intrinsic image decomposition | A model based on intrinsic image decomposition is proposed to extract structural information from MRI and color details from PET. |
| PET and MRI | [ | Brain | Non-subsampled shearlet transform | The input images are registered, normalized, and transformed into independent components; a pulse-coupled neural network extracts the useful information from the image. |
| MRI and CT | [ | Brain | Hybrid image fusion | Image fusion is performed using principal component analysis and independent component analysis. |
| MRI and CT | [ | Brain | Adolescent Identity Search Algorithm | The approach decomposes the input image into high- and low-frequency components using the nonsubsampled shearlet transform. |
| MRI and PET | [ | Brain | Spectral Total Variation Transform | The method decomposes the source images into base components, which are then combined using a spatial-frequency dual-channel model. |
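Most of the surveyed methods share a decompose-then-combine pattern: split each source image into a low-frequency base and a high-frequency detail layer, then merge the layers with separate rules. A minimal NumPy sketch of that pattern, using a box-filter base layer, averaged bases, and a max-absolute detail rule; `box_blur` and `fuse` are hypothetical names and this is not any cited method's implementation:

```python
import numpy as np

def box_blur(img, k=7):
    """Base (low-frequency) layer via a k x k box filter, k odd, using integral images."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')                 # replicate borders
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)       # 2D integral image
    c = np.pad(c, ((1, 0), (1, 0)))                   # zero row/col for window sums
    h, w = img.shape
    # Window sum at (i, j) = c[i+k, j+k] - c[i, j+k] - c[i+k, j] + c[i, j]
    return (c[k:k+h, k:k+w] - c[:h, k:k+w] - c[k:k+h, :w] + c[:h, :w]) / (k * k)

def fuse(a, b, k=7):
    """Average the base layers; keep the stronger detail coefficient per pixel."""
    base_a, base_b = box_blur(a, k), box_blur(b, k)
    det_a, det_b = a - base_a, b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return 0.5 * (base_a + base_b) + detail
```

The transform-domain methods in the table (shearlet, ripplet, wavelet) follow the same merge logic but decompose with multiscale, direction-sensitive transforms instead of a single blur, which is what preserves fine anatomical edges better.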
Algorithm 1
Figure 3. Architecture of the proposed approach for image fusion.
Figure 4. Generator architecture.
Figure 5. Discriminator architecture.
Figure 6. Flowchart of the proposed approach.
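The generator/discriminator pair in Figures 4 and 5 is trained with an adversarial objective. The abstract does not spell out the loss, so the following is a minimal NumPy sketch of the standard GAN objective (binary cross-entropy over discriminator scores); an actual fusion GAN would typically add content or similarity terms tying the fused image to both sources:

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy for discriminator probabilities p in (0, 1)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # guard against log(0)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def gan_losses(d_real, d_fused):
    """Standard adversarial losses.

    The discriminator is trained to score real source images as 1 and fused
    (generated) images as 0; the generator is rewarded when the discriminator
    scores its fused image as real.
    """
    d_loss = bce(d_real, 1.0) + bce(d_fused, 0.0)
    g_loss = bce(d_fused, 1.0)
    return d_loss, g_loss
```

Training alternates: the discriminator minimizes `d_loss` while the generator minimizes `g_loss`, pushing the fused output toward the distribution of real medical images.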
Figure 7. Sample PET images from the dataset.
Figure 8. Sample MRI images from the dataset.
Figure 9. Result obtained from the proposed approach.
Figure 10. Comparison of obtained fusion results with existing approaches.
Performance evaluation of the proposed approach.
| Fusion technique | Average structural similarity | Mutual information |
|---|---|---|
| Shift-invariant shearlet transform [ | 0.7104 | 2.2583 |
| Wavelet transform [ | 0.6725 | 2.3171 |
| Nonsubsampled shearlet transform [ | 0.8051 | 2.4336 |
| Multilevel local extrema [ | 0.8175 | 2.5377 |
| Proposed method | 0.8551 | 2.8059 |