
Medical Image Fusion using bi-dimensional empirical mode decomposition (BEMD) and an Efficient Fusion Scheme.

Mozaffarilegha M1, Yaghobi Joybari A2, Mostaar A1,3.   

Abstract

BACKGROUND: Medical image fusion is widely used to capture complementary information from images of different modalities. The aim of image fusion techniques is to combine the useful information presented in medical images, so that the fused image exhibits more information than the source images.
OBJECTIVE: In the current study, a BEMD-based multi-modal medical image fusion technique is presented. Moreover, the Teager-Kaiser energy operator (TKEO) was applied to the lower BIMFs. The results were compared with six routine methods.
MATERIAL AND METHODS: In this experimental study, an image fusion technique using bi-dimensional empirical mode decomposition (BEMD), the Teager-Kaiser energy operator (TKEO) as a local feature selector, and the Hierarchical Model And X (HMAX) model is presented. The BEMD fusion technique can preserve much functional information. In the fusion process, we adopt the TKEO fusion rule for the lower bi-dimensional intrinsic mode functions (BIMFs) of the two images and the HMAX visual cortex model as the fusion rule for the higher BIMFs, which are verified to be more appropriate for the human visual system. Integrating BEMD and this efficient fusion scheme can retain more spatial and functional features of the input images.
RESULTS: We compared our method with the IHS, DWT, LWT, PCA, NSCT and SIST methods. The simulation results and fusion performance show that the presented method is effective in terms of mutual information, quality of fused image (QAB/F), standard deviation, peak signal to noise ratio and structural similarity, and yields considerably better results than the six typical fusion methods.
CONCLUSION: The statistical analyses revealed that our algorithm significantly improved spatial features and diminished color distortion compared to other fusion techniques. The proposed approach can be used in routine practice. Fusion of functional and morphological medical images is possible before, during and after treatment of tumors in different organs. Image fusion can support interventional procedures and can be further assessed. Copyright: © Journal of Biomedical Physics and Engineering.


Keywords:  Diagnostic Imaging; Empirical Mode Decomposition; Image Fusion; Image Processing, Computer-Assisted; Multimodal Imaging

Year:  2020        PMID: 33364210      PMCID: PMC7753264          DOI: 10.31661/jbpe.v0i0.830

Source DB:  PubMed          Journal:  J Biomed Phys Eng        ISSN: 2251-7200


Introduction

Recently, an increasing interest in medical image fusion has been observed [ 1 - 4 ]. Fusion of medical images obtained from different imaging systems such as positron emission tomography (PET), magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT) and computed tomography (CT) facilitates image analysis, clinical diagnosis and treatment planning [ 5 ]. Each medical imaging modality provides a different level of structural and functional information. For instance, CT (based on the x-ray principle) is often used to represent dense structures and is not suitable for soft tissue or physiological analysis. By contrast, MRI provides a better representation of soft tissue and is usually used for the diagnosis of tumors and other tissue abnormalities. Similarly, functional information such as blood flow in a region of the body is obtained by PET; nonetheless, its low resolution is one of the disadvantages of this imaging modality [ 6 ]. Previous studies have revealed that image fusion has a great ability to improve diagnosis and treatment in different pathological populations such as cancer patients [ 7 - 10 ]. Various algorithms have been applied effectively in the past and were successfully used for the diagnosis of kidney and liver tumors [ 11 ]. Fusion can be valuable during interventional events and can contribute before, during and after tumor therapy [ 12 ]. Most functional and morphological imaging studies offer distinct and complementary information. Registration of medical images can provide further insight into the spatial relationships between a tumor and a thermal lesion. Conventional interpretation relies on mental registration [ 13 ]; nevertheless, computer processing can provide an impartial and exact assessment [ 14 ]. Recent research has revealed that fusion of abdominal images from diverse modalities can improve analysis and monitoring of disease progression [ 15 , 16 ].
New imaging systems combining positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT) offer a unique examination with promising diagnostic and prognostic capabilities for different applications of image fusion in cancer [ 14 ]. Image fusion has proved advantageous for the assessment of patients with cancer, supporting diagnosis, treatment planning, and monitoring of the response to therapy and of disease progression [ 17 - 19 ]. Hence, combining images obtained from different methods is required to extract sufficient information, reduce redundancy and make the result more suitable for visual perception [ 20 ]. When there are multiple images of a patient, medical image fusion is applied. Fused images can be produced from multiple images of the same imaging modality [ 21 ], or from multiple modalities [ 22 ]. Goshtasby categorized image fusion algorithms into pixel [ 23 ], feature [ 24 ] and symbolic [ 25 ] levels [ 20 ]. Pixel level fusion is more appropriate than the other fusion methods and can be implemented in both the spatial and transform domains. Principal component analysis (PCA) [ 26 ] and intensity hue saturation (IHS) [ 27 , 28 ] methods belong to the spatial domain pixel level fusion category. However, spatial domain fusion methods can cause spatial distortion [ 29 ]. In order to overcome these disadvantages, multi-scale decomposition (MSD) based medical image fusion methods such as the Daubechies complex wavelet transform [ 4 ], lifting wavelet transform (LWT) [ 30 ], weighted score level fusion [ 31 ], curvelet transform [ 32 , 33 ], non-subsampled contourlet transform (NSCT) [ 34 ], shearlet transform [ 35 ], shift-invariant shearlet transform (SIST) [ 1 ] and fuzzy transform [ 20 ] have been widely used for the fusion of medical images. Because of limitations in providing directional information, discrete wavelet transform (DWT) based fusion methods produce block artifacts and inconsistency in the fused results [ 36 ].
Contourlet transform methods use diverse and flexible directions to distinguish geometrical structures. However, the down- and up-sampling cause ringing artifacts, and the transform is redundant [ 37 ]. The curvelet transform can capture the intrinsic geometrical structure of an image; however, it does not provide a multi-resolution representation of geometry [ 38 ]. Empirical mode decomposition (EMD) is an innovative data representation that decomposes non-stationary and non-linear signals into intrinsic mode functions (IMFs) [ 39 ]. Compared to former multi-scale decomposition approaches, EMD can represent image information more precisely. The reasons are as follows [ 40 ]: (1) the decomposition technique is data driven; (2) the decomposition is based on the local spatial scale of the image; and (3) IMFs permit illustration of instantaneous frequencies as functions of space. The properties of one-dimensional EMD can also be extended to two-dimensional image analysis. Qiao et al. developed a space transform by combining panchromatic images with multispectral images [ 41 ]. Chen et al. integrated SVM with EMD for multi-focus image fusion [ 42 ]. Subsequently, Zhang et al. compared EMD-based image fusion approaches and showed that BEMD yields the best fused image quality [ 43 ]. Ahmed et al. and Wielgus et al. considered the use of fast and adaptive BEMD in image fusion [ 44 , 45 ]. Also, Zhao et al. proposed a bi-dimensional empirical mode decomposition with directional information to merge medical images [ 46 ]. These studies exhibit the potential of BEMD for medical image fusion. Consequently, we select BEMD as the MSD tool in the present work. The choice of the fusion scheme is another essential task for an MSD-based image fusion technique. There are various fusion rules for a variety of applications.
The coefficients are combined with a rule, such as choose-max [ 35 , 47 ], energy and regional information entropy [ 48 ], pulse coupled neural network (PCNN) [ 49 ] and self-generating neural network [ 50 ]. The drawbacks of these rules include a time-consuming process and the lack of statistical dependency between the MSD coefficients. The correlation coefficients of the cross- and inter-sub-band scales have also been considered as fusion criteria [ 2 ]. However, BIMFs are statistically uncorrelated or orthogonal, and there are no dependencies between them. In this study, a BEMD-based multi-modal medical image fusion technique is presented. The Teager-Kaiser energy operator (TKEO) is applied to the lower BIMFs. TKEO can track the energy and distinguish the instantaneous frequency and instantaneous amplitude of a mono-component AM-FM signal [ 51 ]. TKEO is used to emphasize pixel activity; furthermore, it reflects the pixel energy activity better than other features. The HMAX visual cortex model is used for the higher BIMFs. The proposed fusion scheme makes full use of the mechanism of the V1 visual cortex to fuse the appropriate BIMFs. This study is organized as follows: Section 2 explains the proposed framework of the method (Section 2.1), the BEMD-based fusion technique (Section 2.2), a theoretical overview of TKEO (Section 2.3), and the HMAX visual cortex model (Section 2.4); the fusion rule for the BIMFs is described in Section 2.5. Section 3 presents the experimental results. Finally, Section 4 is devoted to the conclusion.

Material and Methods

This is an experimental study. This section first presents the main fusion method under the BEMD framework; then the theory and implementation of BEMD are presented, and the fusion rules based on the dependencies of the BIMFs are discussed.

The Proposed Framework of Medical Image Fusion

It should be noted that PET images are depicted in pseudo-color; thus, we treated them as color images. Figure 1 is a block diagram of the proposed algorithm. The steps of the algorithm are summarized here.
Figure 1

Schematic diagram of the bi-dimensional empirical mode decomposition (BEMD)-based medical image fusion method

Step 1: Convert source image B into the IHS model and extract its intensity component.
Step 2: Decompose the intensity components into BIMFs via BEMD.
Step 3: Combine the lower BIMFs according to the TKEO rule.
Step 4: Combine the higher BIMFs based on the HMAX visual cortex model.
Step 5: Reconstruct the intensity component of the fused image by summation of the selected BIMFs.
Step 6: Reconstruct the fused color image using the inverse IHS transform.
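Steps 1 and 6 rely on an invertible IHS transform. The paper does not specify which IHS variant it uses, so the sketch below adopts one common linear model (the matrix `_FWD` is an assumption, not taken from the source) to illustrate converting to IHS, fusing the intensity channel, and converting back:

```python
import numpy as np

# One common linear IHS model (several variants exist in the literature).
# Any invertible variant works for Steps 1 and 6, because fusion only
# replaces the intensity channel I; hue and saturation for display are
# the polar form of the two chromatic components v1, v2.
_FWD = np.array([[1/3, 1/3, 1/3],                      # I  = (R+G+B)/3
                 [-np.sqrt(2)/6, -np.sqrt(2)/6, np.sqrt(2)/3],  # v1
                 [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])  # v2
_INV = np.linalg.inv(_FWD)

def rgb_to_ihs(rgb):
    """Forward transform; rgb has shape (..., 3)."""
    return rgb @ _FWD.T

def ihs_to_rgb(ihs):
    """Inverse transform back to RGB."""
    return ihs @ _INV.T
```

Because the model is linear and invertible, replacing only channel 0 (intensity) with the fused intensity and applying `ihs_to_rgb` preserves the chromatic information of the PET/SPECT source, which is the point of Steps 1 and 6.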

BEMD-based Fusion Algorithm: Theoretical Overview of BEMD

The bi-dimensional empirical mode decomposition (BEMD) has been suggested to adaptively extract different frequency components of an image [ 39 ]. The technique is derived from the assumption that an image consists of various bi-dimensional intrinsic mode functions (BIMFs). A BIMF is defined by two criteria: first, each BIMF has the same number of zero crossings and extrema; second, each BIMF is symmetric with respect to the local mean. The following outline gives the principle of the BEMD algorithm:
1) Identify the extrema of the image I by morphological reconstruction based on geodesic operators.
2) Generate the 2D 'envelopes' by connecting the maxima points (respectively, the minima points) with a radial basis function (RBF).
3) Determine the local mean m1 by averaging the two envelopes.
4) Since a BIMF should have zero local mean, subtract the mean from the image: h1 = I - m1.
5) Repeat steps 1-4 on h1 until h1 satisfies the BIMF criteria; h1 is then the first BIMF.
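As a minimal illustration of this sifting loop, the sketch below replaces the extrema detection and RBF envelope interpolation of steps 1-3 with a crude box-filter estimate of the local mean (an assumption made purely to keep the example short; a real BEMD implementation interpolates the upper and lower envelopes). By construction, the extracted BIMFs plus the residue sum back to the input exactly:

```python
import numpy as np

def local_mean(img, k=7):
    """Crude surrogate for the envelope mean m1: a k x k box filter.
    (The paper's method uses RBF interpolation of the extrema envelopes.)"""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def bemd(img, n_bimfs=3, n_sift=5):
    """Simplified BEMD sketch: sift out the locally oscillatory part
    (image minus its local mean) as a BIMF, then decompose the residue.
    Returns (list of BIMFs, residue); their sum reconstructs the input."""
    residue = np.asarray(img, dtype=float).copy()
    bimfs = []
    for _ in range(n_bimfs):
        h = residue.copy()
        for _ in range(n_sift):
            h = h - local_mean(h)   # drive the local mean toward zero (step 4)
        bimfs.append(h)             # h plays the role of the BIMF h1
        residue = residue - h
    return bimfs, residue
```

The exact-reconstruction property (input = sum of BIMFs + residue) is what Step 5 of the fusion pipeline relies on when the fused intensity is rebuilt by summing selected BIMFs.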

Theoretical Overview of TKEO

It has been shown that the TKEO can track the energy and recognize the instantaneous frequency (IF) and amplitude of a signal [ 52 ]. The energy of each pixel can be assessed using an image statistic such as the Sobel detectors or the gradient; nevertheless, these methods are sensitive to noise [ 53 ] and do not perfectly highlight edges. The 2D-TKEO distinguishes noise peaks from true edges and reflects the local activity better than the amplitude of the gradient [ 54 ]. In the continuous domain, the 2D-TKEO is defined by [ 55 ]:

ψ(I(x,y)) = ||∇I(x,y)||² - I(x,y)∇²I(x,y)   (1)

where I(x,y) is assumed to be a twice-differentiable continuous real-valued function. Applying the filtering operation of Eq. (1) along both the vertical and horizontal directions yields the discrete 2D version given by [ 57 ]:

ψ(I(m,n)) = 2I²(m,n) - I(m-1,n)I(m+1,n) - I(m,n-1)I(m,n+1)   (2)

An essential characteristic of the 2D-TKEO is that it is approximately instantaneous; this resolution gives us the ability to capture energy fluctuations. Additionally, its implementation is very easy.
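The discrete 2D-TKEO can be vectorized directly; the sketch below applies its standard four-neighbor form to a grayscale array (border pixels, where the four neighbors are unavailable, are simply left at zero, which is an implementation choice rather than part of the definition):

```python
import numpy as np

def tkeo_2d(img):
    """Discrete 2D Teager-Kaiser energy operator:
    psi(I(m,n)) = 2*I(m,n)^2 - I(m-1,n)*I(m+1,n) - I(m,n-1)*I(m,n+1).
    Border pixels are left at zero energy."""
    img = np.asarray(img, dtype=float)
    psi = np.zeros_like(img)
    psi[1:-1, 1:-1] = (2.0 * img[1:-1, 1:-1] ** 2
                       - img[:-2, 1:-1] * img[2:, 1:-1]    # vertical neighbors
                       - img[1:-1, :-2] * img[1:-1, 2:])   # horizontal neighbors
    return psi
```

A constant region has zero Teager energy, while an isolated bright pixel produces a strong response, which is why the operator highlights local pixel activity rather than absolute gray level.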

HMAX Visual Cortex Model

The HMAX visual cortex model is arranged in several layers that evaluate information in a bottom-up way. The layers of the model are composed of simple ("S") and complex ("C") cells, which were discovered by Hubel and Wiesel [ 42 ]. These cells are located in the striate cortex (called V1), the part of the visual cortex in the most posterior area of the occipital lobe. The structure of the HMAX model is shown in Figure 2.
Figure 2

The structure of mathematical simulation of Hierarchical Model And X (HMAX) model

In this model [ 43 ], S1 and S2 are two layers of simple cells, and C1 and C2 are two layers of complex cells (Figure 2). The C layers are computed by a hard max filter. The images are processed by the successive simple and complex cell layers and reduced to a set of features (F). The S1 layer applies 2D Gabor filters computed for four orientations (horizontal, vertical, and two diagonal) at each position and scale. The Gabor filter is described by:

G(x,y) = exp(-(X² + γ²Y²) / (2σ²)) × cos(2πX / λ)   (3)

where X = xcosφ - ysinφ and Y = xsinφ + ycosφ. The aspect ratio (γ), effective width (σ), and wavelength (λ) are fixed to 0.3, 4.5 and 5.6, respectively. Finally, the HMAX response R can be computed using formula (4).
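The S1 Gabor filter with the quoted parameter values can be realized directly; in the sketch below the filter size and the orientation argument are illustrative assumptions (the source fixes only γ, σ and λ):

```python
import numpy as np

def gabor_patch(size=11, theta=0.0, gamma=0.3, sigma=4.5, lam=5.6):
    """S1-layer Gabor filter with the parameter values quoted in the text:
    aspect ratio gamma = 0.3, effective width sigma = 4.5, wavelength
    lambda = 5.6.  G(x,y) = exp(-(X^2 + gamma^2 Y^2)/(2 sigma^2)) *
    cos(2 pi X / lam), with X, Y the rotated coordinates."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    X = x * np.cos(theta) - y * np.sin(theta)
    Y = x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(X**2 + (gamma * Y)**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * X / lam))

# The four S1 orientations mentioned in the text.
ORIENTATIONS = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
```

Convolving an image with each of the four oriented patches and taking local maxima over position and scale is what the C layers' hard max filter then operates on.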

Fusion Rule for BIMFs

It is well known that the lower IMFs correspond to the higher frequency parts and vice versa; thus, the higher BIMFs provide the approximation of the original images. Frequently, averaging or regional standard deviation methods are used to produce the fused low frequency coefficients; however, their drawback is low-contrast results. Therefore, a new fusion scheme is developed based on the hierarchical HMAX visual cortex model to select between the higher BIMFs. The scheme is as follows:
1) Compute the HMAX response R by Eq. (4).
2) Obtain the fused BIMFs by the hierarchical HMAX response mapping:

C_F(i,j) = C_A(i,j) if R_A(i,j) ≥ R_B(i,j); otherwise C_F(i,j) = C_B(i,j)   (5)

where C_l denotes a higher BIMF coefficient located at (i,j), l = A, B, with A the MRI image, B the PET/SPECT image, and F the fused image.
The lower BIMFs offer the detailed information of the image. The choose-max method is a popular scheme for the composition of high frequency coefficients; however, it selects only the maximum amplitude of a single coefficient and is therefore not suitable for medical image features. Consequently, to obtain better results than other fusion schemes, the 2D Teager-Kaiser energy operator (2D-TKEO) is applied to construct a weighted fusion scheme. The 2D-TKEO reflects finer local activity; this quadratic filter enhances information by weighting the gray values with the energy activity at each pixel. Let C_i denote a lower BIMF coefficient located at (x,y), i = A, B. The coefficient of the fused image at location (x,y) is calculated by:

C_F(x,y) = μ_A(x,y)C_A(x,y) + μ_B(x,y)C_B(x,y)   (6)

where μ_i = E_i / (E_A + E_B) is the weight derived from the local energy E_i, and E_i is computed in a 3 × 3 neighborhood by Eq. (2):

E_i(x,y) = Σ_{(m,n) ∈ N₃ₓ₃(x,y)} ψ(C_i(m,n))   (7)
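The two rules described above, HMAX-response selection for the higher BIMFs and TKEO-energy-weighted averaging for the lower BIMFs, can be sketched as follows; the normalized weighting μ_i = E_i/(E_A + E_B) and the ε guard against division by zero are assumptions of this sketch, not details confirmed by the source:

```python
import numpy as np

def fuse_higher(bimf_a, bimf_b, resp_a, resp_b):
    """Pixel-wise selection: keep the higher-BIMF coefficient from the
    image whose HMAX response R is larger."""
    return np.where(resp_a >= resp_b, bimf_a, bimf_b)

def fuse_lower(bimf_a, bimf_b, energy_a, energy_b, eps=1e-12):
    """Energy-weighted average of lower-BIMF coefficients, with weights
    mu_i = E_i / (E_A + E_B); eps avoids division by zero where both
    local energies vanish."""
    total = energy_a + energy_b + eps
    return (energy_a * bimf_a + energy_b * bimf_b) / total
```

In the full pipeline, `energy_a` and `energy_b` would be the 2D-TKEO responses accumulated over each 3 × 3 neighborhood, and `resp_a`/`resp_b` the HMAX responses of the corresponding higher BIMFs.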

Results

The PET/MRI/SPECT images used in this study were obtained from the Harvard University whole-brain atlas site (http://www.med.harvard.edu/AANLIB/home.html). The simulation results of our method were compared with the IHS transform, DWT, LWT, PCA, NSCT and SIST. The performance of our algorithm is evaluated by mutual information (MI) [ 27 ], QAB/F [ 57 ], standard deviation (SD) [ 35 ], peak signal-to-noise ratio (PSNR) and structural similarity (SS) [ 58 , 59 ]. Figures 3a and b show PET and MRI images from a 60-year-old man with mild Alzheimer's disease. Figures 4a and b show SPECT and MRI images from a 38-year-old man with neoplastic disease (brain tumor). From the results, it can clearly be seen that the proposed fusion technique retains the high spatial resolution features of the MRI image. Moreover, the fused image does not distort the spectral features of the multispectral image. In addition, in the quantitative comparison of the different fusion techniques, most metrics achieve their best value with the proposed method, as seen in Tables 1 and 2.
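Of the five evaluation metrics, PSNR and MI have standard MSE- and histogram-based definitions; a generic sketch is given below. The paper does not specify its exact formulations (e.g. the histogram bin count for MI or the peak value for PSNR), so those details are assumptions:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information (in bits) between two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                         # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

For fusion evaluation, MI is typically computed between each source image and the fused result and the two values summed, so that a higher total indicates more source information retained.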
Figure 3

Alzheimer’s disease positron emission tomography (PET) and magnetic resonance image (MRI) images (a and b), Intensity hue saturation (IHS) model (c), Lifting wavelet transform (LWT) (d), Discrete wavelet transform (DWT) (e), Principal component analysis (PCA) (f), Nonsubsampled contourlet transform (NSCT) (g), Shearlet transform (ST) (h) and proposed method (i).

Figure 4

Single photon emission computed tomography (SPECT) and magnetic resonance image (MRI) images (a and b), Intensity hue saturation (IHS) model (c), Lifting wavelet transform (LWT) (d), Discrete wavelet transform (DWT) (e), Principal component analysis (PCA) (f), Nonsubsampled contourlet transform (NSCT) (g), Shearlet transform (ST) (h) and proposed method (i).

Table 1

The objective evaluation of the seven methods for the fusion of magnetic resonance image (MRI)/positron emission tomography (PET) (Alzheimer’s disease).

Metric    IHS       PCA       DWT       LWT       NSCT      ST        Proposed method
MI        2.4641    2.6093    2.7740    2.7783    2.5253    2.8554    2.9871
SD        37.7569   47.6791   53.1702   53.1941   53.5308   68.9855   81.9598
QAB/F     0.3102    0.2742    0.3745    0.3721    0.2156    0.2111    0.4023
PSNR      14.8553   17.2423   21.1892   21.1806   20.9094   25.8630   28.4485
SS        0.8870    0.8141    0.9546    0.9537    0.9144    0.9445    0.9456

IHS: Intensity hue saturation, PCA: Principal component analysis, DWT: Discrete wavelet transform, LWT: Lifting wavelet transform, NSCT: Nonsubsampled contourlet transform, ST: Shearlet transform, MI: Mutual Information, SD: Standard deviation, QAB/F: Quality of fused image, PSNR: Peak signal to noise ratio, SS: Structural similarity

Table 2

The objective evaluation of the seven methods for the fusion of magnetic resonance image (MRI)/single photon emission computed tomography (SPECT) (Neoplastic Disease).

Metric    IHS       PCA       DWT       LWT       NSCT      ST        Proposed method
MI        2.6142    2.5116    2.4973    2.4920    2.2191    2.6869    2.9302
SD        51.9392   48.0455   48.4792   48.5408   51.3770   69.2972   83.4856
QAB/F     0.5673    0.2569    0.2853    0.2821    0.1117    0.5218    0.5253
PSNR      19.4348   25.8734   24.7308   24.6873   22.0906   20.7223   17.2108
SS        0.8662    0.9240    0.9150    0.9138    0.8453    0.8871    0.8827

IHS: Intensity hue saturation, PCA: Principal component analysis, DWT: Discrete wavelet transform, LWT: Lifting wavelet transform, NSCT: Nonsubsampled contourlet transform, ST: Shearlet transform, MI: Mutual Information, SD: Standard deviation, QAB/F: Quality of fused image, PSNR: Peak signal to noise ratio, SS: Structural similarity


Discussion

Visual analysis demonstrates that the results of the proposed method have higher spatial resolution. Our results (Figures 3i and 4i) appear visually the best among all the results. The proposed method causes less spectral information loss than the other state-of-the-art techniques. The proposed algorithm was compared with the IHS, DWT, LWT, PCA, NSCT and SIST methods. A proper fusion method should maintain the spectral characteristics of the PET image and the high spatial characteristics of the MRI image, which the proposed method achieves. To fully visualize two fused images, it is important to be able to adjust the colorization, brightness and image contrast independently. In addition, it is also vital to be able to adjust the degree of blending between the two fused images. The capability to adjust these image characteristics significantly improves the visualization of lesions and necrotic regions, and such visualization is essential for precise evaluations. The most effective color arrangements were kept, which permitted further automation of repetitive post-processing stages. In addition, our technique allows localizing and fusing both MRI and PET images. Image processing and fusion provide investigative tools that can be further assessed for possible effectiveness during interventional procedures.

Conclusion

In this study, we present a novel method based on the BEMD technique to decompose medical images into various frequency bands; the Teager-Kaiser energy operator (TKEO) is applied to the lower modes to extract regional features, and the HMAX visual cortex model is applied to the higher BIMFs. Thanks to this model, the proposed HMAX-based fusion rule can be applied to the higher BIMFs to make full use of the mechanism of the visual cortex (V1). The statistical analyses revealed that our algorithm significantly increased spatial information and decreased color distortion compared to other fusion methods for the fusion of MRI/PET and MRI/SPECT. Furthermore, the results of our algorithm are visually better than all the other results.