
MRI Volume Fusion Based on 3D Shearlet Decompositions.

Chang Duan¹, Shuai Wang², Xue Gang Wang², Qi Hong Huang³.

Abstract

Many MRI scans now produce 3D volume data with different contrasts, and observers may want to view these contrasts within the same 3D volume. Conventional 2D medical fusion methods can only fuse 3D volume data layer by layer, which may lose interframe correlation information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed and evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the visual impression and the quality indices indicate that the proposed method performs better than conventional fusion methods based on the 2D wavelet, 2D DT CWT, 3D wavelet, and 3D DT CWT.

Entities:  

Year:  2014        PMID: 24817880      PMCID: PMC4003782          DOI: 10.1155/2014/469015

Source DB:  PubMed          Journal:  Int J Biomed Imaging        ISSN: 1687-4188


1. Introduction

Medical image fusion is a special case of image fusion and has been studied for decades; it is widely applied in medical diagnostics [1, 2]. It refers to extracting and merging the useful information from different source images captured by different kinds of sensors, such as CT, MRI, and PET, or by different configurations of the same sensor, such as MRI T2* and quantitative susceptibility mapping (QSM). Some of this information is correlated, but most of it is complementary, because each sensor, or each configuration of the same sensor, is sensitive to specific sources. For example, CT images provide the details of dense hard tissues, while MRI images give information about soft tissues: T2* provides the contrast of tissue relaxation times, and QSM provides susceptibility contrast produced by a range of endogenous magnetic biomarkers and contrast agents such as iron, calcium, and gadolinium (Gd). If different data are properly fused, the fused data contain all the useful information of the scanned object and can reveal structural details more clearly than any single sensor.

Before fusion, all source data need to be registered. Because the 3D T2* magnitude image and the QSM image are derived from the real and imaginary parts of the same scan, QSM images are exactly registered to the T2* images. Most current research on medical fusion considers only the 2D case. However, many medical sensors provide 3D data volumes, and the value of each point in such a volume is correlated not only with adjacent points in the same layer but also with points in neighboring layers. It is therefore necessary to develop volume fusion methods rather than 2D image fusion methods, which can only fuse the data within a single layer.

Fusion methods can operate in the spatial domain or in a transformed domain. In the spatial domain, the most intuitive fused image is a weighted average of the source images.
Such methods are relatively easy to implement, but their performance is limited, and some useful information is reduced or even lost. Transform-domain fusion methods usually follow four steps: (1) register the source images, (2) apply the forward transform to the source images, (3) derive the fused coefficients from the source coefficients under the fusion rules, and (4) apply the backward transform to the fused coefficients to obtain the fused image. In this type of method, research usually focuses on two points: the choice of transform and the design of the fusion rule. Many multiscale transforms have been applied in fusion methods, such as the DWT, DT CWT, curvelets [3], and shearlets [4]. Shearlets have emerged in recent years as one of the most successful frameworks for the efficient representation of multidimensional data. Many other transforms were introduced to overcome the limitations of traditional multiscale transforms, namely their poor ability to capture edges and other anisotropic features, but the shearlet transform stands out for several unique advantages: it has a single (or finite) set of generating functions, it provides optimally sparse representations of multidimensional data, and it allows a unified treatment of the continuum and digital realms. With these advantages, the shearlet transform has been widely used in image processing tasks such as denoising [5], edge detection [6], and enhancement [7], and, as shown in [4] and in this paper, it is also well suited to image fusion. In this paper, the 3D band-limited shearlet transform (3D BLST), a discrete implementation of the shearlet transform, is selected for medical volume fusion.
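As a minimal, self-contained sketch of steps (2)–(4), the snippet below substitutes a one-level 3D Haar wavelet for the 3D BLST (no public BLST implementation is assumed) and fuses with an average rule on the low-pass band and a max-modulus rule on the detail bands; the function names (`haar3d`, `fuse_volumes`, etc.) are illustrative, not from the paper.

```python
import numpy as np

def _split(a, axis):
    """Haar analysis along one axis: even/odd pairing into low/high bands."""
    ev = [slice(None)] * 3; ev[axis] = slice(0, None, 2)
    od = [slice(None)] * 3; od[axis] = slice(1, None, 2)
    e, o = a[tuple(ev)], a[tuple(od)]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def _merge(lo, hi, axis):
    """Haar synthesis along one axis (exact inverse of _split)."""
    shape = list(lo.shape); shape[axis] *= 2
    a = np.empty(shape)
    ev = [slice(None)] * 3; ev[axis] = slice(0, None, 2)
    od = [slice(None)] * 3; od[axis] = slice(1, None, 2)
    a[tuple(ev)] = (lo + hi) / np.sqrt(2)
    a[tuple(od)] = (lo - hi) / np.sqrt(2)
    return a

def haar3d(v):
    """One-level 3D Haar decomposition: 8 subbands keyed by 'L'/'H' per axis."""
    bands = {'': v}
    for axis in range(3):
        bands = {k + s: b
                 for k, a in bands.items()
                 for s, b in zip('LH', _split(a, axis))}
    return bands

def ihaar3d(bands):
    """Backward transform: merge the 8 subbands back into one volume."""
    for axis in (2, 1, 0):
        bands = {k: _merge(bands[k + 'L'], bands[k + 'H'], axis)
                 for k in {key[:-1] for key in bands}}
    return bands['']

def fuse_volumes(va, vb):
    """Steps (2)-(4) of the transform-domain pipeline on registered volumes."""
    ba, bb = haar3d(va), haar3d(vb)
    fused = {}
    for k in ba:
        if k == 'LLL':                      # average rule on the low-pass band
            fused[k] = (ba[k] + bb[k]) / 2
        else:                               # max-modulus rule on detail bands
            fused[k] = np.where(np.abs(ba[k]) >= np.abs(bb[k]), ba[k], bb[k])
    return ihaar3d(fused)
```

Because the Haar step is orthonormal, fusing a volume with itself reconstructs it exactly, which is a quick sanity check for any transform plugged into this pipeline.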
Three fusion rules are used in this paper: maximum point modulus (MPM), which considers only the value of a single point; maximum regional energy (MRE), which considers the information of a local region [8] and treats every point of the region equally; and maximum sum of modified Laplacian (MSML) [9], which also considers regional information but weights the center point of the region and the points around it differently. More complicated fusion rules have also been proposed; the above three were selected as representatives and are extended here to 3 dimensions. To evaluate the performance of the proposed method, the quality indices are also extended to 3D versions. The rest of the paper is organized as follows. Section 2 briefly introduces the basic theory of the 3D shearlet transform and its discrete implementation, the 3D BLST. Section 3 proposes the fusion method based on the 3D BLST with the three fusion rules. Section 4 presents experiments that compare the 2D and 3D methods and illustrate the performance of the proposed methods. Finally, conclusions are drawn in Section 5.

2. 3D Shearlet Transform

In this section, the basic theory of 3D shearlet transform and its discrete implementation, 3D band limited shearlet transform (3D BLST), are introduced.

2.1. Basic Theory of the 3D Shearlet Transform

As shown in Figure 1, the 3D frequency domain can be partitioned into three pairs of pyramids,

P = {(ξ₁, ξ₂, ξ₃) ∈ ℝ³ : |ξ₁| ≥ 1, |ξ₂/ξ₁| ≤ 1, |ξ₃/ξ₁| ≤ 1},
P̃ = {(ξ₁, ξ₂, ξ₃) ∈ ℝ³ : |ξ₂| ≥ 1, |ξ₁/ξ₂| ≤ 1, |ξ₃/ξ₂| ≤ 1},
P̆ = {(ξ₁, ξ₂, ξ₃) ∈ ℝ³ : |ξ₃| ≥ 1, |ξ₁/ξ₃| ≤ 1, |ξ₂/ξ₃| ≤ 1},

and the center cube

C = {(ξ₁, ξ₂, ξ₃) ∈ ℝ³ : ‖(ξ₁, ξ₂, ξ₃)‖∞ < 1}.

Partitioning the frequency space into pyramids allows the range of the shear parameters to be restricted. Without such partitioning, arbitrarily large shear parameters must be allowed, which leads to a treatment biased toward one axis. The defined partition, however, allows the shear parameters k = (k₁, k₂) to be restricted to |k₁|, |k₂| ≤ ⌈2^(j/2)⌉. This restriction is the key to an almost uniform treatment of different directions, in the sense of a good approximation to rotation.
Figure 1

The partition of frequency domain for 3D pyramid-adapted shearlet transform.

Pyramid-adapted shearlets are scaled according to the paraboloidal scaling matrices A(2^j), Ã(2^j), and Ă(2^j), j ∈ ℤ, defined by

A(2^j) = diag(2^j, 2^(j/2), 2^(j/2)),  Ã(2^j) = diag(2^(j/2), 2^j, 2^(j/2)),  Ă(2^j) = diag(2^(j/2), 2^(j/2), 2^j).

Directionality is encoded by the shear matrices S(k), S̃(k), and S̆(k), k = (k₁, k₂) ∈ ℤ², given by

S(k) = [1 k₁ k₂; 0 1 0; 0 0 1],  S̃(k) = [1 0 0; k₁ 1 k₂; 0 0 1],  S̆(k) = [1 0 0; 0 1 0; k₁ k₂ 1],

respectively. The translation lattices are defined through the matrices M(c) = diag(c₁, c₂, c₂), M̃(c) = diag(c₂, c₁, c₂), and M̆(c) = diag(c₂, c₂, c₁), where c₁ > 0 and c₂ > 0. The 3D pyramid-adapted discrete shearlet system can then be defined as follows.

Definition 1

For c = (c₁, c₂) ∈ (ℝ₊)², the pyramid-adapted discrete shearlet system generated by ϕ, ψ, ψ̃, ψ̆ ∈ L²(ℝ³) is defined by

SH(ϕ, ψ, ψ̃, ψ̆; c) = Φ(ϕ; c₁) ∪ Ψ(ψ; c) ∪ Ψ̃(ψ̃; c) ∪ Ψ̆(ψ̆; c),

where

Φ(ϕ; c₁) = {ϕ_m = ϕ(· − m) : m ∈ c₁ℤ³},
Ψ(ψ; c) = {ψ_{j,k,m} = 2^j ψ(S(k) A(2^j) · − m) : j ≥ 0, |k| ≤ ⌈2^(j/2)⌉, m ∈ M(c)ℤ³},

and Ψ̃(ψ̃; c) and Ψ̆(ψ̆; c) are defined analogously with Ã(2^j), S̃(k), M̃(c) and Ă(2^j), S̆(k), M̆(c), respectively.

2.2. 3D Band Limited Shearlet Transform

The 3D band-limited shearlet transform (3D BLST) is a discrete implementation of the 3D pyramid-adapted shearlet transform. Let the shearlet generator ψ ∈ L²(ℝ³) be defined in the frequency domain by

ψ̂(ξ₁, ξ₂, ξ₃) = ψ̂₁(ξ₁) ψ̂₂(ξ₂/ξ₁) ψ̂₂(ξ₃/ξ₁),

where ψ₁ and ψ₂ satisfy the following assumptions:

(i) ψ̂₁ ∈ C^∞(ℝ), supp ψ̂₁ is compact and bounded away from the origin, and Σ_{j≥0} |ψ̂₁(2^(−2j) ξ)|² = 1 for |ξ| ≥ 1, ξ ∈ ℝ;
(ii) ψ̂₂ ∈ C^∞(ℝ), supp ψ̂₂ ⊂ [−1, 1], and Σ_{k=−1}^{1} |ψ̂₂(ξ + k)|² = 1 for |ξ| ≤ 1, ξ ∈ ℝ.

Thus, in the frequency domain, the band-limited shearlet function ψ ∈ L²(ℝ³) is almost a tensor product of one wavelet with two "bump" functions and is thereby a canonical generalization of the classical band-limited 2D shearlets. This implies a needle-like support in the frequency domain, with the wavelet acting in the radial direction, which ensures high directional selectivity. The deviation from a true tensor product in fact ensures a favorable behavior with respect to the shearing operator and thus a tiling of the frequency domain which leads to a tight frame for L²(ℝ³).

Theorem 2

Let ψ be a band-limited shearlet defined as above. Then the family of functions

Ψ(ψ) = {ψ_{j,k,m} : j ≥ 0, |k| ≤ ⌈2^(j/2)⌉, m ∈ (1/8)ℤ³}

forms a tight frame for L²(P)^∨ := {f ∈ L²(ℝ³) : supp f̂ ⊂ P}.

By this theorem and a change of variables, shearlet tight frames for L²(P̃)^∨ and L²(P̆)^∨ can be constructed analogously. Furthermore, wavelet theory provides a choice of ϕ ∈ L²(ℝ³) such that Φ(ϕ; 1/8) forms a tight frame for L²(C)^∨. Since ℝ³ = C ∪ P ∪ P̃ ∪ P̆ as a disjoint union (up to sets of measure zero), any function f ∈ L²(ℝ³) can be expressed as

f = P_C f + P_P f + P_P̃ f + P_P̆ f,

where P_D denotes the orthogonal projection onto the closed subspace L²(D)^∨ for a measurable set D ⊂ ℝ³. More details of 3D shearlet theory and the implementation of the band-limited shearlet transform, as well as other implementations, can be found in [10–14].

3. Proposed Fusion Method

The proposed fusion method belongs to voxel-level fusion, with the average rule for the low-frequency coefficients and three different fusion rules for the high-frequency coefficients. In the following, C_t denotes the high-frequency coefficients, where t ∈ {a, b, f}; a and b label the two sources and f refers to the fused result.

(a) Max Point Modulus (MPM). The fused high-frequency coefficients are the coefficients with the larger modulus, as represented in (5):

C_f(x, y, z) = C_a(x, y, z), if |C_a(x, y, z)| ≥ |C_b(x, y, z)|; C_b(x, y, z), otherwise. (5)

This fusion rule considers only single-point information.

(b) Max Region Energy (MRE) [8]. One has

C_f = C_a, if E_a ≥ E_b; C_b, otherwise, (6)

where E_t = (1/N) Σ_Ω (C_t − C̄_t)², t ∈ {a, b}, Ω is a local region, C̄_t is the mean of all C_t in Ω, and N is the number of coefficients in Ω. The fused high-frequency coefficients are those with the larger local energy. This rule considers not only the information at the current position but also the information around it.

(c) Max Region Sum of Modified Laplacian (MSML) [9]. The fused high-frequency coefficients are acquired according to (7):

C_f = C_a, if SML_a ≥ SML_b; C_b, otherwise. (7)

The 3D modified Laplacian is calculated through (9) and summed over the local region Ω as in (8):

SML_t = Σ_{(x,y,z)∈Ω} ML_t(x, y, z), (8)

ML_t(x, y, z) = |2C_t(x, y, z) − C_t(x − s, y, z) − C_t(x + s, y, z)| + |2C_t(x, y, z) − C_t(x, y − s, z) − C_t(x, y + s, z)| + |2C_t(x, y, z) − C_t(x, y, z − s) − C_t(x, y, z + s)|. (9)

In this paper, the variation step s equals 1.

The steps of the proposed fusion method are given in Figure 2. First, the 3D BLST is applied to both source volumes; the fused low-frequency coefficients are the average of the two sources' low-frequency coefficients, and the fused high-frequency coefficients are acquired by (5)–(9). Finally, the backward 3D BLST is applied to the fused coefficients, and the output is the fused volume V_f.
Figure 2

Steps of proposed medical volume fusion method.
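The three high-frequency fusion rules (MPM, MRE, MSML) can be sketched on NumPy coefficient arrays as follows; this is an illustrative reading, not the paper's code. The regional energy is computed here as a plain sum of squares over Ω (whether the paper's MRE subtracts the regional mean first is not fully recoverable from the text), and boundaries are handled by wrap-around via `np.roll` for brevity.

```python
import numpy as np

def region_sum(a, r=1):
    """Sum over the (2r+1)^3 neighborhood Ω (circular boundary for simplicity)."""
    out = np.zeros_like(a)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                out += np.roll(a, (dx, dy, dz), axis=(0, 1, 2))
    return out

def fuse_mpm(ca, cb):
    # rule (a): keep the coefficient with the larger modulus at each voxel
    return np.where(np.abs(ca) >= np.abs(cb), ca, cb)

def fuse_mre(ca, cb, r=1):
    # rule (b): keep coefficients from the source with larger regional energy
    ea = region_sum(ca ** 2, r)
    eb = region_sum(cb ** 2, r)
    return np.where(ea >= eb, ca, cb)

def modified_laplacian_3d(c):
    # 3D modified Laplacian with variation step 1, one term per axis
    ml = np.zeros_like(c)
    for ax in range(3):
        ml += np.abs(2 * c - np.roll(c, 1, ax) - np.roll(c, -1, ax))
    return ml

def fuse_msml(ca, cb, r=1):
    # rule (c): keep coefficients with the larger regional sum of modified Laplacian
    sa = region_sum(modified_laplacian_3d(ca), r)
    sb = region_sum(modified_laplacian_3d(cb), r)
    return np.where(sa >= sb, ca, cb)
```

All three rules reduce to a voxel-wise `np.where` selection once their activity measure (modulus, regional energy, or regional SML) has been computed, which is why they drop into the same pipeline unchanged.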

4. Experiment

In this section, the performance of the proposed methods is evaluated on 4 human brain subjects. The human study was approved by our Institutional Review Board. MR examinations were performed on a 3.0 T MR system (Signa HDxt, GE, USA) using an 8-channel head coil. A 3D T2*-weighted multiecho gradient echo sequence was used with the following parameters: FA = 20°; TR = 57 ms; number of TEs = 8; first TE = 5.7 ms; uniform TE spacing (ΔTE) = 6.7 ms; BW = ±41.67 kHz; field of view (FOV) = 24 cm; acquired resolution = 0.57 × 0.75 × 2 mm³. The 3D T2* magnitude and QSM images reconstructed by NMEDI [15] were interpolated to 128 × 128 × 128 for fusion. Because the magnetic field outside the brain parenchyma is corrupted by noise in QSM processing, the QSM region was cropped by a mask obtained with a brain extraction tool (BET) [16]. In the following comparisons, fusion is evaluated within this mask.

4.1. 2D versus 3D: The Consistency along the z-Axis

The volume data has 3 dimensions: the x-, y-, and z-axes. In the first experiment, the 2D methods are applied frame by frame in the x–y plane, while the 3D methods fuse the whole volume directly. A major difference between these two types of methods is their treatment of the data along the z-axis. Consistency along the z-axis is compared through the visual appearance of interframe difference (IFD) images and the IFD_MI measurement [17, 18]. When calculating IFD_MI, only voxels located inside the masks of both frames are used, because the data outside the mask are invalid and the mask differs between IFD images. Let D_a, D_b, D_f denote the IFD images of the source volumes V_a, V_b and the fused volume V_f, with D_d(:, :, i) = V_d(:, :, i + 1) − V_d(:, :, i), d ∈ {a, b, f}. The per-frame measure is MI(D_a(:, :, i), D_b(:, :, i), D_f(:, :, i)), and the IFD_MI of this paper is obtained by averaging it over the frame pairs along the z-axis, as in (10), where N refers to the number of frames along the z-axis, M_i is the common mask of frames i and i + 1, and • denotes pointwise multiplication:

IFD_MI = (1/(N − 1)) Σ_{i=1}^{N−1} MI(D_a(:, :, i) • M_i, D_b(:, :, i) • M_i, D_f(:, :, i) • M_i). (10)

From the visual impression of the IFD images (Figure 3), it can be noticed that the images fused by the 2D methods show several obvious distortions that make them similar to neither of the source images, whereas the IFD images of the 3D methods are more consistent with, and highly correlated to, the IFDs of the source data (Figure 3). The IFD images of the fused volumes are very similar to the IFDs of the QSM data, and the differences among the 3D methods can hardly be noticed. The same conclusion can be drawn from the IFD_MI quality index, given in Tables 1, 2, 3, and 4: 3D BLST with the MPM fusion rule has the highest IFD_MI, and every 3D method scores higher than the 2D methods with the same fusion rule.
Figure 3

Interframe differences for 2D and 3D fusion methods.
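A rough sketch of the IFD_MI idea: the exact three-argument MI and mask handling of (10) are not fully recoverable from the extracted text, so this hypothetical version averages pairwise histogram-based MI between the fused volume's interframe differences and each source's, without masking.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based MI (in bits) between two 2D difference images."""
    h, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of x
    py = p.sum(axis=0, keepdims=True)   # marginal of y
    nz = p > 0                          # avoid log(0) on empty bins
    return float((p[nz] * np.log2(p[nz] / (px * py)[nz])).sum())

def ifd_mi(va, vb, vf):
    """Average interframe-difference MI of the fused volume against both sources."""
    n = vf.shape[2]
    total = 0.0
    for i in range(n - 1):
        da = va[:, :, i + 1] - va[:, :, i]
        db = vb[:, :, i + 1] - vb[:, :, i]
        df = vf[:, :, i + 1] - vf[:, :, i]
        total += mutual_information(df, da) + mutual_information(df, db)
    return total / (n - 1)
```

Higher values mean the fused volume's frame-to-frame changes track those of the sources, which is exactly the z-axis consistency the 2D-vs-3D comparison probes.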

Table 1

IFD_MI for the first group data.

IFD_MI    2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM       1.8443    1.7659      1.8905    2.2147      2.5317
MRE       1.7349    1.7650      1.8989    2.1558      2.3629
MSML      1.7274    1.7503      2.0965    2.3432      2.3766
Table 2

IFD_MI for the second group data.

IFD_MI    2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM       2.4516    2.4852      2.5762    2.8131      3.0744
MRE       2.4126    2.4130      2.4601    2.6169      2.8617
MSML      2.4105    2.4055      2.6021    2.7699      2.8555
Table 3

IFD_MI for the third group data.

IFD_MI    2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM       1.5034    1.5546      1.6884    2.0240      2.3175
MRE       1.4809    1.4978      1.6354    1.9907      2.1224
MSML      1.4705    1.4830      1.7859    2.0380      2.1244
Table 4

IFD_MI for the fourth group data.

IFD_MI    2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM       1.8390    1.9122      2.0275    2.3374      2.6479
MRE       1.8620    1.8957      2.0668    2.3143      2.5054
MSML      1.8573    1.8855      2.2373    2.4615      2.4778

4.2. Performance of Proposed Methods

In the second experiment, the visual effect and the quality indices are compared among the fusion methods based on the 2D and 3D DWT, the 2D and 3D DT CWT, and the 3D BLST. Two widely used performance indices are selected as objective measurements of the fused results: mutual information (MI) and Q^AB/F. The Q^AB/F index of [19], however, is suited only to 2D images, so it is extended to 3D in our experiment: the 2D Sobel operator is replaced by a 3D Sobel operator, and the number of angles increases to 3. In most previous image fusion research, the quality index is calculated over all voxels (or pixels) of the source images (or volumes). In this experiment, however, only the points located inside the mask are taken into consideration, because points outside the mask carry no useful information about the brains and are set to arbitrary values manually; they are therefore ignored in the evaluation step. One layer from each of the coronal, axial, and sagittal planes is selected as representative; the source and result images are shown in Figures 4, 5, and 6. From the visual impression it is hard to tell which fusion method is better, because the resulting images are very similar to each other; the distinctions among them can only be noticed after careful observation. This suggests that the proposed method and all the conventional methods can fulfill the fusion task effectively. The performance of the different fusion methods can instead be compared through the quality indices listed in Tables 5, 6, 7, and 8. The tables show that the quality indices of the proposed method are larger than those of the methods based on the DWT or DT CWT, and that among the fusion rules applied with 3D BLST, MRE has the largest indices.
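The 3D Sobel responses underlying the extended Q^AB/F index can be sketched as follows, assuming SciPy's `ndimage.sobel` for the per-axis derivatives; the three-angle parameterization (one angle per axis pair) follows the text's remark but is an assumption about the exact implementation, and `edge_strength_and_angles` is an illustrative name.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_strength_and_angles(v):
    """3D Sobel edge strength and three orientation angles (one per axis pair)."""
    gx, gy, gz = (sobel(v, axis=a) for a in range(3))
    g = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)  # edge strength, as in Q^AB/F
    a_xy = np.arctan2(gy, gx)                 # orientation in the x-y plane
    a_xz = np.arctan2(gz, gx)                 # orientation in the x-z plane
    a_yz = np.arctan2(gz, gy)                 # orientation in the y-z plane
    return g, (a_xy, a_xz, a_yz)
```

In a Q^AB/F-style evaluation, these strength and angle maps for each source and the fused volume would then be compared voxel-by-voxel (restricted to the brain mask, per the text) to score how much edge information survives fusion.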
Figure 4

Coronal source and result images.

Figure 5

Axial source and result images.

Figure 6

Sagittal source and result images.

Table 5

Performance of the first group.

First group          2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM    MI            1.1652    1.2168      1.1615    1.2404      1.2683
       Q^AB/F        0.1824    0.1985      0.1820    0.2109      0.2288
MRE    MI            1.1596    1.2323      1.1471    1.2561      1.2974
       Q^AB/F        0.1987    0.2185      0.1975    0.2351      0.2611
MSML   MI            1.1566    1.2339      1.2043    1.2617      1.2799
       Q^AB/F        0.1977    0.2158      0.2257    0.2402      0.2497
Table 6

Performance of the second group.

Second group         2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM    MI            1.1897    1.2351      1.1840    1.2770      1.2960
       Q^AB/F        0.2122    0.2326      0.2154    0.2555      0.2732
MRE    MI            1.2505    1.2921      1.2491    1.3271      1.3340
       Q^AB/F        0.2461    0.2596      0.2498    0.2885      0.3051
MSML   MI            1.2545    1.2969      1.3056    1.3157      1.3232
       Q^AB/F        0.2434    0.2578      0.2720    0.2927      0.2971
Table 7

Performance of the third group.

Third group          2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM    MI            0.9239    0.9661      0.9207    0.9915      1.0246
       Q^AB/F        0.1763    0.1923      0.1785    0.2116      0.2283
MRE    MI            0.9514    1.0068      0.9592    1.0390      1.0674
       Q^AB/F        0.1906    0.2084      0.2001    0.2333      0.2546
MSML   MI            0.9511    1.0087      1.0116    1.0605      1.0650
       Q^AB/F        0.1872    0.2052      0.2241    0.2424      0.2457
Table 8

Performance of the fourth group.

Fourth group         2D DWT    2D DTCWT    3D DWT    3D DTCWT    3D BLST
MPM    MI            1.0968    1.1612      1.0960    1.1899      1.2168
       Q^AB/F        0.1953    0.2170      0.1972    0.2363      0.2540
MRE    MI            1.1177    1.1879      1.1422    1.2283      1.2512
       Q^AB/F        0.2157    0.2402      0.2354    0.2780      0.2936
MSML   MI            1.1225    1.1912      1.1959    1.2367      1.2401
       Q^AB/F        0.2152    0.2367      0.2623    0.2860      0.2883
The method of this paper is a voxel-level fusion method, which considers only the distribution of the shearlet coefficients. In future work, the inner structural features of the organ will be taken into account to see whether they can further improve the quality of the fused medical volume.

5. Conclusion

In this paper, a 3D medical volume fusion method based on the 3D band-limited shearlet transform is proposed. From the principles of the methods and the experiments, the following conclusions can be drawn: (1) in medical volume fusion, 3D transform-based methods have better consistency along the z-axis (the third dimension) than conventional 2D transform-based methods; (2) by both visual impression and quality indices, the proposed 3D BLST fusion method outperforms the methods based on the 3D DWT or 3D DTCWT; (3) among the fusion rules applied with 3D BLST, the MRE rule performs better than the other two rules, MPM and MSML.
References (10 in total)

1.  3-D discrete shearlet transform and video processing.

Authors:  Pooran Singh Negi; Demetrio Labate
Journal:  IEEE Trans Image Process       Date:  2012-01-11       Impact factor: 10.856

2.  Nonlinear formulation of the magnetic field to source relationship for robust quantitative susceptibility mapping.

Authors:  Tian Liu; Cynthia Wisnieff; Min Lou; Weiwei Chen; Pascal Spincemaille; Yi Wang
Journal:  Magn Reson Med       Date:  2012-04-09       Impact factor: 4.668

3.  Shearlet-based total variation diffusion for denoising.

Authors:  Glenn R Easley; Demetrio Labate; Flavia Colonna
Journal:  IEEE Trans Image Process       Date:  2008-12-16       Impact factor: 10.856

4.  A shearlet approach to edge analysis and detection.

Authors:  Sheng Yi; Demetrio Labate; Glenn R Easley; Hamid Krim
Journal:  IEEE Trans Image Process       Date:  2009-05       Impact factor: 10.856

5.  The discrete shearlet transform: a new directional transform and compactly supported shearlet frames.

Authors:  Wang-Q Lim
Journal:  IEEE Trans Image Process       Date:  2010-01-26       Impact factor: 10.856

6.  Nonseparable shearlet transform.

Authors:  Wang-Q Lim
Journal:  IEEE Trans Image Process       Date:  2013-01-30       Impact factor: 10.856

7.  [Ultrasound-guided image fusion with computed tomography and magnetic resonance imaging. Clinical utility for imaging and interventional diagnostics of hepatic lesions].

Authors:  D-A Clevert; A Helck; P M Paprottka; P Zengel; C Trumm; M F Reiser
Journal:  Radiologe       Date:  2012-01       Impact factor: 0.635

8.  MRI-3D ultrasound-X-ray image fusion with electromagnetic tracking for transendocardial therapeutic injections: in-vitro validation and in-vivo feasibility.

Authors:  Charles R Hatt; Ameet K Jain; Vijay Parthasarathy; Andrew Lang; Amish N Raval
Journal:  Comput Med Imaging Graph       Date:  2013-04-03       Impact factor: 4.790

9.  Fast robust automated brain extraction.

Authors:  Stephen M Smith
Journal:  Hum Brain Mapp       Date:  2002-11       Impact factor: 5.038

10.  A fusion algorithm for GFP image and phase contrast image of Arabidopsis cell based on SFL-contourlet transform.

Authors:  Peng Feng; Jing Wang; Biao Wei; Deling Mi
Journal:  Comput Math Methods Med       Date:  2013-02-14       Impact factor: 2.238

