Varun P Gopi1, P Palanisamy, Khan A Wahid, Paul Babyn. 1. Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu 620015, India. vpgcet@gmail.com
Abstract
This paper introduces an efficient algorithm for magnetic resonance (MR) image reconstruction. The proposed method minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct MR images from undersampled k-space data. The nonlocal total variation is taken as the L1-regularization functional and solved using Split Bregman iteration. The proposed algorithm is compared with previous methods in terms of reconstruction accuracy and computational complexity. The comparison results demonstrate the superiority of the proposed algorithm for compressed MR image reconstruction.
Magnetic resonance (MR) imaging is widely used in diagnosis because of its excellent depiction of soft-tissue changes and its noninvasive nature. As explained in [1, 2], compressed sensing makes it possible to accurately reconstruct MR images from undersampled k-space data and thereby considerably shorten the scanning period. Suppose u ∈ R^N is a sparse signal, and let K be a measurement matrix such that Ku = v, where v is the observed data. Then, recovering u from v is equivalent to solving

    min_u J(u)   subject to   Ku = v,                                  (1)

where J(u) is a regularizing functional, usually a bounded-variation or Besov norm. In compressed sensing MRI, K is a partial Fourier matrix (K = Pℱ), where P ∈ R^{M×N} consists of M rows of the N × N identity matrix (M ≪ N), ℱ is the discrete Fourier matrix, and v is the observed k-space data. When v is contaminated with Gaussian noise of variance σ², the relaxed form of (1) is given by

    min_u J(u)   subject to   ‖Ku − v‖₂² ≤ σ².                         (2)
In order to make (1) simpler to solve, it is first converted into an unconstrained optimization problem. One common way of doing this is the penalty-function/continuation method, which approximates (1) by a problem of the form

    min_u J(u) + (μ/2)‖Ku − v‖₂²,                                      (3)

where μ is a balancing parameter between the fidelity term and the sparsity term [3, 4]. This is the unconstrained problem we need to solve.

Many iterative methods exist to reconstruct MR images from undersampled data, such as the conjugate gradient algorithm (CG) [4], the operator splitting algorithm (TVCMRI) [5], the variable splitting method (RecPF) [6], the composite splitting algorithm (CSA) and its accelerated version, the fast composite splitting algorithm (FCSA) [7], and the split Bregman algorithm combined with total variation (SB-TV) [8]. The Split Bregman method provides a better solution to a wide class of L1-regularized problems. In SB-TV, the Split Bregman technique is applied to the Rudin-Osher-Fatemi functional for the compressed sensing problem that arises in magnetic resonance imaging; a detailed explanation of the Split Bregman technique is given in Section 2. Reconstruction using CG is very slow for MR images. TVCMRI and RecPF replace iterative linear solvers with Fourier-domain computations, which reduces computation time. FCSA, which is based on wavelet sparsity and total variation (TV), performs much better than TVCMRI, RecPF, and CSA. However, despite the huge popularity of TV minimization, it also has some unwanted effects. In the presence of noise, it tends toward piecewise-constant solutions, the so-called staircase effect, which was analyzed in detail in [9, 10]. Another deficit of TV regularization is a systematic loss of contrast in the reconstructions, even if the given data is noise free; this well-known systematic error of total variation minimization was studied extensively in [11, 12]. The major problem with TV-based compressed sensing methods is that they penalize the image gradient uniformly, irrespective of the underlying image structures, so low-contrast regions are sometimes oversmoothed [13]. To resolve this issue, we propose a new algorithm that minimizes a linear combination of a nonlocal total variation (NLTV) term and a least-squares data-fitting term to reconstruct MR images from undersampled data. In medical image reconstruction, fine structures, details, and texture should be preserved. Nonlocal total variation makes the recovered image sharper while preserving these fine structures, details, and textures, and it is much better than TV for improving the signal-to-noise ratio in practical applications [14-16]. The authors of [17, 18] showed that, among the existing nonlocal regularization techniques, NLTV performs well in reconstruction. So, in the proposed method, the nonlocal total variation is taken as the L1-regularization functional and solved using the Split Bregman algorithm. Numerous experiments have been done on real MR images to show the efficiency and advantages of the proposed work in terms of computational complexity and reconstruction accuracy.
2. Split Bregman Algorithm and Regularization Method
L1-regularized problems have many important applications in engineering and imaging science. The general form of such problems is

    min_u |Φ(u)| + L(u),                                               (4)

where |·| represents the L1-norm and both Φ(u) and L(u) are convex functions. Many important problems in image processing can be interpreted as L1-regularized optimization problems; Equation (3) is an example of such a problem:

    min_u |Φ(u)| + (μ/2)‖Ku − v‖₂².                                    (5)

Unfortunately, for many problems, choosing a large value of μ makes (5) extremely difficult to solve numerically [19, 20]. Moreover, for several applications, μ must be increased in small steps, making this strategy less efficient. Bregman iteration can instead be used to reduce (5) to a short sequence of unconstrained problems.
2.1. Bregman Iteration
Bregman iteration is a method for finding extrema of convex functionals [21]. It has already been applied to solve the basis pursuit problem in [22] and a medical imaging problem in [23]. The general formulation of the method is explained using the Bregman distance. The Bregman distance associated with a convex function J at the point v is

    D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩,                            (6)

where p is in the subgradient of J at v. Clearly, this is not a distance in the usual sense; it measures closeness in the sense that D_J^p(u, v) ≥ 0 and D_J^p(u, v) ≥ D_J^p(w, v) for w on the line segment between u and v. Now consider two convex energy functionals, J and L, defined over R^N with min_u L(u) = 0. The associated unconstrained minimization problem is

    min_u J(u) + μ L(u).                                               (7)

Following Bregman [21], this problem is modified by iteratively solving

    u^{k+1} = argmin_u D_J^{p^k}(u, u^k) + μ L(u).                     (8)

For simplicity, assume that L is differentiable. In this case, 0 ∈ ∂(D_J^{p^k}(u, u^k) + μ L(u)), where this subdifferential is evaluated at u^{k+1}. Since p^{k+1} ∈ ∂J(u^{k+1}) at this location,

    p^{k+1} = p^k − μ ∇L(u^{k+1}).                                     (9)

Applying the Bregman iteration (8) to (5) gives

    u^{k+1} = argmin_u D_J^{p^k}(u, u^k) + (μ/2)‖Ku − v‖₂²,            (10)
    p^{k+1} = p^k − μ Kᵀ(Ku^{k+1} − v).                                (11)

Bregman iterations of this form were considered in [12], where it is shown that when K is linear, this seemingly complicated iteration is equivalent to the simplified method

    u^{k+1} = argmin_u J(u) + (μ/2)‖Ku − v^k‖₂²,
    v^{k+1} = v^k + v − Ku^{k+1}.                                      (12)
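To make the simplified two-step Bregman iteration just described concrete (minimize, then add the residual back to the data), the sketch below applies it to a toy problem with J(u) = (λ/2)‖u‖₂², so the u-subproblem has a closed form. The orthogonal K, parameter values, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # toy orthogonal measurement matrix
u_true = rng.standard_normal(5)
v = K @ u_true                                     # noise-free observed data

lam, mu = 1.0, 10.0
A = lam * np.eye(5) + mu * K.T @ K                 # normal matrix of the u-subproblem
vk = v.copy()
u = np.zeros(5)
for _ in range(200):
    # u-step: argmin_u (lam/2)||u||^2 + (mu/2)||K u - v^k||^2 (closed form)
    u = np.linalg.solve(A, mu * K.T @ vk)
    # residual feedback: v^{k+1} = v^k + v - K u^{k+1}
    vk = vk + v - K @ u
```

Even though each subproblem is only weakly attached to the constraint, the residual feedback drives Ku toward v, so the iterates approach the exact solution without ever needing a large μ.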
2.2. Split Bregman Method
We now apply the Bregman framework to the L1-regularized problem (4). Assume that both Φ(u) and L(u) are convex functions and that Φ(u) is differentiable. The key step is to decouple the L1 and L2 parts of the energy in (4) by rewriting (4) as the constrained problem

    min_{u,d} |d| + L(u)   subject to   d = Φ(u).                      (13)

To solve this problem, first convert it into an unconstrained problem:

    min_{u,d} |d| + L(u) + (λ/2)‖d − Φ(u)‖₂².                          (14)

Let J(u, d) = |d| + L(u) and K(u, d) = d − Φ(u); then (14) is simply an application of (5). Using (11), we obtain the solution of the above problem:

    (u^{k+1}, d^{k+1}) = argmin_{u,d} D_J^{p^k}(u, u^k, d, d^k) + (λ/2)‖d − Φ(u)‖₂².   (15)

Applying the simplification presented in (12), we get

    (u^{k+1}, d^{k+1}) = argmin_{u,d} |d| + L(u) + (λ/2)‖d − Φ(u) − b^k‖₂²,   (16)
    b^{k+1} = b^k + Φ(u^{k+1}) − d^{k+1}.                              (17)

In order to perform the minimization efficiently, split the L1 and L2 components of (16) and minimize with respect to u and d separately:

    u^{k+1} = argmin_u L(u) + (λ/2)‖d^k − Φ(u) − b^k‖₂²,               (18)
    d^{k+1} = argmin_d |d| + (λ/2)‖d − Φ(u^{k+1}) − b^k‖₂².            (19)

In (19), there is no coupling between the elements of d, so the optimal value of d can be found elementwise with the shrinkage operator:

    d^{k+1} = shrink(Φ(u^{k+1}) + b^k, 1/λ),                           (20)

where

    shrink(x, γ) = (x/|x|) · max(|x| − γ, 0).                          (21)

This shrinkage operation is extremely fast and requires only a few operations per element of d. Based on the equations above, the generalized Split Bregman algorithm is summarized in Algorithm 1.
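As a minimal illustration of the shrinkage operator just described, here is a vectorized sketch; the small epsilon guarding |x| = 0 is an implementation detail, not part of the formula.

```python
import numpy as np

def shrink(x, gamma):
    """Elementwise soft-thresholding: (x/|x|) * max(|x| - gamma, 0)."""
    mag = np.abs(x)
    # the epsilon avoids division by zero; the result there is 0 anyway
    return x / np.maximum(mag, 1e-30) * np.maximum(mag - gamma, 0.0)

print(shrink(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]), 1.0))  # → [-1. -0.  0.  0.  1.]
```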
Algorithm 1
Generalized Split Bregman algorithm.
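The generalized loop of Algorithm 1 can be sketched compactly when instantiated for 1D total-variation denoising with Φ(u) = Du (forward differences) and L(u) the least-squares data term; the parameters μ, λ, the signal, and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def shrink(x, gamma):
    # elementwise soft-thresholding, the solution of the d-subproblem
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman_tv1d(f, mu=10.0, lam=1.0, n_iter=100):
    """Split Bregman for min_u ||D u||_1 + (mu/2)||u - f||^2, with D the
    forward-difference operator (here Phi(u) = D u and L(u) is the data term)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n forward differences
    A = mu * np.eye(n) + lam * D.T @ D        # normal matrix of the u-subproblem
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # L2 (u) step
        d = shrink(D @ u + b, 1.0 / lam)                      # L1 (d) step
        b = b + D @ u - d                                     # Bregman update
    return u

# denoise a noisy piecewise-constant signal
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])
f = clean + 0.05 * rng.standard_normal(100)
u = split_bregman_tv1d(f)
```

The recovered u is close to the clean step: the quadratic u-step and the shrinkage d-step each have cheap closed-form solutions, which is exactly what the splitting buys.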
2.3. Nonlocal Total Variation Norm
Total variation (TV) regularization [24, 25] makes the recovered image sharper, but it does not preserve fine structures, details, and textures. This effect is caused by the regularity assumption of the TV image model, namely, that the image has a simple geometric description consisting of a set of connected sets (objects) with smooth contours (edges); the model further assumes that the image is smooth inside single objects, with discontinuous jumps across their boundaries. TV regularization is therefore well suited to reducing noise and reconstructing the main geometrical configuration of an image, but it fails to preserve texture, details, and fine structures, because these behave in all aspects like noise and cannot be distinguished from it. The nonlocal total variation (NLTV) extends the TV functional to a nonlocal variant through nonlocal derivative operators defined by a nonlocal weight function (graph) [14–17, 26, 27]. NLTV is an effective replacement for TV for improving the signal-to-noise ratio in practical applications [14-16], and it has recently been used successfully for 4D computed tomography reconstruction from few projection data [28]. The notion "nonlocal" means that any point can interact directly with any other point in the whole image domain, with the intensity of the interaction depending on the value of the weight function. This weight function represents the similarity of two points and is significant when both points are similar in an appropriate measure. The expectation is therefore that such an approach can process both structures (geometrical parts) and texture within the same framework, by identifying recurring structures in the whole image.
A brief review of the nonlocal functionals is given below. Let Ω ⊂ R², and let u : Ω → R be a real function. Assume that w : Ω × Ω → R is a symmetric, nonnegative weight function. Then, the nonlocal gradient of an image u(x) is defined as

    ∇_w u(x, y) = (u(y) − u(x)) √(w(x, y)),   x, y ∈ Ω.                (22)

The norm of a vector q : Ω × Ω → R at a point x ∈ Ω is given by

    |q|(x) = ( ∫_Ω q(x, y)² dy )^{1/2}.                                (23)

Hence, the norm of the nonlocal gradient of u at a point x ∈ Ω is

    |∇_w u|(x) = ( ∫_Ω (u(y) − u(x))² w(x, y) dy )^{1/2}.              (24)

The nonlocal divergence operator is defined through the standard adjoint relation with the nonlocal gradient,

    ⟨∇_w u, q⟩ = −⟨u, div_w q⟩,                                        (25)

which leads to the definition of the nonlocal divergence of a nonlocal vector q:

    div_w q(x) = ∫_Ω (q(x, y) − q(y, x)) √(w(x, y)) dy.                (26)

Next, the nonlocal Laplacian operator is defined as

    Δ_w u(x) = (1/2) div_w(∇_w u)(x) = ∫_Ω (u(y) − u(x)) w(x, y) dy.   (27)

The discrete forms of the nonlocal gradient, divergence, and Laplace operators are

    (∇_w u)_{ij} = (u_j − u_i) √(w_{ij}),
    (div_w q)_i = Σ_j (q_{ij} − q_{ji}) √(w_{ij}),
    (Δ_w u)_i = Σ_j (u_j − u_i) w_{ij},                                (28)

where u_i is the value of pixel i in the image (1 ≤ i ≤ N) and w_{ij} is the discrete sparse version of w(x, y), defined as

    w_{ij} = exp( −‖G_σ ∗ (u(x_i + ·) − u(x_j + ·))‖₂² / h² ),          (29)

where h is a filtering parameter (in general h corresponds to the noise level, and we usually set it to the standard deviation of the noise), G_σ is a Gaussian of standard deviation σ, and u(x) and u(y) are the image values at pixels x and y. The weight function w(x, y) denotes how much the difference between pixels x and y is penalized in the image: the more similar the neighborhoods of x and y are, the more the difference should be penalized, and vice versa. The weight w(x, y) is significant only if the window around y looks like the corresponding window around x. Hence, the nonlocal algorithm is very efficient at reducing noise while preserving contrast and redundant structures such as texture in natural images. In our work, we use 5 × 5 pixel patches and a search neighborhood window of size 11 × 11. The observed noisy data v is taken as the reference image for constructing the weights; by this weighted averaging, structures such as boundaries are reinforced, while the noise is cancelled out [29]. The weights are computed using either a distance between the noisy pixel values |u(x) − u(y)| [30-32] or a distance between the patches around x and y [33-35]. The use of NLTV reduces the noise in the reconstructed image and thus the difference between the reference and reconstructed images; from the definition of the signal-to-noise ratio (SNR), this reduction directly increases the SNR.
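The discrete nonlocal operators and patch-based weights described above can be sketched as follows. The patch radius, search radius, filtering parameter h, and the dense weight matrix are illustrative simplifications (a practical implementation would keep w sparse and include the Gaussian patch weighting G_σ).

```python
import numpy as np

def nlm_weights(img, patch=1, search=2, h=0.5):
    """Patch-based weights w_ij = exp(-||P_i - P_j||^2 / h^2) inside a search
    window (dense for clarity; symmetric because window and distance are)."""
    H, W = img.shape
    pad = np.pad(img, patch, mode="reflect")
    w = np.zeros((H * W, H * W))
    for i in range(H):
        for j in range(W):
            Pi = pad[i:i + 2 * patch + 1, j:j + 2 * patch + 1]
            for a in range(max(0, i - search), min(H, i + search + 1)):
                for b in range(max(0, j - search), min(W, j + search + 1)):
                    Pj = pad[a:a + 2 * patch + 1, b:b + 2 * patch + 1]
                    w[i * W + j, a * W + b] = np.exp(-np.sum((Pi - Pj) ** 2) / h ** 2)
    return w

def nl_gradient(u, w):
    # (grad_w u)_ij = (u_j - u_i) * sqrt(w_ij)
    return (u[None, :] - u[:, None]) * np.sqrt(w)

def nl_divergence(q, w):
    # (div_w q)_i = sum_j (q_ij - q_ji) * sqrt(w_ij); for symmetric w this
    # satisfies the adjoint relation <grad_w u, q> = -<u, div_w q>
    return np.sum((q - q.T) * np.sqrt(w), axis=1)

img = np.linspace(0.0, 1.0, 36).reshape(6, 6)
w = nlm_weights(img)
```

The adjoint relation is worth checking numerically, since it is what makes the divergence the correct transpose in the optimality conditions.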
3. Proposed Work
The proposed work minimizes a linear combination of a nonlocal total variation term and a least-squares data-fitting term to reconstruct the MR image from undersampled data. The main aim is to solve the compressed sensing MRI problem (2) using the Split Bregman algorithm with nonlocal total variation: the nonlocal total variation is taken as the L1-regularization functional and solved using Split Bregman iteration. Recall (2):

    min_u J(u)   subject to   ‖Ku − v‖₂² ≤ σ².                         (30)

Using the Bregman iteration method, (30) can be reduced to a sequence of unconstrained problems of the form

    u^{k+1} = argmin_u J(u) + (μ/2)‖Ku − v^k‖₂²,
    v^{k+1} = v^k + v − Ku^{k+1},                                      (31)

where J(u) represents the L1-regularization term. To proceed further, the choice of regularizer is important. Here, we choose the nonlocal total variation (NLTV) as the regularizer; that is,

    J(u) = ‖∇_NL u‖₁,                                                  (32)

so that (31) becomes

    u^{k+1} = argmin_u ‖∇_NL u‖₁ + (μ/2)‖Ku − v^k‖₂²,                  (34)

where ‖∇_NL u‖₁ = Σ|∇_NL u|. Introducing an auxiliary variable d in place of ∇_NL u, (34) can be written as

    min_{u,d} ‖d‖₁ + (μ/2)‖Ku − v^k‖₂²   subject to   d = ∇_NL u.      (35)
Equation (35) can be converted into unconstrained form using the quadratic penalty method:

    min_{u,d} ‖d‖₁ + (μ/2)‖Ku − v^k‖₂² + (λ/2)‖d − ∇_NL u‖₂².          (36)

Using the split Bregman method, (36) can be transformed into the following forms:

    (u^{k+1}, d^{k+1}) = argmin_{u,d} ‖d‖₁ + (μ/2)‖Ku − v^k‖₂² + (λ/2)‖d − ∇_NL u − b^k‖₂²,   (37)
    b^{k+1} = b^k + ∇_NL u^{k+1} − d^{k+1}.                            (38)

Equation (37) is convex and can be minimized by alternately solving the following two minimization subproblems with respect to u and d:

    u^{k+1} = argmin_u (μ/2)‖Ku − v^k‖₂² + (λ/2)‖d^k − ∇_NL u − b^k‖₂²,
    d^{k+1} = argmin_d ‖d‖₁ + (λ/2)‖d − ∇_NL u^{k+1} − b^k‖₂².         (39)

By direct computation, the optimality condition of (39) with respect to u is

    μKᵀ(Ku − v^k) − λ div_NL(∇_NL u − d^k + b^k) = 0.                  (40)

A Gauss-Seidel iteration is used to obtain a fast discrete solution of the nonlocal term in (40).
As explained in the introduction, K is a partial Fourier matrix (K = Pℱ), where P ∈ R^{M×N} consists of M rows of the identity matrix (M ≪ N) and ℱ is the discrete Fourier matrix. Using the identity ℱᵀ = ℱ⁻¹, we have KᵀK = ℱ⁻¹PᵀPℱ, so the data-fidelity part of (40) can be handled efficiently in the Fourier domain, where PᵀP is a diagonal sampling mask. The discrete solution of the d-subproblem in (39) is the elementwise shrinkage

    d^{k+1} = shrink(∇_NL u^{k+1} + b^k, 1/λ).                         (44)

Finally, the Bregman variable is updated as

    b^{k+1} = b^k + ∇_NL u^{k+1} − d^{k+1}.                            (45)
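The partial Fourier operator K = Pℱ and its adjoint can be sketched with a unitary FFT as follows; the 1D setting, mask density, and test signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = rng.random(n) < 0.3                 # P: random k-space row selector

def K(u):
    """K = P F with a unitary FFT, so the adjoint uses the inverse transform."""
    return np.fft.fft(u, norm="ortho")[mask]

def K_adj(v):
    """K^T = F^{-1} P^T: zero-fill the unsampled frequencies, then inverse FFT."""
    full = np.zeros(n, dtype=complex)
    full[mask] = v
    return np.fft.ifft(full, norm="ortho")

u = rng.standard_normal(n)
v = rng.standard_normal(mask.sum()) + 1j * rng.standard_normal(mask.sum())
print(np.allclose(np.vdot(K(u), v), np.vdot(u, K_adj(v))))   # adjoint test → True
```

In the Fourier domain, KᵀK reduces to multiplication by the 0/1 sampling mask, which is what makes the u-update cheap.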
The proposed method is summarized as Algorithm 2.
Algorithm 2
Proposed algorithm.
4. Evaluation of Image Quality
In this work, a detailed evaluation study has been carried out on the reconstruction of MR images representing varying degrees of structural complexity. Even though algorithms based on regularization techniques effectively remove streaks, other aspects of image quality should also be analyzed. To address this, a number of image quality evaluations are performed at different levels, including qualitative visualization-based evaluation and quantitative metric-based evaluation.
4.1. Qualitative Visualization-Based Evaluation
In qualitative visualization-based evaluation, the reconstructed images obtained with the different algorithms are visually compared with the reference image.
4.2. Quantitative Metric-Based Evaluation
Besides the visualization-based evaluation, the similarity between the reconstructed and reference images is quantitatively assessed by means of four measures: signal-to-noise ratio (SNR), relative error (RE), structural similarity index (SSIM), and feature similarity index (FSIM). SNR and RE are widely used for measuring reconstruction accuracy; SSIM and FSIM evaluate the similarity between the reconstructed and reference images.
4.2.1. Signal-to-Noise Ratio (SNR)
The SNR is defined as

    SNR = 10 log₁₀( ‖u_ref − ū_ref‖₂² / ‖u_rec − u_ref‖₂² ),

where u_ref is the reference image, ū_ref is the mean intensity value of u_ref, and u_rec is the reconstructed image.
4.2.2. Relative Error (RE)
The relative error is defined as

    RE = ( ‖u_rec − u_ref‖₂ / ‖u_ref‖₂ ) × 100%.
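Both accuracy measures can be sketched directly from their definitions; the array names and shapes are illustrative.

```python
import numpy as np

def snr_db(u_rec, u_ref):
    """SNR in decibels between a reconstruction and the reference image."""
    signal = np.sum((u_ref - u_ref.mean()) ** 2)
    noise = np.sum((u_rec - u_ref) ** 2)
    return 10.0 * np.log10(signal / noise)

def relative_error(u_rec, u_ref):
    """Relative error in percent."""
    return 100.0 * np.linalg.norm(u_rec - u_ref) / np.linalg.norm(u_ref)
```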
4.2.3. Structural Similarity Index (SSIM)
The SNR measurement gives a numerical value for the damage, but it does not describe its type. Moreover, as noted in [36, 37], it does not fully represent the quality perceived by human observers. For medical imaging applications, where degraded images must eventually be examined by experts, traditional evaluation is insufficient, so objective approaches are needed to assess medical image quality. We therefore also evaluate a paradigm that estimates the quality of medical images based on the assumption that the human visual system (HVS) is highly adapted to extract structural information. The structural similarity index compares the brightness I(x, y), contrast c(x, y), and structure s(x, y) between each pair of signals x and y [38, 39]:

    SSIM(x, y) = I(x, y) · c(x, y) · s(x, y).

The brightness comparison is determined by

    I(x, y) = (2μ_x μ_y + C₁) / (μ_x² + μ_y² + C₁),

where the average intensity μ_x of signal x is given by

    μ_x = (1/N) Σ_{i=1}^N x_i,

C₁ = (K₁L)², the constant K₁ ≪ 1, and L is the dynamic range of the pixel values (255 for an image coded on 8 bits). The contrast comparison function takes the form

    c(x, y) = (2σ_x σ_y + C₂) / (σ_x² + σ_y² + C₂),

where σ_x is the standard deviation of the original signal x, C₂ = (K₂L)², and the constant K₂ ≪ 1. The structure comparison function is defined as

    s(x, y) = (cov(x, y) + C₃) / (σ_x σ_y + C₃),

where cov(x, y) = μ_{xy} − μ_x μ_y and C₃ = C₂/2. The expression of the structural similarity index then becomes

    SSIM(x, y) = (2μ_x μ_y + C₁)(2 cov(x, y) + C₂) / ((μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂)).
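A single-window sketch of the SSIM expression above (standard implementations average the index over a sliding window; K₁ = 0.01 and K₂ = 0.03 are common illustrative choices, not values from the paper):

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM built from the brightness/contrast/structure terms."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```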
4.2.4. Feature Similarity Index (FSIM)
The SSIM index moves image quality assessment (IQA) from a pixel-based stage to a structure-based stage. The human visual system (HVS) understands an image mainly through its low-level features, chiefly the phase congruency (PC), a dimensionless measure of the significance of a local structure. PC is used as the primary feature in FSIM [40]; the secondary feature is the image gradient magnitude (GM). To find the feature similarity between two images f₁ and f₂, PC and GM must be calculated first. Let PC₁, PC₂, G₁, and G₂ be the phase congruency and gradient magnitude maps of images f₁ and f₂, respectively. Initially, the feature similarity measurement between f₁(x) and f₂(x) is separated into two components. The first similarity measure is based on PC₁(x) and PC₂(x) and is defined as

    S_PC(x) = (2 PC₁(x) PC₂(x) + T₁) / (PC₁²(x) + PC₂²(x) + T₁),

where T₁ is a positive constant that increases the stability of S_PC; its value depends on the dynamic range of PC. Similarly, the similarity measure based on the GM values G₁(x) and G₂(x) is defined as

    S_G(x) = (2 G₁(x) G₂(x) + T₂) / (G₁²(x) + G₂²(x) + T₂),

where T₂ is a positive constant that depends on the dynamic range of the GM values. The next step is to combine S_PC(x) and S_G(x) into the similarity measure S_L(x) between f₁(x) and f₂(x):

    S_L(x) = [S_PC(x)]^α [S_G(x)]^β.

The relative importance of the PC and GM features can be adjusted through the parameters α and β. For simplicity, set α = β = 1; then this becomes

    S_L(x) = S_PC(x) S_G(x),

where S_L(x) represents the similarity at each location x; the overall similarity must still be found. For a given location x, if either f₁(x) or f₂(x) has a significant PC value, that position has a high impact on the HVS when evaluating the similarity between f₁ and f₂. Therefore, a weighting term PC_m(x) = max(PC₁(x), PC₂(x)) is introduced to weight the importance of S_L(x) in the overall similarity between f₁ and f₂. Accordingly, the FSIM index can be defined as

    FSIM = Σ_{x∈Ω} S_L(x) PC_m(x) / Σ_{x∈Ω} PC_m(x),
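Assuming the phase-congruency and gradient-magnitude maps have already been computed (PC extraction itself, via log-Gabor filtering, is beyond this sketch), the FSIM combination step can be written as follows; the default T₁ and T₂ values are illustrative.

```python
import numpy as np

def fsim_from_maps(PC1, PC2, G1, G2, T1=0.85, T2=160.0):
    """Combine precomputed phase-congruency (PC) and gradient-magnitude (G)
    maps into the FSIM score, with alpha = beta = 1."""
    S_pc = (2 * PC1 * PC2 + T1) / (PC1 ** 2 + PC2 ** 2 + T1)
    S_g = (2 * G1 * G2 + T2) / (G1 ** 2 + G2 ** 2 + T2)
    S_l = S_pc * S_g                      # pointwise similarity
    PC_m = np.maximum(PC1, PC2)           # weight by the more significant PC
    return np.sum(S_l * PC_m) / np.sum(PC_m)
```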
5. Experiments and Numerical Results
The experimental setup used in previous works [5-7] is adopted here. Suppose that an MR image u has m × m pixels and that the partial Fourier transform K in (3) consists of n rows of the m² × m² matrix corresponding to the full 2D discrete Fourier transform; the n selected rows correspond to the acquired data v. The sampling ratio is defined as n/m², and the scanning time is shorter when the sampling ratio is smaller. In k-space, we randomly choose more samples at low frequencies and fewer samples at higher frequencies. This sampling scheme has been widely used for compressed MR image reconstruction, and similar schemes are used in [4-7]. In practice, the sampling scheme and speed in MR imaging also depend on physical and physiological limitations [4]. In the proposed work, the compressive sensing matrix is K = Pℱ, where P is a row-selection matrix and ℱ is the Fourier transform matrix; for an N × N image with m randomly chosen coefficients, P is a sampling matrix of size m × N².
All experiments are conducted on a PC with an Intel Core i7-2670 2.2 GHz CPU in the MATLAB environment. The results of the proposed work are compared with the existing methods TVCMRI [5], RecPF [6], CSA and FCSA [7], and SB-TV [8]. The observation v is synthesized as v = Ku + n, where n is the noise; K and v are given as inputs, and u is the unknown target. We considered two sets of observations: one contaminated with Gaussian noise of standard deviation σ = 0.01, and the other contaminated with Rician noise at a noise level of 5%. The proposed and existing algorithms are tested on four 2D MR images: brain, chest, artery, and cardiac, as shown in Figure 1. These images have been used for validation in [7], and for convenience they all have the same size of 256 × 256. The sampling ratio is set to approximately 20%. For the comparisons, all methods run 50 iterations, except SB-TV, which completes reconstruction in 35 iterations, and the proposed method, which takes only 30. The proposed method provides the best visual quality on all MR images. Figures 2, 3, 4, and 5 show visual comparisons of the results reconstructed from Gaussian noisy observations by the different methods for the brain, chest, artery, and cardiac images, respectively; Figure 10 compares the methods in terms of CPU time versus SNR, and Table 1 reports the average CPU time, SNR, RE, SSIM, and FSIM for the Gaussian noise case. Figures 6, 7, 8, and 9 show the corresponding visual comparisons for Rician noisy observations; Figure 11 compares CPU time versus SNR, and Table 2 reports the average CPU time, SNR, RE, SSIM, and FSIM for the Rician noise case.

In both cases, the results produced by the proposed method are better than those produced by TVCMRI, RecPF, CSA, FCSA, and SB-TV. The reconstruction performance of the proposed work is the best in terms of both computational complexity and reconstruction accuracy, which clearly demonstrates the efficiency of the method for compressed MR image reconstruction.
Figure 1
2D MR test images.
Figure 2
Brain MR image reconstruction from 20% samples (Gaussian noise case).
Figure 3
Chest MR image reconstruction from 20% samples (Gaussian noise case).
Figure 4
Artery MR image reconstruction from 20% samples (Gaussian noise case).
Figure 5
Cardiac MR image reconstruction from 20% samples (Gaussian noise case).
Figure 10
Performance comparisons (CPU time versus SNR) on different MR images (Gaussian noise case).
Table 1
Comparison of the average values of quality evaluation parameters (Gaussian noise case).
Metric        Image     TVCMRI   RecPF    CSA      FCSA     SB-TV    Proposed
SNR (dB)      Brain     14.19    14.75    15.20    15.78    16.31    17.49
              Chest     16.99    17.38    17.13    17.63    18.03    18.89
              Artery    18.70    19.98    22.15    23.61    24.17    25.75
              Cardiac   16.97    17.65    17.95    18.77    18.98    19.47
RE (%)        Brain     16.96    16.54    07.97    07.37    06.90    05.98
              Chest     11.58    11.21    10.84    10.33    08.89    07.96
              Artery    12.61    11.84    05.77    04.90    04.57    04.10
              Cardiac   14.03    13.39    11.10    09.78    08.29    07.72
SSIM          Brain     0.9567   0.9742   0.9906   0.9935   0.9940   0.9947
              Chest     0.9829   0.9839   0.9848   0.9861   0.9887   0.9914
              Artery    0.9825   0.9835   0.9848   0.9919   0.9924   0.9940
              Cardiac   0.9614   0.9642   0.9748   0.9799   0.9835   0.9889
FSIM          Brain     0.8712   0.8773   0.9489   0.9654   0.9719   0.9765
              Chest     0.8990   0.9012   0.9063   0.9148   0.9237   0.9374
              Artery    0.9010   0.9168   0.9516   0.9668   0.9705   0.9742
              Cardiac   0.9313   0.9366   0.9454   0.9551   0.9587   0.9629
CPU time (s)  Brain     2.14     2.05     1.96     1.88     1.78     1.62
              Chest     2.21     2.16     1.95     1.86     1.73     1.62
              Artery    2.26     2.15     1.86     1.83     1.72     1.64
              Cardiac   2.13     2.10     1.84     1.82     1.80     1.67
Figure 6
Brain MR image reconstruction from 20% samples (Rician noise case).
Figure 7
Chest MR image reconstruction from 20% samples (Rician noise case).
Figure 8
Artery MR image reconstruction from 20% samples (Rician noise case).
Figure 9
Cardiac MR image reconstruction from 20% samples (Rician noise case).
Figure 11
Performance comparisons (CPU time versus SNR) on different MR images (Rician noise case).
Table 2
Comparison of the average values of quality evaluation parameters (Rician noise case).
Metric        Image     TVCMRI   RecPF    CSA      FCSA     SB-TV    Proposed
SNR (dB)      Brain     13.31    13.85    14.81    15.31    16.24    17.08
              Chest     15.30    15.50    15.45    15.69    16.67    17.18
              Artery    18.34    19.55    21.26    22.54    23.64    24.28
              Cardiac   14.68    14.95    14.98    15.26    17.52    18.93
RE (%)        Brain     17.15    16.78    13.01    12.39    10.36    10.01
              Chest     13.19    12.94    11.58    10.98    09.38    08.81
              Artery    13.24    12.46    06.76    05.95    05.04    04.71
              Cardiac   15.06    14.73    12.69    10.24    09.89    08.96
SSIM          Brain     0.9511   0.9625   0.9787   0.9807   0.9860   0.9870
              Chest     0.9747   0.9762   0.9801   0.9816   0.9828   0.9889
              Artery    0.9810   0.9824   0.9832   0.9889   0.9904   0.9924
              Cardiac   0.9582   0.9592   0.9697   0.9717   0.9812   0.9858
FSIM          Brain     0.8499   0.8562   0.8994   0.9082   0.9290   0.9339
              Chest     0.8860   0.8968   0.9031   0.9067   0.9221   0.9298
              Artery    0.8652   0.8797   0.9424   0.9566   0.9672   0.9715
              Cardiac   0.9220   0.9304   0.9352   0.9404   0.9518   0.9565
CPU time (s)  Brain     2.10     2.05     1.96     1.88     1.78     1.62
              Chest     2.26     2.14     1.97     1.85     1.73     1.63
              Artery    2.18     2.14     1.88     1.84     1.74     1.63
              Cardiac   2.10     2.12     1.84     1.82     1.79     1.67
6. Conclusion
An efficient algorithm has been proposed for compressed MR image reconstruction based on nonlocal total variation and the Split Bregman method. The algorithm effectively solves an NLTV-based L1-norm term by using the Split Bregman method, and NLTV effectively avoids the blocky artifacts caused by traditional TV regularization. Numerous experiments were conducted to compare different reconstruction methods; the results of our study indicate that the proposed method outperforms the classic methods in terms of both accuracy and complexity.