
A new development of non-local image denoising using fixed-point iteration for non-convex ℓp sparse optimization.

Shuting Cai1, Kun Liu1, Ming Yang2, Jianliang Tang3, Xiaoming Xiong1, Mingqing Xiao4.   

Abstract

We propose a new efficient image denoising scheme that makes four main contributions, each taking an approach different from existing ones. The first is to show the equivalence between group-based sparse representation and the Schatten-p norm minimization problem, so that the sparsity of the coefficients for each group can be measured through the underlying singular values. The second is to construct the proximal operator for sparse optimization in ℓp space with p ∈ (0, 1] using fixed-point iteration, obtaining a new solution of the Schatten-p norm minimization problem that is more rigorous and accurate than currently available results. The third is to analyze the suitable setting of the power p for each noise level σ = 20, 30, 50, 60, 75, 100. We find that the optimal value of p is inversely proportional to the noise level except at high noise levels, where the best values of p are 0.95 and 0.9 for σ = 75 and 100, respectively. Last, we measure the structural similarity between image patches and extend a previous deterministic annealing-based solution of the sparsity optimization problem by incorporating the idea of dictionary learning. Experimental results demonstrate that, for every given noise level, the proposed Spatially Adaptive Fixed Point Iteration (SAFPI) algorithm attains the best denoising performance in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), retaining image structure information and outperforming many state-of-the-art denoising methods such as Block-Matching and 3D filtering (BM3D), Weighted Nuclear Norm Minimization (WNNM) and Weighted Schatten p-Norm Minimization (WSNM).


Year:  2018        PMID: 30540797      PMCID: PMC6291268          DOI: 10.1371/journal.pone.0208503

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


1 Introduction

Images are generally contaminated by noise during acquisition, transmission and compression. Real-life images are often degraded by mixed noise, and it is hard to identify its type and model it [1-8]. High-resolution images are desirable in many applications, e.g., object recognition, image classification, and image segmentation in medical and biological science. As an essential low-level image processing procedure, image denoising has been studied extensively and belongs to a special class of classical inverse problems. The general observation with additive noise can be modeled as Y = X + N, where Y is the noisy observation, and X and N represent the original image and white Gaussian noise, respectively. A plethora of noise removal techniques have appeared in recent years; for example, Convolutional Neural Networks (CNN) [9, 10] have proved very promising on denoising tasks for which large training sets are available, but when training data are scarce, their performance suffers from overfitting. Image denoising for real-life noise therefore remains an important challenge for recovering images with high quality [11]. The image denoising problem is in general ill-posed and requires appropriate regularization. Over the past few decades, numerous image denoising methods have been developed [12]. Denoising is usually achieved by minimizing a suitable energy functional that characterizes a trade-off between data fidelity and regularity; the Frobenius norm is often employed to measure the data-fitting loss for additive Gaussian noise. Sparse signal representation describes a signal that can be approximated as a linear combination of as few atoms as possible from a given dictionary. Elad [13] showed that the sparse overcomplete representation approach is quite effective for denoising images, and recent studies show that better denoising performance can be achieved with variants of sparse coding methods [14, 15].
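The observation model Y = X + N and the PSNR criterion can be illustrated with a minimal sketch (Python is used here for illustration only; the paper's experiments were run in Matlab, and the helper names are our own):

```python
import numpy as np

def add_gaussian_noise(x, sigma, seed=0):
    """Simulate the observation model Y = X + N with white Gaussian noise."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

def psnr(x, y, peak=255.0):
    """Peak Signal-to-Noise Ratio between a reference x and an estimate y."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

x = np.full((64, 64), 128.0)        # toy "clean" image
y = add_gaussian_noise(x, sigma=20.0)
print(round(psnr(x, y), 2))          # roughly 22 dB for sigma = 20
```

With sigma = 20 the expected MSE is about 400, giving PSNR ≈ 10·log10(255²/400) ≈ 22 dB, which matches the printed value up to sampling noise.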
In order to promote sparsity more strongly than convex regularization, it is also standard practice to employ non-convex optimization [16]. In image denoising, following [17], each noisy patch y is extracted from the noisy image Y, and a set of similar patches is grouped together to better exploit group sparsity. The denoising problem thus becomes the recovery of x from y. Group sparsity is defined by a group norm ‖A‖ (a quadratic symmetric gauge function, see 2.4.2 of [18]), where the coefficient matrix A is related to the image patches by X = DA and α_i denotes the i-th column of A. In recent years, much research has been devoted to the group sparse optimization problem (1), aiming at improved efficiency and accuracy (e.g., see the survey paper [16] and references therein). Once all group sparse codes A are obtained, the latent clean image X can be reconstructed as X = DA by a standard approach (see Theorems 1 and 2 in [19]). The main contributions of this paper are as follows. (1) We unify the group-based sparse coding in [20] and the Schatten-p norm minimization problem in [21] by proving their mathematical equivalence. (2) A fixed-point iteration scheme is developed for sparse optimization in ℓp space with p ∈ (0, 1] using the proximal operator, and a new solution to the Schatten-p norm minimization problem is obtained, which appears to be more accurate and rigorous than [21]. (3) For image denoising, we find that the optimal value of p is inversely proportional to the noise level except at high noise levels, where the best values of p are 0.95 and 0.9 for noise levels 75 and 100, respectively. (4) The proposed Spatially Adaptive Fixed Point Iteration (SAFPI) algorithm attains the best denoising performance in PSNR and SSIM, retaining image structure information and outperforming many state-of-the-art denoising methods such as BM3D, WNNM and WSNM.
The rest of the paper is organized as follows. In Section 2.1, we prove the equivalence of group-based sparse coding and the Schatten-p norm minimization problem and propose a new solution to the latter; a fixed-point iteration for solving sparse optimization in ℓp space with p ∈ (0, 1] is formulated and discussed. In Section 2.2, we establish an image denoising scheme using nonlocal self-similarity and Schatten-p norm minimization. In Section 3, based on the newly developed Spatially Adaptive Fixed Point Iteration (SAFPI) algorithm, we present experimental results on a set of standard benchmark images; a comparison with several existing methods is also provided to demonstrate the improvement. Finally, the paper ends with concluding remarks.

2 Materials and methods

2.1 Proximal operator for Schatten-p norm minimization

2.1.1 Background

Consider a matrix Y ∈ ℝ^{m×n}; then YᵀY is a positive semidefinite matrix. The square roots of the eigenvalues of YᵀY are called the singular values of Y, denoted by σ1(Y) ≥ … ≥ σmin{m,n}(Y) in decreasing order (see page 246 of [22]). Let r = rank(Y); then σi(Y) = 0 for i > r. The matrix Y has the Singular Value Decomposition (SVD) Y = UΣVᵀ, where U and V are orthogonal matrices and Σ is an m×n diagonal matrix with diagonal entries σ1(Y), …, σmin{m,n}(Y). We introduce the Schatten-p norm (0 < p < ∞) of Y, defined as ‖Y‖_Sp = (Σ_i σi(Y)^p)^{1/p}. Special cases of the Schatten-p norm include the nuclear norm (p = 1) and the Frobenius norm (p = 2). Next we analyze the relationship between group-based sparse coding and the Schatten-p norm minimization problem, which improves Theorem 2 in [23]; our approach is based on the "symmetry" technique (similar to [17], where it is used for another purpose), which is essentially different from [23].

Theorem 1. The group-based sparse coding problem (1) is equivalent to a Schatten-p norm minimization problem.

Eqs (12), (13) and (14) imply that any operation designated for the sparse coefficient vectors α_i can be conveniently implemented on the singular values of X (differing only by a constant scalar). The Schatten-p norm (0 < p ≤ 1) has been widely used in place of the nuclear norm to better approximate the rank function, and the Schatten-p norm optimization problem (14) has been studied extensively in the literature [24, 25]. Note that the main difference between group sparse coding and the Schatten-p norm minimization problem is that group sparse coding involves a dictionary learning operator while the Schatten-p norm minimization problem does not.
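The definition can be checked numerically: the Schatten-p norm is the ℓp norm of the singular values, and p = 2 and p = 1 recover the Frobenius and nuclear norms (a minimal Python sketch; `schatten_p_norm` is our own illustrative helper):

```python
import numpy as np

def schatten_p_norm(Y, p):
    """Schatten-p norm of Y: the l_p norm of its singular values."""
    s = np.linalg.svd(Y, compute_uv=False)   # singular values, decreasing order
    return np.sum(s ** p) ** (1.0 / p)

Y = np.array([[3.0, 0.0],
              [4.0, 5.0]])
# p = 2 recovers the Frobenius norm, p = 1 the nuclear norm.
assert np.isclose(schatten_p_norm(Y, 2.0), np.linalg.norm(Y, 'fro'))
assert np.isclose(schatten_p_norm(Y, 1.0), np.linalg.norm(Y, 'nuc'))
```

For 0 < p < 1 the same formula gives the quasi-norm used throughout the paper to approximate the rank function more tightly than the nuclear norm.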

2.1.2 Computation of proximal mapping using fixed point iterative method

Now let us recall the definition of the proximal mapping. Definition 2: the proximal mapping of a function ϕ at a point y with parameter λ > 0 is defined as prox_λϕ(y) = argmin_x { (1/2)‖x − y‖² + λϕ(x) }. And we have the following celebrated theorem. Theorem 3 [Theorem 1 of [26]]: if the matrix Y has the Singular Value Decomposition (SVD) Y = UΣVᵀ, where U and V are orthogonal and Σ is an m × n diagonal matrix with diagonal entries σ1(Y), …, σmin{m,n}(Y), then the matrix proximal mapping in Eq (2) is obtained by applying the scalar proximal mapping in (2) to each singular value. To make our proposed approach to Eq (4) transparent, we recall two important concepts in convex optimization. Definition 4 (see Chapter 2, p. 82 of [27]): let two spaces be paired by a bilinear functional (inner product) 〈·,·〉 and let f be any extended real-valued function. Then the function f* defined by f*(x*) = sup_x {〈x, x*〉 − f(x)} is called the Fenchel conjugate of f (with respect to the given pairing). Note that f* is always a closed convex function, regardless of the structure of f. Definition 5: given a proper convex function f, the subdifferential is the (generally multivalued) mapping ∂f(x) = {x* : f(y) ≥ f(x) + 〈x*, y − x〉 for all y}. The elements x* ∈ ∂f(x) are called subgradients of f at x. The same definition works for nonconvex f (however, subgradients need not exist). A point x is a minimizer of a function f (not necessarily convex) if and only if f is subdifferentiable at x and 0 ∈ ∂f(x). Lemma 6 establishes that the associated conjugate is closed and convex, but the proximal mapping has no closed-form solution for general p. If p = 1, it is well known that the function ϕ(w) = |w| is not differentiable but still convex, and can be described by a subgradient (see Section 2.3 of [28]) as ∂ϕ(w) = sign(w); from Lemma 6 the corresponding proximal mapping then follows in closed form. Furthermore, we can obtain the following theorem, which improves Theorem 1 in [29] and Theorem 1 in [19] using fixed-point iteration (see Chapter 1 of [30] for details). Definition 7: given a function g : [a, b] → [a, b], find ξ ∈ [a, b] such that ξ = g(ξ). If such ξ exists, it is called a fixed point of g, and it can be computed by the iteration ξ^(n) = g(ξ^(n−1)), n ≥ 1.
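For p = 1 the scalar proximal mapping has the familiar closed form implied by the subgradient ∂ϕ(w) = sign(w), namely soft thresholding (a minimal sketch; `soft_threshold` is our own helper name):

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal mapping of lam*|x|, i.e. argmin_x 0.5*(x - y)**2 + lam*|x|."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Values within lam of zero are set exactly to zero; the rest shrink toward zero.
out = soft_threshold(np.array([2.0, -0.3, 1.0]), 0.5)   # values 1.5, 0.0, 0.5
```

This is exactly the p = 1 case of the family of proximal mappings treated below; for p < 1 no such closed form exists, which motivates the fixed-point construction.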
And g is said to be a contraction on [a, b] if there exists a constant L with 0 < L < 1 such that |g(x) − g(y)| ≤ L|x − y| for any x, y ∈ [a, b]. Theorem 8 characterizes the scalar proximal mapping for 0 < p < 1: any nonzero minimizer x of (1/2)(x − y)² + λ|x|^p must satisfy the stationarity condition x = y − λp·sign(x)|x|^(p−1), i.e., x is a fixed point of g(x) = y − λp·sign(x)|x|^(p−1), which can be computed by the fixed-point iteration above.
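One concrete fixed-point scheme of this kind can be sketched as follows. The sketch uses the widely published generalized soft-thresholding construction for the scalar ℓp proximal mapping (the threshold formula below is taken from that construction); the exact iteration of Theorem 8 may differ in detail, and `prox_lp` is our own helper name:

```python
import numpy as np

def prox_lp(y, lam, p, iters=50):
    """Proximal mapping of lam*|x|**p for 0 < p <= 1 via fixed-point iteration.

    Below a data-dependent threshold the minimizer is 0; above it, the
    nonzero stationary point satisfies x = |y| - lam*p*x**(p-1), which is
    solved by iterating that fixed-point equation from x0 = |y|.
    """
    # Threshold from the generalized soft-thresholding construction.
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(y) <= tau:
        return 0.0
    x = abs(y)                       # initial guess
    for _ in range(iters):
        x = abs(y) - lam * p * x ** (p - 1.0)
    return np.sign(y) * x

# For p = 1 this reduces to ordinary soft thresholding: prox(2.0) = 2.0 - 0.5.
print(prox_lp(2.0, lam=0.5, p=1.0))   # 1.5
```

For p < 1 the iteration map g(x) = |y| − λp·x^(p−1) is a contraction near the solution whenever λp(1 − p)·x^(p−2) < 1, which holds for sufficiently large |y|; below the threshold the minimizer is exactly zero, which is the thresholding behavior exploited for denoising.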

2.2 Spatially Adaptive Fixed Point Iteration (SAFPI) denoising algorithm

In [20, 21], the authors proposed a group sparse representation framework and a Schatten-p norm minimization framework for image denoising. In Theorem 1, we have shown that these two approaches are equivalent. Combining Theorem 3 and Theorem 8, we obtain a fixed-point iteration solution of Eq (14) in Theorem 1, which is more rigorous than [20, 21]. After grouping a set of similar patches, the denoising problem becomes the recovery of x from y. As shown in Theorem 1, the Schatten-p norm minimization problem (14) converts the denoising problem into recovering the low-rank matrix X from the non-low-rank matrix Y, thereby filtering out the noise in the structure set; the second identity in Eq (14) can be solved using Theorem 3 and Theorem 8. Wavelet-based image denoising assumes that the wavelet coefficients obey a Laplace distribution, and a thresholding method is used to filter the noise. The prior distribution of the singular values of a block matrix can likewise be approximated by a Laplace distribution. The parameter λ for each group, which balances the fidelity term and the regularization term, should be adaptively determined for better denoising performance. Using the spatially adaptive Laplacian shrinkage rule of [17, 31], the threshold parameter is set adaptively, with σ_i denoting the locally estimated standard deviation at position i. The second identity in Eq (14) then reduces, via the SVD Y = UΣVᵀ, to applying the scalar proximal mapping of Theorem 3 and Theorem 8 to the singular values. Recently developed iterative regularization techniques in [17] offer an alternative approach toward spatial adaptation. The basic idea of iterative regularization is to add the filtered noise back to the denoised image, i.e., y^(k+1) = x̂^(k) + δ(y − x̂^(k)), where k denotes the iteration number and δ is a relaxation parameter. We can then execute the above denoising procedure for several iterations to obtain better results.
In the (k + 1)-th iteration, the iterative regularization strategy in [17] is used to update the estimate of the noise variance: the standard deviation of the noise in the (k + 1)-th iteration is adjusted with a scaling factor γ controlling the re-estimation of the noise variance, and the local variance at the i-th position is estimated from χ_i, the i-th singular value of the image y. The higher the structural similarity of the blocks in a structure group, the more correlated the column vectors of the block matrix, which means it has a low-rank property corresponding to the noise-free matrix. The information is mainly concentrated in the largest singular values. During the proximal operation, selecting an appropriate threshold parameter for the larger singular values brings the processed singular values closer to the noise-free ones, which preserves the useful information in the image while filtering out the noise. Therefore, choosing blocks with more similar structure helps to improve the denoising effect. There are many commonly used similarity measures, such as Euclidean distance, cosine angle, and correlation coefficient. Traditional block similarity measures have shortcomings. Euclidean distance simply accumulates the differences between the pixel gray values of the blocks as a measure of similarity. Although simple and easy to implement, it treats the blocks as collections of isolated pixels and neglects the statistical dependence between local pixels, which makes the similarity measure inaccurate: image blocks do not live in a Euclidean space, there is a very strong correlation between the pixels in a block, and this local pixel correlation carries important structural information.
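The iterative regularization and noise re-estimation steps can be sketched as follows (a minimal sketch; the helper names are ours, and the re-estimation rule uses the commonly cited form from the iterative regularization literature, which may differ in detail from the paper's exact formula):

```python
import numpy as np

def iterative_regularization(y, x_hat, delta):
    """Add the filtered noise back: y_next = x_hat + delta * (y - x_hat)."""
    return x_hat + delta * (y - x_hat)

def reestimate_sigma(y, y_k, sigma_n, gamma):
    """Re-estimate the residual noise level after feeding noise back.

    sigma_n is the input noise standard deviation and gamma the scaling
    factor controlling the re-estimation of the noise variance.
    """
    return gamma * np.sqrt(np.abs(sigma_n ** 2 - np.mean((y - y_k) ** 2)))
```

With delta = 1 the regularization step returns the original noisy image, and when no noise has been removed yet the re-estimate is simply gamma * sigma_n; as the iterations remove noise, the residual estimate shrinks accordingly.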
In order to address this problem, the Structural SIMilarity (SSIM) index [32, 33] is often used to evaluate image quality. SSIM is defined as SSIM(X, X̂) = (2μ_X μ_X̂ + c1)(2σ_XX̂ + c2) / ((μ_X² + μ_X̂² + c1)(σ_X² + σ_X̂² + c2)), where μ_X, μ_X̂, σ_X² and σ_X̂² denote the means and variances of X and X̂, respectively, σ_XX̂ is their covariance, and c1 and c2 are two variables that stabilize the division when the denominator is weak. A detailed step-by-step description of the Spatially Adaptive Fixed Point Iteration (SAFPI) denoising algorithm is given by Algorithm 1.

Algorithm 1: Image Denoising via SAFPI
Require: noisy image y, noise level σ.
Initialization: set the initial estimate to the noisy image.
Iterate on i = 1, 2, …, iter:
1. Iterative regularization: add the filtered noise back to obtain y^(i) and compute its variance;
2. Block grouping: divide y^(i) into blocks and use SSIM to classify blocks with structural similarity into a structural group Y;
3. Noise variance update: re-estimate σ from y^(i);
4. SVD: for each noisy data matrix Y, compute (U, Σ, V) = SVD(Y), where Σ = diag{ε1, …, εmin{m,n}};
5. Threshold update: compute λ from the re-estimated variance;
6. Proximal operator: update the singular values ε by the fixed-point iteration using the λ and ε computed in steps 5 and 4;
7. Image update: obtain an improved denoised image by weighted averaging of all denoised patches.
Output: the denoised image x̂.
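The SSIM formula above can be sketched as a single-window computation (the standard implementation averages a local, Gaussian-windowed index map over the image; this global version is a simplification, and `ssim_global` is our own helper name):

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM between two images x and y."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2   # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 32))
assert abs(ssim_global(img, img) - 1.0) < 1e-12   # identical images score 1
```

Unlike a plain Euclidean distance between blocks, this index compares luminance, contrast and structure jointly, which is why it is used in step 2 of Algorithm 1 to decide which blocks belong to the same structural group.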

3 Results and discussion

In recent years, many denoising algorithms have been developed, and adaptive noise removal algorithms [34-36] are an active trend in signal and image denoising. To demonstrate the effectiveness of the proposed denoising algorithm, in this section we compare its denoising performance with recently proposed state-of-the-art methods: BM3D [37], WNNM [38], WSNM [21], Expected Patch Log-likelihood (EPLL) [39], Spatially Adaptive Iterative Singular-value Thresholding (SAIST) [17], Patch-Based Near-Optimal image denoising (PBNO) [40], Global Image Denoising (GID) [41], an iterative denoising system based on Wiener filtering (WIENER) [34], and the Linear Complex Diffusion Process (LCDP) [35]. We used well-known images that are common in the literature, such as in [17, 21, 38, 42], added noise to them, and tested the proposed denoising algorithm with different powers p under different noise levels. The experimental images are shown in Fig 1.
Fig 1

The 14 test images for image denoising.

Several image quality indicators measure the success of denoising, such as kurtosis and the signal-to-noise ratio (SNR). Low kurtosis indicates superior performance; it is defined via cumulants as the ratio C4(·)/C2(·)² [43], where Ck(·) is the k-th cumulant function. In our work, we evaluated performance with three criteria: the Structural Similarity index (SSIM), kurtosis, and the Peak Signal-to-Noise Ratio (PSNR), defined as PSNR = 10 log10(M²/MSE), where M denotes the maximum intensity of the underlying image and MSE is the mean squared error between the denoised image and the noiseless image X. All experiments were carried out in Matlab (R2016a) on a PC with an Intel(R) Xeon(R) CPU E5-1630 V4 @ 3.7 GHz and 32 GB RAM.

3.1 Analysis of over-shrinkage problem and optimal power p

Firstly, we noticed that not all values of the power p work well in the proposed Spatially Adaptive Fixed Point Iteration (SAFPI) algorithm: an unsuitable value of p introduces an approximation deviation in the solved singular values and produces excessive shrinkage. As shown in Fig 2, we used SAFPI to compute a low-rank approximation of the red patch in Fig 2B at noise level σ = 50; the patch is randomly marked in "Monarch" (Fig 2A). Fig 2C shows the singular values of the denoised similar patches for different powers p. The ground-truth line is the connection line of the singular values of the similar blocks of the noiseless red patch in Fig 2A. The curve for the best value of p lies closest to the ground-truth line, while the curves for the other values of p exhibit a serious over-shrinkage problem. In this case, setting p = 1 as in WNNM leads to poor denoising results. The advantage of the SAFPI algorithm is thus to overcome the over-shrinkage problem, provided the optimal value of the power p can be found.
Fig 2

Illustration of the over-shrinkage problem with the value of power p.

(A) Original image. (B) Noisy image with σ = 50. (C) Singular values of the denoised similar patches for different powers p.

Secondly, in order to find the optimal values of p under different noise levels for the SAFPI algorithm, we randomly chose 10 test images from Fig 1 and set the power p from 0.05 to 1 with an interval of 0.05. The zero-mean additive white Gaussian noise levels were set to σ = {20, 30, 50, 60, 75, 100}, and the other parameters were the same as in WSNM [21]. The results are shown in Fig 3, where the horizontal coordinate denotes the value of p and the vertical coordinate the average PSNR under the given noise level; the red dots mark the optimal points for each noise level.
Fig 3

The influence of changing p upon denoised results under different noise levels.

(A) σ = 20 (B) σ = 30 (C) σ = 50 (D) σ = 60 (E) σ = 75 (F) σ = 100.

We can see that the best values of the power p are 1.0, 0.90, 0.85 and 0.6 for the low and medium noise levels 20, 30, 50 and 60, respectively. For the very high noise levels 75 and 100, the average PSNR values first decrease and then increase with p, and the best values of p are 0.95 and 0.9, respectively. To sum up, the optimal value of p is inversely proportional to the noise level except at high noise levels, where the best values of p are 0.95 and 0.9. We applied these best empirical values in the subsequent experiments.

3.2 Performance comparison with different methods

We set p = {1.0, 0.9, 0.85, 0.6, 0.95, 0.9} for σ = {20, 30, 50, 60, 75, 100} in our proposed SAFPI algorithm, and compared its performance against nine standard algorithms (BM3D, WNNM, WSNM, EPLL, SAIST, PBNO, GID, WIENER, LCDP) on 13 widely used images from Fig 1. The results (obtained thanks to the source codes provided by the authors) are reported in Tables 1, 2, 3, 4, 5 and 6. It can be seen from Table 7 that our algorithm always obtains the best average PSNR under every noise level. The proposed approach achieves a 0.3 dB to 0.51 dB average improvement over BM3D for noise levels between 20 and 100, and 0.02 dB, 0.06 dB and 0.14 dB average improvements over WSNM at noise levels 30, 50 and 100, respectively. Our average SSIM values are the best at noise levels 20, 30, 50 and 60. To sum up, our algorithm attains the best PSNR at every noise level and the best SSIM at low and medium noise levels, yielding better image denoising performance and higher robustness to noise strength than several existing denoising algorithms.
Table 1

Denoising results of different algorithms for given noise level σ = 20.

PSNR (σn = 20)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 33.16 | 33.05 | 32.75 | 32.61 | 31.74 | 33.08 | 33.13 | 33.09 | 30.59 | 31.39
Monarch | 31.18 | 30.35 | 29.55 | 30.48 | 29.65 | 30.76 | 31.24 | 31.16 | 27.12 | 29.28
Cameraman | 30.68 | 30.48 | 29.61 | 30.34 | 29.31 | 30.45 | 30.64 | 30.64 | 27.28 | 29.62
Couple | 30.78 | 30.76 | 30.22 | 30.54 | 29.28 | 30.66 | 30.77 | 30.81 | 27.79 | 29.34
Hill | 30.82 | 30.72 | 30.32 | 30.49 | 29.59 | 30.58 | 30.79 | 30.81 | 28.97 | 29.66
House | 34.07 | 33.77 | 33.58 | 32.98 | 32.81 | 33.75 | 34.2 | 33.98 | 29.72 | 32.15
Man | 30.72 | 30.59 | 30.15 | 30.63 | 29.59 | 30.54 | 30.73 | 30.72 | 28.97 | 29.66
Peppers | 31.56 | 31.29 | 30.55 | 31.17 | 30.17 | 31.32 | 31.53 | 31.62 | 28.68 | 30.18
Straw | 27.65 | 27.07 | 25.86 | 26.92 | 26.63 | 27.23 | 27.61 | 27.71 | 22.03 | 25.19
Barbara | 32.15 | 31.77 | 31.06 | 29.76 | 30.21 | 32.1 | 32.12 | 32.16 | 25.65 | 28.83
Boat | 30.95 | 30.88 | 30.39 | 30.66 | 29.53 | 30.84 | 30.94 | 30.95 | 28.5 | 29.63
Jetplane | 32.97 | 32.53 | 32.06 | 32.41 | 31.48 | 32.39 | 32.96 | 32.99 | 30.5 | 31.39
W.bridge | 27.67 | 27.27 | 26.7 | 27.49 | 26.49 | 27.31 | 27.66 | 27.64 | 25.55 | 26.71
Average | 31.1046 | 30.8100 | 30.2153 | 30.4985 | 29.7292 | 30.8469 | 31.1015 | 31.0985 | 27.7960 | 29.4640

SSIM (σn = 20)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 0.8792 | 0.8772 | 0.8728 | 0.8691 | 0.852 | 0.879 | 0.879 | 0.8773 | 0.8195 | 0.8773
Monarch | 0.9261 | 0.9179 | 0.9114 | 0.9166 | 0.8984 | 0.9243 | 0.9272 | 0.9265 | 0.8585 | 0.8855
Cameraman | 0.8775 | 0.8755 | 0.8628 | 0.8817 | 0.8593 | 0.8775 | 0.8746 | 0.875 | 0.7852 | 0.8469
Couple | 0.8418 | 0.8476 | 0.8353 | 0.8399 | 0.7948 | 0.8365 | 0.8416 | 0.8442 | 0.7489 | 0.791
Hill | 0.807 | 0.804 | 0.7907 | 0.7993 | 0.7632 | 0.7977 | 0.8039 | 0.8051 | 0.7456 | 0.7585
House | 0.8736 | 0.8726 | 0.8749 | 0.8609 | 0.8569 | 0.8689 | 0.872 | 0.8723 | 0.7965 | 0.8367
Man | 0.8357 | 0.8333 | 0.8229 | 0.8379 | 0.7984 | 0.8316 | 0.8355 | 0.836 | 0.7818 | 0.7948
Peppers | 0.8919 | 0.8868 | 0.8775 | 0.8847 | 0.8638 | 0.8888 | 0.8954 | 0.8908 | 0.841 | 0.8559
Straw | 0.909 | 0.8973 | 0.8577 | 0.8963 | 0.8806 | 0.9095 | 0.9094 | 0.9118 | 0.5843 | 0.8177
Barbara | 0.9101 | 0.9054 | 0.8957 | 0.8752 | 0.8704 | 0.9113 | 0.909 | 0.9104 | 0.7322 | 0.8321
Boat | 0.8256 | 0.8259 | 0.8164 | 0.8231 | 0.7834 | 0.8196 | 0.8249 | 0.8246 | 0.7575 | 0.7808
Jetplane | 0.9039 | 0.9006 | 0.8962 | 0.8997 | 0.8861 | 0.9024 | 0.9022 | 0.9035 | 0.8451 | 0.8678
W.bridge | 0.801 | 0.79 | 0.7619 | 0.8117 | 0.7461 | 0.7884 | 0.7998 | 0.8002 | 0.6884 | 0.7532
Average | 0.8679 | 0.8642 | 0.8520 | 0.8612 | 0.8349 | 0.8643 | 0.8673 | 0.8675 | 0.7680 | 0.8193
Table 2

Denoising results of different algorithms for given noise level σ = 30.

PSNR (σn = 30)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 31.49 | 31.26 | 31.16 | 30.78 | 29.83 | 30.77 | 31.38 | 31.45 | 28.43 | 29.41
Monarch | 29.09 | 28.36 | 27.85 | 28.35 | 27.6 | 28.03 | 28.94 | 29.02 | 25.78 | 27.01
Cameraman | 28.75 | 28.63 | 27.87 | 28.36 | 27.84 | 27.47 | 28.75 | 28.74 | 25.74 | 27.7
Couple | 28.92 | 28.86 | 28.58 | 28.61 | 27.15 | 28.58 | 28.98 | 28.92 | 27.2 | 26.49
Hill | 29.19 | 29.15 | 28.95 | 28.9 | 27.75 | 28.94 | 29.18 | 29.17 | 27.36 | 27.91
House | 32.61 | 32.08 | 31.92 | 31.22 | 30.35 | 31.39 | 32.46 | 32.75 | 27.94 | 29.95
Man | 28.92 | 28.86 | 28.65 | 28.82 | 27.82 | 28.68 | 28.92 | 28.92 | 27.39 | 27.79
Peppers | 29.55 | 29.28 | 28.81 | 29.16 | 28.16 | 28.33 | 29.65 | 29.51 | 27.06 | 27.83
Straw | 25.54 | 24.94 | 24.7 | 24.74 | 24.59 | 24.74 | 25.51 | 25.41 | 21.73 | 23.17
Barbara | 30.31 | 29.81 | 29.5 | 27.56 | 27.95 | 30.04 | 30.28 | 30.27 | 24.52 | 26.48
Boat | 29.21 | 29.11 | 28.81 | 28.89 | 27.66 | 28.83 | 29.17 | 29.09 | 27.01 | 27.76
Jetplane | 30.96 | 27.56 | 30.21 | 30.41 | 29.47 | 29.35 | 31.01 | 30.98 | 28.24 | 29.25
W.bridge | 25.79 | 25.46 | 25.22 | 25.68 | 24.78 | 25.43 | 25.78 | 25.76 | 24.72 | 25.03
Average | 29.2562 | 28.7200 | 28.6331 | 28.585754 | 27.7654 | 28.5052 | 29.2315 | 29.23 | 26.3392 | 27.4223

SSIM (σn = 30)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 0.852 | 0.8449 | 0.8436 | 0.8325 | 0.8068 | 0.8471 | 0.8491 | 0.8525 | 0.7351 | 0.7702
Monarch | 0.8996 | 0.8822 | 0.8746 | 0.8789 | 0.8577 | 0.8903 | 0.8949 | 0.8965 | 0.81 | 0.8306
Cameraman | 0.838 | 0.8373 | 0.8331 | 0.8316 | 0.823 | 0.8237 | 0.8369 | 0.8378 | 0.7123 | 0.7912
Couple | 0.7928 | 0.7947 | 0.7866 | 0.7831 | 0.7202 | 0.7792 | 0.7951 | 0.7916 | 0.6932 | 0.7115
Hill | 0.75 | 0.7504 | 0.7395 | 0.7418 | 0.6931 | 0.7373 | 0.7421 | 0.7465 | 0.6808 | 0.6875
House | 0.8525 | 0.848 | 0.8443 | 0.8338 | 0.8249 | 0.8513 | 0.8514 | 0.8543 | 0.7246 | 0.7835
Man | 0.7803 | 0.7802 | 0.7723 | 0.7797 | 0.7378 | 0.7708 | 0.7803 | 0.7793 | 0.7163 | 0.7207
Peppers | 0.8597 | 0.8505 | 0.8377 | 0.8467 | 0.8189 | 0.8538 | 0.8601 | 0.8584 | 0.7845 | 0.7912
Straw | 0.8526 | 0.829 | 0.8213 | 0.8227 | 0.8162 | 0.8483 | 0.8533 | 0.8503 | 0.5805 | 0.7162
Barbara | 0.8802 | 0.8687 | 0.8655 | 0.8141 | 0.8129 | 0.8764 | 0.8801 | 0.8792 | 0.6733 | 0.7501
Boat | 0.777 | 0.7795 | 0.77 | 0.7732 | 0.7286 | 0.7669 | 0.7767 | 0.7755 | 0.701 | 0.7158
Jetplane | 0.8756 | 0.8417 | 0.8361 | 0.8655 | 0.8531 | 0.8731 | 0.8763 | 0.8754 | 0.7677 | 0.8082
W.bridge | 0.7123 | 0.6986 | 0.6867 | 0.7229 | 0.6583 | 0.6888 | 0.7111 | 0.7065 | 0.6555 | 0.6662
Average | 0.8248 | 0.8158 | 0.8086 | 0.8097 | 0.7809 | 0.8159 | 0.8240 | 0.8235 | 0.7104 | 0.7495
Table 3

Denoising results of different algorithms for given noise level σ = 50.

PSNR (σn = 50)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 29.29 | 29.05 | 28.81 | 28.42 | 27.69 | 29.01 | 29.24 | 29.19 | 24.85 | 26.98
Monarch | 26.54 | 25.81 | 25.53 | 25.77 | 24.97 | 26.09 | 26.37 | 26.2 | 23.27 | 24.14
Cameraman | 26.63 | 26.13 | 25.71 | 26.02 | 25.48 | 25.94 | 26.45 | 26.44 | 23.14 | 24.87
Couple | 26.64 | 26.46 | 26.3 | 26.23 | 24.64 | 26.92 | 26.63 | 26.71 | 23.78 | 24.85
Hill | 27.26 | 27.19 | 27.02 | 26.95 | 25.93 | 27.04 | 27.22 | 27.22 | 24.35 | 25.78
House | 30.51 | 29.69 | 29.44 | 28.76 | 27.62 | 29.99 | 30.28 | 30.21 | 24.6 | 27.08
Man | 26.9 | 26.8 | 26.72 | 26.72 | 25.83 | 26.67 | 26.91 | 26.88 | 24.28 | 25.65
Peppers | 26.96 | 26.68 | 26.46 | 26.62 | 25.6 | 26.6 | 26.94 | 27.07 | 24.14 | 24.98
Straw | 22.99 | 22.4 | 22.81 | 22 | 21.98 | 22.65 | 22.91 | 22.99 | 20.69 | 20.98
Barbara | 27.94 | 27.22 | 26.95 | 24.82 | 25.17 | 27.49 | 27.83 | 27.81 | 22.45 | 23.86
Boat | 26.86 | 26.78 | 26.67 | 26.65 | 25.59 | 26.63 | 26.85 | 26.92 | 24.05 | 25.33
Jetplane | 28.57 | 25.1 | 27.77 | 27.88 | 26.91 | 28.25 | 28.56 | 28.58 | 24.78 | 26.57
W.bridge | 23.85 | 23.57 | 23.49 | 23.69 | 22.88 | 23.49 | 23.87 | 23.81 | 22.79 | 23.13
Average | 26.9954 | 26.3754 | 26.4369 | 26.1945 | 25.4069 | 26.6746 | 26.9277 | 26.9254 | 23.6285 | 24.9385

SSIM (σn = 50)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 0.8116 | 0.7994 | 0.7817 | 0.7718 | 0.7663 | 0.8041 | 0.8077 | 0.8087 | 0.5565 | 0.6706
Monarch | 0.8482 | 0.82 | 0.798 | 0.8124 | 0.7651 | 0.831 | 0.8378 | 0.8383 | 0.6865 | 0.7093
Cameraman | 0.7904 | 0.7828 | 0.7526 | 0.7617 | 0.7666 | 0.7766 | 0.7864 | 0.7903 | 0.5547 | 0.6699
Couple | 0.713 | 0.7068 | 0.6965 | 0.6901 | 0.6139 | 0.694 | 0.7118 | 0.7165 | 0.5531 | 0.5893
Hill | 0.6712 | 0.6747 | 0.6596 | 0.6624 | 0.6162 | 0.6616 | 0.6711 | 0.6684 | 0.5347 | 0.5819
House | 0.8284 | 0.8122 | 0.785 | 0.7845 | 0.7715 | 0.8236 | 0.8208 | 0.8273 | 0.5535 | 0.6846
Man | 0.7101 | 0.7056 | 0.6939 | 0.6976 | 0.6631 | 0.6977 | 0.7099 | 0.7081 | 0.5593 | 0.6135
Peppers | 0.8038 | 0.7936 | 0.7627 | 0.7832 | 0.7602 | 0.7999 | 0.7991 | 0.8088 | 0.6352 | 0.6717
Straw | 0.7389 | 0.6881 | 0.7328 | 0.6489 | 0.6768 | 0.7258 | 0.7349 | 0.7435 | 0.5433 | 0.5601
Barbara | 0.8229 | 0.7946 | 0.7849 | 0.7017 | 0.7013 | 0.8033 | 0.8212 | 0.8225 | 0.5315 | 0.6084
Boat | 0.7038 | 0.7053 | 0.6936 | 0.695 | 0.6538 | 0.6921 | 0.7066 | 0.7063 | 0.5504 | 0.6022
Jetplane | 0.8378 | 0.8269 | 0.7954 | 0.8059 | 0.8062 | 0.8314 | 0.8307 | 0.84 | 0.5858 | 0.6983
W.bridge | 0.5874 | 0.5715 | 0.5726 | 0.5901 | 0.5289 | 0.5556 | 0.5893 | 0.5863 | 0.5609 | 0.5455
Average | 0.7590 | 0.7447 | 0.7315 | 0.7235 | 0.6992 | 0.7459 | 0.7559 | 0.7588 | 0.5696 | 0.6312
Table 4

Denoising results of different algorithms for given noise level σ = 60.

PSNR (σn = 60)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 28.46 | 28.27 | 27.92 | 27.59 | 26.91 | 28 | 28.39 | 28.45 | 23.41 | 26.06
Monarch | 25.49 | 24.97 | 24.64 | 24.85 | 24.15 | 24.94 | 25.44 | 25.45 | 22.26 | 23.09
Cameraman | 25.78 | 25.31 | 24.98 | 25.2 | 24.5 | 25.15 | 25.63 | 25.63 | 22.03 | 23.82
Couple | 25.87 | 25.66 | 25.43 | 25.4 | 24.01 | 24.98 | 25.79 | 25.82 | 22.63 | 24.02
Hill | 26.54 | 26.54 | 26.27 | 26.27 | 25.32 | 26.39 | 26.52 | 26.54 | 23.05 | 25.08
House | 29.66 | 28.73 | 28.62 | 27.84 | 26.66 | 28.88 | 29.38 | 29.39 | 23.16 | 26
Man | 26.25 | 26.13 | 26 | 26 | 25.14 | 25.78 | 26.22 | 26.22 | 22.96 | 24.86
Peppers | 26.11 | 25.81 | 25.66 | 25.67 | 24.64 | 25.63 | 26.08 | 26.03 | 22.86 | 23.95
Straw | 22.12 | 21.63 | 22.01 | 21.06 | 20.93 | 22.13 | 21.99 | 22.05 | 20.08 | 20.37
Barbara | 26.95 | 26.28 | 26.08 | 23.87 | 24.19 | 26.4 | 26.88 | 26.96 | 21.53 | 23.07
Boat | 26.17 | 26.02 | 25.94 | 25.84 | 24.68 | 25.52 | 26.07 | 26.09 | 22.85 | 24.51
Jetplane | 27.71 | 27.32 | 26.98 | 26.97 | 25.82 | 26.64 | 27.7 | 27.74 | 23.39 | 25.67
W.bridge | 23.2 | 23.02 | 22.9 | 23.08 | 22.19 | 22.85 | 23.23 | 23.21 | 21.81 | 22.49
Average | 26.1777 | 25.8223 | 25.6485 | 25.3569 | 24.5492 | 25.6377 | 26.1015 | 26.1215 | 22.4630 | 24.0762

SSIM (σn = 60)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 0.7937 | 0.7795 | 0.7523 | 0.746 | 0.7446 | 0.7895 | 0.7865 | 0.793 | 0.4828 | 0.6176
Monarch | 0.819 | 0.7926 | 0.7571 | 0.782 | 0.766 | 0.7966 | 0.8105 | 0.8154 | 0.6256 | 0.6621
Cameraman | 0.7741 | 0.7625 | 0.7237 | 0.7342 | 0.7348 | 0.7527 | 0.7637 | 0.7714 | 0.4868 | 0.6184
Couple | 0.6854 | 0.6715 | 0.6537 | 0.651 | 0.5824 | 0.6679 | 0.6754 | 0.6798 | 0.4896 | 0.5417
Hill | 0.646 | 0.647 | 0.6255 | 0.632 | 0.5892 | 0.6429 | 0.6418 | 0.644 | 0.4705 | 0.5423
House | 0.8172 | 0.7941 | 0.7649 | 0.7604 | 0.7484 | 0.8049 | 0.8044 | 0.8101 | 0.4842 | 0.6262
Man | 0.6829 | 0.6786 | 0.662 | 0.6667 | 0.6391 | 0.677 | 0.683 | 0.6854 | 0.4894 | 0.5672
Peppers | 0.7835 | 0.7698 | 0.7371 | 0.7557 | 0.728 | 0.765 | 0.7744 | 0.7785 | 0.5724 | 0.6232
Straw | 0.6858 | 0.6285 | 0.6815 | 0.5614 | 0.5904 | 0.6905 | 0.6681 | 0.6834 | 0.5234 | 0.5118
Barbara | 0.7922 | 0.7589 | 0.7513 | 0.6538 | 0.6529 | 0.7833 | 0.7898 | 0.7961 | 0.4658 | 0.554
Boat | 0.6797 | 0.6767 | 0.6643 | 0.6625 | 0.6239 | 0.6715 | 0.6777 | 0.6816 | 0.4904 | 0.5595
Jetplane | 0.822 | 0.8075 | 0.772 | 0.779 | 0.7745 | 0.8153 | 0.8105 | 0.819 | 0.516 | 0.6502
W.bridge | 0.5464 | 0.5339 | 0.5311 | 0.5443 | 0.4736 | 0.5328 | 0.5426 | 0.5412 | 0.513 | 0.5025
Average | 0.7329 | 0.7155 | 0.6982 | 0.6869 | 0.6652 | 0.7223 | 0.7253 | 0.7307 | 0.5085 | 0.5828
Table 5

Denoising results of different algorithms for given noise level σ = 75.

PSNR (σn = 75)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 27.55 | 27.25 | 27 | 26.57 | 25.96 | 26.97 | 27.53 | 27.34 | 21.61 | 24.96
Monarch | 24.14 | 23.9 | 23.62 | 23.71 | 22.77 | 23.95 | 24.28 | 23.95 | 20.72 | 22.08
Cameraman | 24.69 | 24.32 | 24.01 | 24.19 | 23.26 | 24.27 | 24.56 | 24.5 | 20.57 | 22.76
Couple | 25.01 | 24.7 | 24.51 | 24.44 | 23.27 | 24.17 | 24.87 | 24.89 | 21.03 | 23.16
Hill | 25.77 | 25.67 | 25.45 | 25.45 | 24.62 | 25.5 | 25.77 | 25.71 | 21.34 | 24.08
House | 28.39 | 27.5 | 27.15 | 26.68 | 25.16 | 27.9 | 28.18 | 28.01 | 21.36 | 24.85
Man | 25.4 | 25.31 | 25.11 | 25.14 | 24.38 | 25.06 | 25.4 | 25.35 | 21.3 | 23.83
Peppers | 24.93 | 24.73 | 24.55 | 24.56 | 23.34 | 24.68 | 24.98 | 25 | 21.11 | 22.68
Straw | 21.11 | 20.72 | 21.04 | 20.07 | 19.55 | 21.08 | 21.13 | 21.15 | 19.15 | 19.65
Barbara | 25.85 | 25.12 | 24.94 | 22.94 | 23.06 | 25.35 | 25.9 | 25.83 | 20.28 | 22.21
Boat | 25.13 | 25.14 | 24.85 | 24.88 | 23.81 | 24.8 | 25.19 | 25.16 | 21.22 | 23.47
Jetplane | 26.7 | 26.31 | 25.83 | 25.83 | 24.69 | 25.82 | 26.72 | 26.59 | 21.58 | 24.56
W.bridge | 22.55 | 22.4 | 22.26 | 22.39 | 21.52 | 22.07 | 22.57 | 22.44 | 20.53 | 21.69
Average | 25.1708 | 24.8515 | 24.6400 | 24.3731 | 23.4915 | 24.74 | 25.16 | 25.0708 | 20.9077 | 23.0754

SSIM (σn = 75)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 0.7662 | 0.7516 | 0.7238 | 0.7101 | 0.7126 | 0.7642 | 0.767 | 0.7707 | 0.3982 | 0.5617
Monarch | 0.7753 | 0.7557 | 0.7153 | 0.7395 | 0.6956 | 0.7639 | 0.7788 | 0.7761 | 0.5437 | 0.6114
Cameraman | 0.7459 | 0.734 | 0.6766 | 0.6955 | 0.677 | 0.7296 | 0.734 | 0.7486 | 0.3889 | 0.539
Couple | 0.6385 | 0.626 | 0.6086 | 0.6017 | 0.5454 | 0.6182 | 0.637 | 0.6393 | 0.4116 | 0.4882
Hill | 0.6185 | 0.6118 | 0.5901 | 0.5936 | 0.5599 | 0.606 | 0.6119 | 0.6083 | 0.391 | 0.4814
House | 0.7957 | 0.7645 | 0.7094 | 0.7251 | 0.6951 | 0.7872 | 0.7842 | 0.7987 | 0.3905 | 0.563
Man | 0.6494 | 0.6445 | 0.6195 | 0.6274 | 0.604 | 0.6424 | 0.6538 | 0.6518 | 0.4085 | 0.5048
Peppers | 0.7524 | 0.7368 | 0.69 | 0.7198 | 0.6934 | 0.7395 | 0.7426 | 0.7582 | 0.4735 | 0.5581
Straw | 0.6282 | 0.5462 | 0.6038 | 0.4567 | 0.4458 | 0.5959 | 0.6077 | 0.6289 | 0.4778 | 0.4401
Barbara | 0.7489 | 0.7112 | 0.7006 | 0.6002 | 0.5833 | 0.7369 | 0.7576 | 0.7574 | 0.3942 | 0.491
Boat | 0.642 | 0.641 | 0.6143 | 0.6212 | 0.5877 | 0.6369 | 0.6461 | 0.6451 | 0.4092 | 0.499
Jetplane | 0.7924 | 0.7812 | 0.729 | 0.7414 | 0.7582 | 0.792 | 0.7895 | 0.8055 | 0.423 | 0.588
W.bridge | 0.5017 | 0.4905 | 0.485 | 0.4939 | 0.4329 | 0.4725 | 0.5007 | 0.4889 | 0.4461 | 0.448
Average | 0.6965 | 0.6765 | 0.6512 | 0.6405 | 0.6147 | 0.6835 | 0.6931 | 0.6983 | 0.4274 | 0.5211
Table 6

Average denoising result of different algorithms for given noise level σ = 100.

PSNR (σn = 100)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 26.33 | 25.95 | 25.6 | 25.3 | 24.64 | 25.81 | 26.25 | 26.22 | 19.23 | 23.39
Monarch | 22.84 | 22.51 | 22.19 | 22.23 | 20.83 | 22.63 | 22.94 | 22.71 | 18.82 | 20.61
Cameraman | 23.42 | 23.08 | 22.65 | 22.85 | 21.72 | 23.08 | 23.35 | 23.05 | 18.53 | 21.43
Couple | 23.59 | 23.51 | 23.28 | 23.32 | 22.38 | 23.01 | 23.63 | 23.6 | 18.94 | 22.06
Hill | 24.61 | 24.58 | 24.33 | 24.42 | 23.79 | 24.29 | 24.7 | 24.74 | 19.12 | 22.84
House | 27.06 | 25.87 | 25.42 | 25.19 | 23.59 | 26.45 | 26.65 | 26.4 | 19.07 | 23.31
Man | 24.39 | 24.22 | 23.98 | 24.07 | 23.33 | 23.98 | 24.28 | 24.2 | 19.02 | 22.57
Peppers | 23.56 | 23.39 | 23.03 | 23.08 | 21.61 | 23.35 | 23.67 | 23.32 | 19.03 | 21.13
Straw | 19.84 | 19.58 | 19.86 | 19.01 | 18.41 | 19.54 | 19.77 | 19.86 | 17.71 | 18.81
Barbara | 24.5 | 23.62 | 23.42 | 22.14 | 21.76 | 23.98 | 24.39 | 24.43 | 18.38 | 21.09
Boat | 24 | 23.97 | 23.62 | 23.71 | 22.74 | 23.67 | 24.09 | 24.07 | 19.03 | 22.15
Jetplane | 25.39 | 22.11 | 24.31 | 24.35 | 23.28 | 24.55 | 25.46 | 25.3 | 19.17 | 23.01
W.bridge | 21.76 | 21.6 | 21.42 | 21.58 | 20.74 | 21.21 | 21.69 | 21.6 | 18.57 | 20.77
Average | 23.9454 | 23.3838 | 23.3162 | 23.1731 | 22.2169 | 23.5038 | 23.9131 | 23.8077 | 18.8169 | 21.7823

SSIM (σn = 100)
Image | SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
Lena | 0.7317 | 0.709 | 0.6672 | 0.6577 | 0.6604 | 0.7275 | 0.7294 | 0.7445 | 0.299 | 0.472
Monarch | 0.724 | 0.7021 | 0.6415 | 0.6771 | 0.6344 | 0.7156 | 0.7337 | 0.7337 | 0.4474 | 0.5223
Cameraman | 0.7024 | 0.6928 | 0.5868 | 0.6351 | 0.6471 | 0.6963 | 0.691 | 0.7072 | 0.2938 | 0.4487
Couple | 0.5653 | 0.5665 | 0.5395 | 0.5383 | 0.497 | 0.5539 | 0.5717 | 0.5832 | 0.3185 | 0.4164
Hill | 0.564 | 0.565 | 0.5355 | 0.5421 | 0.5196 | 0.5566 | 0.566 | 0.5726 | 0.2957 | 0.4088
House | 0.761 | 0.7203 | 0.6408 | 0.6695 | 0.634 | 0.7566 | 0.7499 | 0.764 | 0.3013 | 0.491
Man | 0.6076 | 0.5978 | 0.5626 | 0.5729 | 0.5504 | 0.5978 | 0.6084 | 0.6144 | 0.3078 | 0.4316
Peppers | 0.7047 | 0.6881 | 0.6164 | 0.6653 | 0.6254 | 0.7037 | 0.7002 | 0.7014 | 0.3719 | 0.4729
Straw | 0.4642 | 0.4224 | 0.4853 | 0.3295 | 0.3121 | 0.4106 | 0.454 | 0.5093 | 0.4206 | 0.3701
Barbara | 0.6974 | 0.643 | 0.6199 | 0.5463 | 0.5368 | 0.6806 | 0.6925 | 0.7069 | 0.2998 | 0.4077
Boat | 0.5989 | 0.5936 | 0.5557 | 0.5653 | 0.5352 | 0.5911 | 0.6002 | 0.6109 | 0.3166 | 0.4209
Jetplane | 0.7542 | 0.7442 | 0.6588 | 0.6836 | 0.6927 | 0.7562 | 0.7521 | 0.7724 | 0.3167 | 0.4957
W.bridge | 0.4503 | 0.4398 | 0.4287 | 0.4364 | 0.3866 | 0.4117 | 0.441 | 0.4381 | 0.3586 | 0.3902
Average | 0.6404 | 0.6219 | 0.5799 | 0.5784 | 0.5563 | 0.6276 | 0.6377 | 0.6507 | 0.3344 | 0.4422
Table 7

Comparison of average PSNR with different methods.

σn | Average PSNR: SAFPI | BM3D | PBNO | EPLL | GID | SAIST | WNNM | WSNM | WIENER | LCDP
20 | 31.1046 | 30.8100 | 30.2153 | 30.4985 | 29.7292 | 30.8469 | 31.1015 | 31.0985 | 27.7960 | 29.4640
30 | 29.2562 | 28.7200 | 28.6331 | 28.5857 | 27.7654 | 28.5052 | 29.2315 | 29.2300 | 26.3392 | 27.4223
50 | 26.9954 | 26.3754 | 26.4369 | 26.1945 | 25.4069 | 26.6746 | 26.9277 | 26.9254 | 23.6285 | 24.9385
60 | 26.1777 | 25.8223 | 25.6485 | 25.3569 | 24.5492 | 25.6377 | 26.1015 | 26.1215 | 22.4630 | 24.0762
75 | 25.1708 | 24.8515 | 24.6400 | 24.3731 | 23.4915 | 24.7400 | 25.1600 | 25.0708 | 20.9077 | 23.0754
100 | 23.9454 | 23.3838 | 23.3162 | 23.1731 | 22.2169 | 23.5038 | 23.9131 | 23.8077 | 18.8169 | 21.7823
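For reference, the PSNR values reported throughout follow the standard definition for 8-bit images. A minimal NumPy sketch is given below; the function name is ours, not taken from the paper's code.

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: additive Gaussian noise with sigma = 20 on a constant image.
rng = np.random.default_rng(0)
clean = np.full((256, 256), 128.0)
noisy = clean + rng.normal(0.0, 20.0, clean.shape)
print(round(psnr(clean, noisy), 1))  # close to 20*log10(255/20) ≈ 22.1 dB
```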
For visual quality, comparative images are shown in Figs 4, 5, 6 and 7. As shown in Fig 4, our algorithm recovers the structure of the ear (magnified in the highlighted red window) better than the other algorithms. When the noise level is very high, as shown in the zoom-in window of Fig 7, our algorithm reconstructs clear texture structures, while the competing methods produce blurrier textures. Further visual improvements can be seen in Figs 5 and 6. The effective noise can also vary considerably between different parts of the same image. To demonstrate our method under such conditions, we randomly selected two small patches from the given image; although their local noise levels differ, our algorithm consistently yields the best visual texture. We therefore conclude that the proposed SAFPI algorithm delivers excellent denoising performance, producing good visual quality and reconstructing textures faithfully.
Fig 4

Denoising results on image Cameraman by different methods (noise level σ = 20).

(A) Ground Truth (B) Noisy Image (C) BM3D, PSNR = 30.48 (D) EPLL, PSNR = 30.34 (E) SAIST, PSNR = 30.45 (F) WNNM, PSNR = 30.64 (G) WSNM, PSNR = 30.64 (H) SAFPI, PSNR = 30.68.

Fig 5

Denoising results on image Monarch by different methods (noise level σ = 30).

(A) Ground Truth (B) Noisy Image (C) BM3D, PSNR = 28.36 (D) EPLL, PSNR = 28.35 (E) SAIST, PSNR = 28.03 (F) WNNM, PSNR = 28.94 (G) WSNM, PSNR = 29.02 (H) SAFPI, PSNR = 29.09.

Fig 6

Denoising results on image House by different methods (noise level σ = 50).

(A) Ground Truth (B) Noisy Image (C) BM3D, PSNR = 29.69 (D) EPLL, PSNR = 28.76 (E) SAIST, PSNR = 29.99 (F) WNNM, PSNR = 30.28 (G) WSNM, PSNR = 30.21 (H) SAFPI, PSNR = 30.51.

Fig 7

Denoising results on image Barbara by different methods (noise level σ = 100).

(A) Ground Truth (B) Noisy Image (C) BM3D, PSNR = 23.62 (D) EPLL, PSNR = 22.14 (E) SAIST, PSNR = 23.98 (F) WNNM, PSNR = 24.39 (G) WSNM, PSNR = 24.43 (H) SAFPI, PSNR = 24.5.

If the noise is non-Gaussian, one popular approach is to transform it into a more tractable Gaussian model, for example via the generalized Anscombe transformation (GAT) [43, 44]. In this paper, we handle non-Gaussian noise with the proposed algorithms. In the first experiment, we assumed the noise to be a mixture of Gaussian noise (σ = 20, 50, 100) and speckle noise (density d = 1 × 10−3), and applied the SAFPI algorithm directly to remove it. We randomly selected six images (Lena, Monarch, Barbara, Cameraman, House, Peppers) from Fig 1 for experimental verification and compared against the denoising algorithms used in the previous experiments; the results are shown in Table 8. In the second experiment, we assumed mixed Poisson-Gaussian noise. We transformed the Poisson-Gaussian noise into approximately Gaussian noise using the GAT [43], denoised with the proposed algorithm, and recovered the restored images via the exact unbiased inverse GAT [44]. We used Lena and C.man as test images with eight peak values (1, 2, 5, 10, 20, 30, 60, 120); the Gaussian component of the Poisson-Gaussian noise was set to σ = peak/10. We compared against BM3D, SAIST and WNNM, recording the average PSNR and kurtosis over the two images; the results are shown in Table 9. From Table 8, we see that when the standard deviation is moderate (σ = 20, 50), our algorithm achieves nearly the best values on all three quality indicators, and it obtains the best PSNR on all hybrid-noise experiments. From Table 9, the SAFPI algorithm attains the highest average PSNR under almost all peak values.
In most cases, our algorithm obtains relatively good kurtosis values together with the best PSNR, which is 0.2 to 0.3 dB higher than BM3D and about 0.1 dB to 1.11 dB higher than WNNM.
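The variance-stabilization step of the second experiment can be sketched as follows. This is an illustration, not the paper's implementation: the forward transform is the standard GAT, but the inverse shown is the simple algebraic one, whereas the paper uses the exact unbiased inverse of [44]; function names are ours.

```python
import numpy as np

def gat_forward(z, sigma, alpha=1.0):
    """Generalized Anscombe transform: approximately Gaussianizes
    Poisson-Gaussian data (gain alpha, Gaussian std sigma)."""
    arg = alpha * z + 0.375 * alpha ** 2 + sigma ** 2
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))

def gat_inverse_algebraic(y, sigma, alpha=1.0):
    """Simple algebraic inverse; only asymptotically unbiased (the paper
    uses the exact unbiased inverse of [44] instead)."""
    return (alpha / 4.0) * y ** 2 - 0.375 * alpha - sigma ** 2 / alpha

# After gat_forward, the data have approximately unit variance, so a
# Gaussian denoiser tuned for sigma = 1 can be applied, then inverted.
rng = np.random.default_rng(1)
peak = 10.0
clean = np.full(100_000, peak / 2)
noisy = rng.poisson(clean) + rng.normal(0.0, peak / 10, clean.shape)
t = gat_forward(noisy, sigma=peak / 10)
print(round(float(np.std(t)), 2))  # close to 1.0 (variance stabilized)
```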
Table 8

Average denoising results of different algorithms for Speckle-Gaussian noise.

Method | Speckle-Gaussian (σn = 20): Kurtosis / PSNR / SSIM | (σn = 50): Kurtosis / PSNR / SSIM | (σn = 100): Kurtosis / PSNR / SSIM
Noisy | 2.3067 / 21.9183 / 0.4126 | 2.7447 / 14.1117 / 0.1950 | 2.9562 / 8.1233 / 0.0726
BM3D | 2.0927 / 31.5617 / 0.8782 | 2.1057 / 27.4067 / 0.7716 | 2.1343 / 24.0083 / 0.6427
EPLL | 2.1027 / 29.1417 / 0.8268 | 2.1278 / 25.8683 / 0.7079 | 2.1423 / 24.7567 / 0.6572
SAIST | 2.0897 / 31.8100 / 0.8845 | 2.1050 / 27.5300 / 0.7811 | 2.1733 / 24.2833 / 0.6545
WNNM | 2.0762 / 32.0383 / 0.8843 | 2.0978 / 27.8200 / 0.7887 | 2.1412 / 24.4817 / 0.6680
WSNM | 2.0743 / 32.0117 / 0.8842 | 2.1200 / 27.8783 / 0.7921 | 2.1872 / 24.3933 / 0.6807
SAFPI | 2.0739 / 32.0433 / 0.8846 | 2.1120 / 27.8800 / 0.7925 | 2.1778 / 24.5000 / 0.6729
Table 9

Average denoising results of different algorithms for Poisson-Gaussian noise.

Results averaged over the images C.man and Lena.
Peak | σn | Noisy PSNR | PSNR: GAT+SAFPI / GAT+BM3D / GAT+SAIST / GAT+WNNM | Kurtosis: GAT+SAFPI / GAT+BM3D / GAT+SAIST / GAT+WNNM
1 | 0.1 | 3.05 | 20.485 / 21.440 / 18.655 / 19.370 | 2.1980 / 2.2685 / 2.3745 / 2.2425
2 | 0.2 | 5.97 | 23.085 / 23.085 / 22.020 / 22.830 | 2.1520 / 2.1695 / 2.1855 / 2.1570
5 | 0.5 | 9.69 | 25.410 / 25.035 / 24.660 / 24.335 | 2.0965 / 2.1300 / 2.1312 / 2.0885
10 | 1 | 12.32 | 26.795 / 26.510 / 26.265 / 26.120 | 2.1685 / 2.1765 / 2.1630 / 2.1675
20 | 2 | 14.64 | 28.075 / 27.830 / 27.570 / 27.845 | 2.1495 / 2.3745 / 2.1435 / 2.1300
30 | 3 | 15.80 | 28.695 / 28.430 / 28.260 / 28.570 | 2.1360 / 2.1855 / 2.1437 / 2.1275
60 | 6 | 17.43 | 29.460 / 29.205 / 29.085 / 29.385 | 2.1385 / 2.1310 / 2.1365 / 2.1250
120 | 12 | 18.54 | 30.015 / 29.730 / 29.670 / 29.995 | 2.1520 / 2.1630 / 2.1450 / 2.1345
Finally, we briefly discuss the computational complexity of the SAFPI algorithm. Assume each patch is of size A × A, where A is the side length of a block, and let k be the number of similar patches in each structural group. Computing the SVD of each A² × k group matrix (step 4 in Algorithm 1) dominates the per-iteration cost, and computing the singular values in step 6 adds a further per-group cost. Since the image can be divided into N blocks in step 2, the total cost scales with N and with the number of iterations i of Algorithm 1. We also recorded the execution times of several denoising algorithms on the above experiments with Gaussian noise of standard deviation σ = 20: SAFPI 4843.339 s, WSNM 5453.311 s, WNNM 4410.991 s, SAIST 923.9837 s, BM3D 17.6242 s and EPLL 1550.607 s. Our algorithm does not take much longer than comparable methods while delivering the best denoising results.
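The dominant cost, the per-group SVD, can be illustrated with a short timing sketch. The patch size A, group size k and group count N below are illustrative placeholders, not the paper's exact parameter values.

```python
import time
import numpy as np

# Illustrative group-based settings (not the paper's exact values):
A, k, N = 7, 60, 5000          # patch side, patches per group, groups
group = np.random.default_rng(2).normal(size=(A * A, k))

t0 = time.perf_counter()
for _ in range(200):
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
per_svd = (time.perf_counter() - t0) / 200

# One pass over the image performs N such SVDs; the classical cost of
# an m x n thin SVD is O(m * n * min(m, n)) flops.
print(f"one SVD of a {A * A}x{k} group: {per_svd * 1e6:.0f} us, "
      f"~{per_svd * N:.2f} s per full pass")
```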

4 Conclusions

In this paper, a fixed-point iteration scheme was developed for sparse optimization in ℓp space with p ∈ (0, 1] by using the proximal operator. We showed that group sparse coding is equivalent to the Schatten-p norm minimization problem, so that the sparse coefficients of each group can be measured by estimating its singular values. Our analysis of the power p shows that its optimal value is related to the noise level: as the noise level increases, the optimal value of p gradually decreases, whereas at high noise levels the optimal value of p moves back close to 1. The developed SAFPI algorithm obtains higher PSNR values and retains promising texture structure and visual quality, leading to better image denoising than competing algorithms. There are several future research directions. We are exploring other non-convex optimization strategies for more effective convergence and further improvement. Convolutional neural network (CNN) based denoising methods have become increasingly popular, and we will investigate CNN architectures for image denoising in future work.

5 Appendix

Proof 9 (Proof of Theorem 1). Let D = U and A = ΣV in Eq (1), where Σ is a diagonal matrix and each column of V is of the form v = α/ε. Then we obtain the expression in Eq (9). Let σi denote the standard deviation of the sparse coefficients αi in the i-th column; the sum of the standard deviations associated with the sparse coefficient vector of each column then gives Eq (10). Using Eq (10) and the unitary property of V, we arrive at Eq (13). Substituting Eq (13) into Eq (9) yields the claim, which provides a better approximation to the rank function through the Schatten-p quasi-norm.

Proof 10 (Proof of Lemma 6). Let ϕ = |w|^p. Using the subdifferential and Corollary 2.59 of [27], one obtains an inclusion that holds with equality when ϕ is convex. Defining Π*(x) accordingly, Π*(x) is the pointwise minimum of a collection of affine functions, hence closed and convex; thus Proxλ∥⋅∥(x) is also closed and convex. However, Π*(x) is a discontinuous mapping, so in general there is no closed-form expression for it.

Proof 11 (Proof of Theorem 8). For 0 < p < 1, let ϕ(w) = |w|^p; then ∂ϕ(w) = ∅ when w = 0. To overcome the singularity of (|w|^p)′ = pw/|w|^(2−p) near w = 0, following Section 4 of [28], we consider for 0 < ϵ ≪ 1 a smooth approximation. It is important to observe that Proxλ∥⋅∥(x) = 0 whenever |x| lies below a threshold, with equality attained at the boundary. Otherwise, the necessary optimality condition is given by Eq (18). To solve Eq (18) for nonnegative w, we consider the fixed-point iteration w ← g(w); one sees that w solves Eq (18) if and only if w = g(w). The first- and second-order derivatives of f(w) = (w − x)² + 2λ|w|^p show that f(w) is concave near the origin and convex beyond it; by the Contraction Mapping Theorem (Theorem 1.5 on page 11 of [30]), the iteration converges to a fixed point, which is the root of w = g(w) in that interval.

Moreover, the derivative of x(w) = w + λpw^(p−1) is x′(w) = 1 + λp(p − 1)w^(p−2); solving x′(w) = 0 gives w = (λp(1 − p))^(1/(2−p)). Combined with (17), this determines the threshold, and the proof is complete.
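The fixed-point iteration of Proof 11 can be sketched in code. Below is a minimal Python sketch of the proximal operator of λ|w|^p solved by the iteration w ← |x| − λp·w^(p−1); the threshold formula is the standard generalized soft-thresholding bound, and the function name `prox_lp` and the iteration count are our own choices, not taken from the paper's implementation.

```python
import math

def prox_lp(x, lam, p, iters=50):
    """argmin_w 0.5*(w - x)**2 + lam*abs(w)**p for 0 < p <= 1, via the
    fixed-point iteration w <- |x| - lam*p*w**(p-1) described in the
    appendix. Equivalent to minimizing (w - x)**2 + 2*lam*abs(w)**p."""
    if p == 1.0:                       # reduces to ordinary soft thresholding
        return math.copysign(max(abs(x) - lam, 0.0), x)
    # Below this threshold, w = 0 is the minimizer (generalized
    # soft-thresholding bound; at |x| = tau both branches tie).
    w_bar = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p))
    tau = w_bar + lam * p * w_bar ** (p - 1.0)
    if abs(x) <= tau:
        return 0.0
    w = abs(x)                         # start in the convex region
    for _ in range(iters):
        w = abs(x) - lam * p * w ** (p - 1.0)
    return math.copysign(w, x)

# The returned value satisfies the stationarity condition
# w - |x| + lam*p*w**(p-1) = 0:
w = prox_lp(3.0, lam=1.0, p=0.5)
print(round(abs(w - 3.0 + 0.5 * w ** -0.5), 6))  # 0.0
```

The contraction property established in Proof 11 is what guarantees this loop converges: the iteration map has derivative of magnitude below 1 on the convex region, so starting from w = |x| is safe.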
References (11 in total)

1.  Image quality assessment: from error visibility to structural similarity.

Authors:  Zhou Wang; Alan Conrad Bovik; Hamid Rahim Sheikh; Eero P Simoncelli
Journal:  IEEE Trans Image Process       Date:  2004-04       Impact factor: 10.856

2.  On the mathematical properties of the structural similarity index.

Authors:  Dominique Brunet; Edward R Vrscay; Zhou Wang
Journal:  IEEE Trans Image Process       Date:  2011-10-24       Impact factor: 10.856

3.  Patch-based near-optimal image denoising.

Authors:  Priyam Chatterjee; Peyman Milanfar
Journal:  IEEE Trans Image Process       Date:  2011-10-19       Impact factor: 10.856

4.  Global Image Denoising.

Authors:  Hossein Talebi; Peyman Milanfar
Journal:  IEEE Trans Image Process       Date:  2014-02       Impact factor: 10.856

5.  Image denoising by sparse 3-D transform-domain collaborative filtering.

Authors:  Kostadin Dabov; Alessandro Foi; Vladimir Katkovnik; Karen Egiazarian
Journal:  IEEE Trans Image Process       Date:  2007-08       Impact factor: 10.856

6.  Joint solution for PET image segmentation, denoising, and partial volume correction.

Authors:  Ziyue Xu; Mingchen Gao; Georgios Z Papadakis; Brian Luna; Sanjay Jain; Daniel J Mollura; Ulas Bagci
Journal:  Med Image Anal       Date:  2018-03-28       Impact factor: 8.545

7.  Is denoising dead?

Authors:  Priyam Chatterjee; Peyman Milanfar
Journal:  IEEE Trans Image Process       Date:  2009-11-20       Impact factor: 10.856

8.  Denoising techniques in adaptive multi-resolution domains with applications to biomedical images.

Authors:  Salim Lahmiri
Journal:  Healthc Technol Lett       Date:  2016-12-14

9.  Nonlocal image restoration with bilateral variance estimation: a low-rank approach.

Authors:  Weisheng Dong; Guangming Shi; Xin Li
Journal:  IEEE Trans Image Process       Date:  2012-10-02       Impact factor: 10.856

10.  Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

Authors:  Salim Lahmiri
Journal:  Healthc Technol Lett       Date:  2015-12-15
