Yulan Han, Yongping Zhao, Qisong Wang (Department of Automatic Test and Control, Harbin Institute of Technology, Harbin, Heilongjiang, China)
Abstract
In this study, we address the problem of noisy image super-resolution. In real applications, the observed low resolution (LR) image is often noisy, while most existing algorithms assume that the LR image is noise-free. To handle this situation, we present an algorithm for noisy image super-resolution that performs image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and the dictionary pair does not need to be retrained for different input LR images, even if the noise variance varies. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce the computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which also performs a second selection on the similar atoms. Moreover, LR example patches with their mean pixel values subtracted are used to learn the dictionary, rather than just their gradient features. Based on this, we reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method achieves better noise robustness.
1 Introduction

Single image super-resolution (SR) is a classical problem in computer vision. In general, it uses signal processing techniques to recover a high resolution (HR) image from only one low resolution (LR) image. SR methods can be broadly classified into three categories: interpolation-based methods, reconstruction-based methods, and example-based methods.

Interpolation-based SR such as [1, 2] has been proposed for various applications and has the advantage of computational simplicity. However, these methods usually fail to generate fine details in discontinuous regions and in practice often blur edges and other high-frequency features [3].

Reconstruction-based methods usually integrate one or more sophisticated priors, such as the gradient profile prior [4], edge priors [5], and total variation [6], into the SR framework to estimate the missing details. Recently, sparsity-based regularization [7-10] has also been shown to be particularly effective for the ill-posed SR problem. These methods usually achieve impressive results in preserving sharp edges and suppressing aliasing artifacts. However, their performance depends heavily on a rational prior imposed on the up-sampled image [11].

Over the years, many example-based SR methods [12-14] have been proposed with promising results and have become the mainstream approach in the SR domain. These methods assume that the missing high-frequency details can be estimated by learning the mapping between LR-HR patch pairs of an external database and the input LR patches. Two kinds of relationship models exist for these methods. The first models the relationship between LR patches and the corresponding HR patches in the database. After Freeman et al. [15] used a Markov network to model this relationship, regression functions [16] were employed to exploit the relationship between HR and LR patch pairs. In addition, supervised or semi-supervised learning models have been introduced in some algorithms [17-19]. Recently, a mapping between LR-HR image pairs was learned using a deep convolutional neural network [20] and has shown favorable results. D. Dai et al. [21] jointly learned a collection of regressors from LR to HR patches, which collectively yielded the smallest error over all training data. The second models the relationship between LR example patches and input LR patches. Most such methods [22, 23] are based on Nearest Neighbor Embedding (NNE). In these methods, a fixed number of nearest neighbors are extracted from the database for each input LR patch, and the corresponding HR patches are then used to estimate the output HR patch by a linear combination determined by the LR patch and its neighbors. Although these algorithms have demonstrated successful results, they depend strongly on the number of neighbors, which is difficult to determine. To address this problem, [24] uses a dynamic k-nearest neighbor algorithm, where k is small for a test point with highly relevant neighbors and large otherwise. Some researchers compute the distance between the input patch and each of its neighbors and abandon the neighbors whose distances exceed the mean value. Yang [25] exploited sparse coding to perform image SR. The algorithm assumes that LR-HR patch pairs share the same sparse coefficients with respect to their respective dictionaries, which are jointly learned from a set of external training images. It can be considered neighbor embedding in the sparse domain without choosing the number of neighbors.
Since then, sparse coding has been applied to the SR problem [21-23] and achieves impressive results. Zeyde [26] used dimensionality reduction and orthogonal matching pursuit for the sparse representation to improve efficiency. S. Wang [27] proposed a semi-coupled dictionary learning model, under which a pair of dictionaries and a mapping function describing the relationship between the sparse coefficients of LR-HR patch pairs are learned simultaneously. In [28], kernel ridge regression is employed to connect the sparse coefficients of LR-HR patch pairs. Kaibing Zhang [29] determines the relationship between LR and HR image patches by assuming that they share the same sparse coefficients. R. Timofte et al. [30] proposed a fast image SR method called anchored neighbourhood regression (ANR), which learns sparse dictionaries and regressors anchored to the dictionary atoms. This algorithm is faster while making no compromise on quality. R. Timofte et al. [31] then produced an improved variant of ANR that enhances the features and the anchored regressors; instead of learning the regressors on the dictionary, their method uses the full training material. It obtains improved quality and is indisputably among the fastest methods. S. Gu [32] proposed a convolutional sparse coding based SR method to address the consistency issue. In addition, research shows that image structures tend to repeat themselves within and across scales. [33-35] exploit the self-similarity of structures in natural images and extract the database directly from the LR input image instead of an external database. Good reconstruction quality, however, requires considerable additional memory and running time to build counterparts across different scales in a recursive scheme, so the application of these methods is limited.

Although these algorithms can achieve good performance, most SR algorithms, including other learning-based methods, assume that the input LR image is noise-free. This assumption does not hold in real applications, and such algorithms are not robust for noisy image SR. Another challenge is therefore super-resolution for noisy images. Compared with SR on clean LR input images, less attention has been paid to developing effective SR algorithms for noisy ones. J. Xie [36] first employs an adaptively regularized shock filter to tackle the jagged noise and then performs SR for the depth image. The disadvantage of such a scheme is that artifacts can be created in the denoising process and magnified in the super-resolution process. Therefore, researchers have turned to simultaneous denoising and super-resolution. In [37], the LR training images are magnified by a TV regularization model with a constraint before the dictionary training stage. However, the noise level this method can deal with is small, and it focuses on magnification only. Based on the current state of research, we set out to design an algorithm that completes SR and denoising in the same framework for noisy image patches.

Sparse representation concentrates the signal energy in only a few atoms. Because of this special property, some sparse coding based SR algorithms such as [25] show a certain robustness to noisy images. In addition, sparse representation has been successfully employed in image denoising [38, 39], image restoration [40, 41], and other processing tasks [42, 43]. The dictionary plays an important role in the sparse representation process.
A predefined analytical dictionary (e.g., a wavelet or Gabor dictionary) makes the coding fast and explicit, but it is less effective at modeling the complex local structures of natural images. A synthesis dictionary (e.g., a K-SVD dictionary) can be learned from example natural images; it is computationally more expensive but can better model complex local image structures [44]. In recent years, many dictionary learning methods have been proposed and have achieved notable performance. Feng et al. [45] propose to jointly learn the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. Zhang et al. [46] propose a semi-supervised label consistent dictionary learning framework for machine fault classification. Inspired by these, we introduce sparse representation theory into our research.

The overall procedure is illustrated in Fig 1. The input LR image and the example images are first cropped into patches; the example images are noise-free. The features of the example patch pairs are then extracted and used to learn a dictionary pair. For each input LR patch, its features allow us to simultaneously find similar dictionary atom pairs and calculate the distance b between the input LR patch and its similar atoms. Next, combined with the input LR patch features, the LR dictionary atoms and the distance b are used to compute the weight ω. Once the weight is computed, we obtain the estimated HR image patch and the denoised LR image patch from the weighted HR atoms. All the estimated HR patches are assembled into an estimated HR image, computed by averaging in the overlapping regions; in the same way, we obtain the denoised LR image from the denoised LR patches. At last, combined with iterative back projection (IBP), the estimated HR image and the denoised LR image are used to obtain the final output HR image.
Fig 1
The flowchart of the proposed SR algorithm.
The contributions can be summarized as follows.

(1) Different from conventional methods, the proposed algorithm can process noisy images and performs image super-resolution and denoising simultaneously. Furthermore, in the training stage of our method, the LR example images are noise-free. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained.

(2) The core idea of the proposed method is that the estimated HR patch is a weighted average of similar HR example patches. To reduce the computational cost of finding similar patches among millions of examples, the example patches are replaced by a learned sparse dictionary, which concentrates the signal energy in a few atoms.

(3) A penalty function is applied to the l2-norm regularized least squares regression for modeling the weights. It makes the objective function treat each similar atom unequally. The function is determined by the similarity between the input LR patch and its similar atom in the LR dictionary: when the similarity is strong, we make the penalty small, which forces a large weight; conversely, when the similarity is weak, we make the penalty large, which forces a small or zero weight.

(4) LR example patches with their mean pixel values subtracted are used for training the dictionary, rather than just their gradient features as in other works such as [25]. In the training stage, for each LR example patch, we first subtract its mean pixel value and then concatenate it with its corresponding HR example patch into a single vector. All the new vectors are used as new HR examples to learn the HR dictionary. Thus, the HR dictionary represents not only the textures of the HR example patches but also those of the LR example patches, which are noise-free. Therefore, in the reconstruction stage, the HR dictionary can also be used to recover denoised input LR patches, which is different from conventional learning methods. Combined with iterative back projection (IBP), the denoised LR patches are used to enhance robustness to noise.

The remainder of this paper is organized as follows. The proposed algorithm is presented in detail in Section 2. Experimental results and comparisons are demonstrated in Section 3. Section 4 concludes this paper.
2 The proposed method
Firstly, let us recall the image degradation model shown in Eq (1). An observed LR image $Y$ is a degraded version of an HR image $X$ of the same scene:

$Y = GHX + v$,  (1)

where $G$ is the down-sampling operator with scaling factor $s$, $H$ is the blurring operator, and $v$ is the noise. The task of SR reconstruction is to recover $X$ from $Y$ as accurately as possible. Conventional SR methods consider the image to be noise-free.
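To make the model concrete, here is a minimal sketch of Eq (1) in Python; the Gaussian blur kernel, its width, and the decimation-style down-sampling are our assumptions, not the paper's exact operators:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, s=2, blur_sigma=1.0, noise_sigma=10.0):
    """Apply Eq (1): blur the HR image (H), down-sample by factor s (G), add noise v."""
    blurred = gaussian_filter(hr.astype(np.float64), sigma=blur_sigma)  # H X
    lr = blurred[::s, ::s]                                              # G H X
    return lr + noise_sigma * np.random.randn(*lr.shape)                # + v
```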
2.1 Example database
From the example HR images $\{X_i\}$, LR images $\{Y_i\}$ are first generated and are considered noise-free. For each image $X_i$, its corresponding LR image is determined by

$Y_i = GHX_i$.  (2)

A set of vectorized HR patches $\{p_h^i\}$ of size $w_1$ is taken from the example HR images, and a set of vectorized LR patches $\{p_l^i\}$ of size $w_2$ is taken from the example LR images. Consequently, we obtain a database of HR-LR patch pairs

$P = \{(p_h^i, p_l^i)\}_{i=1}^{m}$.  (3)
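A possible sketch of building this patch-pair database; the patch sizes and sampling stride are illustrative assumptions:

```python
import numpy as np

def extract_patch_pairs(hr_img, lr_img, s=2, lr_size=6, stride=3):
    """Collect co-located vectorized HR/LR example patches (p_h, p_l); HR patches are s times larger."""
    hr_size = lr_size * s
    pairs = []
    for r in range(0, lr_img.shape[0] - lr_size + 1, stride):
        for c in range(0, lr_img.shape[1] - lr_size + 1, stride):
            p_l = lr_img[r:r + lr_size, c:c + lr_size].reshape(-1)
            p_h = hr_img[r * s:r * s + hr_size, c * s:c * s + hr_size].reshape(-1)
            pairs.append((p_h, p_l))
    return pairs
```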
2.2 Distance penalty weight model
For super-resolution, given an LR image $Y$ generated from an HR image $X$ by Eq (1), the task is to recover the unknown $X$ from $Y$ with the help of the example patch pairs. The algorithm operates patch by patch. Similar to [25], $Y$ is first divided into overlapping patches

$Y = \{y_i\}_{i=1}^{N}$,  (4)

where $y_i$ is a vectorized LR image patch of size $w_2$ and $N$ is the number of patches of $Y$. The estimated vectorized HR image $X$ can be represented as

$X = \{x_i\}_{i=1}^{N}$,  (5)
where $x_i$ is an estimated HR image patch of size $w_1$. According to Eq (1), the relationship between the patch pairs can be described by

$y_i = GHx_i + v$,  (6)

where $v$ is the noise; we assume it is Gaussian with zero mean and variance σ². The purpose of super-resolution thus becomes estimating the HR image patch $x_i$ from the input LR image patch $y_i$. As is known, each $x_i$ can be approximated by a weighted average of HR example patches with similar structures. Based on this core idea, the problem addressed in this method is to find the patches similar to $x_i$ in the database and to calculate the weights. Due to the repetition of local structures in images, there exists a subset $P_s = \{(p_h^j, p_l^j)\}_{j=1}^{k}$ of patch pairs whose HR patches have structures similar to $x_i$. That is,

$x_i \approx \sum_{j=1}^{k} \omega_j p_h^j$,  (7)
where the weight vector is $\omega = [\omega_1, \omega_2, \ldots, \omega_j, \ldots, \omega_k]^T$ and $k$ is the number of patch pairs in the subset $P_s$. There are many ways to determine the weights, for example setting them inversely proportional to the distances between patches. Such schemes rely heavily on the number of similar patches and cannot suppress noise. We now discuss a new weight model in detail. According to the degradation model, Eqs (1) and (7) give

$y_i \approx GH \sum_{j=1}^{k} \omega_j p_h^j + v$.  (8)

From Eq (8) we can obtain

$y_i \approx \sum_{j=1}^{k} \omega_j p_l^j + v$,  (9)
where $v$ is assumed to be Gaussian noise with zero mean and variance σ². Thus,

$\| y_i - \sum_{j=1}^{k} \omega_j p_l^j \|_2^2 \le \varepsilon$,  (10)

where ε is related to σ². We can see that the LR patch $y_i$ can be represented by the same weight vector ω over the LR patches of $P_s$, up to a controlled error. That is to say, we can obtain the weights from the input LR image patch and the similar LR example patches with a controlled error. Based on the above discussion, we formulate the weight solution as a least squares regression regularized by the l2-norm:

$\min_{\omega} \| y_i - \sum_{j=1}^{k} \omega_j p_l^j \|_2^2 + \lambda \| \omega \|_2^2$.  (12)

In Eq (12), the objective function treats all similar patches equally, which is not flexible enough to obtain accurate weights for the input patch. Motivated by this, we introduce a distance penalty into the least squares problem:

$\min_{\omega} \| y_i - \sum_{j=1}^{k} \omega_j p_l^j \|_2^2 + \lambda \| b \cdot \omega \|_2^2$,  (13)
where ⋅ denotes the pointwise vector product, $b = [b_1, b_2, \ldots, b_j, \ldots, b_k]^T$, and $b_j$ is the distance between $y_i$ and the $j$-th similar example patch in $P_s$. When the similarity between $y_i$ and $p_l^j$ is strong, we make $b_j$ small, which forces a large $\omega_j$; conversely, when the similarity is weak, we make $b_j$ large, which forces a small or zero $\omega_j$. The distance is simply determined by the squared Euclidean distance. With $P_s^l = [p_l^1, \ldots, p_l^k]$ and $B = \mathrm{diag}(b_1^2, \ldots, b_k^2)$, Eq (13) can be written as

$\min_{\omega} \| y_i - P_s^l \omega \|_2^2 + \lambda\, \omega^T B\, \omega$,  (14)

where λ is a regularization parameter. According to Eq (10), we have

$\lambda = \gamma \sigma^2$,  (15)
where γ is a positive constant. We therefore set λ = γσ² when σ ≠ 0.

Thus, the main task in the reconstruction stage is to find, for each input patch $y_i$, the similar patches in the database and to compute the weights. The squared Euclidean distance can be adopted to quantify the similarity, and the corresponding HR patches are assumed to have structures similar to $x_i$. However, it is impractical to search millions of example patch pairs for every input patch; the repetitive computation takes a large amount of time. A sparse dictionary concentrates the signal energy in a few atoms, and some sparse coding based SR algorithms [25] show a certain robustness to noisy images, so we use a learned sparse dictionary in place of the raw examples and find similar patch pairs among the dictionary atom pairs.

Two dictionaries $D_h$ and $D_l$ are trained such that each HR-LR patch pair has the same sparse coding. Similar to Yang [25] and Chang [22], we subtract the mean pixel value from each HR example patch, so that $D_h$ represents image textures rather than absolute intensities; in the reconstruction stage, the mean value of each estimated patch is then predicted from its LR version. We also employ first- and second-order derivatives as the features of the LR example patches for training, so that $D_l$ represents the gradient features of images rather than absolute intensities. The four filters used here are

$f_1 = [-1, 0, 1]$, $f_2 = f_1^T$, $f_3 = [1, 0, -2, 0, 1]$, $f_4 = f_3^T$.  (16)

In addition, to enhance robustness to noise, we also subtract the mean pixel value from each LR example patch and concatenate the result with its corresponding HR example patch into a single vector, which is likewise used to learn $D_h$. Thus $D_h$ represents not only the textures of the HR example patches but also those of the LR example patches, which are noise-free. In the reconstruction stage, $D_h$ can therefore also be used to recover denoised input LR patches. This is different from conventional learning methods. From the above, the training set is obtained by

$P_h = \left\{ \begin{bmatrix} p_h^i - m_h^i \\ p_l^i - m_l^i \end{bmatrix} \right\}_{i=1}^{m}, \qquad P_l = \{ F(p_l^i) \}_{i=1}^{m}$,  (17)
where $(p_h^i, p_l^i)$ are the original HR-LR patch pairs of Eq (3), $m_h^i$ is the mean value of $p_h^i$, $m_l^i$ is the mean value of $p_l^i$, and F(⋅) is the operator that applies the four filters of Eq (16) and concatenates the four gradient vectors into a single vector. The set $(P_h, P_l)$ is used to jointly train the dictionaries as

$\min_{D_h, D_l, A} \frac{1}{N}\| P_h - D_h A \|_2^2 + \frac{1}{M}\| P_l - D_l A \|_2^2$,  (18)
where $N$ and $M$ are the vector dimensions of $P_h$ and $P_l$, respectively, and the coding matrix $A$ is constrained to be sparse. To solve the problem easily, Eq (18) can be rewritten as

$\min_{D, A} \| P - D A \|_2^2 + \hat{\lambda} \| A \|_1$,  (19)
where $P = \begin{bmatrix} \frac{1}{\sqrt{N}} P_h \\ \frac{1}{\sqrt{M}} P_l \end{bmatrix}$ and $D = \begin{bmatrix} \frac{1}{\sqrt{N}} D_h \\ \frac{1}{\sqrt{M}} D_l \end{bmatrix}$. The minimization of Eq (19) is a typical patch-based sparse coding problem, and many methods can be used to solve it. Yang [25] proposed this framework and acquired good results; however, solving the sparse model takes a large amount of time. Zeyde [26] improved the execution speed by reducing the dimensionality of the patches through PCA and using Orthogonal Matching Pursuit for the sparse coding. For learning the sparse dictionaries, we use the approach of Zeyde [26]. Gradient features (see Eq (16)) of the LR example patches are used to learn the LR dictionary, so $D_l$ represents the image gradient features and the input patch is likewise represented by its features $F(y_i)$. Therefore, the weight model is rewritten as

$\min_{w} \| F(y_i) - D_s w \|_2^2 + \lambda \| b \cdot w \|_2^2$,  (20)

where $D_s$ denotes the matrix formed by the $k$ similar LR dictionary atoms
and $w$ is the weight vector. Problem (20) is an l2-norm regularized least squares problem; we solve it for $w$ by setting the derivative with respect to $w$ to zero. The closed-form solution is

$w = (D_s^T D_s + \lambda B)^{-1} D_s^T F(y_i)$,  (21)
where $B$ is the k × k diagonal matrix

$B = \mathrm{diag}(b_1^2, b_2^2, \ldots, b_k^2)$.  (22)

The final optimal weight is obtained by rescaling $w$ so that $\sum_{j=1}^{k} w_j = 1$.
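The closed-form solution of Eq (21) is straightforward to implement. Below is a minimal sketch under our reading that B = diag(b_j²); the variable names are ours:

```python
import numpy as np

def penalized_weights(y_feat, D_sim, b, sigma, gamma=0.08):
    """Distance-penalized ridge regression, Eqs (20)-(21), followed by the sum-to-one rescaling."""
    lam = gamma * sigma ** 2                    # lambda = gamma * sigma^2 (sigma != 0)
    B = np.diag(b.astype(np.float64) ** 2)      # k x k diagonal distance-penalty matrix
    A = D_sim.T @ D_sim + lam * B
    w = np.linalg.solve(A, D_sim.T @ y_feat)    # w = (D^T D + lam * B)^{-1} D^T F(y)
    return w / w.sum()                          # rescale so the weights sum to one
```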
2.3 Reconstruction
Based on the above discussion, for each input patch $y_i$, we start by extracting its gradient features $F(y_i)$ and finding $k$ similar atom pairs. Because the dictionary atoms are learned basis vectors, we find the similar atoms based on the correlation between the LR dictionary atoms and the input LR patch rather than on the Euclidean distance. We now describe how to compute this correlation. $F(y_i)$ can be represented over the dictionary $D_l = [d_1, d_2, \ldots, d_{n_d}]$ ($d_j$ is an LR dictionary atom, $n_d$ is the dictionary size) as

$F(y_i) = D_l \beta = \sum_{j=1}^{n_d} \beta_j d_j$,  (23)
where $\beta = [\beta_1, \beta_2, \ldots, \beta_j, \ldots, \beta_{n_d}]^T$ and $\beta_j$ is the correlation between $F(y_i)$ and $d_j$. Eq (23) shows that every dictionary atom makes its own contribution to representing the input patch, and the contribution of the $j$-th atom can be evaluated by $\beta_j$. In other words, $\beta_j$ is a measurement of the similarity between the input patch and the $j$-th dictionary atom: the larger $\beta_j$ is, the stronger the similarity; a small $\beta_j$ means that there is little similarity. We can solve for β by

$\beta = \arg\min_{\beta} \| F(y_i) - D_l \beta \|_2^2 = D_l^{+} F(y_i)$,  (24)

where $D_l^{+}$ is the pseudo-inverse of $D_l$; thus $D_l^{+} F(y_i)$ returns the correlations. In Eq (20), we use the distance b as the penalty. When the similarity between $y_i$ and an atom is strong, we make $b_j$ small, which forces a large $w_j$; conversely, when the similarity is weak, we make $b_j$ large, which forces a small or zero $w_j$. Therefore, we use the reciprocal of $\beta_j$ to compute the penalty. The atom pairs corresponding to the $k$ largest correlation coefficients constitute $D_s$, and b in Eq (20) is determined by

$b = 1 \,/\, \mathrm{Sort}(\mathrm{abs}(\beta), k)$,  (25)
where Sort(a, num) is a function returning the num largest values of the vector a and abs(⋅) is the element-wise absolute value. This scheme achieves similar-atom finding and distance computation simultaneously. If σ = 0, after finding the similar atoms we set b = 1. After this, we can easily obtain the weight $w$ by Eq (21). According to Section 2.2, the reconstructed vector $q = D_s^h w$, where $D_s^h$ denotes the HR atoms paired with the atoms of $D_s$, represents both the estimated HR patch and the denoised LR patch corresponding to $y_i$, each with its mean pixel value subtracted. Based on this, we have

$q = \begin{bmatrix} \hat{x}_i - \mathbf{1}_{w_1} E(\hat{x}_i) \\ \hat{y}_i - \mathbf{1}_{w_2} E(\hat{y}_i) \end{bmatrix}$,  (26)
where $\hat{x}_i$ is the estimation of $x_i$, $\hat{y}_i$ is the denoised patch of $y_i$, $\mathbf{1}_{w_1}$ and $\mathbf{1}_{w_2}$ are all-one column vectors, $w_1$ is the size of $\hat{x}_i$, $w_2$ is the size of $\hat{y}_i$, and E(⋅) is the mean evaluation operator. The noise here is assumed to be zero-mean, so

$E(y_i) = E(GHx_i + v) = E(GHx_i) + E(v) \approx E(GHx_i)$.  (27)

We can see that the noise has little effect on the image mean, so the means of $\hat{x}_i$ and $\hat{y}_i$ can be estimated by the mean of $y_i$. Eq (26) can then be written as

$\hat{x}_i = q_h + \mathbf{1}_{w_1} E(y_i), \qquad \hat{y}_i = q_l + \mathbf{1}_{w_2} E(y_i)$,  (28)

where $q_h$ and $q_l$ are the HR and LR parts of $q$. All estimated patches $\hat{x}_i$ are assembled into an HR image $\hat{X}$, computed by averaging in the overlapping regions; in the same way, we obtain a denoised image $\hat{Y}$ from the patches $\hat{y}_i$. In order to strengthen the reconstruction constraint Eq (1), we compute the final estimated HR image $X^*$ by

$X^* = \arg\min_{X} \| GHX - \hat{Y} \|_2^2$, with $X$ initialized to $\hat{X}$.  (29)

The iterative back-projection (IBP) method [32] is used to solve this optimization problem:
$\hat{X}^{t+1} = \hat{X}^{t} + \big( (\hat{Y} - GH\hat{X}^{t}) \uparrow s \big) * p$,  (30)

where $\hat{X}^{t}$ is the estimate of the HR image at the $t$-th iteration, ↑s denotes up-scaling by factor $s$, and $p$ is a symmetric Gaussian filter.

The entire SR process is summarized as Algorithm 1.

Algorithm 1: The Proposed SR Algorithm
Input: the sparse dictionaries $D_h$ and $D_l$; input LR image $Y$; number of similar atoms $k$; a positive constant γ.
Output: HR image $X^*$.
1: for each patch $y_i$ of $Y$ do
2:   Extract the gradient features $F(y_i)$ by Eq (16).
3:   Find $k$ similar atom pairs and compute $b$ by Eq (25).
4:   Solve Eq (21) for $w$.
5:   Generate the estimated HR patch $\hat{x}_i$ and the denoised patch $\hat{y}_i$ by Eq (28).
6: end for
7: Assemble the patches $\hat{x}_i$ and $\hat{y}_i$ into images $\hat{X}$ and $\hat{Y}$, respectively.
8: Perform IBP, Eq (30), to obtain the HR image $X^*$.
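As a reference for step 8, here is a minimal IBP sketch of Eq (30); the decimation-style degradation operator and the widths of the Gaussian filters are our assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_project(x_init, y_denoised, s=2, blur_sigma=1.0, p_sigma=1.0, iters=20):
    """Refine the HR estimate so that its simulated degradation matches the denoised LR image."""
    x = x_init.astype(np.float64).copy()
    for _ in range(iters):
        simulated_lr = gaussian_filter(x, blur_sigma)[::s, ::s]    # G H X^t
        residual_up = zoom(y_denoised - simulated_lr, s, order=1)  # (Y - G H X^t) up-scaled by s
        x += gaussian_filter(residual_up, p_sigma)                 # filter with p and back-project
    return x
```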
3 Experiments
In this section, we show the robustness of the proposed algorithm to noise and compare it with the state-of-the-art methods [20, 22, 25, 26, 31, 32]. In the training stage, we used 77 standard natural images as the training set. For testing, we used Set5 [20, 31], Set14 [20, 31], and B100 [20, 31] to evaluate the performance for upscaling factors ×2, ×3, and ×4. Set5 and Set14 contain 5 and 14 images, respectively, for super-resolution evaluation; B100 contains the 100 testing images of the Berkeley Segmentation Dataset BSDS300. All LR images (training or test) are generated from the original HR images. First, the original HR images are blurred and down-sampled using the MATLAB function "imresize", which applies a smoothing filter before down-sampling. Similar to [7], the noise is generated by the MATLAB function "randn", and noise scaled by σ is added to the blurred and down-sampled test images. It should be noted that the LR example images used for training the dictionary are noise-free. For the color images used in the experiments, the SR algorithms are performed only on the luminance channel, because humans are more sensitive to luminance changes. Therefore, we first convert the images into the YCbCr color space and then apply our method to the Y channel; the color layers (Cb, Cr) are interpolated using bicubic interpolation.
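For reproducibility, a sketch of the color handling described above; OpenCV's cvtColor and resize stand in for the MATLAB routines, which is our assumption:

```python
import cv2

def sr_color(lr_bgr, sr_luma):
    """Run SR on the luminance channel only; bicubic-interpolate the chroma channels."""
    ycrcb = cv2.cvtColor(lr_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_hr = sr_luma(y).astype(y.dtype)           # the SR method applied to Y; keep dtype for merge
    size = (y_hr.shape[1], y_hr.shape[0])       # (width, height) expected by cv2.resize
    cr_hr = cv2.resize(cr, size, interpolation=cv2.INTER_CUBIC)
    cb_hr = cv2.resize(cb, size, interpolation=cv2.INTER_CUBIC)
    return cv2.cvtColor(cv2.merge([y_hr, cr_hr, cb_hr]), cv2.COLOR_YCrCb2BGR)
```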
3.1 Parameters
In this section, we analyze the main parameters of our algorithm. The standard settings are the Set5 [20, 31] database, a dictionary size of 1024, γ = 0.08, and k = 24 for upscaling factor ×2, k = 8 for upscaling factors ×3 and ×4. Peak signal-to-noise ratio (PSNR) and reconstruction time are used as the objective criteria.
3.1.1 Regularization parameter
γ is a key regularization parameter of our method. Here, we validate the effect of different values of γ and choose an appropriate one. The results on Set5 are shown in Fig 2. The experimental setting is a dictionary size of 1024 and k = 24 for upscaling factor ×2, k = 8 for upscaling factors ×3 and ×4. We can see that the curves are not monotonic and that the PSNR peaks at γ = 0.08. For different datasets, the optimal γ for reconstruction quality differs slightly (0.06 for Set14 and B100 versus 0.08 for Set5); the results for Set14 and B100 are shown in S1–S6 Figs. Therefore, we suggest setting γ to around 0.08 in practice. In all of the following experiments, we set γ to 0.08 for convenience.
3.1.2 Dictionary size

In these experiments, the dictionary size is varied from 32 up to 2048, while the training samples are extracted from the same training images mentioned previously. Fig 3 shows the relation between our method's performance and the dictionary size when γ = 0.08 and k = 24 for upscaling factor ×2, k = 8 for upscaling factors ×3 and ×4. Noise has little effect on reconstruction time, so we only show the reconstruction time for σ = 10. We can see that the larger the learned dictionary, the better the reconstruction quality becomes, but this comes with a higher computational cost; the same observation was made in [25, 47]. The other datasets, Set14 and B100, yield similar results, shown in S7–S12 Figs. In practice, we suggest choosing the dictionary size as a tradeoff between reconstruction quality and computation. The dictionary size is 1024 in the following experiments.
Fig 3
Dictionary size influence on performance on average on Set5.
3.1.3 Number of similar atoms

The proposed method finds similar atom pairs for each input patch, so its performance depends on the number of similar atoms k. The effect of k is shown in Fig 4 for a dictionary size of 1024 and γ = 0.08; again, we only show the reconstruction time for σ = 10. We can see that k = 24 gives the best reconstruction quality for upscaling factor ×2, while the PSNR peaks at k = 8 for upscaling factors ×3 and ×4. Moreover, the average reconstruction time increases distinctly as k increases, because a larger k enlarges the matrix inversion in Eq (21). The other datasets, Set14 and B100, yield similar results, shown in S13–S18 Figs. Therefore, in resource-limited systems, a reasonable selection of k depends on the tradeoff between reconstruction quality and computational time. We use k = 24 for upscaling factor ×2 and k = 8 for upscaling factors ×3 and ×4 in our further experiments.
Fig 4
Number of similar atoms influence on performance on average on Set5.
3.1.4 Patch size and overlap

Intuitively, a patch size that is too large or too small tends to produce over-smoothed results or unwanted artifacts, as also noticed in [25, 29], and a larger overlap leads to better SR results [25]. Therefore, the patch size is set to 6×6, 6×6, and 8×8 for upscaling factors ×2, ×3, and ×4, respectively, and the overlap is set to 4, 3, and 4 pixels for upscaling factors ×2, ×3, and ×4, respectively.
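For completeness, a sketch of the overlap-averaged patch assembly used throughout the method; the accumulation scheme is a standard choice, not spelled out in the paper:

```python
import numpy as np

def assemble_patches(patches, positions, img_shape, patch):
    """Average overlapping reconstructed patches into a single image."""
    acc = np.zeros(img_shape)
    cnt = np.zeros(img_shape)
    for p, (r, c) in zip(patches, positions):
        acc[r:r + patch, c:c + patch] += p.reshape(patch, patch)
        cnt[r:r + patch, c:c + patch] += 1.0
    return acc / np.maximum(cnt, 1.0)  # avoid division by zero in uncovered pixels
```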
3.2 Performance evaluation
In this section, we analyze the performance of our algorithm in quantitative and qualitative comparison with the state-of-the-art methods NE [22], SCSR [25], Zeyde [26], A+ [31], SRCNN [20], and CSC [32]; we also report the reconstruction times of the algorithms. The code of each compared method was downloaded from the authors' homepage. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as the objective criteria. The parameters were analyzed in the previous section; apart from the patch size and overlap (see Section 3.1.4), the other parameters are unified (γ = 0.08, dictionary size 1024, k = 24 for upscaling factor ×2, k = 8 for upscaling factors ×3 and ×4).
3.2.1 Quality
Tables 1–3 list the PSNR and SSIM comparisons. When σ = 0, the approach CSC [32] achieves the best performance, but the noise-free case is not representative of real applications. When σ ≠ 0, the results consistently demonstrate the superiority of our algorithm over the other approaches on Set5, Set14, and B100. The average PSNR of the recent method CSC [32] is behind ours by 0.24 dB (Set14, upscaling factor ×4, σ = 5) up to 7.4 dB (Set5, upscaling factor ×2, σ = 20). Compared with CSC on B100, the average PSNR improvement ranges from 0.52 dB (upscaling factor ×4, σ = 5) to 6.18 dB (upscaling factor ×2, σ = 20). In addition, our method improves by 3.62 dB on average (Set5, upscaling factor ×2, σ = 20) over the next most robust method, SCSR [25]. Figs 5–8 provide a visual assessment. We can see that our method achieves quality similar to the top compared methods when σ = 0 and has the strongest robustness.
Table 1
Comparisons of average PSNR (dB) and SSIM (σ = 0).
(Each cell: PSNR / SSIM.)

| Dataset | Scale | NE [22] | SCSR [25] | Zeyde [26] | A+ [31] | SRCNN [20] | CSC [32] | Ours |
|---|---|---|---|---|---|---|---|---|
| Set5 | ×2 | 35.77 / 0.949 | 36.04 / 0.951 | 35.78 / 0.949 | 36.55 / 0.954 | 36.34 / 0.952 | 36.62 / 0.955 | 35.65 / 0.948 |
| Set5 | ×3 | 31.84 / 0.896 | 31.40 / 0.887 | 31.90 / 0.897 | 32.59 / 0.909 | 32.39 / 0.887 | 32.66 / 0.909 | 31.57 / 0.895 |
| Set5 | ×4 | 29.61 / 0.840 | - | 29.69 / 0.843 | 30.28 / 0.860 | 30.09 / 0.853 | 30.36 / 0.859 | 29.49 / 0.841 |
| Set14 | ×2 | 31.76 / 0.899 | 31.71 / 0.903 | 31.81 / 0.899 | 32.28 / 0.906 | 32.18 / 0.904 | 32.31 / 0.907 | 31.71 / 0.901 |
| Set14 | ×3 | 28.60 / 0.808 | 28.07 / 0.803 | 28.67 / 0.808 | 29.13 / 0.819 | 29.00 / 0.815 | 29.15 / 0.821 | 28.26 / 0.811 |
| Set14 | ×4 | 26.81 / 0.733 | - | 26.88 / 0.734 | 27.32 / 0.749 | 26.61 / 0.725 | 27.30 / 0.750 | 26.55 / 0.738 |
| B100 | ×2 | 30.41 / 0.871 | 31.04 / 0.884 | 30.40 / 0.868 | 30.77 / 0.877 | 31.14 / 0.885 | 31.27 / 0.888 | 30.76 / 0.881 |
| B100 | ×3 | 27.85 / 0.771 | 27.81 / 0.772 | 27.87 / 0.770 | 28.18 / 0.780 | 28.21 / 0.780 | 28.31 / 0.786 | 27.85 / 0.778 |
| B100 | ×4 | 26.47 / 0.697 | - | 26.55 / 0.697 | 26.77 / 0.709 | 26.71 / 0.702 | 26.83 / 0.711 | 26.51 / 0.703 |
Table 3
The results of average PSNR (dB) and SSIM on the Set14 and B100 datasets. (Each cell: PSNR / SSIM.)

| Dataset | Scale | σ | NE [22] | SCSR [25] | Zeyde [26] | A+ [31] | SRCNN [20] | CSC [32] | Ours |
|---|---|---|---|---|---|---|---|---|---|
| Set14 | ×2 | 5 | 28.74 / 0.7514 | 29.31 / 0.7981 | 29.01 / 0.7647 | 28.71 / 0.7737 | 28.61 / 0.7435 | 28.36 / 0.7275 | 29.69 / 0.8205 |
| Set14 | ×2 | 10 | 25.08 / 0.5551 | 26.59 / 0.6478 | 25.46 / 0.5750 | 24.78 / 0.5400 | 24.45 / 0.5264 | 24.31 / 0.5180 | 27.80 / 0.7381 |
| Set14 | ×2 | 15 | 22.29 / 0.4204 | 24.31 / 0.5213 | 22.71 / 0.4394 | 21.89 / 0.4309 | 21.42 / 0.3848 | 21.35 / 0.3821 | 26.38 / 0.6732 |
| Set14 | ×2 | 20 | 20.15 / 0.3302 | 22.47 / 0.4259 | 20.57 / 0.3466 | 19.70 / 0.3140 | 19.14 / 0.2945 | 19.10 / 0.2933 | 25.35 / 0.6220 |
| Set14 | ×3 | 5 | 26.86 / 0.6903 | 26.55 / 0.6891 | 27.08 / 0.7035 | 26.96 / 0.6859 | 26.99 / 0.6942 | 26.70 / 0.6703 | 27.16 / 0.7220 |
| Set14 | ×3 | 10 | 24.19 / 0.5240 | 23.95 / 0.5215 | 24.52 / 0.5441 | 23.92 / 0.5079 | 23.90 / 0.5014 | 23.53 / 0.4856 | 25.78 / 0.6663 |
| Set14 | ×3 | 15 | 21.89 / 0.4032 | 21.64 / 0.3990 | 22.24 / 0.4221 | 21.43 / 0.3827 | 21.29 / 0.3781 | 20.96 / 0.3606 | 24.67 / 0.6075 |
| Set14 | ×3 | 20 | 19.99 / 0.3196 | 19.72 / 0.3142 | 20.35 / 0.3355 | 19.43 / 0.2981 | 19.20 / 0.2900 | 18.88 / 0.2767 | 23.77 / 0.5579 |
| Set14 | ×4 | 5 | 25.57 / 0.6398 | - | 25.76 / 0.6526 | 25.76 / 0.6416 | 25.89 / 0.6575 | 25.49 / 0.6241 | 25.73 / 0.6788 |
| Set14 | ×4 | 10 | 23.42 / 0.4985 | - | 23.42 / 0.5174 | 23.24 / 0.4865 | 23.45 / 0.5078 | 22.84 / 0.4607 | 24.64 / 0.6171 |
| Set14 | ×4 | 15 | 21.39 / 0.3896 | - | 21.69 / 0.4076 | 21.01 / 0.3713 | 21.08 / 0.3836 | 20.51 / 0.3448 | 23.70 / 0.5686 |
| Set14 | ×4 | 20 | 19.66 / 0.3115 | - | 19.96 / 0.3265 | 19.15 / 0.2910 | 19.08 / 0.2958 | 18.57 / 0.2655 | 22.91 / 0.5283 |
| B100 | ×2 | 5 | 28.00 / 0.7264 | 28.81 / 0.7719 | 28.19 / 0.7380 | 27.96 / 0.7196 | 28.02 / 0.7210 | 27.83 / 0.7076 | 28.95 / 0.7917 |
| B100 | ×2 | 10 | 24.66 / 0.5279 | 26.28 / 0.6200 | 25.02 / 0.5480 | 24.36 / 0.5123 | 24.17 / 0.5059 | 24.07 / 0.4997 | 27.29 / 0.7037 |
| B100 | ×2 | 15 | 22.01 / 0.3951 | 24.10 / 0.4941 | 22.42 / 0.4136 | 21.63 / 0.3792 | 21.25 / 0.3661 | 21.23 / 0.3654 | 26.08 / 0.6378 |
| B100 | ×2 | 20 | 19.95 / 0.3077 | 22.32 / 0.4000 | 20.37 / 0.3233 | 19.52 / 0.2925 | 19.03 / 0.2770 | 19.02 / 0.2779 | 25.20 / 0.5873 |
| B100 | ×3 | 5 | 26.32 / 0.6518 | 26.79 / 0.6728 | 26.49 / 0.6638 | 26.34 / 0.6463 | 26.46 / 0.6586 | 26.20 / 0.6351 | 26.85 / 0.7010 |
| B100 | ×3 | 10 | 23.86 / 0.4878 | 23.74 / 0.4874 | 24.15 / 0.5067 | 23.58 / 0.4716 | 23.60 / 0.4767 | 23.27 / 0.4538 | 25.64 / 0.6273 |
| B100 | ×3 | 15 | 21.66 / 0.3703 | 21.47 / 0.3674 | 21.99 / 0.3882 | 21.22 / 0.3510 | 21.10 / 0.3481 | 20.81 / 0.3326 | 24.66 / 0.5702 |
| B100 | ×3 | 20 | 19.82 / 0.2902 | 19.59 / 0.2855 | 20.17 / 0.3051 | 19.29 / 0.2706 | 19.07 / 0.2635 | 18.78 / 0.2526 | 23.84 / 0.5223 |
| B100 | ×4 | 5 | 25.30 / 0.6015 | - | 25.46 / 0.6133 | 25.36 / 0.5991 | 25.53 / 0.6171 | 25.20 / 0.5857 | 25.72 / 0.6414 |
| B100 | ×4 | 10 | 23.23 / 0.4615 | - | 23.49 / 0.4800 | 23.01 / 0.4478 | 23.23 / 0.4690 | 22.69 / 0.4269 | 24.74 / 0.5815 |
| B100 | ×4 | 15 | 21.26 / 0.3562 | - | 21.56 / 0.3736 | 20.86 / 0.3378 | 20.95 / 0.3500 | 20.43 / 0.3161 | 23.89 / 0.5358 |
| B100 | ×4 | 20 | 19.56 / 0.2820 | - | 19.86 / 0.2966 | 19.06 / 0.2622 | 18.99 / 0.2669 | 18.52 / 0.2412 | 23.15 / 0.4980 |
Fig 5
Comparisons with various image super-resolution methods on “coastguard” from Set14 with upscaling factor ×2 (σ = 0, PSNR in dB).
The average reconstruction time on the Set5 test images was compared for σ = 10 (noise has little effect on the timing results). The experiments were conducted on the same computer, and the results are summarized in Table 4. The reconstruction time varies considerably with the upscaling factor. Our algorithm costs less than 4 s per image: it is far faster than SCSR and CSC, and comparable to NE and Zeyde. SCSR is the slowest method.
Table 4
Comparisons of average reconstruction time (s) on Set5.

| Scale | NE [22] | SCSR [25] | Zeyde [26] | A+ [31] | SRCNN [20] | CSC [32] | Ours |
|---|---|---|---|---|---|---|---|
| ×2 | 4.78 | 193.26 | 6.82 | 0.88 | 7.54 | 139.03 | 3.21 |
| ×3 | 2.78 | 44.31 | 3.01 | 0.57 | 7.47 | 78.46 | 1.24 |
| ×4 | 1.63 | - | 1.96 | 0.42 | 6.39 | 48.24 | 0.75 |
3.3 Effect of IBP
Combined with iterative back projection (IBP), the denoised LR patches are used to improve the SR performance of our algorithm. According to [47], IBP plays an important role in improving SR performance, but if the input is a noisy image, the IBP model propagates the noise into the HR image. Experimental results show that applying the IBP algorithm directly to the input LR image makes the performance worse. The results are listed in Table 5; the number of IBP iterations here is 20. From this comparison, the superiority of our method is evident. The other datasets, Set14 and B100, yield similar results, shown in S1 Table.
Table 5
Effect of IBP on average PSNR (dB) and SSIM (Set5). (× : without IBP; √ : IBP applied directly to the noisy input; each cell: PSNR / SSIM.)

| Scale | IBP | σ = 5 | σ = 10 | σ = 15 | σ = 20 |
|---|---|---|---|---|---|
| ×2 | × | 31.48 / 0.831 | 27.76 / 0.665 | 25.03 / 0.531 | 22.93 / 0.432 |
| ×2 | √ | 29.93 / 0.753 | 25.16 / 0.526 | 21.95 / 0.383 | 19.58 / 0.293 |
| ×2 | ours | 32.49 / 0.873 | 29.69 / 0.800 | 27.92 / 0.741 | 26.67 / 0.691 |
| ×3 | × | 29.19 / 0.801 | 26.59 / 0.660 | 24.33 / 0.537 | 22.47 / 0.442 |
| ×3 | √ | 28.39 / 0.730 | 24.52 / 0.523 | 21.62 / 0.385 | 19.39 / 0.296 |
| ×3 | ours | 29.72 / 0.828 | 27.74 / 0.756 | 26.24 / 0.693 | 25.07 / 0.638 |
| ×4 | × | 27.65 / 0.765 | 25.58 / 0.646 | 23.64 / 0.537 | 21.97 / 0.449 |
| ×4 | √ | 27.19 / 0.706 | 23.93 / 0.524 | 21.28 / 0.392 | 19.17 / 0.303 |
| ×4 | ours | 28.10 / 0.783 | 26.48 / 0.718 | 25.16 / 0.663 | 24.11 / 0.615 |
3.4 Effect of distance penalty
The distance penalty is applied to model the weights. To check its effect on SR performance, we run our method with and without the penalty on the Set5 database, for different values of γ. The results are shown in Fig 9. Our method with the distance penalty obtains clearly better performance. The other datasets, Set14 and B100, yield similar results, shown in S19–S24 Figs.
Fig 9
Effect of distance penalty on average PSNR (dB)(Set 5).
4 Conclusion

In this research, we proposed an algorithm for noisy image super-resolution based on sparse representation. For noisy inputs, most existing methods become less effective because they assume that the input LR image is noise-free. The proposed algorithm performs image super-resolution and denoising simultaneously. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. The core idea of the proposed algorithm is that an HR image patch is reconstructed through a weighted average of similar HR example patches. In particular, the atoms of a learned sparse dictionary, instead of example patches, are used to compute the weights and reconstruct the HR patch; this strategy reduces computation time and suppresses noise. In addition, LR example patches with their mean pixel values subtracted, rather than just their gradient features, are also used to learn the dictionary, which helps IBP further improve the SR performance. The experimental results show that our method achieves better noise robustness.
Supporting information

S1 Fig. γ versus average PSNR on Set14 (upscaling factor ×2). (TIF)
S2 Fig. γ versus average PSNR on Set14 (upscaling factor ×3). (TIF)
S3 Fig. γ versus average PSNR on Set14 (upscaling factor ×4). (TIF)
S4 Fig. γ versus average PSNR on B100 (upscaling factor ×2). (TIF)
S5 Fig. γ versus average PSNR on B100 (upscaling factor ×3). (TIF)
S6 Fig. γ versus average PSNR on B100 (upscaling factor ×4). (TIF)
S7 Fig. Dictionary size influence on average performance on Set14 (upscaling factor ×2). (TIF)
S8 Fig. Dictionary size influence on average performance on Set14 (upscaling factor ×3). (TIF)
S9 Fig. Dictionary size influence on average performance on Set14 (upscaling factor ×4). (TIF)
S10 Fig. Dictionary size influence on average performance on B100 (upscaling factor ×2). (TIF)
S11 Fig. Dictionary size influence on average performance on B100 (upscaling factor ×3). (TIF)
S12 Fig. Dictionary size influence on average performance on B100 (upscaling factor ×4). (TIF)
S13 Fig. Number of similar atoms influence on average performance on Set14 (upscaling factor ×2). (TIF)
S14 Fig. Number of similar atoms influence on average performance on Set14 (upscaling factor ×3). (TIF)
S15 Fig. Number of similar atoms influence on average performance on Set14 (upscaling factor ×4). (TIF)
S16 Fig. Number of similar atoms influence on average performance on B100 (upscaling factor ×2). (TIF)
S17 Fig. Number of similar atoms influence on average performance on B100 (upscaling factor ×3). (TIF)
S18 Fig. Number of similar atoms influence on average performance on B100 (upscaling factor ×4). (TIF)
S19 Fig. Effect of distance penalty on average PSNR (dB) on Set14 (upscaling factor ×2). (TIF)
S20 Fig. Effect of distance penalty on average PSNR (dB) on Set14 (upscaling factor ×3). (TIF)
S21 Fig. Effect of distance penalty on average PSNR (dB) on Set14 (upscaling factor ×4). (TIF)
S22 Fig. Effect of distance penalty on average PSNR (dB) on B100 (upscaling factor ×2). (TIF)
S23 Fig. Effect of distance penalty on average PSNR (dB) on B100 (upscaling factor ×3). (TIF)
S24 Fig. Effect of distance penalty on average PSNR (dB) on B100 (upscaling factor ×4). (TIF)
S1 Table. Effect of IBP on average PSNR (dB) and SSIM (Set14 and B100). (PDF)
Table 2
The results of average PSNR (dB) and SSIM on the Set5 dataset.