Mushtaq Ahmad Khan1, Wen Chen1, Asmat Ullah1, Lin Ji1. 1. State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Center for Numerical Simulation Software in Engineering and Sciences, College of Mechanics and Materials, Hohai University, Nanjing, Jiangsu 210098, P. R. China.
Abstract
Minimization functionals related to Euler's elastica energy have a broad range of applications in computer vision and image processing. This paper proposes a novel Euler's elastica and curvature-based variational model for the restoration of images corrupted by multiplicative noise. It combines the Euler's elastica curvature with a Weberized total variation (TV) regularization to obtain a novel Euler's elastica energy and TV-based minimization functional. This combination preserves edges while reducing the blocky effect in smooth regions. An implicit gradient descent scheme is applied to efficiently find the minimizer of the proposed functional. Experimental results demonstrate the effectiveness of the proposed model in visual improvement, as well as an increase in the peak signal-to-noise ratio, compared to PDE-based methods.
Random noise is usually introduced during the acquisition and manipulation of an image; image restoration is therefore a fundamental problem in image processing. Among restoration approaches, the variational model has been extremely successful in a wide variety of image restoration problems and remains one of the most active areas of research in mathematical image processing and computer vision. The problem includes additive noise removal and multiplicative noise removal. Multiplicative noise appears in various image processing applications, such as synthetic aperture radar (SAR), medical imaging, single-photon emission computed tomography, and positron emission tomography. The additive noise problem can be modeled as

f = g + η1, (1)
where f is the observed image, g is the original image, and η1 is the additive noise. In the literature, many effective numerical methods have been utilized to tackle such additive-noise de-noising models, for instance in [1-6]. In the last two decades, variational techniques have been widely studied and investigated for image processing tasks, some of which are related to image de-noising; for further details, see [7-14]. The multiplicative noise removal problem can be stated as

f = g η2, (2)
where f is the observed image, g is the original image, and η2 is the multiplicative noise. In this direction, researchers have used various total variation-based approaches to solve the multiplicative noise removal problem; the interested reader is referred to [15-20] for more details. The main advantage of TV-based regularization is that it preserves edges well, but applying this technique in the presence of noise leads to piecewise constant functions. Thus, the finer details in the original image may not be recovered satisfactorily, and ramps become stairs (piecewise constants), even though some efforts have been made to reduce this staircase effect; for further details, see [21-26].

Euler's elastica was first introduced as a prior curve model by Mumford [27] in computer vision. Euler's elastica curvature-based total variation regularization was then applied to image inpainting [28, 29], and useful results were obtained in image inpainting and restoration.

In this paper, we develop a new model with an improved regularization term, namely an Euler's elastica curvature-based TV regularizer, whose Euler-Lagrange equation is a fourth-order, nonlinear PDE. This model restores images well and substantially reduces the staircase effect while preserving edges and textures.

The rest of the paper is organized as follows. Section 2 reviews the literature on TV-based image processing and on Euler's elastica and curvature-based energies in image restoration and inpainting models. Section 3 discusses the Huang et al. [19] model for the removal of multiplicative noise. The proposed model, an Euler's elastica and curvature-based total variation model for multiplicative noise removal, is presented in Section 4. Section 5 presents the discretization and numerical implementation of the proposed model.
Section 6 describes experimental results comparing the two models in terms of the visual quality of restoration and PSNR. Section 7 compares our proposed model with two other variational models for image restoration. The sensitivity of the parameters of the proposed model is explained in Section 8. Section 9 concludes the paper. Finally, the derivation of the proposed Euler-Lagrange PDE is given in Appendix A.
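To make the multiplicative degradation model (2) concrete, the following sketch generates speckle noise with mean 1 and a prescribed variance, matching the uniform-distribution setting used in the experiments later in the paper. The function name and the NumPy implementation are our own illustration, not part of the model itself.

```python
import numpy as np

def add_speckle_noise(g, sigma2=0.1, seed=0):
    """Corrupt an image with multiplicative noise, f = g * eta, where eta is
    uniformly distributed with mean 1 and variance sigma2 (the noise setting
    used in the experiments of this paper)."""
    rng = np.random.default_rng(seed)
    a = np.sqrt(3.0 * sigma2)                # Var[U(1-a, 1+a)] = a**2 / 3
    eta = rng.uniform(1.0 - a, 1.0 + a, size=g.shape)
    return g * eta

g = np.full((64, 64), 100.0)                 # flat 64x64 test image
f = add_speckle_noise(g, sigma2=0.1)
```

Because eta has unit mean, the noisy image f keeps the same average intensity as g, unlike the additive model (1) where the noise shifts intensities directly.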
2 Background work
In the last few years, many approaches have been used to solve the multiplicative noise removal problem [18, 30, 31]. One of these approaches is the local linear minimum mean square error method [32-35]; another is the anisotropic diffusion method [36-38]. These strategies are based on the statistical information of images and noise, so we do not discuss them in detail; our focus in this paper is on variational methods.

The variational approach is an important paradigm for solving image de-noising problems when the image is defined on a continuous domain, and total variation is a powerful notion in such variational problems. In recent years, variational methods have received much attention for reducing multiplicative noise owing to the use of total variation (TV) and nonlocal total variation (NLTV) regularization [10, 39, 40]. Many total variation-based models have been proposed in the literature; see [41-46] for more details. Variational NLTV models have also been widely applied in image restoration [47, 48]. Steidl and Teuber [48] employed the NLTV as a regularizer to recover images corrupted by multiplicative noise; since the NLTV norm exploits relevant image patches, it gives good qualitative results.

Note that the TV functional TV(g) = ∫|∇g|dx does not take into account that our visual sensitivity to the regularity or local fluctuation |∇g| depends on the ambient intensity level g [49]. Since all images are eventually perceived and interpreted by the Human Visual System (HVS), many researchers have found that human vision psychology and psychophysics play a significant role in image processing. A classic example is the use of the Just Noticeable Difference (JND) model in image coding and watermarking techniques [50, 51], where the JND model is used to control the visually perceptible distortion during the coding procedure and watermark embedding. Weber's law was first described by Weber [52].
The law reveals the universal influence of the background stimulus g on a human's sensitivity to the intensity increment Δg, the so-called JND, in the perception of both sound and light:

Δg / g ≈ constant. (3)

According to Weber's law [49, 53], the higher the mean background intensity, the larger the just noticeable intensity increment. In [19], the authors proposed a nonconvex variational functional of model (2) for multiplicative noise removal:

min_{g>0} E(g) = β1 ∫Ω |∇g| dx + β2 ∫Ω (|∇g| / g) dx + ∫Ω (log g + f/g) dx, (4)
where the first two terms are regularization terms and the last term is the nonconvex fitting/fidelity term following the MAP estimator for multiplicative Gamma noise; β1 and β2 are two positive parameters, and f > 0 in L∞(Ω) is the given data. The first regularization term is the TV functional, which preserves important structures such as edges, an important visual cue in human and computer vision. The second term is the well-known Weberized TV regularization term. To briefly explain its role, assume that g has a gradient ∇g ∈ L1(Ω)2; then TV(log g) = ∫Ω (|∇g|/g) dx, and the Weberized local variation is |∇g|/g,
which encodes the influence of the background intensity according to Weber's law (3).

Recently, many variational models involving higher-order derivatives have been widely used in image processing because they reduce the staircase effect during noise elimination. Euler's elastica energy minimization, whose associated Euler-Lagrange PDE is fourth order, damps the high-frequency components of images faster than second-order PDE-based methods. Hence the Euler's elastica model can reduce the staircase effect, handle textures, and produce better approximations to natural images; indeed, it also preserves object edges while removing noise. The Euler's elastica model is one of the most famous high-order models and has been successfully applied to various problems, such as image de-noising, inpainting, and zooming. In [28], the authors derived the Euler-Lagrange equation, proposed numerical schemes to solve it, and explained the effect of the parameters α1 and α2 in image inpainting.

Euler's elastica can be described using the curvature κ of a smooth curve Γ as

E(Γ) = ∫Γ (α1 + α2 |κ|^p) ds, (6)
where s is the arc length and α1 and α2 are two positive parameters. In the functional (6), the first term minimizes the total length while the second term minimizes the power of the total curvature, where p can be set to p = 1, 2 as in [28, 54]; in this work, we set p = 2. The Euler's elastica of all the level curves of an image g can be defined as

E(g) = ∫−∞^{+∞} ( ∫_{γl} (α1 + α2 |κ|^p) ds ) dl, (7)
where γl represents the level curve with g = l. The curvature κ can be expressed as

κ = ∇ · (∇g / |∇g|). (8)

Combining Eqs (7) and (8) and using the co-area formula, we get

E(g) = ∫Ω (α1 + α2 |κ|^p) |∇g| dx. (9)

For image restoration applications, the Euler's elastica energy (9) can be used as a regularization term. The elastica model [28], minimizing the Euler's elastica energy for the image inpainting problem, is given by the minimization functional

min_g ∫Ω (α1 + α2 |κ|^p) |∇g| dx + (λ/2) ∫Ω (g − f)^2 dx, (10)
where α1 and α2 are arbitrary positive constants, λ > 0 is a penalty parameter, p = 2 is usually chosen, g = g(x, y) is the true image to be restored, and κ = ∇ · (∇g/|∇g|) is the curvature. The virtue of Eq (10) is that regularization with the Euler's elastica energy penalizes the integral of the square of the curvature along edges instead of only penalizing the length of the edges, as the TV model does (taking α2 = 0) [10]. Motivated by the applications of Euler's elastica in image inpainting and restoration and of Weberized TV regularization in image restoration, we propose a new model based on Euler's elastica energy and Weberized TV regularization for the multiplicative noise removal problem.
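The curvature (8) and the elastica energy (9) can be approximated on a pixel grid with standard central differences. The following is a minimal NumPy sketch; the ϵ-regularization of |∇g| and the helper names are our own choices for illustration, not the discretization used in Section 5.

```python
import numpy as np

def curvature(g, eps=1e-8):
    """kappa = div(grad g / |grad g|), Eq (8), via central differences.
    eps regularizes the norm to avoid division by zero in flat regions."""
    gy, gx = np.gradient(g)                  # axis 0 = y, axis 1 = x
    norm = np.sqrt(gx**2 + gy**2 + eps)
    nx, ny = gx / norm, gy / norm            # unit normal field
    nxx = np.gradient(nx, axis=1)            # d(nx)/dx
    nyy = np.gradient(ny, axis=0)            # d(ny)/dy
    return nxx + nyy

def elastica_energy(g, a1=1.0, a2=1.0, p=2):
    """Discrete Euler's elastica energy, Eq (9):
    sum over pixels of (a1 + a2*|kappa|**p) * |grad g|."""
    gy, gx = np.gradient(g)
    grad_norm = np.sqrt(gx**2 + gy**2)
    return float(np.sum((a1 + a2 * np.abs(curvature(g))**p) * grad_norm))
```

For a linear ramp g(x, y) = x the level lines are straight, so the curvature vanishes and the energy reduces to a1 times the total variation, which matches the α2 = 0 remark above.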
3 Li-Li Huang model (M1)
Li-Li Huang et al. proposed a nonconvex multiplicative noise removal model using total variation (TV) regularization in [19], which achieved useful restoration results. The minimization functional of this approach for model (2), given in (4), is

min_{g>0} E(g) = β1 ∫Ω |∇g| dx + β2 ∫Ω (|∇g| / g) dx + ∫Ω (log g + f/g) dx, (11)

where f > 0 in L∞(Ω) is the given data. In (11), the first term is the TV functional, which preserves important structures such as edges, an important cue in human and computer vision, and its second term

β2 ∫Ω (|∇g| / g) dx (12)
is called the Weberized TV regularization term [53]. Suppose that g has a gradient ∇g ∈ L1(Ω)2; then Eq (12) can be re-written as

TVw(g) = ∫Ω W(g) dx, (13)

with

W(g) = |∇g| / g, (14)

where W(g) is the Weberized local variation, which encodes the influence of the background intensity according to Weber's law [53]. The corresponding Euler-Lagrange equation of the minimization functional (11) is

−β1 div(∇g / |∇g|) − β2 div(∇g / (g|∇g|)) − β2 |∇g|/g² + (g − f)/g² = 0. (15)

Since g > 0, Eq (15) can be multiplied by g² and rearranged into the equivalent forms (16) and (17). Defining the auxiliary function as in [19], Eq (17) implies the evolution equation (18) with Neumann boundary conditions. The additive operator splitting (AOS) method [19, 55] has been utilized to solve (18).
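To illustrate how AOS turns one implicit 2-D step into two cheap 1-D tridiagonal solves, here is a sketch for plain linear diffusion g_t = Δg with Neumann boundary conditions. The actual scheme in [19, 55] uses the nonlinear diffusivity of (18), so this is only a structural illustration; the Thomas solver and all names are our own.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
    c = super-diagonal, d = right-hand side."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def aos_step(g, tau):
    """One AOS step for linear diffusion g_t = g_xx + g_yy:
    g_new = 0.5 * [(I - 2*tau*A_x)^(-1) + (I - 2*tau*A_y)^(-1)] g,
    where A_x, A_y are 1-D Laplacians with Neumann boundaries."""
    def solve_rows(u):
        n = u.shape[1]
        a = np.full(n, -2.0 * tau)
        b = np.full(n, 1.0 + 4.0 * tau)
        c = np.full(n, -2.0 * tau)
        b[0] = b[-1] = 1.0 + 2.0 * tau   # one-sided (Neumann) end stencils
        a[0], c[-1] = 0.0, 0.0
        out = np.empty_like(u)
        for i in range(u.shape[0]):
            out[i] = thomas(a, b, c, u[i])
        return out
    return 0.5 * (solve_rows(g) + solve_rows(g.T).T)
```

Because each 1-D solve is implicit, the step is stable for any tau, which is the practical appeal of AOS over explicit time stepping.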
4 The proposed model (M2)
In this section, we introduce a new model using both the Euler's elastica curvature energy and the Weberized total variation (TV) norm as regularizers. The model exploits the advantages of both: the Euler's elastica energy yields denoised images with smoothly connected level curves, which leads to good restoration performance (PSNR), elimination of the staircase effect, and preservation of textures, while the Weberized total variation preserves jump discontinuities and sharpens the edges. The result is believed to be a better approximation to a natural image than a piecewise constant approximation in smooth regions; accordingly, a consistent improvement in PSNR values is obtained with this model.

The minimization approach of Huang et al. [19] for model (2) can be written as

min_{g>0} β1 ∫Ω R(g) dx + β2 ∫Ω (R(g)/g) dx + ∫Ω (log g + f/g) dx, (19)
where R(g) = |∇g|. The Euler's elastica and curvature-based inpainting model [28], on the other hand, uses the elastica regularization term

∫Ω (α1 + α2 κ²) |∇g| dx, (21)

where α1 and α2 are the parameters and

κ = ∇ · (∇g / |∇g|) (22)

is the curvature. Combining Eqs (19) and (21), i.e., replacing the TV density R(g) in (19) by the elastica density (α1 + α2 κ²)|∇g|, the following minimization functional is obtained:

min_{g>0} E(g) = β1 ∫Ω (α1 + α2 κ²) |∇g| dx + β2 ∫Ω (α1 + α2 κ²) (|∇g|/g) dx + ∫Ω (log g + f/g) dx. (23)
Hence, (23) is a new high-order, curvature-based total variation minimization functional for the multiplicative noise removal problem. The first part is the regularization term and the last integral is the nonconvex data-fitting term; β1, β2, α1, and α2 are the four regularization parameters, which usually depend on the noise level and the type of image.

The corresponding Euler-Lagrange equation of (23), derived in Appendix A, is the fourth-order nonlinear PDE (26), with Neumann boundary conditions ∇g · n = 0 on ∂Ω. In (28), ∇g is the normal vector and ∇⊥g = (−g_y, g_x) is the corresponding tangent vector. Eq (26) can be written in divergence form with the flux field U = (U1, U2), whose components U1 and U2 collect the curvature-dependent terms of the regularizer.
5 Numerical implementation
To discretize Eq (26), which is a fourth-order PDE, we use the finite difference method discussed in [56-58]; we include the details for completeness. We use the notation g_{i,j} = g(iΔx, jΔy), where i, j are integers, Δx and Δy are the space step sizes along the x and y directions, respectively, and (iΔx, jΔy) represents a discrete grid point. At (i, j), Eq (32) is discretized via Eq (33), which is then used to approximate each term of the PDE as follows.

Curvature terms. These are approximated by the min-mod of the values at two adjacent whole pixels.

Partial derivatives in x. These are approximated by the central difference of two adjacent whole points.

Partial derivatives in y. These are approximated by the min-mod of ∂y at two adjacent whole points.

The remaining approximations follow from Eq (33) in a similar way, together with the Neumann boundary conditions. Eq (26) is evolved to its steady state, Eq (50), which we approximate at (i, j) using (33), yielding the semi-discrete system (51). Since the implicit gradient descent scheme has achieved good results in [14], we utilize this scheme to solve (51); the fully discrete scheme is given by (52),
where gⁿ denotes the value of g at time n dt, dt is the time step size, and Δx = Δy.
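As a simplified illustration of the time-marching used here, the sketch below evolves only the second-order (TV) part of the descent PDE together with the Gamma-noise fidelity term, using an explicit step instead of the implicit scheme of the paper; the omitted fourth-order elastica terms, the parameter values, and the helper names are our own simplifications.

```python
import numpy as np

def denoise_tv_gamma(f, beta1=1.0, dt=0.05, n_iter=100, eps=1e-6):
    """Explicit time-marching for a simplified, second-order version of the
    descent PDE:  g_t = beta1 * div(grad g / |grad g|) - (g - f)/g**2,
    with Neumann boundary conditions. The paper's scheme is implicit and
    includes the fourth-order elastica terms omitted here."""
    g = f.astype(float).copy()
    for _ in range(n_iter):
        gp = np.pad(g, 1, mode='edge')                 # Neumann boundaries
        gx = (gp[1:-1, 2:] - gp[1:-1, :-2]) / 2.0      # central differences
        gy = (gp[2:, 1:-1] - gp[:-2, 1:-1]) / 2.0
        norm = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / norm, gy / norm                  # unit normal field
        pxp = np.pad(px, 1, mode='edge')
        pyp = np.pad(py, 1, mode='edge')
        div = ((pxp[1:-1, 2:] - pxp[1:-1, :-2]) / 2.0
               + (pyp[2:, 1:-1] - pyp[:-2, 1:-1]) / 2.0)
        g = g + dt * (beta1 * div - (g - f) / np.maximum(g, eps)**2)
    return g
```

Run on a flat image with a single bright outlier, the curvature term shrinks the spike while leaving the flat background essentially untouched, which is the qualitative behavior the full fourth-order scheme also exhibits.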
6 Numerical experiments
In this section, we provide some numerical results obtained by applying our proposed model M2 and compare them with the results obtained using model M1. The test images used for the numerical experiments are “Moon”, “Rose”, “Synthetic1”, “Synthetic2”, “Synthetic3”, “Synthetic4”, “Lena”, “Boat”, “House”, “Peppers”, “Baboon”, and “Texture”, shown in Fig 1. We corrupt these images with speckle noise (uniform distribution) with mean value 1 and variance σ².
Peak signal-to-noise ratio (PSNR) is used to analyze the quality of the restored images. We therefore compare the restoration quality of the two models by

PSNR = 10 log10( 255² / ( (1/MN) Σ_{i,j} (g_{i,j} − g0_{i,j})² ) ), (53)
where g0 is the given clean image, g is the restored image, and M × N is the size of the image.

Example 1: In this first example, the two methods M1 and M2 are applied to the natural images “Moon” and “Rose”, corrupted by speckle noise with variance σ² = 0.1; the results are shown in Figs 2 and 3. In both figures, (a) and (b) are the original and noisy images, while (c) and (d) depict the images restored by methods M1 and M2, respectively. In each case, the visual quality of the restoration by the proposed method M2 is clearly better than that of method M1. The PSNR values of the two images “Moon” and “Rose” for methods M1 and M2 are listed in Table 1; the bigger the PSNR value, the better the de-noising performance. Table 1 shows that the PSNR values of method M2 are greater than those of method M1 for both images, confirming the better restoration performance of M2 over M1. Hence, the results in Figs 2 and 3 and Table 1 show that our proposed method M2 improves the visual quality of the restored images and the PSNR significantly over M1. The optimal experimental parameter values (β1, β2, α1, α2) of our technique M2 for the two images “Moon” and “Rose” are (0.003, 0.05, 0.40, 1.38) and (0.007, 0.01, 0.37, 1.41), respectively. In this example, we also set △x = △y = 10 and dt = 0.0002.
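The PSNR used throughout the comparisons can be computed with a few lines; a minimal sketch assuming 8-bit images (peak value 255), with a function name of our own choosing:

```python
import numpy as np

def psnr(g_restored, g_true, peak=255.0):
    """PSNR = 10 * log10(peak**2 / MSE) over an M x N image, in dB."""
    diff = np.asarray(g_restored, dtype=float) - np.asarray(g_true, dtype=float)
    mse = np.mean(diff**2)
    return 10.0 * np.log10(peak**2 / mse)
```

For example, a uniform error of 10 gray levels gives an MSE of 100 and hence a PSNR of about 28.13 dB, comparable to the values reported in Table 1.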
Fig 2
Restored results on real image Moon; (a) Original image; (b) Noisy image with σ2 = 0.1; (c) Restored image by model M1; (d) Restored image by model M2.
Fig 3
De-noised results on real image Rose; (a) Original image; (b) Noisy image with σ2 = 0.1; (c) Recovered image by model M1; (d) Recovered image by model M2.
Table 1
Comparison of models M1 and M2 in terms of PSNR.
Name         Size    PSNR (M1)    PSNR (M2)
Moon         300²    25.31        26.23
Rose         300²    29.23        30.30
Synthetic1   300²    24.93        26.01
Synthetic2   300²    27.09        27.96
Synthetic3   300²    28.25        29.22
Synthetic4   300²    25.73        26.60
Lena         256²    23.52        24.39
Example 2: In this example, the techniques M1 and M2 are tested on the synthetic images “Synthetic1”, “Synthetic2”, “Synthetic3”, and “Synthetic4”, each corrupted with speckle noise of variance σ² = 0.1. The results are shown in Figs 4, 5, 6 and 7, respectively. From these figures and Table 1, we can observe that the restoration results of the Euler's elastica and curvature-based technique M2 are better than those of the TV-based technique M1 regarding visual quality and PSNR values. The optimal experimental parameter values (β1, β2, α1, α2) of our technique M2 for the images “Synthetic1”, “Synthetic2”, “Synthetic3”, and “Synthetic4” were (0.001, 0.06, 0.34, 1.56), (0.004, 0.03, 0.36, 1.53), (0.002, 0.05, 0.33, 1.54), and (0.004, 0.04, 0.38, 1.30), respectively. In this example, we again set △x = △y = 10 and dt = 0.0002.
Fig 4
Reconstructed results on synthetic image Synthetic1; (a) Original image; (b) Noisy image with σ2 = 0.1; (c) Restored image by model M1; (d) Restored image by model M2.
Fig 5
Restored results on synthetic image Synthetic2; (a) Original image; (b) Noisy image with σ2 = 0.1; (c) De-noised image by model M1; (d) De-noised image by model M2.
Fig 6
De-noised results on synthetic image Synthetic3; (a) Original image; (b) Noisy image with σ2 = 0.1; (c) Restored image by model M1; (d) Restored image by model M2.
Fig 7
Reconstructed results on synthetic image Synthetic4; (a) Original image; (b) Noisy image with σ2 = 0.1; (c) Restored image by model M1; (d) Restored image by model M2.
Example 3: Here, the 256² image “Lena” is taken as the test image. The main idea of our proposed model is to use the Euler's elastica curvature and the Weberized TV norm to recover both jumps and smooth signals accurately. From Figs 8, 9 and 10 and Table 1, we can observe that M2 yields better image restoration results than M1, since it preserves edges and reduces the staircase effect. Fig 8(a) indicates two rectangular regions of interest; Figs 9 and 10, respectively, show them zoomed in, to clearly compare our Euler's elastica curvature and Weberized TV-based model M2 with the TV-based model M1. The restoration results indicate that our model M2 yields better recoveries (Figs 9(d) and 10(d)) than model M1 (Figs 9(c) and 10(c)), since it preserves edges while also reducing the staircase effect.
Fig 8
Restored results on real image Lena (a) Original image; (b) Noisy image with σ2 = 0.05; (c) Obtained image by model M1; (d) Obtained image by model M2.
Fig 9
Lena with the region of interest; (a) True image; (b) Noisy image; (c) De-noised with model M1; (d) De-noised with model M2.
Fig 10
Lena with the region of interest; (a) Original image; (b) Noisy image; (c) Restored image by model M1; (d) Restored image by model M2.
Example 4: In this example, a texture image of size 256² is taken as the test image. Fig 11 illustrates that our model M2 preserves the texture regions better than model M1, which again shows the better restoration performance of our model M2; compare Fig 11(c) and 11(d).
Fig 11
Reconstructed results on synthetic texture image; (a) Original image; (b) Noisy image; (c) De-noised image by model M1; (d) De-noised image by model M2.
Example 5: In this example, the preservation (or loss) of homogeneity under the two techniques M1 and M2 is examined on “Lena”. For this purpose, a line of the original image is compared with the corresponding lines of the noisy and restored images, as shown in Fig 12. It is clear that the line restored by the proposed method M2 (Fig 12(c)) is far better than that obtained by method M1 (Fig 12(b)).
Fig 12
Comparison along line 62 of the original image of “Lena” with the noisy image, the image restored by model M1, and the image restored by method M2; (a) Original and noisy image lines; (b) Original and M1-restored image lines; (c) Original and M2-restored image lines.
The blue line is the original image, and the red line is the restored image.
Example 6: In this example, we test our proposed model M2 for different values of the time step dt on a real image, with the parameter values (β1, β2, α1, α2) fixed at (0.007, 0.093, 0.23, 1.29). From Fig 13 and Table 2, we can see that different values of the time step dt affect the restoration quality (PSNR) of our proposed model M2.
Fig 13
Experimental results on Baboon; (a) Original image; (b) Baboon image corrupted with multiplicative noise with σ2 = 0.1; (c) Obtained image by optimal value of dt = 0.0002; (d) Obtained image by decreased dt = 0.00002; (e) Obtained image by increased dt = 0.002; (f) Obtained image by increased dt = 0.02.
Table 2
Comparison of the image quality (PSNR values) for increased and decreased values of the time step dt against the optimal time step of the proposed method M2 for a real image.
Image    Size    Time step dt                      PSNR
Baboon   300²    dt = 0.0002 (optimal value)       27.43
Baboon   300²    dt = 0.00002 (decreased value)    27.23
Baboon   300²    dt = 0.002 (increased value)      26.69
Baboon   300²    dt = 0.02 (increased value)       25.93
Therefore, it is reasonable to conclude that the Euler's elastica curvature-based model M2 performs best in the sense that it produces piecewise smooth intensities and sharp edges and reduces the staircase effect better than model M1.
7 Comparison with other methods
7.1 Multiplicative noise removal based on the linear alternating direction method for a hybrid variational method (M3)
Yu Hao et al. proposed a new hybrid variational model and method in 2017 [59], based on variable splitting, for multiplicative noise removal, as follows:
where ξ = log(f) and v is a vector field associated with the image ξ; λ > 0, α > 0, and β > 0 are the regularization parameters. Here, f0 is the observed image and f is the original image. The alternating direction method has been used to solve Eq (54), which is split into the two minimization subproblems (55) and (56). For the solution of subproblem (55), the authors used Chambolle's dual method, given as follows:
where div q = (div p1, div p2), p1 = (q11, q12), and p2 = (q21, q22), and first-order forward and backward differences are used for the discrete gradient and divergence operators, respectively. The authors then solved for p1 and p2 by a fixed-point iteration, with l = 1. For the solution of subproblem (56), the authors used the Bregman method: letting η = ∇ξ leads to an unconstrained problem in which μ > 0 is a penalty parameter. The split Bregman algorithm is defined by the alternating updates (62), where T represents the thresholding (shrinkage) operator; the solution of the first subproblem of Eq (62) is obtained in closed form in (66).

Algorithm 1: Algorithm for method M3
1. Initialization: v0 = 0, ξ0 = log f0, η0 = 0, and b0 = 0.
2. Compute v by (57).
3. Compute ξ by (66).
4. Compute η by the second formula of (62).
5. Update the Bregman variable b.
6. Repeat steps 2-5 until the stopping condition is satisfied; then set f = exp(ξ).

For more information, see [59].

In this subsection, we compare method M3 with our proposed method M2 on the same images, with the same sizes, noise variances, and parameter values as selected in [59]. The results obtained by our proposed method M2 are clearly better in the visual quality of restoration (PSNR), in eliminating the staircase effect, and in preserving textures; they are shown in Figs 14 and 15 and Table 3. The optimal parameter values (β1, β2, α1, α2) of our proposed method M2 for the two images “Boat” and “House” are (0.01, 0.008, 0.23, 1.05) and (0.01, 0.0062, 0.17, 1.04), respectively. In this case, we choose △x = △y = 10 and dt = 0.0002.
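The thresholding operator T used in the split Bregman update is, in most split Bregman formulations, plain soft-thresholding; the exact formula was lost from the text above, so the following is a standard sketch rather than a reproduction of the authors' definition.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: T_t(x) = sign(x) * max(|x| - t, 0), the
    closed-form minimizer of  t*|y| + 0.5*(y - x)**2  over y."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Applied componentwise, it pulls every value toward zero by t and clips small values to exactly zero, which is what makes the η-update of a split Bregman iteration cheap.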
Fig 14
Restoration results on Boat; (a) Original image; (b) Noisy image with multiplicative noise L = 3; (c) De-noised image by method M3; (d) Zoomed-in part of the de-noised image by method M3; (e) De-noised image by our method M2; (f) Zoomed-in part of the de-noised image by our method M2.
Fig 15
Restoration results on House; (a) Original image; (b) Noisy image with multiplicative noise L = 5; (c) Obtained image by method M3; (d) Zoomed-in part of the obtained image by method M3; (e) Obtained image by our method M2; (f) Zoomed-in part of the obtained image by our method M2.
Table 3
Comparison of method M3 and our method M2 in terms of PSNR.
Image   Size    Method M3 (PSNR)    Our method M2 (PSNR)
Boat    256²    22.48               23.93
House   256²    25.47               26.61
7.2 Multiplicative noise removal combining a total variation regularizer and a nonconvex regularizer (M4)
Y. Han et al. proposed a new variational model in 2014 [60], with which some good restoration results have been obtained. The minimization functional for this model is given as follows:
where the second term, ∫(ϵ + (1 − ϵ)b²)|Du|, is called the weighted total variation term, and ϵ is a positive parameter with 0 < ϵ < 1. This second term is finite; however, u ∈ BV(Ω) does not imply that ∫b²|Du| is finite. Hence, when ∫b²|Du| is infinite, ∫(ϵ + (1 − ϵ)b²)|Du| cannot be decomposed into the sum of ϵ∫|Du| and (1 − ϵ)∫b²|Du|. The authors applied an alternating iteration process directly to (67), which comprises two minimization problems: first, with b fixed, they solve (68); second, with u fixed, they solve (69). This alternating iteration for multiplicative noise removal is summarized in Algorithm 2. The only challenging issue is to minimize the functional (70) in Algorithm 2, i.e., the general functional in the minimization problem (68). For this purpose, the authors rewrite the minimization problem (68) equivalently as a constrained problem (73), subject to u = v, in order to construct a more efficient solver.

Algorithm 2: Algorithm for model M4
Initialization: let u0 = log z, set the iteration index k = 0, and fix the parameters λ, ϵ (0 < ϵ < 1), α, and σ.
Iteration: alternately solve the subproblems (68) and (69).
Termination: if successive iterates satisfy ∫Ω |u^{k+1} − u^k| dx < |Ω|τ (τ is a fixed threshold), stop the whole process and output the result; otherwise, set k = k + 1 and return to the iteration step.

The authors then used the alternating direction method of multipliers (ADMM) to solve the constrained minimization problem (73), which leads to the iteration in Algorithm 3.

Algorithm 3: ADMM for the minimization functional (73)
Initialization: given the parameters λ and μ, set the iteration index n = 0, and let u0 = v0, d0 = 0.
Iteration: update u by (74), v by (75), and the multiplier d.
Termination: once the sequences {u^n} and {v^n} converge as n → ∞, stop the algorithm; otherwise, set n = n + 1 and return to the iteration step.

The Euler-Lagrange equation of the functional in the minimization problem (74) is (77), which is solved with a few Newton iterations. The solution of (75) is obtained by the following Chambolle's projection algorithm:
where q: Ω → R × R is a vector function and div denotes the divergence operator. The function q is the limit of the sequence q^m (the inner index m differs from the outer loop index n) generated from the following fixed-point iteration: given an initial q0 and a time step ϱ, the iterative scheme (78) is used. According to the authors, due to the nonconvex nature of minimization problem (69), it is hard to prove global convergence of the whole procedure in Algorithm 2; for further details, see [60].

Here, the model M4 proposed in [60] is compared with our proposed method M2 on the same images, with the same sizes, noise variances, and parameter values as selected in [60]. From Figs 16 and 17 and Table 4, we can observe that our proposed technique M2 performs better than model M4 in the visual quality of restoration (PSNR), in reducing the staircase effect, and in preserving textures. The parameter values (β1, β2, α1, α2) selected for our proposed model M2 for the two images “Peppers” and “Lena” are (0.01, 0.0085, 0.24, 1.05) and (0.02, 0.005, 0.20, 1.02), respectively. In this case, we select △x = △y = 10 and dt = 0.0002.
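Chambolle's projection algorithm referenced above has the following standard form for a TV subproblem of the type min_u ||u − f||²/(2λ) + TV(u). This is the generic algorithm from the literature with our own discrete gradient and divergence helpers, not the authors' exact implementation.

```python
import numpy as np

def grad(u):
    """Forward-difference discrete gradient (zero at the far boundary)."""
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]
    d[:, 1:] += px[:, 1:] - px[:, :-1]
    d[0, :] += py[0, :]
    d[1:, :] += py[1:, :] - py[:-1, :]
    return d

def chambolle_tv(f, lam=10.0, rho=0.125, n_iter=100):
    """Chambolle's fixed-point projection for min_u ||u-f||^2/(2*lam) + TV(u).
    rho <= 1/8 guarantees convergence of the dual iteration."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + rho * gx) / (1.0 + rho * norm)
        py = (py + rho * gy) / (1.0 + rho * norm)
    return f - lam * div(px, py)
```

The iteration acts entirely on the dual field (px, py); the primal solution is recovered at the end as f minus λ times the divergence of the converged dual variable.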
Fig 16
Restoration results on Peppers; (a) Original image; (b) Noisy image with multiplicative noise L = 3; (c) Recovered image by method M4; (d) Zoomed-in part of the recovered image by method M4; (e) Recovered image by our method M2; (f) Zoomed-in part of the recovered image by our method M2.
Fig 17
De-noised results on Lena; (a) Original image; (b) Noisy image with multiplicative noise L = 15; (c) Reconstructed image by method M4; (d) Zoomed-in part of the reconstructed image by method M4; (e) Reconstructed image by our method M2; (f) Zoomed-in part of the reconstructed image by our method M2.
Table 4
Comparison of method M4 and our method M2 in terms of PSNR.
Image     Size    Method M4 (PSNR)    Our method M2 (PSNR)
Peppers   256²    22.62               23.79
Lena      256²    26.99               28.05
8 Sensitivity analysis of parameters
To comment briefly on the choice of the various parameters used in the algorithm of our model M2: in our experience, β1, β2, α1 and α2 are the most complicated to choose. The difficulty in tuning them is that they depend not only on the noise level but also on the type of image; their optimal values are therefore adjusted according to the noise variance and the image. We observed the following admissible ranges for natural and synthetic images at noise variance σ² = 0.1: β1 ∈ [0.0025, 0.0097], β2 ∈ [0.0082, 0.074], α1 ∈ [0.29, 0.50], and α2 ∈ [1.22, 1.61]. Using these ranges, better restoration results with improved PSNR could be achieved. Results are presented in Tables 5 and 6.
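The parameter settings reported in Tables 5 and 6 are obtained by scaling the optimal values for “Moon”, (0.003, 0.05, 0.40, 1.38), by ±40% and ±70%. A small sketch (the function name is ours) reproduces those scaled values:

```python
# Optimal parameters for the "Moon" image: (beta1, beta2, alpha1, alpha2).
base = {"beta1": 0.003, "beta2": 0.05, "alpha1": 0.40, "alpha2": 1.38}

def scaled_params(params, pct):
    """Scale every parameter by (1 + pct/100), as done in Tables 5 and 6."""
    return {k: round(v * (1.0 + pct / 100.0), 6) for k, v in params.items()}

for pct in (40, 70, -40, -70):
    print(pct, scaled_params(base, pct))
```

For instance, a 40% increase turns β1 = 0.003 into 0.0042 and α2 = 1.38 into 1.932, exactly the first row of Table 5.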
Table 5
The PSNR value of the restored image “Moon” with optimal values of β1, β2, α1 and α2 is 26.23.
Parameter sensitivity analysis of our proposed model M2 for percentage increases in the values of the parameters β1, β2, α1 and α2, with the resulting percentage increase or decrease in PSNR of the de-noised image of size 300².
         40% (↑)                                        70% (↑)
Image    β1       β2       α1       α2       PSNR       β1       β2       α1       α2       PSNR
Moon     0.0042   0.0700   0.5600   1.9320   2.38(↓)    0.0051   0.0850   0.6800   2.3460   3.62(↓)
Table 6
The PSNR value of the restored image “Moon” with optimal values of β1, β2, α1 and α2 is 26.23.
Parameter sensitivity analysis of our proposed model M2 for percentage decreases in the values of the parameters β1, β2, α1 and α2, with the resulting percentage increase or decrease in PSNR of the de-noised image of size 300².
         40% (↓)                                        70% (↓)
Image    β1       β2       α1       α2       PSNR       β1       β2       α1       α2       PSNR
Moon     0.0018   0.0300   0.2400   0.8280   3.15(↓)    0.0009   0.0150   0.1200   0.4140   4.21(↓)
For brevity, the following notation is used in these tables: a percentage increase is denoted by ↑ and a percentage decrease by ↓. For example, (0.50)↓ stands for a 0.50% decrease in PSNR, and (0.30)↑ stands for a 0.30% increase in PSNR.
9 Conclusion
In this paper, a new high-order model was introduced using Euler's elastica curvature combined with total variation regularization for the restoration of images corrupted by speckle noise. The implicit gradient descent scheme was exploited to solve the nonlinear PDE arising from the minimization of the proposed functional. The experimental results demonstrated that the proposed model improves the PSNR and can preserve edges and textures and reduce the staircase effect compared with existing PDE-based models. The sensitivity of the parameters was also discussed in detail, and our model was compared with other variational PDE-based models for image restoration at large noise variance.

In our model, the resulting Euler-Lagrange equation contains fourth-order derivatives and is anisotropic and highly nonlinear, so conventional algorithms struggle to solve it efficiently due to stability restrictions. This problem is under intense study, and results will be reported in a subsequent paper.
Appendix A
To derive the Euler-Lagrange Eq (26), we use the first variation (optimality condition) [28, 56, 57] of the proposed functional; that is, we split the energy into three parts, E1(g), E2(g), and E3(g), and compute the first variation of each.

For E1(g): integrating by parts and using ∇g · n = 0 on ∂Ω, Eq (87) reduces to (88).

For E2(g): from Eq (90) we obtain (91), (92), and (93). From (92) and (95) we obtain the term t ⊗ t, the orthogonal projection onto the tangent direction, with I the identity projection. Using (2κ(|∇g|)) · n = 0 on ∂Ω, (97) becomes (99); since t ⊗ t is symmetric, (99) can be rewritten as (100), and with the boundary condition, (101) reduces to (102). From (93), using ∇g · n = 0 on ∂Ω, (104) becomes (105). Combining Eqs (91), (102), and (105) gives (106).

For E3(g): the variation follows directly from Eq (83) and gives (109).

Finally, combining (80), (88), (106), and (109) and simplifying, we obtain (112), the required Euler-Lagrange equation, which implies (113), where U = (U1, U2), with components U1 and U2 as given in Section 4.