
A smoothing inexact Newton method for variational inequalities with nonlinear constraints.

Zhili Ge, Qin Ni, Xin Zhang.

Abstract

In this paper, we propose a smoothing inexact Newton method for solving variational inequalities with nonlinear constraints. Based on the smoothed Fischer-Burmeister function, the variational inequality problem is reformulated as a system of parameterized smooth equations. The corresponding linear system of each iteration is solved approximately. Under some mild conditions, we establish the global and local quadratic convergence. Some numerical results show that the method is effective.


Keywords:  global convergence; inexact Newton method; local quadratic convergence; nonlinear constraints; variational inequalities

Year:  2017        PMID: 28751825      PMCID: PMC5504264          DOI: 10.1186/s13660-017-1433-9

Source DB:  PubMed          Journal:  J Inequal Appl        ISSN: 1025-5834            Impact factor:   2.491


Introduction

We consider the variational inequality problem (VI for abbreviation), which is to find a vector x* ∈ Ω such that

    F(x*)ᵀ(x − x*) ≥ 0  for all x ∈ Ω,   (1)

where Ω is a nonempty, closed and convex subset of ℝⁿ and F is a continuously differentiable mapping from ℝⁿ into ℝⁿ. In this paper, without loss of generality, we assume that

    Ω = {x ∈ ℝⁿ : g(x) ≥ 0},   (2)

where g = (g_1, …, g_m)ᵀ and g_i, i = 1, …, m, are twice continuously differentiable concave functions. When Ω = ℝⁿ₊, VI reduces to the nonlinear complementarity problem (NCP for abbreviation): find x ≥ 0 such that F(x) ≥ 0 and xᵀF(x) = 0.

Variational inequalities have important applications in mathematical programming, economics, signal processing, transportation and structural analysis [1-3], and various numerical methods for them have been studied by many researchers; see, e.g., [4]. A popular way to solve the VI is to reformulate (1) as a nonsmooth equation via the KKT system of the variational inequality and an NCP-function. It is well known that the KKT system of (1) and (2) can be given as follows:

    F(x) − ∇g(x)λ = 0,  λ ≥ 0,  g(x) ≥ 0,  λᵀg(x) = 0,

and an NCP-function φ is defined by the following condition:

    φ(a, b) = 0  ⟺  a ≥ 0, b ≥ 0, ab = 0.

Applying φ componentwise to the pairs (λ_i, g_i(x)) in the KKT system, problem (1) and (2) is equivalent to the nonsmooth equation (6). Hence, problem (1) and (2) can be translated into (6).

The smoothing method is a fundamental approach to solving the nonsmooth equation (6). Recently, there has been strong interest in smoothing Newton methods for solving NCP [5-12]. The idea of this class of methods is to construct a smooth function to approximate φ, and in the past few years many different smoothing functions have been employed to smooth equation (6). Here, we define the smooth mapping H(μ, w) in (7) by replacing each component φ(λ_i, g_i(x)) of (6) with the symmetric perturbed Fischer-Burmeister function (9), which depends on a smoothing parameter μ > 0. It follows from equations (4)-(9) that, for μ = 0, the system H(μ, w) = 0 is equivalent to (6), and its solution is the solution of (6). Thus, we may solve the system of smoothing equations H(μ, w) = 0 and reduce μ to zero gradually while iteratively solving the equation.

Based on the above symmetric perturbed Fischer-Burmeister function (9), Chen et al. [13] proposed the first globally and superlinearly convergent smoothing Newton method; they dealt with general box constrained variational inequalities. Rui et al. [14] proposed an inexact Newton-GMRES method for a large-scale variational inequality problem under the assumption of linear inequality constraints. In practice, variational inequalities with nonlinear constraints are more attractive; such problems have wide applications in economic networks [15], image restoration [3, 16] and so on. In this paper, within the framework of smoothing Newton methods, we propose a new inexact Newton method for solving VI with nonlinear constraints, which extends the scope of admissible constraints. We also prove global and local quadratic convergence and present numerical results which show the efficiency of the proposed method.

Throughout this paper, we always assume that the solution set of problem (1) and (2) is nonempty. ℝ₊ and ℝ₊₊ denote the nonnegative and positive real sets, and ‖·‖ stands for the 2-norm.

The rest of this paper is organized as follows. In Section 2, we summarize some useful properties and definitions. In Section 3, we describe the inexact Newton method formally and then prove its local quadratic convergence. We give global convergence in Section 4. In Section 5, we report our numerical results. Finally, we give some conclusions in Section 6.
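For concreteness, the following LaTeX sketch records one common instance of this construction. The exact constants in the paper's (7)-(9) are not reproduced here, so the perturbation term 2μ² and the block arrangement of H are illustrative assumptions rather than quotations of the original equations.

    % A common smoothed Fischer-Burmeister function (illustrative form):
    \phi_{\mu}(a,b) \;=\; a + b - \sqrt{a^{2} + b^{2} + 2\mu^{2}},
    % smooth for \mu > 0, recovering \phi(a,b) = a + b - \sqrt{a^{2}+b^{2}} as \mu \to 0.
    %
    % The associated smoothing system in w = (x, \lambda):
    H(\mu, w) \;=\;
    \begin{pmatrix}
      F(x) - \nabla g(x)\,\lambda \\
      \bigl(\phi_{\mu}(\lambda_{i}, g_{i}(x))\bigr)_{i=1}^{m}
    \end{pmatrix} = 0 .

For μ > 0 the square root never vanishes, so H is smooth; setting μ = 0 recovers the nonsmooth system (6).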

Preliminaries

In this section, we present some basic definitions and properties that will be used in the subsequent sections.

Definition 2.1

The operator F is monotone if, for any x, y ∈ ℝⁿ,

    (x − y)ᵀ(F(x) − F(y)) ≥ 0;

F is strongly monotone with modulus ν > 0 if, for any x, y ∈ ℝⁿ,

    (x − y)ᵀ(F(x) − F(y)) ≥ ν‖x − y‖²;

and F is Lipschitz continuous with a positive constant L if, for any x, y ∈ ℝⁿ,

    ‖F(x) − F(y)‖ ≤ L‖x − y‖.

The following lemma gives some properties of H and its corresponding Jacobian.
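A quick numerical illustration of these definitions (not part of the original article): strong monotonicity with modulus ν can be spot-checked directly from its defining inequality at random point pairs. The test mapping below is hypothetical; in Matlab, a sketch might look like this.

    % Spot-check strong monotonicity of a mapping F (illustrative only):
    % estimate the largest nu with (x-y)'*(F(x)-F(y)) >= nu*||x-y||^2.
    F = @(x) [2*x(1) + x(2); x(1) + 3*x(2)];   % hypothetical test mapping
    nu_est = inf;
    for trial = 1:100
        x = 10*randn(2,1);                     % random sample pair
        y = 10*randn(2,1);
        ratio = (x - y)' * (F(x) - F(y)) / norm(x - y)^2;
        nu_est = min(nu_est, ratio);           % track the worst-case ratio
    end
    fprintf('estimated strong monotonicity modulus: %g\n', nu_est);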

Lemma 2.1

Let H be defined by (7). Assume that F is continuously differentiable and strongly monotone, that g is twice continuously differentiable and concave, that w* = (x*, λ*) is the solution of H(0, w) = 0, that the rows of g'(x*) are linearly independent, and that w* satisfies the strict complementarity condition. Then (i) H is continuously differentiable at (0, w*), and (ii) the Jacobian H'(0, w*) is nonsingular (its block structure is sketched after the proof).

Proof

It is not hard to show that H is continuously differentiable for μ > 0. From H(0, w*) = 0 and (7) we easily get that λ_i* ≥ 0, g_i(x*) ≥ 0 and λ_i* g_i(x*) = 0 for each i. Since w* satisfies the strict complementarity condition, i.e., λ_i* and g_i(x*) are not equal to 0 at the same time, H is also continuously differentiable at (0, w*). That is, (i) holds.

Now we prove (ii). Suppose that H'(0, w*)d = 0 for some vector d, partitioned into blocks corresponding to x and λ; writing this system out by blocks gives (13)-(15). We can observe the first relation easily by (12). Next, we discuss formula (15), whose full form can be described componentwise as follows. According to the strict complementarity condition at w*, λ_i* and g_i(x*) are not equal to 0 at the same time. If g_i(x*) = 0, then λ_i* > 0, and from (16) we obtain the corresponding components of d. Similarly, if λ_i* = 0, then g_i(x*) > 0, and the remaining components follow. Hence (15) reduces to a simple relation between the two blocks of d.

Multiplying the equation in (14) on the left by the appropriate block of d and using this relation, we obtain (17). Multiplying the equation in (13) on the left in the same way and using (17), we obtain (18). Meanwhile, because F is strongly monotone, F'(x*) is a positive definite matrix. Besides, since g is concave and λ* is nonnegative, the matrices λ_i*∇²g_i(x*) are nonpositive definite, which implies that F'(x*) − Σ_i λ_i*∇²g_i(x*) is a positive definite matrix. So, from (18), the x-block of d vanishes. Substituting this into (13) and using the linear independence of the rows of g'(x*), we get that the λ-block vanishes as well; substituting back into (14) confirms the remaining components. Hence d = 0, which implies that H'(0, w*) is nonsingular. This completes the proof. □
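The block elimination in this proof follows the structure below, written in LaTeX under the same assumed form of H as in the Introduction; D_a and D_b are the diagonal matrices of partial derivatives of the smoothed NCP-function, and the precise entries in the paper's displayed formulas may differ.

    % Assumed block form of the Jacobian of H(\mu, w) with respect to w = (x, \lambda):
    H'_{w}(\mu, w) =
    \begin{pmatrix}
      F'(x) - \sum_{i=1}^{m} \lambda_{i} \nabla^{2} g_{i}(x) & -\nabla g(x) \\
      D_{b}\, g'(x) & D_{a}
    \end{pmatrix},
    \qquad
    (D_{a})_{ii} = \frac{\partial \phi_{\mu}}{\partial a}(\lambda_{i}, g_{i}(x)), \quad
    (D_{b})_{ii} = \frac{\partial \phi_{\mu}}{\partial b}(\lambda_{i}, g_{i}(x)).

Nonsingularity then follows by testing this matrix against a null vector d = (d_x, d_λ), exactly as in the argument above.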

The inexact algorithm and its convergence

We are now in a position to describe our smoothing inexact Newton method formally, which uses the smoothed Fischer-Burmeister function (9) to solve variational inequalities with nonlinear constraints. We also show that this method has local quadratic convergence.

Algorithm 3.1

(Inexact Newton method)

Step 0: Let μ_0 > 0 and let w^0 be an arbitrary point. Choose the parameters of the method so that the required conditions hold. Set k := 0.

Step 1: If ‖H(μ_k, w^k)‖ = 0, then stop.

Step 2: Compute Δw^k by (19), solving the corresponding linear system approximately so that the residual condition is satisfied.

Step 3: Set w^{k+1} = w^k + Δw^k and update μ_{k+1}. Set k := k + 1 and go to Step 1.
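A minimal Matlab sketch of this iteration follows, under standard inexact-Newton assumptions: (19) is taken to be the Newton system H'(μ_k, w^k)Δw = −H(μ_k, w^k), solved approximately with a forcing tolerance. The handles H and Jac, the forcing value and the halving of μ are all assumptions, not the paper's exact choices.

    function w = smoothing_inexact_newton(H, Jac, w, mu, tol)
    % Sketch of Algorithm 3.1 (assumed form): inexact Newton on H(mu, w) = 0.
    % H(mu, w)   -- residual vector (function handle)
    % Jac(mu, w) -- Jacobian of H with respect to w (function handle)
    eta = 0.1;                          % forcing term for the inexact solve (placeholder)
    for k = 0:100
        r = H(mu, w);
        if norm(r) <= tol, break; end   % practical termination rule (see Remark 1 below)
        % Step 2: solve Jac*dw = -r approximately, so ||Jac*dw + r|| <= eta*||r||.
        [dw, ~] = gmres(Jac(mu, w), -r, [], eta, 50);
        w  = w + dw;                    % Step 3: full Newton step (local phase)
        mu = 0.5 * mu;                  % reduce the smoothing parameter (placeholder rule)
    end
    end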

Remark 1

In theory, we use ‖H(μ_k, w^k)‖ = 0 as the termination rule of Algorithm 3.1. In practice, we use ‖H(μ_k, w^k)‖ ≤ ε as the termination rule, where ε is a pre-set error tolerance. Moreover, from (7) and (19), the corresponding estimate holds for any k. Now, we are ready to analyze the convergence. The quadratic convergence of Algorithm 3.1 is given below.

Theorem 3.1

Assume that the forcing sequence used in Step 2 satisfies the required condition. Suppose that w* satisfies the conditions of Lemma 2.1 and that H' is Lipschitz continuous with the constant L. Then the following conclusions hold: there exists a set D containing w* such that, for any initial point w^0 ∈ D, the iterates generated by Algorithm 3.1 are well defined, remain in D and converge to w*; moreover, estimate (20) holds, where β is a suitable positive constant.

Proof

Following Theorem 5.2.1 in [17] and Lemma 2.1, we give the proof in detail. By Step 2 of Algorithm 3.1, the residual of the linear solve is controlled at every iteration. According to Lemma 2.1, H'(0, w*) is nonsingular. Then there exist a positive constant and a neighborhood of w* on which the Jacobian is nonsingular and estimate (21) holds; the first inequality in (21) follows from the triangle inequality and the second from the Lipschitz continuity. Similarly, by the perturbation relation (3.1.20) in [17], the Jacobian at nearby points is nonsingular and the corresponding bound on its inverse holds. Besides, the auxiliary estimates, culminating in (25), hold for any iterate in this neighborhood. According to Algorithm 3.1, we can expand w^{k+1} − w* and take norms on both sides; in the resulting chain of bounds, the first inequality follows from the Lipschitz continuity, the second from (21), and the third from (25). According to the definition of β and the condition on the forcing sequence, we conclude that the iterates converge to w* and that (20) also holds. This completes the proof. □
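In the framework of Theorem 5.2.1 in [17], the local estimate (20) referred to here has the following standard shape; the constant β and the coupling with the forcing terms are reconstructed from that framework rather than copied from the paper.

    % Assumed form of the local convergence estimate (20):
    \| w^{k+1} - w^{*} \| \;\le\; \beta \, \| w^{k} - w^{*} \|^{2},
    % which yields quadratic convergence once the inexactness of the linear
    % solves decays proportionally to \| H(\mu_k, w^k) \|.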

The global inexact algorithm and its convergence

Now, we construct our globally convergent method by adding a globalization technique to Algorithm 3.1. We choose a merit function and modify Δw^k so that the descent condition (28) holds. We then use a line search to find a step-length α_k such that conditions (29)-(31) are satisfied.

Algorithm 4.1

(Global inexact Newton method)

Step 0: Choose w^0 to be an arbitrary point. Choose μ_0 and the remaining parameters so that the required conditions hold. Set k := 0.

Step 1: If ‖H(μ_k, w^k)‖ = 0, then stop.

Step 2: Find Δw^k by solving (19). If (28) is not satisfied, then choose a modified direction and compute Δw^k such that (28) is satisfied.

Step 3: Find a step-length α_k satisfying (29)-(31).

Step 4: Set w^{k+1} = w^k + α_k Δw^k. Set k := k + 1 and go to Step 1.
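A Matlab sketch of this globalized iteration is given below, assuming the common choices of merit function θ(w) = ‖H(μ, w)‖² and an Armijo-type backtracking rule standing in for conditions (29)-(31); these choices and all parameter values are illustrative assumptions, not the paper's exact conditions.

    function w = global_inexact_newton(H, Jac, w, mu, tol)
    % Sketch of Algorithm 4.1 (assumed form): globalized smoothing inexact Newton.
    theta = @(mu, w) norm(H(mu, w))^2;        % assumed merit function
    sigma = 1e-4;                             % sufficient-decrease parameter (placeholder)
    for k = 0:200
        r = H(mu, w);
        if norm(r) <= tol, break; end
        [dw, ~] = gmres(Jac(mu, w), -r, [], 0.1, 50);   % inexact Newton step (19)
        alpha = 1;                            % backtracking line search for (29)-(31)
        while theta(mu, w + alpha*dw) > (1 - sigma*alpha) * theta(mu, w)
            alpha = alpha / 2;
            if alpha < 1e-10, break; end      % guard against stagnation
        end
        w  = w + alpha * dw;                  % Step 4: damped update
        mu = 0.5 * mu;                        % placeholder smoothing update
    end
    end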

Remark 2

In Step 2, if (28) is not satisfied, then the technique in [18], pp. 264-265, is used to choose the modified direction. From Lemma 3.1 in [18], it is not difficult to see that a step-length α_k satisfying (29)-(31) can be found. In order to obtain the global convergence of Algorithm 4.1, throughout the rest of this paper we work on the level set determined by the initial point.

Theorem 4.1

Suppose that H' is Lipschitz continuous on the level set. Then the sequence {‖H(μ_k, w^k)‖} generated by Algorithm 4.1 is convergent.

The proof follows from Theorem 3.2 in [18] and condition (28). □

Theorem 4.2

Let H be defined by (7). Assume that H' is Lipschitz continuous on the level set, that Δw^k is admissible and (28) is satisfied for all sufficiently large k, that the Jacobian is nonsingular near the limit, and that w* is a limit point of the sequence generated by Algorithm 4.1. Then the sequence converges to w* quadratically.

Proof

From Theorem 4.1, the sequence of merit values is convergent. Since the Jacobian is nonsingular and w* is a limit point of the sequence generated by Algorithm 4.1, the iterates converge to w*. According to the assumption, there exists k_0 such that Δw^k is admissible and (28) is satisfied for all k ≥ k_0, so the iterates can be generated by Algorithm 3.1 for k ≥ k_0. The conclusion then follows directly from Theorem 3.1. This completes the proof. □

Numerical results

In this section, we present some numerical results for Algorithm 4.1. All codes are written in Matlab and run on an Intel Core i5-3210M personal computer. In the algorithm, we choose suitable parameter values, and we use ‖H(μ_k, w^k)‖ ≤ ε as the stopping rule for all examples. It is not easy to find proper test examples for variational inequalities with nonlinear constraints; hence, we modify some test examples from the references and solve them by Algorithm 4.1.

Example 5.1

(See [19].) It is easily verified that the problem has the solution x* = (1, 0, 0)ᵀ (cf. Table 1). The initial point w^0 and the initial smoothing parameter μ_0 are fixed for this test.

Example 5.2

This example is derived from [20]. Because the original problem is an optimization problem, we give its variational inequality form by the optimality condition, i.e., by taking F to be the gradient of the objective function. The solution of Example 5.2 is x* = (0, 1, 2, −1)ᵀ (cf. Table 2). The initial point w^0 and the initial smoothing parameter μ_0 are fixed for this test.

In Tables 1 and 2, 'k' denotes the number of iterations and '‖H(μ_k, w^k)‖' denotes the 2-norm of H(μ_k, w^k). From Tables 1 and 2, we can observe that Algorithm 4.1 finds the solution in a small number of iterations for the above two examples. In order to further show the efficiency of Algorithm 4.1, we give two further examples in which the dimension of the problem ranges from 100 to 1,000.
Table 1

Numerical results for Example 5.1

k    x1       x2       x3       ‖H(μ_k, w^k)‖
1    1.6663   0.1527   0.7730   4.0941
2    1.6460   0.3416   0.0000   2.9087
3    1.6431   0.3212   0.0000   1.7601
4    1.2659   0.3957   0.0000   0.8018
5    1.0386   0.1050   0.0007   0.2139
6    1.0019   0.0174   0.0008   0.0301
7    1.0021   0.0005   0.0000   0.0043
8    1.0003   0.0000   0.0000   7.2097e-04
9    1.0000   0        0        1.8416e-05
10   1.0000   0.0000   0.0000   9.6147e-06
Table 2

Numerical results for Example 5.2

k    x1       x2       x3       x4        ‖H(μ_k, w^k)‖
1    0.3330   1.0661   4.0792   −4.3984   0.7137
2    3.7727   3.2371   5.7749   −2.3704   0.4308
3    1.8029   2.4677   2.9939   −1.3529   0.1151
4    0.6067   1.7571   2.2497   −0.9003   0.0399
5    0.4095   1.1936   2.1080   −0.7000   0.0076
6    0.0093   1.0500   2.1182   −0.8442   0.0045
7    0.0009   0.9994   2.0014   −0.9991   7.8889e-04
8    0.0002   1.0000   2.0005   −1.0000   3.2619e-05
9    0.0000   1.0000   2.0000   −1.0000   1.3066e-06
In the following tests, we solve the linear systems for Δw by using the GMRES(m) package, allowing a maximum of 100 cycles (2,000 iterations), and the initial value is chosen as a random number in a fixed interval.
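In Matlab, such a restarted GMRES solve of the Newton system can be written as follows; the restart length m = 20 is inferred from '100 cycles (2,000 iterations)' and, like the tolerance and the handles Jac and H reused from the earlier sketches, is an assumption rather than a value stated in the paper.

    % Restarted GMRES solve of the Newton system Jac*dw = -H (illustrative):
    m      = 20;                 % restart length (inferred: 100 cycles x 20 = 2,000 iterations)
    cycles = 100;                % maximum number of restart cycles
    J = Jac(mu, w);              % Jacobian at the current iterate (assumed handle)
    r = H(mu, w);                % current residual (assumed handle)
    [dw, flag, relres] = gmres(J, -r, m, 1e-8, cycles);
    if flag ~= 0
        warning('GMRES did not converge: flag = %d, relres = %g', flag, relres);
    end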

Example 5.3

We consider a problem with nonlinear constraints. The problem is derived from [21] with different sizes; based on the linear constraints of the original problem, we also add some nonlinear constraints. The results for this example are reported in Table 3.

Example 5.4

This example is an NCP. The components of the mapping contain randomly generated coefficients drawn from a fixed interval. The matrix M is built from an n × n matrix A whose entries are randomly generated in a fixed interval and from a skew-symmetric matrix B generated in the same way; the vector q is generated from a uniform distribution in a fixed interval. One common recipe of this kind is sketched after Table 4.

In Tables 3 and 4, 'n' denotes the dimension of the problem, 'No.it' the number of iterations, 'CPU' the CPU time in seconds, and '‖H(μ_k, w^k)‖' the 2-norm of H(μ_k, w^k). From Tables 3 and 4, we find that Algorithm 4.1 is robust with respect to the different sizes of these two problems. Moreover, the number of iterations is insensitive to the size of the problem. In other words, our algorithm is effective for both problems.
Table 3

Numerical results for Example 5.3

n      No.it   CPU      ‖H(μ_k, w^k)‖
100    8       0.3120   7.7321e-06
200    8       0.4992   2.9356e-06
300    9       0.7956   2.5000e-06
400    9       1.1700   1.3077e-06
600    9       2.1840   1.8843e-06
800    9       3.8220   2.1879e-06
1,000  9       5.6004   2.0789e-06
Table 4

Numerical results for Example 5.4

n      No.it   CPU       ‖H(μ_k, w^k)‖
100    8       0.4992    3.5558e-07
200    8       1.1388    3.0105e-06
300    8       2.3244    4.7536e-06
400    8       4.5552    7.7267e-06
600    9       13.1665   8.8521e-07
800    9       25.3346   1.4537e-06
1,000  9       41.4027   2.0736e-06
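As noted in Example 5.4, one common way such random NCP test data is generated in the literature (for instance the Harker-Pang family of test problems) is the recipe below. The formula M = A'*A + B, the intervals (−5, 5) and (−500, 500), and the affine form of F are assumptions patterned on standard test sets, not values read from this paper.

    % Generate random NCP test data in the style of Example 5.4 (assumed pattern):
    n = 100;
    A = -5 + 10*rand(n);          % dense random matrix, entries in (-5, 5) (assumed)
    C = -5 + 10*rand(n);
    B = C - C';                   % skew-symmetric matrix generated in the same way
    M = A'*A + B;                 % assumed combination: PSD part plus skew-symmetric part
    q = -500 + 1000*rand(n,1);    % uniform right-hand side in (-500, 500) (assumed)
    F = @(x) M*x + q;             % monotone affine mapping for the NCP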

Conclusions

Based on the framework of smoothing Newton methods, we have proposed a new smoothing inexact Newton algorithm for variational inequalities with nonlinear constraints. Under some mild conditions, we established global and local quadratic convergence. Furthermore, we presented some preliminary numerical results which show the efficiency of the algorithm.

1. Li Y, Santosa F. A computational algorithm for minimizing total variation in image restoration. IEEE Trans Image Process. 1996.

