
Smoothing approximation to the lower order exact penalty function for inequality constrained optimization.

Shujun Lian, Nana Niu.

Abstract

For the inequality constrained optimization problem, we first propose a new smoothing method for the lower order exact penalty function, and then show that an approximate global solution of the original problem can be obtained by finding a global solution of a smoothed lower order exact penalty problem. We propose an algorithm based on the smoothed lower order exact penalty function and prove its global convergence under mild conditions. Numerical experiments show the efficiency of the proposed method.

Keywords:  Exact penalty function; Inequality constrained optimization; Lower order penalty function; Smoothing method

Year:  2018        PMID: 30137725      PMCID: PMC5996064          DOI: 10.1186/s13660-018-1723-x

Source DB:  PubMed          Journal:  J Inequal Appl        ISSN: 1025-5834            Impact factor:   2.491


Introduction

Consider the following inequality constrained optimization problem: where , , are twice continuously differentiable functions. Throughout this paper, we use to denote the feasible solution set. This problem is widely applied in transportation, economics, mathematical programming, regional science, etc. [1-3], and related problems, such as variational inequalities, equilibrium problems, and minimizers of convex functions, have received extensive attention (see, e.g., [4-15]).

To solve problem (P), penalty function methods have been introduced in the literature (see, e.g., [16-24]). Zangwill [16] introduced the classical exact penalty function where is a penalty parameter, but it is not a smooth function. The corresponding penalty optimization problem is as follows: The non-smoothness of the function restricts the application of gradient-type or Newton-type algorithms to problem (). To avoid this shortcoming, smoothings of the exact penalty function were proposed in [17, 18]. In addition, to overcome the non-smoothness of the function, the following smooth penalty function was proposed: However, this function is not exact.

Recently, Wu et al. [20] proposed the following lower order penalty function: and proved that this lower order penalty function is exact under mild conditions. However, this penalty function is also non-smooth. When , reduces to the classical exact penalty function. The least exact penalty parameter corresponding to is much smaller than that of the classical exact penalty function, which avoids the drawbacks of too large a penalty parameter ρ in the algorithm. The smoothing of the lower order penalty function (1.3) is studied in [20] and [21] only for , and a smoothing method for (1.3) is given in [24]. We aim to study a new smoothing method for the lower order penalty function (1.3) and compare it with the existing methods.
Using a different segmentation method, we give a new piecewise smooth function and propose a new method to smooth the lower order penalty function (1.3) with in this paper. The remainder of this paper is organized as follows. In Sect. 2, a new smoothing function is proposed, and error estimates are obtained among the optimal objective function values of the smoothed penalty problem, the non-smooth penalty problem, and the original problem. In Sect. 3, the corresponding algorithm is proposed to obtain an approximate solution of (P), and its global convergence is proved. In Sect. 4, numerical experiments illustrate the efficiency of the algorithm. In Sect. 5, some conclusions are presented.
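To make the two penalty constructions above concrete, here is a minimal sketch. The formulas in it are the standard forms from the cited literature, not reproductions of this paper's elided displayed equations, and the toy problem and ρ value are illustrative assumptions:

```python
# Assumed standard forms from the cited literature (the paper's displayed
# formulas are not reproduced here): the classical exact penalty and the
# lower order penalty for min f(x) s.t. g_i(x) <= 0.

def classical_penalty(f, gs, rho):
    """Zangwill-style exact penalty: f(x) + rho * sum max(0, g_i(x))."""
    return lambda x: f(x) + rho * sum(max(0.0, g(x)) for g in gs)

def lower_order_penalty(f, gs, rho, k=0.5):
    """Lower order penalty: f(x) + rho * sum max(0, g_i(x))**k, 0 < k < 1."""
    return lambda x: f(x) + rho * sum(max(0.0, g(x)) ** k for g in gs)

# Illustrative toy problem: min x^2 subject to 1 - x <= 0 (optimum at x = 1).
f = lambda x: x * x
g = lambda x: 1.0 - x

p1 = classical_penalty(f, [g], rho=10.0)
p2 = lower_order_penalty(f, [g], rho=10.0)

# On the feasible side both penalties coincide with f; just inside the
# infeasible region the lower order penalty rises much more steeply, which
# is why its least exact penalty parameter can be far smaller.
print(p1(1.5), p2(1.5))   # both 2.25, i.e. f(1.5)
print(p1(0.9) < p2(0.9))  # True: the k = 1/2 power of a small violation dominates
```

Both functions are non-smooth where a constraint becomes active (here at x = 1), which is exactly what motivates the smoothing constructed in Sect. 2.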

A smoothing penalty function

To establish global exact penalization for the lower order penalty problem, the following assumption was given in [20]. We consider the smoothing method under this assumption.

Assumption 2.1

satisfies the coercive condition The optimal solution set is a finite set. Under Assumption 2.1, problem (P) is equivalent to the following problem: where X is a box with . For any , penalty problem (LP) is equivalent to the following penalty problem: Now we consider a new smoothing technique for the lower order penalty function (1.3). Let ; then Define a function () by where . It is easy to see that is continuously differentiable and The following figure shows how the function approaches the function . Figure 1 shows the behavior of (dash-dot line), (dotted line), (dashed line), and (solid line).
Figure 1

The behavior of and

Based on this, we consider the following continuously differentiable penalty function: where . The corresponding optimization problem for is as follows: For problems (P), (), and (SP), we have the following conclusion.

Lemma 2.1

For any , , and , it holds that

Proof

For all , it holds that Set Then It is easy to see that the function is monotonically increasing w.r.t. t since . One has It follows that When , one has So, It follows from (2.1), (2.3), and (2.4) that by the fact that . □

Theorem 2.1

For a positive sequence converging to 0 as , assume that is an optimal solution to for some given , . If x̄ is an accumulation point of the sequence , then x̄ is an optimal solution to .

Proof

It follows from Lemma 2.1 that Since is a solution to , one has It follows from (2.5) and (2.6) that Letting yields Thus, x̄ is an optimal solution to . □

Theorem 2.2

Let be an optimal solution of problem () and be an optimal solution of problem (SP) for some , , and . Then

Proof

Under the stated conditions, it holds that and , . Therefore, by Lemma 2.1, one has and This completes the proof. □

Corollary 2.1

Suppose that Assumption 2.1 holds, and for any , there exists such that the pair satisfies the second order sufficient condition in [20]. Let be an optimal solution of problem (P) and be an optimal solution of problem (SP) for some , , and . Then there exists such that, for any ,

Proof

It follows from Corollary 2.3 in [20] that is an optimal solution of problem (). By Theorem 2.2, one has Since , it holds that This completes the proof. □

Definition 1

For , if is such that then is an ϵ-feasible solution of problem (P).
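Definition 1's inline formula was lost in extraction; on the standard reading, x is ϵ-feasible when no constraint is violated by more than ϵ. A minimal check under that assumed reading, with hypothetical constraint functions:

```python
def is_eps_feasible(x, gs, eps):
    # Epsilon-feasibility in the standard sense: g_i(x) <= eps for all i
    # (assumed reading of Definition 1; the paper's formula is elided).
    return all(g(x) <= eps for g in gs)

# Hypothetical constraints g_1(x) = x1 + x2 - 2 <= 0 and g_2(x) = -x1 <= 0.
gs = [lambda x: x[0] + x[1] - 2.0, lambda x: -x[0]]

print(is_eps_feasible((1.0, 1.0005), gs, eps=1e-3))  # True: worst violation is 5e-4
print(is_eps_feasible((1.0, 1.01), gs, eps=1e-3))    # False: violation 1e-2 > eps
```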

Theorem 2.3

Let be an optimal solution of problem () and be an optimal solution of problem (SP) for some , , and . If is a feasible solution of problem (P) and is an ϵ-feasible solution of problem (P), then

Proof

By (2.1), (2.3), and Theorem 2.2, one has Since , it holds that Note that Thus, it follows from (2.2) that By (2.7) and (2.8), one has □

Theorems 2.1 and 2.2 show that an optimal solution of (SP) is also an approximate optimal solution of () when the error ϵ is sufficiently small. By Theorem 2.3, an optimal solution of (SP) is an approximate optimal solution of (P) if it is an ϵ-feasible solution of (P).

A smoothing method

Based on the discussion in the last section, we can design an algorithm to obtain an approximate optimal solution of (P) by solving (SP).

Algorithm 3.1

Step 1. Take , , , , , , and ; let and go to Step 2.

Step 2. Solve starting at . Let be the optimal solution ( can be obtained by a quasi-Newton method).

Step 3. Let , , and , then go to Step 2.
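Because the paper's piecewise smoothing function and stopping test are elided above, the following sketch only illustrates the shape of the outer loop: it substitutes the standard CHKS smoothing (sqrt(t² + 4ε²) + t)/2 for max(0, t) before applying the exponent k = 1/2, uses SciPy's BFGS as the quasi-Newton inner solver, and adopts a hypothetical stopping rule and toy problem.

```python
# Sketch of Algorithm 3.1's outer loop under stated assumptions: the paper's
# piecewise smoothing function is replaced by the standard CHKS smoothing
# (sqrt(t^2 + 4*eps^2) + t) / 2 of max(0, t), and each unconstrained
# subproblem is solved with SciPy's BFGS quasi-Newton method.
import numpy as np
from scipy.optimize import minimize

def smoothed_penalty(f, gs, rho, eps, k=0.5):
    """Smooth surrogate of the lower order penalty f + rho * sum max(0, g)^k."""
    def smooth_plus(t):
        # Smooth, everywhere-positive approximation of max(0, t).
        return 0.5 * (np.sqrt(t * t + 4.0 * eps * eps) + t)
    return lambda x: f(x) + rho * sum(smooth_plus(g(x)) ** k for g in gs)

def algorithm_31(f, gs, x0, rho=1.0, eps=0.1, a=0.1, b=2.0, tol=1e-8):
    """Outer loop: shrink eps (eps_{j+1} = a*eps_j), grow rho (rho_{j+1} = b*rho_j)."""
    x = np.asarray(x0, dtype=float)
    while True:
        # Warm-start each smoothed subproblem at the previous solution.
        x = minimize(smoothed_penalty(f, gs, rho, eps), x, method="BFGS").x
        if eps < tol:  # hypothetical stopping rule; the paper's test is elided
            return x
        eps *= a
        rho *= b

# Toy problem: min (x1 - 2)^2 + (x2 - 1)^2  s.t.  x1 + x2 - 2 <= 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: x[0] + x[1] - 2.0
x_star = algorithm_31(f, [g], x0=[0.0, 0.0])
print(np.round(x_star, 3))  # close to the constrained optimum (1.5, 0.5)
```

As ε decreases to 0 and ρ increases, the iterates approach the constrained optimum, mirroring the remark and Theorem 3.1 below.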

Remark

Since and , we let . As , the sequence gradually decreases to 0, the sequence gradually increases to +∞, and gradually decreases to 0. Under some mild conditions, the following conclusion shows the global convergence of Algorithm 3.1.

Theorem 3.1

Suppose that Assumption 2.1 holds and, for any , , the solution set of is nonempty. If is the sequence generated by Algorithm 3.1 satisfying , and the sequence is bounded, then (1) is bounded; (2) any limit point of is an optimal solution of (P).

Proof

(1) It follows from (2.3) that By hypothesis, there exists some number L such that Suppose, for the sake of contradiction, that is unbounded. Without loss of generality, we assume that as . By (2.2), (3.1), and (3.2), one has which contradicts Assumption 2.1(1).

(2) Without loss of generality, we assume as . To prove that is an optimal solution of (P), it suffices to show that and , . To show that , suppose, to the contrary, that . Then there exist , , and a subset such that where N is the set of natural numbers. By Step 2, (2.2), and (2.3), for any , one has It follows that which contradicts , and , as . Hence . Next, we show that , . By Step 2, (2.2), and (2.3), it holds that Letting yields Therefore, any limit point of is an optimal solution of (P). □

Numerical examples

In this section, we will do some numerical experiments to show the efficiency of Algorithm 3.1.

Example 4.1

Consider the following optimization problem considered in [18, 22, 23]: For this problem, we let , , , , , . With different starting points, numerical results of Algorithm 3.1 are shown in Tables 1, 2, and 3.
Table 1

Numerical results for Example 4.1 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_3(x^{j+1}) | f_0(x^{j+1})
0 | (0.185009, 0.804369, 2.015460, −0.952409) | 1 | 0.01 | −4.797079 | −0.00109 | −2.028111 | −44.225926
1 | (0.169902, 0.835670, 2.008151, −0.965196) | 2 | 0.0001 | −9.748052 | −9.337847 | −1.883271 | −44.231252
Table 2

Numerical results for Example 4.1 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_3(x^{j+1}) | f_0(x^{j+1})
0 | (0.169693, 0.835634, 2.008291, −0.965082) | 1 | 0.01 | −9.502428 | −8.676884 | −1.883244 | −44.231403
Table 3

Numerical results for Example 4.1 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_3(x^{j+1}) | f_0(x^{j+1})
0 | (0.169691, 0.835633, 2.008294, −0.965080) | 1 | 0.01 | −9.502279 | −8.676796 | −1.883249 | −44.231403
From Tables 1, 2, and 3, we see that the obtained approximate optimal solutions are similar, which shows that the numerical results of Algorithm 3.1 do not depend on the selection of the starting point for this example. In [18], the objective function value was obtained in the fourth iteration. From the numerical results given in [22], we know that the optimal solution is with the objective function value . In [23], the objective function value obtained in the 25th iteration is . Hence, the numerical results obtained by Algorithm 3.1 are better than those given in [18, 22, 23] for this example.

Example 4.2

Consider the following problem considered in [17]: For this problem, we let , , , , , . With different k, numerical results of Algorithm 3.1 are shown in Tables 4, 5, and 6.
Table 4

Numerical results for Example 4.2 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_0(x^{j+1})
0 | (3.4217, 2.7082) | 2 | 0.001 | 4.1300 | −0.0053 | −15.2492
1 | (0.8022, 1.1978) | 20 | 0.000001 | 0.0000 | −0.4066 | −7.1999
Table 5

Numerical results for Example 4.2 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_0(x^{j+1})
0 | (4.0607, 3.0227) | 2 | 0.001 | 5.0834 | −0.0153 | −16.0434
1 | (0.8027, 1.1971) | 20 | 0.000001 | −0.0003 | −0.4086 | −7.1992
Table 6

Numerical results for Example 4.2 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_0(x^{j+1})
0 | (2.6356, 2.3168) | 2 | 0.001 | 2.9523 | −0.0020 | −13.7027
1 | (0.8005, 1.1995) | 20 | 0.000001 | 0.0000 | −0.4015 | −7.2000
From Tables 4, 5, and 6, we can see that almost the same approximate optimal solutions are obtained for different k in this example. The objective function value is similar to the value with obtained in the fourth iteration in [17].

Example 4.3

Consider the following problem considered in [24] and [25] (Test Problem 6 in Sect. 4.6): For this problem, we set , , , , , , . The numerical results of Algorithm 3.1 are shown in Table 7.
Table 7

Numerical results for Example 4.3 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_0(x^{j+1})
0 | (2.329795, 3.133729) | 5 | 10^−2 | −0.047009 | −0.043471 | −5.463524
1 | (2.329238, 3.173320) | 10 | 10^−4 | −0.002868 | −0.006501 | −5.502557
2 | (2.329452, 3.177637) | 20 | 10^−6 | −0.000302 | −0.001176 | −5.507089
3 | (2.329626, 3.177558) | 40 | 10^−8 | −0.001802 | −0.000436 | −5.507185
We set , , , , , , . The numerical results of Algorithm 3.1 are shown in Table 8.
Table 8

Numerical results for Example 4.3 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_0(x^{j+1})
0 | (2.330261, 3.061875) | 5 | 10^−1 | −0.1226776 | −0.1131323 | −5.392137
1 | (2.329664, 3.161611) | 15 | 10^−2 | −0.018055 | −0.016207 | −5.491275
2 | (2.329639, 3.171941) | 45 | 10^−3 | −0.007524 | −0.005993 | −5.501580
3 | (2.329560, 3.177804) | 135 | 10^−4 | −0.001013 | −0.000503 | −5.507363
4 | (2.329593, 3.177793) | 405 | 10^−5 | −0.001297 | −0.000357 | −5.507386
5 | (2.329622, 3.177781) | 1215 | 10^−6 | −0.001544 | −0.000234 | −5.507403
We set , , , , , , . The numerical results of Algorithm 3.1 are shown in Table 9.
Table 9

Numerical results for Example 4.3 with

j | x^{j+1} | q_j | ϵ_j | f_1(x^{j+1}) | f_2(x^{j+1}) | f_0(x^{j+1})
0 | (2.330460, 3.179900) | 2 | 10^−5 | −0.006287 | 0.005832 | −5.510360
1 | (2.329672, 3.179735) | 20 | 10^−8 | −0.000001 | 0.001957 | −5.509408
2 | (2.329672, 3.179735) | 200 | 10^−11 | −0.000000 | 0.001957 | −5.509407
3 | (2.329541, 3.178391) | 2000 | 10^−14 | −0.000000 | −0.000000 | −5.507933
In [24], with three different starting points, similar numerical results are given with . The optimal solution is given with the objective function value −5.507938. In [25], the optimal solution is given with the objective function value −5.5079. The numerical results for Example 4.3 are similar to those of [24] and [25]. From Tables 7, 8, and 9, we can see that the parameters , , a, b need to be adjusted to obtain better numerical results for different k and . Usually, may be 0.5, 0.1, 0.01, 0.001, or a smaller value, and , or 0.001; may be 1, 2, 3, 5, 10, 100, or a larger value, and , or 100.

Concluding remarks

In this paper, we proposed a method to smooth the lower order exact penalty function with for inequality constrained optimization. We proved that the algorithm based on the smoothed penalty function is globally convergent under mild conditions, and the numerical experiments show that the algorithm is effective.
1.  Smoothing approximation to the lower order exact penalty function for inequality constrained optimization.

Authors:  Shujun Lian; Nana Niu
Journal:  J Inequal Appl       Date:  2018-06-11       Impact factor: 2.491