
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

Gonglin Yuan, Xiabin Duan, Wenjie Liu, Xiaoliang Wang, Zengru Cui, Zhou Sheng.

Abstract

Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is designed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess the following good properties: (1) β_k ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.


Year:  2015        PMID: 26502409      PMCID: PMC4621041          DOI: 10.1371/journal.pone.0140071

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

As is well known, the conjugate gradient method is popular and effective for solving the unconstrained optimization problem

min f(x), x ∈ ℜ^n, (1)

where f : ℜ^n → ℜ is continuously differentiable and g(x) denotes the gradient of f(x) at x. Problem Eq (1) can also be used to model many other problems [1-5]. The iterative formula of the conjugate gradient method is usually given by

x_{k+1} = x_k + α_k d_k, (2)

d_k = −g_k for k = 1, and d_k = −g_k + β_k d_{k−1} for k ≥ 2, (3)

where g_k = g(x_k), β_k ∈ ℜ is a scalar, α_k > 0 is a step length determined by some line search, and d_k denotes the search direction. Different conjugate gradient methods make different choices for β_k. Some of the popular methods [6-12] used to compute β_k are the DY conjugate gradient method [6], FR conjugate gradient method [7], PRP conjugate gradient method [8, 9], HS conjugate gradient method [10], LS conjugate gradient method [11], and CD conjugate gradient method [12]. β_k^PRP [8, 9] is defined by

β_k^PRP = g_k^T y_{k−1} / ‖g_{k−1}‖², (4)

where ‖·‖ denotes the Euclidean norm and y_{k−1} = g_k − g_{k−1}. The PRP conjugate gradient method is currently considered to have the best numerical performance, but its convergence properties are not as good. With an exact line search, the global convergence of the PRP conjugate gradient method was established by Polak and Ribière [8] for convex objective functions. However, Powell [13] proposed a counterexample proving that there exist nonconvex functions on which the PRP conjugate gradient method does not converge globally, even with an exact line search. With the weak Wolfe-Powell line search, Gilbert and Nocedal [14] proposed a modified PRP conjugate gradient method that restricts β_k to be nonnegative and proved its global convergence under the hypothesis that the sufficient descent condition is satisfied. Gilbert and Nocedal [14] also gave an example showing that β_k^PRP may be negative even though the objective function is uniformly convex. When the strong Wolfe-Powell line search is used, Dai [15] gave an example showing that the PRP method cannot guarantee that the search direction at every step is a descent direction, even if the objective function is uniformly convex. From the above observations and [13, 14, 16-18], we know that the sufficient descent condition

g_k^T d_k ≤ −c‖g_k‖², c > 0, (5)

and the condition that β_k be nonnegative are very important for establishing the global convergence of the conjugate gradient method. The weak Wolfe-Powell (WWP) line search is designed to compute α_k and is usually used in global convergence analyses. The WWP line search requires

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k (6)

and

g(x_k + α_k d_k)^T d_k ≥ σ g_k^T d_k, (7)

where 0 < δ < σ < 1. Recently, many new conjugate gradient methods ([19-28], etc.) that possess good properties have been proposed for solving unconstrained optimization problems.

In Section 2, we state the motivation behind our approach and give a new modified PRP conjugate gradient method and a new algorithm for solving problem Eq (1). In Section 3, we prove that the search direction of the new algorithm satisfies the sufficient descent property and the trust region property; moreover, we establish the global convergence of the new algorithm with the WWP line search. In Section 4, we provide numerical results for some test problems.
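For orientation, the classical PRP iteration of Eqs (2)-(4) can be sketched in a few lines of Python. This is a minimal reference sketch, not the paper's code: SciPy's line_search enforces the (strong) Wolfe conditions, used here as a stand-in for the WWP search of Eqs (6)-(7), and the steepest-descent restart is our own safeguard.

```python
import numpy as np
from scipy.optimize import line_search

def prp_cg(f, grad, x0, eps=1e-6, max_iter=1000):
    """Classical PRP conjugate gradient iteration, Eqs (2)-(4)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                  # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-8
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                      # y_{k-1} = g_k - g_{k-1}
        beta = (g_new @ y) / (g @ g)       # beta^PRP, Eq (4)
        d = -g_new + beta * d              # next direction, Eq (3)
        x, g = x_new, g_new
    return x
```

For example, `prp_cg(lambda x: x @ x, lambda x: 2 * x, np.ones(5))` drives the gradient below the tolerance within a few iterations.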

New algorithm for unconstrained optimization

Wei et al. [29] gave a new PRP-type conjugate gradient method, usually called the WYL method. When the WWP line search is used, the WYL method converges globally under the sufficient descent condition. Zhang [30] gave a modified WYL method, called the NPRP method, which possesses even better convergence properties. These formulas use only gradient-value information in y_{k−1}, but some newer choices of y_{k−1} [31, 32] contain both gradient-value and function-value information. Yuan et al. [32] proposed the modified vector

y*_{k−1} = y_{k−1} + (max{θ_k, 0} / ‖s_{k−1}‖²) s_{k−1}, with θ_k = 2[f(x_{k−1}) − f(x_k)] + [g(x_k) + g(x_{k−1})]^T s_{k−1},

where s_{k−1} = x_k − x_{k−1}. Li and Qu [33] gave a modified PRP conjugate gradient method and, under suitable conditions, proved that it converges globally. Motivated by the above discussions, we propose a new modified PRP conjugate gradient method, defined by the scalar β_k of Eq (8) and the search direction d_k of Eq (9), where u_1 > 0, u_2 > 0 and y*_{k−1} is the modified vector of [32]. It follows directly from Eq (8) that β_k ≥ 0. Next, we present the new algorithm and its flowchart (Fig 1).
Fig 1

Flowchart of Algorithm 2.1.

Algorithm 2.1
Step 0: Given the initial point x_1 ∈ ℜ^n and a tolerance ɛ_1 > 0, set d_1 = −∇f(x_1) = −g_1 and k := 1.
Step 1: Compute g_k; if ‖g_k‖ ≤ ɛ_1, stop; otherwise, go to Step 2.
Step 2: Compute the step length α_k by the WWP line search (Eqs (6) and (7)).
Step 3: Set x_{k+1} = x_k + α_k d_k and compute g_{k+1}; if ‖g_{k+1}‖ ≤ ɛ_1, stop; otherwise, go to Step 4.
Step 4: Compute the scalar β_{k+1} by Eq (8) and the search direction d_{k+1} by Eq (9).
Step 5: Set k := k + 1; go to Step 2.
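The displayed formulas Eqs (8) and (9) are not reproduced in this record, so the construction below is a hypothetical stand-in (our assumption, not necessarily the authors' exact formula) showing how a PRP-type β_k built on y*_{k−1} can deliver the three advertised properties at once: β_k ≥ 0, a trust region bound on d_k, and sufficient descent, all without any line search, provided u_1 > 1.

```python
import numpy as np

def modified_beta_and_direction(g_new, g_old, d_old, f_new, f_old, s,
                                u1=2.0, u2=1.0):
    """Hypothetical Eq (8)/(9)-style update; s = x_k - x_{k-1} (nonzero)."""
    # y* folds function-value information into y, in the spirit of [32].
    theta = 2.0 * (f_old - f_new) + (g_new + g_old) @ s
    y_star = (g_new - g_old) + max(theta, 0.0) / (s @ s) * s
    # Nonnegative numerator and a denominator dominating u1*||d||*||y*||:
    denom = u1 * np.linalg.norm(d_old) * np.linalg.norm(y_star) \
            + u2 * (g_old @ g_old)
    beta = max(g_new @ y_star, 0.0) / denom   # beta_k >= 0 by construction
    d_new = -g_new + beta * d_old             # two-term direction
    return beta, d_new
```

By the Cauchy-Schwarz inequality, β_k‖d_{k−1}‖ ≤ ‖g_k‖/u_1, so ‖d_k‖ ≤ (1 + 1/u_1)‖g_k‖ (trust region) and g_k^T d_k ≤ −(1 − 1/u_1)‖g_k‖² (sufficient descent) follow immediately for u_1 > 1.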

Global convergence analysis

Some suitable assumptions are often used to analyze the global convergence of conjugate gradient methods. We state them as follows.

Assumption 3.1 (i) The level set Ω = {x ∈ ℜ^n ∣ f(x) ≤ f(x_1)} is bounded. (ii) In some neighborhood H of Ω, f is continuously differentiable, and the gradient function g of f is Lipschitz continuous; namely, there exists a constant L > 0 such that

‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ H. (10)

By Assumption 3.1, it is easy to obtain that there exist two constants A > 0 and η_1 > 0 satisfying ‖x‖ ≤ A and ‖g(x)‖ ≤ η_1 for all x ∈ Ω. (11)

Lemma 0.1 Let the sequence {d_k} be generated by Eq (9); then the sufficient descent condition

g_k^T d_k ≤ −c‖g_k‖², c > 0, (12)

holds. Proof When k = 1, Eq (9) gives g_1^T d_1 = −‖g_1‖², so Eq (12) holds. When k ≥ 2, the same bound follows from Eqs (8) and (9). The proof is complete. We know directly from the above lemma that our new method has the sufficient descent property.

Lemma 0.2 Let the sequences {x_k} and {d_k, g_k} be generated by Algorithm 2.1, and suppose that Assumption 3.1 holds; then

Σ_{k≥1} (g_k^T d_k)² / ‖d_k‖² < ∞. (13)

Proof By Eq (7) and the Cauchy-Schwarz inequality, we have (σ − 1) g_k^T d_k ≤ (g_{k+1} − g_k)^T d_k ≤ ‖g_{k+1} − g_k‖ ‖d_k‖. Combining the above inequality with Assumption 3.1 (ii) generates α_k ≥ (σ − 1) g_k^T d_k / (L‖d_k‖²), which is positive by Lemma 0.1. By combining the above inequality with Eq (6), we obtain f_k − f_{k+1} ≥ δ(1 − σ)(g_k^T d_k)² / (L‖d_k‖²). Summing up the above inequalities from k = 1 to k = ∞, and noting that {f_k} is bounded below by Eq (6), Assumption 3.1 and Lemma 0.1, we deduce Eq (13). This finishes the proof. Eq (13) is usually called the Zoutendijk condition [34], and it is very important for establishing global convergence.

Lemma 0.3 Let the sequence {β_k, d_k} be generated by Algorithm 2.1; then

‖d_k‖ ≤ D‖g_k‖, (14)

where D > 0 is a constant. Proof When d_k = 0, we directly get g_k = 0 from Eq (12). When d_k ≠ 0, the Cauchy-Schwarz inequality applied to Eq (8) bounds β_k‖d_{k−1}‖ by a constant multiple of ‖g_k‖; then, for k ≥ 2, Eq (9) yields ‖d_k‖ ≤ D‖g_k‖ for a suitable constant D > 0. This finishes the proof. This lemma also shows that the search direction of our algorithm has the trust region property.

Theorem 0.1 Let the sequences {d_k, g_k, β_k} and {x_k} be generated by Algorithm 2.1. Suppose that Assumption 3.1 holds; then

lim_{k→∞} ‖g_k‖ = 0. (15)

Proof By Eqs (12) and (13), we obtain Σ_{k≥1} c²‖g_k‖⁴ / ‖d_k‖² < ∞. (16) By Eq (14), we have ‖d_k‖² ≤ D²‖g_k‖², which together with Eq (16) yields Σ_{k≥1} (c²/D²)‖g_k‖² < ∞. From the above inequality, we can obtain Eq (15). The proof is finished.
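For readers tracing the argument, the chain behind Theorem 0.1 can be summarized in one display; this is our paraphrase of the standard reasoning, with c and D the constants of Eqs (12) and (14), not a quotation of the paper's derivation:

\[
\sum_{k\ge 1}\frac{(g_k^{T}d_k)^{2}}{\|d_k\|^{2}}<\infty,\quad
g_k^{T}d_k\le -c\|g_k\|^{2},\quad
\|d_k\|\le D\|g_k\|
\;\Longrightarrow\;
\frac{c^{2}}{D^{2}}\sum_{k\ge 1}\|g_k\|^{2}<\infty
\;\Longrightarrow\;
\lim_{k\to\infty}\|g_k\|=0 .
\]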

Numerical Results

When β_k and d_k are calculated by Eqs (4) and (3), respectively, in Step 4 of Algorithm 2.1, we call the resulting method the PRP conjugate gradient algorithm. We test Algorithm 2.1 and the PRP conjugate gradient algorithm on some benchmark problems. The test environment is MATLAB 7.0 on a Windows 7 system. We use the following Himmelblau stopping rule: if ∣f(x_k)∣ ≤ ɛ_2, let stop1 = ∣f(x_k) − f(x_{k+1})∣; otherwise, let stop1 = ∣f(x_k) − f(x_{k+1})∣ / ∣f(x_k)∣. The test program is stopped if stop1 < ɛ_3 is satisfied, where ɛ_2 = ɛ_3 = 10⁻⁶. The test program is also stopped when the total number of iterations is greater than one thousand. The test results are given in Tables 1 and 2: x_1 denotes the initial point, Dim denotes the dimension of the test function, NI denotes the total number of iterations, and NFG = NF + NG (NF and NG denote the number of function evaluations and the number of gradient evaluations, respectively). f(x_k) denotes the function value when the program is stopped. The test problems are listed after Tables 1 and 2.
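A direct transcription of this stopping rule, with the thresholds stated above, might look as follows (a sketch of the rule as described, not the authors' code):

```python
def himmelblau_stop(f_k, f_k1, eps2=1e-6, eps3=1e-6):
    # Himmelblau rule: absolute decrease when |f(x_k)| is small,
    # relative decrease otherwise; stop when the decrease is tiny.
    if abs(f_k) <= eps2:
        stop1 = abs(f_k - f_k1)
    else:
        stop1 = abs(f_k - f_k1) / abs(f_k)
    return stop1 < eps3
```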
Table 1

Test results for Algorithm 2.1.

Problem | Dim  | x_1                       | NI/NFG | f(x_k)
1       | 50   | (-426,-426,…,-426)        | 2/9    | 6.363783e-004
1       | 120  | (-426,-426,…,-426)        | 2/9    | 1.527308e-003
1       | 200  | (-426,-426,…,-426)        | 2/9    | 2.545514e-003
1       | 1000 | (-410,-410,…,-410)        | 3/12   | 1.272757e-002
2       | 50   | (3,3,…,3)                 | 0/2    | -1.520789e-060
2       | 120  | (5,5,…,5)                 | 0/2    | 0.000000e+000
2       | 200  | (6,6,…,6)                 | 0/2    | 0.000000e+000
2       | 1000 | (1,1,…,1)                 | 0/2    | -7.907025e-136
3       | 50   | (-0.00001,0,-0.00001,0,…) | 2/8    | 1.561447e-009
3       | 120  | (-0.00001,0,-0.00001,0,…) | 2/8    | 1.769900e-008
3       | 200  | (-0.00001,0,-0.00001,0,…) | 2/8    | 7.906818e-008
3       | 1000 | (0.000001,0,0.000001,0,…) | 2/8    | 9.619586e-008
4       | 50   | (-4,-4,…,-4)              | 1/6    | 1.577722e-028
4       | 120  | (-2,-2,…,-2)              | 1/6    | 3.786532e-028
4       | 200  | (1,1,…,1)                 | 1/6    | 7.730837e-027
4       | 1000 | (3,3,…,3)                 | 1/6    | 1.079951e-024
5       | 50   | (-7,0,-7,0,…)             | 2/10   | 0.000000e+000
5       | 120  | (0.592,0,0.592,0,…)       | 4/14   | 3.183458e-007
5       | 200  | (0.451,0,0.451,0,…)       | 4/14   | 3.476453e-007
5       | 1000 | (0.38,0,0.38,0,…)         | 1/6    | 0.000000e+000
6       | 50   | (1.001,1.001,…,1.001)     | 2/36   | 4.925508e-003
6       | 120  | (1.001,1.001,…,1.001)     | 2/36   | 1.198551e-002
6       | 200  | (1.001,1.001,…,1.001)     | 2/36   | 2.006158e-002
6       | 1000 | (1.001,1.001,…,1.001)     | 2/36   | 1.009107e-001
7       | 50   | (0.01,0,0.01,0,…)         | 0/2    | 3.094491e-002
7       | 120  | (-0.05,0,-0.05,0,…)       | 0/2    | 2.066363e-001
7       | 200  | (0.01,0,0.01,0,…)         | 0/2    | 3.094491e-002
7       | 1000 | (0.07,0,0.07,0,…)         | 0/2    | 3.233371e-001
8       | 50   | (0.003,0.003,…,0.003)     | 3/26   | 0.000000e+000
8       | 120  | (0.005,0.005,…,0.005)     | 2/9    | 0.000000e+000
8       | 200  | (0.006,0,0.006,0,…)       | 2/9    | 0.000000e+000
8       | 1000 | (0.015,0.015,…,0.015)     | 2/8    | 0.000000e+000
Table 2

Test results for the PRP conjugate gradient algorithm.

Problem | Dim  | x_1                       | NI/NFG | f(x_k)
1       | 50   | (-426,-426,…,-426)        | 2/24   | 6.363783e-004
1       | 120  | (-426,-426,…,-426)        | 2/11   | 1.527308e-003
1       | 200  | (-426,-426,…,-426)        | 3/41   | 2.545514e-003
1       | 1000 | (-410,-410,…,-410)        | 3/41   | 1.272757e-002
2       | 50   | (3,3,…,3)                 | 0/2    | -1.520789e-060
2       | 120  | (5,5,…,5)                 | 0/2    | 0.000000e+000
2       | 200  | (6,6,…,6)                 | 0/2    | 0.000000e+000
2       | 1000 | (1,1,…,1)                 | 0/2    | -7.907025e-136
3       | 50   | (-0.00001,0,-0.00001,0,…) | 2/8    | 1.516186e-009
3       | 120  | (-0.00001,0,-0.00001,0,…) | 2/8    | 1.701075e-008
3       | 200  | (-0.00001,0,-0.00001,0,…) | 2/8    | 7.579825e-008
3       | 1000 | (0.000001,0,0.000001,0,…) | 2/8    | 9.198262e-008
4       | 50   | (-4,-4,…,-4)              | 1/6    | 1.577722e-028
4       | 120  | (-2,-2,…,-2)              | 1/6    | 3.786532e-028
4       | 200  | (1,1,…,1)                 | 1/6    | 7.730837e-027
4       | 1000 | (3,3,…,3)                 | 1/6    | 1.079951e-024
5       | 50   | (-7,0,-7,0,…)             | 4/16   | 3.597123e-013
5       | 120  | (0.592,0,0.592,0,…)       | 5/17   | 3.401145e-007
5       | 200  | (0.451,0,0.451,0,…)       | 5/17   | 4.566281e-007
5       | 1000 | (0.38,0,0.38,0,…)         | 1/6    | 0.000000e+000
6       | 50   | (1.001,1.001,…,1.001)     | 2/36   | 4.925508e-003
6       | 120  | (1.001,1.001,…,1.001)     | 2/36   | 1.198551e-002
6       | 200  | (1.001,1.001,…,1.001)     | 2/36   | 2.006158e-002
6       | 1000 | (1.001,1.001,…,1.001)     | 2/36   | 1.009107e-001
7       | 50   | (0.01,0,0.01,0,…)         | 0/2    | 3.094491e-002
7       | 120  | (-0.05,0,-0.05,0,…)       | 0/2    | 2.066363e-001
7       | 200  | (0.01,0,0.01,0,…)         | 0/2    | 3.094491e-002
7       | 1000 | (0.07,0,0.07,0,…)         | 0/2    | 3.233371e-001
8       | 50   | (0.003,0.003,…,0.003)     | 2/10   | 0.000000e+000
8       | 120  | (0.005,0.005,…,0.005)     | 2/10   | 0.000000e+000
8       | 200  | (0.006,0,0.006,0,…)       | 2/10   | 0.000000e+000
8       | 1000 | (0.015,0.015,…,0.015)     | 2/22   | 3.636160e-009
The test problems are: Problem 1, Schwefel function; Problem 2, Langerman function; Problem 3, Schwefel's function; Problem 4, Sphere function; Problem 5, Griewangk function; Problem 6, Rosenbrock function; Problem 7, Ackley function; Problem 8, Rastrigin function. It is easy to see that the two algorithms are effective for the eight test problems listed in Tables 1 and 2. We use the performance-profile tool of Dolan and Moré [35] to analyze the numerical performance of the two algorithms. For the eight test problems, Fig 2 shows the numerical performance of the two algorithms when the information of NI is considered, and Fig 3 shows the numerical performance when the information of NFG is considered. From these two figures, it is easy to see that Algorithm 2.1 yields better numerical performance than the PRP conjugate gradient algorithm on the whole. From Tables 1 and 2 and the two figures, we can conclude that Algorithm 2.1 is effective and competitive for solving unconstrained optimization problems.
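The displayed definitions of the eight benchmarks are not reproduced in this record; for concreteness, here are the standard forms of two of them, with the caveat that the paper's exact variants may differ in constants or scaling:

```python
import numpy as np

def rosenbrock(x):                     # Problem 6, standard form
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):                      # Problem 8, standard form
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))
```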
Fig 2

Performance profiles of the two algorithms (NI).

Fig 3

Performance profiles of the two algorithms (NFG).

In the next section, a new algorithm is given for solving nonlinear equations. The sufficient descent property and the trust region property of the new algorithm are proved in Section 6; moreover, we establish the global convergence of the new algorithm there. In Section 7, the numerical results are presented.

New algorithm for nonlinear equations

We consider the system of nonlinear equations

q(x) = 0, x ∈ ℜ^n, (17)

where q : ℜ^n → ℜ^n is a continuously differentiable and monotone function. ∇q(x) denotes the Jacobian matrix of q(x); if ∇q(x) is symmetric, we call Eq (17) a system of symmetric nonlinear equations. As q(x) is monotone, the following inequality holds:

(q(x) − q(y))^T (x − y) ≥ 0 for all x, y ∈ ℜ^n.

If a norm function is defined by ψ(x) = ½‖q(x)‖², we obtain the unconstrained optimization problem

min ψ(x), x ∈ ℜ^n. (18)

We know directly that problem Eq (17) is equivalent to problem Eq (18). The iterative formula Eq (2) is also usually used in many algorithms for solving problem Eq (17). Many algorithms ([36-41], etc.) have been proposed for solving special classes of nonlinear equations. We are more interested in dealing with large-scale nonlinear equations. By Eq (2), it is easy to see that the step size α_k and the search direction d_k are the two factors most important for dealing with large-scale problems. When dealing with large-scale nonlinear equations and unconstrained optimization problems, there are many popular methods ([38, 42-46], etc.) for computing d_k, such as conjugate gradient methods, spectral gradient methods, and limited-memory quasi-Newton approaches. Some new line search methods [37, 47] have been proposed for calculating α_k. Li and Li [48] provided a new derivative-free line search (Eq (19)), in which α_k = max{γ, ργ, ρ²γ, …} with ρ ∈ (0,1), σ_3 > 0 and γ > 0. This line search method is very effective for solving large-scale nonlinear monotone equations. Solodov and Svaiter [49] presented a hybrid projection-proximal point algorithm that overcomes some drawbacks of using the form Eq (18) with nonlinear equations. Yuan et al. [50] proposed a three-term PRP conjugate gradient algorithm by using the projection-based technique, which was introduced by Solodov et al. [51] for optimization problems. The projection-based technique is very effective for solving nonlinear equations. It involves certain methods to compute the search direction d_k and certain line search methods to calculate α_k such that the trial point w_k = x_k + α_k d_k satisfies Eq (19). For any x* that satisfies q(x*) = 0, considering that q(x) is monotone, we can obtain

q(w_k)^T (x* − w_k) ≤ 0 < q(w_k)^T (x_k − w_k).

Thus, the current iterate x_k is strictly separated from the zeros of the system of equations Eq (17) by the hyperplane

H_k = {x ∈ ℜ^n ∣ q(w_k)^T (x − w_k) = 0}.

Then, the iterate x_{k+1} can be obtained by projecting x_k onto this hyperplane. The projection formula is

x_{k+1} = x_k − [q(w_k)^T (x_k − w_k) / ‖q(w_k)‖²] q(w_k). (20)

Yuan et al. [50] present a three-term Polak-Ribière-Polyak conjugate gradient algorithm in which the search direction d_k is a three-term PRP direction with y_{k−1} = q_k − q_{k−1}. The derivative-free line search method [48] and the projection-based technique are used by the algorithm of [50], which proved to be very suitable for solving large-scale nonlinear equations; the most attractive property of that algorithm is the trust region property of d_k. Motivated by our new modified PRP conjugate gradient formula proposed in Section 2, we propose a modified PRP conjugate gradient formula for nonlinear equations, given by Eqs (21) and (22), where u_3 > 0 and u_4 > 0. It is easy to see that β_k ≥ 0. Motivated by the above observations and [50], we present a new algorithm for solving problem Eq (17): it uses our modified PRP conjugate gradient formulas Eqs (21) and (22). Here, we list the new algorithm and its flowchart (Fig 4).
Fig 4

Flowchart of Algorithm 5.1.

Algorithm 5.1
Step 1: Given the initial point x_1 ∈ ℜ^n, ɛ_4 > 0, ρ ∈ (0,1), σ_3 > 0, γ > 0, u_3 > 0, u_4 > 0, set k := 1.
Step 2: If ‖q_k‖ ≤ ɛ_4, stop; otherwise, go to Step 3.
Step 3: Compute d_k by Eq (22) and calculate α_k by Eq (19).
Step 4: Set the trial point w_k = x_k + α_k d_k.
Step 5: If ‖q(w_k)‖ ≤ ɛ_4, stop and set x_{k+1} = w_k; otherwise, calculate x_{k+1} by Eq (20).
Step 6: Set k := k + 1; go to Step 2.
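Steps 3-5 combine a derivative-free backtracking search with a hyperplane projection. The sketch below shows that mechanism in Python: the projection is the standard Solodov-Svaiter formula quoted as Eq (20), while the acceptance test in the backtracking loop is one common variant from this literature, assumed here in place of the unreproduced Eq (19).

```python
import numpy as np

def df_line_search(q, x, d, gamma=1.0, rho=0.5, sigma3=1e-4, max_back=15):
    """Backtracking over {gamma, rho*gamma, rho^2*gamma, ...} (cf. Eq (19))."""
    alpha = gamma
    for _ in range(max_back):
        w = x + alpha * d
        # Assumed acceptance test (a common derivative-free variant):
        if -q(w) @ d >= sigma3 * alpha * np.linalg.norm(d)**2:
            return alpha, w
        alpha *= rho
    return alpha, x + alpha * d    # accept the last trial (cf. the 15-step cap)

def project_iterate(q, x, w):
    """Hyperplane projection, Eq (20):
    x_{k+1} = x_k - [q(w)^T (x - w) / ||q(w)||^2] q(w)."""
    qw = q(w)
    return x - (qw @ (x - w)) / (qw @ qw) * qw
```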

Convergence Analysis

When we analyze the global convergence of Algorithm 5.1, we require the following suitable assumptions.

Assumption 6.1 (a) The solution set of problem Eq (17) is nonempty. (b) q(x) is Lipschitz continuous; namely, there exists a constant E > 0 such that

‖q(x) − q(y)‖ ≤ E‖x − y‖ for all x, y ∈ ℜ^n. (23)

By Assumption 6.1, it is easy to obtain that there exists a positive constant ζ satisfying ‖q(x_k)‖ ≤ ζ for all k. (24)

Lemma 0.4 Let the sequence {d_k} be generated by Eq (22); then d_k satisfies the sufficient descent property q_k^T d_k ≤ −c̄‖q_k‖² and the trust region property ‖d_k‖ ≤ C̄‖q_k‖ for constants c̄, C̄ > 0. Proof As the proof is similar to those of Lemma 0.1 and Lemma 0.3 of this paper, we omit it here.

Similar to Lemma 3.1 of [50] and Theorem 2.1 of [51], it is easy to obtain the following lemma; here, we omit the proof and only state it.

Lemma 0.5 Suppose that Assumption 6.1 holds and x* is a solution of problem Eq (17), i.e., q(x*) = 0. Let the sequence {x_k} be generated by Algorithm 5.1; then {x_k} is a bounded sequence and ‖x_{k+1} − x*‖ ≤ ‖x_k − x*‖ holds. Moreover, either {x_k} is an infinite sequence with Σ_k ‖x_{k+1} − x_k‖² < ∞, or {x_k} is a finite sequence and the last iterate is a solution of problem Eq (17).

Lemma 0.6 Suppose that Assumption 6.1 holds; then an iterate x_{k+1} = x_k + α_k d_k is generated by Algorithm 5.1 in a finite number of backtracking steps. Proof We obtain this conclusion by contradiction: suppose the claim does not hold; then there exist a positive constant ɛ_5 with ‖q_k‖ ≥ ɛ_5 and an iterate index at which no step of the form α = γρ^i satisfies condition Eq (19). Letting i → ∞ and combining Assumption 6.1 (b) with Eqs (23)-(25), we obtain a positive lower bound on the rejected steps, which contradicts the definition of the backtracking step length; so the lemma holds.

Similar to Theorem 3.1 of [50], we state the following theorem but omit its proof.

Theorem 0.2 Let the sequences {x_k, q_k} and {α_k, d_k} be generated by Algorithm 5.1. Suppose that Assumption 6.1 holds; then lim inf_{k→∞} ‖q_k‖ = 0.
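The backbone of Lemma 0.5 is the Fejér-type inequality satisfied by hyperplane-projection methods [49, 51]; the display below is the standard form of that inequality under our assumptions, stated here as a reading aid rather than quoted from the paper:

\[
\|x_{k+1}-x^{*}\|^{2}\;\le\;\|x_{k}-x^{*}\|^{2}-\|x_{k+1}-x_{k}\|^{2}
\quad\text{for every } x^{*} \text{ with } q(x^{*})=0,
\]

so \(\{\|x_k-x^{*}\|\}\) is nonincreasing, \(\{x_k\}\) is bounded, and \(\sum_{k}\|x_{k+1}-x_k\|^{2}<\infty\).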

Numerical results

When the search direction d_k in Step 3 of Algorithm 5.1 is computed by the formula of the famous PRP conjugate gradient method [8, 9] (Eqs (3) and (4), with q in place of g), the resulting method is called the PRP algorithm. We test Algorithm 5.1 and the PRP algorithm on some problems in this section. The test environment is MATLAB 7.0 on a Windows 7 system. The test program is also stopped when the number of iterations is greater than or equal to one thousand five hundred. The test results are given in Tables 3 and 4. As we know, when the line search cannot guarantee that d_k satisfies the descent condition, an uphill search direction may be produced, and the line search method may fail in this case. To prevent this situation, when the number of line search steps is greater than or equal to fifteen in the inner cycle of our program, the current α_k is accepted. NI and NG stand for the number of iterations and the number of gradient evaluations, respectively. Dim denotes the dimension of the test function, and cputime denotes the CPU time in seconds. GF denotes the final function norm ‖q(x_k)‖ when the program terminates. The test functions all have the form q(x) = (f_1(x), f_2(x), …, f_n(x))^T; the concrete function definitions are given after Tables 3 and 4.
Table 3

Test results for Algorithm 5.1.

Function | Dim   | NI/NG   | cputime    | GF
1        | 3000  | 55/209  | 2.043613   | 9.850811e-006
1        | 5000  | 8/33    | 0.858005   | 6.116936e-006
1        | 30000 | 26/127  | 100.792246 | 8.983556e-006
1        | 45000 | 7/36    | 62.681202  | 7.863794e-006
1        | 50000 | 5/26    | 56.659563  | 5.807294e-006
2        | 3000  | 43/86   | 1.076407   | 8.532827e-006
2        | 5000  | 42/84   | 2.745618   | 8.256326e-006
2        | 30000 | 38/76   | 73.039668  | 8.065468e-006
2        | 45000 | 37/74   | 164.284653 | 8.064230e-006
2        | 50000 | 36/72   | 201.288090 | 9.519786e-006
3        | 3000  | 5/6     | 0.093601   | 1.009984e-008
3        | 5000  | 5/6     | 0.249602   | 6.263918e-009
3        | 30000 | 18/33   | 32.775810  | 2.472117e-009
3        | 45000 | 21/39   | 91.229385  | 2.840234e-010
3        | 50000 | 21/39   | 108.202294 | 2.661223e-010
4        | 3000  | 95/190  | 2.137214   | 9.497689e-006
4        | 5000  | 97/194  | 5.834437   | 9.048858e-006
4        | 30000 | 103/206 | 194.954450 | 8.891642e-006
4        | 45000 | 104/208 | 446.568463 | 9.350859e-006
4        | 50000 | 104/208 | 549.529123 | 9.856874e-006
5        | 3000  | 64/128  | 1.497610   | 9.111464e-006
5        | 5000  | 65/130  | 4.102826   | 9.525878e-006
5        | 30000 | 70/140  | 132.117247 | 8.131796e-006
5        | 45000 | 70/140  | 297.868309 | 9.959279e-006
5        | 50000 | 71/142  | 374.964004 | 8.502923e-006
6        | 3000  | 1/2     | 0.031200   | 0.000000e+000
6        | 5000  | 1/2     | 0.062400   | 0.000000e+000
6        | 30000 | 1/2     | 1.918812   | 0.000000e+000
6        | 45000 | 1/2     | 4.258827   | 0.000000e+000
6        | 50000 | 1/2     | 5.194833   | 0.000000e+000
7        | 3000  | 35/71   | 0.842405   | 9.291878e-006
7        | 5000  | 34/69   | 2.121614   | 8.658237e-006
7        | 30000 | 30/61   | 58.391174  | 8.288490e-006
7        | 45000 | 29/59   | 135.627269 | 8.443996e-006
7        | 50000 | 29/58   | 153.801386 | 9.993530e-006
8        | 3000  | 0/1     | 0.015600   | 0.000000e+000
8        | 5000  | 0/1     | 0.046800   | 0.000000e+000
8        | 30000 | 0/1     | 1.326008   | 0.000000e+000
8        | 45000 | 0/1     | 2.917219   | 0.000000e+000
8        | 50000 | 0/1     | 3.510022   | 0.000000e+000
Table 4

Test results for PRP algorithm.

Function | Dim   | NI/NG   | cputime    | GF
1        | 3000  | 58/220  | 2.043613   | 9.947840e-006
1        | 5000  | 24/97   | 2.496016   | 9.754454e-006
1        | 30000 | 29/141  | 109.668703 | 9.705424e-006
1        | 45000 | 13/66   | 118.108357 | 9.450575e-006
1        | 50000 | 10/51   | 112.383120 | 9.221806e-006
2        | 3000  | 48/95   | 1.138807   | 8.647042e-006
2        | 5000  | 46/91   | 2.932819   | 9.736889e-006
2        | 30000 | 41/81   | 78.733705  | 9.983531e-006
2        | 45000 | 40/79   | 181.709965 | 9.632281e-006
2        | 50000 | 40/79   | 212.832164 | 9.121412e-006
3        | 3000  | 11/12   | 0.171601   | 1.012266e-008
3        | 5000  | 11/12   | 0.530403   | 8.539532e-009
3        | 30000 | 23/38   | 39.749055  | 2.574915e-009
3        | 45000 | 26/44   | 100.542645 | 2.931611e-010
3        | 50000 | 26/44   | 123.864794 | 2.838473e-010
4        | 3000  | 104/208 | 2.246414   | 9.243312e-006
4        | 5000  | 106/212 | 6.193240   | 9.130520e-006
4        | 30000 | 113/226 | 219.821009 | 8.747379e-006
4        | 45000 | 114/228 | 487.908728 | 9.368026e-006
4        | 50000 | 114/228 | 611.976323 | 9.874918e-006
5        | 3000  | 35/53   | 0.561604   | 2.164559e-006
5        | 5000  | 35/53   | 1.716011   | 1.291210e-006
5        | 30000 | 35/53   | 55.926358  | 1.336971e-006
5        | 45000 | 33/49   | 116.361146 | 2.109293e-006
5        | 50000 | 33/49   | 147.452145 | 2.225071e-006
6        | 3000  | 1/2     | 0.031200   | 0.000000e+000
6        | 5000  | 1/2     | 0.062400   | 0.000000e+000
6        | 30000 | 1/2     | 1.965613   | 0.000000e+000
6        | 45000 | 1/2     | 4.290028   | 0.000000e+000
6        | 50000 | 1/2     | 5.257234   | 0.000000e+000
7        | 3000  | 40/80   | 0.904806   | 9.908999e-006
7        | 5000  | 39/78   | 2.386815   | 9.198351e-006
7        | 30000 | 34/68   | 66.440826  | 9.515010e-006
7        | 45000 | 33/66   | 140.026498 | 9.366998e-006
7        | 50000 | 33/66   | 173.597913 | 8.886013e-006
8        | 3000  | 0/1     | 0.015600   | 0.000000e+000
8        | 5000  | 0/1     | 0.031200   | 0.000000e+000
8        | 30000 | 0/1     | 1.279208   | 0.000000e+000
8        | 45000 | 0/1     | 2.808018   | 0.000000e+000
8        | 50000 | 0/1     | 3.432022   | 0.000000e+000
Function 1. Exponential function 2.
Function 2. Trigonometric function.
Function 3. Logarithmic function. Initial guess: x_0 = (1, 1, ⋯, 1).
Function 4. Broyden tridiagonal function [[52], pp. 471-472]. Initial guess: x_0 = (−1, −1, ⋯, −1).
Function 5. Strictly convex function 1 [[44], p. 29]: q(x) is the gradient of f(x) = Σ_{i=1}^n (e^{x_i} − x_i). A sketch of this function is given below.
Function 6. Variable dimensioned function.
Function 7. Discrete boundary value problem [53]. Initial guess: x_0 = (h(h − 1), h(2h − 1), ⋯, h(nh − 1)).
Function 8. Troesch problem [54]. Initial guess: x_0 = (0, 0, ⋯, 0).

By Tables 3 and 4, we see that Algorithm 5.1 and the PRP algorithm are both effective for solving the above eight problems. We use the tool of Dolan and Moré [35] to analyze the numerical performance of the two algorithms when NI, NG and cputime are considered, for which we generate three figures. Fig 5 shows that the numerical performance of Algorithm 5.1 is slightly better than that of the PRP algorithm when NI is considered. From Figs 6 and 7, it is easy to see that the numerical performance of Algorithm 5.1 is better than that of the PRP algorithm, because the PRP algorithm requires larger values on the horizontal axis before all of the problems are solved.
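Of the eight test problems, Function 5 is fully pinned down by its description; a minimal sketch, assuming the usual form from [44], is:

```python
import numpy as np

def strictly_convex_1(x):
    # Function 5: q(x) is the gradient of f(x) = sum_i (exp(x_i) - x_i),
    # i.e., q_i(x) = exp(x_i) - 1, whose unique zero is x = 0.
    return np.exp(x) - 1.0
```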
Fig 5

Performance profiles of the two algorithms (NI).

Fig 6

Performance profiles of the two algorithms (NG).

Fig 7

Performance profiles of the two algorithms (cputime).

From the above two tables and three figures, we see that Algorithm 5.1 is effective and competitive for solving large-scale nonlinear equations.

Conclusion

(i) This paper first provides a new algorithm based on the first modified PRP conjugate gradient method (Sections 1-4). The β_k formula of this method includes both gradient-value and function-value information. The global convergence of the algorithm is established under suitable conditions, and the trust region property and sufficient descent property of the method are proved without the use of any line search method. For the test functions considered, the numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems.

(ii) The second new algorithm, based on the second modified PRP conjugate gradient method, is presented in Sections 5-7. The new algorithm has global convergence under suitable conditions, and the trust region property and the sufficient descent property of the method are proved without the use of any line search method. The numerical results on the test functions show that the second algorithm is very effective for solving large-scale nonlinear equations.