
A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations.

Gonglin Yuan, Wujie Hu

Abstract

For large-scale unconstrained optimization problems and nonlinear equations, we propose a new three-term conjugate gradient algorithm under the Yuan-Wei-Lu line search technique. It combines the steepest descent method with the famous conjugate gradient algorithm, exploiting both the properties of the objective function and the information at the current iterate. It possesses the following properties: (i) the search direction has a sufficient descent feature and a trust region trait, and (ii) the proposed algorithm converges globally. Numerical results show that the proposed algorithm is competitive with other similar optimization algorithms.

Keywords:  Conjugate gradient; Descent property; Global convergence

Year:  2018        PMID: 29780210      PMCID: PMC5945721          DOI: 10.1186/s13660-018-1703-1

Source DB:  PubMed          Journal:  J Inequal Appl        ISSN: 1025-5834            Impact factor:   2.491


Introduction

It is well known that the optimization of small- and medium-scale smooth functions is relatively simple since many algorithms are available for it, such as Newton, quasi-Newton, and bundle algorithms. Note that these algorithms fail to effectively address large-scale optimization problems because they need to store and calculate the relevant matrices, whereas the conjugate gradient algorithm is successful because of its simplicity and efficiency. The optimization model is an important mathematical problem since it has been applied to various fields such as economics, engineering, and physics (see [1-12]). Fletcher and Reeves [13] successfully addressed large-scale unconstrained optimization problems on the basis of the conjugate gradient algorithm and obtained remarkable achievements. The conjugate gradient algorithm has become increasingly popular because of its simplicity and low memory and computational requirements. In general, a good conjugate gradient optimization algorithm consists of a good conjugate gradient direction and an inexact line search technique (see [14-18]). At present, the conjugate gradient algorithm is mostly applied to smooth optimization problems, and thus, in this paper, we propose a modified LS conjugate gradient algorithm to solve large-scale nonlinear equations and smooth problems.

The common algorithms for addressing nonlinear equations include Newton and quasi-Newton methods (see [19-21]), gradient-based and CG methods (see [22-24]), trust region methods (see [25-27]), and derivative-free methods (see [28]), and all of them fail to address large-scale problems. The famous optimization algorithms that are suitable for large-scale problems are the spectral gradient approach, the limited-memory quasi-Newton method, and the conjugate gradient algorithm. Li and Li [29] proposed various algorithms on the basis of a modified PRP conjugate gradient formula, which successfully solve large-scale nonlinear equations.

A famous mathematical model is given by

$$\min \{ f(x) : x \in R^n \}, \qquad (1.1)$$

where $f : R^n \to R$ is continuously differentiable and $n$ is the (possibly large) dimension. The relevant model is widely used in life and production. However, it is a complex mathematical model since it needs to meet various conditions in applications [30-33]. Experts and scholars have conducted numerous in-depth studies and have made some significant achievements (see [14, 34, 35]).

It is well known that the steepest descent algorithm is attractive since it is simple and its computational and memory requirements are low. It is regrettable that the steepest descent method sometimes fails to solve problems due to the "sawtooth phenomenon". To overcome this flaw, experts and scholars presented the efficient conjugate gradient method, which provides high performance with a simple form. In general, the iterative formula for (1.1) is

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (1.2)$$

where $x_{k+1}$ is the next iteration point, $\alpha_k$ is the step length, and $d_k$ is the search direction. The famous weak Wolfe-Powell (WWP) line search technique is determined by

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k \qquad (1.3)$$

and

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad (1.4)$$

where $\delta \in (0, 1/2)$, $\sigma \in (\delta, 1)$, and $g_k = g(x_k) = \nabla f(x_k)$. The direction $d_k$ is often defined by the formula

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0, \qquad (1.5)$$

where $\beta_k \in R$ is a scalar. An increasing number of efficient conjugate gradient algorithms have been proposed by choosing different expressions of $\beta_k$ (see [13, 36-42] etc.). The well-known PRP algorithm is given by

$$\beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \qquad (1.6)$$

where $g_k$, $g_{k+1}$, and $y_k$ denote $g(x_k)$, $g(x_{k+1})$, and $g_{k+1} - g_k$, respectively; $g(x)$ is the gradient function at the point $x$. It is well known that the PRP algorithm is efficient but has shortcomings, as it does not possess global convergence under the WWP line search technique. To solve this complex problem, Yuan, Wei, and Lu [43] developed the following creative modification (YWL) of the normal WWP line search technique and obtained many fruitful theories:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k + \alpha_k \min\Big\{ -\delta_1 g_k^T d_k, \frac{\delta \alpha_k \|d_k\|^2}{2} \Big\} \qquad (1.7)$$

and

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k + \min\big\{ -\delta_1 g_k^T d_k, \delta \alpha_k \|d_k\|^2 \big\}, \qquad (1.8)$$

where $\delta \in (0, 1/2)$, $\delta_1 \in (0, \delta)$, $\sigma \in (\delta, 1)$, and $\alpha_k > 0$.
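To make the classical scheme concrete, the following Python sketch implements the iteration (1.2) with the PRP formula (1.6) under a bisection-type weak Wolfe-Powell line search enforcing (1.3) and (1.4). It illustrates the classical method only: it uses the plain WWP test rather than the YWL technique (1.7)-(1.8), adds the common PRP+ truncation max{beta, 0} for robustness, and all function names and parameter values are our assumptions, not the paper's.

```python
import numpy as np

def wwp_line_search(f, grad, x, d, delta=0.1, sigma=0.9, alpha=1.0, max_iter=50):
    # Bisection-type weak Wolfe-Powell search: shrink when the decrease
    # condition (1.3) fails, expand/bisect when the curvature condition
    # (1.4) fails (a textbook scheme, not the YWL technique).
    lo, hi = 0.0, np.inf
    fx, slope = f(x), grad(x).dot(d)
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + delta * alpha * slope:      # (1.3) violated
            hi = alpha
        elif grad(x + alpha * d).dot(d) < sigma * slope:       # (1.4) violated
            lo = alpha
        else:
            return alpha
        alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
    return alpha

def prp_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    # Classical PRP conjugate gradient: iteration (1.2), direction (1.5),
    # beta from (1.6) with the PRP+ truncation max(beta, 0).
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wwp_line_search(f, grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)
        d = -g_new + beta * d
        g = g_new
    return x

# Usage on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(prp_cg(f, grad, np.array([-1.2, 1.0])))
```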
Further work can be found in [24]. Based on the innovation of the YWL line search technique, Yuan et al. paid much attention to the normal Armijo line search technique and made a further study. They proposed an efficient modified Armijo line search technique, in which the step length $\alpha_k$ is the largest element of $\{\beta \rho^m : m = 0, 1, 2, \ldots\}$ satisfying a relaxed Armijo-type sufficient decrease condition (1.9), where $\beta > 0$ and $\rho \in (0, 1)$.

In addition, experts and scholars have paid much attention to three-term conjugate gradient formulas. Zhang et al. [44] proposed the famous formula

$$d_{k+1} = -g_{k+1} + \beta_k^{PRP} d_k - \theta_k y_k, \qquad \theta_k = \frac{g_{k+1}^T d_k}{\|g_k\|^2}, \qquad (1.10)$$

and Nazareth [45] proposed the new formula

$$d_{k+1} = -y_k + \frac{y_k^T y_k}{d_k^T y_k} d_k + \frac{y_{k-1}^T y_k}{d_{k-1}^T y_{k-1}} d_{k-1}, \qquad (1.11)$$

where $y_k = g_{k+1} - g_k$ and $d_0 = -g_0$. These two conjugate gradient methods have a sufficient descent property but fail to have the trust region feature. To improve these methods, Yuan et al. [46, 47] made a further study and obtained some good results. This inspires us to continue the study and extend the conjugate gradient methods to obtain better results.

In this paper, motivated by these in-depth discussions, we present a modified conjugate gradient algorithm with the following properties: (i) the search direction has a sufficient descent feature and a trust region trait; (ii) under mild assumptions, the proposed algorithm possesses global convergence; (iii) the new algorithm combines the steepest descent method with the conjugate gradient algorithm; (iv) numerical results show that it compares favorably with other similar algorithms.

The rest of the paper is organized as follows. The next section presents the proposed algorithm. The global convergence is stated in Sect. 3. In Sect. 4, we report the corresponding numerical results. In Sect. 5, we introduce the large-scale nonlinear equations and express the new algorithm. Some necessary properties are listed in Sect. 6. The numerical results are reported in Sect. 7. Without loss of generality, $f(x_k)$ and $g(x_k)$ are replaced by $f_k$ and $g_k$, and $\|\cdot\|$ is the Euclidean norm.

New modified conjugate gradient algorithm

Experts and scholars have conducted thorough research on the conjugate gradient algorithm and have obtained rich theoretical achievements. In light of the previous work on the conjugate gradient algorithm, a sufficient descent feature is necessary for global convergence. Thus, we express a new three-term conjugate gradient direction (2.1) under the YWL line search technique: it augments the steepest descent direction $-g_{k+1}$ with correction terms along $d_k$ and $y_k$ whose common scaling guarantees both the sufficient descent property and the trust region property established below, where $y_k = g_{k+1} - g_k$, $d_0 = -g_0$, and the scaling parameter is a positive constant ($k \ge 0$). The search direction is well defined, and its properties are stated in the next section. Now, we introduce the new conjugate gradient method as Algorithm 2.1 (modified three-term conjugate gradient algorithm for the optimization model).
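The exact three-term formula (2.1) is given in the original paper; as a hedged illustration, the following sketch implements one standard construction of this type, in which the two correction terms cancel in $g_{k+1}^T d_{k+1}$ (yielding the sufficient descent property) and the denominator scaling bounds $\|d_{k+1}\|$ by a multiple of $\|g_{k+1}\|$ (yielding the trust region property). The scaling constant u and the helper name are assumptions for illustration.

```python
import numpy as np

def three_term_direction(g_new, d, y, u=0.1):
    # Sketch of a three-term direction with sufficient descent and a trust
    # region bound (assumed scaling; the exact formula (2.1) may differ).
    # The correction terms cancel in g_new^T d_new, giving
    # g_new^T d_new = -||g_new||^2, and since the denominator is at least
    # u * ||d|| * ||y||, we get ||d_new|| <= (1 + 2/u) * ||g_new||.
    denom = max(u * np.linalg.norm(d) * np.linalg.norm(y),
                np.linalg.norm(g_new) ** 2, 1e-12)
    return -g_new + (g_new.dot(y) * d - g_new.dot(d) * y) / denom
```

Within a full method, a direction of this kind replaces the two-term update (1.5), while the step length is still produced by the YWL line search (1.7)-(1.8).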

Important characteristics

This section lists some important properties of Algorithm 2.1, namely the sufficient descent property, the trust region feature, and the global convergence, and gives the necessary proofs.

Lemma 3.1

If the search direction $d_k$ is defined by (2.1), then

$$g_k^T d_k = -\|g_k\|^2 \qquad (3.1)$$

and

$$\|d_k\| \le C \|g_k\| \qquad (3.2)$$

hold with a constant $C > 0$.

Proof

It is obvious that formulas (3.1) and (3.2) are true for $k = 0$ since $d_0 = -g_0$. Now consider the case $k > 0$. Multiplying (2.1) by $g_k^T$, the two correction terms cancel each other, which gives (3.1); taking norms in (2.1) and applying the Cauchy-Schwarz inequality to the correction terms, whose denominator dominates their numerators, gives (3.2). Thus, the statement is proved. □

By (3.1) and (3.2), the algorithm has a sufficient descent feature and a trust region trait. To obtain the global convergence, we make the following necessary assumptions.
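As a quick numerical illustration of Lemma 3.1, the following check verifies that the sketch construction shown in Sect. 2 satisfies (3.1) exactly and the trust region bound (3.2) with $C = 1 + 2/u$ (again under our assumed scaling, not necessarily the exact formula (2.1)):

```python
import numpy as np

rng = np.random.default_rng(0)
u = 0.1
for _ in range(1000):
    g, d, y = rng.standard_normal((3, 50))   # random g_{k+1}, d_k, y_k
    denom = max(u * np.linalg.norm(d) * np.linalg.norm(y),
                np.linalg.norm(g) ** 2)
    d_new = -g + (g.dot(y) * d - g.dot(d) * y) / denom
    assert np.isclose(g.dot(d_new), -g.dot(g))                              # (3.1)
    assert np.linalg.norm(d_new) <= (1 + 2 / u) * np.linalg.norm(g) + 1e-8  # (3.2)
print("sufficient descent (3.1) and trust region (3.2) hold")
```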

Assumption 1

(i) The level set $L_0 = \{ x \in R^n : f(x) \le f(x_0) \}$ is bounded. (ii) The objective function $f$ is bounded from below, and its gradient function $g$ is Lipschitz continuous, that is, there exists a constant $\zeta > 0$ such that

$$\|g(x) - g(y)\| \le \zeta \|x - y\|, \qquad x, y \in R^n. \qquad (3.3)$$

The existence and necessity of the step length $\alpha_k$ are established in [43]. In view of the discussion and the established technique, the global convergence of the proposed algorithm is expressed as follows.

Theorem 3.1

If Assumptions (i)-(ii) are satisfied and the sequences $\{x_k\}$, $\{d_k\}$, $\{g_k\}$, and $\{\alpha_k\}$ are generated by Algorithm 2.1, then

$$\lim_{k \to \infty} \|g_k\| = 0.$$

Proof. By (1.7) and (3.1) we have

$$f_k - f_{k+1} \ge (\delta - \delta_1) \alpha_k \|g_k\|^2.$$

Summing these inequalities from $k = 0$ to ∞, under Assumption (ii) we obtain

$$\sum_{k=0}^{\infty} (\delta - \delta_1) \alpha_k \|g_k\|^2 < \infty.$$

This means that $\lim_{k \to \infty} \alpha_k \|g_k\|^2 = 0$. Similarly, by (1.8) and (3.1) we obtain $g_{k+1}^T d_k \ge \sigma g_k^T d_k = -\sigma \|g_k\|^2$. Thus, we obtain the following inequality:

$$(1 - \sigma) \|g_k\|^2 \le (g_{k+1} - g_k)^T d_k \le \zeta \alpha_k \|d_k\|^2,$$

where the last inequality is obtained since the gradient function is Lipschitz continuous by (3.3). Then, by (3.2) we have

$$\alpha_k \ge \frac{(1 - \sigma) \|g_k\|^2}{\zeta \|d_k\|^2} \ge \frac{1 - \sigma}{\zeta C^2}.$$

Combining this lower bound on $\alpha_k$ with $\lim_{k \to \infty} \alpha_k \|g_k\|^2 = 0$, we arrive at the conclusion as claimed. □

Numerical results

In this section, we list the numerical results in terms of the algorithm characteristics NI, NFG, and CPU, where NI is the total number of iterations, NFG is the total number of objective function and gradient evaluations, and CPU is the computation time in seconds.

Problems and test experiments

The tested problems listed in Table 1 stem from [48]. At the same time, we introduce two other algorithms in this section to measure the efficiency of the proposed algorithm on the tested problems. We denote the two algorithms as Algorithm 2 and Algorithm 3. They differ from Algorithm 2.1 only at Step 5: the direction of one is determined by (1.10), and that of the other is computed by (1.11).
Table 1

Test problems

No.  Problem
1    Extended Freudenstein and Roth Function
2    Extended Trigonometric Function
3    Extended Rosenbrock Function
4    Extended White and Holst Function
5    Extended Beale Function
6    Extended Penalty Function
7    Perturbed Quadratic Function
8    Raydan 1 Function
9    Raydan 2 Function
10   Diagonal 1 Function
11   Diagonal 2 Function
12   Diagonal 3 Function
13   Hager Function
14   Generalized Tridiagonal 1 Function
15   Extended Tridiagonal 1 Function
16   Extended Three Exponential Terms Function
17   Generalized Tridiagonal 2 Function
18   Diagonal 4 Function
19   Diagonal 5 Function
20   Extended Himmelblau Function
21   Generalized PSC1 Function
22   Extended PSC1 Function
23   Extended Powell Function
24   Extended Block Diagonal BD1 Function
25   Extended Maratos Function
26   Extended Cliff Function
27   Quadratic Diagonal Perturbed Function
28   Extended Wood Function
29   Extended Hiebert Function
30   Quadratic Function QF1 Function
31   Extended Quadratic Penalty QP1 Function
32   Extended Quadratic Penalty QP2 Function
33   A Quadratic Function QF2 Function
34   Extended EP1 Function
35   Extended Tridiagonal-2 Function
36   BDQRTIC Function (CUTE)
37   TRIDIA Function (CUTE)
38   ARWHEAD Function (CUTE)
39   NONDIA Function (CUTE)
40   NONDQUAR Function (CUTE)
41   DQDRTIC Function (CUTE)
42   EG2 Function (CUTE)
43   DIXMAANA Function (CUTE)
44   DIXMAANB Function (CUTE)
45   DIXMAANC Function (CUTE)
46   DIXMAANE Function (CUTE)
47   Partial Perturbed Quadratic Function
48   Broyden Tridiagonal Function
49   Almost Perturbed Quadratic Function
50   Tridiagonal Perturbed Quadratic Function
51   EDENSCH Function (CUTE)
52   VARDIM Function (CUTE)
53   STAIRCASE S1 Function
54   LIARWHD Function (CUTE)
55   DIAGONAL 6 Function
56   DIXON3DQ Function (CUTE)
57   DIXMAANF Function (CUTE)
58   DIXMAANG Function (CUTE)
59   DIXMAANH Function (CUTE)
60   DIXMAANI Function (CUTE)
61   DIXMAANJ Function (CUTE)
62   DIXMAANK Function (CUTE)
63   DIXMAANL Function (CUTE)
64   DIXMAAND Function (CUTE)
65   ENGVAL1 Function (CUTE)
66   FLETCHCR Function (CUTE)
67   COSINE Function (CUTE)
68   Extended DENSCHNB Function (CUTE)
69   DENSCHNF Function (CUTE)
70   SINQUAD Function (CUTE)
71   BIGGSB1 Function (CUTE)
72   Partial Perturbed Quadratic PPQ2 Function
73   Scaled Quadratic SQ1 Function
Stopping rule: If $|f(x_k)| > e_1$, let $stop1 = |f(x_k) - f(x_{k+1})| / |f(x_k)|$; otherwise, let $stop1 = |f(x_k) - f(x_{k+1})|$. The algorithm stops when one of the following conditions is satisfied: $\|g(x_k)\| < \varepsilon$, the iteration number is greater than 2000, or $stop1 < e_2$, where $\varepsilon$, $e_1$, and $e_2$ are small positive tolerances. In Table 1, "No." and "Problem" represent the index and the name of the tested problem, respectively.

Initiation: the parameters of the line search and of the search direction are fixed at prescribed values.

Dimension: 1200, 3000, 6000, 9000.

Calculation environment: a computer with 2 GB of memory, a Pentium(R) Dual-Core CPU E5800@3.20 GHz, and the 64-bit Windows 7 operating system.

The numerical results, with the corresponding problem indices, are listed in Table 2. Then, based on the technique in [49], performance profiles are plotted for the three discussed algorithms.
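The profile technique of [49] (Dolan and More) can be sketched as follows: for each solver, $\rho_s(\tau)$ is the fraction of problems solved within a factor $\tau$ of the best solver's cost. The cost matrix in the usage line is hypothetical, not the paper's data.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, labels, tau_max=10.0):
    # Dolan-More performance profiles [49]: T is an (n_problems x n_solvers)
    # array of costs (NI, NFG, or CPU time), with np.inf marking failures.
    ratios = T / T.min(axis=1, keepdims=True)     # performance ratios r_{p,s}
    taus = np.linspace(1.0, tau_max, 500)
    for s, label in enumerate(labels):
        rho = [(ratios[:, s] <= t).mean() for t in taus]
        plt.step(taus, rho, where="post", label=label)
    plt.xlabel("tau"); plt.ylabel("rho_s(tau)"); plt.legend(); plt.show()

# Hypothetical cost matrix for three problems and two solvers
performance_profile(np.array([[0.12, 0.40], [1.96, 0.67], [0.03, 0.25]]),
                    ["Algorithm 2.1", "Algorithm 2"])
```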
Table 2

Numerical results

No.  Dim   Algorithm 2.1        Algorithm 2          Algorithm 3
           NI / NFG / CPU       NI / NFG / CPU       NI / NFG / CPU
190004200.12480114480.4056035260.249602
29000713271.96561327890.670804321360.858005
390007200.0312371600.249602271470.202801
4900012490.280802341610.717605422190.951606
5900013560.20280120630.2496025240.0624
69000652520.421203431430.280802390.0312
7900011370.06244789792.21521446514792.558416
890005200.062422550.15600114540.156001
990006160.03125210.0624380.0312
1090002130.01562130.0000012130.000001
1190003170.03127340.062417870.218401
1290003100.031219400.20280114500.202801
1390003240.06243240.03123240.0156
1490004124.3056285145.3820345145.226033
15900019779.98406422669.516061217110.296066
1690003110.06246270.0786180.0624
17900011450.37440227690.78000527870.811205
1890005230.03123100.0000013100.0312
199000390.0624390.03123190.0312
20900019760.12480115360.0624390.0312
21900012470.15600113610.18720115590.218401
2290007460.7956058700.5772046460.686404
2390009450.2184011013572.090413461500.873606
2490005470.09360114880.15600114970.249602
2590009280.0312402140.2496028460.0624
269000241020.327602241000.2496023240.0312
2790006200.0312341090.187201923210.530403
28900013500.12480120830.10920123840.140401
2990006360.04684210.03124210.0312
30900011370.06244549311.45080942413461.747211
31900018630.12480115510.0936013100.0312
32900018700.21840123610.2184013180.0624
339000250.000001250.0312250.000001
3490008160.03126120.0312360.0312
3590004130.03124100.0312380.000001
3690007234.6020298285.56923610478.673656
3790007230.0624141228296.9420442000602111.356873
3890004180.03128350.1872014110.0312
3990005190.031228560.124801380.0312
40900013430.561604835293636.2234329410.421203
41900010320.062417410.09360122810.124801
4290004330.062413350.1248019470.109201
43900016621.02960716380.95160613480.780005
4490003170.1560019500.6240043170.187201
459000211181.4976112810.8580063240.202801
46900020811.43520920944311.2476721103626.630042
479000113727.066173309768.640443711287.220159
48900013549.718862319218.610919235011.980877
49900011370.06244789791.5132150415921.887612
50900011377.971651472967263.688494441273299.381519
5190006310.1560017250.2184013170.124801
529000621860.998406631950.8424054210.0624
53900010320.0312200040597.72205186556187.971651
5490004110.031221790.15600117790.124801
55900010243.0108197253.2136213101.076407
5690007210.0156200040036.489642139041075.335234
5790005390.358802672204.0248263240.202801
5890005240.3432021142826.411641823155.257234
5990005390.343202683104.726833230.171601
60900018741.29480820643711.1072711193636.957645
6190005390.358802852474.9296323240.218401
6290004320.2340014320.2496023220.187201
6390003220.1872013220.1872013220.187201
6490005390.343202231471.7472113230.218401
659000125915.334898145114.9448967216.130839
669000391.62241200040221114.7675465292196443.526443
6790005280.09360115580.2808023230.0312
68900013550.10920111270.06249250.0624
69900016730.21840124550.18720120700.171601
7090004132.5428164120336.3326333523137.783442
71900011350.093601200040146.708043149146315.600436
72900093021.85574108938972675.5887512871015704.391315
73900019650.09360160712691.85641266920622.293215
Other cases: To save space, we list only the data for dimension 9000; the remaining data are given in the attachment.

Results and discussion

Obviously, the objective algorithm (Algorithm 2.1) is more effective than the other algorithms since its performance profile lies above the other curves. In Fig. 1, the curve of the proposed algorithm is above the other curves, which means that the objective algorithm solves the test problems with fewer iterations; moreover, Algorithm 3 is better than Algorithm 2. In Fig. 2, the profile of the proposed algorithm starts at a higher value, which means that it has high efficiency, and its curve is smoother than the others. It is well known that the calculation time (CPU time) is one of the most important metrics and an essential aspect of measuring the efficiency of an algorithm. Based on Fig. 3, the objective algorithm fully utilizes its outstanding characteristics and therefore saves time compared to the other algorithms in addressing complex problems.
Figure 1

Performance profiles of these methods (NI)

Figure 2

Performance profiles of these methods (NFG)

Figure 3

Performance profiles of these methods (CPU time)


Nonlinear equations

The model of nonlinear equations is given by

$$h(x) = 0, \qquad x \in R^n, \qquad (5.1)$$

where the function $h : R^n \to R^n$ is continuously differentiable and monotone, that is,

$$(h(x) - h(y))^T (x - y) \ge 0, \qquad x, y \in R^n.$$

Researchers have paid much attention to this model since it significantly influences various fields such as physics and computer technology (see [1–3, 8–11]), and it has resulted in many fruitful theories and good techniques (see [47, 50–54]). By mathematical calculation we obtain that (5.1) is equivalent to the model

$$\min_{x \in R^n} \varphi(x) = \frac{1}{2} \|h(x)\|^2, \qquad (5.2)$$

where $\|\cdot\|$ is the Euclidean norm. Then, we pay much attention to the mathematical model (5.2) since (5.1) and (5.2) have the same solutions. In general, the trial point for (5.2) is generated by $z_k = x_k + \alpha_k d_k$. Now, we introduce the following famous line search technique into this paper [47, 55]: the step length $\alpha_k$ is determined by backtracking so that the trial point $z_k = x_k + \alpha_k d_k$ satisfies

$$-h(z_k)^T d_k \ge \sigma \alpha_k \|d_k\|^2, \qquad (5.3)$$

where $\sigma > 0$, $\alpha_k = \xi r^{i_k}$, $\xi > 0$, and $r \in (0, 1)$, with $i_k$ the smallest nonnegative integer such that (5.3) holds. Solodov [56] proposed a projection proximal point algorithm in a Hilbert space that finds the zeros of set-valued maximal monotone operators. Ceng and Yao [57-60] paid much attention to research in Hilbert spaces and obtained successful achievements. Solodov and Svaiter [61] applied the projection technique to large-scale nonlinear equations and obtained some ideal achievements.

For the projection-based technique, the next iterate is generated by a flexible projection formula, described below. The search direction is extremely important for the proposed algorithm since it largely determines the efficiency; likewise, the algorithm contains the line search technique above. By the monotonicity of $h$ we obtain

$$h(z_k)^T (x^* - z_k) \le 0,$$

where $x^*$ is a solution of (5.1). We consider the hyperplane

$$H_k = \{ x \in R^n : h(z_k)^T (x - z_k) = 0 \}. \qquad (5.4)$$

It is obvious that the hyperplane $H_k$ separates the current iteration point $x_k$ from the zeros of the mathematical model (5.1). Then, we calculate the next iteration point through the projection of the current point $x_k$ onto $H_k$:

$$x_{k+1} = x_k - \frac{h(z_k)^T (x_k - z_k)}{\|h(z_k)\|^2} h(z_k). \qquad (5.5)$$

In [55] it is proved that formula (5.5) is effective since it not only obtains good numerical results but also has good theoretical characteristics; thus, we introduce it here. The search direction (5.6) is defined as in (2.1) with the gradient $g$ replaced by $h$, so that the sufficient descent and trust region properties carry over, where $y_k = h(z_k) - h(x_k)$, $d_0 = -h(x_0)$, and the scaling parameter is a positive constant ($k \ge 0$). Now, we express the specific content of the proposed algorithm, Algorithm 5.1 (modified three-term conjugate gradient algorithm for large-scale nonlinear equations).
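A minimal sketch of one iteration of such a projection method follows, assuming a Solodov-Svaiter-type acceptance test in place of the paper's exact condition (5.3); the parameter values, function names, and the small monotone test system are illustrative assumptions.

```python
import numpy as np

def projection_step(h, x, d, sigma=1e-4, r=0.5, xi=1.0, max_back=30):
    # One iteration of a Solodov-Svaiter-type projection method [61] (sketch).
    # Backtrack until z = x + alpha*d satisfies -h(z)^T d >= sigma*alpha*||d||^2,
    # then project x onto the separating hyperplane H_k of (5.4) via (5.5).
    alpha = xi
    for _ in range(max_back):
        z = x + alpha * d
        hz = h(z)
        if -hz.dot(d) >= sigma * alpha * d.dot(d):          # acceptance test
            return x - (hz.dot(x - z) / hz.dot(hz)) * hz    # projection (5.5)
        alpha *= r                                          # backtrack
    return x  # no acceptable step within max_back backtracking steps

# Usage on a small monotone system h(x) = x + sin(x) (illustrative only)
h = lambda x: x + np.sin(x)
x = np.array([2.0, -1.5])
for _ in range(50):
    x = projection_step(h, x, -h(x))   # steepest-descent-like direction
print(x, np.linalg.norm(h(x)))
```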

The global convergence of Algorithm 5.1

First, we make the following necessary assumptions.

Assumption 2

(i) The objective model (5.1) has a nonempty solution set. (ii) The function $h$ is Lipschitz continuous on $R^n$, which means that there is a positive constant $L$ such that

$$\|h(x) - h(y)\| \le L \|x - y\|, \qquad x, y \in R^n.$$

By Assumption 2(ii) and the boundedness of the iterates, it is obvious that

$$\|h(x_k)\| \le \theta,$$

where $\theta$ is a positive constant. Then, the necessary properties of the search direction are the following (we omit the proofs):

$$h_k^T d_k = -\|h_k\|^2 \qquad (6.1)$$

and

$$\|d_k\| \le C \|h_k\| \qquad (6.2)$$

with a constant $C > 0$, where $h_k = h(x_k)$. Now, we give some lemmas, which we utilize to obtain the global convergence of the proposed algorithm.

Lemma 6.1

Suppose that Assumption 2 holds, the sequence $\{x_k\}$ is produced by Algorithm 5.1, and the point $x^*$ is a solution of the objective model (5.1). Then

$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \|x_{k+1} - x_k\|^2$$

holds, and the sequence $\{x_k\}$ is bounded. Furthermore, either the last iteration point is a solution of the objective model, or the sequence is infinite and satisfies

$$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^2 < \infty.$$

We state this lemma without proof since the proof is similar to that in [61].

Lemma 6.2

If Assumption 2 holds, then Algorithm 5.1 generates, in a finite number of backtracking steps, a step length $\alpha_k > 0$ satisfying the line search condition (5.3).

Proof. Suppose that Algorithm 5.1 has not terminated, so that there exists a constant $\varepsilon > 0$ with $\|h(x_k)\| \ge \varepsilon$. We prove the conclusion by contradiction. Suppose that at some iteration index $k$ no trial step length meets the condition (5.3) of the line search technique. Without loss of generality, denote the trial step lengths by $\alpha' = \xi r^i$, where $i = 0, 1, 2, \ldots$. This means that

$$-h(x_k + \alpha' d_k)^T d_k < \sigma \alpha' \|d_k\|^2 \quad \text{for all } i.$$

Letting $i \to \infty$, so that $\alpha' \to 0$, and using the continuity of $h$ from Assumption 2(ii) together with the sufficient descent property (6.1), we obtain

$$\|h(x_k)\|^2 = -h(x_k)^T d_k \le 0.$$

It is obvious that this contradicts $\|h(x_k)\| \ge \varepsilon$ and fails to meet the definition of the step length $\alpha_k$. Thus, we conclude that the proposed line search technique is reasonable and well defined; in other words, the line search technique generates a positive step length within a finite number of backtracking repetitions. By the established conclusion we propose the following theorem on the global convergence of the proposed algorithm. □

Theorem 6.1

If Assumption 2 holds and the relevant sequences $\{x_k\}$, $\{z_k\}$, $\{d_k\}$, and $\{\alpha_k\}$ are calculated using Algorithm 5.1, then

$$\liminf_{k \to \infty} \|h(x_k)\| = 0.$$

Proof. We prove this by contradiction: suppose that there exist a constant $\varepsilon_0 > 0$ and an index $k_0$ such that $\|h(x_k)\| \ge \varepsilon_0$ for all $k \ge k_0$. On the one hand, by (6.2) and the boundedness of $\|h(x_k)\|$, the sequence $\{d_k\}$ is bounded; on the other hand, by (6.1) we have $\|d_k\| \ge \|h(x_k)\| \ge \varepsilon_0$, so $\{\|d_k\|\}$ is also bounded away from zero. By Lemma 6.1 the sequence $\{x_k\}$ is bounded and satisfies $\|x_{k+1} - x_k\| \to 0$, which, by the projection step (5.5) and the line search (5.3), implies $\alpha_k \|d_k\| \to 0$ and hence $\alpha_k \to 0$. Since $\{x_k\}$ and $\{d_k\}$ are bounded, there exist an infinite index set $K$ and accumulation points $\bar{x}$ and $\bar{d}$ such that $x_k \to \bar{x}$ and $d_k \to \bar{d}$ for $k \in K$. By Lemma 6.2 the trial step lengths $\alpha_k / r$ fail the acceptance test (5.3) for all large $k \in K$; taking the limit in this failed test along $K$ yields $-h(\bar{x})^T \bar{d} \le 0$, whereas taking the limit in (6.1) yields $-h(\bar{x})^T \bar{d} = \|h(\bar{x})\|^2 \ge \varepsilon_0^2 > 0$. The obtained contradiction completes the proof. □

The results of nonlinear equations

In this section, we list the relevant numerical results for nonlinear equations, where the objective function is $\varphi(x) = \frac{1}{2}\|h(x)\|^2$ and the information on the tested functions is listed in Table 3. To measure the efficiency of the proposed algorithm, we compare it in this section with the method whose direction is given by (1.10) (denoted as Algorithm 6) using the three characteristics "NI", "NG", and "CPU"; we remind the reader that Algorithm 6 is otherwise identical to Algorithm 5.1. "NI" denotes the number of iterations, "NG" the number of function evaluations, and "CPU" the processing time for the tested problems. In Table 3, "No." and "Problem" express the indices and the names of the test problems.

Stopping rule: If $\|h(x_k)\| < \varepsilon$ or the whole iteration number is greater than 2000, the algorithm stops.

Initiation: the parameters of the line search and of the search direction are fixed at prescribed values.

Dimension: 3000, 6000, 9000.

Calculation environment: a computer with 2 GB of memory, a Pentium(R) Dual-Core CPU E5800@3.20 GHz, and the 64-bit Windows 7 operating system.

The numerical results with the corresponding problem indices are listed in Table 4. Then, by the technique in [49], performance profiles are plotted for the two discussed algorithms.
Table 4

Numerical results

No.  Dim   Algorithm 5.1        Algorithm 6
           NI / NG / CPU        NI / NG / CPU
130001611623.9312251461474.149627
1600012612712.76088211511611.122871
1900011111222.4641449910019.515725
230005761.1856085761.060807
260006914.7580315764.009226
290005626.9264445626.754843
33000332283.276021181061.778411
360004027515.490899181066.084039
390004028533.2438131810612.54248
430004610.8424054610.936006
460004472.6988174613.322821
490004475.2260334616.817244
53000232373.244821232373.354022
560002526314.1336912526313.930889
590002627830.1861932627830.092593
63000199929986382.951255199929986365.369942
6600088130768.1412371999299861484.240314
6900065962101.8062531999299863113.998361
730004470.7488053460.624004
760004472.5896173462.386815
790004475.2572343465.054432
83000251562.854818171421.872012
860003218910.826469181628.377254
890002819221.5125381917418.938521
93000101511.9344125761.014007
960004613.5100235763.884425
990004616.6144426919.609662
103000199929986386.804479199929986359.816306
1060001999299861523.0689631999299861469.59182
1090001999299863164.3398841999299863087.712193
113000498745798.32743499747293.101397
1160004987457385.0260684997472367.787958
1190004987457794.076294987457774.825767
1230001999200051.0591271999200046.238696
12600019992000199.32247819992000185.71919
12900019992000405.68060119992000391.234908
133000120.0312120.0624
136000120.156001120.187201
139000120.140401120.249602
143000199929972400.220565199929973362.671125
1460001999299721544.3162991999299731460.294161
1490001999299723197.2872951999299733105.168705
1530004610.7332054610.733205
1560004613.7908244613.026419
1590004616.5520424616.146439
1630005621.0608075620.858006
1660005623.4008225623.291621
1690005626.9420445626.25564
1730006771.3260096911.216808
1760006774.2432276914.570829
1790006778.5488556919.40686
1830005760.9360065760.920406
1860005763.9000255763.775224
1890005768.5332555767.86245
193000108106015.5689141127217.565713
1960008178844.429085114102953.820345
1990006362870.51245210090399.715839
From the above figures, we can safely conclude that the proposed algorithm is competitive with similar optimization methods, given that the method based on (1.10) already performs very well. In Fig. 4 we see that the profile of the proposed algorithm quickly reaches the value 1.0, whereas that of Algorithm 6 approaches 1.0 more slowly. This means that the objective method is successful and efficient for addressing complex problems in our life and work. It is well known that the calculation time is one of the most essential characteristics in evaluating the efficiency of an algorithm. From Figs. 5 and 6 it is obvious that both algorithms perform well since their profiles reach 1.0, which shows that the two algorithms solve all of the tested problems and that the proposed algorithm is efficient.
Figure 4

Performance profiles of these methods (NI)

Figure 5

Performance profiles of these methods (NG)

Figure 6

Performance profiles of these methods (CPU time)


Conclusion

This paper focuses on three-term conjugate gradient algorithms and uses them to solve optimization problems and nonlinear equations. The given method has some good properties. The proposed three-term conjugate gradient formula possesses the sufficient descent property and the trust region feature without any additional conditions. The sufficient descent property ensures that the objective function value decreases, so that the iteration sequence converges to the limit point. Moreover, the trust region feature makes the convergence proof of the presented algorithm easy to carry out. The given algorithm can be used not only for normal unconstrained optimization problems but also for nonlinear equations. Both algorithms for these two problem classes possess global convergence under general conditions. Large-scale problems are solved by the given algorithms, which shows that the new algorithms are very effective.
Table 3

Test problems

No.  Problem
1    Exponential function 1
2    Exponential function 2
3    Trigonometric function
4    Singular function
5    Logarithmic function
6    Broyden tridiagonal function
7    Trigexp function
8    Strictly convex function 1
9    Strictly convex function 2
10   Zero Jacobian function
11   Linear function-full rank
12   Penalty function
13   Variable dimensioned function
14   Extended Powell singular function
15   Tridiagonal system
16   Five-diagonal system
17   Extended Freudenstein and Roth function
18   Extended Wood problem
19   Discrete boundary value problem