
Evolutionary algorithm using surrogate models for solving bilevel multiobjective programming problems.

Yuhui Liu1, Hecheng Li2, Hong Li3.   

Abstract

A bilevel programming problem with multiple objectives at the leader's and/or follower's levels, known as a bilevel multiobjective programming problem (BMPP), is extraordinarily hard, as it accumulates the computational complexity of both hierarchical structures and multiobjective optimisation. As a strongly NP-hard problem, the BMPP incurs a significant computational cost in obtaining non-dominated solutions at both levels, and few studies have addressed this issue. In this study, an evolutionary algorithm is developed using surrogate optimisation models to solve such problems. First, a dynamic weighted sum method is adopted to address the follower's multiple objectives, by which the follower's problem is decomposed into several single-objective ones. Next, for each of the leader's variable values, the optimal solutions of the transformed follower's programs are approximated by adaptively improved surrogate models instead of solving the follower's problems directly. Finally, these techniques are embedded in MOEA/D, by which the leader's non-dominated solutions can be obtained. In addition, a heuristic crossover operator is designed using gradient information in the evolutionary procedure. The proposed algorithm is executed on computational examples including linear and nonlinear cases, and the simulation results demonstrate the efficiency of the approach.

Entities:  

Year:  2020        PMID: 33332433      PMCID: PMC7746194          DOI: 10.1371/journal.pone.0243926

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Problem models

The bilevel programming problem (BLPP), a hierarchical optimisation problem, has a nested optimisation structure comprising a leader and a follower. In a BLPP, optimisation procedures are executed successively at the leader's and follower's levels. The upper-level optimisation is controlled by the leader, whereas the lower-level optimisation is controlled by the follower; the related problems are called the upper-level/leader's problem and the lower-level/follower's problem. Both the leader's and follower's problems have their own decision variables, constraints and objective functions [1]. The general BLPP can be formulated as follows:

min_x F(x, y), s.t. G_k(x, y) ≤ 0, k = 1, 2, …, K,
where, for each fixed x, y solves
min_y f(x, y), s.t. g_j(x, y) ≤ 0, j = 1, 2, …, J,   (1)

where x = (x_1, …, x_n) are the leader's variables, y = (y_1, …, y_m) the follower's variables, F(x, y) the leader's objective function, f(x, y) the follower's objective function, G_k: R^{n+m} → R, k = 1, 2, …, K, the leader's constraints, and g_j: R^{n+m} → R, j = 1, 2, …, J, the follower's constraints. When the number of objectives in the leader's and/or follower's problem exceeds one, the problem is called a bilevel multiobjective programming problem (BMPP). The BMPP is one of the most difficult optimisation problems because it accumulates all the computational costs of hierarchical and multiobjective optimisation; consequently, it is strongly NP-hard. The BMPP can be formulated as follows:

min_x F(x, y) = (F_1(x, y), …, F_q(x, y)), s.t. G_k(x, y) ≤ 0, k = 1, 2, …, K,
where, for each fixed x, y is a non-dominated solution of
min_y f(x, y) = (f_1(x, y), …, f_p(x, y)), s.t. g_j(x, y) ≤ 0, j = 1, 2, …, J.   (2)

Here, a ≤ x ≤ b, where a and b are the lower and upper bounds of x, respectively. In general, the optimisation procedure of problem (2) can be described as follows. The leader first selects a strategy x to optimise his objectives, and the follower then reacts to the leader's selection by providing a group of non-dominated solutions y to the follower's problem. The pairs (x, y) are then used as bilevel feasible solutions to (2). When the leader selects another strategy, the related feasible point pairs can be obtained. Solving (2) amounts to determining a well-distributed set of non-dominated solutions for the leader's objectives among all bilevel feasible solutions.
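To make the nested structure concrete, the following minimal sketch solves a hypothetical single-objective BLPP by brute-force grid search. The objectives F and f and the grids are illustrative assumptions for this sketch only, not the paper's test problems.

```python
# Toy single-objective BLPP solved by brute force, to illustrate the
# nested structure only; F, f and the grids are illustrative choices.
F = lambda x, y: (x - 3.0) ** 2 + y ** 2    # leader's objective
f = lambda x, y: (y - x) ** 2               # follower's objective

grid = [i * 0.01 for i in range(501)]       # shared search grid [0, 5]

best = None
for x in grid:
    # The follower reacts: best response y*(x) to the leader's choice.
    y_star = min(grid, key=lambda y: f(x, y))
    # The leader evaluates F at the induced bilevel feasible pair.
    val = F(x, y_star)
    if best is None or val < best[0]:
        best = (val, x, y_star)

print(best)  # (leader's optimum, x*, y*) over the induced pairs
```

Here the follower's best response is y*(x) = x, so the leader effectively minimises (x − 3)² + x², attained near x = 1.5; the loop recovers this without ever forming the reaction function explicitly, which is exactly the cost the surrogate models in this paper aim to reduce.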
Unlike BLPPs with a single objective at each level, the BMPP incurs an additional computational cost for the selection of the follower’s non-dominated solutions. In addition, the selection of Pareto solutions in the leader’s problem renders the BMPP much harder than the single objective case.

Related work

For problem (1), BLPPs have been extensively applied in economic management and engineering, urban traffic and transportation, supply chain planning, resource assignment, engineering design, structural optimisation, and game-playing strategies. For example, Zhu and Guo [2] proposed a BLPP with a maxmin or minmax operator at the follower's level for a manufacturer that plans to produce multiple short-life-cycle products using one-shot decision theory; the model was solved using commonly used optimisation methods. Based on a bilevel complementarity model, Nasrolahpour et al. [3] developed an energy storage system decision tool for merchant price-making that can determine the most advantageous trading behaviour in a pool-based market. More real-world applications are summarised in [4-8]. As the applications of BLPPs become increasingly extensive and diverse, researchers have focused on developing efficient solution strategies for such problems. However, owing to the intrinsic non-convexity and non-differentiability of BLPPs, a general BLPP is extremely challenging to solve using canonical optimisation methods that rely on gradient information. Thus far, only a limited number of BLPPs can be solved effectively, such as linear [9-13] and convex quadratic BLPPs [14-16]. Liu et al. [10] used a new variant of the penalty method and Karush-Kuhn-Tucker (K-K-T) conditions to solve weak linear BLPPs. Franke et al. [16] used K-K-T conditions and the optimal-value approach to transform a bilevel convex programming problem into a single-level optimisation problem, and introduced M-stationarity for mathematical programs with complementarity constraints in Banach spaces. For most general BLPPs, only local optima can be obtained using these gradient approaches. To solve BLPPs effectively, another algorithmic framework, the swarm optimisation method, has been widely adopted.
This framework is based on population search and affords good global convergence. Evolutionary algorithms [17-19], as representative swarm optimisation techniques, have been widely adopted to develop various bilevel optimisation algorithms over the past decades. For example, Sinha et al. [19] presented a single-level reduction of BLPPs using approximate K-K-T conditions and embedded a neighbourhood measure into an evolutionary algorithm. Based on a new constraint-handling scheme, Wang et al. [20] proposed an evolutionary algorithm using K-K-T conditions; the new constraint-handling scheme enables individuals to satisfy the linear constraints. For problem (2), even though it is extremely difficult to design efficient solution approaches for the BMPP, a large number of practical applications stimulate research on theoretical results and algorithm design. Guo and Xu [21] developed a BMPP model to study the seismic risk of transportation system reconstruction in large construction projects, and proposed fuzzy random variable transformation and fuzzy variable decomposition methods to solve the model. Brian et al. [22] proposed a BMPP model for coordinating multiple design problems under conflicting criteria; the design of a hybrid vehicle layout was expressed as a two-stage decomposition problem involving the vehicle class and battery class, and a multiobjective decomposition algorithm was developed. Alves et al. [23] developed a particle swarm optimisation algorithm to solve linear BMPPs with multiple objectives at the leader's level. Chakraborti et al. [24] investigated environmental-economic power generation and dispatch problems using the BMPP. The existing BMPPs can be classified into three categories: 1) only the leader's problem involves multiobjective optimisation [25, 26]; 2) only the follower's problem involves multiobjective optimisation [27-29]; and 3) both levels involve multiobjective optimisation [30-39].
In fact, once the non-dominated sorting procedure is executed hierarchically, the amount of computation for bilevel optimisation becomes significant. Consequently, few efficient approaches exist for addressing linear or nonlinear BMPPs. For linear BMPPs, Calice et al. [34] first converted the linear BMPP into two multiobjective problems, and then used specific optimality conditions to prove that the common solutions of the two multiobjective problems are optimal for the original problem. Kirlik et al. [35] proposed a global optimisation method for discrete linear BMPPs; however, this method is computationally intensive. Liu et al. [36] investigated a class of pessimistic semivectorial BLPPs and used scalarisation to transform the original problem into a scalar-objective optimisation problem; subsequently, the authors used the generalised differentiation calculus of Mordukhovich to establish optimality conditions for linear multiobjective problems. Lv and Wan [37] used the duality gap as a penalty on the leader's objective, and the follower's problem was transformed into a single-objective case by adopting a weighted sum scheme. Alves [25] used a multiobjective particle swarm optimisation algorithm to solve linear BMPPs with multiple objective functions at the leader's level. Only a few studies have addressed nonlinear BMPPs. Deb and Sinha [29] presented an evolutionary local-search algorithm for solving nonlinear BMPPs; in that study, the mapping method was used for the first time to solve the follower's programming problems. Jia et al. [30] used genetic algorithms to solve BMPPs, in which a uniform design was used to generate the weights and the crossover and mutation operators; the follower's objectives were converted into a single objective function by the weighted sum method, and MOEA/D was used to solve the leader's multiobjective convex programming problem.
In addition, based on the sensitivity theorem, an iterative method was used to solve a class of nonlinear BMPPs [33]. Zhang et al. [38] presented an improved particle swarm optimisation algorithm for solving BMPPs.

Research motivation

As mentioned above, because BMPPs have high computational complexity, only a few efficient solution methods exist for general BMPPs, and most of the existing approaches were developed for cases where only the leader's problem is multiobjective. When the follower's problem involves multiple objectives, the non-dominated sorting procedure can cause a large amount of computation. In addition, recall that in BLPPs the optimisation procedure of the follower's problem is executed frequently to obtain bilevel feasible points, which further increases the computational cost. Consequently, to develop an efficient approach, two key issues must be addressed: one is to reduce the computational cost caused by the follower's optimisation procedure, and the other is to avoid excessive dominance comparisons among individuals. Hence, the weighted sum method is often adopted to avoid non-dominated sorting procedures [40], and K-K-T conditions are applied to transform BLPPs into single-level problems. Furthermore, either MOEA/D [41, 42] or NSGA-II [43, 44] can be used to solve multiobjective optimisation problems. However, weighted sum and K-K-T transformation techniques cannot perform the computation effectively when addressing the follower's problems. In [26], Li et al. used evolutionary algorithms to solve linear BMPPs in which the leader's problem is multiobjective. In Li's method, the programming problem was first transformed into a single-level multiobjective problem by K-K-T conditions; subsequently, the multiobjective problem was solved via the weighted sum method, the Tchebycheff approach and the NSGA-II method separately, and the computational results of the three approaches were compared. Sinha et al. [31] presented an approximation of the set-valued mapping to solve the follower's problem, which can effectively reduce the amount of computation caused by the follower's optimisation.
However, it is complicated to establish multiple quadratic approximation functions between the variables of the leader's and follower's problems, and the methods in [26] and [31] require a high computational cost. Motivated by both the weighted aggregation and approximate solution methods, in this study an evolutionary algorithm using surrogate models is developed to solve BMPPs. Unlike the existing algorithms, the proposed algorithm is characterised as follows. First, a weighted aggregation method using a uniform design is adopted to convert the follower's problem into several single-objective ones. Second, surrogate optimisation models, i.e. interpolation functions, are used to replace the solution functions of the follower's problem and are updated periodically using new sample points. Finally, as heuristic information, the gradient direction is embedded into the genetic operators to produce potentially high-quality offspring.

Basic notions

Some basic definitions for problem (2) are summarised as follows. Let B1 = {(x, y) | G_k(x, y) ≤ 0, x ∈ R^n, y ∈ R^m, k = 1, 2, …, K} and B2 = {(x, y) | g_j(x, y) ≤ 0, x ∈ R^n, y ∈ R^m, j = 1, 2, …, J}. Definition 0.1 (Dominance relation): Vector F^1 = (F^1_1, …, F^1_q) is dominated by vector F^2 = (F^2_1, …, F^2_q), denoted by F^2 ≺ F^1, if F^2_j ≤ F^1_j for all j ∈ {1, 2, …, q}, and there exists at least one j ∈ {1, 2, …, q} such that F^2_j < F^1_j. If the objective value of x^1 is dominated by that of x^2, we also denote the relation by x^2 ≺ x^1. After a decision x ∈ R^n is made by the leader, the optimal solution set O(x) of the follower's problem is the set of the follower's non-dominated responses to x. Let IR = {(x, y) | (x, y) ∈ B1 ∩ B2, y ∈ O(x)}; this set is known as the inducible region of the BMPP. Definition 0.2 (Non-dominated solution set): For a BMPP, the non-dominated solution set P* of the leader's problem is the set of points in IR whose leader's objective vectors are not dominated by that of any other point in IR. Definition 0.3 (Pareto front): For a BMPP, the Pareto front of the leader's problem is the image of P* under the leader's objectives, i.e. {F(x, y) | (x, y) ∈ P*}. According to the above definitions, a BMPP can be equivalently written as the problem of finding the non-dominated points of the leader's objectives over the inducible region. Assumptions: A1. For each x taken in B1 ∩ B2 by the leader, the follower has room to react, that is, O(x) ≠ ∅. A2. F(x, y) is differentiable in x, and f(x, y) and g(x, y) are differentiable and convex in y for fixed x. Here, A1 is a common assumption in BLPPs, which makes the BLPP well-posed; A2 is introduced to conveniently utilise gradient information.

Algorithm design

Transformation of the follower's problem

BMPPs incur a high computational cost if the follower's problem is addressed as a general multiobjective optimisation. In the proposed approach, the solution of the follower's problem is divided into two steps. First, the multiobjective problem is converted into several single-objective problems by a weighted sum of the objectives. Subsequently, these converted problems are solved using simplified surrogate models that efficiently decrease the computational cost; this will be presented in the next section. 1) The uniform design of weights: Uniform design and spherical coordinates are used to generate uniformly distributed weights. As a classical experimental design method, uniform design has been popular since the 1980s. It was originally developed to obtain design points that are uniformly distributed over the experimental domain [45, 46]. A uniform design array for k factors with q levels and H combinations is often denoted by L_H(q^k). For example, L16(4^5) involves five factors, each with four levels; therefore, 1024 (= 4^5) combinations exist. However, in a uniform design, one must select only 16 of those 1024 possible combinations. The H selected combinations can be represented by a uniform matrix U(H, k) = [U_{iℓ}], where U_{iℓ} is the level of the ℓth factor in the ith combination. In this subsection, we use the concept of uniform design to generate uniformly distributed points. For a closed and bounded set φ ⊂ R^{p−1}, the main purpose of the uniform design is to sample a small group of points from φ such that the sampled points are uniformly scattered over φ. A brief description of the uniform design is presented as follows. For any point θ = (θ_1, θ_2, …, θ_{p−1}) in φ, a hyper-rectangle between 0 and θ, denoted by φ(θ), can be defined as [40]: φ(θ) = {θ′ ∈ φ | 0 ≤ θ′_i ≤ θ_i, i = 1, …, p − 1}. For a set of v points on φ, we assume that v(θ) of them fall in the hyper-rectangle φ(θ).
Therefore, the ratio of points in the hyper-rectangle φ(θ) is v(θ)/v, and the volume of the hyper-rectangle (as a fraction of the unit volume) is ∏_{i=1}^{p−1} θ_i. The uniform design on φ is then defined as determining v points on φ such that the discrepancy sup_θ |v(θ)/v − ∏_{i=1}^{p−1} θ_i| is minimised. To determine uniformly scattered points on φ, we used the good-lattice-point method [47] to generate a set of v uniformly scattered points on φ, denoted by c(p − 1, v). The procedure is as follows: a v × (p − 1) uniform array U(p − 1, v) = [U_{ℓi}] is first generated, where U_{ℓi} = (ℓσ^{i−1} mod v) + 1, i = 1, 2, …, p − 1, ℓ = 1, 2, …, v. Then, each row of matrix U(p − 1, v) defines a row θ = (θ_1, θ_2, …, θ_{p−1}) of c(p − 1, v). For example, when v = 7 and p = 5, it can be seen from Table 1 that σ = 3.
Table 1

Values of parameter σ for different values of q and M.

q    M      σ
5    2-4    2
7    2-6    3
11   2-10   7
13   2      5
13   3      4
13   4-12   6
2) Weight vectors: From the v uniformly scattered points, we can obtain v weight vectors w_ℓ = (w_{ℓ1}, …, w_{ℓp}), ℓ = 1, 2, …, v; when p = 2, the points yield the weight pairs directly, and when p > 2, the spherical-coordinate transformation is applied componentwise. 3) Transformed problems: We used the weight vectors designed above to address the follower's multiple objectives, transforming the follower's problem of the original problem into several single-objective ones. Then (2) can be transformed into v BMPPs with a single objective in their follower's problems (14). The follower's problems of (14) are: min_y Σ_{k=1}^{p} w_{ℓk} f_k(x, y), s.t. g_j(x, y) ≤ 0, j = 1, 2, …, J, for ℓ = 1, 2, …, v. (15)
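As a concrete illustration of step 2) for the simplest case p = 2, the sketch below generates v weight vectors evenly spread over the unit simplex and scalarises a pair of follower objectives. The midpoint scheme is an assumption for this sketch, not the paper's exact spherical-coordinate construction.

```python
# Assumed midpoint scheme for p = 2: v strictly positive weight pairs
# evenly spread over the unit simplex; not the paper's exact formula.
def weight_vectors(v):
    return [((2 * l - 1) / (2 * v), 1 - (2 * l - 1) / (2 * v))
            for l in range(1, v + 1)]

def scalarise(fs, w):
    # Weighted-sum aggregation: one single-objective follower problem
    # per weight vector w, as in the transformed problems (15).
    return lambda x, y: sum(wi * fi(x, y) for wi, fi in zip(w, fs))

ws = weight_vectors(10)   # v = 10, as in the parameter settings
```

Each of the v scalarised objectives defines one transformed BMPP; solving them for a sweep of weights approximates the follower's non-dominated set without any dominance sorting at the lower level.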

Surrogate models

In BLPPs/BMPPs, the follower's problem must be optimised for each of the leader's variable values. This procedure can result in a significant amount of computation when solving BLPPs/BMPPs, particularly when the problem is large. According to the optimisation procedure, the optimal solutions of the follower's problem are always determined by the leader's variables. This means that the optimal solution of the follower's problem is a function of the leader's variables. However, this function is often implicit and cannot be obtained analytically. In the proposed approach, we used interpolation functions as surrogate models to estimate the optimal solutions of the follower's problems. Some values x_i of the leader's variables are first sampled, and the follower's problems are then optimised individually. The optimal solutions are denoted as y_i; the point pairs (x_i, y_i) are regarded as interpolation sample points and used to generate the interpolation functions. The interpolation function demonstrates good performance in fitting unknown functions [48] and can efficiently reduce the number of times the follower's problems must be solved. In the proposed algorithm, the cubic spline interpolation method is adopted when the interpolation function is one-dimensional, whereas the linear interpolation method is used in other cases. As an example, on the one-dimensional coordinate plane, for w sample points x_i, i = 1, 2, …, w, we construct an interpolation function y = ψ(x) to approximate the optimal-solution function of the follower's problem. Therefore, y = ψ(x*) can be regarded as the approximate solution of the follower's problem when x is fixed at x*. The relationship between the approximate and real solutions is shown in Fig 1.
Fig 1

Interpolation approximation and the real optimal solutions.

In Fig 1, the solid red line represents the real solutions to the follower’s problem, whereas the dotted yellow line represents the approximate points provided by interpolation.

For interpolation functions, denser interpolation sample points result in a better approximation. It is noteworthy that for these interpolation points, each follower's variable value must be optimal when the leader's components are fixed. In the proposed algorithm, the interpolation functions are obtained as follows. First, an initial population of N points is randomly generated in the leader's variable space; these points are denoted by x_i, i = 1, 2, …, N. Subsequently, for each leader's value x_i, the optimal solutions of the follower's problems (15) are denoted by y_{iℓ}, i = 1, 2, …, N, ℓ = 1, 2, …, v. Consequently, v × N point pairs (x_i, y_{iℓ}), i = 1, …, N, ℓ = 1, 2, …, v, can be obtained. These point pairs are used as interpolation nodes to generate v interpolation functions ψ_ℓ, i.e. each approximate follower solution y = ψ_ℓ(x) is a function of x. According to the above interpolation method, we can obtain the approximate optimal solutions of the follower's problems (15). In the proposed algorithm, we periodically update the interpolation sample points to improve the interpolation functions.
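The surrogate idea can be sketched as follows. The toy reaction y*(x) = x² and the piecewise-linear interpolant are assumptions standing in for the exact follower solver and the cubic-spline/linear surrogates described above.

```python
import bisect

# Sketch of the surrogate: solve the follower exactly at a few sampled
# leader values, then interpolate y*(x) instead of re-optimising.
def follower_opt(x):
    # stand-in for an exact follower solve; assume y*(x) = x ** 2
    return x ** 2

xs = [0.0, 0.5, 1.0, 1.5, 2.0]        # sampled leader values
ys = [follower_opt(x) for x in xs]    # exact follower solutions

def surrogate(x):
    # piecewise-linear interpolation of the reaction mapping y*(x)
    i = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])
```

At a new leader value, surrogate(0.75) returns 0.625, close to the true 0.5625 without another follower optimisation; adding sample points where the offspring land, as the algorithm does periodically, tightens this gap.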

Proposed algorithm

When solving BMPPs, the transformation of the follower's problem reduces the computational complexity of the follower's problems, while the surrogate models save the cost of repeatedly solving them. Combining the above methods, in this paper an evolutionary algorithm [49-51] based on surrogate models, denoted by SMEA, is developed to solve BMPPs. Fig 2 gives the flowchart of SMEA.
Fig 2

Basic flowchart of the proposed algorithm.

The specific procedure is described as follows:

Step 0 (Transformation of the follower's problems): The uniformly designed weight vectors are used to handle the follower's multiple objectives, turning the follower's problem into v single-objective ones. As a result, we obtain v BMPPs with different follower's objectives.

Step 1 (Initial population): Randomly generate N initial points x_i = a + (b − a) ⋅ rand, i = 1, …, N, where b and a are the upper and lower bounds of x, respectively, and rand is a random number in [0, 1]. The initial population pop(0) of size N is thus obtained. Set gen = 0.

Step 2 (Fitness assessment): For each x_i, solve the follower's problems (15) to obtain the optimal solutions y_{iℓ}, i = 1, …, N, ℓ = 1, 2, …, v, and take the values of the leader's objectives as F_k(x_i, y_{iℓ}), i = 1, …, N, ℓ = 1, 2, …, v, k = 1, 2, …, q. Construct the interpolation functions (surrogate models) as described above. Use MOEA/D to deal with the leader's problem, a multiobjective optimisation, and put the non-dominated solutions of population pop(gen) into the set D1.

Step 3 (Crossover): Note that F_k(x, y), k = 1, 2, …, q, are differentiable in x. To obtain a potential descent direction, we take the negative gradient direction of each leader's objective: (a) compute the negative gradient vector d_k = −∇_x F_k(x, y); (b) normalise each direction and combine them as d = Σ_k τ_k d_k / ||d_k||, where τ_k ∈ (0, 1), k = 1, …, q, are randomly generated and satisfy Σ_k τ_k = 1; (c) the crossover operator then steps from the parent along d with a positive step size s. In fact, the operator can be extended to the non-differentiable case of the leader's functions; one only needs to replace the gradient by a finite-difference approximation.

Step 4 (Mutation): Gaussian mutation is adopted: if x is an individual chosen for mutation, its offspring is generated by adding Gaussian noise to x.

Step 5 (Offspring population pop′(gen)): For the offspring set (x_1, x_2, …, x_λ) generated by the crossover and mutation operations, the interpolation functions are used to obtain the approximate solutions of (15). An offspring set pop′(gen) of size v × λ is then obtained, and the values of the leader's objective functions are F_k(x_i, y_{iℓ}), i = 1, 2, …, λ, ℓ = 1, 2, …, v, k = 1, 2, …, q.

Step 6 (Update interpolation functions): Use MOEA/D to select non-dominated solutions from the set {(x_i, y_{iℓ}), i = 1, 2, …, λ, ℓ = 1, 2, …, v}, and put these non-dominated solutions into the set D2. For each point in D2, update the optimal solution y by solving the follower's problem exactly, and update the interpolation functions with the exact solutions thus obtained.

Step 7 (Update non-dominated solution set D1): Set D1 = D1 ⋃ D2 and D2 = ∅. Remove the dominated solutions from D1. If the size of D1 exceeds a pre-determined threshold, the crowding-distance method is applied to delete redundant points, as done in NSGA-II.

Step 8 (Selection): Select the best N individuals from the set pop′(gen) ⋃ D1 to form the next generation pop(gen + 1).

Step 9 (Termination condition): If the stopping criterion is satisfied, stop and output the non-dominated solution set D1; otherwise, set gen = gen + 1 and go to Step 3.
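The gradient-guided crossover of Step 3 can be sketched as follows, under the assumptions that the leader's gradients are supplied by the caller and that the step is s times a uniform random factor; the paper's exact step-size handling may differ.

```python
import math
import random

# Sketch of the gradient-guided crossover: combine the normalised
# negative gradients of the q leader objectives with random convex
# weights tau_k, then step from the parent along the result.
def crossover(x, grads, s=8.0, rng=random.Random(0)):
    dirs = []
    for g in grads:                             # g = grad of F_k at x
        norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
        dirs.append([-gi / norm for gi in g])   # normalised descent dir
    taus = [rng.random() for _ in grads]
    total = sum(taus)
    taus = [t / total for t in taus]            # tau_k sum to 1
    d = [sum(t * dk[j] for t, dk in zip(taus, dirs))
         for j in range(len(x))]
    step = s * rng.random()                     # assumed random step
    return [xj + step * dj for xj, dj in zip(x, d)]
```

Because d is a convex combination of descent directions of the individual objectives, the offspring tends to move into a region where no leader objective increases sharply, which is why the operator produces potentially high-quality candidates.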

Simulation results

Test examples

To demonstrate the feasibility and efficiency of the proposed algorithm, we tested SMEA on 20 examples taken from the literature [26, 31, 32, 52, 53]. In [26] and [53], two EAs were developed. Although the two algorithms were proposed only for BMPPs in which the follower's problem is single-objective, as a special case the two approaches can be taken as comparison algorithms to demonstrate the performance of the proposed SMEA. According to the number of objectives, we divided the test examples into two categories. One (Examples 1-13) is the bilevel multiobjective case, that is, at least one of the leader's and follower's problems involves multiobjective optimisation; problems of this type are mainly used to test the performance of the weighted sum method and the interpolation approximation. The other (Examples 14-20) involves only bilevel single-objective optimisation and is used to test the performance of the surrogate models as well as the crossover operator. The 20 examples are as follows: Example 1 (F01) [26]; Example 2 (F02) [26]; Example 3 (F03) [26]; Example 4 (F04) [26]; Example 5 (F05) [26]; Example 6 (F06) [26]; Example 7 (F07) [26]; Example 8 (F08) [26]; Example 9 (F09) [26]; Example 10 (F10) [26]; Example 11 (F11) [32]; Example 12 (F12) [32]; Example 13 (F13) [31]; Example 14 (F14) [52]; Example 15 (F15) [32]; Example 16 (F16) [53]; Example 17 (F17) [53]; Example 18 (F18) [53]; Example 19 (F19) [53]; Example 20 (F20) [53].

Parameter setting

To compare the experimental results, the parameter settings in this study are consistent with those in [26]. Population size: N = 15; number of weight vectors: v = 10; crossover rate: pc = 0.6; Gaussian mutation probability: pm = 0.05; maximum number of generations: maxgen = 300; step size in the crossover operator: s = 8.

Performance measure

To test the effectiveness and practicability of the SMEA algorithm, we evaluated the non-dominated solutions, HV, IGD, C-metric, CPU time and optimal solutions according to the characteristics of the problems. Definitions of the metrics are as follows. 1) HV [54]: Select a reference point e = (e_1, e_2, ⋯, e_q) in the objective space, and let A denote the non-dominated set obtained by SMEA, NSGA-II, the weighted sum approach or the Tchebycheff approach. HV(A) is the volume of the region of the objective space dominated by A and bounded by the reference point e. For a fixed reference point, a larger HV indicates better quality. 2) IGD [55]: Let G be a uniformly distributed subset of the true Pareto front and G′ the approximation set obtained by a multiobjective optimisation algorithm. The IGD value of G to G′ is defined as IGD(G, G′) = (Σ_{g∈G} d(g, G′)) / |G|, where |G| is the number of solutions in G and d(g, G′) is the minimum Euclidean distance from g to the solutions of G′ in the objective space. Generally, a lower value of IGD(G, G′) is preferred, as it indicates that G′ is distributed more uniformly and lies closer to the true Pareto front. 3) C-metric [54]: We use the following notation: set SM, the non-dominated solution set of SMEA; set WS, that of the weighted sum approach; set TE, that of the Tchebycheff approach; set NS, that of NSGA-II. C(A′, B) is defined as the percentage of the solutions in B that are dominated by at least one solution in A′: C(A′, B) = |{b ∈ B | ∃a ∈ A′, a ≺ b}| / |B|. C(A′, B) is not necessarily equal to 1 − C(B, A′). C(A′, B) = 1 implies that every solution in B is dominated by at least one solution in A′, whereas C(A′, B) = 0 implies that no solution in B is dominated.
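The IGD and C-metric definitions above translate directly into code; the following is a small self-contained sketch over lists of objective vectors, with minimisation assumed in every objective.

```python
import math

# Direct sketches of the IGD and C-metric defined above; points are
# tuples of objective values and minimisation is assumed throughout.
def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def igd(G, Gp):
    # mean distance from each reference point in G to its nearest
    # neighbour in the approximation set Gp; lower is better
    return sum(min(dist(g, h) for h in Gp) for g in G) / len(G)

def dominates(a, b):
    # a is no worse in every objective and strictly better in one
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def c_metric(A, B):
    # fraction of B dominated by at least one solution in A
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)
```

For instance, c_metric(SM, NS) and c_metric(NS, SM) must both be computed, since the two percentages are not complementary.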

Simulation results and comparisons

We executed the algorithm on a computer with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (1.80 GHz) using MATLAB. The algorithm was independently run ten times on every test instance. For the first 10 examples, SMEA is compared with the weighted sum method, the Tchebycheff approach and NSGA-II. For each algorithm, we randomly take one of the 10 computational results and show it in Figs 3–12. The analytical (theoretical) solution sets of Examples 11 to 13 are available; as a result, we can compare the computational results of SMEA with these known analytical solutions, as shown in Figs 13–15.
Figs 3–12

Non-dominated solutions on examples 1–10.

Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, the weighted sum approach and the Tchebycheff approach on examples 1–10, respectively.

Figs 13–15

Non-dominated solutions on examples 11–13.

Analytical solutions and the non-dominated solutions obtained by SMEA on examples 11–13, respectively.

In Figs 3–12 (Examples 1–10), for Example 1 (Fig 3), the non-dominated solution set obtained by SMEA is superior to those obtained by the three compared methods. Example 2 is a maximisation problem; from Fig 4, we can see that the non-dominated solution set obtained by SMEA is better than those obtained by the three compared methods. In addition, for Examples 3, 4, 5, 7 and 9, the non-dominated solution set obtained by SMEA is similar to that of the Tchebycheff approach but better than those of the other two methods. For Example 10, the results of SMEA are almost the same as those of both the Tchebycheff approach and the weighted sum approach, but better than those of NSGA-II. For Examples 6 and 8, SMEA can also approximately find the non-dominated solutions reported in the literature. Figs 13–15 show that the non-dominated solution sets obtained by SMEA are almost in line with the analytical solution sets.
Fig 4

Non-dominated solutions on example 2.

Comparison of the non-dominated solutions obtained by SMEA, NSGA-II, Weighted sum approach and Tchebycheff approach on example 2.

Table 2 reports the average HV obtained by SMEA, the weighted sum approach, the Tchebycheff approach and NSGA-II over 10 independent runs on Examples 1–10, while Table 3 reports the average HV and IGD obtained by SMEA against the analytical points over 10 independent runs on Examples 11–13. The symbols “+”, “−” and “≈” indicate that a result is better than, worse than or almost equal to that obtained by our algorithm, respectively. The standard deviation (std) of the HV value over the 10 runs is also given, and the best results are highlighted in bold. Table 2 shows that the HV obtained by SMEA is much larger than those of the other three algorithms on test problems 2–5, 7 and 9; overall, SMEA achieves a better HV than the weighted sum approach on 10 problems, than the Tchebycheff approach on 10 problems and than NSGA-II on 6 problems, which illustrates that the diversity of the solutions obtained by SMEA is better than that of the other algorithms. Meanwhile, the IGD obtained by SMEA is small on Examples 11–13, which indicates that, for these test problems, the solutions obtained by SMEA cover the analytical solutions quite well.
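For reference, the two quality indicators used here can be sketched for the two-objective minimisation case as follows. This is an illustration of the standard definitions, not the paper's implementation; the point sets and reference point are placeholders.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact 2-D hypervolume (minimisation) w.r.t. reference point `ref`:
    the area dominated by the set and bounded by `ref`."""
    pts = np.asarray(points, dtype=float)
    # keep only points that strictly dominate the reference point
    pts = pts[np.all(pts < np.asarray(ref), axis=1)]
    # sort by first objective descending; for a non-dominated set the
    # second objective is then ascending, so rectangles do not overlap
    pts = pts[np.argsort(-pts[:, 0])]
    hv, prev_f1 = 0.0, ref[0]
    for f1, f2 in pts:
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv

def igd(obtained, reference):
    """Inverted generational distance: mean Euclidean distance from each
    reference (e.g. analytical) point to its nearest obtained point."""
    A = np.asarray(obtained, float)
    Z = np.asarray(reference, float)
    d = np.linalg.norm(Z[:, None, :] - A[None, :, :], axis=2)  # |Z| x |A|
    return d.min(axis=1).mean()
```

A larger HV indicates better convergence and spread; a smaller IGD indicates closer coverage of the reference front, which is how Tables 2 and 3 are read.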
Table 2

HV obtained by SMEA and the approaches in the literature.

Test problem | SMEA (std) | Weighted sum approach | Tchebycheff approach | NSGA-II
F01 | 3.8841E+00 (±3.1298E−03) | 3.7641E+00 | 3.5249E+00 | 2.4364E+01
F02 | 1.6698E+03 (±5.7438E−02) | 0.0000E+00 | 8.2293E+01 | 1.2895E+03
F03 | 6.0071E+03 (±2.2216E−03) | 1.5274E+03 | 4.8412E+03 | 1.4086E+02
F04 | 7.0414E+07 (±6.0845E−04) | 5.5236E+05 | 3.2912E+05 | 4.6756E+02
F05 | 2.8771E+03 (±3.5607E−03) | 2.5212E+03 | 2.6888E+03 | 2.7711E+03
F06 | 2.0669E+02 (±1.2233E−04) | 2.5281E+01 | 2.6506E+01 | 3.0948E+02
F07 | 1.8901E+03 (±4.7892E−03) | 2.5252E+01 | 2.8871E+01 | 1.5958E+03
F08 | 3.2278E+00 (±8.6756E−03) | 2.0780E−01 | 2.0460E−01 | 5.6428E+00
F09 | 3.2245E+03 (±4.5582E−03) | 3.0200E+02 | 2.7828E+03 | 2.5540E+01
F10 | 3.8894E+00 (±2.5514E−02) | 3.7641E+00 | 3.5249E+00 | 2.4363E+01
+/−/≈ | | 10/0/0 | 10/0/0 | 6/4/0
Table 3

HV and IGD obtained by SMEA, and HV obtained by the analytical points.

Test problem | SMEA HV (std) | Analytical points HV | IGD
F11 | 3.1070E−01 (±1.3322E−03) | 3.1160E−01 | 3.4000E−03
F12 | 1.9964E−01 (±4.7843E−03) | 2.0840E−01 | 8.2817E−04
F13 | 9.8030E−01 (±2.53366E−03) | 1.0000E+00 | 4.6000E−03
+/−/≈ | | 2/0/1 |
In terms of the results of the multiple-problem Wilcoxon's test in Table 4, all the R+ values are larger than the R− values, which implies that the performance of our algorithm is superior to that of the other competitors. Moreover, a significant difference at α = 0.05 is found in all three cases, i.e., SMEA versus the weighted sum approach, SMEA versus the Tchebycheff approach and SMEA versus NSGA-II. Besides, our algorithm ranks first in the Friedman test (see Table 5 and Fig 16).
Table 4

Results of the multiple problem Wilcoxon’s test on the problems of F01-F10.

SMEA vs | R+ | R− | p-value | α = 0.05
Weighted sum approach | 55 | 0 | 0.005 | Yes
Tchebycheff approach | 55 | 0 | 0.005 | Yes
NSGA-II | 45 | 10 | 0.044 | Yes
Table 5

Ranking of SMEA by the Friedman test on problems F01–F10.

Algorithm | Ranking
SMEA | 3.60
Weighted sum approach | 1.70
Tchebycheff approach | 1.90
NSGA-II | 2.80
Fig 16

Ranking of SMEA by Friedman test on the problems of F01-F10.
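The two statistical tests above can be sketched with SciPy as follows. The per-problem HV scores below are placeholders standing in for the values of Table 2, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical per-problem HV scores for four algorithms on 10 problems;
# the second set is constructed to be uniformly worse than the first.
rng = np.random.default_rng(0)
hv_smea = rng.uniform(0.8, 1.0, size=10)
hv_ws   = hv_smea - rng.uniform(0.01, 0.2, size=10)
hv_te   = hv_smea - rng.uniform(0.01, 0.2, size=10)
hv_ns   = hv_smea - rng.uniform(0.01, 0.3, size=10)

# Multiple-problem Wilcoxon signed-rank test on one pairing:
# the statistic is min(R+, R−); a small p-value rejects equality at alpha.
stat, p = wilcoxon(hv_smea, hv_ws)

# Friedman test ranks all algorithms jointly across the problems.
chi2, p_f = friedmanchisquare(hv_smea, hv_ws, hv_te, hv_ns)
```

With all paired differences of the same sign (as in Table 4, where R− is 0 or small), the signed-rank statistic is at its minimum and the two-sided p-value falls below 0.05.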

Table 6 reports the average C-metric values over 10 independent runs on Examples 1–10, comparing SMEA pairwise with the three algorithms from the literature. We find that C(SM, WS) ≥ C(WS, SM), C(SM, TE) ≥ C(TE, SM) and C(SM, NS) ≥ C(NS, SM) on problems 1, 2, 5, 6, 8 and 10, which means that on these problems SMEA finds a better non-dominated set than the algorithms in [26].
Table 6

Comparison of C-metric between SMEA and algorithms in [26].

Test problem | F01 | F02 | F03 | F04 | F05 | F06 | F07 | F08 | F09 | F10 | +/−/≈
C(SM,WS) | 1.0000 | 0.6700 | 0.4700 | 0.5700 | 0.5400 | 0.0000 | 0.0200 | 0.0930 | 0.0067 | 0.8800 | 6/3/1
C(WS,SM) | 0.0000 | 0.5700 | 1.0000 | 0.5100 | 0.4100 | 0.0000 | 0.3200 | 0.0067 | 0.8400 | 0.5400 |
C(SM,TE) | 1.0000 | 0.6700 | 0.5500 | 0.2600 | 0.0000 | 0.0000 | 0.0930 | 0.5200 | 0.6700 | 0.4100 | 7/1/2
C(TE,SM) | 0.0000 | 0.5000 | 0.3900 | 0.5300 | 0.0000 | 0.0000 | 0.0800 | 0.4000 | 0.4800 | 0.0000 |
C(SM,NS) | 1.0000 | 0.6900 | 0.6200 | 0.3700 | 0.0067 | 0.0000 | 0.0013 | 0.0400 | 0.0470 | 0.9600 | 8/1/1
C(NS,SM) | 0.0000 | 0.0130 | 0.0270 | 0.2900 | 0.0000 | 0.0000 | 0.0067 | 0.0200 | 0.0067 | 0.0000 |
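The set-coverage metric C(A, B) used in Table 6 is the fraction of set B dominated by at least one point of set A, so C(A, B) ≥ C(B, A) favours A. A minimal sketch for the minimisation case (some variants count weakly dominated points as covered; this one uses standard Pareto dominance):

```python
import numpy as np

def dominates(a, b):
    """True if a Pareto-dominates b (minimisation): no worse in every
    objective and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def c_metric(A, B):
    """Fraction of points in B dominated by at least one point of A."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    covered = sum(any(dominates(a, b) for a in A) for b in B)
    return covered / len(B)
```

Note that C is not symmetric and C(A, B) + C(B, A) need not equal 1, which is why Table 6 lists both directions for every pairing.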
In addition, CPU time is used to compare the efficiency of the algorithms. For the first ten Examples, Table 7 lists the CPU times over 10 independent runs used by SMEA and by the two compared algorithms in [26]; it shows that SMEA incurs lower computational cost than the compared methods. The CPU times for the remaining Examples are also given in Table 7.
Table 7

CPU time used by the SMEA and the CPU time of the approaches in the literature [26].

Test problem | Weighted sum approach (s) | Tchebycheff approach (s) | SMEA (s)
F01 | 288.8705 | 269.1355 | 184.0437
F02 | 290.4008 | 305.5410 | 279.4890
F03 | 1153.4208 | 1169.1095 | 329.9828
F04 | 1514.5671 | 1652.7553 | 579.3628
F05 | 316.4801 | 381.0618 | 290.3375
F06 | 344.4778 | 256.3112 | 238.1882
F07 | 487.5152 | 579.9305 | 351.7422
F08 | 608.9422 | 550.5114 | 453.6247
F09 | 674.3858 | 691.0868 | 328.4653
F10 | 1060.9020 | 1102.1321 | 837.5767
F11 | | | 424.1344
F12 | | | 385.6785
F13 | | | 506.1157
F14 | | | 12.5431
F15 | | | 10.8668
F16 | | | 11.7438
F17 | | | 12.2328
F18 | | | 6.8531
F19 | | | 12.1533
F20 | | | 8.5941
It is difficult to solve BMPPs, especially the follower's problems, and only a small number of computational Examples exist in the literature. In the proposed algorithm, the weighted sum approach transforms the original multiobjective problem into single-objective ones; hence, SMEA can also be used to solve BLPPs with a single objective at each level, on which the effectiveness of the surrogate models can be verified. When SMEA is applied to Examples 14–20, the best solution (x*, y*) over all 10 runs is recorded, together with the corresponding objective values F(x*, y*) and f(x*, y*). All results are presented in Table 8, in which the reference objective values from the literature are denoted by Ref-f(x*, y*) and Ref-F(x*, y*).
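The scalarisation step can be illustrated as follows: for a fixed leader decision x, each weight vector turns the follower's bi-objective problem into one single-objective problem whose minimiser is one point of the follower's non-dominated front. The follower objectives f1, f2 and the local solver are placeholders, not the paper's test problems or its dynamic weighting scheme.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x, y):  # hypothetical follower objective 1
    return (y[0] - x) ** 2 + y[1] ** 2

def f2(x, y):  # hypothetical follower objective 2
    return y[0] ** 2 + (y[1] - x) ** 2

def follower_front(x, weights):
    """Minimise w1*f1 + w2*f2 over y for each weight pair, collecting
    one follower response per weight vector."""
    front = []
    for w1, w2 in weights:
        res = minimize(lambda y: w1 * f1(x, y) + w2 * f2(x, y),
                       x0=np.zeros(2), method="BFGS")
        front.append(res.x)
    return front

# Uniformly spread weights over the simplex, one scalar problem each
ws = [(w, 1.0 - w) for w in np.linspace(0.05, 0.95, 5)]
front = follower_front(1.0, ws)
```

For this quadratic example the scalarised optimum at weights (w1, w2) is y = (w1, w2) when x = 1, so sweeping the weights traces the follower's front; the weighted sum approach recovers only the convex part of a front in general.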
Table 8

Comparison of the best results found by SMEA and the compared approaches.

Test problems | Ref-f(x*, y*) | f(x*, y*) | Ref-F(x*, y*) | F(x*, y*) ± std | Best solution (x*, y*)
F14 | −152.5005 | −152.2950 | −351.8333 | −352.0990 ± 0 | x* = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0); y* = (0, 0.6600, 8.8330, 0, 0, 0, 0, 0, 0, 0)
F15 | 57.4800 | 57.4800 | −6600.0000 | −6648.1400 ± 0 | x* = (7.3600, 3.5500, 12.3500, 17.4600); y* = (0.9100, 10.0000, 29.0900, 0)
F16 | −1.0156 | −1.0156 | −18.6787 | −18.6835 ± 0 | x* = (0, 2); y* = (1.8768, 0.9076)
F17 | 3.2000 | 3.2000 | −29.2000 | −29.2000 ± 0 | x* = (0.0001, 0.8999); y* = (0, 0.5999, 0.4001)
F18 | 7.6145 | 7.6148 | −1.2091 | −1.2092 ± 0 | x* = 1.8885; y* = (0.8892, 0)
F19 | 1.0000 | 1.0000 | 0.0000 | 0.0000 ± 0 | x* = (−0.8885, −0.1115, 0.7173, 0.2510, 0.1672, −0.6012, −0.5287, −0.7114, 0.3353, 0.9776); y* = (0.0797, 0.9856, 0.4052, 0, −0.9677, −0.5822, 0.6768, 0, 0.8700, −0.2090)
F20 | 100.0000 | 0.0000 | 0.0000 | 0.0000 ± 0 | x* = (0.9857, 28.6942); y* = (−19.0141, 8.6941)
Table 8 shows the average optimal results over 10 independent runs on Examples 14–20, with the best results highlighted in bold. In particular, the optimal solutions of Examples 14–16 and 18 are clearly better than those reported in the existing literature. It follows that the surrogate model technique is effective for solving problems of this type.
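The surrogate idea can be sketched in one dimension: a few expensive exact follower solves are interpolated, and subsequent leader decisions are evaluated through the cheap interpolant instead of re-solving the follower's problem. This uses a Gaussian RBF interpolant as a stand-in for the paper's adaptively improved surrogate models, and a hypothetical closed-form function in place of the exact solve.

```python
import numpy as np

def exact_follower_value(x):
    """Hypothetical stand-in for an expensive exact follower solve."""
    return np.sin(x) + 0.1 * x ** 2

def fit_rbf(X, Y, eps=1.0):
    """Fit an interpolating Gaussian-RBF surrogate phi(r) = exp(-(eps*r)^2)
    through the sampled (X, Y) pairs."""
    K = np.exp(-(eps * np.abs(X[:, None] - X[None, :])) ** 2)
    return np.linalg.solve(K, Y)

def predict_rbf(X, coef, x_new, eps=1.0):
    """Evaluate the surrogate at new leader decisions x_new."""
    x_new = np.atleast_1d(np.asarray(x_new, float))
    k = np.exp(-(eps * np.abs(x_new[:, None] - X[None, :])) ** 2)
    return k @ coef

X = np.linspace(0.0, 3.0, 8)          # a few expensive exact solves
Y = exact_follower_value(X)
coef = fit_rbf(X, Y)
approx = predict_rbf(X, coef, 1.5)[0]  # cheap prediction at a new point
```

In an adaptive scheme, points where the surrogate's prediction proves inaccurate would be evaluated exactly and added to X, progressively tightening the model where the search concentrates.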

Conclusion

The BMPP is one of the hardest known optimisation models, as it accumulates the computational complexity of both the hierarchical structure and multiobjective optimisation. To reduce the computational cost of the problem, two efficient techniques are embedded in the proposed algorithm. One is the weighted sum method, used to deal with the multiple objectives in the follower's problem. The other is the surrogate model, which efficiently saves the computational cost of obtaining bilevel feasible solutions. In addition, a heuristic crossover operator is designed by making use of gradient information. The simulation results on 20 computational examples show the efficiency of the proposed algorithm.