
Equalized Grey Wolf Optimizer with Refraction Opposite Learning.

Lijun Sun, Binbin Feng, Tianfei Chen, Dongliang Zhao, Yan Xin.

Abstract

Grey wolf optimizer (GWO) is a global search algorithm based on grey wolf hunting activity. However, the traditional GWO is prone to falling into local optima, which harms its performance. To solve this problem, an equalized grey wolf optimizer with refraction opposite learning (REGWO) is proposed in this study. In REGWO, the low population diversity of GWO in late iterations is overcome by refraction opposite learning. In addition, the equilibrium pool strategy reduces the likelihood of wolves moving toward a local extremum. To investigate the effectiveness of REGWO, it is evaluated on 21 widely used benchmark functions and the IEEE CEC 2019 test functions. Experimental results show that REGWO performs better than the other competitors on most benchmarks.
Copyright © 2022 Lijun Sun et al.

Year:  2022        PMID: 35602624      PMCID: PMC9117049          DOI: 10.1155/2022/2721490

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Complex optimization problems in practice are often discontinuous, nondifferentiable, and nonconvex. Prior to the advent of metaheuristic optimization technology [1], the most widely used optimization methods were the gradient descent algorithm and the Gauss-Newton method [2]. However, gradient-based optimization approaches are prone to being trapped by local extrema, which reduces optimization precision. By contrast, metaheuristic optimization algorithms are able to identify optimal or near-optimal solutions within an acceptable time. Therefore, many scholars have studied metaheuristic optimization algorithms to address challenging optimization problems, such as particle swarm optimization (PSO) [3], the artificial bee colony algorithm (ABC) [4], ant colony optimization (ACO) [5], the slime mould algorithm (SMA) [6], hunger games search (HGS) [7], the Runge-Kutta method (RUN) [8], the grey wolf optimizer (GWO) [9], the weighted mean of vectors (INFO) [10], and so on. Among these algorithms, GWO has become a focus of research in recent years due to advantages such as few parameters and a straightforward principle. GWO is a swarm intelligence stochastic optimization algorithm introduced in 2014, modeled on the social hierarchy and hunting behavior of grey wolves in nature [9]. GWO is an effective metaheuristic and attracted the interest of academics when it was first introduced. It has been widely applied in many fields, such as feature selection [11, 12], image processing [13, 14], path planning [15], weld shop inverse scheduling [16], and so on. In GWO, the search process is guided by the leading wolves in each iteration, which produces strong convergence toward the leading wolves. However, the leading wolves sometimes fall into local extrema, especially in multimodal problems.
When the leading wolves get trapped at local optima, the other individuals in the population are also vulnerable to local extrema, which causes a decrease in population diversity. Therefore, the standard GWO suffers from the same issues as most swarm intelligence algorithms: lack of population diversity and ease of falling into local optima [17]. To overcome these weaknesses, an equalized grey wolf optimizer with refraction opposite learning (REGWO) is proposed in this paper. In REGWO, two search strategies with different features are introduced to generate candidate solutions. The refraction opposite learning strategy is inspired by the principle of light refraction in nature; it improves population diversity during the search and expands the scope of the solution space. At the same time, fuzzy theory is used to adjust the refraction parameter so that the refraction solution is more random and the algorithm can find more potential solutions. Moreover, the equilibrium pool strategy is designed to weaken the leadership of the leading wolves. It lets wolves update their positions following a nonoptimal solution with a certain probability, so wolves can jump out of a local extremum even when the optimal solution falls into a local optimum. By combining these tactics, REGWO achieves better performance. The remainder of this paper is organized as follows. Related work is discussed in Section 2. Section 3 presents the original GWO algorithm. The proposed REGWO algorithm is introduced in Section 4. In Section 5, the performance of REGWO is evaluated on different benchmark functions, and the significance of the results is verified by statistical analysis. Finally, conclusions and future work are given in Section 6.

2. Related Work

Metaheuristic optimization algorithms have been widely used to solve optimization problems. These algorithms fall into three categories: physics-based algorithms, evolutionary algorithms, and swarm intelligence algorithms. Physics-based algorithms mimic physical rules in nature, in which individuals communicate across the search space through physical concepts such as inertia force, the law of light refraction, gravitational force, and so on. Popular algorithms in this category include atom search optimization (ASO) [18] and Henry gas solubility optimization [19]. Evolutionary algorithms (EAs) are iterative optimization algorithms that simulate natural evolutionary processes. The best individuals are combined to form a new generation, which is the main advantage of EAs because it promotes population improvement during iteration; examples include the genetic algorithm (GA) [20] and differential evolution (DE) [21]. Swarm intelligence algorithms are inspired by the collective behavior of swarm organisms, such as bird flocking and animal grazing: individuals in a population move collectively, through cooperation and interaction, toward promising areas of the search space. Recently proposed swarm-based algorithms include the grey wolf optimizer (GWO) [9], monarch butterfly optimization (MBO) [22], the moth search algorithm (MSA) [23], Harris hawks optimization (HHO) [24], the colony predation algorithm (CPA) [25], and so on. Swarm intelligence algorithms have been shown to be effective at solving optimization problems, but they may fall into local optima and lose diversity. As a result, some scholars have proposed modified variants to tackle these flaws. The DEWCO algorithm improves the initial population through a hyperheuristic to increase convergence speed [26]. The EFSABC algorithm introduces a group escape and foraging search strategy based on Levy flight to escape local optima [27].
GWO is a swarm intelligence algorithm that imitates the social hierarchy and group hunting behavior of wolves. It has few parameters and is easy to implement, so it has been widely used to solve different optimization problems, such as the multidimensional knapsack problem [28], path planning [29], parameter estimation [30], economic dispatch [31], feature selection [32], the large-scale unit commitment problem [33], wind speed forecasting [34], and so on. In recent years, numerous scholars have developed variants of the basic GWO to address its weaknesses and provide better performance. Three different position update methods have been proposed [35]: weighted average, fitness-based, and fuzzy logic; further experimental analysis reveals that the GWO improved with the fuzzy logic method performs best. To improve the search ability of the grey wolf, a modified algorithm based on random walk, RW-GWO, has been introduced [36]. A cellular grey wolf optimizer with a topological structure (CGWO) has also been introduced: in CGWO, each wolf has its own topological neighbors, and interactions among wolves are restricted to their neighbors, which favors exploitation; furthermore, the information diffusion mechanism through overlapping neighborhoods maintains population diversity for longer, usually contributing to exploration [37]. The grey wolf optimizer with crossover and opposition-based learning (GWO-XOBL) is presented to jump out of local optima [38]. An improved grey wolf optimizer has been proposed using an explorative equation and opposition-based learning (OBL) [39]. To achieve a more stable balance between exploitation and exploration, a modified GWO called the memory-based grey wolf optimizer (mGWO) has been introduced [40].
The randomized balanced grey wolf optimizer (RBGWO) improves the overall efficiency of the search process by establishing a balance between exploitation and exploration, incorporating three successive enhancement strategies equipped with a social hierarchy mechanism and a random walk with Student's t-distributed random numbers [41]. By dividing the search process into three stages and using different population updating strategies at each stage, an improved GWO called the multistage grey wolf optimizer (MGWO) has been proposed; MGWO improves accuracy while maintaining a certain convergence speed [42]. Another area of interest for researchers is combining other evolutionary algorithms or operators to improve the performance of GWO. In the PSO-GWO algorithm [43], the idea of PSO was introduced into GWO to update the position of each individual using the individual optimum and the group optimum, which enhanced population diversity and improved global search ability. The crossover operator has been introduced into GWO to promote population diversity [44]; its purpose is to enhance information sharing among individuals in the population, while also improving the search accuracy and convergence speed of the algorithm. The grey wolf optimizer has been hybridized with differential evolution (DE) mutation, and two versions, namely DE-GWO and gDE-GWO, have been proposed to avoid stagnation of the solution [45]. To further improve GWO, a new variant called the mutation-driven modified grey wolf optimizer (MDM-GWO) has been proposed; MDM-GWO combines a new update search mechanism, a modified control parameter, a mutation-driven scheme, and a greedy selection approach in the search procedure of GWO [46].
The SCGWO algorithm combines GWO with an improved spread strategy and a chaotic local search mechanism to accelerate the convergence rate of the evolving agents [47]. A GWO variant enhanced with a covariance matrix adaptation evolution strategy, a Levy flight mechanism, and an orthogonal learning strategy, named GWOCMALOL, has been proposed; these strategies give it stronger exploratory tendencies [48]. As the various improvement strategies above show, the main aim of GWO variants is to improve search accuracy and convergence speed. Although these variants overcome some drawbacks of the original GWO, GWO still suffers from poor global exploration ability in late iterations. Therefore, an equalized grey wolf optimizer with refraction opposite learning (REGWO) is presented in this paper.

3. Grey Wolf Optimizer

GWO is a typical swarm intelligence optimization algorithm whose model originates from the leadership hierarchy and hunting behavior of grey wolves. There is a clear division of labor and cooperation among grey wolf individuals. As shown in Figure 1, the grey wolf population is separated into four levels, namely the α, β, δ, and ω wolves. The first layer is the α wolf, the next layer is the β wolf, and the δ wolf is located in the third layer. The remaining grey wolves are called ω wolves (search wolves) and occupy the bottom layer. The α, β, and δ wolves are called the leading wolves, and there is one wolf at each of these levels. In GWO, the ω wolves update their positions to obtain the optimal solution, while the α, β, and δ wolves represent the best, second-best, and third-best solutions, respectively. The hunting process of grey wolves is mainly guided by the α, β, and δ wolves, and the ω wolves update iteratively according to the positions of the leading wolves.
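The hierarchy above boils down to sorting the population by fitness and taking the three best individuals as α, β, and δ. A minimal sketch, assuming minimization; the array layout and function name are our own, not from the paper:

```python
import numpy as np

def select_leaders(positions, fitness):
    """Pick the three best wolves (alpha, beta, delta) by fitness.

    positions: (N, D) array of wolf positions.
    fitness:   (N,) array of objective values (lower is better).
    """
    order = np.argsort(fitness)            # ascending: best first
    alpha, beta, delta = positions[order[:3]]
    return alpha, beta, delta
```

All other wolves (the ω wolves) are then updated relative to these three leaders.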
Figure 1

Four-layer pyramid model in GWO.

The formula of the grey wolf encircling prey can be expressed as follows [41]:

D = |C ∘ X_p(t) − X(t)|,
X(t + 1) = X_p(t) − A ∘ D.

Here, t is the current number of iterations; ∘ is the Hadamard product operation; X_p and X denote the position vectors of the prey and a grey wolf, respectively; and the random vectors A and C are calculated as

A = 2a ∘ r_1 − a,
C = 2r_2,

where r_1 and r_2 are random vectors in [0, 1] and the vector a is linearly decreased from 2 to 0 over the course of iterations. To better understand the optimization rules of GWO, the possible areas reached when the position of a grey wolf is updated are shown in Figure 2. It can be seen from Figure 2 that the ω wolf can reach different positions around the prey by adjusting the values of the parameters A and C. Furthermore, the random vectors r_1 and r_2 can assist search wolves in reaching any of the points depicted in Figure 2. The parameters A and C are responsible for exploration and exploitation behavior in GWO. A is a random value in [−a, a]. When |A| > 1, the population is inclined toward exploration; when |A| < 1, the population is prone to exploitation.
Figure 2

2D and 3D position vectors and the possible next position of grey wolf: (a) 2D view of grey wolf in GWO and (b) 3D view of grey wolf in GWO.

The formulas of the grey wolf tracking the target prey are as follows [41]:

D_α = |C_1 ∘ X_α − X|,  D_β = |C_2 ∘ X_β − X|,  D_δ = |C_3 ∘ X_δ − X|,
X_1 = X_α − A_1 ∘ D_α,  X_2 = X_β − A_2 ∘ D_β,  X_3 = X_δ − A_3 ∘ D_δ,
X(t + 1) = (X_1 + X_2 + X_3)/3. (3)

Here, D_α, D_β, and D_δ denote the distances between the α, β, and δ wolves and the other individuals, respectively; X_α, X_β, and X_δ represent the positions of α, β, and δ, respectively; C_1, C_2, C_3 and A_1, A_2, A_3 are random vectors; X_1, X_2, and X_3 represent the step length and direction of the ω wolf toward α, β, and δ, respectively; and the ω wolf determines its final position according to equation (3).
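The tracking equations above can be sketched directly in code. This is a minimal NumPy rendition of one position update, assuming the standard GWO formulas just given (the function name and array layout are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(X, alpha, beta, delta, a):
    """One GWO position update: X(t+1) = (X1 + X2 + X3) / 3.

    X: (N, D) wolf positions; alpha/beta/delta: (D,) leader positions;
    a: scalar linearly decreased from 2 to 0 over the iterations.
    """
    N, D = X.shape
    new_X = np.empty_like(X)
    for i in range(N):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(D), rng.random(D)
            A = 2 * a * r1 - a              # A in [-a, a]
            C = 2 * r2                      # C in [0, 2]
            dist = np.abs(C * leader - X[i])
            candidates.append(leader - A * dist)
        new_X[i] = np.mean(candidates, axis=0)
    return new_X
```

Note that when a reaches 0, A vanishes and every wolf lands exactly on the centroid of the three leaders, which is why the algorithm converges toward them.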

4. Proposed Algorithm REGWO

To improve the global search ability of GWO, the opposite learning of refraction and equilibrium pool technique are introduced in this work. Opposite learning of refraction is chosen in the proposed algorithm to generate more potential solutions. Besides, the equilibrium pool strategy can achieve a better exploration by weakening the leadership of the leading wolf. The two strategies are introduced in Sections 4.1 and 4.2, respectively. The proposed algorithm is named REGWO and described in Section 4.3.

4.1. Opposite Learning of Refraction

The opposition-based learning (OBL) technique was proposed in 2005 [49]. The fundamental idea is to expand the search space of the population by calculating the opposite solution of the current solution, so as to select the candidate solution that is more suitable for the optimization problem. Applying this method to an optimization algorithm can effectively improve its search accuracy [50]. However, standard OBL has certain shortcomings: it only speeds up the convergence of GWO and generates a single opposite solution at a fixed position. Therefore, the opposite solution may fall near a local optimum, causing the algorithm to fall into the local optimum as iterations proceed [51]. To tackle this problem, this paper introduces the refraction principle to improve the traditional opposite learning process. The refracted opposition-based learning (ROBL) strategy builds on OBL, combined with the principle of light refraction, to identify better solutions. The ROBL strategy considers not only the opposite direction of individuals but also other directions. The schematic is shown in Figure 3.
Figure 3

Refraction opposite learning model.

In the one-dimensional space where an individual of the population is located, the x-axis is separated into upper and lower parts. Above the x-axis is the vacuum part of the refraction model, and below the x-axis is the other propagation medium. In Figure 3, the search range of individuals on the x-axis is [a, b], that is, x ∈ [a, b], and the y-axis is the normal. x is the incident point of the light source, and the length of the incident light segment xo is denoted by h. The incident light refracts at the intersection o, and x* is the refraction point; the length of the refracted light segment ox* is denoted by h′; and θ and φ are the incidence angle and the refraction angle, respectively. The x coordinate (position) of the intersection o is (a + b)/2, which is the midpoint of the search range [a, b]. From the geometric relationship in Figure 3, we can obtain

sin θ = ((a + b)/2 − x)/h, (4)
sin φ = (x* − (a + b)/2)/h′. (5)

The refractive index (n = sin θ/sin φ) of light obtained from equations (4) and (5) is

n = h′((a + b)/2 − x)/(h(x* − (a + b)/2)). (6)

Let k = h/h′; equation (6) can be transformed into

kn = ((a + b)/2 − x)/(x* − (a + b)/2). (7)

According to equation (7), we can obtain

x* = (a + b)/2 + (a + b)/(2kn) − x/(kn). (8)

When n and k are both 1, equation (8) reduces to the standard opposite learning formula:

x* = a + b − x. (9)

Obviously, the OBL strategy is a special case of the ROBL strategy. In order to improve the ability of GWO to jump out of local extrema, the above ROBL model is applied to GWO. Since the individuals in GWO are multidimensional, equation (8) can be extended to the D-dimensional space as follows:

x*_{i,j} = (a_j + b_j)/2 + (a_j + b_j)/(2kn) − x_{i,j}/(kn). (10)

Here, x_{i,j} represents the value of the j-th dimension of the i-th individual; x*_{i,j} is the opposite solution obtained by the ROBL model; and a_j and b_j are the lower and upper bounds of the j-th dimension, respectively. As shown in Figure 3 and the discussion above, as the value of k changes, the position of the opposite solution generated by equation (10) changes accordingly; that is, the adjustment of k improves the randomness of the solution.
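Equation (10) can be sketched in a few lines of code; the function name is our own, and n defaults to 1 as an assumption for illustration:

```python
import numpy as np

def robl_opposite(x, a, b, k, n=1.0):
    """Refracted opposite solution of equation (10), per dimension.

    x: (D,) position; a, b: (D,) lower/upper bounds; k = h/h' shifts where
    the opposite point lands. With k = n = 1 this reduces to the standard
    OBL formula x* = a + b - x.
    """
    mid = (a + b) / 2.0
    return mid + mid / (k * n) - x / (k * n)
```

Varying k moves the opposite point away from the fixed OBL mirror image, which is exactly the extra randomness the strategy is after.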
The parameter k is calculated by

k(t) = k_max − (k_max − k_min) · t/T,

where k_max = 1, k_min = 0, t denotes the current iteration number, and T is the total number of iterations. Meanwhile, in order to make the decline rate of k match the convergence rate of the fitness value, this paper proposes a method to adjust the k value as shown in Figure 4. The fuzzy membership degree μ(t) of k satisfies μ(t) ∈ [0, 1]; μ(t) increases as k(t) increases and decreases as k(t) decreases. The relative change rate η of the optimal fitness is

η = |f(t) − f(t − 10)| / |f(t − 10)|.
Figure 4

Fuzzy control structure.

Here, f(t) is the objective function value, namely the optimal fitness value at the t-th iteration of the population, and f(t − 10) represents the optimal fitness value at the (t − 10)-th iteration; thus, η denotes the relative change rate of the optimal fitness value over 10 iterations of evolution. When η is large, the optimal fitness value is changing quickly, and the k value should be larger to improve global search capability; conversely, when η is small, the optimal fitness value is changing slowly, and the k value should be smaller. In the ROBL model, the current η and μ(t) are evaluated first; then the k of the next iteration is adaptively adjusted by fuzzy rules, which accelerates the convergence of the algorithm. The fuzzy rules to adjust k are as follows:

Rule 1. If (μ(t) > 0.5 and η > γ) or (μ(t) ≤ 0.5 and η ≤ γ), then
Rule 2. If (μ(t) ≤ 0.5 and η > γ), then k(t) = τ1/4 + (kmax + kmin)/2.
Rule 3. If (μ(t) > 0.5 and η ≤ γ), then k(t) = τ2/4 + kmin.

Here, γ is a threshold with value 0.05, and τ1 and τ2 are parameters in [0, 1]. As Figure 3 further shows, the purpose of k is to adjust the population, improve population diversity, expand the search space, and improve the global search ability of the algorithm. Therefore, adjusting the parameter k by fuzzy rules in the ROBL model can effectively improve the diversity of the individual distribution in the search space, making up for the weak exploration ability of GWO in late iterations. The advantage of the ROBL model over the OBL model is that candidate solutions can be obtained dynamically by parameter adjustment, which greatly increases the chance of the algorithm jumping out of local optima, whereas OBL can only obtain a fixed candidate solution. That is, the parameter k has the ability to extend the search space.
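Rules 2 and 3 above translate directly into code. Rule 1's consequent is not reproduced in this record, so the sketch below falls back to the linear k schedule in that case; that fallback, along with the function name, is our assumption rather than the paper's formula:

```python
import random

K_MAX, K_MIN, GAMMA = 1.0, 0.0, 0.05

def adjust_k(mu_t, eta, t, T):
    """Fuzzy adjustment of k following Rules 2 and 3.

    mu_t: fuzzy membership degree of k; eta: relative change rate of the
    best fitness over 10 iterations; t, T: current / total iterations.
    """
    tau = random.random()                        # tau1 / tau2 in [0, 1]
    if mu_t <= 0.5 and eta > GAMMA:              # Rule 2
        return tau / 4 + (K_MAX + K_MIN) / 2
    if mu_t > 0.5 and eta <= GAMMA:              # Rule 3
        return tau / 4 + K_MIN
    # Rule 1: consequent not given in the source; assumed fallback to the
    # linear decrease from K_MAX to K_MIN.
    return K_MAX - (K_MAX - K_MIN) * t / T
```

Rule 2 pushes k toward the upper half of its range (stronger global search), while Rule 3 pushes it toward the bottom quarter (finer local search).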

4.2. Equilibrium Pool Strategy

In GWO, the search is primarily guided by the α, β, and δ wolves. If the leading wolves fall into a local optimum, the entire population will update its positions toward that local optimum. To address this issue, this paper introduces the equilibrium pool strategy to enhance population diversity [52]. The fundamental idea of the strategy is to calculate the fitness value of each individual after population initialization and choose three candidate solutions (X_1, X_2, and X_3) based on fitness, where X_1, X_2, and X_3 represent the α, β, and δ wolves, respectively. In addition, the average of the three candidate solutions is taken as the average candidate solution X_avg, and the equilibrium pool X_pool is then constructed as

X_pool = {X_1, X_2, X_3, X_avg},  X_avg = (X_1 + X_2 + X_3)/3.

Here, the three candidate solutions (X_1, X_2, X_3) contribute to exploitation, and the average candidate solution X_avg contributes to exploration. The ω wolf randomly selects a candidate individual from the pool with equal probability for position updating. Besides, a parameter F is used to balance exploration and exploitation, where λ is a random vector in [0, 1], r denotes a random number in [0, 1], sign(r − 0.5) controls the direction of exploration and exploitation, t represents the current number of iterations, and T denotes the maximum number of iterations. In addition, a generation rate G is used to improve exploitation capability, where X_eq is a candidate solution randomly chosen from the equilibrium pool with equal probability. In summary, when GWO applies the equilibrium pool strategy, the position of an individual is updated by equation (17).
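The pool construction and the equal-probability selection described above can be sketched as follows (the F and G update formulas are not reproduced in this record and are omitted; function names are our own):

```python
import numpy as np

rng = np.random.default_rng(1)

def equilibrium_pool(positions, fitness):
    """Build the equilibrium pool: the three best wolves plus their average.

    positions: (N, D) wolf positions; fitness: (N,) objective values
    (lower is better). Returns [X1, X2, X3, X_avg].
    """
    order = np.argsort(fitness)
    x1, x2, x3 = positions[order[:3]]
    x_avg = (x1 + x2 + x3) / 3.0
    return [x1, x2, x3, x_avg]

def pick_guide(pool):
    """Select one candidate from the pool with equal probability."""
    return pool[rng.integers(len(pool))]
```

Because a non-optimal guide (X2, X3, or X_avg) is drawn three times out of four, a wolf can escape even when the α wolf itself is stuck at a local extremum.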

4.3. REGWO Algorithm

It is well known that exploration and exploitation are both necessary for population-based optimization algorithms such as PSO, ABC, and ACO. In standard GWO, because all other individuals are attracted toward the leader wolves, the population may converge prematurely without sufficient exploration of the search space; that is, standard GWO is prone to premature convergence. To improve the performance of GWO, each grey wolf obtains an opposite solution via the ROBL strategy, which enhances individual randomness; refraction opposite learning makes up for the shortcomings of traditional opposite learning, expands the search space, and effectively enhances population diversity. In addition, the equilibrium pool strategy is introduced to reduce the likelihood of the algorithm falling into a local extremum. The equilibrium pool retains four individuals, namely the α, β, and δ wolves and their mean. The exploration ability is improved by randomly selecting an individual from the equilibrium pool to guide the position update of the ω wolves. The process of REGWO is described in Algorithm 1, and the flowchart of the proposed REGWO algorithm is shown in Figure 5. In REGWO, the α, β, and δ wolves are chosen after population initialization and fitness calculation. Then, positions are updated with equal probability by equation (3) or (17). Finally, the refraction opposite solution and its fitness value are calculated by equation (10); the refraction solution is retained if it is better than the original solution; otherwise, the original solution is retained. At the end of the iterations, the α wolf is the final optimization result.
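The overall loop just described can be sketched end to end. This is a schematic rendition under several stated simplifications, not the authors' exact implementation: the equilibrium-pool branch simply substitutes a pooled guide for all three leaders, k is fixed at 1 in the ROBL step (so the fuzzy adjustment is omitted), and a greedy acceptance keeps only improvements:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """f1 from the benchmark set: sum of squares, minimum 0 at the origin."""
    return float(np.sum(x * x))

def regwo_sketch(obj, dim=5, n=20, T=200, lb=-10.0, ub=10.0):
    """Schematic REGWO loop: GWO update, equilibrium-pool guide with equal
    probability, then a greedy ROBL step (k = n = 1)."""
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.array([obj(x) for x in X])
    for t in range(T):
        a = 2.0 * (1 - t / T)                    # linear 2 -> 0
        order = np.argsort(fit)
        leaders = X[order[:3]].copy()
        pool = list(leaders) + [leaders.mean(axis=0)]
        for i in range(n):
            if rng.random() < 0.5:               # classic three-leader update
                guides = leaders
            else:                                # equilibrium-pool guide
                guides = [pool[rng.integers(4)]] * 3
            cand = []
            for g in guides:
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                cand.append(g - A * np.abs(C * g - X[i]))
            x_new = np.clip(np.mean(cand, axis=0), lb, ub)
            # greedy ROBL step: with k = n = 1, x* = lb + ub - x
            x_opp = lb + ub - x_new
            if obj(x_opp) < obj(x_new):
                x_new = x_opp
            if obj(x_new) < fit[i]:
                X[i], fit[i] = x_new, obj(x_new)
    return X[np.argmin(fit)], float(fit.min())

best_x, best_f = regwo_sketch(sphere)
```

Even this stripped-down sketch drives the sphere function toward its minimum, which illustrates the interplay of the leader update, the pool, and the opposite solution without claiming the paper's exact numbers.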
Figure 5

The flowchart of the REGWO algorithm.

5. Experiments and Analysis

5.1. Experimental Settings

To fairly compare the performance of different algorithms, a function test set is needed. The numerical efficiency of the REGWO developed in this paper was tested by solving 31 mathematical optimization problems. The first 21 benchmark functions are classical functions from the literature [9, 53]; they (f1∼f21) comprise unimodal, multimodal, fixed-dimensional multimodal, and shifted functions. The specific expressions and search intervals of these functions are shown in Table 1. The unimodal functions (f1∼f9), with just one local (and global) optimum, are commonly used to evaluate the local exploitation ability of an algorithm; f10∼f14 are multimodal functions, often used to test exploration ability. The functions f15 and f16 are fixed-dimensional multimodal functions with many extreme points but low dimensionality, so they are easy to optimize and can be used to assess the stability of an algorithm. The last 5 functions in Table 1 are shifted functions, which mainly avoid the situation where an algorithm copies one parameter to another to generate a neighbor solution [53]. The other 10 test problems (f22∼f31) considered in this paper (see Table 2) are composite benchmark functions from the IEEE CEC 2019 special session [54]. These benchmark functions are more complex than the first 21, and f22∼f31 are designed to have a minimum value of 1. The optimization performance of each algorithm can be further verified by solving these complex problems.
Table 1

Benchmark functions (f1∼f21).

No. | Functions
f1 | f1(x) = Σ_{i=1}^{n} x_i^2
f2 | f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|
f3 | f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2
f4 | f4(x) = max{|x_i|, 1 ≤ i ≤ n}
f5 | f5(x) = Σ_{i=1}^{n} ([x_i + 0.5])^2
f6 | f6(x) = Σ_{i=1}^{n} i·x_i^4 + random[0, 1)
f7 | f7(x) = Σ_{i=1}^{n} (10^6)^{(i−1)/(n−1)}·x_i^2
f8 | f8(x) = Σ_{i=1}^{n} i·x_i^2
f9 | f9(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]
f10 | f10(x) = −20·exp(−0.2·sqrt((1/n)·Σ_{i=1}^{n} x_i^2)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e
f11 | f11(x) = Σ_{i=1}^{n} (y_i^2 − 10·cos(2πy_i) + 10), y_i = x_i if |x_i| < 0.5, y_i = round(2x_i)/2 otherwise
f12 | f12(x) = (1/4000)·Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/sqrt(i)) + 1
f13 | f13(x) = (1/n)·Σ_{i=1}^{n} (x_i^4 − 16x_i^2 + 5x_i)
f14 | f14(x) = Σ_{i=1}^{n} |x_i·sin(x_i) + 0.1·x_i|
f15 | f15(x) = (1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)^6))^{−1}
f16 | f16(x) = [1 + (x_1 + x_2 + 1)^2·(19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2·(18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)]
f17 | f17(x) = Σ_{i=1}^{n} z_i^2, z = x − o
f18 | f18(x) = Σ_{i=1}^{n} (z_i^2 − 10·cos(2πz_i) + 10), z = x − o
f19 | f19(x) = (1/4000)·Σ_{i=1}^{n} z_i^2 − Π_{i=1}^{n} cos(z_i/sqrt(i)) + 1, z = x − o
f20 | f20(x) = −20·exp(−0.2·sqrt((1/n)·Σ_{i=1}^{n} z_i^2)) − exp((1/n)·Σ_{i=1}^{n} cos(2πz_i)) + 20 + e, z = x − o
f21 | f21(x) = Σ_{i=1}^{n} |z_i·sin(z_i) + 0.1·z_i|, z = x − o
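A few of the Table 1 functions transcribe directly into code; here is a sketch of f1, f10 (Ackley), and f12 (Griewank), written by us for illustration, each with minimum 0 at the origin:

```python
import numpy as np

def f1_sphere(x):
    """f1: sum of squares."""
    return float(np.sum(x ** 2))

def f10_ackley(x):
    """f10: Ackley function."""
    n = len(x)
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def f12_griewank(x):
    """f12: Griewank function."""
    i = np.arange(1, len(x) + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)
```

The shifted versions f17, f19, and f20 are the same functions applied to z = x − o for a shift vector o.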
Table 2

IEEE CEC 2019 test suite (f22∼f31).

No. | Function name | F_i(x*) | Dim | Range
f22 (F1) | Storn's Chebyshev polynomial fitting problem | 1 | 9 | [−8192, 8192]
f23 (F2) | Inverse Hilbert matrix problem | 1 | 16 | [−16384, 16384]
f24 (F3) | Lennard–Jones minimum energy cluster | 1 | 18 | [−4, 4]
f25 (F4) | Rastrigin's function | 1 | 10 | [−100, 100]
f26 (F5) | Griewank's function | 1 | 10 | [−100, 100]
f27 (F6) | Weierstrass function | 1 | 10 | [−100, 100]
f28 (F7) | Modified Schwefel's function | 1 | 10 | [−100, 100]
f29 (F8) | Expanded Schaffer's F6 function | 1 | 10 | [−100, 100]
f30 (F9) | Happy Cat function | 1 | 10 | [−100, 100]
f31 (F10) | Ackley function | 1 | 10 | [−100, 100]
Two sets of experiments are conducted in this paper. In the first experiment, REGWO is compared with popular and recently proposed algorithms to evaluate convergence speed and optimization accuracy. Furthermore, comparisons among GWO, RGWOL (GWO improved only by the refraction principle with a linearly controlled parameter k), RGWOF (GWO improved only by the refraction principle with a fuzzy-controlled parameter k), EGWO (GWO improved only by the equilibrium pool strategy), and REGWO are executed; through this experiment, the influence of the two strategies and of the dynamic change of parameter k on the optimization results can be observed. In the second experiment, the mean and standard deviation are compared between REGWO and six different GWO variants. The population size is set to 30, the maximum number of iterations is set to 5000, and the experimental results are based on 30 independent runs.
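The reporting protocol above (independent runs summarized by mean and standard deviation) can be sketched as follows; the lambda below is a toy stand-in for one optimizer run, not a real algorithm from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def summarize_runs(optimizer, runs=30):
    """Mean and standard deviation of the best fitness over independent
    runs, as reported in the result tables. `optimizer` returns one best
    fitness value per call."""
    best = np.array([optimizer() for _ in range(runs)])
    return best.mean(), best.std()

# toy stand-in: pretend each run returns a value near 1.0
mean, std = summarize_runs(lambda: rng.normal(1.0, 0.1))
```

In practice each call would run one seeded instance of REGWO (or a competitor) to completion on the benchmark function under test.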

5.2. Comparison with Swarm Intelligence Algorithms and Strategies Analysis

To validate the performance of the proposed REGWO algorithm, it is used to solve the functions f1∼f31, and its performance is compared with RGWOL, RGWOF, EGWO, and other swarm intelligence algorithms, including the standard GWO [9], the sparrow search algorithm (SSA) [55], the Archimedes optimization algorithm (AOA) [56], particle swarm optimization (PSO) [57], the firefly algorithm (FA) [58], and the artificial bee colony (ABC) [4]. Among them, SSA and AOA are recently proposed intelligent algorithms, while PSO, FA, and ABC are well-established ones. The parameter settings of the compared algorithms are shown in Table 3.
Table 3

Parameter setting (PSO, FA, ABC, SSA, AOA, and GWO).

Algorithms | Parameter setting
PSO [57] | Inertia weight w = 0.75, b1 = b2 = 2, vmax = 0.1(xmax − xmin)
FA [58] | Light absorption coefficient ζ = 1, step size s = 0.2
ABC [4] | Control parameter limit = 0.6 × population size × dim
SSA [55] | Number of discoverers n = 0.2 × population size
AOA [56] | Constants d1 = 2, d2 = 6, d3 = 1, d4 = 2
GWO [9] | Control parameter a decreases linearly from 2 to 0
The results of the algorithms on each benchmark function are shown in Table 4. Table 4 verifies the effectiveness of each strategy: RGWOL performs best on six test functions, RGWOF on nine, and EGWO on six. Moreover, RGWOF outperforms RGWOL on most functions, showing that fuzzy control of the parameter k works better than a linear decrease of k. Therefore, the REGWO algorithm proposed in this paper combines the equilibrium pool strategy (EGWO) with the refraction opposite learning strategy (RGWOF), using fuzzy control of the parameter k in the refraction opposite learning. Although RGWOF and EGWO can each find good solutions on their own, combining the two strategies is more beneficial, since the solutions obtained by REGWO are always better than theirs.
Table 4

Search result (comparisons of PSO, FA, ABC, SSA, AOA, GWO, RGWOL, RGWOF, EGWO, and REGWO).

No. | Index | PSO | FA | ABC | SSA | AOA | GWO | RGWOL | RGWOF | EGWO | REGWO
f1 | Mean | 1.96e–09 | 1.18e–86 | 2.91e–11 | 5.22e–191 | 3.09e–263 | 1.34e–311 | 0 | 0 | 0 | 0
f1 | Std | 4.81e–09 | 1.90e–87 | 3.05e–11 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f2 | Mean | 9.12e–02 | 4.56e–44 | 5.29e–12 | 1.41e–101 | 8.14e–165 | 4.26e–179 | 1.00e–323 | 0 | 3.60e–214 | 0
f2 | Std | 1.60e–01 | 2.49e–45 | 2.76e–11 | 7.73e–101 | 0 | 0 | 0 | 0 | 0 | 0
f3 | Mean | 6.57e–10 | 1.19e–44 | 4.88e+04 | 1.05e–66 | 7.19e–60 | 1.28e–75 | 1.13e–92 | 0 | 1.28e–78 | 0
f3 | Std | 1.22e–09 | 3.76e–44 | 6.73e+03 | 3.33e–66 | 2.27e–59 | 4.06e–75 | 2.11e–92 | 0 | 4.06e–78 | 0
f4 | Mean | 3.78e–06 | 1.99e+00 | 5.39e+01 | 2.21e–45 | 1.09e–35 | 1.75e–75 | 1.91e–241 | 3.32e–245 | 1.04e–84 | 5.06e–251
f4 | Std | 2.37e–06 | 2.16e+00 | 5.19e+00 | 6.99e–45 | 1.74e–35 | 4.81e–75 | 0 | 0 | 1.47e–84 | 0
f5 | Mean | 5.31e–07 | 0 | 3.04e–11 | 2.61e–32 | 2.99e–01 | 4.75e–01 | 6.25e–01 | 5.24e–01 | 9.05e–07 | 5.01e–02
f5 | Std | 1.66e–06 | 0 | 2.54e–11 | 3.10e–32 | 3.06e–01 | 3.43e–01 | 3.17e–01 | 2.18e–01 | 3.44e–07 | 1.05e–01
f6 | Mean | 5.32e–03 | 6.36e–04 | 2.11e–01 | 3.03e–04 | 1.02e–03 | 1.20e–04 | 7.23e–04 | 8.63e–05 | 2.74e–04 | 2.60e–05
f6 | Std | 2.93e–03 | 3.40e–04 | 4.83e–02 | 2.79e–04 | 5.02e–04 | 6.23e–05 | 5.53e–04 | 7.74e–05 | 1.37e–04 | 1.48e–05
f7 | Mean | 5.13e+00 | 5.73e–82 | 1.82e–09 | 1.76e–90 | 5.07e–264 | 6.67e–306 | 0 | 0 | 0 | 0
f7 | Std | 2.80e+01 | 1.30e–82 | 2.59e–09 | 9.38e–90 | 0 | 0 | 0 | 0 | 0 | 0
f8 | Mean | 1.36e–07 | 1.81e–87 | 1.17e–12 | 7.25e–168 | 1.37e–270 | 4.75e–312 | 0 | 0 | 0 | 0
f8 | Std | 4.32e–07 | 1.44e–88 | 9.66e–13 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f9 | Mean | 2.87e+01 | 7.71e–01 | 3.19e+03 | 3.12e–07 | 2.59e+01 | 2.66e+01 | 2.71e+01 | 2.70e+01 | 2.38e+01 | 2.88e+00
f9 | Std | 2.09e+01 | 1.74e+00 | 1.75e+03 | 6.46e–07 | 9.00e–01 | 8.27e–01 | 1.48e+00 | 3.59e–02 | 1.30e+00 | 5.51e–01
f10 | Mean | 1.30e+00 | 2.18e–14 | 4.57e–05 | 8.88e–16 | 1.99e+01 | 7.99e–15 | 4.44e–15 | 4.44e–15 | 7.28e–15 | 4.44e–15
f10 | Std | 7.84e–01 | 5.41e–15 | 3.93e–05 | 0 | 1.46e–03 | 0 | 0 | 0 | 1.49e–15 | 0
f11 | Mean | 5.92e+01 | 7.33e+01 | 1.96e+02 | 0 | 1.54e+01 | 4.66e–01 | 2.53e–02 | 7.21e–04 | 2.12e–03 | 0
f11 | Std | 1.76e+01 | 2.21e+01 | 2.18e+01 | 0 | 4.19e+00 | 1.77e+00 | 3.55e–02 | 4.65e–03 | 1.42e–03 | 0
f12 | Mean | 2.15e–02 | 4.18e–03 | 8.41e–03 | 1.56e–05 | 1.09e–02 | 4.19e–04 | 0 | 0 | 7.39e–06 | 0
f12 | Std | 3.06e–02 | 7.94e–03 | 3.58e–02 | 5.77e–04 | 2.28e–02 | 2.29e–03 | 0 | 0 | 2.32e–05 | 0
f13 | Mean | −6.65e+01 | −7.08e+01 | −5.13e+01 | −7.20e+01 | −6.93e+01 | −6.31e+01 | −7.58e+01 | −7.62e+01 | −6.80e+01 | −7.83e+01
f13 | Std | 2.76e+00 | 2.28e+00 | 1.79e+00 | 2.36e+00 | 2.83e+00 | 3.36e+00 | 3.82e–13 | 1.62e–13 | 2.23e+00 | 2.34e–14
f14 | Mean | 4.84e–03 | 6.31e–15 | 2.26e+01 | 1.54e–17 | 2.89e–04 | 2.81e–170 | 1.37e–319 | 1.33e–322 | 8.72e–13 | 0
f14 | Std | 4.02e–03 | 3.27e–15 | 1.89e+00 | 2.53e–17 | 3.79e–04 | 0 | 0 | 0 | 2.75e–12 | 0
f15 | Mean | 3.55e+00 | 1.06e+00 | 1.39e+00 | 6.51e+00 | 9.97e–01 | 4.26e+00 | 1.65e+00 | 6.21e+00 | 9.98e–01 | 9.98e–01
f15 | Std | 3.16e+00 | 3.62e–01 | 1.81e+00 | 5.76e+00 | 1.63e–06 | 4.23e+00 | 1.14e+00 | 5.59e+00 | 6.25e–13 | 1.82e–12
f16 | Mean | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00
f16 | Std | 1.09e–15 | 1.22e–15 | 2.71e–07 | 9.70e–16 | 1.50e–07 | 2.66e–07 | 3.15e–07 | 3.73e–07 | 7.40e–16 | 4.44e–16
f17 | Mean | 4.54e–10 | 1.20e–86 | 1.42e–10 | 4.30e–192 | 9.24e–269 | 1.15e–310 | 0 | 0 | 0 | 0
f17 | Std | 1.40e–09 | 2.11e–87 | 3.86e–10 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f18 | Mean | 5.23e+01 | 6.31e+01 | 2.12e+02 | 6.61e–04 | 1.42e+01 | 7.61e–02 | 3.51e–08 | 2.89e–10 | 3.51e–10 | 0
f18 | Std | 1.49e+01 | 2.01e+01 | 1.03e+01 | 4.97e–03 | 7.59e+00 | 4.97e–01 | 2.28e–07 | 1.53e–09 | 9.52e–09 | 0
f19 | Mean | 3.61e–02 | 1.72e–03 | 5.27e–04 | 3.17e–06 | 2.10e–03 | 2.17e–04 | 8.27e–108 | 0 | 1.72e–03 | 0
f19 | Std | 2.93e–02 | 4.21e–03 | 4.80e–04 | 1.66e–06 | 6.67e–03 | 1.66e–04 | 0 | 0 | 5.44e–03 | 0
f20 | Mean | 2.04e+00 | 2.57e–14 | 5.54e–05 | 8.88e–16 | 1.99e+01 | 7.99e–15 | 4.44e–12 | 4.44e–15 | 7.63e–15 | 4.44e–15
f20 | Std | 7.38e–01 | 8.53e–15 | 4.13e–05 | 0 | 1.93e–03 | 0 | 0 | 0 | 1.12e–15 | 0
f21 | Mean | 8.50e–11 | 5.26e–16 | 2.22e–02 | 9.54e–18 | 4.86e–03 | 5.87e–05 | 5.56e–11 | 2.62e–13 | 3.12e–08 | 1.54e–46
f21 | Std | 1.43e–10 | 4.41e–15 | 2.39e–01 | 1.79e–17 | 2.21e–02 | 2.28e–04 | 9.24e–10 | 7.55e–12 | 1.56e–07 | 8.43e–46
f22 | Mean | 2.01e+04 | 2.55e+03 | 7.02e+06 | 1.00e+00 | 3.63e+02 | 4.77e+03 | 1.50e+00 | 1.21e+00 | 1.18e+03 | 1.00e+00
f22 | Std | 2.98e+04 | 4.41e+03 | 2.62e+06 | 0 | 9.35e+02 | 1.39e+04 | 7.65e–15 | 0 | 2.72e+03 | 0
f23 | Mean | 2.48e+02 | 3.75e+02 | 7.63e+03 | 4.25e+01 | 1.99e+02 | 1.57e+02 | 6.27e+00 | 5.23e+00 | 7.59e+01 | 4.26e+00
f23 | Std | 1.22e+02 | 1.40e+02 | 1.13e+03 | 2.60e–01 | 1.29e+02 | 1.08e+02 | 4.79e–02 | 2.73e–02 | 5.39e+01 | 3.85e–02
f24 | Mean | 4.36e+00 | 5.52e+00 | 1.20e+01 | 7.43e+00 | 2.95e+00 | 5.21e+00 | 4.59e+00 | 5.02e+00 | 4.42e+00 | 2.04e+00
f24 | Std | 3.13e+00 | 2.83e+00 | 4.12e–01 | 2.65e+00 | 1.86e+00 | 3.20e+00 | 1.08e+00 | 6.63e–01 | 2.67e+00 | 1.11e+00
f25 | Mean | 2.52e+01 | 1.43e+01 | 2.72e+01 | 3.80e+01 | 3.02e+01 | 1.37e+01 | 1.45e+01 | 1.01e+01 | 7.76e+00 | 5.87e+00
f25 | Std | 1.09e+01 | 5.83e+00 | 3.62e+00 | 1.18e+01 | 7.47e+00 | 6.43e+00 | 8.07e+00 | 7.02e+00 | 2.60e+00 | 3.49e+00
f26 | Mean | 1.14e+00 | 1.06e+00 | 1.29e+00 | 1.23e+00 | 1.83e+00 | 1.94e+00 | 1.62e+00 | 1.48e+00 | 1.19e+00 | 1.16e+00
Std4.71e–022.95e–023.47e–021.63e–012.20e–011.12e+001.98e–012.39e–016.74e–026.89e–02

f 27 Mean2.83e+001.63e+009.18e+005.46e+003.53e+002.69e+002.61e+002.28e+002.26e+00 1.56e+00
Std1.38e+008.87e–017.59e–011.43e+001.85e+001.03e+006.85e–011.06e+001.33e+001.08e+00

f 28 Mean8.45e+024.63e+021.48e+038.96e+025.42e+027.62e+025.31e+027.39e+022.75e+02 2.16e+02
Std2.72e+022.75e+021.70e+022.50e+022.09e+023.17e+023.59e+023.22e+021.72e+021.35e+02

f 29 Mean4.06e+003.84e+005.07e+004.28e+003.47e+003.78e+003.77e+003.22e+003.25e+00 2.96e+00
Std4.90e–012.96e–017.67e–024.13e–016.17e–015.17e–013.27e–014.09e–015.21e–016.79e–01

f 30 Mean1.14e+001.08e+001.18e+001.38e+001.19e+001.09e+001.11e+001.19e+001.10e+00 1.04e+00
Std3.76e+002.11e–022.31e–022.06e–011.09e–016.76e–026.13e–026.87e–022.63e–022.96e–02

f 31 Mean2.09e+011.31e+012.12e+012.10e+012.11e+012.05e+011.88e+011.87e+011.94e+01 1.12e+01
Std2.88e–031.04e+017.46e–028.72e–021.37e–012.40e+005.28e+001.17e–016.06e+001.05e+01

Friedman test average rank 7.85 5.83 8.69 5.59 6.83 5.91 4.50 3.67 4.17 1.90
On the other hand, Table 4 shows that the average fitness of REGWO outperforms the other swarm intelligence optimization algorithms on 26 of the test functions. Notably, REGWO converges to the global optimal solution on functions f1∼f3, f7, f8, and f11∼f19, indicating that REGWO has the potential to reach the global optimum. On further observation of Table 4, the standard GWO is still competitive with SSA and AOA on the unimodal functions, which confirms that standard GWO has good exploitation ability when solving unimodal functions. However, its performance is comparatively weak on multimodal functions, whereas REGWO performs well on both unimodal and multimodal functions.

Moreover, the Friedman test average ranks in Table 4, ordered from low (best) to high, are REGWO, RGWOF, EGWO, RGWOL, SSA, FA, GWO, AOA, PSO, and ABC. REGWO is therefore clearly competitive with both the recent algorithms (SSA, AOA) and the classical ones (PSO, FA, ABC). This superiority should be attributed to the improved strategies: the refraction opposite solution strategy keeps individual diversity high throughout the optimization, and the equalization pool strategy weakens the leadership of the optimal solution. The combination of the two strategies thus effectively improves the performance of standard GWO on multimodal functions; in other words, the algorithm's ability to jump out of local extrema is enhanced.

Because of the stochastic nature of these algorithms, a statistical test is necessary to provide reliable comparisons [44]. Therefore, the Wilcoxon signed-rank test is conducted in this paper. The results for REGWO against the other 9 selected algorithms on the 31 test functions are shown in Table 5. The sign + (−) denotes that REGWO performs better (worse) than the compared algorithm, and the symbol = indicates that REGWO obtains the same result as its competitor. Table 5 shows that REGWO yields a higher R+ value than R− value in all cases. Moreover, the p values against all 9 algorithms are below 0.05, indicating that their results differ significantly from REGWO's and that REGWO is superior to the other algorithms.
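The R+ and R− columns report, for each pairwise comparison, the summed ranks of the cases where REGWO wins and loses, respectively. As an illustration of how such values are produced, here is a minimal pure-Python sketch of the paired Wilcoxon signed-rank test; the function name is an assumption of this sketch, and it uses a normal-approximation p value (the paper may use exact tables):

```python
import math

def wilcoxon_signed_rank(a, b):
    """Paired Wilcoxon signed-rank test for minimisation results.

    `a` and `b` are per-problem error values of two algorithms.
    Returns (r_plus, r_minus, p): r_plus sums the ranks of problems
    where `a` is better (smaller), r_minus where it is worse. Zero
    differences are discarded, as is customary; p uses the two-sided
    normal approximation (assumes at least one nonzero difference).
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    # Rank |d| ascending; tied values share the average of their ranks.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of this tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d < 0)   # a better
    r_minus = sum(r for r, d in zip(ranks, diffs) if d > 0)  # a worse
    # Normal approximation, adequate for sample sizes like the 31
    # benchmark functions compared in Tables 5 and 7.
    w = min(r_plus, r_minus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p value
    return r_plus, r_minus, p
```

By construction, R+ and R− sum to n(n + 1)/2 over the n nonzero differences, which is a quick sanity check on rows of such tables.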
Table 5

Results of Wilcoxon signed-rank test (comparison with PSO, FA, ABC, SSA, AOA, GWO, RGWOL, RGWOF, and EGWO).

| Case | +/=/− | R− | R+ | p value |
|---|---|---|---|---|
| REGWO vs. PSO | 30/1/0 | 21 | 444 | 1.36e–05 |
| REGWO vs. FA | 28/1/2 | 55 | 410 | 2.61e–04 |
| REGWO vs. ABC | 30/1/0 | 11 | 454 | 5.21e–06 |
| REGWO vs. SSA | 25/3/3 | 57 | 349 | 8.85e–04 |
| REGWO vs. AOA | 30/1/0 | 10 | 455 | 4.72e–06 |
| REGWO vs. GWO | 30/1/0 | 0 | 465 | 1.73e–06 |
| REGWO vs. RGWOL | 24/7/0 | 123 | 276 | 2.70e–05 |
| REGWO vs. RGWOF | 20/11/0 | 178 | 210 | 8.85e–05 |
| REGWO vs. EGWO | 25/6/0 | 96 | 311 | 6.45e–05 |
There are several reasons for REGWO's good performance. First, the refraction principle and a fuzzy control parameter are introduced into GWO, which enhances individual diversity: RGWO improves the optimization precision and convergence speed of the standard GWO algorithm by retaining the better of the original solution and its refraction opposite solution. Second, the equilibrium pool strategy enhances the global search ability of the original GWO algorithm by reducing the leadership of the leading wolves; the advantage of this strategy is particularly obvious when solving multimodal functions. REGWO, as an improved GWO algorithm, combines the advantages of the two strategies: it improves both the convergence speed and the optimization precision of the original GWO algorithm.

The convergence histories of the compared algorithms are shown in Figure 6. They show that REGWO converges faster than the other swarm intelligence optimization algorithms on the unimodal functions except f5. Although REGWO does not converge as quickly on the multimodal functions as on the unimodal ones, it achieves higher search precision than the other algorithms. The optimization performance of REGWO is especially remarkable when solving the more complex functions (the IEEE CEC 2019 test suite). This demonstrates that REGWO not only improves the convergence speed of the standard GWO algorithm on unimodal functions but also enhances its optimization precision on complex functions.
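The retain-the-better refraction step described above can be sketched as follows. This uses one common formulation of refraction-based opposition from the literature; the paper's exact update rule, and its fuzzy control of the parameter k, may differ, and the function names are assumptions of this sketch:

```python
def refraction_opposite(x, lb, ub, k=1000.0):
    """Refraction-based opposite solution (a common formulation from
    the refraction-opposite-learning literature; REGWO's exact update
    may differ). Per dimension:
        x* = (lb + ub) / 2 + (lb + ub) / (2 * k) - x / k
    With k = 1 this reduces to classic opposition-based learning,
    x* = lb + ub - x; larger k pulls the opposite point toward the
    middle of the range, shrinking the opposition step the way a
    refracted ray bends toward the normal."""
    return [(l + u) / 2 + (l + u) / (2 * k) - xi / k
            for xi, l, u in zip(x, lb, ub)]

def keep_better(x, fitness, lb, ub, k=1000.0):
    """Retain the better of a solution and its refraction opposite
    (minimisation). The opposite point is clipped back into the box
    before evaluation."""
    x_op = refraction_opposite(x, lb, ub, k)
    x_op = [min(max(v, l), u) for v, l, u in zip(x_op, lb, ub)]
    return x if fitness(x) <= fitness(x_op) else x_op
```

Because the opposite point lies on the far side of the midpoint of the search range, evaluating both and keeping the better one costs one extra fitness evaluation per individual but preserves diversity in late iterations.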
Figure 6

Convergence diagrams.

5.3. Comparison with GWOs

To further validate the effectiveness of REGWO, the performances of different GWO variants are compared: the benchmark functions (f1∼f31) are solved by REGWO, WGWO [35], DGWO [59], AGWO [60], IGWO [61], RLGWO [62], and GNHGWO [63]. For a fair comparison, the six compared algorithms use the same parameter settings as in their original literature. The results are then analyzed by the Friedman test average rank and the Wilcoxon signed-rank test, and the statistical results (mean cost and standard deviation) over 30 independent experiments are reported in Table 6.
Table 6

Search result comparisons of GWOs.

| No. | Performance index | WGWO | DGWO | AGWO | IGWO | RLGWO | GNHGWO | REGWO |
|---|---|---|---|---|---|---|---|---|
| f1 | Mean | 0 | 0 | 0 | 3.72e–317 | 0 | 1.02e–04 | 0 |
| | Std | 0 | 0 | 0 | 0 | 0 | 2.18e–04 | 0 |
| f2 | Mean | 2.41e–200 | 0 | 1.82e–281 | 1.69e–187 | 0 | 1.64e–04 | 0 |
| | Std | 0 | 0 | 0 | 0 | 0 | 2.93e–04 | 0 |
| f3 | Mean | 8.92e–98 | 1.80e–172 | 1.46e–144 | 1.53e–62 | 0 | 2.09e–03 | 0 |
| | Std | 2.82e–97 | 0 | 4.64e–144 | 4.53e–62 | 0 | 1.14e–03 | 0 |
| f4 | Mean | 5.31e–82 | 3.54e–151 | 3.00e–128 | 7.94e–60 | 3.54e–46 | 5.59e–03 | 3.06e–252 |
| | Std | 2.28e–81 | 1.11e–150 | 1.64e–128 | 3.25e–59 | 1.84e–45 | 1.83e–02 | 0 |
| f5 | Mean | 5.90e–01 | 5.56e–01 | 1.03e+00 | 1.09e–27 | 4.75e–01 | 6.85e+00 | 1.66e–02 |
| | Std | 2.94e–01 | 3.55e–01 | 4.55e–01 | 9.26e–27 | 2.48e–01 | 3.08e–01 | 6.35e–02 |
| f6 | Mean | 1.53e–04 | 2.14e–04 | 3.59e–05 | 6.72e–04 | 8.89e–04 | 1.28e–05 | 7.43e–06 |
| | Std | 6.91e–04 | 1.14e–04 | 5.84e–05 | 9.98e–04 | 9.56e–04 | 1.61e–05 | 1.38e–06 |
| f7 | Mean | 0 | 0 | 0 | 0 | 0 | 3.52e–03 | 0 |
| | Std | 0 | 0 | 0 | 0 | 0 | 1.72e–02 | 0 |
| f8 | Mean | 0 | 0 | 0 | 0 | 0 | 3.15e–08 | 0 |
| | Std | 0 | 0 | 0 | 0 | 0 | 1.02e–07 | 0 |
| f9 | Mean | 5.68e+00 | 6.51e+00 | 6.00e+00 | 5.75e+00 | 6.75e+00 | 8.83e+00 | 2.99e+00 |
| | Std | 7.70e–01 | 6.68e–01 | 3.93e–01 | 4.23e+00 | 1.02e+00 | 3.51e–01 | 5.53e–01 |
| f10 | Mean | 7.51e–15 | 7.99e–15 | 6.21e–15 | 7.63e–15 | 1.24e–15 | 1.73e–03 | 4.44e–15 |
| | Std | 1.22e–15 | 0 | 1.77e–15 | 1.08e–15 | 1.12e–15 | 3.36e–03 | 0 |
| f11 | Mean | 0 | 0 | 0 | 3.70e+00 | 0 | 8.65e–06 | 0 |
| | Std | 0 | 0 | 0 | 1.82e+00 | 0 | 2.73e–05 | 0 |
| f12 | Mean | 0 | 8.20e–03 | 0 | 1.45e–02 | 0 | 6.51e–07 | 0 |
| | Std | 0 | 1.17e–02 | 0 | 1.39e–02 | 0 | 1.25e–06 | 0 |
| f13 | Mean | −7.22e+01 | −7.25e+01 | −7.46e+01 | −7.74e+01 | −7.23e+01 | −5.82e+01 | −7.83e+01 |
| | Std | 3.43e+00 | 4.01e+00 | 2.80e+00 | 1.31e+00 | 3.80e+00 | 3.74e+00 | 1.25e+00 |
| f14 | Mean | 0 | 0 | 0 | 1.42e–07 | 4.63e–06 | 2.60e–07 | 0 |
| | Std | 0 | 0 | 0 | 4.51e–07 | 1.15e–05 | 5.86e–07 | 0 |
| f15 | Mean | 1.39e+00 | 4.53e+00 | 3.54e+00 | 9.98e–01 | 4.90e+00 | 1.06e+00 | 9.98e–01 |
| | Std | 8.36e–01 | 4.39e+00 | 3.91e+00 | 5.03e–01 | 5.02e+00 | 1.44e–01 | 1.81e–12 |
| f16 | Mean | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 | 3.00e+00 |
| | Std | 3.63e–07 | 5.57e–07 | 3.33e–07 | 2.17e–15 | 3.89e–07 | 1.94e–02 | 1.18e–15 |
| f17 | Mean | 0 | 0 | 0 | 0 | 0 | 4.66e–06 | 0 |
| | Std | 0 | 0 | 0 | 0 | 0 | 1.43e–05 | 0 |
| f18 | Mean | 0 | 0 | 0 | 3.97e–01 | 0 | 3.83e–10 | 0 |
| | Std | 0 | 0 | 0 | 1.20e–01 | 0 | 8.87e–10 | 0 |
| f19 | Mean | 7.44e–04 | 2.35e–03 | 0 | 2.02e–02 | 6.26e–06 | 5.09e–06 | 0 |
| | Std | 2.46e–03 | 3.61e–03 | 0 | 2.49e–02 | 4.12e–05 | 1.59e–05 | 0 |
| f20 | Mean | 4.44e–15 | 4.41e–15 | 4.44e–15 | 4.43e–15 | 8.88e–16 | 7.16e–07 | 4.44e–15 |
| | Std | 0 | 0 | 9.01e–16 | 0 | 1.70e–15 | 1.16e–04 | 0 |
| f21 | Mean | 1.77e–07 | 2.44e–06 | 9.87e–09 | 5.01e–14 | 6.54e–10 | 4.14e–06 | 1.54e–46 |
| | Std | 6.75e–07 | 1.26e–05 | 5.40e–08 | 2.58e–13 | 5.88e–09 | 6.09e–06 | 8.43e–46 |
| f22 | Mean | 2.85e+01 | 1.98e+04 | 8.64e+01 | 1.29e+03 | 1.00e+00 | 1.00e+00 | 1.00e+00 |
| | Std | 5.43e+01 | 6.25e+04 | 2.48e+02 | 1.90e+03 | 0 | 6.83e–06 | 6.37e–15 |
| f23 | Mean | 4.68e+00 | 4.62e+00 | 4.64e+00 | 4.31e+00 | 4.89e+00 | 4.98e+00 | 4.24e+00 |
| | Std | 3.53e–01 | 3.73e–01 | 3.98e–01 | 2.77e–01 | 2.61e–01 | 8.71e–02 | 4.77e–02 |
| f24 | Mean | 4.75e+00 | 3.27e+00 | 5.39e+00 | 7.19e+00 | 4.83e+00 | 9.07e+00 | 1.46e+00 |
| | Std | 3.01e+00 | 3.06e+00 | 3.54e+00 | 1.69e+00 | 3.28e+00 | 8.73e–01 | 4.87e+00 |
| f25 | Mean | 1.33e+01 | 1.64e+01 | 1.65e+01 | 8.19e+00 | 2.01e+01 | 7.50e+01 | 7.64e+00 |
| | Std | 5.30e+00 | 8.47e+00 | 8.05e+00 | 2.09e+00 | 6.01e+00 | 9.40e+00 | 1.14e+00 |
| f26 | Mean | 1.39e+00 | 1.42e+00 | 1.63e+00 | 1.42e+00 | 1.42e+01 | 4.74e+01 | 1.16e+00 |
| | Std | 2.30e–01 | 2.41e–01 | 7.57e–01 | 6.68e–02 | 3.91e+01 | 8.78e+00 | 8.47e–02 |
| f27 | Mean | 2.46e+00 | 2.17e+00 | 3.88e+00 | 1.51e+00 | 2.77e+00 | 1.00e+01 | 1.50e+00 |
| | Std | 8.17e–01 | 9.82e–01 | 3.57e–01 | 1.24e+00 | 1.65e+00 | 7.06e–01 | 6.23e–01 |
| f28 | Mean | 5.54e+02 | 6.64e+02 | 6.04e+02 | 2.82e+02 | 7.79e+02 | 1.60e+03 | 2.17e+02 |
| | Std | 2.41e+02 | 2.91e+02 | 2.37e+02 | 2.70e+02 | 3.37e+02 | 1.49e+02 | 2.19e+02 |
| f29 | Mean | 3.72e+00 | 3.73e+00 | 4.02e+00 | 3.18e+00 | 3.69e+00 | 4.96e+00 | 2.69e+00 |
| | Std | 6.63e–01 | 6.74e–01 | 4.03e–01 | 6.61e–01 | 4.71e–01 | 1.47e–01 | 7.66e–01 |
| f30 | Mean | 1.11e+00 | 1.10e+00 | 1.14e+00 | 1.09e+00 | 1.13e+00 | 2.80e+00 | 1.06e+00 |
| | Std | 5.04e–02 | 5.58e–02 | 3.89e–02 | 3.48e–02 | 6.24e–02 | 5.32e–01 | 2.88e–02 |
| f31 | Mean | 2.13e+01 | 2.13e+01 | 2.07e+01 | 1.99e+01 | 2.10e+01 | 2.13e+01 | 1.11e+01 |
| | Std | 9.81e–02 | 1.98e+00 | 9.55e–02 | 6.42e+00 | 1.38e–01 | 9.34e–02 | 9.80e+00 |
| Friedman test average rank | | 3.85 | 3.98 | 4.09 | 3.98 | 4.04 | 6.09 | 1.93 |
It can be seen from Table 6 that the average fitness of REGWO is superior except on functions f5, f10, and f20. In addition, its standard deviation is much smaller than that of the other algorithms on most functions. The average fitness of REGWO outperforms the other 6 enhanced GWOs on 14 benchmark functions, which shows that the combination of the refraction opposite learning approach and the equilibrium pool strategy effectively improves the optimization accuracy of the standard GWO. All of the above algorithms reach the theoretical optimal value on function f16, because f16 is a fixed-dimensional multimodal function with low dimension and is simple to solve; even so, REGWO shows better stability in terms of standard deviation.

Functions f22∼f31 are more complicated than the test functions listed in Table 1 and therefore better test the algorithms' exploration and exploitation abilities. In particular, REGWO converges to the theoretical optimal value of 1 on function f22, and on functions f26 and f30 its results are also close to the theoretical optimal value of 1. This demonstrates that REGWO retains the ability to converge to the global optimum on more complex mathematical optimization problems. Moreover, REGWO also achieves better standard deviations on most functions, indicating better stability.

From the Friedman test average ranks in Table 6, the order from low to high is REGWO, IGWO, DGWO, RLGWO, AGWO, WGWO, and GNHGWO (IGWO ties with DGWO). This shows that the performance of REGWO is much superior to the other GWOs in accuracy. From the results of the Wilcoxon signed-rank test in Table 7, REGWO provides a higher R+ value than R− value in all cases. Moreover, the p values for WGWO, DGWO, AGWO, IGWO, RLGWO, and GNHGWO are all below 0.05, indicating that their results differ significantly from REGWO's and that REGWO is far superior to the other six algorithms.
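The "Friedman test average rank" rows of Tables 4 and 6 can be reproduced from the mean-fitness matrix alone. The following sketch (the function name is an assumption, not from the paper) ranks the algorithms on each function, with ties sharing the average rank, and then averages over all functions:

```python
def friedman_average_ranks(results):
    """Friedman average ranks for a benchmark comparison.

    `results[f][a]` is the mean fitness of algorithm `a` on function
    `f` (minimisation). On each function the algorithms receive ranks
    1 (best) .. m (worst); tied values share the average of the ranks
    they occupy. The returned list averages those ranks over all
    functions; lower is better.
    """
    n_algos = len(results[0])
    totals = [0.0] * n_algos
    for row in results:
        for a, value in enumerate(row):
            # average rank = 1 + (#strictly better) + (#other ties)/2
            better = sum(1 for v in row if v < value)
            ties = sum(1 for v in row if v == value) - 1
            totals[a] += 1 + better + ties / 2
    return [t / len(results) for t in totals]
```

The counting shortcut avoids an explicit sort: an algorithm tied with t others and beaten by b others occupies ranks b+1 through b+t+1, whose average is 1 + b + t/2.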
Table 7

Results of Wilcoxon signed-rank test (comparison with GWOs).

| Case | +/=/− | R− | R+ | p value |
|---|---|---|---|---|
| REGWO vs. WGWO | 21/10/0 | 127 | 231 | 5.95e–05 |
| REGWO vs. DGWO | 22/9/0 | 117 | 250 | 6.08e–05 |
| REGWO vs. AGWO | 20/11/0 | 146 | 210 | 8.85e–05 |
| REGWO vs. IGWO | 25/5/1 | 72 | 334 | 5.68e–05 |
| REGWO vs. RLGWO | 20/9/2 | 161 | 205 | 1.89e–04 |
| REGWO vs. GNHGWO | 29/2/0 | 21 | 435 | 2.56e–06 |
In summary, REGWO achieves such good performance because of the contributions of the two strategies. First, refraction opposite learning increases the diversity of solutions. Second, the equilibrium pool strategy weakens the leadership of the leading wolves, increasing the probability that individuals jump out of local optima. The REGWO algorithm, which combines the two strategies, is therefore able to achieve competitive optimization results on the test problems compared with other GWOs.
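The equilibrium pool idea can be sketched as follows. The pool composition here (the three leading wolves plus their centroid) is an assumption modeled on the equilibrium optimizer's pool, and the function name is hypothetical; the paper defines its own pool. The point is that each position update draws its guide at random from the pool rather than always combining the same leaders:

```python
import random

def pick_guide(alpha, beta, delta):
    """Equilibrium-pool leader selection (sketch; REGWO's actual pool
    composition may differ). Instead of every wolf moving toward a
    deterministic combination of the three leading wolves, each update
    draws its guide uniformly from a pool containing alpha, beta,
    delta, and their centroid. This weakens the leaders' pull: when a
    leader sits in a local extremum, only part of the pack follows it,
    so the swarm keeps a chance of escaping."""
    centroid = [(a + b + d) / 3 for a, b, d in zip(alpha, beta, delta)]
    pool = [alpha, beta, delta, centroid]
    return random.choice(pool)
```

In a full optimizer loop, the chosen guide would replace the usual alpha/beta/delta average in the GWO position-update equation for that wolf and that iteration.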

6. Conclusion and Future Work

In order to further improve the optimization performance of GWO, this paper proposes an equalized grey wolf optimizer with refraction opposite learning (REGWO). The main idea of the algorithm is to improve the opposite learning process of OBL based on the refraction principle of light. This strategy further expands the search space of the population, increases population diversity, and enhances the ability of individuals to jump out of local extrema. At the same time, the equilibrium pool strategy is combined to weaken the leadership of the leading wolves, which effectively prevents the rest of the individuals from moving toward the leading wolves when the leaders fall into a local optimum. The combination of the two strategies therefore effectively enhances the exploration ability of GWO in the late iterations. REGWO is tested on 31 benchmark functions, and the experimental results show that it achieves faster convergence, higher search accuracy, and better stability than standard GWO, other state-of-the-art GWO variants, and other swarm intelligence algorithms. On the whole, REGWO is more effective at solving complex optimization problems. In future work, the selection of search strategies still needs further investigation; furthermore, the REGWO algorithm can be extended to multiobjective optimization, binary optimization, and practical application problems.