
An Improved Grey Wolf Optimizer Based on Differential Evolution and Elimination Mechanism.

Jie-Sheng Wang1,2, Shu-Xia Li3.   

Abstract

The grey wolf optimizer (GWO) is a novel swarm intelligence optimization algorithm. An improved grey wolf optimizer (IGWO) with evolution and elimination mechanisms is proposed to achieve a proper compromise between exploration and exploitation, further accelerate convergence, and increase the optimization accuracy of GWO. Biological evolution and the "survival of the fittest" (SOF) principle of natural selection are added to the basic grey wolf algorithm. Differential evolution (DE) is adopted as the evolutionary pattern of the wolves, and the wolf pack is updated according to the SOF principle so that the algorithm does not fall into a local optimum: after each iteration, the fitness values of the wolves are sorted in ascending order, the R wolves with the worst fitness values are eliminated, and the same number of new wolves are randomly generated. Finally, 12 typical benchmark functions are used to carry out simulation experiments with GWO with differential evolution (DGWO), GWO with the SOF mechanism (SGWO), IGWO, the DE algorithm, the particle swarm optimization (PSO) algorithm, the artificial bee colony (ABC) algorithm and the cuckoo search (CS) algorithm. Experimental results show that IGWO obtains better convergence velocity and optimization accuracy.

Year: 2019    PMID: 31073211    PMCID: PMC6509275    DOI: 10.1038/s41598-019-43546-3

Source DB: PubMed    Journal: Sci Rep    ISSN: 2045-2322    Impact factor: 4.379


Introduction

Swarm intelligence algorithms mimic the collective intelligent behavior of organisms in nature, and they have become a hot cross-disciplinary research field in recent years. Swarm intelligence optimization algorithms provide fast and reliable methods for finding solutions to many complex problems[1,2]. Because swarm intelligence algorithms are self-organizing, parallel, distributed, flexible and robust, they have been widely used in many areas, such as electric power systems, communication networks, system identification and parameter estimation, robot control, transportation, and other practical engineering problems[3-5]. Therefore, research on swarm intelligence optimization algorithms has important academic value and practical significance. At present, a variety of swarm intelligence optimization algorithms have been proposed by simulating biotic populations and evolutionary processes in nature, such as the particle swarm optimization (PSO) algorithm, the shuffled frog leaping algorithm (SFLA), the artificial bee colony (ABC) algorithm, the ant colony optimization (ACO) algorithm, the biogeography-based optimization (BBO) algorithm, and the cuckoo search (CS) algorithm. PSO, put forward by Kennedy and Eberhart, mimics the foraging behavior of bird and fish flocks[6], but its convergence velocity and searching accuracy are unsatisfactory to some extent. SFLA, put forward by Eusuff in 2003, is a swarm-based cooperative searching strategy inspired by natural memetics[7,8]. On the one hand, individuals exchange information during the global search, so its search precision is high; on the other hand, SFLA converges slowly and easily falls into a local optimum. The ABC algorithm, put forward by Karaboga in 2005, mimics the food-source-finding behavior of bees[9]. To mimic the social behavior of ant colonies, Dorigo et al. proposed the ACO algorithm[10]. The disadvantages of these two algorithms are slow convergence and premature convergence. The BBO algorithm, put forward by Simon in 2008[11], is based on the geographical distribution principle of biogeography. The CS algorithm, proposed by Yang and Deb in 2009, is based on the cuckoo's parasitic reproduction mechanism and the Levy flight search strategy[12,13]; its advantage is that, compared with other intelligent algorithms, it does not easily fall into a local optimum and has few parameters, while its disadvantage is that the Levy flight mechanism produces strong leaps in the search process, so its local search is not careful. The common shortcoming of these algorithms is that each of them suffers, to different degrees, from slow convergence, low optimization precision, and easily falling into a local optimum[5]. The key reason for these shortcomings is whether an algorithm can achieve a proper compromise between exploration and exploitation in each of its search phases[14]. Exploration and exploitation are contradictory: exploration reflects the ability of the algorithm to search new space, while exploitation reflects its refining ability.
These two criteria are generally used to evaluate stochastic optimization algorithms. Exploration refers to an individual leaving its original search path to some extent and searching in a new direction, which reflects its ability to exploit unknown regions. Exploitation refers to an individual continuing to search more carefully along its original trajectory, which ensures a detailed search of the regions that have already been explored. Too little exploration can cause premature convergence and falling into a local optimum, whereas too little exploitation makes the algorithm converge too slowly. The grey wolf optimizer (GWO), a novel swarm intelligence optimization algorithm, was put forward by Seyedali Mirjalili et al. in 2014; it mainly mimics the leadership hierarchy and hunting mechanism of grey wolves in nature[15]. Mirjalili et al. showed that the optimization performance of the standard GWO is superior to that of the PSO, GSA, DE and FEP algorithms. Because GWO is simple in principle, fast in seeking speed, high in search precision and easy to implement, it is easily combined with practical engineering problems, so it has high theoretical research value. However, GWO is a new biologically inspired algorithm, research on it is still at an initial stage, and its theory is not yet perfect; to make the algorithm perform even better, further exploration and research are needed. Many swarm intelligence algorithms mimic the hunting and searching behaviors of particular animals, but GWO also simulates the internal leadership hierarchy of wolves: in the search process, the position of the best solution is comprehensively assessed from three solutions, whereas in other swarm intelligence algorithms the search is led by a single solution. Thus GWO can greatly decrease the probability of premature convergence and falling into a local optimum. To achieve a proper compromise between exploration and exploitation, an improved GWO with evolution and elimination mechanisms is proposed. Biological evolution and the SOF principle of natural selection are added to the basic grey wolf algorithm. To verify the performance of the improved GWO, 12 typical benchmark functions are adopted for simulation experiments, and the results are compared with the PSO, ABC and CS algorithms. The experimental results show that the improved grey wolf optimizer (IGWO) obtains better convergence velocity and optimization accuracy. The paper is organized as follows. In section 2, the grey wolf optimizer is introduced. A grey wolf optimizer with evolution and SOF mechanisms is presented in section 3. In section 4, the simulation experiments are carried out and the simulation results are analyzed in detail. Finally, the conclusions form the last part.

Grey Wolf Optimizer

The grey wolf optimizer is a heuristic swarm intelligence optimization algorithm proposed by Seyedali Mirjalili et al. in 2014. The wolf, a top predator in the food chain, has a strong ability to capture prey. Wolves generally live socially, and a rigid social hierarchy exists inside the pack[15]. To mimic this internal leadership hierarchy, the wolves are divided into four types: alpha, beta, delta and omega, where the best, second best and third best individuals are recorded as alpha, beta and delta, and the remaining individuals are considered omega. In GWO, the hunting (optimization) is guided by alpha, beta and delta[8]; they guide the other wolves toward the best area of the search space. In the iterative search process, the possible position of the prey is assessed by the three wolves alpha, beta and delta. During optimization, the locations of the wolves are updated based on Eqs (1) and (2):

D = |C · X_p(t) − X(t)|   (1)
X(t + 1) = X_p(t) − A · D   (2)

where t denotes the t-th iteration, A and C are coefficient vectors, X_p is the position vector of the prey, and X is the position vector of a wolf. The vectors A and C can be expressed by:

A = 2a · r1 − a   (3)
C = 2 · r2   (4)

where the coefficient a linearly decreases from 2 to 0 as the iteration number increases, and r1 and r2 are random vectors in the scope [0, 1]. The principle of the position-updating rules described in Eqs (1) and (2) is shown in Fig. 1. As Fig. 1 shows, a wolf at position (X, Y) can relocate itself around the prey according to the updating formulas above. Although Fig. 1 only shows seven positions the wolf may move to, adjusting the random parameters C and A allows the wolf to relocate to any position in the continuous space near the prey. GWO always assumes that the positions of alpha, beta and delta are likely to be the prey (optimum) position. During the iterative search, the best, second best and third best individuals obtained so far are recorded as alpha, beta and delta, respectively, and the other wolves, regarded as omega, relocate according to the locations of alpha, beta and delta. The following formulas re-adjust the position of an omega wolf; the conceptual model of the position update is shown in Fig. 2:

D_alpha = |C1 · X_alpha − X|   (5)
D_beta = |C2 · X_beta − X|   (6)
D_delta = |C3 · X_delta − X|   (7)

where X_alpha, X_beta and X_delta are the position vectors of alpha, beta and delta, respectively, C1, C2 and C3 are randomly generated vectors, and X is the position vector of the current individual. Eqs (5), (6) and (7) calculate the distances between the position of the current individual and those of alpha, beta and delta, respectively. The final position vector of the current individual is then calculated by:

X1 = X_alpha − A1 · D_alpha   (8)
X2 = X_beta − A2 · D_beta   (9)
X3 = X_delta − A3 · D_delta   (10)
X(t + 1) = (X1 + X2 + X3) / 3   (11)

where A1, A2 and A3 are randomly generated vectors and t is the number of iterations.
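As a concrete illustration, the coefficient computation of Eqs (3-4) and the omega-wolf update of Eqs (5-11) can be sketched in Python as follows. This is a minimal sketch, assuming the pack is stored as a NumPy array of shape (n, dim); the function name gwo_update is ours, not the paper's.

import numpy as np

def gwo_update(X, X_alpha, X_beta, X_delta, a):
    """One GWO position update: move every wolf toward a point
    estimated from the three best wolves (Eqs 5-11)."""
    n, dim = X.shape
    X_new = np.empty_like(X)
    for i in range(n):
        candidates = []
        for leader in (X_alpha, X_beta, X_delta):
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            A = 2.0 * a * r1 - a               # Eq. (3): |A| shrinks as a decays from 2 to 0
            C = 2.0 * r2                       # Eq. (4): random weight on the leader
            D = np.abs(C * leader - X[i])      # Eqs (5-7): distance to the leader
            candidates.append(leader - A * D)  # Eqs (8-10): step toward the leader
        X_new[i] = sum(candidates) / 3.0       # Eq. (11): average of the three estimates
    return X_new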
Figure 1

2D position vectors and possible next locations.

Figure 2

Position updating of IGWO.

In the plane, three points can determine a region, so the scope of the prey's position can be determined by the best three wolves. Because the target solution in GWO is comprehensively assessed from three solutions, the probability of being trapped in a local extremum is greatly decreased. As the formulas above show, Eqs (5-7) define the step sizes of an omega wolf toward alpha, beta and delta, and the final positions of the omega wolves are defined by Eqs (8-11). The exploration and exploitation abilities have an important influence on the search performance of an algorithm. For GWO, exploration refers to a wolf leaving its original search path to some extent and searching in a new direction, which reflects the wolf's ability to explore unknown regions; exploitation refers to a wolf continuing to search more carefully along its original trajectory, which ensures a detailed search of the regions that have already been explored. How to make the algorithm achieve a proper compromise between exploration and exploitation is therefore a question worth researching. The two random and adaptive vectors A and C can be used to obtain such a compromise. As shown in Fig. 3, when A is greater than 1 or less than −1 (that is, |A| > 1), the wolf shows exploration ability, and a value of C greater than 1 also enhances the exploration ability of the wolf. In contrast, when |A| < 1 and C < 1, the wolf's exploitation capacity is enhanced. To increase the exploitation ability of the wolves gradually, the vector a decreases linearly as the iteration number increases. However, the value of C is generated randomly throughout the course of optimization, which lets exploration and exploitation reach an equilibrium at any stage; especially in the final iterations, this helps prevent the algorithm from being trapped in a local extremum. The pseudocode of GWO is described as follows.
Figure 3

Exploration and exploitation of wolf in GWO.

Initialize the population X_i (i = 1, 2, …, n) of GWO
Initialize GWO parameters (a, A, C)
Calculate the fitness value of each individual in the population
Record the best, second best and third best individuals as X_alpha, X_beta and X_delta
While (t < maximum iteration number)
    For each individual
        Update the position of the current individual by Eqs (5-11)
    End for
    Update a, A, C
    Calculate the fitness values of all individuals in the population
    Update X_alpha, X_beta, X_delta
    t = t + 1
End While
Return X_alpha

The GWO has strong exploration ability, which helps the algorithm avoid falling into a local optimum. Because a proper compromise between exploration and exploitation is simple to achieve in GWO, it can effectively solve many complicated problems.
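The pseudocode above can be rendered as a short runnable loop, reusing gwo_update from the earlier sketch. The Sphere objective, bounds and iteration budget below are placeholders chosen to match the experimental settings, not code from the paper.

import numpy as np

def gwo(obj, dim=30, n=30, max_iter=500, lb=-100.0, ub=100.0):
    """Minimize obj with the standard GWO loop described in the pseudocode."""
    X = np.random.uniform(lb, ub, size=(n, dim))
    best_x, best_f = None, np.inf
    for t in range(max_iter):
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        idx = np.argsort(fit)                   # ascending: best wolf first
        if fit[idx[0]] < best_f:                # remember the best solution so far
            best_x, best_f = X[idx[0]].copy(), fit[idx[0]]
        a = 2.0 * (1.0 - t / max_iter)          # a decreases linearly from 2 to 0
        X = gwo_update(X, X[idx[0]], X[idx[1]], X[idx[2]], a)
    return best_x, best_f

best_x, best_f = gwo(lambda x: np.sum(x ** 2))  # e.g. the Sphere function f1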

Improved Grey Wolf Optimizer (IGWO)

To improve the search performance of GWO, an improved grey wolf optimizer (IGWO) is proposed. In IGWO, biological evolution and the SOF principle of natural selection are added to the standard GWO. Because the differential evolution algorithm has the advantages of a simple principle, few parameters and easy implementation, the differential evolution (DE) strategy is chosen as the evolutionary pattern of the wolves in this paper. The wolf pack is updated according to the SOF principle so that the algorithm does not fall into a local optimum: after each iteration, the fitness values of the wolves are sorted in ascending order, the R wolves with the largest (worst) fitness values are eliminated, and the same number of new wolves are randomly generated.

Grey Wolf Optimizer with evolution operation

In nature, organisms continually evolve from lower to higher levels under the action of heredity, selection and mutation[16,17]. Similarly, changes resembling heredity, selection and mutation occur in the search process of a wolf pack, and the evolutionary law of SOF gradually strengthens the wolves. In the same way, to improve the search performance of the algorithm, an evolution operation is added to the basic GWO. Based on biological evolution in nature, many evolution methods have been developed, such as differential evolution (DE), quantum evolution and cooperative evolution[18]. Among these, the differential evolution strategy has a simple principle, few parameters and easy implementation, and it has been extensively researched and applied[19,20]; therefore, the DE strategy is chosen as the evolution method of GWO. The basic principle of the DE operator is to use the differences among individuals to recombine the population and obtain intermediate individuals, and then to obtain the next generation through competition between parent and offspring individuals[21,22]. DE includes three basic operations: mutation, crossover and selection. After the evolution operation is added to GWO, the wolf's position updating is shown in Fig. 2.

Mutation operation

The most prominent feature of differential evolution is the mutation operation. When an individual is selected, a weighted difference vector is added to it to accomplish the variation[23]. The basic variation ingredient of DE is the difference vector of the parents, where each vector contains two different parent individuals of the t-th generation. The difference vector is defined as:

d_{r1,r2}(t) = X_{r1}(t) − X_{r2}(t)   (12)

where r1 and r2 are the indices of two different individuals of the population. The mutation operation can then be described as:

V_i(t + 1) = X_{r3}(t) + F · (X_{r1}(t) − X_{r2}(t))   (13)

where r1, r2 and r3 are distinct integers in {1, 2, …, n} that also differ from the current target vector index i, and F is the scaling factor that controls the scaling of the difference vector. To produce an ideal variation factor and ensure that the wolves evolve in a direction that is good for the development of the pack, this paper chooses outstanding individuals of the pack as parents. After a large number of simulation experiments, beta and delta are chosen as the two parents and combined with the alpha wolf to form the variation factor, so the variation factor is designed as:

V(t + 1) = X_alpha(t) + F · (X_beta(t) − X_delta(t))   (14)

To give the algorithm high exploration ability in the early stage (to avoid falling into a local optimum) and high exploitation ability in the later stage (to increase the convergence speed), a dynamic scaling factor is employed, so F changes from large to small with the iteration number:

F = fmax − (fmax − fmin) · iter / Max_iter   (15)

where fmin and fmax are the minimum and maximum of the scaling factor, Max_iter is the maximum iteration number, and iter is the current iteration number.
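A sketch of this mutation step in Python: alpha serves as the base vector and beta and delta as the difference parents (Eq. (14)). The exact decay schedule of F is not spelled out beyond "large to small", so the linear decay below is an assumption; fmin and fmax follow the values in Table 2.

import numpy as np

def mutate(X_alpha, X_beta, X_delta, it, max_iter, f_min=0.25, f_max=1.5):
    """Build the mutant (variation) vector from the three best wolves."""
    F = f_max - (f_max - f_min) * it / max_iter  # Eq. (15); linear decay is assumed
    return X_alpha + F * (X_beta - X_delta)      # Eq. (14)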

Crossover operation

For the target vector X_i(t) of the pack, a crossover operation with the variation vector V_i(t + 1) produces a trial individual U_i(t + 1). To guarantee that the individual evolves, a random choice method ensures that at least one component of U_i(t + 1) is contributed by V_i(t + 1); for the other components, the crossover probability factor CR decides which components of U_i(t + 1) come from V_i(t + 1) and which come from X_i(t). The crossover operation is expressed as:

u_{ij}(t + 1) = v_{ij}(t + 1), if rand(j) ≤ CR or j = rand(i); otherwise u_{ij}(t + 1) = x_{ij}(t)   (16)

where rand(j) ∈ [0, 1] obeys the uniform random distribution, j indexes the j-th variable (gene), CR is the crossover probability, and rand(i) is a randomly chosen index in {1, 2, …, D}. As Eq. (16) shows, the larger CR is, the more V_i(t + 1) contributes to U_i(t + 1); when CR = 1, U_i(t + 1) = V_i(t + 1). The smaller CR is, the more X_i(t) contributes to U_i(t + 1).
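In Python, the binomial crossover of Eq. (16) can be sketched as follows; the forced index j_rand guarantees that at least one gene of the trial vector comes from the mutant.

import numpy as np

def crossover(x_target, v_mutant, CR=0.7):
    """Mix target and mutant gene-by-gene into a trial vector (Eq. 16)."""
    dim = x_target.size
    j_rand = np.random.randint(dim)    # this gene is always taken from the mutant
    mask = np.random.rand(dim) <= CR
    mask[j_rand] = True
    return np.where(mask, v_mutant, x_target)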

Selection operation

The “greedy choice” strategy is applied in the selection operation. After the mutation and crossover operations generate the trial individual U_i(t + 1), it competes with X_i(t):

X_i(t + 1) = U_i(t + 1), if f(U_i(t + 1)) < f(X_i(t)); otherwise X_i(t + 1) = X_i(t)   (17)

where f is the fitness function and X_i(t) is an individual of the t-th generation. From U_i(t + 1) and X_i(t), the individual with the better fitness is chosen as an individual of the (t + 1)-th generation, replacing the individual of the t-th generation. In the early stages of the algorithm, the diversity of the population is large, so the mutation operation gives the algorithm strong exploration ability. In the later stages, when the algorithm tends to converge, the differences between individuals are small, which gives the algorithm strong exploitation ability.
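The greedy selection of Eq. (17) is a one-liner for minimization: the trial individual survives only if it improves the fitness.

def select(x_target, u_trial, obj):
    """Greedy selection (Eq. 17): keep the better of target and trial."""
    return u_trial if obj(u_trial) < obj(x_target) else x_target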

SOF wolves updating mechanism

SOF is a rule of nature formed in the process of biological evolution[23-25]. In nature, some vulnerable wolves are eliminated because of the uneven distribution of prey, hunger, disease and other reasons, while new wolves join the pack to enhance its fighting capacity, which ensures the pack survives well in a complicated world. The wolf pack is therefore updated according to the SOF principle so that the algorithm does not fall into a local optimum[26-29]. In the new algorithm, the number of wolves in the pack is assumed fixed, and the strength of a wolf is measured by its fitness value; for the minimization problems considered here, the smaller the fitness value, the better the solution. Therefore, after each iteration the fitness values of the wolves are sorted in ascending order, the R wolves with the largest (worst) fitness values are eliminated, and the same number of new wolves are randomly generated. When R is large, many new wolves are generated, which helps increase the diversity of the pack; but if R is too large, the algorithm tends toward random search, and the convergence speed slows down. If R is too small, the diversity of the population is not maintained, and the ability to explore new solution space is weakened. Therefore, in this paper R is a random integer between n/(2 × ε) and n/ε, where n is the total number of wolves and ε is the scale factor of the pack update. The flow chart of the improved grey wolf optimizer (IGWO) is illustrated in Fig. 4, and the main procedure steps are described below.
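A sketch of the SOF updating step, assuming minimization so that the R largest fitness values mark the worst wolves; treating the bounds n/(2ε) and n/ε as inclusive is our reading of the text.

import numpy as np

def sof_update(X, fit, lb, ub, eps=5):
    """Replace the R worst wolves with freshly randomized ones."""
    n, dim = X.shape
    R = np.random.randint(n // (2 * eps), n // eps + 1)  # random integer in [n/(2e), n/e]
    worst = np.argsort(fit)[n - R:]                      # indices of the R largest fitness values
    X[worst] = np.random.uniform(lb, ub, size=(R, dim))
    return X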
Figure 4

Flow chart of IGWO algorithm.

(1) Initialize the grey wolf population: randomly generate the positions of the wolves X_i (i = 1, 2, …, n) and initialize the parameters a, A and C.
(2) Calculate the fitness of each wolf, choose the first three best wolves, and save them as alpha, beta and delta in turn.
(3) Update positions: according to Eqs (5-11), update the positions of the other wolves, that is, the omega wolves.
(4) Evolution operation: use alpha, beta and delta to form the variation factor according to Eq. (14). After the crossover and selection operations, select the individuals with better fitness as the wolves of the next generation, and choose the first three best wolves as alpha, beta and delta in turn.
(5) Update the pack: sort the fitness values of the wolves from small to large and eliminate the R wolves with the largest fitness values; meanwhile, randomly generate R new wolves.
(6) Update the parameters a, A and C.
(7) Judge whether the termination condition is satisfied. If so, output the position and fitness value of alpha as the optimal solution; if not, return to step (2).
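Stitching the earlier sketches together gives a compact IGWO driver. The ordering of the steps follows our reading of the flow chart (GWO update, then DE evolution, then SOF elimination), and the parameter defaults follow Table 2; none of this code is from the paper.

import numpy as np

def igwo(obj, dim=30, n=30, max_iter=500, lb=-100.0, ub=100.0):
    """IGWO sketch: GWO update + DE evolution + SOF elimination per iteration."""
    X = np.random.uniform(lb, ub, size=(n, dim))
    best_x, best_f = None, np.inf
    for t in range(max_iter):
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        idx = np.argsort(fit)
        alpha, beta, delta = (X[idx[k]].copy() for k in range(3))
        if fit[idx[0]] < best_f:                          # track the best wolf so far
            best_x, best_f = alpha.copy(), fit[idx[0]]
        X = gwo_update(X, alpha, beta, delta, a=2.0 * (1.0 - t / max_iter))
        v = mutate(alpha, beta, delta, t, max_iter)       # evolution operation, Eq. (14)
        for i in range(n):
            X[i] = select(X[i], crossover(X[i], v), obj)  # Eqs (16)-(17)
        fit = np.apply_along_axis(obj, 1, X)
        X = sof_update(X, fit, lb, ub)                    # SOF elimination
    return best_x, best_f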

Simulation Experiments and Results Analysis

Before carrying out the simulation experiments to compare the performance of the adopted optimization algorithms, twelve benchmark functions are selected[2], as listed in Table 1. The experiment consists of two parts. To validate the two improvements to GWO, the first part runs separate experiments for GWO with differential evolution (DGWO), GWO with the SOF mechanism (SGWO) and IGWO, and compares the results with the DE algorithm. The second part compares IGWO with other swarm intelligence algorithms, namely the PSO, ABC and CS algorithms. The parameter settings of the GWO, PSO, ABC, CS and DE algorithms are defined according to the literature[8,9,12,13,23] and are listed in Table 2.
Table 1

Test functions.

Function name | Function | Range | Fmin
Sphere | f1(x) = Σ_{i=1..d} x_i² | [−100, 100] | 0
Sumsquares | f2(x) = Σ_{i=1..d} i·x_i² | [−10, 10] | 0
Schwefel | f3(x) = Σ_{i=1..d} (Σ_{j=1..i} x_j)² | [−100, 100] | 0
Schwefel 2.21 | f4(x) = max_i {|x_i|, 1 ≤ i ≤ n} | [−100, 100] | 0
Rosenbrock | f5(x) = Σ_{i=1..d−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | [−30, 30] | 0
Step | f6(x) = Σ_{i=1..d} ([x_i + 0.5])² | [−100, 100] | 0
Quartic | f7(x) = Σ_{i=1..d} i·x_i⁴ + random[0, 1) | [−1.28, 1.28] | 0
Schwefel | f8(x) = Σ_{i=1..n} −x_i·sin(√|x_i|) | [−500, 500] | −418.9829 × d
Rastrigin | f9(x) = Σ_{i=1..d} (x_i² − 10·cos(2πx_i) + 10) | [−5.12, 5.12] | 0
Ackley | f10(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1..n} x_i²)) − exp((1/n)·Σ_{i=1..n} cos(2πx_i)) + 20 + e | [−32, 32] | 0
Griewank | f11(x) = (1/4000)·Σ_{i=1..n} x_i² − Π_{i=1..n} cos(x_i/√i) + 1 | [−600, 600] | 0
Penalty #1 | f12(x) = (π/n)·{10·sin²(πy_1) + Σ_{i=1..n−1} (y_i − 1)²·[1 + 10·sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1..n} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k(−x_i − a)^m if x_i < −a | [−50, 50] | 0
Table 2

Parameter settings of each algorithm.

Algorithm | Main parameter settings
PSO | Particle number n = 30; learning factors c1 = 2, c2 = 2; inertia weight w = 0.9
ABC | Bees number n = 30; follow-bees number n/2 = 10; lead-bees number n/2 = 10
CS | Bird nest number n = 30; detection probability Pa = 0.25; step length control α = 0.01
DE | Population size n = 30; CR = 0.7; F = 1
GWO | Wolves number N = 30
DGWO | Wolves number N = 30; fmin = 0.25, fmax = 1.5; CR = 0.7
SGWO | Wolves number N = 30; ε = 5
IGWO | Wolves number N = 30; fmin = 0.25, fmax = 1.5; CR = 0.7; ε = 5

Experiment and analysis for the two improvements of IGWO

To validate the two improvements to GWO, simulation experiments are first carried out separately for the grey wolf optimizer with differential evolution (DGWO), the grey wolf optimizer with the SOF mechanism (SGWO) and IGWO, and the results are compared with the GWO and DE algorithms, as shown in Figs 5-11. The convergence curves for the adopted test functions show that, compared with GWO, the convergence velocity and optimization precision of DGWO, SGWO and IGWO are all improved, with IGWO being the best.
Figure 5

Convergence curves Function F8 (D = 30).

Figure 11

Convergence curves Function F9 (D = 100).

(Figures 5-11 show convergence curves for functions F8, F9, F10 and F12 at D = 30 and for F4, F8 and F9 at D = 100.)

To further validate the searching accuracy of DGWO, SGWO and IGWO, every optimization algorithm is run independently thirty times, and the best, worst and average values are recorded for the twelve test functions at dimensions 30 and 100. The maximum iteration number is Max_iter = 500. The statistical results for D = 30 and D = 100 are shown in Tables 3 and 4, respectively.
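The statistics protocol can be sketched as a small harness; run_stats and its arguments are our names, and each run returns the final best fitness, from which the best, worst, average and standard deviation are computed.

import numpy as np

def run_stats(algorithm, obj, runs=30, **kwargs):
    """Run an optimizer independently `runs` times and summarize the results."""
    finals = np.array([algorithm(obj, **kwargs)[1] for _ in range(runs)])
    return finals.min(), finals.max(), finals.mean(), finals.std()

# Example: IGWO on the Sphere function f1 at D = 30 (500 iterations per run).
best, worst, ave, std = run_stats(igwo, lambda x: np.sum(x ** 2), dim=30)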
Table 3

Numerical statistics results of D = 30.

Function | Stat | IGWO | SGWO | DGWO | GWO | DE
F1 | Best | 3.9273e−069 | 4.2813e−064 | 1.3776e−066 | 2.0291e−029 | 2.8252e−004
F1 | Ave | 1.1783e−064 | 8.6129e−061 | 4.3208e−062 | 1.0402e−027 | 5.4593e−004
F1 | Worst | 1.2505e−062 | 1.0843e−059 | 3.2370e−061 | 6.8810e−027 | 0.0010
F1 | Std | 5.5606e−063 | 8.4600e−060 | 1.4297e−061 | 1.4561e−027 | 3.7365e−004
F2 | Best | 7.6878e−068 | 1.3615e−048 | 3.9913e−056 | 2.0613e−030 | 0.0015
F2 | Ave | 3.2484e−064 | 4.5866e−043 | 5.7834e−055 | 6.5600e−029 | 0.0022
F2 | Worst | 8.4385e−063 | 7.2218e−038 | 6.8134e−041 | 4.8634e−028 | 0.0031
F2 | Std | 4.1566e−066 | 2.4788e−061 | 5.4083e−059 | 1.5256e−028 | 0.0017
F3 | Best | 1.7846e−015 | 6.9725e−012 | 1.9482e−013 | 2.6272e−008 | 2.3690e+004
F3 | Ave | 3.6220e−010 | 4.8496e−008 | 3.1035e−008 | 1.4089e−005 | 2.3690e+004
F3 | Worst | 5.5878e−008 | 1.3048e−006 | 8.3608e−007 | 1.7721e−004 | 4.0729e+004
F3 | Std | 4.3091e−008 | 1.0902e−007 | 1.0736e−006 | 5.1567e−005 | 3.8734e+003
F4 | Best | 4.0775e−014 | 1.6260e−013 | 1.5917e−013 | 8.7151e−008 | 8.9197
F4 | Ave | 4.2932e−013 | 2.7047e−011 | 7.9211e−012 | 1.0129e−006 | 12.5708
F4 | Worst | 7.1302e−011 | 1.7101e−010 | 1.0844e−011 | 7.7390e−006 | 16.8491
F4 | Std | 3.7329e−012 | 3.7329e−012 | 7.9664e−011 | 3.5107e−007 | 2.5643
F5 | Best | 23.1787 | 26.0680 | 25.3311 | 25.7961 | 46.7705
F5 | Ave | 25.3873 | 27.0246 | 27.7795 | 28.0325 | 147.9340
F5 | Worst | 28.5692 | 28.7589 | 28.7480 | 28.7800 | 229.5079
F5 | Std | 0.6997 | 0.8119 | 0.8723 | 0.9258 | 156.3534
F6 | Best | 7.5296e−005 | 0.3979 | 0.2487 | 0.0063 | 0.5679
F6 | Ave | 0.6583 | 0.9671 | 0.7544 | 0.8993 | 0.7668
F6 | Worst | 2.1169 | 1.7347 | 1.5054 | 1.5192 | 1.1668
F6 | Std | 0.2917 | 0.3500 | 0.3096 | 0.4055 | 0.3643
F7 | Best | 1.2695e−004 | 3.7651e−004 | 8.5420e−004 | 4.5944e−004 | 0.0343
F7 | Ave | 7.8361e−004 | 0.0012 | 9.1816e−004 | 0.0022 | 0.0544
F7 | Worst | 0.0017 | 0.0050 | 0.0011 | 0.0064 | 0.0761
F7 | Std | 3.7904e−004 | 4.0563e−004 | 4.0635e−004 | 8.4619e−004 | 0.0356
F8 | Best | −3.9429e+004 | −8.1054e+003 | −7.3195e+003 | −6.1310e+003 | −1.1275e+004
F8 | Ave | −9.0105e+003 | −4.7468e+003 | −5.8337e+003 | −3.6813e+003 | −7.0046e+003
F8 | Worst | −3.3482e+003 | −3.5050e+003 | −3.1866e+003 | −2.9262e+003 | −3.8532e+003
F8 | Std | −1.7912e+003 | −1.2625e+003 | −1.4600e+003 | −958.0854 | −5.4353e+003
F9 | Best | 0 | 0 | 0 | 1.1369e−013 | 59.0260
F9 | Ave | 1.0783 | 1.2274 | 1.1754 | 3.2143 | 85.4876
F9 | Worst | 12.1696 | 9.2887 | 12.5246 | 15.8356 | 98.3991
F9 | Std | 1.5038 | 2.3349 | 2.6704 | 4.8809 | 56.1783
F10 | Best | 1.5099e−017 | 1.1546e−014 | 1.5099e−014 | 7.5495e−013 | 0.0035
F10 | Ave | 1.2204e−016 | 2.0073e−014 | 1.7468e−014 | 1.0048e−012 | 0.0055
F10 | Worst | 2.2204e−014 | 2.5757e−014 | 2.2204e−014 | 1.4655e−013 | 0.0082
F10 | Std | 4.3110e−015 | 4.7283e−014 | 5.4372e−014 | 1.4373e−014 | 0.0025
F11 | Best | 0 | 0 | 0 | 0 | 5.3482e−004
F11 | Ave | 0.0016 | 0.0105 | 0.0060 | 0.0048 | 0.0057
F11 | Worst | 0.0147 | 0.0184 | 0.0441 | 0.0286 | 0.0271
F11 | Std | 0.0074 | 0.0085 | 0.0101 | 0.0114 | 0.0432
F12 | Best | 0.0065 | 0.0192 | 0.0189 | 0.0188 | 0.0942
F12 | Ave | 0.0481 | 0.0650 | 0.0535 | 0.0594 | 0.1663
F12 | Worst | 0.1091 | 0.0769 | 0.1426 | 0.0819 | 0.2087
F12 | Std | 0.0151 | 0.0161 | 0.0318 | 0.0419 | 0.0426
Table 4

Numerical statistics results of D = 100.

Function | Stat | IGWO | SGWO | DGWO | GWO | DE
F1 | Best | 1.5561e−035 | 3.0615e−034 | 1.8785e−034 | 3.4355e−013 | 1.5008e+003
F1 | Ave | 9.5901e−034 | 5.6751e−032 | 1.1231e−032 | 1.4097e−012 | 1.8397e+003
F1 | Worst | 1.8708e−032 | 8.7514e−031 | 6.8749e−032 | 3.6831e−012 | 2.3330e+003
F1 | Std | 7.1710e−034 | 8.0479e−034 | 9.9774e−034 | 2.8268e−012 | 5.5463e+003
F2 | Best | 2.4553e−036 | 6.3490e−034 | 8.1124e−035 | 1.6794e−013 | 98.3138
F2 | Ave | 7.1368e−034 | 2.1103e−031 | 9.6300e−032 | 1.0558e−012 | 4.3298e+003
F2 | Worst | 5.9594e−033 | 2.9801e−030 | 1.2238e−032 | 4.7932e−012 | 1.2236e+004
F2 | Std | 1.6942e−034 | 3.2532e−034 | 3.9658e−034 | 3.1877e−012 | 1.4523e+004
F3 | Best | 23.6476 | 43.0528 | 25.3814 | 65.1383 | 3.5233e+005
F3 | Ave | 679.3675 | 1.6097e+003 | 690.0015 | 818.9336 | 4.1657e+005
F3 | Worst | 2.5006e+003 | 1.1074e+004 | 719.8134 | 5.5568e+003 | 4.7246e+005
F3 | Std | 1.0607e−009 | 1.4133e−008 | 1.2798e−007 | 3.0654e−005 | 3.4534e−004
F4 | Best | 0.0018 | 0.0044 | 0.0035 | 0.0607 | 86.3861
F4 | Ave | 0.0382 | 0.4416 | 0.1585 | 0.9562 | 90.0565
F4 | Worst | 0.6247 | 2.5503 | 1.6033 | 3.1841 | 92.5876
F4 | Std | 0.4072 | 0.4114 | 0.6602 | 0.6513 | 0.5346
F5 | Best | 95.8954 | 96.0948 | 97.0946 | 96.8501 | 1.4284e+006
F5 | Ave | 96.8157 | 97.8475 | 97.5528 | 98.0217 | 2.1571e+006
F5 | Worst | 98.5244 | 98.5851 | 98.4426 | 98.5207 | 3.7347e+006
F5 | Std | 0.6162 | 0.6385 | 0.6579 | 0.6708 | 1.4353e+005
F6 | Best | 7.4830 | 10.3930 | 8.7155 | 9.7480 | 1.2876e+003
F6 | Ave | 9.3215 | 12.0996 | 11.2379 | 12.1379 | 1.7987e+003
F6 | Worst | 12.6824 | 13.8231 | 11.7620 | 12.7860 | 2.3616e+003
F6 | Std | 0.7722 | 0.7882 | 0.9711 | 0.9744 | 1.2354e+003
F7 | Best | 3.5264e−004 | 0.0013 | 6.4848e−004 | 0.0026 | 2.2367
F7 | Ave | 0.0024 | 0.0089 | 0.0077 | 0.0106 | 3.4390
F7 | Worst | 0.0043 | 0.0581 | 0.0240 | 0.0430 | 7.1933
F7 | Std | 7.0880e−004 | 9.1433e−004 | 0.0010 | 0.0021 | 2.3451
F8 | Best | −8.2275e+004 | −8.6793e+003 | −1.9539e+004 | −1.9447e+004 | −1.8209e+004
F8 | Ave | −6.0161e+004 | −7.1528e+003 | −1.6209e+004 | −1.5641e+004 | −1.6567e+004
F8 | Worst | −5.0981e+003 | −6.3427e+003 | −6.4163e+003 | −5.5875e+003 | −1.5388e+004
F8 | Std | 672.4919 | 2.3324e+003 | 3.3000e+003 | 4.6259e+003 | 2.3453e+003
F9 | Best | 0 | 0 | 0 | 4.8431e−011 | 754.1063
F9 | Ave | 1.6643 | 5.7767 | 4.6137 | 9.4635 | 805.4216
F9 | Worst | 12.7963 | 14.7901 | 9.4707 | 30.0252 | 860.7482
F9 | Std | 2.6605 | 3.2557 | 6.3543 | 6.4071 | 764.3451
F10 | Best | 6.4837e−014 | 6.8390e−014 | 6.7837e−014 | 5.9873e−008 | 6.7720
F10 | Ave | 7.7153e−014 | 8.9943e−014 | 8.5771e−014 | 1.1140e−007 | 7.4253
F10 | Worst | 9.3259e−014 | 1.1102e−013 | 8.9153e−014 | 2.2161e−007 | 9.0925
F10 | Std | 6.0990e−015 | 1.0364e−014 | 9.9761e−015 | 6.2626e−008 | 8.3465
F11 | Best | 0 | 0 | 0 | 1.4655e−013 | 6.2536e−004
F11 | Ave | 0.0028 | 0.0057 | 0.0042 | 0.0065 | 0.0091
F11 | Worst | 0.0254 | 0.0702 | 0.0748 | 0.0270 | 0.0420
F11 | Std | 0.0076 | 0.0082 | 0.0090 | 0.0110 | 0.0432
F12 | Best | 0.1056 | 0.1706 | 0.2034 | 0.2154 | 6.4979e+005
F12 | Ave | 0.1551 | 0.3535 | 0.3103 | 0.3968 | 1.5242e+006
F12 | Worst | 0.3331 | 0.4631 | 0.4281 | 0.4306 | 3.1513e+006
F12 | Std | 0.0406 | 0.0427 | 0.0421 | 0.0612 | 2.4325e+006

Simulation contrast experiments and results analysis

Three swarm intelligence algorithms (the PSO, ABC and CS algorithms) are selected for simulation contrast experiments with the proposed IGWO to verify its superiority in convergence velocity and searching precision. For dimension D = 30, the convergence results on the adopted test functions are shown in Figs 12-23; for D = 100, they are shown in Figs 24-27.
Figure 12

Convergence curves Function F1.

Figure 23

Convergence curves Function F12.

Figure 24

Convergence curves Function F5 (D = 100).

Figure 27

Convergence curves Function F12 (D = 100).

(Figures 12-23 show convergence curves for functions F1-F12 at D = 30; Figures 24-27 show convergence curves for F5, F8, F9 and F12 at D = 100.)

The convergence curves show that IGWO has better convergence velocity and searching precision than the other three algorithms (ABC, CS and PSO). Especially for functions F8, F9 and F12, the convergence speed of IGWO is improved obviously compared with GWO. Their surface plots show that these three functions are multimodal, so there are many local minima within the search range. For IGWO, the addition of differential evolution and the SOF mechanism remedies the weakness of easily falling into a local extremum and obtains smaller function values; thus the convergence speeds on these three functions are improved significantly, which also suggests that IGWO is better able to jump out of local optima. To further validate the searching accuracy, every optimization algorithm is run independently thirty times, and the best, worst and average values are recorded for the twelve test functions at dimensions 30 and 100; the maximum iteration number is Max_iter = 500. The statistical results for D = 30 and D = 100 are shown in Tables 5 and 6, respectively. The numerical results in Tables 5 and 6 show that the proposed IGWO improves the optimization accuracy on the 12 typical functions, and that its accuracy is better than that of the PSO, ABC and CS algorithms at the same dimension. For function F1 at D = 30, the average value is raised from the order of e−27 to e−63, an improvement of 36 orders of magnitude relative to GWO; at D = 100, the average value is raised from e−12 to e−34, an improvement of 22 orders of magnitude. For function F2, the average value improves by 35 orders of magnitude relative to GWO at D = 30 and by 22 orders of magnitude at D = 100. For functions F9 and F11, at both D = 30 and D = 100, IGWO reaches the ideal minimum value 0 in the best case. The optimization performance on the other test functions is also improved.
Table 5

Numerical statistics results of D = 30.

Function | Stat | IGWO | GWO | PSO | ABC | CS
F1 | Best | 3.9273e−069 | 2.0291e−029 | 5.1422 | 1.9075e−005 | 5.3943
F1 | Ave | 1.1783e−064 | 1.0402e−027 | 157.3150 | 7.3230e−004 | 17.2298
F1 | Worst | 1.2505e−062 | 6.8810e−027 | 1.3957e+003 | 0.0061 | 39.5997
F1 | Std | 5.5606e−063 | 1.4561e−027 | 618.8854 | 5.5855e−004 | 10.6054
F2 | Best | 7.6878e−068 | 2.0613e−030 | 2.8795 | 6.2539e−006 | 0.0791
F2 | Ave | 3.2484e−064 | 6.5600e−029 | 108.5533 | 2.1544e−004 | 0.2208
F2 | Worst | 8.4385e−063 | 4.8634e−028 | 1.0320e+003 | 0.0019 | 0.9046
F2 | Std | 4.1566e−066 | 1.5256e−028 | 113.0001 | 1.0957e−004 | 0.1073
F3 | Best | 1.7846e−015 | 2.6272e−008 | 2.6128e+003 | 19.2563 | 82.0828
F3 | Ave | 3.6220e−010 | 1.4089e−005 | 5.8899e+003 | 214.0276 | 388.8103
F3 | Worst | 5.5878e−008 | 1.7721e−004 | 1.5150e+004 | 512.2673 | 892.5678
F3 | Std | 4.3091e−008 | 5.1567e−005 | 2.7730e+003 | 353.0218 | 753.0257
F4 | Best | 4.0775e−014 | 8.7151e−008 | 10.5604 | 34.6791 | 8.2559
F4 | Ave | 4.2932e−013 | 1.0129e−006 | 19.4012 | 65.4533 | 11.5000
F4 | Worst | 7.1302e−011 | 7.7390e−006 | 30.8453 | 82.1336 | 16.9572
F4 | Std | 3.7329e−012 | 3.5107e−007 | 4.9307 | 7.6916 | 1.2514
F5 | Best | 23.1787 | 25.7961 | 582.0313 | 40.4974 | 111.9810
F5 | Ave | 25.3873 | 28.0325 | 1.7004e+004 | 88.0783 | 600.0555
F5 | Worst | 28.5692 | 28.7800 | 8.3090e+004 | 502.2101 | 2.2452e+003
F5 | Std | 0.6997 | 0.9258 | 4.9416e+004 | 29.4371 | 274.0168
F6 | Best | 7.5296e−005 | 0.0063 | 7.6943 | 3.4009 | 8.6597
F6 | Ave | 0.6583 | 0.8993 | 300.3189 | 8.3567 | 24.2355
F6 | Worst | 2.1169 | 1.5192 | 1.1916e+003 | 23.0030 | 75.3820
F6 | Std | 0.2917 | 0.4055 | 374.9472 | 3.2274e−004 | 12.7746
F7 | Best | 1.2695e−004 | 4.5944e−004 | 0.1186 | 0.0319 | 0.0177
F7 | Ave | 7.8361e−004 | 0.0022 | 0.6129 | 0.0528 | 0.0621
F7 | Worst | 0.0017 | 0.0064 | 2.2257 | 0.8170 | 0.1222
F7 | Std | 3.7904e−004 | 8.4619e−004 | 0.3202 | 0.0026 | 0.0159
F8 | Best | −3.9429e+004 | −6.1310e+003 | −4.1322e+003 | −1.1681e+003 | −6.6092e+003
F8 | Ave | −9.0105e+003 | −3.6813e+003 | −3.4422e+003 | −1.1294e+003 | −5.9192e+003
F8 | Worst | −3.3482e+003 | −2.9262e+003 | −2.9516e+003 | −1.0866e+003 | −5.2163e+003
F8 | Std | −1.7912e+003 | −958.0854 | 3.2960e+003 | 205.0614 | 351.8938
F9 | Best | 0 | 1.1369e−013 | 46.3981 | 5.1504 | 53.0900
F9 | Ave | 1.0783 | 3.2143 | 92.6839 | 9.0702 | 73.5890
F9 | Worst | 12.1696 | 15.8356 | 152.7472 | 13.8016 | 103.0006
F9 | Std | 1.5038 | 4.8809 | 22.7852 | 2.5126 | 14.3049
F10 | Best | 1.5099e−017 | 7.5495e−013 | 2.0863 | 1.6725 | 4.1810
F10 | Ave | 1.2204e−016 | 1.0048e−012 | 5.7135 | 5.0268 | 7.0560
F10 | Worst | 2.2204e−014 | 1.4655e−013 | 10.5829 | 11.2607 | 14.1411
F10 | Std | 4.3110e−015 | 1.4373e−014 | 2.4526 | 1.7819 | 2.6353
F11 | Best | 0 | 0 | 1.1890 | 3.3792e−005 | 1.0869
F11 | Ave | 0.0016 | 0.0048 | 7.4790 | 0.0649 | 1.2128
F11 | Worst | 0.0147 | 0.0286 | 26.8061 | 0.1863 | 1.5050
F11 | Std | 0.0074 | 0.0114 | 3.9852 | 0.0431 | 0.1070
F12 | Best | 0.0065 | 0.0188 | 1.4887 | 1.8107 | 1.0945
F12 | Ave | 0.0481 | 0.0594 | 73.9029 | 4.5941 | 3.6666
F12 | Worst | 0.1091 | 0.0819 | 2.0290e+003 | 6.8171 | 6.5449
F12 | Std | 0.0151 | 0.0419 | 4.4614 | 0.3128 | 0.4337
Table 6

Numerical statistics results of D = 100.

Function | Stat | IGWO | GWO | PSO | ABC | CS
F1 | Best | 1.5561e−035 | 3.4355e−013 | 3.7127e+004 | 122.6512 | 3.4168e+003
F1 | Ave | 9.5901e−034 | 1.4097e−012 | 4.9247e+004 | 4.0245e+003 | 5.7296e+003
F1 | Worst | 1.8708e−032 | 3.6831e−012 | 6.5987e+004 | 1.0067e+004 | 8.6850e+003
F1 | Std | 7.1710e−034 | 2.8268e−012 | 8.9305e+003 | 1.8523e+003 | 912.3910
F2 | Best | 2.4553e−036 | 1.6794e−013 | 2.6127e+004 | 91.5939 | 1.3798e+003
F2 | Ave | 7.1368e−034 | 1.0558e−012 | 3.7423e+004 | 3.9368e+003 | 2.0391e+003
F2 | Worst | 5.9594e−033 | 4.7932e−012 | 5.6874e+004 | 1.1487e+004 | 3.0315e+003
F2 | Std | 1.6942e−034 | 3.1877e−012 | 7.9858e+003 | 2.1492e+003 | 514.5256
F3 | Best | 23.6476 | 65.1383 | 1.1274e+005 | 2.9201e+004 | 3.0971e+005
F3 | Ave | 679.3675 | 818.9336 | 1.8885e+005 | 1.2577e+005 | 5.4593e+005
F3 | Worst | 2.5006e+003 | 5.5568e+003 | 3.9436e+005 | 8.1136e+005 | 7.4755e+005
F3 | Std | 1.0607e−009 | 3.0654e−005 | 2.6625e+003 | 1.6236e+004 | 1.7767e+004
F4 | Best | 0.0018 | 0.0607 | 43.1594 | 90.3059 | 22.7549
F4 | Ave | 0.0382 | 0.9562 | 54.6512 | 95.0132 | 29.3341
F4 | Worst | 0.6247 | 3.1841 | 66.4211 | 98.3757 | 39.2722
F4 | Std | 0.4072 | 0.6513 | 4.8263 | 1.6038 | 2.8819
F5 | Best | 95.8954 | 96.8501 | 2.6394e+007 | 8.4200e+003 | 4.2931e+005
F5 | Ave | 96.8157 | 98.0217 | 5.7355e+007 | 7.8186e+005 | 1.0015e+006
F5 | Worst | 98.5244 | 98.5207 | 1.3108e+008 | 9.9299e+006 | 2.4179e+006
F5 | Std | 0.6162 | 0.6708 | 1.3951e+007 | 1.9970e+005 | 3.2088e+005
F6 | Best | 7.4830 | 9.7480 | 2.1793e+004 | 210.6135 | 4.2828e+003
F6 | Ave | 9.3215 | 12.1379 | 3.4705e+004 | 4.4723e+003 | 5.7352e+003
F6 | Worst | 12.6824 | 12.7860 | 4.6567e+004 | 1.1812e+004 | 8.5045e+003
F6 | Std | 0.7722 | 0.9744 | 4.9192e+003 | 2.5176e+003 | 1.0449e+003
F7 | Best | 3.5264e−004 | 0.0026 | 28.9078 | 7.9176 | 0.4508
F7 | Ave | 0.0024 | 0.0066 | 92.8878 | 10.3561 | 0.7003
F7 | Worst | 0.0043 | 0.0130 | 236.6874 | 11.2352 | 1.0116
F7 | Std | 7.0880e−004 | 0.0021 | 47.0314 | 4.3578 | 0.1182
F8 | Best | −8.2275e+004 | −1.9447e+004 | −1.6506e+004 | −1.1603e+004 | −1.2130e+004
F8 | Ave | −6.0161e+004 | −1.5641e+004 | −5.5154e+003 | −1.0941e+004 | −1.0789e+004
F8 | Worst | −5.0981e+003 | −5.5875e+003 | −1.7081e+003 | −2.7028e+004 | −9.3055e+003
F8 | Std | 672.4919 | 4.6259e+003 | 4.4865e+003 | 582.5893 | 643.1070
F9 | Best | 0 | 4.8431e−011 | 722.3155 | 243.0193 | 350.2439
F9 | Ave | 1.6643 | 9.4635 | 877.8493 | 293.0193 | 428.6253
F9 | Worst | 12.7963 | 30.0252 | 1.0625e+003 | 342.8732 | 493.4137
F9 | Std | 2.6605 | 6.4071 | 86.9610 | 21.9504 | 38.0722
F10 | Best | 6.4837e−014 | 5.9873e−008 | 15.8294 | 8.2314 | 10.9187
F10 | Ave | 7.7153e−014 | 1.1140e−007 | 17.0826 | 11.2388 | 13.3370
F10 | Worst | 9.3259e−014 | 2.2161e−007 | 20.7326 | 15.7326 | 17.6444
F10 | Std | 6.0990e−015 | 6.2626e−008 | 0.6998 | 1.6529 | 1.9379
F11 | Best | 0 | 1.4655e−013 | 301.0689 | 2.2155 | 38.0069
F11 | Ave | 0.0028 | 0.0065 | 430.3317 | 29.6147 | 54.1621
F11 | Worst | 0.0254 | 0.0270 | 560.6552 | 76.0269 | 75.9006
F11 | Std | 0.0076 | 0.0110 | 52.2074 | 15.3837 | 11.3528
F12 | Best | 0.1056 | 0.2154 | 8.6436e+006 | 20.1113 | 20.6985
F12 | Ave | 0.1551 | 0.3968 | 3.2730e+007 | 1.3921e+004 | 6.5202e+003
F12 | Worst | 0.3331 | 0.4306 | 9.5779e+007 | 1.4818e+005 | 8.1864e+004
F12 | Std | 0.0406 | 0.0612 | 4.3908e+007 | 3.2646e+004 | 0.8511
Thus the simulation results, including the convergence curves and the statistical data, show that the proposed improved grey wolf optimizer has a better convergence rate and optimization performance. Three reasons can be summarized for this improvement. Firstly, the added evolution operation increases the diversity of the wolves, that is, the diversity of the solutions, which helps the algorithm jump out of local extrema. Secondly, the parent-selection method of DE and the dynamic scaling factor F give the algorithm good exploration ability in the early search stage and good exploitation ability in the later stage, so both the search precision and the convergence speed are improved. In addition, the SOF wolf-updating mechanism further decreases the probability of the algorithm falling into a local extremum.

Conclusions

To achieve a proper compromise between exploration and exploitation, further accelerate convergence and increase the optimization accuracy of GWO, an improved grey wolf optimizer (IGWO) is proposed in this paper. Biological evolution and the SOF principle of nature are added to the standard GWO. Simulation experiments are carried out on twelve typical function optimization problems, and the results show that the proposed IGWO has better convergence velocity and optimization performance than the DE, PSO, ABC and CS algorithms. On the one hand, the evolution operation increases the diversity of the wolves and gives the algorithm good exploration ability in the early search stage and good exploitation ability in the later stage; on the other hand, the SOF wolf-updating mechanism decreases the probability of falling into a local optimum. In future work, we will carry out similar hybridization of other swarm intelligence optimization algorithms, such as the dragonfly algorithm (DA), ant lion optimizer (ALO), multi-verse optimizer (MVO) and coral reefs optimization (CRO) algorithm.