
A Modified Mean Gray Wolf Optimization Approach for Benchmark and Biomedical Problems.

Narinder Singh1, S B Singh1.   

Abstract

A modified variant of the gray wolf optimization algorithm, namely the mean gray wolf optimization algorithm, has been developed by modifying the position update (encircling behavior) equations of the gray wolf optimization algorithm. The proposed variant has been tested on 23 well-known standard benchmark test functions (unimodal, multimodal, and fixed-dimension multimodal), and its performance has been compared with particle swarm optimization and gray wolf optimization. The proposed algorithm has also been applied to the classification of 5 data sets to check the feasibility of the modified variant. The results obtained are compared with many other meta-heuristic approaches, ie, gray wolf optimization, particle swarm optimization, population-based incremental learning, ant colony optimization, etc. The results show that the modified variant is able to find the best solutions in terms of a high level of accuracy in classification and improved local optima avoidance.

Keywords:  Gray wolf optimization (GWO); meta-heuristics; optimization techniques

Year:  2017        PMID: 28932103      PMCID: PMC5598817          DOI: 10.1177/1176934317729413

Source DB:  PubMed          Journal:  Evol Bioinform Online        ISSN: 1176-9343            Impact factor:   1.625


Literature Review

Many researchers have proposed various nature-inspired techniques to solve different types of real-life problems and to improve the quality of solutions. The most popular meta-heuristic algorithms are discussed in this section. Evolution strategies (ES) are evolutionary algorithms that date back to the 1960s and are most commonly applied to black-box global optimization of functions in continuous search spaces. Evolution strategy was proposed by Rechenberg.[1] This population-based approach is built on ideas of evolution and adaptation: mutation, recombination, and selection are applied to a population of candidate solutions to iteratively evolve better and better solutions to the optimization problem. The particle swarm optimization (PSO) algorithm was first introduced by RC Eberhart (electrical engineer) and James Kennedy (social psychologist).[2] Its fundamental idea was primarily inspired by the simulation of the social behavior of animals such as bird flocking and fish schooling. While searching for food, the birds either scatter or move together before they settle in the position where they can find the food. As the birds move from one position to another, there is always a bird that can smell the food very well; that is, this bird is aware of the position where the food can be found because it has the correct information about the food resource. Because the birds transmit this information, particularly the useful information, throughout the search, the flock eventually converges to the position where food can be found. Genetic algorithm (GA) was proposed by Holland.[3] This approach is inspired by Darwin's theory of evolution, "survival of the fittest." In this approach, each new population is created by mutation and combination of the individuals of the previous generation.
Because the best individuals have a higher probability of participating in generating the new candidate position, the new position is likely to be better than the previous one. The ant colony optimization (ACO) approach was proposed by Marco Dorigo et al.[4] This approach is based on the behavior of ants seeking a path between their colony and a source of food. The basic idea has since been diversified to solve a wider class of numerical problems and to improve the quality of solutions. Population-based incremental learning (PBIL) was introduced by Shumeet.[5] It is a global optimization approach and an estimation-of-distribution algorithm. The PBIL approach is an extension of the evolutionary GA achieved through re-examining the performance of the evolutionary GA in terms of competitive learning. It is simpler than a GA and in a number of cases leads to better-quality solutions than a standard GA. More recently, some of the most popular variants are gravitational search algorithm (GSA),[6] gravitational local search,[7] big bang-big crunch,[8] central force optimization,[9] artificial chemical reaction optimization algorithm,[10] charged system search (CSS),[11] ray optimization,[12] galaxy-based search algorithm,[13] black hole,[14] curved space optimization,[15] small-world optimization algorithm,[16] and many others. All these approaches differ from evolutionary algorithms in the sense that a random set of search agents moves around the search area according to physical rules. The gray wolf optimization (GWO) algorithm was first proposed by Mirjalili et al.[17] It is a nature-inspired optimization approach that mimics the leadership hierarchy and hunting mechanism of gray wolves in nature. Four types of gray wolves, alpha (α), beta (β), delta (δ), and omega (ω), are used for simulating the leadership hierarchy.
The wolf nearest to a target is designated α, the second nearest is designated β, the third nearest is designated δ, and the remaining wolves are designated ω. The 3 main stages of hunting, searching for the target, encircling the target, and attacking the target, have been implemented. The performance of this approach was tested on several benchmark functions and real-life problems. On the basis of the results obtained, it was concluded that the approach is superior to other existing nature-inspired approaches such as PSO, differential evolution, GSA, ES, and evolutionary programming. The GWO for training multilayer perceptrons was first proposed by Mirjalili.[18] Using this variant, the author solved 3 function approximation data sets and 8 standard data sets including 5 classification problems. The performance of the proposed variant was compared with a number of existing nature-inspired algorithms such as PSO, GA, ACO, ES, and PBIL. The results showed that the proposed trainer provides competitive solutions in the form of improved local optima avoidance and also demonstrates a high level of accuracy in approximation and classification. Some of the recent population-based nature-inspired training algorithms are social spider optimization,[19] invasive weed optimization,[20] chemical reaction optimization,[21] teaching-learning-based optimization,[22] biogeography-based optimization,[23] and CSS.[24] Several researchers have used the above variants to solve real-life medical problems and demonstrated their high performance in terms of approximating the global optimum. In this article, we have also solved such real medical problems using the newly proposed mean gray wolf optimization (MGWO) algorithm. We have also found that the quality of the solutions obtained for these problems using the MGWO algorithm is better than that of other existing algorithms.
Two novel binary versions of the GWO (bGWO) algorithm were also proposed by Emary et al[25] for feature selection in wrapper mode. These algorithms were applied to feature selection in the machine learning domain using different initialization methods. The bGWO approaches were evaluated in the feature selection domain, and the results were compared against 2 well-known feature selection algorithms, PSO and GA. Mittal et al[26] developed a modified variant of the GWO called the modified GWO, in which an exponential decay function is used to improve the exploitation and exploration of the search space over the course of generations. On the basis of the results obtained, the authors showed that the modified variant benefits from higher exploration in comparison with the standard GWO; the performance of the variant was verified on a number of standard benchmarks and real-life NP-hard problems. Sodeifian et al[27] used the response surface methodology to study the efficiency of supercritical fluid extraction from Cleome coluteoides. Chemical compositions extracted by hydrodistillation and SC-CO2 methods were identified by gas chromatography (GC)/mass spectrometry and quantified by GC/flame ionization detection. Comparing the 2 techniques, the SC-CO2 method showed a higher total extraction yield. The rest of the article is organized as follows. The newly proposed MGWO algorithm, together with its mathematical model, is presented in section "MGWO Algorithm." The tested benchmark functions and numerical experiments are presented in sections "Testing Functions" and "Numerical Experiments." Parameter settings, results, and discussion of the standard benchmark functions and real-life problems are presented in sections "Parameter Setting," "Analysis and Discussion on the Results," and "Real-Life Data Set Problems." Finally, the conclusion of the work is summarized at the end of the article.

Gray Wolf Optimization

Mirjalili et al[17] proposed a new swarm-based meta-heuristic approach. This variant mimics the hunting behavior and social leadership of gray wolves in nature. In this variant, the population is divided into 4 different groups (Figure 1).
Figure 1.

Hierarchy of gray wolf (dominance decreases from top to down). Adapted from Mirjalili et al.[17]

The first 3 wolves in the best positions (fittest) are indicated as α, β, and δ, which guide the other wolves (ω) of the group toward promising areas of the search space. The position of each wolf of the group is updated using the following mathematical equations:

    D = |C · X_p(t) − X(t)|
    X(t + 1) = X_p(t) − A · D

where X_p is the position vector of the prey, t indicates the current iteration, and X indicates the position vector of a gray wolf. The vectors A and C are mathematically calculated as follows:

    A = 2a · r1 − a
    C = 2 · r2

where the components of a are linearly decreased from 2 to 0 and r1, r2 are random vectors lying in [0, 1].
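The encircling update can be sketched as follows (a minimal NumPy illustration; the article's experiments use MATLAB, and the function name here is mine, not the article's):

```python
import numpy as np

rng = np.random.default_rng(0)

def encircle(prey_pos, wolf_pos, a):
    """One GWO encircling update: D = |C*X_p - X|, X(t+1) = X_p - A*D."""
    r1, r2 = rng.random(wolf_pos.shape), rng.random(wolf_pos.shape)
    A = 2.0 * a * r1 - a                 # A = 2a*r1 - a
    C = 2.0 * r2                         # C = 2*r2
    D = np.abs(C * prey_pos - wolf_pos)  # distance to the prey
    return prey_pos - A * D              # new wolf position
```

When a has decayed to 0, A vanishes and the wolf lands exactly on the prey position, which is what drives the late-stage convergence of the algorithm.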

Hunting

To mathematically simulate the hunting behavior of gray wolves, the hunt is assumed to be guided by α, β, and δ, with the other wolves participating occasionally. Supposing that α (the best candidate solution), β, and δ have better knowledge about the potential location of the prey, we save the first 3 best candidate solutions obtained so far and oblige the other search agents to update their positions according to the positions of these best search agents. The following mathematical equations are developed for this simulation:

    D_α = |C_1 · X_α − X|,  D_β = |C_2 · X_β − X|,  D_δ = |C_3 · X_δ − X|
    X_1 = X_α − A_1 · D_α,  X_2 = X_β − A_2 · D_β,  X_3 = X_δ − A_3 · D_δ
    X(t + 1) = (X_1 + X_2 + X_3)/3

The wolves update their positions randomly around the prey as represented in Figure 2.[17]
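The leader-guided hunting step, averaging the three guided positions X1, X2, and X3, can be sketched as follows (names and NumPy usage are illustrative, not the article's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def hunt_update(x, alpha, beta, delta, a):
    """GWO hunting step: average of the three leader-guided positions."""
    def guided(leader):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A, C = 2.0 * a * r1 - a, 2.0 * r2
        D = np.abs(C * leader - x)   # D_alpha, D_beta, D_delta
        return leader - A * D        # X1, X2, X3
    return (guided(alpha) + guided(beta) + guided(delta)) / 3.0
```

With a = 0 the update reduces to the plain mean of the three leaders, so the pack contracts onto its best members.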
Figure 2.

Positions updated by the wolves in gray wolf optimization. Adapted from Mirjalili et al.[17]


MGWO Algorithm

In this article, a modified variant, MGWO, is proposed for the purpose of improving the accuracy, convergence speed, and time performance of the GWO algorithm. In the proposed variant, the mathematical equations of encircling and hunting have been modified; the remaining equations and procedure are the same as in GWO.[17] The main purpose of this variant is to improve the movement, or optimal path, of each wolf in the search space. The MGWO approach is outlined in the following sections.

Encircling prey

Gray wolves encircle the prey during the hunt; this behavior is modified using the following mathematical equations:

    D = |C · X_p(t) − µ(t)|     (8)
    X(t + 1) = X_p(t) − A · D   (9)

where µ is the mean of the wolf positions, X_p is the position vector of the prey, t indicates the current iteration, and X indicates the position vector of a gray wolf. The vectors A and C are expressed as follows:

    A = 2a · r1 − a     (10)
    C = 2 · r2          (11)

where the components of a are linearly decreased from 2 to 0, and r1, r2 are random vectors lying in [0, 1]. The hunting of the prey is usually guided by the α, β, and δ groups, which participate occasionally. The first 3 best candidate solutions are referred to as α, β, and δ, and the remaining candidate solutions are denoted by ω. The position of each wolf is modified in the search area by taking the mean of the positions. The following modified mathematical equations are proposed in this regard (Figure 3):

    D_α = |C_1 · X_α − µ|,  D_β = |C_2 · X_β − µ|,  D_δ = |C_3 · X_δ − µ|     (12) to (14)
    X_1 = X_α − A_1 · D_α,  X_2 = X_β − A_2 · D_β,  X_3 = X_δ − A_3 · D_δ     (15)
    X(t + 1) = (X_1 + X_2 + X_3)/3                                            (16)
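The article describes the modification as taking the mean of the positions, but the modified equation is not cleanly recoverable from this text; the sketch below therefore treats the placement of µ in the distance term as an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

def mgwo_encircle(prey_pos, wolf_pos, mean_pos, a):
    """Modified encircling sketch: the wolf's own position in the
    distance term is replaced by the mean position mu (assumed form)."""
    r1, r2 = rng.random(wolf_pos.shape), rng.random(wolf_pos.shape)
    A, C = 2.0 * a * r1 - a, 2.0 * r2
    D = np.abs(C * prey_pos - mean_pos)  # D = |C*X_p - mu|, assumed placement
    return prey_pos - A * D
```

As with standard GWO, the update collapses onto the prey position once a reaches 0, regardless of how µ enters the distance term.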
Figure 3.

(a) Performance index graph and (b) performance graph of PSO, GWO, and MGWO. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

In the GWO and MGWO algorithms, the wolves update positions randomly around the prey, which can symbolically be represented as shown in Figure 4.
Figure 4.

Positions updated in GWO and MGWO. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization.

The pseudocode of the MGWO algorithm:

    Initialize the population
    Initialize a, A, and C
    Calculate the fitness of each search candidate (agent) of the population in the search space
    X_α = the first best search candidate (agent)
    X_β = the second best search candidate (agent)
    X_δ = the third best search candidate (agent)
    While (t < maximum number of iterations)
        For each search candidate (agent)
            Update the position of the current search candidate (agent) using equation (16)
        End for
        Update a, A, and C using equations (10) and (11)
        Calculate the fitness of all search candidates (agents)
        Update X_α, X_β, and X_δ using equations (12) to (14)
    End while
    Return X_α
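The pseudocode loop can be sketched in Python (the article's implementation is in MATLAB; this NumPy version with hypothetical names follows the leader-averaged position update of equation (16) and the parameter updates of equations (10) and (11), as referenced in the pseudocode):

```python
import numpy as np

def gwo_minimize(f, dim, lb, ub, n_agents=30, max_iter=500, seed=0):
    """Sketch of the GWO/MGWO main loop from the pseudocode above.

    f      : objective function to minimize (takes a 1-D array)
    lb, ub : scalar lower/upper bounds of the search area
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))  # initialize the population
    fit = np.apply_along_axis(f, 1, X)
    order = np.argsort(fit)
    alpha, beta, delta = (X[order[k]].copy() for k in range(3))
    f_alpha = fit[order[0]]
    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter               # linearly decreased from 2 to 0
        for i in range(n_agents):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2  # equations (10) and (11)
                D = np.abs(C * leader - X[i])      # leader-guided distances
                new += leader - A * D
            X[i] = np.clip(new / 3.0, lb, ub)      # leader-averaged update, eq. (16)
        fit = np.apply_along_axis(f, 1, X)
        order = np.argsort(fit)
        if fit[order[0]] < f_alpha:                # keep the best solution found so far
            f_alpha, alpha = fit[order[0]], X[order[0]].copy()
        beta, delta = X[order[1]].copy(), X[order[2]].copy()
    return alpha, f_alpha
```

On a low-dimensional sphere function this loop converges quickly toward the global minimum at 0.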

Testing Functions

The convergence and time performance of the proposed variant have been tested on several types of standard functions, and the results obtained are compared with those obtained using other recent meta-heuristics. These classical functions are divided into 3 different parts, ie, unimodal, multimodal, and fixed-dimension multimodal functions, and are listed in Appendix 1 (Tables A to C), where fmin is the minimum objective function value, Dimension is the dimension, and Range is the boundary of the standard function's search area. All these classical functions have been used by many scientists in their research (Holland,[3] Eberhart et al,[2] Dorigo et al,[4] Shumeet,[5] and many others).
Table A.

Unimodal benchmark functions.

Function | Dimension | Range | fmin
F1(x) = Σ_{i=1}^{n} x_i² | 30 | [−100, 100] | 0
F2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i| | 30 | [−10, 10] | 0
F3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)² | 30 | [−100, 100] | 0
F4(x) = max_i {|x_i|, 1 ≤ i ≤ n} | 30 | [−100, 100] | 0
F5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | 30 | [−30, 30] | 0
F6(x) = Σ_{i=1}^{n} ([x_i + 0.5])² | 30 | [−100, 100] | 0
F7(x) = Σ_{i=1}^{n} i·x_i⁴ + rand[0, 1) | 30 | [−1.28, 1.28] | 0
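A few of these unimodal functions, written out directly from the table (a Python sketch rather than the article's MATLAB code):

```python
import numpy as np

def f1(x):
    """F1, sphere: sum of x_i^2."""
    return float(np.sum(x ** 2))

def f2(x):
    """F2: sum of |x_i| plus product of |x_i|."""
    return float(np.sum(np.abs(x)) + np.prod(np.abs(x)))

def f5(x):
    """F5, Rosenbrock: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2))
```

Each has fmin = 0: F1 and F2 at the origin, F5 at the all-ones point.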
Table C.

Fixed-dimension multimodal benchmark functions.

Function | Dimension | Range | fmin
F14(x) = (1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)⁶))⁻¹ | 2 | [−65, 65] | 1
F15(x) = Σ_{i=1}^{11} [a_i − x_1(b_i² + b_i·x_2)/(b_i² + b_i·x_3 + x_4)]² | 4 | [−5, 5] | 0.00030
F16(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1x_2 − 4x_2² + 4x_2⁴ | 2 | [−5, 5] | −1.0316
F17(x) = (x_2 − 5.1/(4π²)·x_1² + (5/π)x_1 − 6)² + 10(1 − 1/(8π))cos x_1 + 10 | 2 | [−5, 5] | 0.398
F18(x) = [1 + (x_1 + x_2 + 1)²(19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²)] × [30 + (2x_1 − 3x_2)²(18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²)] | 2 | [−2, 2] | 3
F19(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{3} a_ij(x_j − p_ij)²) | 3 | [1, 3] | −3.86
F20(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{6} a_ij(x_j − p_ij)²) | 6 | [0, 1] | −3.32
F21(x) = −Σ_{i=1}^{5} [(X − a_i)(X − a_i)ᵀ + c_i]⁻¹ | 4 | [0, 10] | −10.1532
F22(x) = −Σ_{i=1}^{7} [(X − a_i)(X − a_i)ᵀ + c_i]⁻¹ | 4 | [0, 10] | −10.4028
F23(x) = −Σ_{i=1}^{10} [(X − a_i)(X − a_i)ᵀ + c_i]⁻¹ | 4 | [0, 10] | −10.5363
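Two of the fixed-dimension functions, F16 (six-hump camel back) and F17 (Branin), in their standard forms (a Python sketch; the known minima are approximately −1.0316 and 0.398):

```python
import numpy as np

def f16(x):
    """F16, six-hump camel back (2-D); global minimum ~ -1.0316."""
    x1, x2 = x
    return (4 * x1 ** 2 - 2.1 * x1 ** 4 + x1 ** 6 / 3
            + x1 * x2 - 4 * x2 ** 2 + 4 * x2 ** 4)

def f17(x):
    """F17, Branin (2-D); global minimum ~ 0.398."""
    x1, x2 = x
    return ((x2 - 5.1 / (4 * np.pi ** 2) * x1 ** 2 + 5.0 / np.pi * x1 - 6) ** 2
            + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)
```

F16 attains its minimum near (±0.0898, ∓0.7126), and F17 attains its minimum at (π, 2.275), among other points.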

Numerical Experiments

The MGWO, GWO, PSO, PBIL, and ACO algorithms are coded in MATLAB R2013a and run on a machine with an Intel Core i5-430M processor, Intel HD Graphics, a 15.6″ HD LCD, a 320 GB HDD, and 3 GB of memory.

Parameter Setting

In the MGWO, GWO, PSO, PBIL, and ACO algorithms, we have set the following parameters: number of search agents (candidates) = 30; maximum number of iterations (generations) = 500; components of a linearly decreased from 2 to 0.
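The linearly decreasing control parameter used by GWO-family algorithms can be written as a one-line schedule (an illustrative sketch, not the article's code):

```python
def a_schedule(t, max_iter):
    """Control parameter a, linearly decreased from 2 to 0 over the run."""
    return 2.0 * (1.0 - t / max_iter)
```

With 500 iterations, a starts at 2, passes 1 at the halfway point, and reaches 0 in the final generation, shifting the search from exploration to exploitation.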

Analysis and Discussion on the Results

In this section, the effectiveness of the MGWO algorithm is checked. Usually, this is done by solving a set of benchmark problems. We have used 23 such classical functions for the purpose of comparing the performance of the modified variant with other recent meta-heuristics. These classical functions are divided into 3 types:

Unimodal: these functions are suitable for benchmarking the exploitation of the variants because they have one global optimum and no local optima. These functions are given in Appendix 1, Table A.

Multimodal: these functions have a large number of local optima and are helpful to examine the local optima avoidance and exploration of the variants. These functions are given in Appendix 1, Table B.
Table B.

Multimodal benchmark functions.

Function | Dimension | Range | fmin
F8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|) | 30 | [−500, 500] | −418.9829 × 5
F9(x) = Σ_{i=1}^{n} [x_i² − 10cos(2πx_i) + 10] | 30 | [−5.12, 5.12] | 0
F10(x) = −20exp(−0.2√((1/n)Σ_{i=1}^{n} x_i²)) − exp((1/n)Σ_{i=1}^{n} cos(2πx_i)) + 20 + e | 30 | [−32, 32] | 0
F11(x) = (1/4000)Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1 | 30 | [−600, 600] | 0
F12(x) = (π/n){10sin²(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)²[1 + 10sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k(−x_i − a)^m if x_i < −a | 30 | [−50, 50] | 0
F13(x) = 0.1{sin²(3πx_1) + Σ_{i=1}^{n} (x_i − 1)²[1 + sin²(3πx_{i+1})] + (x_n − 1)²[1 + sin²(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [−50, 50] | 0
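Three of these multimodal functions (F9 Rastrigin, F10 Ackley, F11 Griewank) in their standard forms, as a Python sketch:

```python
import numpy as np

def f9(x):
    """F9, Rastrigin."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def f10(x):
    """F10, Ackley."""
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f11(x):
    """F11, Griewank."""
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

All three have their global minimum of 0 at the origin, surrounded by many local optima, which is what makes them useful for testing exploration.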
Fixed-dimension multimodal: the dimension of these functions is fixed. The mathematical equations of these functions are given in Appendix 1, Table C. The MGWO and GWO variants were run 30 times on each benchmark function. The numerical results (best solutions, minimum and maximum objective function values, standard deviation, mean, and time performance) are reported in Tables 1 to 18. The modified variant, GWO, and PSO algorithms have to be run more than 10 times to obtain reliable statistics. It is common practice to run a variant on a function many times and report the best solutions, mean, standard deviation, time performance, and minimum and maximum objective function values obtained in the last generation.
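The 30-run statistics reported in the tables (minimum, maximum, mean, SD) can be collected with a small helper like the following (a sketch; `optimizer` is a hypothetical callable returning the best objective value of one independent run):

```python
import numpy as np

def run_statistics(optimizer, n_runs=30):
    """Collect min/max/mean/SD of best objective values over repeated
    independent runs, as in the article's 30-run tables."""
    best = np.array([optimizer(seed=s) for s in range(n_runs)])
    return {"min": best.min(), "max": best.max(),
            "mean": best.mean(), "sd": best.std(ddof=1)}
```

Using `ddof=1` gives the sample standard deviation, which is the usual choice when summarizing a finite set of runs.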
Table 1.

Best solution obtained by GWO and MGWO on 500 generations.

Iterations | f1(x) GWO | f1(x) MGWO | f2(x) GWO | f2(x) MGWO | f3(x) GWO | f3(x) MGWO
10 | −1.7002e−15 | 6.8765e−19 | −1.3769e−18 | −3.8434e−22 | 0.00015614 | −0.00012367
20 | −1.874e−15 | 7.1702e−19 | −1.4543e−18 | −3.7102e−22 | −2.7089e−05 | 0.00025874
30 | 1.6404e−15 | 6.179e−19 | −1.1437e−18 | 4.1373e−22 | −0.00010487 | −0.00013463
40 | −1.381e−15 | 7.9917e−19 | 8.8476e−19 | 5.411e−22 | −7.8006e−05 | −0.00011876
50 | 1.7321e−15 | −8.077e−19 | −1.2949e−18 | −3.931e−22 | −0.00014099 | 0.00019745
60 | −1.8335e−15 | 5.9303e−19 | −1.1492e−18 | 3.7938e−22 | 0.00037311 | −0.00018313
70 | 1.3993e−15 | −8.4851e−19 | 1.7189e−18 | −4.5698e−22 | −0.00013835 | 8.841e−05
80 | −1.5119e−15 | 6.3197e−19 | 1.2181e−18 | 5.4712e−22 | −0.00010291 | 0.00016454
90 | −1.6507e−15 | 7.8559e−19 | 1.4136e−18 | 5.7181e−22 | −8.6107e−05 | −0.000233
100 | 1.8193e−15 | −7.5823e−19 | −8.3248e−19 | −6.2308e−22 | 0.00033237 | 0.00017331
120 | 1.8944e−15 | −6.099e−19 | 1.4121e−18 | 4.4111e−22 | −0.00016013 | −0.00020051
140 | 1.7885e−15 | 7.9654e−19 | 1.2573e−18 | 4.2805e−22 | −6.0739e−05 | 0.00021778
160 | 1.6329e−15 | 6.844e−19 | −9.7035e−19 | −4.765e−22 | −0.00014521 | −0.00025557
180 | 1.8544e−15 | −7.1333e−19 | 1.0734e−18 | −4.3413e−22 | 0.00034517 | 7.789e−05
200 | 1.5281e−15 | 8.7761e−19 | −1.1473e−18 | −6.9381e−22 | −0.00021773 | 6.441e−05
220 | −1.8403e−15 | 8.8858e−19 | −1.2264e−18 | −4.3322e−22 | −0.00010868 | 5.7033e−05
240 | 1.3295e−15 | 7.5088e−19 | 1.0645e−18 | −6.6125e−22 | 0.00015798 | 0.00012506
260 | −1.4762e−15 | 6.8189e−19 | 1.0632e−18 | 4.212e−22 | 0.00019816 | −0.00030953
280 | −1.6359e−15 | −6.1216e−19 | −1.3139e−18 | −5.0504e−22 | −0.00029779 | 0.00010249
300 | 1.6576e−15 | 6.6286e−19 | 1.2966e−18 | 4.6399e−22 | 0.00020223 | 1.5144e−08
320 | −1.7016e−15 | −7.0218e−19 | 1.9433e−18 | 4.4932e−22 | −0.00019964 | 4.7633e−05
340 | −1.8177e−15 | −6.9387e−19 | −9.3109e−19 | −4.7001e−22 | 0.00029937 | 4.6168e−05
360 | −1.6465e−15 | −7.3474e−19 | 1.3297e−18 | −4.9018e−22 | −0.00013395 | 6.1066e−05
380 | −1.6094e−15 | −7.3569e−19 | 1.088e−18 | 6.4093e−22 | −5.6145e−05 | −6.9071e−05
400 | 1.8205e−15 | −7.0419e−19 | 1.3262e−18 | 5.3603e−22 | −5.6473e−05 | −8.3593e−05
420 | −1.703e−15 | −6.8039e−19 | −1.3781e−18 | 3.7424e−22 | −7.5682e−05 | 6.4413e−06
440 | 1.7092e−15 | −8.0956e−19 | 1.6653e−18 | −4.9875e−22 | −1.522e−07 | −0.00011816
460 | −1.3593e−15 | 7.4904e−19 | 1.3845e−18 | 5.3811e−22 | 0.00031088 | 0.00011756
480 | −1.7123e−15 | 6.5486e−19 | 1.1146e−18 | −4.8691e−22 | −0.00015196 | 6.5644e−06
500 | 1.6999e−15 | 6.8149e−19 | 1.2265e−18 | −4.8087e−22 | −0.00017144 | 0.00011237

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 18.

Time-consuming results of fixed-dimension multimodal benchmark functions.

Problem | PSO TIC TOC | PSO CPUTIME | PSO CLOCK | GWO TIC TOC | GWO CPUTIME | GWO CLOCK | MGWO TIC TOC | MGWO CPUTIME | MGWO CLOCK
F14 | 1.01128 | 0.0251 | 1.011 | 1.00888 | 0.0312002 | 1.009 | 1.00553 | 0.0001 | 1.005
F15 | 1.01445 | 0.8953011 | 1.014 | 1.00673 | 0.0624004 | 1.007 | 1.01143 | 0.0156001 | 1.011
F16 | 1.01589 | 0.00081 | 1.016 | 0.998874 | 0.0312002 | 1.009 | 1.01114 | 0.00001 | 1.007
F17 | 1.01227 | 0.9158702 | 1.014 | 1.01169 | 0.0156001 | 1.014 | 1.00204 | 0.0156001 | 1.014
F18 | 1.01162 | 0.02305 | 1.014 | 1.00446 | 0.0156001 | 1.014 | 1.00321 | 0.00001 | 1.014
F19 | 1.00426 | 0.4167091 | 1.014 | 1.00695 | 0.0156001 | 1.014 | 1.0101 | 0.0156001 | 1.014
F20 | 1.01494 | 0.39031 | 1.015 | 1.01401 | 0.0312002 | 1.014 | 1.00902 | 0.00001 | 1.014
F21 | 1.00002 | 0.00001 | 1.014 | 1.00717 | 0.0156001 | 1.004 | 1.01156 | 0.00001 | 1.003
F22 | 0.996519 | 0.51167 | 0.999 | 1.00192 | 0.0312002 | 1.014 | 1.0097 | 0.02489 | 1.014
F23 | 1.01091 | 0.0436013 | 1.014 | 1.00325 | 0.0312002 | 1.014 | 1.01417 | 0.0312002 | 1.014

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Bold values highlight the results of the proposed variant.
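The TIC TOC, CPUTIME, and CLOCK columns correspond to MATLAB's wall-clock and CPU-time readings; an analogous measurement in Python might look like this (illustrative only, not the article's code):

```python
import time

def timed(fn, *args):
    """Measure one call's wall-clock time (like MATLAB tic/toc) and
    CPU time (like MATLAB cputime). Returns (result, wall, cpu)."""
    w0, c0 = time.perf_counter(), time.process_time()
    result = fn(*args)
    return result, time.perf_counter() - w0, time.process_time() - c0
```

Wall-clock and CPU time can differ noticeably when the process sleeps or the machine is loaded, which is why the tables report both.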

To verify the convergence and time performance of the MGWO variant, the PSO and GWO variants are chosen for comparison. Here, we use 500 generations and 30 search agents for each of the variants. The convergence performance on the unimodal, multimodal, and fixed-dimension multimodal standard classical functions for PSO, GWO, and MGWO is given in Figures 5 to 27, and the results are presented in Tables 1 to 9. The simulated results in Tables 1 to 9 and Figures 5 to 27 show that the proposed variant is superior to PSO and GWO in terms of rate of convergence and best optimal solution. Hence, all experimental results reveal that the MGWO is relatively better as compared with PSO and GWO.
Figure 5.

Convergence graph of unimodal benchmark function (F1). GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Figure 27.

Convergence graph of fixed-dimension multimodal benchmark function (F23). GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Table 9.

Best solution obtained by GWO and MGWO on 500 generations.

Iterations | f21(x) GWO | f21(x) MGWO | f22(x) GWO | f22(x) MGWO | f23(x) GWO | f23(x) MGWO
100 | 4.0011 | 4.0025 | 4.0011 | 3.9995 | 4.0029 | 3.9996
200 | 4.0058 | 3.9959 | 3.9978 | 3.9979 | 4.0026 | 4.0004
400 | 3.9982 | 4.0021 | 4.0036 | 3.9987 | 4.0023 | 3.9978
500 | 3.9979 | 3.9971 | 4.0009 | 4.0018 | 3.998 | 3.9989

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

[Figures 5 to 27 show the convergence graphs of PSO, GWO, and MGWO on the unimodal (F1 to F7), multimodal (F8 to F13), and fixed-dimension multimodal (F14 to F23) benchmark functions.] The experimental statistical results of the MGWO, PSO, and GWO variants on the unimodal benchmark functions are shown in Tables 10 and 13. On the basis of the results in these tables, we compare the performance of the modified variant with the GWO and PSO variants in terms of the minimum and maximum objective function values, mean, and standard deviation. The analysis shows that the modified variant gives highly competitive solutions as compared with PSO and GWO on the unimodal benchmark functions. As previously discussed, the unimodal benchmark problems are suitable for benchmarking the exploitation of the variants. Hence, all obtained solutions evidence the high exploitation capability of the MGWO variant.
Table 10.

Results of unimodal benchmark functions (maximum and minimum).

Problem | PSO Minimum | PSO Maximum | GWO Minimum | GWO Maximum | MGWO Minimum | MGWO Maximum
F1 | 1.6301e−05 | 5.8559e+04 | 8.3933e−29 | 6.7832e+04 | 1.5833e−35 | 7.2334e+04
F2 | 0.0244 | 2.2522e+12 | 3.7699e−17 | 4.0253e+12 | 1.4605e−20 | 8.5140e+11
F3 | 96.0253 | 1.1455e+05 | 4.6658e−07 | 1.1711e+05 | 2.599e−07 | 1.2165e+05
F4 | 1.2636 | 90.8299 | 3.6939e−07 | 90.6367 | 1.2213e−08 | 91.5194
F5 | 26.7395 | 2.5613e+08 | 27.1234 | 2.2938e+08 | 26.2201 | 2.6366e+08
F6 | 1.9802e−05 | 6.0740e+04 | 1.2585 | 6.9920e+04 | 1.2518 | 7.1406e+04
F7 | 0.21308 | 9.4207 | 0.003646 | 105.9925 | 0.00057612 | 146.7004

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Table 13.

Results of unimodal benchmark functions (mean and SD).

Problem | PSO Mean | GWO Mean | MGWO Mean | PSO SD | GWO SD | MGWO SD
F1 | 874.3217 | 710.7351 | 445.2002 | 5.6498e+03 | 5.0831e+03 | 4.2505e+03
F2 | 4.5045e+09 | 8.0606e+09 | 1.7028e+09 | 1.0072e+11 | 1.8002e+11 | 3.8076e+10
F3 | 3.3311e+03 | 2.5686e+03 | 2.1484e+03 | 1.3596e+04 | 1.0040e+04 | 9.6138e+03
F4 | 5.1486 | 3.9130 | 3.3071 | 9.8558 | 14.2933 | 14.0919
F5 | 1.1509e+06 | 1.4381e+06 | 9.9549e+05 | 1.4586e+07 | 1.4875e+07 | 1.4018e+07
F6 | 859.7742 | 709.4915 | 421.1404 | 5.4499e+03 | 5.1678e+03 | 4.0498e+03
F7 | 44.9171 | 0.7454 | 0.5288 | 38.6615 | 6.7275 | 7.2339

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Furthermore, the experimental numerical solutions of the proposed variant on the multimodal test functions are shown in Tables 11 and 14. We observe that the modified variant performs better than the other meta-heuristics on these multimodal functions. The results obtained in Tables 11 and 14 strongly suggest that the high exploration of the MGWO variant enables it to explore the search area extensively and discover promising regions of the search space.
Table 11.

Results of multimodal benchmark functions (maximum and minimum).

Problem | PSO Minimum | PSO Maximum | GWO Minimum | GWO Maximum | MGWO Minimum | MGWO Maximum
F8 | −3.6455e+03 | −3.3735e+03 | −5703.971 | −2.5289e+03 | −6023.026 | −2.1300e+03
F9 | 53.3912 | 446.0825 | 2.8422e−13 | 469.6513 | 0.0000 | 479.1380
F10 | 0.0681 | 20.8251 | 8.2601e−14 | 20.5838 | 3.9968e−14 | 20.8448
F11 | 0.009957 | 8.4494 | 0.0152316 | 46.0588 | 0.0000 | 687.4643
F12 | 1.4583e−06 | 6.9167e+08 | 0.05501 | 6.9928e+08 | 0.04462 | 7.4944e+08
F13 | 0.0110 | 1.2165e+09 | 1.2322 | 1.2205e+09 | 1.206 | 9.7589e+08

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Table 14.

Results of multimodal benchmark functions (mean and SD).

| Problem | PSO mean | GWO mean | MGWO mean | PSO SD | GWO SD | MGWO SD |
| 8 | −3.4243e+03 | −3.7809e+03 | −3.6613e+03 | 101.8935 | 978.9551 | 1.0165e+03 |
| 9 | 209.5102 | 29.1202 | 11.7840 | 101.8575 | 78.5096 | 49.7529 |
| 10 | 4.2659 | 0.7215 | 0.4159 | 3.5620 | 3.0475 | 2.2324 |
| 11 | 43.0150 | 5.6194 | 3.5615 | 116.0241 | 41.2943 | 37.4242 |
| 12 | 2.8521e+06 | 3.0499e+06 | 2.9527e+06 | 3.7532e+07 | 3.6338e+07 | 3.8702e+07 |
| 13 | 5.2381e+06 | 7.7264e+06 | 2.7124e+06 | 6.7511e+06 | 7.4601e+07 | 4.4432e+07 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Furthermore, the statistical results of the modified variant on the fixed-dimension multimodal functions are presented in Tables 12 and 15. For these functions, we have checked the convergence performance of the modified variant, PSO, and GWO in terms of minimum and maximum objective values, mean, and standard deviation. The solutions are consistent with those of the other benchmark test problems: the modified variant gives highly competitive solutions compared with the other meta-heuristics on these problems.
Table 12.

Results of fixed-dimension multimodal benchmark functions (maximum and minimum).

| Problem | PSO min | PSO max | GWO min | GWO max | MGWO min | MGWO max |
| 14 | 12.6705 | 23.3017 | 2.9821 | 8.3608 | 0.9980 | 30.6623 |
| 15 | 0.0010 | 0.0431 | 0.020363 | 0.0786 | 0.00031732 | 0.2304 |
| 16 | −1.0316 | 0.2656 | −1.0316 | 0.2148 | −1.0316 | 0.7792 |
| 17 | 0.3979 | 1.1355 | 0.39789 | 1.0081 | 0.39789 | 1.5247 |
| 18 | 3 | 62.4398 | 3 | 48.3077 | 3 | 534.8252 |
| 19 | −3.8628 | −3.7784 | −3.8599 | −3.3858 | −3.8609 | −2.9920 |
| 20 | −3.3220 | −1.3907 | −3.3220 | −1.3471 | −3.3220 | −0.9232 |
| 21 | −10.1532 | −0.4051 | −10.1490 | −0.4626 | −10.1495 | −0.2926 |
| 22 | −10.4029 | −0.4329 | −10.4002 | −0.6413 | −10.4015 | −0.3539 |
| 23 | −10.5364 | −1.2565 | −10.5346 | −1.3843 | −10.5359 | −0.6351 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Table 15.

Results of fixed-dimension multimodal benchmark functions (mean and SD).

| Problem | PSO mean | GWO mean | MGWO mean | PSO SD | GWO SD | MGWO SD |
| 14 | 13.8873 | 1.8232 | 3.0669 | 0.6627 | 1.4463 | 1.8076 |
| 15 | 0.0014 | 0.0206 | 0.0013 | 0.0035 | 0.0030 | 0.0112 |
| 16 | −1.0194 | −1.0287 | −1.0273 | 0.1064 | 0.0563 | 0.0817 |
| 17 | 0.4014 | 0.4019 | 0.4048 | 0.0438 | 0.0342 | 0.0561 |
| 18 | 3.1443 | 3.1612 | 4.3807 | 2.6660 | 2.2937 | 23.9465 |
| 19 | −3.8612 | −3.8465 | −3.8486 | 0.0076 | 0.0339 | 0.0521 |
| 20 | −3.2481 | −3.2074 | −3.2436 | 0.1567 | 0.1729 | 0.1890 |
| 21 | −8.9448 | −7.1551 | −7.1596 | 2.4085 | 2.6671 | 2.2778 |
| 22 | −6.8767 | −7.8294 | −7.8380 | 3.8997 | 1.8218 | 2.3014 |
| 23 | −9.5800 | −7.8107 | −7.2332 | 2.0122 | 2.0366 | 2.1071 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.
Finally, the performance of the newly proposed algorithm has been verified using the start and end time of the CPU (TIC and TOC), CPUTIME, and CLOCK. These results are provided in Tables 16 and 17, respectively. It may be seen that the modified variant solved most of the benchmark functions in the least time compared with the other variants.
Table 16.

Time-consuming results of unimodal benchmark functions.

| Problem | PSO TIC TOC | PSO CPUTIME | PSO CLOCK | GWO TIC TOC | GWO CPUTIME | GWO CLOCK | MGWO TIC TOC | MGWO CPUTIME | MGWO CLOCK |
| 1 | 1.01346 | 0.0156001 | 1.013 | 1.01214 | 0.0624004 | 1.012 | 1.00432 | 0.0001 | 1.004 |
| 2 | 1.0116 | 0.001 | 1.014 | 1.01281 | 0.0156001 | 1.014 | 1.00669 | 0.001 | 1.014 |
| 3 | 1.00695 | 0.001 | 1.014 | 1.01441 | 0.001 | 1.014 | 1.0132 | 0.001 | 1.014 |
| 4 | 1.00365 | 0.00001 | 1.016 | 1.01375 | 0.00001 | 1.016 | 0.999516 | 0.00001 | 1.014 |
| 5 | 1.0091 | 0.0416001 | 1.014 | 1.00401 | 0.0312002 | 1.014 | 1.00296 | 0.00012 | 1.014 |
| 6 | 1.00479 | 0.0156001 | 1.014 | 1.01376 | 0.0156001 | 1.014 | 1.00578 | 0.0146001 | 1.014 |
| 7 | 1.006 | 0.0666005 | 1.014 | 1.00101 | 0.0468003 | 1.016 | 1.01178 | 0.0156001 | 1.014 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

Table 17.

Time-consuming results of multimodal benchmark functions.

| Problem | PSO TIC TOC | PSO CPUTIME | PSO CLOCK | GWO TIC TOC | GWO CPUTIME | GWO CLOCK | MGWO TIC TOC | MGWO CPUTIME | MGWO CLOCK |
| 8 | 1.01377 | 0.0156001 | 1.014 | 1.00292 | 0.0156001 | 1.003 | 1.00299 | 0.0109001 | 1.003 |
| 9 | 1.01261 | 0.109201 | 1.013 | 1.00716 | 0.0312002 | 1.007 | 1.00982 | 0.00001 | 1.009 |
| 10 | 1.01401 | 0.0816002 | 1.014 | 1.01014 | 0.0936006 | 1.01 | 1.00116 | 0.0312002 | 1.001 |
| 11 | 1.00841 | 0.0001 | 1.008 | 1.0059 | 0.0001 | 1.006 | 1.00169 | 0.0001 | 1.001 |
| 12 | 1.01046 | 0.0312002 | 1.004 | 1.00295 | 0.0312002 | 1.003 | 1.00916 | 0.0301002 | 1.003 |
| 13 | 1.00584 | 0.421203 | 1.006 | 1.0052 | 0.0156001 | 1.005 | 1.01407 | 0.0312002 | 1.006 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization; PSO, particle swarm optimization.

To sum up, all simulation results indicate that the modified approach is very helpful in improving the efficiency of GWO in terms of both result quality and computational effort.
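The three timing measures correspond to MATLAB-style tic/toc (wall-clock elapsed time), cputime (CPU time consumed by the process), and the system clock. A hedged Python analogue of how one benchmark run would be timed; the workload function is a stand-in, not the paper's code:

```python
import time

def timed_run(optimizer, *args):
    """Time one optimizer run the way Tables 16 and 17 report it:
    wall-clock elapsed time (the TIC/TOC analogue) and CPU time
    consumed (the CPUTIME analogue)."""
    wall0 = time.perf_counter()
    cpu0 = time.process_time()
    result = optimizer(*args)
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    return result, wall, cpu

# Hypothetical stand-in for a single benchmark-function optimization run.
def dummy_run(n):
    return min((i - 3) ** 2 for i in range(n))

best, wall, cpu = timed_run(dummy_run, 10_000)
```

Wall-clock and CPU time can differ (for example, when the machine is loaded), which is why reporting both measures gives a fairer comparison.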

Real-Life Data Set Problems

In this section, the following 5 data set problems are employed: (1) XOR, (2) Balloon, (3) Breast Cancer, (4) Iris, and (5) Heart. These problems have been solved using the modified variant, and the results obtained have been compared with several meta-heuristics. The parameter settings used to run each meta-heuristic are listed in Appendix 1, Table E. The performance of the algorithms has been compared in terms of average, standard deviation, classification rate, and convergence rate. All these data set problems are discussed step by step in the following sections.
Table E.

The initial parameters of algorithms.

| Algorithm | Parameter | Value |
| MGWO | a | Linearly decreased from 2 to 0 |
| MGWO | Population size | 50 for XOR and Balloon, 200 for the rest |
| MGWO | Maximum number of generations | 250 |
| GWO | a | Linearly decreased from 2 to 0 |
| GWO | Population size | 50 for XOR and Balloon, 200 for the rest |
| GWO | Maximum number of generations | 250 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

XOR data set

This data set has 3 attributes (input), 8 training samples, 8 test samples, 2 classes, and 1 output (Appendix 1, Table D[18]). The experimental numerical results obtained through MGWO, GWO, PSO, GA, ACO, ES, and PBIL for this data set are shown in Table 19, and convergence performance of GWO and MGWO variant is shown in Figure 28.
Table D.

Classification data sets.

| Classification data set | Number of attributes | Number of training samples | Number of test samples | Number of classes |
| 3-bits XOR | 3 | 8 | 8 (same as training samples) | 2 |
| Balloon | 4 | 16 | 16 (same as training samples) | 2 |
| Iris | 4 | 150 | 150 (same as training samples) | 3 |
| Breast cancer | 9 | 599 | 100 | 2 |
| Heart | 22 | 80 | 187 | 2 |

Adapted from Mirjalili.[18]

Table 19.

Experimental results for the XOR data set.

| Variant | MSE (ave.) | MSE (std) | Classification rate, % |
| MGWO | 0.0053 | 0.0173 | 100 |
| GWO | 0.009410 | 0.029500 | 100 |
| PSO | 0.084050 | 0.035945 | 37.50 |
| GA | 0.000181 | 0.000413 | 100 |
| ACO | 0.180328 | 0.025268 | 62.50 |
| ES | 0.118739 | 0.011574 | 62.50 |
| PBIL | 0.030228 | 0.039668 | 62.50 |

Abbreviations: ACO, ant colony optimization; ES, evolution strategy; GA, Genetic algorithm; GWO, gray wolf optimization; MGWO, mean gray wolf optimization; MSE, mean squared error; PBIL, population-based incremental learning; PSO, particle swarm optimization.

Bold values highlight the results of proposed variant.

Figure 28.

Convergence graph of XOR data set problem. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization.

It is clear from Table 19 that the MGWO, GWO, and GA variants give better-quality statistical results than the other meta-heuristics. The results obtained with the MGWO, GWO, and GA variants indicate a strong ability to avoid local optima, considerably superior to that of PSO, ACO, ES, and PBIL. The performance of the variants has also been compared in terms of average, standard deviation, classification rate (Table 19), and convergence rate (Figure 28). The low average and standard deviation show the superior local optima avoidance of the variant. On the basis of the obtained results, we conclude that the newly modified variant MGWO gives highly competitive results compared with the existing variants, and the convergence graph shows that MGWO gives better solutions than the GWO variant.
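The MSE and classification-rate columns come from using each meta-heuristic to train a feed-forward network on the 8 XOR samples. A minimal sketch of that fitness evaluation, assuming a small 3-7-1 network (the hidden-layer size and activation functions are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# 3-bit XOR: 3 inputs, 8 samples, 1 output (the parity of the input bits).
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = X.sum(axis=1) % 2

N_IN, N_HID = 3, 7                        # hidden-layer size is an assumption
N_W = N_IN * N_HID + N_HID + N_HID + 1    # 36 trainable weights in total

def mlp_forward(w, X):
    """Forward pass of a tiny 3-7-1 network; `w` is the flat weight
    vector that a trainer such as MGWO would evolve."""
    i = N_IN * N_HID
    W1 = w[:i].reshape(N_IN, N_HID)       # input -> hidden weights
    b1 = w[i:i + N_HID]                   # hidden biases
    W2 = w[i + N_HID:i + 2 * N_HID]       # hidden -> output weights
    b2 = w[-1]                            # output bias
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output in (0, 1)

def fitness(w):
    """MSE over the training samples -- the quantity averaged in Table 19."""
    return float(np.mean((mlp_forward(w, X) - y) ** 2))

def classification_rate(w):
    """Fraction of samples on the correct side of the 0.5 threshold."""
    return float(np.mean((mlp_forward(w, X) > 0.5) == (y == 1)))
```

The meta-heuristic only sees `fitness(w)` as a black box, so the same training loop works unchanged for the Balloon, Iris, Breast Cancer, and Heart data sets once the input/output dimensions are adjusted.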

Balloon data set

It is clear from Appendix 1, Table D[18] that this data set has 4 attributes, 16 training samples, 16 test samples, and 2 classes. The statistical numerical and convergence results of the variants on this data set are shown in Table 20 and Figure 29.
Table 20.

Experimental results for the balloon data set.

| Variant | MSE (ave.) | MSE (std) | Classification rate, % |
| MGWO | 0.0014 | 0.0132 | 100 |
| GWO | 9.38e−15 | 2.81e−14 | 100 |
| PSO | 0.000585 | 0.000749 | 100 |
| GA | 5.08e−24 | 1.06e−23 | 100 |
| ACO | 0.004854 | 0.007760 | 100 |
| ES | 0.019055 | 0.170260 | 100 |
| PBIL | 2.49e−05 | 5.27e−05 | 100 |

Abbreviations: ACO, ant colony optimization; ES, evolution strategy; GA, Genetic algorithm; GWO, gray wolf optimization; MGWO, mean gray wolf optimization; MSE, mean squared error; PBIL, population-based incremental learning; PSO, particle swarm optimization.

Bold values highlight the results of proposed variant.

Figure 29.

Convergence graph of balloon data set problem. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization.

Here, we compare the accuracy of the algorithms in terms of average, standard deviation, classification rate, and convergence rate. First, we observe that all the variants give the same classification rate. Second, on the basis of the statistical and convergence results, we observe that the modified variant gives highly competitive solutions compared with the other variants, ie, the GWO, PSO, GA, ACO, ES, and PBIL algorithms. The convergence results are plotted in Figure 29.

Breast cancer data set

This data set has 9 attributes, 599 training samples, 100 test samples, and 2 classes (Appendix 1, Table D).[18] All problems have been run 10 times using this data set. The numerical results are shown in Table 21. The convergence performance on this data set is plotted in Figure 30.
Table 21.

Experimental results for the breast cancer data set.

| Variant | MSE (ave.) | MSE (std) | Classification rate, % |
| MGWO | 0.0036 | 0.0063 | 99.11 |
| GWO | 0.0012 | 7.4498e−05 | 99 |
| PSO | 0.034881 | 0.002472 | 11.00 |
| GA | 0.003026 | 0.001500 | 98.0 |
| ACO | 0.013510 | 0.002137 | 40.00 |
| ES | 0.040320 | 0.002470 | 06.00 |
| PBIL | 0.032009 | 0.003065 | 07.00 |

Abbreviations: ACO, ant colony optimization; ES, evolution strategy; GA, Genetic algorithm; GWO, gray wolf optimization; MGWO, mean gray wolf optimization; MSE, mean squared error; PBIL, population-based incremental learning; PSO, particle swarm optimization.

Bold values highlight the results of proposed variant.

Figure 30.

Convergence graph of breast cancer data set problem. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization.

We have observed that the modified variant (MGWO) gives a 99.11% classification rate and better convergence behavior (Figure 30), superior to the other meta-heuristics.

Iris data set

This data set is another well-known test data set in the literature. It consists of 4 attributes, 150 training samples, 150 test samples, and 3 classes, as represented in Appendix 1, Table D.[18] The convergence performance of the MGWO, GWO, PSO, GA, ACO, ES, and PBIL variants is plotted in Figure 31. The numerical results are shown in Table 22.
Figure 31.

Convergence graph of iris data set problem. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization.

Table 22.

Experimental results for the iris data set.

| Variant | MSE (ave.) | MSE (std) | Classification rate, % |
| MGWO | 0.6712 | 0.0024 | 91.334 |
| GWO | 0.0229 | 0.0032 | 91.333 |
| PSO | 0.228680 | 0.057235 | 37.33 |
| GA | 0.089912 | 0.123638 | 89.33 |
| ACO | 0.405979 | 0.053775 | 32.66 |
| ES | 0.314340 | 0.052142 | 46.66 |
| PBIL | 0.116067 | 0.036355 | 86.66 |

Abbreviations: ACO, ant colony optimization; ES, evolution strategy; GA, Genetic algorithm; GWO, gray wolf optimization; MGWO, mean gray wolf optimization; MSE, mean squared error; PBIL, population-based incremental learning; PSO, particle swarm optimization.

Bold values highlight the results of proposed variant.

We have observed that the variants give classification rates of MGWO (91.334%), GWO (91.333%), PSO (37.33%), GA (89.33%), ACO (32.66%), ES (46.66%), and PBIL (86.66%), respectively. The modified variant presents the best classification rate among the compared variants. The results confirm that the MGWO algorithm achieves high accuracy and local optima avoidance simultaneously.

Heart data set

The heart data set is one of the most widely used data sets in the literature. It has 22 attributes, 80 training samples, 187 test samples, and 2 classes, as reported in Appendix 1, Table D.[18] The results of training the variants are shown in Table 23, and the convergence performance of MGWO and GWO is plotted in Figure 32. The low average and standard deviation show the superior local optima avoidance of the variant.
Table 23.

Experimental results for the heart data set.

| Variant | MSE (ave.) | MSE (std) | Classification rate, % |
| MGWO | 0.0765 | 0.0376 | 75.14 |
| GWO | 0.122600 | 0.007700 | 75.00 |
| PSO | 0.188568 | 0.008939 | 68.75 |
| GA | 0.093047 | 0.022460 | 58.75 |
| ACO | 0.228430 | 0.004979 | 00.00 |
| ES | 0.192473 | 0.015174 | 71.25 |
| PBIL | 0.154096 | 0.018204 | 45.00 |

Abbreviations: ACO, ant colony optimization; ES, evolution strategy; GA, Genetic algorithm; GWO, gray wolf optimization; MGWO, mean gray wolf optimization; MSE, mean squared error; PBIL, population-based incremental learning; PSO, particle swarm optimization.

Bold values highlight the results of proposed variant.

Figure 32.

Convergence graph of heart data set problem. GWO indicates gray wolf optimization; MGWO, mean gray wolf optimization.

The results in Table 23 reveal that MGWO has the best performance on this data set in terms of mean squared error, classification rate, and convergence compared with the other meta-heuristics. Figure 32 shows that the MGWO variant gives better-quality convergence solutions and outperforms the GWO variant.

Conclusions

This article proposes a modified variant of GWO, namely, MGWO, inspired by the hunting behavior of gray wolves in nature. A statistical mean is used to balance exploitation and exploration of the search space over the course of generations. The results reveal that the newly modified variant benefits from high exploration in comparison with the PSO and GWO algorithms. Moreover, the performance of the modified variant has also been tested on 5 data set problems, ie, (1) XOR, (2) Balloon, (3) Breast Cancer, (4) Iris, and (5) Heart. For verification, the statistical results of the MGWO algorithm have been compared with those of 6 other meta-heuristic trainers: GWO, PSO, GA, ACO, ES, and PBIL. On the basis of the results obtained for these data sets, we have discussed and identified the reasons for the poor and strong performance of the variants. The experimental statistical results show that the modified variant gives highly competitive solutions in terms of improved local optima avoidance and a high level of accuracy in mean, standard deviation, classification, and convergence rate compared with the GWO, PSO, GA, ACO, ES, and PBIL algorithms.
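For reference, the baseline GWO position update that MGWO modifies can be sketched as follows. This is a plain GWO skeleton written from the standard published equations; the abstract states that MGWO rewrites the encircling (position-update) equations, so the marked line is where the paper's mean-based modification would apply (the exact modified equation is given in the paper and is not reproduced here):

```python
import numpy as np

def gwo(f, dim, n_wolves=30, iters=200, lb=-100.0, ub=100.0, seed=0):
    """Baseline gray wolf optimizer: wolves encircle the three best
    solutions (alpha, beta, delta) with a coefficient `a` linearly
    decreased from 2 to 0, matching the settings in Table E."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        scores = np.array([f(x) for x in X])
        order = np.argsort(scores)
        alpha, beta, delta = X[order[:3]].copy()
        if scores[order[0]] < best_f:
            best_x, best_f = X[order[0]].copy(), scores[order[0]]
        a = 2.0 * (1.0 - t / iters)              # linearly decreased from 2 to 0
        for i in range(n_wolves):
            estimates = []
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random(dim) - 1.0)
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - X[i])    # encircling equation: the line MGWO rewrites
                estimates.append(leader - A * D)
            X[i] = np.clip(np.mean(estimates, axis=0), lb, ub)
    return best_x, best_f
```

On a unimodal function such as the sphere, this skeleton converges quickly as `a` shrinks and the wolves collapse toward the three leaders; the modified encircling step is what MGWO changes to improve local optima avoidance.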
Table 2.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f4(x) GWO | f4(x) MGWO | f5(x) GWO | f5(x) MGWO | f6(x) GWO | f6(x) MGWO |
| 10 | −3.5971e−07 | −1.2152e−08 | 0.67847 | 0.82644 | −0.49897 | −0.49874 |
| 20 | 3.6824e−07 | 1.1563e−08 | 0.458 | 0.68259 | −0.50149 | −0.50111 |
| 30 | 3.6923e−07 | −1.1926e−08 | 0.20109 | 0.46306 | −0.49817 | −0.50052 |
| 40 | −3.5931e−07 | −1.2112e−08 | −0.00027583 | 0.20532 | −0.50053 | −0.50137 |
| 50 | −3.667e−07 | −1.2199e−08 | 0.007573 | −0.00072299 | −0.49961 | −0.49994 |
| 60 | −3.5238e−07 | 1.1977e−08 | −0.00050232 | −0.00016668 | −0.49997 | −0.50213 |
| 70 | −3.6787e−07 | −2.3984e−09 | 0.0071152 | 0.011135 | 0.0032181 | −0.49861 |
| 80 | −3.6908e−07 | −1.1464e−08 | −1.4391e−05 | −5.46e−05 | 0.0048451 | −0.5023 |
| 90 | −3.6939e−07 | 1.1874e−08 | 0.0071846 | 0.00019418 | −0.50109 | 6.4864e−05 |
| 100 | 3.6842e−07 | −1.2112e−08 | 0.0003128 | −8.8968e−05 | −0.0032703 | 0.00011448 |
| 120 | 3.6789e−07 | −1.2021e−08 | 0.0043621 | −0.0026679 | −0.00052754 | −0.50201 |
| 140 | 3.6872e−07 | 1.2213e−08 | −0.00013115 | 0.0010269 | −0.50025 | −0.50006 |
| 160 | −3.6916e−07 | 1.0023e−08 | −0.00047276 | 0.0011014 | −0.50169 | −0.49955 |
| 180 | −3.6928e−07 | −1.2188e−08 | −0.0018889 | −7.6101e−05 | −0.50035 | −0.49981 |
| 200 | 3.4979e−07 | 1.213e−08 | −0.00071533 | −2.7018e−06 | −0.49867 | −0.50089 |
| 220 | 3.6801e−07 | −1.1688e−08 | 0.0046027 | −0.00092194 | −0.49925 | −0.50265 |
| 240 | 2.1383e−07 | −1.2064e−08 | 0.0093635 | 0.0062233 | 0.0040758 | 4.9444e−06 |
| 260 | −3.6483e−07 | −1.2205e−08 | 0.01084 | 0.0030486 | −0.50119 | 0.00053093 |
| 280 | 3.5202e−07 | 1.219e−08 | 0.0039531 | −2.2467e−05 | −0.49935 | −0.4998 |
| 300 | 3.6114e−07 | 1.2123e−08 | −0.00016604 | −3.0738e−05 | −0.5001 | −0.50144 |
| 320 | 3.6799e−07 | −1.2124e−08 | 0.0098741 | 7.4604e−05 | −0.50029 | −0.49928 |
| 340 | −3.1058e−07 | 7.6152e−09 | 0.00021983 | 7.0245e−07 | −0.49867 | −0.50309 |
| 360 | −3.6749e−07 | 1.2182e−08 | 8.5684e−05 | −4.2109e−05 | −0.50054 | −0.49815 |
| 380 | 3.6227e−07 | 1.204e−08 | 0.00018917 | −1.2425e−05 | −0.49908 | −0.50015 |
| 400 | −3.5667e−07 | −1.2101e−08 | 0.012404 | 3.7472e−06 | −0.49769 | −0.50063 |
| 420 | 3.6893e−07 | 1.195e−08 | −0.0014015 | 9.5091e−05 | −0.50358 | −0.5018 |
| 440 | −3.3098e−07 | 1.2011e−08 | 0.0018281 | 0.0040767 | −0.50143 | −0.49828 |
| 460 | 2.708e−08 | −1.1703e−08 | 0.0010668 | 0.00013148 | −0.49762 | −0.5041 |
| 480 | 3.5648e−07 | −2.9391e−09 | 0.0021144 | −0.00020263 | −0.50137 | −0.4993 |
| 500 | −3.6509e−07 | −1.2165e−08 | 0.00077988 | −0.0023799 | −0.50122 | 0.0010644 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 3.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f7(x) GWO | f7(x) MGWO | f8(x) GWO | f8(x) MGWO | f9(x) GWO | f9(x) MGWO |
| 10 | 0.029423 | 0.077502 | 418.7634 | −298.9024 | −4.8178e−09 | 1.4727e−09 |
| 20 | 0.061542 | −0.0034678 | 7.890677 | 422.5337 | 5.2078e−09 | −2.0444e−09 |
| 30 | −0.0052652 | −0.010899 | 200.8017 | 26.25219 | 1.1065e−08 | 3.9829e−10 |
| 40 | −0.10774 | 0.025158 | −500 | −3.174072 | 1.4494e−08 | −7.4365e−10 |
| 50 | −0.027983 | 0.021111 | −125.3959 | 421.6416 | 5.4837e−09 | −3.9091e−09 |
| 60 | 0.082137 | −0.037085 | −7.221072 | 199.6524 | 1.0833e−08 | 3.9096e−09 |
| 70 | −0.0781 | 0.0064255 | 420.9101 | 420.7892 | −9.9361e−09 | −8.9296e−10 |
| 80 | 0.055795 | 0.061683 | 419.77 | −499.9521 | 9.267e−09 | −4.7717e−09 |
| 90 | −0.039566 | −0.024718 | −128.3696 | −299.3261 | −9.953e−09 | −4.8101e−09 |
| 100 | 0.00024457 | −0.020844 | 421.3294 | −109.8219 | −8.6132e−09 | 2.5863e−09 |
| 120 | 0.0012085 | −0.02979 | 200.3792 | 33.23177 | 6.9308e−09 | 1.2765e−09 |
| 140 | −0.014384 | −0.048311 | 63.42083 | 199.2015 | −9.9047e−09 | −2.2088e−09 |
| 160 | −0.039246 | −0.040789 | −300.1527 | 418.7271 | −1.0436e−08 | 4.3848e−09 |
| 180 | 0.018483 | −0.013968 | 13.31976 | −302.8847 | −1.2663e−08 | 5.5603e−09 |
| 200 | −0.00071862 | 0.049291 | −7.574163 | −500 | −8.3694e−09 | 2.6931e−09 |
| 220 | −0.0017537 | −0.055844 | −301.9192 | 420.3039 | 8.1988e−09 | 3.9673e−09 |
| 240 | 0.061328 | 0.0054619 | 421.2066 | 28.46334 | −7.1084e−09 | −4.0706e−09 |
| 260 | −0.010229 | 0.012456 | 5.960788 | 70.37576 | 6.7086e−09 | 1.0293e−08 |
| 280 | 0.098731 | 0.01818 | −127.7605 | −69.08079 | 8.1177e−09 | −5.977e−09 |
| 300 | 0.020226 | 0.0040015 | 60.80246 | 207.1159 | −1.3378e−08 | −2.3381e−09 |
| 320 | −0.013394 | 0.0064534 | −304.0941 | −302.4255 | −7.3408e−09 | 6.2421e−09 |
| 340 | 0.0081719 | 0.02642 | 201.5527 | 423.1574 | −6.6327e−09 | 5.2722e−10 |
| 360 | 0.0061936 | −0.002891 | −500 | 422.0409 | 6.5257e−09 | 1.1726e−09 |
| 380 | −0.0065873 | 0.0041979 | 205.8273 | −305.4574 | −1.4001e−08 | −4.8026e−09 |
| 400 | −0.050018 | −0.0030437 | −65.5463 | −16.43798 | −7.3359e−09 | 4.0501e−09 |
| 420 | −0.04869 | −0.0062353 | −128.2352 | −120.6897 | 6.4397e−09 | 5.0764e−10 |
| 440 | 0.0005496 | 0.0033052 | −303.8939 | 206.3322 | −8.5935e−09 | 7.1361e−09 |
| 460 | −0.021359 | 0.0080088 | 201.04 | 206.2603 | −9.9156e−09 | −3.2966e−09 |
| 480 | 0.0056483 | −0.0077328 | −301.9652 | 70.21548 | 8.724e−09 | 7.3021e−10 |
| 500 | 0.0011928 | −0.007108 | 204.9987 | −14.75609 | −7.6446e−09 | 8.1855e−09 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 4.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f10(x) GWO | f10(x) MGWO | f11(x) GWO | f11(x) MGWO | f12(x) GWO | f12(x) MGWO |
| 10 | −1.8167e−14 | 8.9423e−15 | −0.019039 | 5.3262e−09 | −0.9996 | −0.99932 |
| 20 | −1.2321e−14 | 1.4626e−14 | 4.4382 | 2.3876e−09 | 0.024447 | −0.0057794 |
| 30 | 1.4065e−14 | 6.8127e−15 | −5.434 | −1.5246e−08 | −1.0013 | −0.99781 |
| 40 | −2.0385e−14 | −1.3312e−14 | 0.025776 | −6.5272e−09 | 0.018335 | −0.93331 |
| 50 | −2.8643e−14 | 4.7171e−15 | 0.010692 | −1.6958e−08 | −1.0025 | −0.99452 |
| 60 | 2.6769e−14 | 6.2785e−15 | 0.081679 | −7.0927e−09 | −0.0012782 | −0.096163 |
| 70 | −1.9476e−14 | 9.5679e−15 | 0.038692 | 9.7113e−09 | −1.0022 | −0.99453 |
| 80 | 2.5826e−14 | −7.632e−15 | −0.0061243 | −2.5782e−08 | −0.73552 | −0.99316 |
| 90 | −1.9214e−14 | 8.0977e−15 | −0.0018057 | 2.5804e−08 | −1.0026 | 0.0017965 |
| 100 | −2.0926e−14 | 5.941e−15 | −0.070671 | −5.0612e−09 | −0.2654 | −1.0008 |
| 120 | −1.9016e−14 | 1.5692e−14 | 0.0023691 | −1.8281e−08 | −1.0066 | −1.0225 |
| 140 | −1.3046e−14 | −1.1063e−14 | −0.0066212 | 3.2331e−08 | −0.098088 | −1.0127 |
| 160 | −2.2857e−14 | 1.9852e−14 | 0.063109 | 1.4704e−08 | −1.0026 | −1.0014 |
| 180 | −2.0175e−14 | −1.0776e−14 | −0.10185 | −1.7661e−09 | −0.07441 | 7.4273e−05 |
| 200 | −2.6876e−14 | −9.1138e−15 | 0.002799 | −5.9329e−09 | −0.99391 | −1.0002 |
| 220 | −2.3615e−14 | 8.6438e−15 | −0.0075882 | −3.5188e−08 | −0.94804 | 0.00012775 |
| 240 | −2.0723e−14 | −1.1247e−14 | 0.042538 | −1.427e−09 | −0.9851 | −0.99864 |
| 260 | −2.8553e−14 | −4.0693e−15 | 0.0059001 | −3.9786e−08 | −1.0715 | −0.9958 |
| 280 | −2.5586e−14 | 1.2396e−14 | −0.097501 | 2.9202e−09 | −1.0032 | −0.0021754 |
| 300 | −1.1253e−14 | 1.6556e−14 | −0.011378 | 4.7113e−08 | 0.022496 | −0.99742 |
| 320 | 1.2973e−14 | 1.1214e−14 | 0.026999 | 4.388e−08 | −1.001 | −0.9599 |
| 340 | −7.5563e−15 | −1.0387e−14 | −0.0060048 | −4.2643e−08 | −0.94636 | −1.0336 |
| 360 | −2.9824e−14 | 9.6746e−15 | −0.079693 | −1.7512e−08 | −0.99739 | −1.0102 |
| 380 | −1.6148e−14 | −1.1281e−14 | −0.099893 | 1.8579e−08 | −0.99301 | −1.009 |
| 400 | −1.5006e−14 | −3.6308e−15 | −0.054028 | 4.2624e−08 | −1.0046 | −0.92644 |
| 420 | −2.3638e−14 | −5.7501e−15 | 0.051612 | −3.9184e−08 | −0.017619 | −0.99712 |
| 440 | −1.9547e−14 | 6.1612e−15 | 0.010453 | −2.9982e−08 | −1.0005 | −0.001263 |
| 460 | 1.8011e−14 | 2.5528e−15 | −0.08148 | 3.6022e−08 | −0.9995 | −0.99998 |
| 480 | −2.0054e−14 | −7.0224e−15 | 0.11215 | −1.1691e−08 | 0.0091993 | −0.97862 |
| 500 | −1.7128e−14 | −5.6568e−15 | −0.057493 | 3.9177e−08 | −0.995 | −0.98908 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 5.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f13(x) GWO | f13(x) MGWO |
| 10 | 1.0017 | 0.33972 |
| 20 | 0.037012 | −0.042207 |
| 30 | 0.99873 | 0.67138 |
| 40 | 0.00048659 | −0.017276 |
| 50 | 0.013208 | 0.99471 |
| 60 | −0.011551 | 1.0415 |
| 70 | 0.9999 | 0.0019544 |
| 80 | 0.85541 | 0.998 |
| 90 | 0.013833 | −0.00011723 |
| 100 | 0.017527 | 0.99887 |
| 120 | 0.99978 | 0.80284 |
| 140 | 1.0191 | 1.0166 |
| 160 | 0.87957 | 0.0014587 |
| 180 | 0.01684 | 0.99995 |
| 200 | 0.66956 | 0.8789 |
| 220 | 1.0006 | 1.0346 |
| 240 | −0.11366 | −0.029245 |
| 260 | 0.0082416 | 0.66927 |
| 280 | 0.019867 | −0.038115 |
| 300 | 0.035205 | 0.0001905 |
| 320 | 0.66962 | 0.998 |
| 340 | 0.99764 | 1.0259 |
| 360 | 1.0108 | 0.76088 |
| 380 | 1.0675 | −0.0019594 |
| 400 | 0.98336 | 0.99662 |
| 420 | 0.051414 | 0.86818 |
| 440 | 1.0004 | 0.022489 |
| 460 | 1.1023 | 1 |
| 480 | 0.95524 | 0.016364 |
| 500 | 0.93389 | 0.99916 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 6.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f14(x) GWO | f14(x) MGWO |
| 200 | −0.00565596 | −31.9871 |
| 500 | −31.9651 | −31.9763 |

| Iterations | f15(x) GWO | f15(x) MGWO |
| 100 | −0.389 | 0.19072 |
| 200 | −5 | 0.27247 |
| 400 | 1.2998 | 0.15591 |
| 500 | 5 | 0.17047 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 7.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f16(x) GWO | f16(x) MGWO | f17(x) GWO | f17(x) MGWO | f18(x) GWO | f18(x) MGWO |
| 200 | −0.089882 | 0.089746 | 3.1415 | 3.1413 | 0.00016791 | 2.0193e−05 |
| 500 | 0.7126 | −0.71261 | 2.2746 | 2.2757 | −0.99995 | −0.99997 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.

Table 8.

Best solution obtained by GWO and MGWO on 500 generations.

| Iterations | f19(x) GWO | f19(x) MGWO |
| 100 | 0.046099 | 0.058244 |
| 300 | 0.55508 | 0.55606 |
| 500 | 0.85294 | 0.85251 |

| Iterations | f20(x) GWO | f20(x) MGWO |
| 100 | 0.20152 | 0.20171 |
| 150 | 0.14643 | 0.1467 |
| 200 | 0.47763 | 0.47735 |
| 300 | 0.27539 | 0.27526 |
| 400 | 0.31165 | 0.31187 |
| 500 | 0.65724 | 0.65705 |

Abbreviations: GWO, gray wolf optimization; MGWO, mean gray wolf optimization.
