
An Adaptive Shrinking Grid Search Chaotic Wolf Optimization Algorithm Using Standard Deviation Updating Amount.

Dongxing Wang1,2, Xiaojuan Ban1, Linhong Ji3, Xinyu Guan3, Kang Liu2, Xu Qian2.   

Abstract

To improve the optimization quality, stability, and convergence speed of the wolf pack algorithm, an adaptive shrinking grid search chaotic wolf optimization algorithm using standard deviation updating amount (ASGS-CWOA) is proposed. First, a strategy of adaptive shrinking grid search (ASGS) is designed for the wolf pack algorithm to enhance its searching capability: all wolves in the pack are allowed to compete to be the leader wolf, which improves the probability of finding the global optimum. Furthermore, the opposite-middle raid method (OMR) is used in the wolf pack algorithm to accelerate its convergence rate. Finally, "Standard Deviation Updating Amount" (SDUA) is adopted in the population regeneration process, aimed at enhancing the biodiversity of the population. The experimental results indicate that, compared with the traditional genetic algorithm (GA), particle swarm optimization (PSO), leading wolf pack algorithm (LWPS), and chaos wolf optimization algorithm (CWOA), ASGS-CWOA has a faster convergence speed, better global search accuracy, and high robustness under the same conditions.
Copyright © 2020 Dongxing Wang et al.


Year:  2020        PMID: 32508906      PMCID: PMC7251436          DOI: 10.1155/2020/7986982

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

1.1. Literature Review

Metaheuristic search technology based on swarm intelligence has been increasing in popularity due to its ability to solve a variety of complex scientific and engineering problems [1]. The technology models the social behavior of certain living creatures, in which each individual is simple and has limited cognitive capability, but the swarm can act in a coordinated way without a coordinator or an external commander and yield intelligent behavior, obtaining global optima as a whole [2]. In [3], Yang et al. adopted bacterial foraging optimization to optimize the structural learning of Bayesian networks. In [4] and [5], swarm intelligence algorithms are used for functional module detection in protein-protein interaction networks to help biologists find novel biological insights. In [6], Ji et al. performed a systematic comparison of three typical methods based on ant colony optimization, the artificial bee colony algorithm, and bacterial foraging optimization regarding how to accurately and robustly learn a network structure for a complex system. In [7], the authors utilize the artificial immune algorithm to infer the effective connectivity between different brain regions. In [8], researchers used an ant colony optimization algorithm for learning a brain effective connectivity network from fMRI data. The particle swarm optimization (PSO) [9] algorithm was proposed through the observation and study of the flocking behavior of bird groups. The ant colony optimization (ACO) [10] algorithm was proposed by simulating the social division of labor and cooperative foraging of ants. The fish swarm algorithm (FSA) [11] was proposed to simulate foraging and clustering behavior in fish schools. In [12], the bacterial foraging optimization (BFO) algorithm was proposed by mimicking the foraging behavior of Escherichia coli in the human intestine.
The shuffled frog leaping algorithm (SFLA) [13] was put forward by simulating the information-sharing and exchange mechanism of frog groups during foraging. In [14], Karaboga and Basturk proposed the artificial bee colony (ABC) algorithm based on the foraging and breeding behavior of honey bee colonies. But no algorithm is universal; these swarm intelligence optimization algorithms have their own shortcomings, such as slow convergence, a tendency to fall into local optima, or low accuracy. In [15], the authors proposed an improved ant colony optimization (ICMPACO) algorithm based on a multipopulation strategy, a coevolution mechanism, a pheromone updating strategy, and a pheromone diffusion mechanism in order to balance convergence speed and solution diversity and to improve optimization performance on large-scale optimization problems. To overcome the weak local search ability of genetic algorithms (GA) [16] and the slow global convergence speed of the ant colony optimization (ACO) algorithm on complex optimization problems, the authors of [17] introduced the chaotic optimization method, a multipopulation collaborative strategy, and adaptive control parameters into GA and ACO to propose a genetic and ant colony adaptive collaborative optimization (MGACACO) algorithm for solving complex optimization problems. On the one hand, the ACO algorithm has the characteristics of positive feedback, essential parallelism, and global convergence, but it suffers from premature convergence and slow convergence speed; on the other hand, the coevolutionary algorithm (CEA) emphasizes the interaction among different subpopulations, but it is overly formal and lacks a strict, unified definition. Therefore, in [18], Huimin et al. proposed a new adaptive coevolutionary ant colony optimization (SCEACO) algorithm based on complementary advantages and a hybrid mechanism.
In 2007, Yang and his coauthors proposed the wolf swarm algorithm [19], a new swarm intelligence algorithm. The algorithm simulates the wolf predation process, mainly through three kinds of behavior (walk, raid, and siege) and a survival-of-the-fittest population update mechanism, to solve complex nonlinear optimization problems. Since the wolf pack algorithm was proposed, it has been widely used in various fields and has been continually developed and improved, as follows. In [20], the authors proposed a novel and efficient oppositional wolf pack algorithm to estimate the parameters of the Lorenz chaotic system. In [21], a modified wolf pack search algorithm is applied to compute quasioptimal trajectories for rotor-wing UAVs in complex three-dimensional (3D) spaces. In [22], a wolf algorithm was used to find the roots of polynomial equations accurately and quickly. In [23], a new wolf pack algorithm was designed for better performance, including a new update rule for the scout wolf, a new concept of siege radius, and a new kind of attack step. In [24], Qiang and Zhou presented a wolf colony search algorithm based on the leader's strategy. In [25], to explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size was proposed. In [26], Mirjalili et al. proposed the "grey wolf optimizer" based on the cooperative hunting behaviour of wolves, which can be regarded as a variant of [19]; in [27-29], the grey wolf optimizer is adopted to solve nonsmooth optimal power flow problems, optimal planning of renewable distributed generation in distribution systems, and optimal reactive power dispatch considering SSSC (static synchronous series compensator).

1.2. Motivation and Incitement

From the review of the above literature, current wolf pack optimization algorithms follow three principles: a certain number of scouting wolves compete to lead through greedy search with a strictly limited number of attempts (each wolf has only four opportunities in some literature); in the summon-and-raid process, fierce wolves approach the lead wolf through a strictly limited number of rushes; and in the siege process, fierce wolves greedily search for prey through a strictly limited number of rushes. Under this operating mechanism, the algorithm imitates the actual hunting behavior of wolves too closely, especially in grey wolf optimization, which divides the wolves in the algorithm into an even more detailed hierarchy. The advantage of doing so is that it effectively guarantees the final convergence of the algorithm, because it completely mimics the biological foraging process; however, the goal of a wolf-based intelligent optimization algorithm is to solve optimization problems efficiently, and imitating wolves' foraging behavior is only a means. Therefore, such algorithms should abstract further from wolves' foraging behavior in order to improve their optimization ability and efficiency. For these reasons, each of the wolf pack algorithm variants mentioned above has its own limitations, including slow convergence, weak optimization accuracy, and a narrow scope of application.

1.3. Contribution and Paper Organization

To further improve the wolf pack optimization algorithm's performance, this paper proposes an adaptive shrinking grid search chaotic wolf optimization algorithm using standard deviation updating amount (ASGS-CWOA). The paper is organized as follows: Introduction; Principle of LWPS and CWOA (both classic variants of the wolf pack optimization algorithm); Improvement; Steps of ASGS-CWOA; and Numerical Experiments with corresponding analyses.

2. Materials and Methods

2.1. Principle of LWPS and CWOA

Many variants of the original wolf pack algorithm were mentioned above; in this article, we focus on LWPS and CWOA.

2.1.1. LWPS

The leader strategy was introduced into the traditional wolf pack algorithm, as detailed in [24]. Unlike the simple simulation of wolves' hunting in [26], LWPS abstracts wolves' hunting activities into five steps. According to the thought of LWPS, the specific steps and related formulas are as follows. (1) Initialization. This step disperses the wolves over the search (solution) space. Let N denote the number of wolves in the population and D the dimension of the search space; the position of the i-th wolf is then given by equation (1), where x is the location of the i-th wolf in the d-th dimension. The initial location of each wolf can be produced by equation (2), x = xmin + rand(0, 1) × (xmax − xmin), where rand(0, 1) is a random number distributed uniformly in the interval [0, 1] and xmax and xmin are the upper and lower limits of the solution space, respectively. (2) Competition for the Leader Wolf. This step finds the leader wolf, the one with the best fitness. Firstly, Q wolves are chosen as candidates, namely the top Q of the pack according to fitness. Secondly, each of the Q wolves searches around itself in D directions, obtaining a new location by equation (3), where step_a is the search step size; if the new location's fitness is better than the current one, the wolf moves there, otherwise it stays, and searching continues until the number of searches exceeds the maximum Tmax or the location can no longer be improved. Finally, by comparing the fitness of the Q wolves, the best one is elected as the leader wolf. (3) Summon and Raid. Each of the other wolves raids toward the location of the leader wolf as soon as it receives the leader's summon, and it continues to search for prey along the way.
During the raid, a new location is obtained by equation (4), where x is the location of the leader wolf in the d-th dimension and step_b is the raid step size; if its fitness is better than that of the current location, the wolf moves to the new one, otherwise it stays where it is. (4) Siege the Prey. After the above process, the wolves gather near the leader wolf and besiege the prey until it is caught. The new location of every wolf except the leader is obtained by equation (5), where step_c is the siege step size, r is a random number generated by rand(−1, 1), distributed uniformly in the interval [−1, 1], and R0 is a preset siege threshold. As the current solution gets closer and closer to the theoretical optimum, the siege step size of the wolf pack also decreases with the number of iterations, so that the wolves have a greater probability of finding better values. Equation (6), taken from [31], gives the siege step size, where step_cmin and step_cmax are the lower and upper limits of the siege step size, x are the upper and lower limits of the search space in the d-th dimension, and t is the current iteration number while T is its upper limit. No wolf is allowed to leave the search space, so equation (7) handles any boundary violation. (5) Distribution of Food and Regeneration. Following the renewal mechanism of "survival of the fittest," the pack is renewed: the worst m wolves die and are deleted, and m new wolves are generated by equation (2).
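The five steps above can be sketched as a minimal loop. The following Python sketch is illustrative only: the greedy acceptance rule, the raid direction, and the shrinking siege radius are simplified assumptions standing in for equations (1)-(7) of [24], not the exact formulas.

```python
import numpy as np

def lwps_sketch(fitness, N=50, D=2, xmin=-10.0, xmax=10.0,
                T=200, Q=5, step_a=1.5, step_b=0.9, m=5, rng=None):
    """Minimal LWPS-style loop (minimization); a sketch, not the paper's exact equations."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # (1) Initialization: x = xmin + rand(0,1) * (xmax - xmin)
    X = xmin + rng.random((N, D)) * (xmax - xmin)
    f = np.array([fitness(x) for x in X])
    for t in range(T):
        # (2) Competition: top-Q wolves greedily perturb themselves per dimension
        for i in np.argsort(f)[:Q]:
            for d in range(D):
                trial = X[i].copy()
                trial[d] += step_a * rng.uniform(-1.0, 1.0)
                trial = np.clip(trial, xmin, xmax)
                ft = fitness(trial)
                if ft < f[i]:
                    X[i], f[i] = trial, ft
        lead = int(np.argmin(f))
        # (3) Summon and raid: others step toward the leader, keeping improvements
        for i in range(N):
            if i == lead:
                continue
            trial = np.clip(X[i] + step_b * np.sign(X[lead] - X[i]), xmin, xmax)
            ft = fitness(trial)
            if ft < f[i]:
                X[i], f[i] = trial, ft
        # (4) Siege: random moves near the leader with a shrinking radius
        step_c = 0.1 * (xmax - xmin) * (1.0 - t / T)
        for i in range(N):
            if i == lead:
                continue
            trial = np.clip(X[lead] + step_c * rng.uniform(-1.0, 1.0, D), xmin, xmax)
            ft = fitness(trial)
            if ft < f[i]:
                X[i], f[i] = trial, ft
        # (5) Regeneration: the worst m wolves die and are re-initialized
        for i in np.argsort(f)[-m:]:
            X[i] = xmin + rng.random(D) * (xmax - xmin)
            f[i] = fitness(X[i])
    lead = int(np.argmin(f))
    return X[lead].copy(), float(f[lead])
```

On a simple 2D sphere function this sketch converges close to the optimum, which is enough to illustrate the five-step structure.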

2.1.2. CWOA

CWOA develops from LWPS by introducing chaos optimization and adaptive parameters into the traditional wolf pack algorithm. The former uses the logistic map to generate chaotic variables that are projected onto the solution space for searching, while the latter introduces an adaptive step size to enhance performance; both work well. The logistic map, taken from [32], is given by equation (8), x(k+1) = μ · x(k) · (1 − x(k)), where μ is the control variable; when μ = 4, the system is in a chaotic state. The strategy of adaptive variable step size covers both the search step size and the siege step size. In the early stage, the wolves should search for prey with a large step size so as to cover the whole solution space as much as possible, while in the later stage, as the wolves gather, they should take a small step size so that they can search finely in a small target area. As a result, the possibility of finding the global optimal solution is increased. Equation (9) gives the migration step size, and equation (10) the siege step size, where step_c0 is the starting siege step size.
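The logistic map of equation (8) is easy to reproduce. The sketch below (function names are ours, for illustration) generates chaotic variables in (0, 1) and projects one onto a solution interval, as CWOA does when mapping chaos variables into the search space.

```python
def logistic_map(x0=0.37, mu=4.0, n=5):
    """Generate n chaotic variables via x(k+1) = mu * x(k) * (1 - x(k)).
    At mu = 4 the map is chaotic and stays inside (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def project(c, lo, hi):
    """Project a chaotic variable c in (0, 1) onto the interval [lo, hi]."""
    return lo + c * (hi - lo)
```

For example, `project(0.5, -10, 10)` maps the midpoint of (0, 1) to the midpoint of the search interval.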

2.2. Improvement

2.2.1. Strategy of Adaptive Shrinking Grid Search (ASGS)

In the solution space with D dimensions, a searching wolf needs to migrate along different directions in order to find prey. During any iteration of the migration, according to the original LWPS and CWOA, only a single dynamic point around the current location is generated by equation (3). However, a single dynamic point is isolated and not sufficient to search the current neighborhood of a given wolf; Figures 1(a) and 1(b) show the two-dimensional and three-dimensional cases, respectively. In essence, the algorithm needs to consider the whole local neighborhood of the current wolf in order to find the locally best location, so ASGS is used to generate an adaptive grid centered on the current wolf, extending along 2 × D directions and containing (2 × K + 1) nodes along each axis, where K is the number of nodes taken along any single direction. Figures 2(a) and 2(b) show the ASGS migration in two-dimensional and three-dimensional space, respectively; for brevity, K is set to 2 here, as detailed in the following equation:
Figure 1

(a) The range of locations of searching or competing wolves in CWOA when the dimension is 2; (b) the range of locations of searching or competing wolves in CWOA when the dimension is 3, where step_a = 10 or step_c = 10 (the red point marks the current location of a given wolf).

Figure 2

The searching situation of wolves in ASGS-CWOA when the dimension is (a) 2 and (b) 3, where step_a = 10 or step_c = 10 and K = 2 (the red point marks the current location of a given wolf, and the black ones mark the searching locations of that wolf).

So a node in the grid can be defined as in equation (12). During any migration, the node with the best fitness in the grid is selected as the new location of the searching wolf, and after any migration, the leader wolf of the population is updated according to the new fitness values. It should be particularly noted that the searching grid shrinks as step_anew becomes smaller. Obviously, compared with the single isolated point of the traditional methods, the ASGS grid of (2 × K + 1) nodes per axis improves the searching accuracy and the possibility of finding the optimal value. For the same reason, during the siege, the same strategy is used to find the leader wolf within the local neighborhood; the difference is that the siege step size is smaller than the migration step size. After any migration, the leader wolf of the population is updated according to the new fitness, and the current best wolf temporarily becomes the leader wolf.
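Under the reading above (K nodes on each side of the current position along every axis, plus the center, giving 2 × D × K + 1 candidates in total), the grid and the greedy move can be sketched as follows. The node formula here is our assumption standing in for equation (12).

```python
import numpy as np

def asgs_nodes(center, step, K=2):
    """Candidate nodes of the adaptive grid: the current position plus K nodes
    at multiples of `step` along each of the 2*D axis directions (ASGS sketch)."""
    center = np.asarray(center, dtype=float)
    D = center.size
    nodes = [center.copy()]
    for d in range(D):
        for k in range(1, K + 1):
            for sign in (1.0, -1.0):
                node = center.copy()
                node[d] += sign * k * step
                nodes.append(node)
    return nodes

def asgs_move(center, step, fitness, K=2):
    """Move to the best-fitness node of the grid (minimization)."""
    return min(asgs_nodes(center, step, K), key=fitness)
```

As the step size shrinks over iterations, the same routine automatically searches an ever finer grid, which is the "shrinking" part of ASGS.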

2.2.2. Strategy of Opposite-Middle Raid (OMR)

According to the idea of the traditional wolf pack algorithms, the raid pace is unfortunately too small or unreasonable, and the wolves cannot rapidly gather around the leader wolf when they receive its summon signal, as shown in Figure 3(a). OMR is therefore put forward to solve this problem; its main idea is that the location opposite the current location relative to the leader wolf is calculated by the following equation:
Figure 3

(a) The range of locations of wolves in the raid when the dimension is 3 according to the original thought of the wolf pack algorithm; (b) the range of locations of wolves in the raid when the dimension is 3 according to OMR, where step_a = 2. The red point marks the current position of a given wolf, the pink one indicates the position of the lead wolf, and the blue one marks the middle point between the current wolf and the lead wolf.

If the opposite location has better fitness than the current one, the current wolf moves to it. Otherwise, the following equation is used, where x is the middle location in the d-th dimension between the i-th wolf and the leader wolf, x is the location of wolf i in the d-th dimension, and x is the location of the leader wolf in the d-th dimension; "Bestfitness" returns the wolf with the best fitness among those given. From equation (14) and Figure 3(b), it can be seen that there are two candidate points, the current point and the middle point, and at the end of each raid, the point with the better fitness is chosen as the new location of the i-th wolf. Thereby, the wolves not only gather around the leader wolf as soon as possible but also keep trying to find new prey along the way.
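Assuming the "opposite" location is the reflection of the current wolf through the leader, and the "middle" location is their midpoint (our reading of equations (13) and (14)), one raid step can be sketched as:

```python
import numpy as np

def omr_step(x, leader, fitness):
    """Opposite-Middle Raid sketch (minimization): first try the reflection of
    the wolf through the leader; if that is not strictly better, take the better
    of the current point and the midpoint between wolf and leader."""
    x, leader = np.asarray(x, dtype=float), np.asarray(leader, dtype=float)
    opposite = 2.0 * leader - x            # reflection of x through the leader
    if fitness(opposite) < fitness(x):
        return opposite
    middle = (x + leader) / 2.0            # midpoint between wolf and leader
    return min((x, middle), key=lambda p: fitness(p))
```

Because the midpoint halves the distance to the leader whenever it improves fitness, wolves close in on the leader in few raids while still sampling new territory via the opposite point.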

2.2.3. Standard Deviation Updating Amount (SDUA)

According to the basic idea of the leader wolf pack algorithm, during the iterations some wolves with poorer fitness are continuously eliminated, while the same number of wolves are added to the population. This eliminates bad genes while preserving the population diversity of the wolf pack, so the algorithm does not easily fall into a local optimal solution and the convergence rate can be improved. However, the number of wolves eliminated and added is a fixed number (5 in the LWPS and CWOA mentioned before), which is rigid and unreasonable. In fact, the amount should be dynamic, reflecting the changing state of the wolf pack in each iteration. Standard deviation (SD) is a statistical concept that has been widely used in many fields. In [33], it was used in industrial equipment to help process the signal of bubble detection in liquid sodium; in [34], it was combined with delay-multiply to form a new weighting factor introduced to enhance the contrast of reconstructed images in the medical field; in [35], based on eight years of dental trauma research data, it was utilized to help analyze the potential of laser Doppler flowmetry; in [36], because beamforming performance has a large impact on image quality in ultrasound imaging, a new adaptive weighting factor called the signal mean-to-standard-deviation factor (SMSF) was proposed to improve image resolution and contrast, and on this basis researchers put forward an adaptive beamforming method for ultrasound imaging; and in [37], standard deviation was adopted to help analyze when an individual should start social security.
In this paper, we adopt the concept of "Standard Deviation Updating Amount" to eliminate wolves with poor fitness and dynamically reflect the state of the wolf pack: the population size and the standard deviation of its fitness determine how many wolves are eliminated and regenerated. The standard deviation is obtained by equation (15), where N is the size of the wolf pack, x is the fitness of the i-th wolf, and μ is the mean fitness. SDUA is then obtained by equation (16): SDUA is zero when the iteration begins; next, the difference between the mean value and the SD of the pack's fitness is computed, and for each wolf whose fitness is less than this difference, SDUA increases by 1; otherwise, nothing is done. Once the value of SDUA is obtained, that many wolves are eliminated and regenerated. The effect of using a fixed value is displayed in Figure 4(a), and Figure 4(b) shows the effect of using SDUA. Clearly, the amount in the latter fluctuates, because the wolf pack has a different fitness distribution in each iteration, and it reflects the dynamic situation of the iterations better than the former. Accordingly, not only are bad genes eliminated from the wolf pack, but the population diversity is also better preserved, so that it is difficult to fall into a local optimal solution and the convergence rate is improved as far as possible.
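The SDUA computation described above can be sketched directly. Note that the comparison direction ("fitness less than mean minus SD") follows the text verbatim; for a minimization problem, where lower fitness is better, one may need to flip the inequality.

```python
import math

def sdua(fitnesses):
    """Standard Deviation Updating Amount (per the text): count the wolves
    whose fitness falls below (mean - standard deviation). That many wolves
    are eliminated and regenerated in the current iteration."""
    n = len(fitnesses)
    mu = sum(fitnesses) / n
    sd = math.sqrt(sum((f - mu) ** 2 for f in fitnesses) / n)  # equation (15)
    threshold = mu - sd
    return sum(1 for f in fitnesses if f < threshold)
```

For a pack whose fitness values are all equal, the SD is zero and SDUA is zero, so no wolves are replaced; the more spread out the fitness values, the more wolves fall past the threshold and get regenerated.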
Figure 4

(a) The number of wolves starved to death and regenerated as the iterations go on, using a fixed number 5. (b) The number of wolves starved to death and regenerated as the iterations go on, following the new method named SDUA.

2.3. Steps of ASGS-CWOA

Based on the thoughts of adaptive shrinking grid search and standard deviation updating amount described above, ASGS-CWOA is proposed. The implementation steps are as follows, and its flow chart is shown in Figure 5.
Figure 5

Flow chart for the new proposed algorithm.

2.3.1. Initialization of Population

The following parameters are initially assigned: the wolf population size N; the dimension D of the searching space (for brevity, all dimensions share the range [rangemin, rangemax]); the upper limit of iterations T; the migration step size step_a; the summon-and-raid step size step_b; and the siege step size step_c. The population is then generated initially by the following equation [25], where chaos(0, 1) returns a chaotic variable distributed uniformly in [0, 1].
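A sketch of this chaotic initialization, using the logistic map with μ = 4 as the chaos(0, 1) generator (the seed value 0.37 is an arbitrary illustrative choice, not from the paper):

```python
import numpy as np

def chaotic_init(N, D, range_min, range_max, x0=0.37, mu=4.0):
    """Initialize N wolves in D dimensions by projecting logistic-map chaos
    variables from (0, 1) onto [range_min, range_max] (initialization sketch)."""
    X = np.empty((N, D))
    c = x0
    for i in range(N):
        for d in range(D):
            c = mu * c * (1.0 - c)                       # chaotic variable in (0, 1)
            X[i, d] = range_min + c * (range_max - range_min)
    return X
```

Compared with purely random seeding, the chaotic sequence is deterministic yet non-repeating, which tends to spread the initial pack over the whole range.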

2.3.2. Migration

Here, an adaptive grid centered on the current wolf is generated by equation (12), containing (2 × K + 1) nodes along each axis. After migration, the wolf with the best fitness is taken as the leader wolf.

2.3.3. Summon and Raid

After migration, the other wolves begin to run toward the location of the leader wolf, and during the raid, each wolf keeps searching for prey following equations (13) and (14).

2.3.4. Siege

After summon and raid, all other wolves gather around the leader wolf in order to besiege the prey, following the corresponding equations. After each siege, the newest leader wolf, with the temporarily best fitness, is obtained.

2.3.5. Regeneration

After the siege, the wolves with the poorest fitness are eliminated, their number given by SDUA, and the same number of wolves are regenerated by the chaotic initialization equation.

2.3.6. Loop

Here, if t (the current iteration number) is greater than T (the upper limit of iterations), the loop exits; otherwise, the algorithm returns to the migration step (Section 2.3.2) and repeats until t exceeds T. When the loop is over, the leader wolf is taken as the best solution found, which may be the global optimum.
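Putting Sections 2.3.1-2.3.6 together, the whole loop can be sketched compactly. Everything below is an illustrative reconstruction: the step-size schedule, the siege-step ratio, and the minimization reading of SDUA (poor fitness = high value, so we count wolves above mean + SD) are our assumptions, not the paper's exact equations.

```python
import numpy as np

def asgs_cwoa_sketch(fitness, N=30, D=2, lo=-10.0, hi=10.0, T=150, K=2, rng=None):
    """Compact ASGS-CWOA sketch (minimization): chaotic init, ASGS migration,
    opposite-middle raid, smaller-grid siege, and SDUA-sized regeneration."""
    rng = rng if rng is not None else np.random.default_rng(1)
    c, X = 0.37, np.empty((N, D))          # 2.3.1 chaotic initialization (mu = 4)
    for i in range(N):
        for d in range(D):
            c = 4.0 * c * (1.0 - c)
            X[i, d] = lo + c * (hi - lo)
    f = np.array([fitness(x) for x in X])

    def grid_best(center, step):
        # ASGS: the center plus K nodes along each of the 2*D axis directions
        cands = [center.copy()]
        for d in range(D):
            for k in range(1, K + 1):
                for s in (1.0, -1.0):
                    node = center.copy()
                    node[d] += s * k * step
                    cands.append(np.clip(node, lo, hi))
        return min(cands, key=fitness)

    for t in range(T):
        step = 0.1 * (hi - lo) * (1.0 - t / T) + 1e-9   # assumed shrinking schedule
        for i in range(N):                               # 2.3.2 migration
            X[i] = grid_best(X[i], step)
            f[i] = fitness(X[i])
        lead = int(np.argmin(f))
        for i in range(N):                               # 2.3.3 summon and raid (OMR)
            if i == lead:
                continue
            opp = np.clip(2.0 * X[lead] - X[i], lo, hi)
            mid = (X[i] + X[lead]) / 2.0
            X[i] = min((X[i].copy(), opp, mid), key=fitness)
            f[i] = fitness(X[i])
        lead = int(np.argmin(f))
        for i in range(N):                               # 2.3.4 siege: finer grid
            if i == lead:
                continue
            X[i] = grid_best(X[i], step * 0.1)
            f[i] = fitness(X[i])
        mu, sd = float(f.mean()), float(f.std())         # 2.3.5 regeneration (SDUA)
        n_new = int(np.sum(f > mu + sd))                 # poorest wolves, minimization reading
        for i in np.argsort(f)[N - n_new:]:
            X[i] = lo + rng.random(D) * (hi - lo)
            f[i] = fitness(X[i])
    lead = int(np.argmin(f))                             # 2.3.6 loop ends; return leader
    return X[lead].copy(), float(f[lead])
```

On a 2D sphere function the sketch reaches the neighborhood of the optimum within the default budget, illustrating how the three improvements cooperate inside one loop.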

2.4. Numerical Experiments

In this paper, four state-of-the-art optimization algorithms are used to validate the performance of the newly proposed algorithm, as detailed in Table 1.
Table 1

Algorithm configuration.

Order | Name | Configuration
1 | Genetic algorithm [16] | Crossover probability 0.7; mutation probability 0.01; generation gap 0.95
2 | Particle swarm optimization | Individual acceleration 2; initial inertia weight 0.9; inertia weight at convergence 0.4; individual speed limited to 20% of the variable range
3 | LWPS | Migration step size step_a: 1.5; raid step size step_b: 0.9; siege threshold r0: 0.2; upper limit of siege step size step_cmax = 1e6; lower limit of siege step size step_cmin = 1e−2; updating number of wolves m: 5
4 | CWOA | Number of campaign wolves q: 5; searching directions h: 4; upper limit of search Hmax: 15; migration step size step_a0: 1.5; raid step size step_b: 0.9; siege threshold r0: 0.2; siege step size step_c0: 1.6; updating number of wolves m: 5
5 | ASGS-CWOA | Migration step size step_a0: 1.5; upper limit of siege step size step_cmax = 1e6; lower limit of siege step size step_cmin = 1e−40; upper limit of iterations T: 600; wolf population size N: 50

2.4.1. Experiments on Twelve Classical Benchmark Functions

Table 2 shows the benchmark functions for testing, and Table 3 shows the numerical experimental data of the five algorithms on the 12 benchmark functions mentioned above. The numerical experiments were run on a computer equipped with the Ubuntu 16.04.4 operating system, an Intel(R) Core(TM) i7-5930K processor, 64 GB of memory, and Matlab 2017a. For the genetic algorithm, the GA toolbox in Matlab 2017a was utilized; the PSO experiments were implemented with the "PSOt" toolbox for Matlab; LWPS follows the thought in [24]; CWOA is implemented based on the idea in [25]; and the new algorithm ASGS-CWOA is implemented in Matlab 2017a, an integrated development environment with the M programming language. To demonstrate the performance of the proposed algorithm, optimization was run 100 times for each benchmark function and each of the algorithms mentioned above.
Table 2

Test functions [25].

Order | Function | Expression | Dimension | Range | Optimum
1 | Matyas | F1 = 0.26(x1^2 + x2^2) − 0.48·x1·x2 | 2 | [−10, 10] | min f = 0
2 | Easom | F2 = −cos(x1)·cos(x2)·exp[−(x1 − π)^2 − (x2 − π)^2] | 2 | [−100, 100] | min f = −1
3 | Sumsquares | F3 = Σ_{i=1}^{D} i·x_i^2 | 10 | [−1.5, 1.5] | min f = 0
4 | Sphere | F4 = Σ_{i=1}^{D} x_i^2 | 30 | [−1.5, 1.5] | min f = 0
5 | Eggcrate | F5 = x1^2 + x2^2 + 25(sin^2 x1 + sin^2 x2) | 2 | [−π, π] | min f = 0
6 | Six-hump camel back | F6 = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1·x2 − 4x2^2 + 4x2^4 | 2 | [−5, 5] | min f = −1.0316
7 | Bohachevsky3 | F7 = x1^2 + 2x2^2 − 0.3cos(3πx1 + 4πx2) + 0.3 | 2 | [−100, 100] | min f = 0
8 | Bridge | F8 = sin(√(x1^2 + x2^2))/√(x1^2 + x2^2) − 0.7129 | 2 | [−1.5, 1.5] | max f = 3.0054
9 | Booth | F9 = (x1 + 2x2 − 7)^2 + (2x1 + x2 − 5)^2 | 2 | [−10, 10] | min f = 0
10 | Bohachevsky1 | F10 = x1^2 + 2x2^2 − 0.3cos(3πx1) − 0.4cos(4πx2) + 0.7 | 2 | [−100, 100] | min f = 0
11 | Ackley | F11 = −20exp(−0.2√((1/D)Σ_{i=1}^{D} x_i^2)) − exp((1/D)Σ_{i=1}^{D} cos 2πx_i) + 20 + e | 6 | [−1.5, 1.5] | min f = 0
12 | Quadric | F12 = Σ_{i=1}^{D} (Σ_{k=1}^{i} x_k)^2 | 10 | [−1.5, 1.5] | min f = 0
Table 3

Experimental results.

Function | Order | Best value | Worst value | Average value | Standard deviation | Mean iterations | Average time
F1 | 1 | 3.64E−12 | 2.83E−08 | 3.52E−10 | 2.83E−09 | 488 | 0.4038
F1 | 2 | 4.14E−21 | 3.26E−14 | 1.15E−15 | 3.87E−15 | 591 | 0.4282
F1 | 3 | 4.79E−26 | 4.95E−21 | 2.58E−22 | 5.79E−22 | 585 | 2.4095
F1 | 4 | 1.04E−119 | 1.69E−12 | 5.85E−24 | 2.13E−13 | 295 | 0.4266
F1 | 5 | 0 | 0 | 0 | 0 | 27.44 | 0.10806
F2 | 1 | −1 | −0.0000811 | −0.97 | 0.1714 | 153 | 0.0375
F2 | 2 | −1 | −1 | −1 | 3.24E−06 | 583 | 0.4196
F2 | 3 | −1 | −1 | −1 | 0 | 507 | 1.9111
F2 | 4 | −1 | −1 | −1 | 0 | 72 | 0.0321
F2 | 5 | −1 | 0 | −0.02 | 0.0196 | 592.62 | 2.7036
F3 | 1 | 8.50E−07 | 1.61E−05 | 2.77E−06 | 3.35E−06 | 593 | 1.4396
F3 | 2 | 0.00017021 | 0.0114 | 0.0023 | 0.0022 | 595 | 0.4792
F3 | 3 | 7.60E−07 | 8.7485 | 1.4802 | 1.5621 | 600 | 4.8319
F3 | 4 | 5.61E−07 | 1.49E−05 | 1.07E−06 | 8.74E−06 | 600 | 4.1453
F3 | 5 | 0 | 0 | 0 | 0 | 1 | 10.2205
F4 | 1 | 0.004 | 0.0331 | 0.0127 | 0.0055 | 592 | 3.1902
F4 | 2 | 0.0351 | 0.1736 | 0.0902 | 0.0292 | 594 | 0.4868
F4 | 3 | 3.5579 | 10.3152 | 8.0497 | 1.2048 | 600 | 1.4742
F4 | 4 | 0.00001114 | 0.0098 | 0.0012 | 0.0048 | 600 | 0.8431
F4 | 5 | 0 | 0 | 0 | 0 | 1 | 10.0834
F5 | 1 | 4.67E−10 | 4.67E−10 | 4.67E−10 | 3.12E−25 | 73 | 0.0085
F5 | 2 | 9.20E−22 | 6.66E−15 | 1.34E−16 | 6.78E−16 | 594 | 0.4298
F5 | 3 | 1.23E−23 | 1.76E−19 | 1.56E−20 | 2.60E−20 | 585 | 2.4112
F5 | 4 | 3.75E−136 | 1.31E−19 | 4.28E−22 | 2.24E−11 | 195 | 0.2169
F5 | 5 | 0 | 0 | 0 | 0 | 1.01 | 1.06E−03
F6 | 1 | −1.0316 | −1.0316 | −1.0316 | 1.79E−15 | 68 | 0.0077
F6 | 2 | −1.0316 | −1.0316 | −1.0316 | 1.47E−15 | 578 | 0.4597
F6 | 3 | −1.0316 | −1.0316 | −1.0316 | 1.52E−15 | 519 | 2.4727
F6 | 4 | −1.0316 | −1.0316 | −1.0316 | 1.25E−15 | 157 | 0.1823
F6 | 5 | −1.0316 | −1.0316 | −1.0316 | 5.60E−11 | 3.32 | 0.021855
F7 | 1 | 4.07E−08 | 1.64E−06 | 3.17E−07 | 5.01E−07 | 437 | 0.327
F7 | 2 | 2.78E−16 | 2.60E−10 | 8.85E−12 | 3.10E−11 | 590 | 0.4551
F7 | 3 | 0 | 1.67E−16 | 7.77E−18 | 2.50E−17 | 556 | 1.7745
F7 | 4 | 0 | 1.69E−11 | 6.36E−15 | 2.36E−11 | 139 | 0.0793
F7 | 5 | 0 | 0 | 0 | 0 | 24.06 | 0.1064
F8 | 1 | 3.0054 | 2.7052 | 2.9787 | 0.0629 | 69 | 0.0079
F8 | 2 | 3.0054 | 3.0054 | 3.0054 | 4.53E−16 | 550 | 0.3946
F8 | 3 | 3.0054 | 3.0038 | 3.0053 | 0.00016125 | 399 | 1.2964
F8 | 4 | 3.0054 | 3.0054 | 3.0054 | 9.41E−11 | 67 | 0.0307
F8 | 5 | 3.0054 | 3.0054 | 3.0054 | 9.66E−30 | 600 | 3.0412
F9 | 1 | 4.55E−11 | 3.68E−09 | 8.91E−11 | 3.67E−10 | 271 | 0.1276
F9 | 2 | 2.47E−20 | 1.91E−11 | 1.97E−13 | 1.91E−12 | 591 | 0.4525
F9 | 3 | 6.05E−24 | 3.12E−19 | 2.70E−20 | 5.48E−20 | 580 | 1.9606
F9 | 4 | 0 | 3.15E−11 | 1.46E−22 | 5.11E−12 | 132 | 0.0738
F9 | 5 | 0 | 0 | 0 | 0 | 26.86 | 0.11364
F10 | 1 | 4.36E−07 | 0.4699 | 0.0047 | 0.047 | 109 | 0.0197
F10 | 2 | 0 | 3.65E−12 | 1.12E−13 | 4.03E−13 | 588 | 0.4564
F10 | 3 | 0 | 0 | 0 | 0 | 539 | 1.6736
F10 | 4 | 0 | 0 | 0 | 0 | 78 | 0.025
F10 | 5 | 0 | 0 | 0 | 0 | 184.88 | 0.86886
F11 | 1 | 1.5851 | 1.5851 | 1.5851 | 3.78E−07 | 434 | 0.5788
F11 | 2 | 1.5934 | 1.594 | 1.5935 | 0.0000826 | 595 | 0.4998
F11 | 3 | 1.55E−06 | 2.1015 | 0.7618 | 0.5833 | 594 | 1.789
F11 | 4 | 9.21E−07 | 0.00018069 | 5.20E−05 | 4.1549E−05 | 535 | 1.0822
F11 | 5 | 0 | 0 | 0 | 0 | 1 | 0.19021
F12 | 1 | 2.0399E−04 | 0.0277 | 0.0056 | 0.0059 | 597 | 1.577
F12 | 2 | 0.0151 | 0.2574 | 0.0863 | 0.0552 | 592 | 0.5608
F12 | 3 | 0.139 | 1.6442 | 0.9017 | 0.3516 | 600 | 2.1169
F12 | 4 | 4.28E−08 | 0.0214 | 0.00038043 | 0.0013 | 600 | 1.9728
F12 | 5 | 0 | 0 | 0 | 0 | 1 | 7.4354
Firstly, focusing on Table 3 and the "best value" column, only the new algorithm finds the theoretical global optima of all the benchmark functions, so ASGS-CWOA has better optimization accuracy. Furthermore, over the 100 runs, the worst and average values of ASGS-CWOA reach the theoretical values on all benchmark functions except F2 ("Easom"), and the new algorithm has a better standard deviation of best values, as detailed in Figure 6: nearly all of its standard deviations are the best except on F2 and F6, and they are zero on F1, F3, F4, F5, F7, F9, F10, F11, and F12. Figure 6(b) shows that the standard deviation of ASGS-CWOA on F2 is not the worst of the five algorithms, being better than GA's; Figure 6(f) indicates that its standard deviation on F6 reaches 10^−11, weaker than the others'; and Figure 6(h) demonstrates that its standard deviation on F8 reaches 10^−30, which is not zero but is the best of the five algorithms. Therefore, ASGS-CWOA has better stability than the others.
Figure 6

Histograms of the SD about the benchmark functions for testing: (a) –F1; (b) –F2; (c) –F3; (d) –F4; (e) –F5; (f) –F6; (g) –F7; (h) –F8; (i) –F9; (j) –F10; (k) –F11; (l) –F12 (red means the value is out of upper limit of the chart).

In addition, focusing on the mean number of iterations, ASGS-CWOA needs more iterations on test function 2 ("Easom"), test function 8 ("Bridge"), and test function 10 ("Bohachevsky1"), but it performs better on the other test functions; in particular, the iteration counts on test functions 3, 4, 5, 6, 11, and 12 are 1 or about 1. So ASGS-CWOA has an advantage in iteration count. Finally, the mean time spent by ASGS-CWOA is smaller than the others' on F1, F5, F6, and F11, as shown in Figures 7(a), 7(e), 7(f), and 7(k), respectively. On F7, F9, and F10, the five algorithms spend times of roughly the same order of magnitude, and ASGS-CWOA performs better than GA, PSO, and LWPS on F7 and F9, as shown in Figures 7(g) and 7(i). Unfortunately, ASGS-CWOA spends the most time on F2, F3, F4, F8, and F12, though the times are not so large as to be unacceptable; nothing is perfect and flawless, and ASGS-CWOA is no exception. Accordingly, in general, ASGS-CWOA has a better speed of convergence. Details are shown in Figure 7.
Figure 7

Histograms of the mean time spent on the benchmark test functions: (a) F1; (b) F2; (c) F3; (d) F4; (e) F5; (f) F6; (g) F7; (h) F8; (i) F9; (j) F10; (k) F11; (l) F12.

2.4.2. Supplementary Experiments

In order to further verify the performance of the newly proposed algorithm, supplementary experiments were conducted on the CEC-2014 (IEEE Congress on Evolutionary Computation 2014) test functions [38], detailed in Table 4. Unlike the 12 test functions above, the CEC-2014 experiments were run on a machine with 32-bit Windows 7, Matlab 2014a, an AMD A6-3400M CPU, and 4.0 GB of RAM, because the provided cec14_func.mexw32 binary matches a 32-bit Windows system rather than 64-bit Linux.
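The per-function statistics reported in Table 5 (optimal, worst, and average values, standard deviation, average iteration, and average time over repeated runs) follow a standard benchmarking pattern. The Python sketch below illustrates that pattern under stated assumptions: the toy `sphere` objective and the placeholder `random_search` optimizer stand in for the CEC'14 mex functions and ASGS-CWOA, neither of which is reproduced here.

```python
import random
import statistics
import time

def sphere(x):
    """Toy 2-D test function with global optimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(f, dim, bounds, max_iter=600):
    """Stand-in optimizer: returns (best_value, iterations_used)."""
    lo, hi = bounds
    best = float("inf")
    for _ in range(max_iter):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(best, f(x))
    return best, max_iter

def benchmark(f, optimizer, runs=30, dim=2, bounds=(-100.0, 100.0)):
    """Collect the per-function statistics used in Table 5."""
    bests, iters, times = [], [], []
    for _ in range(runs):
        t0 = time.perf_counter()
        best, used = optimizer(f, dim, bounds)
        times.append(time.perf_counter() - t0)
        bests.append(best)
        iters.append(used)
    return {
        "optimal": min(bests),              # best value over all runs
        "worst": max(bests),                # worst value over all runs
        "average": statistics.mean(bests),  # mean best value
        "std": statistics.pstdev(bests),    # stability indicator
        "avg_iter": statistics.mean(iters),
        "avg_time": statistics.mean(times),
    }

stats = benchmark(sphere, random_search)
```

The same harness can be pointed at any objective/optimizer pair, so all rows of a results table come from one code path.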
Table 4

Part of the CEC'14 test functions [38].

No. | Function                                                           | Dim | Range       | Fi∗ = Fi(x∗)
1   | Rotated High Conditioned Elliptic Function                         | 2   | [−100, 100] | 100
2   | Rotated Bent Cigar Function                                        | 2   | [−100, 100] | 200
3   | Rotated Discus Function                                            | 2   | [−100, 100] | 300
4   | Shifted and Rotated Rosenbrock's Function                          | 2   | [−100, 100] | 400
5   | Shifted and Rotated Ackley's Function                              | 2   | [−100, 100] | 500
6   | Shifted and Rotated Weierstrass Function                           | 2   | [−100, 100] | 600
7   | Shifted and Rotated Griewank's Function                            | 2   | [−100, 100] | 700
8   | Shifted Rastrigin's Function                                       | 2   | [−100, 100] | 800
9   | Shifted and Rotated Rastrigin's Function                           | 2   | [−100, 100] | 900
10  | Shifted Schwefel's Function                                        | 2   | [−100, 100] | 1000
11  | Shifted and Rotated Schwefel's Function                            | 2   | [−100, 100] | 1100
12  | Shifted and Rotated Katsuura Function                              | 2   | [−100, 100] | 1200
13  | Shifted and Rotated HappyCat Function [6]                          | 2   | [−100, 100] | 1300
14  | Shifted and Rotated HGBat Function [6]                             | 2   | [−100, 100] | 1400
15  | Shifted and Rotated Expanded Griewank's Plus Rosenbrock's Function | 2   | [−100, 100] | 1500
16  | Shifted and Rotated Expanded Schaffer's F6 Function                | 2   | [−100, 100] | 1600
From Table 5 and Figure 8(a), it can be seen that the newly proposed algorithm outperforms GA and LWPS in terms of "optimal value," which means ASGS-CWOA is better at finding the global optima. Figures 8(b)–8(d) show that the proposed algorithm has the best performance in terms of "worst value," "average value," and "standard deviation"; in other words, it has the best stability and robustness. From Figure 8(e), the proportion for ASGS-CWOA is better than those for PSO, LWPS, and CWOA, meaning the proposed algorithm also has an advantage in iteration count.
Table 5

Results of supplementary experiments.

Order | Function | Algorithm | Optimal value | Worst value | Average value | Standard deviation | Average iteration | Average time
1CEC'14-F1GA104.979930721.117855.76381199210576.06675.9246
PSO1007167.217966.94912364293.96000.18536
LWPS104.0352219.115650.7181276101.76008.7827
CWOA100.4238806.3862303.4648999.17966009.0239
ASGS-CWOA 100 100 100 0 17.06 0.75354

2CEC'14-F2GA201.66555624.1361511.72202859.8535.95.5281
PSO200.00285541.6291877.38882559554.76000.1806
LWPS208.9642723.770769.9574475101.546009.2476
CWOA200.01151815.693489.5193169111.576008.6269
ASGS-CWOA 200 200 200 0 15.78 0.70151

3CEC'14-F3GA316.171717077.53494508.47923875975533.35.7871
PSO300.00072391.641446.9141101563.536000.18159
LWPS300.34075796.796889.491104949.76009.2331
CWOA300.11112196.991734.5405232245.816008.5499
ASGS-CWOA 300 300 300 0 17.62 0.72644

4CEC'14-F4GA400400.0911400.01090.000532386.73330.924
PSO4004004003.23E − 296000.17534
LWPS4004004005.90E − 176008.7697
CWOA4004004000308.73.9497
ASGS-CWOA 400 400 400 0 12.96 0.53133

5CEC'14-F5GA500520507.343250.507771.43330.76291
PSO500515.2678500.92176.73666000.18704
LWPS500500.0006500.00011.49E − 086008.8697
CWOA5005005001.34E − 226008.5232
ASGS-CWOA501.7851520514.1828.560660029.8243

6CEC'14-F6GA600.0001600.9784600.3020.1069467.90.86812
PSO6006006002.86E − 116001.3431
LWPS600.0001600.4594600.02450.008742160019.5858
CWOA6006006002.32E − 25599.566719.1092
ASGS-CWOA 600 600 600 0 45.16 17.5578

7CEC'14-F7GA700701.4272700.37670.1918161.43330.67019
PSO700700.6977700.02130.00897716000.18424
LWPS700700.323700.07950.00637736008.5961
CWOA700700.0271700.00534.69E − 05514.56677.5231
ASGS-CWOA 700 700 700 0 238.211.578

8CEC'14-F8GA800801.9899801.09450.4850757.63330.62438
PSO800801.4806800.23420.13654594.260.17554
LWPS800800.995800.13270.114396008.486
CWOA800801.9899800.69650.53787497.36677.2666
ASGS-CWOA 800 800.995800.27860.19957469.3421.8131

9CEC'14-F9GA900903.9798901.39291.161558.26670.63184
PSO900900.995900.1860.10073591.310.17613
LWPS900900.995900.09950.0890956008.5478
CWOA900901.9899900.69650.53787503.53336.9673
ASGS-CWOA 900 903.9798900.52730.50398496.3123.0867

10CEC'14-F10GA10001058.5041005.603140.403663.53330.68942
PSO10001118.7501009.425808.3984591.930.19874
LWPS10001017.0691000.8399.12436008.7552
CWOA10001058.1921006.533145.8132547.83337.833
ASGS-CWOA1063.3061363.4761174.7777651.057760033.4225

11CEC'14-F11GA11001333.8961156.9473936.712462.23330.67663
PSO11001218.4381107.871228.3351593.690.19941
LWPS11001100.6241100.2080.0476436009.0343
CWOA11001218.4381105.273459.2835534.27.6136
ASGS-CWOA1100.3121403.4531323.9815000.558960029.089

12CEC'14-F12GA1200.0001205.0581200.1980.8401774.16670.90516
PSO12001200.2861200.0080.00091935590.031.0801
LWPS1200.0011201.1361200.2420.04108860017.2774
CWOA1200120012001.49E − 2060017.6222
ASGS-CWOA 1200 1200.0161200.0019.25E − 06537.61173.256

13CEC'14-F13GA1300.0051300.1881300.020.001267469.26670.74406
PSO1300.0031300.3031300.1230.0073239591.870.18376
LWPS1300.0071300.2671300.0890.00391276008.7117
CWOA1300.0001300.1321300.0170.00104146009.0362
ASGS-CWOA1300.0131300.2421300.1200.00307560028.6266

14CEC'14-F14GA1400.0001400.4991400.2640.03526257.90.63369
PSO14001400.0671400.0230.00039174591.80.19591
LWPS1400.0001400.1751400.0370.00158466009.0429
CWOA1400.0001400.1791400.0300.0015586009.3025
ASGS-CWOA1400.1111400.4911400.4570.003262660030.4334

15CEC'14-F15GA15001500.1751500.0730.003332354.76670.59602
PSO15001500.0981500.0120.0004289591.320.18265
LWPS15001500.1391500.0210.00109816007.4239
CWOA15001500.0981500.0150.0003538477.66676.5924
ASGS-CWOA 1500 1500.020 1500.001 2.24E − 05101.294.8

16CEC'14-F16GA16001600.7461600.2810.03856353.83330.58607
PSO16001600.3561600.0150.001519593.950.18491
LWPS1600.0191600.0741600.0240.0002726008.7907
CWOA16001600.2541600.0580.0031938593.59.0235
ASGS-CWOA 1600 1600.019 1600.007 8.69E − 05545.5526.1972
Figure 8

(a) The statistical proportion of each algorithm achieving the best "optimal value" on the 16 test functions; (b)–(f) the corresponding proportions for the best "worst value," "average value," "standard deviation," "average iteration," and "average time."

In a word, ASGS-CWOA has good optimization quality, good stability, an advantage in iteration count, and a good speed of convergence.

3. Results and Discussion

Theoretical analysis and experimental results reveal that, compared with the traditional genetic algorithm, particle swarm optimization, the leading wolf pack algorithm, and the chaotic wolf optimization algorithm, ASGS-CWOA has better global optimization accuracy, faster convergence, and higher robustness under the same conditions. In fact, the ASGS strategy greatly strengthens the local exploitation power of the original algorithm, making it easier for the algorithm to find the global optimum; the OMR and SDUA strategies effectively enhance the global exploration power, making the algorithm less likely to fall into a local optimum and thus more likely to find the global one. Focusing on Tables 3 and 5 and Figures 6 and 7 above, compared with the four state-of-the-art algorithms, ASGS-CWOA is effective and efficient on most measures of the benchmark test functions, but it performs worse on some measures of some functions: on functions 2, 3, 4, 8, and 12, shown in Figures 7(b)–7(d), 7(h), and 7(l), respectively, ASGS-CWOA spends more time on iterations than the other four algorithms. Moreover, when D (the dimension of the solution space) is very large, implementing the algorithm exactly according to the formula requires an amount of computer memory that grows exponentially with D, a demand that cannot be met in practice; this reflects a limitation of the method. In the supplementary experiments as well, ASGS-CWOA spent more time than the other algorithms, and its proportion is 0 in the best "average time" statistic over the 16 functions, detailed in Figure 8(f). Our future work is to continue improving the performance of ASGS-CWOA in all aspects and to apply it to specific projects to test its practical performance.
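The exponential memory demand mentioned above comes from the grid itself: a grid with k points per dimension contains k^D candidate points, so evaluation and storage costs explode as D grows. The Python sketch below illustrates the general shrinking-grid idea only; the paper's exact ASGS update rule is not reproduced, and `grid_points`, `shrinking_grid_search`, and all parameter values are assumptions for demonstration.

```python
import itertools

def grid_points(center, radius, k):
    """Axis-aligned grid of k points per dimension around `center`.
    The grid holds k ** len(center) points in total, which is why the
    cost grows exponentially with the dimension D."""
    axes = []
    for c in center:
        step = 2 * radius / (k - 1)
        axes.append([c - radius + i * step for i in range(k)])
    return list(itertools.product(*axes))

def shrinking_grid_search(f, center, radius, k=3, shrink=0.5, rounds=20):
    """Illustrative local refinement: evaluate the grid, move the
    'leader' to the best point found, then shrink the grid radius."""
    best_x, best_f = list(center), f(center)
    for _ in range(rounds):
        for p in grid_points(best_x, radius, k):
            fp = f(p)
            if fp < best_f:
                best_x, best_f = list(p), fp
        radius *= shrink  # each round searches a finer neighborhood
    return best_x, best_f

# Example on a 2-D sphere function (optimum 0 at the origin):
x, fx = shrinking_grid_search(lambda p: sum(v * v for v in p), [3.0, 4.0], 2.0)
```

With k = 3 and D = 2 each round costs only 9 evaluations, but the same k at D = 30 would cost 3^30 ≈ 2 × 10^14, matching the limitation discussed above.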

4. Conclusions

To further improve the speed of convergence and the optimization accuracy under the same conditions, this paper proposes an adaptive shrinking grid search chaotic wolf optimization algorithm using a standard deviation updating amount. Firstly, ASGS was designed for the wolf pack algorithm to enhance its searching capability; through it, any wolf can become the leader wolf, which improves the probability of finding the global optimum. Moreover, OMR is used in the wolf pack algorithm to accelerate convergence. In addition, we adopt the concept named SDUA to eliminate some poorer wolves and regenerate the same number of new wolves, so as to update the wolf population and maintain its biodiversity.
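As a rough illustration of the SDUA regeneration step described above, the sketch below ties the number of replaced wolves to the standard deviation of the pack's fitness values. This coupling, and every name and parameter in the code (`sdua_regenerate`, `base`, `scale`), is an assumption for demonstration, not the authors' exact rule.

```python
import random
import statistics

def sdua_regenerate(pack, fitness, bounds, base=2, scale=1.0):
    """Illustrative SDUA-style regeneration (assumed rule): the number
    of worst wolves replaced grows with the standard deviation of the
    pack's fitness values, and the removed wolves are replaced with
    random newcomers to keep the population diverse."""
    lo, hi = bounds
    scores = [fitness(w) for w in pack]
    sd = statistics.pstdev(scores)
    # Always keep at least the single best wolf.
    n_replace = min(len(pack) - 1, base + int(scale * sd))
    # Rank wolves from best (lowest fitness) to worst.
    ranked = [w for _, w in sorted(zip(scores, pack), key=lambda t: t[0])]
    survivors = ranked[: len(pack) - n_replace]
    newcomers = [[random.uniform(lo, hi) for _ in range(len(pack[0]))]
                 for _ in range(n_replace)]
    return survivors + newcomers

pack = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(10)]
fit = lambda w: sum(v * v for v in w)
new_pack = sdua_regenerate(pack, fit, (-5.0, 5.0))
```

Because survivors are taken from the best-ranked wolves, the population size and the best solution found so far are both preserved while fresh random wolves restore diversity.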