
A Multistrategy-Integrated Learning Sparrow Search Algorithm and Optimization of Engineering Problems.

Zikai Wang1, Xueyu Huang1, Donglin Zhu1.   

Abstract

Swarm intelligence algorithms are techniques inspired by the collective behavior of biological populations in nature and have been applied practically in many fields. As a recently proposed swarm intelligence algorithm, the sparrow search algorithm (SSA) has attracted extensive attention for its strong optimization ability. To address its tendency to fall into local optima, this paper proposes an improved sparrow search algorithm (IHSSA) that combines an iterative chaotic map with infinite collapses (ICMIC) and a hybrid opposition-based learning strategy. In the population initialization stage, an improved ICMIC is used to widen the distribution of the population and improve the quality of the initial solutions. In the discoverer update stage, an opposition-based learning strategy built on the lens imaging principle updates the discoverers with high fitness, while generalized opposition-based learning updates the current global worst solution in the joiner update stage. To balance exploration and exploitation, a crisscross strategy is introduced to update the scout positions. Experiments on 14 common benchmark functions, verified with the Wilcoxon rank-sum test, show that IHSSA obtains solutions with higher accuracy and better convergence than 9 algorithms, including WOA, GWO, PSO, TLBO, and SSA variants. Finally, IHSSA is applied to three constrained engineering optimization problems and obtains satisfactory results, demonstrating the effectiveness and feasibility of the improved algorithm.
Copyright © 2022 Zikai Wang et al.


Year:  2022        PMID: 35251144      PMCID: PMC8890830          DOI: 10.1155/2022/2475460

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

In recent years, new intelligent optimization algorithms have emerged continuously and have been applied practically in medical treatment [1, 2], finance [3], production scheduling [4], and other fields, where they have proved remarkably effective. Since the end of the last century, scholars around the world, inspired by social behavior [5], have tried to simulate the behavioral characteristics of biological populations in nature and have proposed a series of swarm intelligence optimization algorithms such as Ant Colony Optimization (ACO) [6, 7], Particle Swarm Optimization (PSO) [8, 9], the Whale Optimization Algorithm (WOA) [10], and the Grey Wolf Optimizer (GWO) [11]. Most of these algorithms are modeled on characteristic behaviors of biological populations, such as foraging [12], reproduction [13], and hunting [14], which vividly simulate the main activities of social animals. In 2020, Xue and Shen proposed the Sparrow Search Algorithm (SSA) [1] based on the foraging and anti-predation behavior of sparrow populations. The formulas and control parameters of the algorithm are not complex, and it is relatively easy to understand and implement. Experiments show that SSA's optimization capability is stronger than that of particle swarm optimization (proposed in 1995) and the grey wolf optimizer (proposed in 2014), with better convergence accuracy, faster convergence speed, and better stability. However, compared with existing swarm intelligence optimization algorithms, SSA also has certain shortcomings: its running time is longer, and its excessively fast convergence makes it more likely to fall into a local optimal solution, so its global optimization ability is insufficient.
To strengthen the optimization effect of the algorithm and balance exploration and exploitation, researchers have proposed a series of improvements to the original sparrow search algorithm model to mitigate its tendency to become trapped in local optima, and these improved swarm intelligence algorithms have been applied extensively across research fields. Lv et al. [2] introduced a chaotic sequence to perturb individuals that had fallen into a local optimum, allowing SSA to escape the restriction and continue searching for the global optimal solution; at the same time, an integrated Cauchy-Gaussian mutation operator was combined to avoid stagnation of the optimization by changing the position of the elite sparrow in the search space. Zhu [15] introduced an adaptive learning factor to address SSA's slowing convergence trend and reduced convergence accuracy under a limited number of iterations, and applied the resulting ASSA to the optimization and identification of PEMFC stack parameters. Mao and Zhang [16] fused the sine cosine algorithm and the Levy flight strategy into the basic SSA and performed disturbance mutation at the position of the optimal solution, which enhanced the algorithm's ability to escape local optima and greatly increased solution accuracy. Liu et al. [17] introduced an improved sparrow search strategy for the route planning problem of UAVs, which solved the inefficiency of path planning in complex three-dimensional flight. Yuan et al. [18] utilized a center-of-gravity opposition-based learning mechanism to initialize the population, making the population distribution wider.
They also put forward a learning factor in the discoverer update and introduced a mutation operator to reduce the probability of the algorithm falling into a local optimum; applying this to Distributed Maximum Power Point Tracking (DMPPT) provided conditions for the stable operation of the microgrid. Liu et al. [19] proposed a balanced sparrow search algorithm (BSSA) in which the random walk strategy of Levy flight was exerted to appropriately adjust the local search, improving the efficiency of CNN focus, and applied it in the medical field to improve the robustness and accuracy of MRI diagnosis of brain tumors. To solve the problem of labeled data classification, Zhang et al. [20] combined an improved SSA with an adaptive classifier, introducing the sine cosine algorithm and a newly proposed labor cooperation structure, and demonstrated strong performance in the classification of lung CT images. Zhang and Ding [21] designed a stochastic configuration network based on a chaotic sparrow search algorithm (CSSA): combined with CSSA's adaptive control factor, it automatically updates the regularization parameters and scale factors of the SCN, thereby improving the SCN's regression performance on large-scale stochastic configuration problems. Zhu and Yousefi [15] proposed the adaptive sparrow search algorithm ASSA to optimize the seven unknown parameters of the proton exchange membrane fuel cell model in a PEMFC stack, with the ultimate goal of achieving the best consistency with the empirical voltage polarization curve of the battery pack. Zhou et al. [22] successfully applied SSA to wavefront shaping and focusing by introducing a crossover strategy, which remedied SSA's lack of performance on high-dimensional optimization problems.
Without a doubt, these improved algorithms provide a good reference for future wavefront shaping research. Because the sparrow search algorithm was proposed only recently, researchers are still in the exploratory stage and have not yet developed a definitively superior variant. To further improve the solution accuracy and convergence efficiency of the sparrow algorithm, this paper continues to explore possible improvements on the basis of previous work and proposes a novel sparrow search algorithm called IHSSA, which combines an improved infinitely folded iterative chaotic map and a hybrid opposition-based learning strategy. The innovations can be summarized as follows. First, the improved infinitely folded iterative chaotic map (IICMIC) is used to initialize the sparrow population, which strengthens the diversity of the initial population and increases the breadth of its distribution. Second, a hybrid opposition-based learning strategy is put forward to update the positions of specific individuals: given the effectiveness of opposition-based learning in mining new solutions, lens-imaging opposition-based learning updates the global optimal solution after the discoverers are updated, and generalized opposition-based learning updates the current worst individual after the joiners are updated; with the boundary constraint taken into account, the population can reach as many feasible areas as possible, maximizing exploitation. Third, the horizontal and vertical crisscross strategy is introduced to update the positions of the guards; its advantage is that it updates individual sparrows from both the horizontal and vertical directions while maintaining solution speed, so the range of the population can be expanded to a certain extent. This paper follows a reasonable logical order.
Section 1 introduces the research background of intelligent algorithms in recent years and contributions researchers have made to this field. Section 2 introduces the basic sparrow search algorithm, SSA. Section 3 introduces the improvements of this paper in the order in which they are applied, presents the proposed algorithm IHSSA together with its flow chart, establishes the algorithm's advantages through time complexity analysis and the Wilcoxon rank-sum test, and uses population distribution diagrams to show its contribution to a more dispersed population. Section 4 tests the new algorithm on 14 standard benchmark functions, tabulates the results, and analyzes the data comparatively to verify the strengths and weaknesses of the algorithm. Section 5 applies IHSSA to classical constrained engineering optimization problems, and the obtained data further demonstrate the feasibility and effectiveness of the proposed algorithm. Finally, the work of this paper is briefly summarized, and plans and prospects for the next stage of research are given.

2. Sparrow Search Algorithm SSA

2.1. Group Predation Behavior of Sparrows

In nature, the sparrow is one of the most common birds and lives in environments shared with humans. Generally speaking, the upper body of the sparrow is brown and black, and its conical beak is short and strong. Sparrows usually live together in groups with a clear division of labor. Some sparrows are responsible for finding food and providing foraging areas and directions for the entire population, while the remaining sparrows obtain food based on the information those sparrows provide. In addition, when a sparrow in the population realizes that danger is approaching, it issues an alarm in time, and the entire population quickly begins anti-predation behavior.

2.2. SSA Algorithm Description

The proposal of SSA is based on the sparrow's cleverness and strong memory, and it simulates well the cooperative mechanism of sparrow populations in daily foraging. We give new names to the three types of sparrows mentioned earlier: ① those responsible for finding food are called discoverers; ② those that follow the discoverers to obtain food are called joiners; ③ some joiners always monitor the discoverers and choose moments to compete for food resources in order to increase their rate of food acquisition; this type of joiner is called a monitor. Discoverers generally account for 10%-20% of the entire population. The roles of discoverer and joiner can be exchanged, provided that their proportions relative to the entire population remain constant. The position of each sparrow is treated as a solution of the algorithm. The initial positions of the sparrows are represented by the matrix

X = [x_{1,1} x_{1,2} … x_{1,d}; x_{2,1} x_{2,2} … x_{2,d}; …; x_{n,1} x_{n,2} … x_{n,d}],

where d is the dimension of the problem to be optimized and n is the size of the sparrow population. The fitness values of all sparrows can then be expressed as

F_X = [f(x_1); f(x_2); …; f(x_n)],

where f is the fitness function. Discoverers with better fitness obtain food earlier in the search process. Since a discoverer needs to guide the foraging direction for the entire population, it is given a larger food search range. During iteration, the location of a discoverer is updated as

X_{i,j}^{t+1} = X_{i,j}^t · exp(−i / (α · Maxitem))   if R2 < ST,
X_{i,j}^{t+1} = X_{i,j}^t + Q · L                     if R2 ≥ ST,

where X_{i,j}^t is the position of the ith sparrow in the jth dimension; Maxitem is the maximum number of iterations of the algorithm; t is the current iteration number; α is a uniform random number in (0, 1]; Q is a random number that obeys the standard normal distribution; L is a 1 × d matrix with every element equal to 1; the alarm value R2 ∈ [0, 1]; and the safety value ST ∈ [0.5, 1]. Once a sparrow in the population finds a predator or other danger, an alarm signal is issued.
When the alarm value exceeds the safety threshold, the discoverers lead the population to forage in other, safer areas. The position of a joiner (follower) is updated as

X_{i,j}^{t+1} = Q · exp((X_worst^t − X_{i,j}^t) / i²)          if i > n/2,
X_{i,j}^{t+1} = X_P^{t+1} + |X_{i,j}^t − X_P^{t+1}| · A⁺ · L   otherwise,

where X_worst^t is the current global worst position; X_P^{t+1} is the best position occupied by a discoverer at iteration t + 1; and A is a 1 × d matrix whose elements are randomly assigned 1 or −1, with A⁺ = Aᵀ(AAᵀ)⁻¹. When i > n/2, the ith joiner has low fitness and cannot obtain food, so it must fly elsewhere to forage and replenish its energy reserves; when i ≤ n/2, the ith joiner forages at a random location near the best position X_P. The sparrows responsible for vigilance (monitors) generally account for 10%-20% of the total, and they keep watch and prompt the entire population to take anti-predation action when facing danger. Their positions are updated as

X_{i,j}^{t+1} = X_best^t + β · |X_{i,j}^t − X_best^t|                         if f_i > f_g,
X_{i,j}^{t+1} = X_{i,j}^t + K · |X_{i,j}^t − X_worst^t| / ((f_i − f_w) + ε)   if f_i = f_g,

where X_best^t is the current global optimal position; β is a step-size control parameter drawn from a standard normal distribution; K is a random number in [−1, 1]; ε is an infinitesimal constant whose presence avoids a zero denominator; and f_i, f_g, and f_w are the current fitness of the sparrow and the global best and global worst fitness values, respectively.
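Taken together, the three update rules can be sketched in NumPy as below. This is a minimal illustrative implementation assembled from the formulas above, not the authors' code; in particular, reducing the A⁺ · L product to a random ±1 vector scaled by 1/d is an assumption commonly made in SSA reimplementations.

```python
import numpy as np

def ssa_step(X, fit, max_iter, ST=0.8, rng=None):
    """One illustrative SSA iteration: discoverer, joiner, and monitor updates
    for a minimization problem. A sketch of the rules above, not the authors' code."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    pd = max(1, n // 5)                          # ~20% discoverers
    order = np.argsort(fit)                      # ascending fitness
    X, fit = X[order].copy(), fit[order].copy()

    R2 = rng.random()                            # alarm value
    for i in range(pd):                          # discoverer update
        if R2 < ST:
            alpha = rng.random() + 1e-12
            X[i] = X[i] * np.exp(-(i + 1) / (alpha * max_iter))
        else:
            X[i] = X[i] + rng.standard_normal() * np.ones(d)   # Q * L

    x_best, x_worst = X[0].copy(), X[-1].copy()
    for i in range(pd, n):                       # joiner update
        if i > n // 2:
            X[i] = rng.standard_normal() * np.exp((x_worst - X[i]) / (i + 1) ** 2)
        else:
            A = rng.choice([-1.0, 1.0], size=d)  # simplified A+ * L step
            X[i] = x_best + np.abs(X[i] - x_best) * A / d

    sd = max(1, n // 10)                         # monitor (scout) update
    for i in rng.choice(n, size=sd, replace=False):
        if fit[i] > fit[0]:
            X[i] = x_best + rng.standard_normal() * np.abs(X[i] - x_best)
        else:
            K = rng.uniform(-1.0, 1.0)
            X[i] = X[i] + K * np.abs(X[i] - x_worst) / (fit[i] - fit[-1] + 1e-50)
    return X
```

Calling `ssa_step` repeatedly, recomputing `fit` between calls, reproduces the basic SSA loop.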

3. IHSSA

3.1. Infinitely Folded Iterative Chaotic Map for Population Initialization

3.1.1. ICMIC

A swarm intelligence algorithm needs an initialization strategy to generate an initial population and provide an initial guess for the subsequent evolution process. Differences in the initial distribution of the sparrow population can lead the subsequent foraging process to widely different final results, deeply affecting both convergence speed and optimization accuracy; the quality of the initial population is therefore important. In the original SSA, the population is not guided by prior knowledge; it is simply randomly generated. In 1975, Li and Yorke introduced the concept of "chaos" in the article "Period Three Implies Chaos" [23]. Given their unpredictability, ergodicity, and parameter sensitivity, chaotic systems are special: in the field of parameter optimization, chaotic maps can replace pseudorandom number generators to produce chaotic numbers between 0 and 1. Since chaos can traverse the whole space only given sufficient time, it is feasible to combine chaos into a global optimizer to improve its search performance and complete the optimization of the target task within a short time range [24]. Experiments have shown that using chaotic sequences for population initialization affects the entire run of the algorithm and often yields better results than pseudorandom numbers. The ergodicity of chaos gives the initial state of the sparrow population better diversity, avoiding premature convergence, that is, improving global optimization accuracy and convergence, which overcomes a shortcoming of traditional optimization algorithms. This paper applies the ICMIC map (Iterative Chaotic Map with Infinite Collapses), one of the classic chaotic maps, to initialize the sparrow population.
The ICMIC map was proposed in 2001 by He. Its basic idea is to generate a chaotic sequence through the mapping relationship and then transform the chaotic sequence into the search space of the population [25]. Its higher Lyapunov exponent indicates stronger chaotic characteristics than other commonly used continuous chaotic models [26]. Selecting appropriate parameters yields a good chaotic model and thus satisfactory results in practical applications. The uniform distribution test of chaotic systems by He et al. [26] showed that the one-dimensional ICMIC presents a noise phenomenon close to a uniform distribution. ICMIC has two mathematical expressions. Expression one is

x_{k+1} = sin(a / x_k),   x_k ∈ [−1, 1], x_k ≠ 0,

in which a is a very important adjustable parameter: experiments show that the value of a directly affects the mapping effect and hence the quality of the population. In the second expression, α likewise plays an important role as the control parameter.
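Expression one can be iterated directly; a small sketch follows, in which the parameter value a = 2.0 is an arbitrary illustrative choice:

```python
import numpy as np

def icmic(x0, a=2.0, steps=100):
    # Expression one: x_{k+1} = sin(a / x_k); every output lies in [-1, 1].
    seq = np.empty(steps)
    x = float(x0)
    for k in range(steps):
        if x == 0.0:          # guard the (measure-zero) division by zero
            x = 1e-12
        x = np.sin(a / x)
        seq[k] = x
    return seq
```

The resulting sequence scatters over [−1, 1], illustrating the ergodicity that makes the map useful for initialization.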

3.1.2. IICMIC

Based on expression two in Section 3.1.1, this paper proposes an improved infinitely folded iterative chaotic map, IICMIC. After extensive experiments, it is concluded that SSA obtains a good chaotic sequence when the value of a is in the range (0.6, 1). Combining IICMIC with the original SSA, the resulting initial population state is shown in Figure 1: Figure 1(a) shows the population distribution after the original SSA initialization, and Figure 1(b) shows the distribution of the sparrow population after initialization with IICMIC. It can be seen that the improved initialization method greatly improves the diversity of the population and largely avoids falling into local optima. The value of a is set to 0.9 in all subsequent experiments.
Figure 1

Individual distribution. (a) Individual initialization map of SSA. (b) Individual distribution of IHSSA.

In combination with SSA, we first select N initial values with small differences between them as the initial state of the population; given the parameter sensitivity of the ICMIC map, even small gaps between individuals are captured. These N initial values are mapped to obtain the same number of chaotic sequences and are then inversely mapped into the corresponding individual search space. The initial position of the ith individual after this transformation is denoted X_i (i = 1, 2, …, N).
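A hedged sketch of this initialization idea: iterate a sin(a/x)-type map from nearby nonzero seeds, then map the chaotic values into the search bounds. The burn-in count and seed range are illustrative assumptions; the paper's exact IICMIC expression is not reproduced here.

```python
import numpy as np

def chaotic_init(n, dim, lb, ub, a=0.9, burn_in=20, rng=None):
    """Initialize n individuals from nearby seeds via a sin(a/x)-type map,
    then map each chaotic value from [-1, 1] into [lb, ub]."""
    rng = rng or np.random.default_rng(1)
    x = rng.uniform(0.1, 0.9, size=(n, dim))     # nonzero seeds, small spread
    for _ in range(burn_in):
        x = np.where(np.abs(x) < 1e-12, 1e-12, x)  # guard division by zero
        x = np.sin(a / x)
    return lb + (x + 1.0) / 2.0 * (ub - lb)      # [-1, 1] -> [lb, ub]
```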

3.2. Hybrid Opposition-Based Learning Strategy to Update the Position of the Discoverer

Opposition-based learning (OBL) is an intelligent computation method first proposed by Tizhoosh in 2005. With deepening research into various algorithms, OBL has been successfully applied to many intelligent algorithms [27-31]. The main idea can be summarized as follows: compute a feasible solution and its opposite solution, evaluate both, and select the required solution according to certain conditions. Research shows that the solution generated by opposition-based learning is, with higher probability, better than a randomly generated solution and closer to the optimal solution. OBL is therefore well suited to mining new solutions in unknown regions, which increases the diversity of the population. In the discoverer stage, a broad and flexible search mechanism is the key to guiding the entire sparrow population to search for food and avoid danger. To better realize the leading role of the discoverers, researchers have spent considerable time exploring this area and have gradually put forward a series of improvement methods. However, traditional learning strategies have limited problem-solving ability and achieve their goals only in certain dimensions. In response, this paper proposes a hybrid opposition-based learning method on top of traditional OBL: an improved lens-imaging opposition-based learning mechanism is applied to update the optimal solution in the discoverer stage, and generalized opposition-based learning is performed on the global worst solution. This hybrid approach obtains higher optimization accuracy and avoids premature convergence.
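The basic OBL idea, forming the opposite point and keeping the fitter of the pair, can be written in a few lines (for a minimization problem):

```python
import numpy as np

def opposition_select(x, lb, ub, f):
    # Basic OBL: form the opposite point lb + ub - x and keep
    # whichever of the pair has the lower fitness.
    x_opp = lb + ub - x
    return x if f(x) <= f(x_opp) else x_opp
```

For example, for x = (9, 9) in [0, 10]² under a sphere objective, the opposite point (1, 1) is fitter and is returned.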

3.2.1. Opposition-Based Learning Strategy Based on the Improved Lens Principle to Update the Optimal Position

The opposition-based learning strategy based on the lens principle offers strong flexibility and versatility, along with a strong ability to explore unknown areas and dig out new solutions. Its principle is as follows. Suppose there is an object P with height h, and let X be the projection of P on the x-axis. Let a and b be the lower and upper bounds of the solution in the jth dimension under the current algorithm. The midpoint of these bounds is taken as the base point O, and a lens with focal length f is placed at this point. Through lens imaging, an image P′ different from P is obtained; the projection of P′ on the x-axis, denoted X′, is the newly generated opposite solution under this learning strategy. The schematic diagram is shown in Figure 2.
Figure 2

Lens schematic diagram.

From Figure 2, we can clearly see that X generates a new image X′ under the action of the lens. From the properties of similar triangles we obtain

((a + b)/2 − X) / (X′ − (a + b)/2) = h / h′.

Letting h/h′ = k (k is the scale factor), the opposite point X′ can be written as

X′ = (a + b)/2 + (a + b)/(2k) − X/k.

When k = 1, this reduces to

X′ = a + b − X,

which is the general form of the opposition-based learning strategy; the new individuals it generates are fixed. Studies have shown that, for high-dimensional complex functions, new individuals confined to a fixed range have a certain probability of falling into a local optimum, and in the later iterations the newly generated solution is usually very close to the current optimal solution. To deal with this hidden danger, a dynamic scale factor k is introduced: varying k makes the new individuals dynamically variable, and the randomness of the solutions keeps individuals from losing vitality and increases the diversity of the population. The expression for k (applied to the position of the ith sparrow in the jth dimension) depends on t, the current iteration number, and T, the maximum number of iterations.
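The lens-imaging opposite point derived above translates directly into code. The paper's iteration-dependent schedule for k is not reproduced, so k is left as a caller-supplied argument:

```python
import numpy as np

def lens_obl(x, lb, ub, k=1.0):
    # Lens-imaging opposite point: X' = (a+b)/2 + (a+b)/(2k) - X/k.
    # With k = 1 this reduces to the classic opposite point a + b - X.
    mid = (lb + ub) / 2.0
    return mid + mid / k - x / k
```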

3.2.2. Generalized Opposition-Based Learning Strategy to Update the Current Global Worst Position

As research deepens, more and more attempts focus on the optimal solution, but research on the current global worst position should not be ignored: updating the worst position can widen the search range and spread the population distribution. To maximize the diversity of individuals in the population, opposition-based learning is applied not only to the best individuals in the discoverer stage but also to the worst individual in the sparrow group. Complementing the improved lens learning strategy of the previous subsection, this subsection adopts the generalized opposition-based learning (GOBL) strategy to update the current global worst position after each iteration. The concept of generalized opposition-based learning is as follows. Let an individual be x = (x_1, x_2, …, x_D), with dynamic search range [a_j, b_j] in the jth dimension. Its opposite solution x′ is given by

x′_j = k · (a_j + b_j) − x_j,

where k is a random number uniformly distributed in (0, 1). If the opposite solution exceeds the predetermined range, it is regenerated randomly within the dynamic search range [a_j, b_j]:

x′_j = rand(a_j, b_j).

The purpose of opposition-based learning is to find a new and more suitable solution. GOBL compares against the worst solution as it searches and updates the current global worst solution once per iteration. At the same time, GOBL adds a dynamic boundary-update operation beyond basic opposition-based learning, which means a relatively smaller search space. GOBL is combined with the update of the worst joiner of the sparrow search algorithm, making full use of the characteristics of opposition-based learning to explore more feasible regions while improving the convergence speed of the algorithm.
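A sketch of GOBL over a population's dynamic per-dimension bounds, including the out-of-range reset; one random k is drawn per call, following the definition above:

```python
import numpy as np

def gobl(X, rng=None):
    """Generalized OBL: x'_j = k * (a_j + b_j) - x_j with k ~ U(0, 1), where
    [a_j, b_j] is the current per-dimension min/max of the population.
    Out-of-range values are redrawn uniformly inside [a_j, b_j]."""
    rng = rng or np.random.default_rng(2)
    a, b = X.min(axis=0), X.max(axis=0)          # dynamic search range
    k = rng.random()
    X_opp = k * (a + b) - X
    out = (X_opp < a) | (X_opp > b)
    redraw = rng.uniform(np.broadcast_to(a, X.shape), np.broadcast_to(b, X.shape))
    X_opp[out] = redraw[out]
    return X_opp
```

In IHSSA only the worst individual's opposite is evaluated, but the same operation applies row-wise to any subset of the population.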

3.3. Vertical and Horizontal Crossover Strategy

The optimization speed of SSA is very fast and its solution accuracy strong, but as iterations accumulate the sparrow population gathers, to a large extent, around a local optimal solution. To balance the algorithm's global search and exploitation capabilities and keep it from falling into local optima, we turn to the crisscross optimization algorithm proposed in 2014, which is inspired by the Confucian doctrine of the golden mean and by the crossover operation in genetic algorithms. Experimental results demonstrate that, compared with other heuristic algorithms, crisscross optimization performs excellently on most test functions [32]. This paper integrates the vertical and horizontal crossover strategy into the guard search stage of the sparrow search algorithm, which expands the range of the population as much as possible while preserving solution speed.

3.3.1. Horizontal Crossover Strategy

Horizontal crossover divides the solution space of a multidimensional problem among hypercubes formed by pairing the population into halves, and, to reduce unreachable blind spots, it also searches the edges of each hypercube with a small probability; this guarantees the strong global search ability of horizontal search. In this paper, two parent guard individuals x(i) and x(j) are crossed horizontally in each dimension d to generate new individuals MSx(i) and MSx(j):

MSx(i, d) = r1 · x(i, d) + (1 − r1) · x(j, d) + c1 · (x(i, d) − x(j, d)),
MSx(j, d) = r2 · x(j, d) + (1 − r2) · x(i, d) + c2 · (x(j, d) − x(i, d)),

where r1 and r2 are uniform random numbers in [0, 1], and c1 and c2 are uniform random numbers in [−1, 1]. The offspring produced by horizontal crossover undergo elite selection against their parents, retaining the individuals with higher fitness. In this way the algorithm converges continuously toward the optimal solution, ensuring convergence efficiency without affecting optimization accuracy.
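The two horizontal-crossover formulas translate directly; drawing the random numbers per dimension is an implementation choice the text leaves open:

```python
import numpy as np

def horizontal_cross(xi, xj, rng=None):
    # Horizontal crossover of two parents, per dimension:
    #   MSx_i = r1*xi + (1 - r1)*xj + c1*(xi - xj)
    #   MSx_j = r2*xj + (1 - r2)*xi + c2*(xj - xi)
    # with r1, r2 ~ U(0, 1) and c1, c2 ~ U(-1, 1).
    rng = rng or np.random.default_rng(3)
    r1, r2 = rng.random(xi.shape), rng.random(xi.shape)
    c1, c2 = rng.uniform(-1, 1, xi.shape), rng.uniform(-1, 1, xi.shape)
    ms_i = r1 * xi + (1 - r1) * xj + c1 * (xi - xj)
    ms_j = r2 * xj + (1 - r2) * xi + c2 * (xj - xi)
    return ms_i, ms_j
```

The elite selection described above then keeps each offspring only if it beats its own parent on the fitness function.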

3.3.2. Vertical Crossover Strategy

The premature convergence of most swarm intelligence search algorithms is caused by a small number of stagnant population dimensions. Vertical crossover was introduced precisely to help certain dimensions of the population escape from dimensional stagnation. Unlike horizontal crossover, vertical crossover operates between dimensions of a single individual; its function, similar to the mutation mechanism in genetic algorithms, is to avoid premature convergence in the later stage of SSA. Suppose a newly generated individual x(k) is crossed vertically in dimensions d1 and d2; the calculation formula is

MSx(k, d1) = r · x(k, d1) + (1 − r) · x(k, d2),   r ∈ [0, 1],

where MSx(k) is the new individual generated by vertical crossover. Like individuals generated by the horizontal crossover strategy, new individuals generated by vertical crossover must undergo elite selection against their parents, and the one with higher fitness is retained as the final individual. The advantage is twofold: it increases the possibility of finding the optimum in breadth, and by involving the various dimensions it continuously improves the quality of the solution, so that even individuals trapped in a local optimum have a chance to jump out. It is not hard to see that combining horizontal and vertical crossover does balance, to a certain extent, the exploration and exploitation capabilities of the algorithm. A bottleneck in the horizontal direction can be relieved by the vertical operation, and vertical gains are fed back immediately to the horizontal crossover, after which the information spreads to the entire population. The combination of the two acts like a mesh structure that provides maximal help for optimization.
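Vertical crossover between two dimensions of a single individual, following the formula above (only dimension d1 receives the blended value):

```python
import numpy as np

def vertical_cross(x, d1, d2, rng=None):
    # MSx(k, d1) = r * x(k, d1) + (1 - r) * x(k, d2), r in [0, 1];
    # all other dimensions, including d2, are left unchanged.
    rng = rng or np.random.default_rng(4)
    r = rng.random()
    child = x.copy()
    child[d1] = r * x[d1] + (1 - r) * x[d2]
    return child
```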

3.4. Framework of IHSSA

In summary, to address the problems of the original sparrow search algorithm, which converges quickly and accurately but is prone to premature convergence, several improvements have been proposed. The improved ICMIC is applied in the initialization phase, and the hybrid opposition-based learning strategy is utilized to update the discoverers and joiners, respectively. At the same time, the vertical and horizontal crossover strategy is added in the monitor stage, so that every stage is updated and optimization is pushed as far as possible. The specific implementation steps are as follows:

Step 1.

Initialize the population and its parameters, including the population size N, the proportion of discoverers PD, the proportion of guards SD, the dimension of the objective function D, the lower and upper bounds lb and ub of the initial values, the maximum number of iterations T, the alarm threshold ST, and the solving accuracy ε.

Step 2.

Employ IICMIC (8) to initialize the population: generate N D-dimensional vectors Zi and inversely map them into the corresponding individual search space. This renewal ensures the diversity of the sparrow population.

Step 3.

Calculate the fitness f of each sparrow, select the current optimal fitness fb and its corresponding position xb, and the current worst fitness fw and its corresponding position xw.

Step 4.

According to the set ratio PD, randomly select pNum sparrows with excellent adaptability as discoverers, and the rest become joiners. Update the position of the discoverers according to formula (3).

Step 5.

According to the population fitness updated by the discoverer, an improved lens-based reverse learning strategy (12) is utilized to update the optimal value.

Step 6.

Update the position of the joiner according to formula (4).

Step 7.

Employ the generalized opposition-based learning strategy (13) to update the current global worst value.

Step 8.

Randomly generate sNum guards from the population according to the ratio SD, and perform the horizontal crossover (15) and (16) operation.

Step 9.

Perform vertical crossover operation according to formula (17), compare the degree of fitness, and save the better ones.

Step 10.

According to the current state of the sparrow population, update the optimal position xb, the best fitness value fb, the worst position xw, and the worst fitness value fw of the entire population during the entire foraging process.

Step 11.

Determine whether the iteration is over. If the algorithm has reached the maximum number of iterations, or the solution accuracy has reached the set value, the loop ends and the optimization result is output. Otherwise, the iteration counter is incremented (t = t + 1) and the algorithm returns to Step 2 for the next iteration.

Step 12 .

Output the results of IHSSA. The flow chart is shown in Figure 3.
Figure 3

IHSSA flow chart.
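Putting Steps 1-12 together, the overall loop can be sketched as follows. This is a simplified illustration, not the authors' implementation: equations (3), (4), (8), (12), (13), and (15)-(17) are not reproduced in this excerpt, so generic stand-ins are used (an SSA-style multiplicative discoverer step, plain opposition of the best, generalized opposition of the worst, and arithmetic crossover for the guards), demonstrated on the sphere function:

```python
import numpy as np

def ihssa_sketch(f, dim=10, lb=-100.0, ub=100.0, n=30, t_max=200,
                 pd=0.2, sd=0.2, seed=0):
    """Simplified walk-through of Steps 1-12 with stand-in update rules."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: chaotic-style initialization (ICMIC map, one burn-in pass)
    X = lb + (np.sin(0.7 / rng.uniform(0.1, 0.9, (n, dim))) + 1) / 2 * (ub - lb)
    fit = np.apply_along_axis(f, 1, X)                       # Step 3
    n_disc, n_guard = int(pd * n), int(sd * n)
    for t in range(t_max):                                   # Steps 11-12 loop
        order = np.argsort(fit)
        best = X[order[0]].copy()
        # Step 4: discoverers (best pNum) take shrinking multiplicative steps
        X[order[:n_disc]] *= np.exp(-(t + 1) / (rng.uniform(0.5, 1.0) * t_max))
        # Step 5: opposition of the current best (lens form with k = 1)
        opp_best = lb + ub - best
        # Step 6: joiners move stochastically toward the current best
        X[order[n_disc:]] += rng.normal(size=(n - n_disc, dim)) \
            * (best - X[order[n_disc:]])
        # Step 7: generalized opposition of the current worst individual
        da, db = X.min(0), X.max(0)
        X[order[-1]] = rng.uniform(size=dim) * (da + db) - X[order[-1]]
        # Steps 8-9: arithmetic crossover for randomly chosen guards
        g = rng.choice(n, size=n_guard, replace=False)
        mate = X[rng.choice(n, size=n_guard)]
        r = rng.uniform(size=(n_guard, dim))
        c = rng.uniform(-1, 1, size=(n_guard, dim))
        X[g] = r * X[g] + (1 - r) * mate + c * (X[g] - mate)
        X = np.clip(X, lb, ub)
        # Step 10: re-evaluate; keep the opposed best if it improves the swarm
        fit = np.apply_along_axis(f, 1, X)
        if f(opp_best) < fit.min():
            X[np.argmax(fit)] = opp_best
            fit[np.argmax(fit)] = f(opp_best)
    i = int(np.argmin(fit))
    return X[i], fit[i]

sphere = lambda x: float(np.sum(x * x))
xb, fb = ihssa_sketch(sphere)
```

Even with these generic stand-ins, the interplay of the steps (elite shrinkage, opposition of best and worst, crossover perturbation) drives the best sphere value far below its initial magnitude of roughly 3 × 10^4.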

4. Experimental Results and Analysis

4.1. Benchmark Function Test

In order to verify the effectiveness of the improved algorithm, this paper selects 14 internationally representative benchmark functions for testing. The selected benchmark functions, listed with name, expression, dimension, and search interval, are shown in Table 1. F1–F4 are unimodal functions with, in general, a single global optimum; their purpose is to test the local exploitation capability of an algorithm. F5–F7 are multimodal functions, which test the balance between exploration and exploitation. The final selections, F8–F14, include fixed-dimension and shifted/rotated functions. The theoretical optimal values of the 14 selected test functions are all 0.
Table 1

Fourteen benchmark test functions.

Function name | Function | Dimension | Interval
Sphere | F1(x) = Σ_{i=1}^{n} x_i² | 30 | [−100, 100]
Schwefel's problem 1.2 | F2(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)² | 30 | [−100, 100]
Schwefel's problem 2.21 | F3(x) = max_i{|x_i|, 1 ≤ i ≤ n} | 30 | [−100, 100]
Rosenbrock | F4(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | 30 | [−30, 30]
Rastrigin | F5(x) = Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i) + 10] | 30 | [−5.12, 5.12]
Ackley | F6(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e | 30 | [−32, 32]
Griewank | F7(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i) + 1 | 30 | [−600, 600]
Schwefel | F8(x) = 418.9829n − Σ_{i=1}^{n} x_i sin(√|x_i|) | 30 | [−500, 500]
Three-hump camel | F9(x) = 2x_1² − 1.05x_1⁴ + x_1⁶/6 + x_1x_2 + x_2² | 2 | [−5, 5]
Colville | F10(x) = 100(x_1² − x_2)² + (x_1 − 1)² + (x_3 − 1)² + 90(x_3² − x_4)² + 10.1[(x_2 − 1)² + (x_4 − 1)²] + 19.8(x_2 − 1)(x_4 − 1) | 4 | [−10, 10]
Bent cigar | F11(x) = x_1² + 10⁶ Σ_{i=2}^{n} x_i² | 10 | [−1010, 1010]
Zakharov | F12(x) = Σ_{i=1}^{n} x_i² + (Σ_{i=1}^{n} 0.5ix_i)² + (Σ_{i=1}^{n} 0.5ix_i)⁴ | 10 | [−510, 1010]
Noncontinuous rotated Rastrigin | F13(x) = Σ_{i=1}^{n} (z_i² − 10 cos(2πz_i) + 10), where X̂ = M1(5.12(x − o)/100); y_i = X̂_i if |X̂_i| ≤ 0.5, round(2X̂_i)/2 if |X̂_i| > 0.5; z = M1 Λ^10 M2 T_asy^0.2(T_osz(y)) | 10 | [−510, 510]
Levy | F14(x) = sin²(πω_1) + Σ_{i=1}^{n−1} (ω_i − 1)²[1 + 10 sin²(πω_i + 1)] + (ω_n − 1)²[1 + sin²(2πω_n)], where ω_i = 1 + (x_i − 1)/4 | 30 | [−1030, 1030]
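For reference, three of the Table 1 benchmarks written out directly (n-dimensional, global minimum 0 at the origin in all three cases):

```python
import numpy as np

def sphere(x):                       # F1
    x = np.asarray(x, float)
    return float(np.sum(x ** 2))

def rastrigin(x):                    # F5
    x = np.asarray(x, float)
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):                       # F6
    x = np.asarray(x, float)
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)
```

Sphere is the canonical unimodal exploitation test, while Rastrigin and Ackley add a lattice of local minima around the global one, which is what makes them probes of the exploration/exploitation balance.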
All the algorithms mentioned are run on a Windows 10 64-bit system with an Intel(R) Core(TM) i5-9300H CPU @ 2.40 GHz and 16 GB RAM, and MATLAB R2016b is used as the simulation platform.

4.2. Ablation Experiment

In order to verify the influence of the three improvement points on the overall performance, an ablation experiment is carried out, and the resulting comparison is analyzed. The 14 functions selected in the previous section are again used for verification, and the statistical results are divided into five settings according to the type and number of improvement points: the original SSA; ISSA-I, which combines the improved ICMIC population initialization with SSA; ISSA-II, which adds the hybrid reverse learning strategy to ISSA-I; ISSA-III, which adds the crisscross strategy to ISSA-I; and IHSSA, which combines all the innovations proposed in this paper. The data are integrated into Table 3 according to these principles.
Table 2

Algorithm parameters

Algorithm | Parameters
GWO | a_max = 2, a_min = 0
PSO | c1 = c2 = 1.49445
SSA | PD = 0.2, ST = 0.6, SD = 0.2
CSSA | PD = 0.2, ST = 0.8, SD = 0.2
LSSA | PD = 0.2, SD = 0.2
GSSA | PD = 0.3, ST = 0.6, SD = 0.7
YSSA | PD = 0.2, SD = 0.2
IHSSA | PD = 0.2, SD = 0.2
It can be seen from Table 3 that, during the improvement process, the indicators of 8 functions show no obvious change. Among them, every index of the 7 functions F1, F5, F7, F9, F11, F12, and F13 already reaches the optimal value of 0 under the original SSA, and the improved algorithms keep this optimal state (F6 likewise stays at its floor of 8.88E-16). As the number of improvement points increases, the optimization effect on the five functions F2, F3, F4, F10, and F14 becomes more significant: except for F14, where ISSA-III obtains the best result, IHSSA obtains the best values on the other four functions, in several cases improving by many orders of magnitude. On F8, although the average and standard deviation do not improve, the best value improves by seven orders of magnitude. Overall, the IHSSA that combines all the innovation points proposed in this article has the best effect; each innovation point plays a role at its step of the algorithm, and combining IICMIC with population initialization brings especially obvious gains.

4.3. Population Diversity Analysis

Population diversity is one of the important performance indexes for measuring the pros and cons of an algorithm, and it reflects, to a certain extent, whether the algorithm has fallen into a local optimum. In this paper, the population distribution at an early stage of the iteration (iteration 10) is taken as a reference. The unimodal function F1 and the multimodal function F8 from Table 1 are selected as the research objects to contrast IHSSA with the original SSA. Figures 4(a) and 4(b) show the individual distributions of SSA and IHSSA on F1, respectively, and Figures 4(c) and 4(d) show the individual distributions of SSA and IHSSA on F8. The theoretical optimal value of F1 is 0, and the theoretical optimal solution of F8 lies at approximately 420 in each dimension.
Figure 4

Population distribution map. (a) SSA. (b) IHSSA. (c) SSA. (d) IHSSA.

As can be seen from Figure 4, in the early stage of the algorithm iteration, the distribution in Figure 4(a) is linear, while the IHSSA in Figure 4(b) is more widely distributed. Compared with the poor aggregation state of SSA in Figure 4(c), the distribution shown in Figure 4(d) is closer to the theoretical optimal value and presents a wider distribution field. It can be seen that the improved IHSSA in this paper increases the diversity of the population to a certain extent and reduces the invalid search of individuals.
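The paper does not specify its diversity index in this excerpt; one common choice, which would quantify the visual difference between Figures 4(a)/4(c) and 4(b)/4(d), is the mean Euclidean distance of individuals to the population centroid:

```python
import numpy as np

def diversity(pop):
    """One common population-diversity index (assumed; the paper does not
    give its exact index here): mean Euclidean distance of individuals to
    the population centroid."""
    pop = np.asarray(pop, float)
    centroid = pop.mean(axis=0)
    return float(np.linalg.norm(pop - centroid, axis=1).mean())
```

A widely spread early-iteration population, as produced by the chaotic initialization, yields a larger index than a population already collapsed onto a line or cluster.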

4.4. Comparison with Other Optimization Algorithms

The 14 standard test functions proposed in the previous section are utilized to test the performance of the improved IHSSA. Nine intelligent optimization algorithms are chosen for comparison: particle swarm optimization (PSO), the whale optimization algorithm (WOA), the grey wolf optimizer (GWO), teaching-learning-based optimization (TLBO), the sparrow search algorithm (SSA), the chaos sparrow search algorithm (CSSA) proposed by Lv et al. [2], LSSA improved by Zhu et al. [33], GSSA improved by Chen et al., and YSSA proposed by Yan et al. [34]. To ensure the objectivity of the experiment and the fairness of the comparison, the population size and maximum iteration number of each algorithm are 100 and 500, respectively. The other parameter settings of the algorithms are shown in Table 2; given the importance of parameter values to experimental results, these are the values set by the authors when each algorithm was first proposed, whose feasibility has been demonstrated by a large number of experiments. To avoid chance results, each test function is run 30 times independently, and the average value, standard deviation, and best value are calculated. Meanwhile, the average running time of each algorithm on each function is recorded as a performance reference. The experimental data are shown in Table 4.
Table 3

Ablation experiment.

Function | Index | SSA | ISSA-I | ISSA-II | ISSA-III | IHSSA
F1 | Avg | 0 | 0 | 0 | 0 | 0
F1 | Std | 0 | 0 | 0 | 0 | 0
F1 | Best | 0 | 0 | 0 | 0 | 0

F2 | Avg | 4.9592E-290 | 0 | 1.1301E-287 | 0 | 0
F2 | Std | 0 | 0 | 0 | 0 | 0
F2 | Best | 0 | 0 | 0 | 0 | 0

F3 | Avg | 2.9952E-193 | 1.0566E-230 | 5.119E-163 | 1E-233 | 0
F3 | Std | 0 | 0 | 0 | 0 | 0
F3 | Best | 0 | 0 | 4.7959E-165 | 0 | 0

F4 | Avg | 1.3694E-05 | 8.13949E-06 | 2.59124E-05 | 4E-07 | 2E-07
F4 | Std | 3.77399E-05 | 2.05064E-05 | 5.66546E-05 | 2E-06 | 8E-07
F4 | Best | 1.36288E-09 | 1.88962E-10 | 2.45252E-10 | 7E-12 | 0

F5 | Avg | 0 | 0 | 0 | 0 | 0
F5 | Std | 0 | 0 | 0 | 0 | 0
F5 | Best | 0 | 0 | 0 | 0 | 0

F6 | Avg | 8.88178E-16 | 8.88178E-16 | 8.88178E-16 | 9E-16 | 9E-16
F6 | Std | 0 | 0 | 0 | 0 | 0
F6 | Best | 8.88178E-16 | 8.88178E-16 | 8.88178E-16 | 9E-16 | 9E-16

F7 | Avg | 0 | 0 | 0 | 0 | 0
F7 | Std | 0 | 0 | 0 | 0 | 0
F7 | Best | 0 | 0 | 0 | 0 | 0

F8 | Avg | 3958.182107 | 3972.12115 | 2628.086289 | 4674.8 | 3377.4
F8 | Std | 712.2420957 | 527.205402 | 1002.927817 | 770.11 | 1620.5
F8 | Best | 2389.812363 | 3085.587428 | 0.014612768 | 3396.7 | 0.0008

F9 | Avg | 0 | 0 | 0 | 0 | 0
F9 | Std | 0 | 0 | 0 | 0 | 0
F9 | Best | 0 | 0 | 0 | 0 | 0

F10 | Avg | 4.82652E-08 | 3.93007E-08 | 1.53723E-08 | 6E-08 | 2E-08
F10 | Std | 1.02745E-07 | 8.62134E-08 | 3.49383E-08 | 2E-07 | 5E-08
F10 | Best | 2.87751E-13 | 2.76833E-14 | 2.82932E-20 | 2E-13 | 4E-15

F11 | Avg | 0 | 0 | 0 | 0 | 0
F11 | Std | 0 | 0 | 0 | 0 | 0
F11 | Best | 0 | 0 | 0 | 0 | 0

F12 | Avg | 0 | 0 | 0 | 1E-188 | 0
F12 | Std | 0 | 0 | 0 | 0 | 0
F12 | Best | 0 | 0 | 0 | 0 | 0

F13 | Avg | 0 | 0 | 0 | 0 | 0
F13 | Std | 0 | 0 | 0 | 0 | 0
F13 | Best | 0 | 0 | 0 | 0 | 0

F14 | Avg | 5.14521E-10 | 8.78436E-10 | 1.26547E-09 | 1E-11 | 2E-10
F14 | Std | 1.83095E-09 | 4.04952E-09 | 2.0902E-09 | 5E-11 | 8E-10
F14 | Best | 2.07524E-20 | 3.3079E-14 | 3.0117E-11 | 2E-15 | 2E-14
Table 4

Comparisons of IHSSA and other seven algorithms for 14 test functions.

Algorithm | Avg | Std | Best | Run time
F1:
WOA | 6.47271E-97 | 2.45955E-96 | 1.4869E-104 | 0.00114
GWO | 9.58408E-41 | 1.61375E-40 | 1.85345E-42 | 0.001135
PSO | 4.51174E-11 | 6.25117E-11 | 1.31715E-12 | 0.003173933
TLBO | 2.29067E-85 | 1.79774E-85 | 2.6797E-86 | 0.0051
SSA | 0 | 0 | 0 | 0.001907
CSSA | 0 | 0 | 0 | 0.002318533
LSSA | 0 | 0 | 0 | 0.002167
GSSA | 3.826E-123 | 1.4763E-122 | 0 | 0.003030333
YSSA | 0 | 0 | 0 | 0.0024253
IHSSA | 0 | 0 | 0 | 0.00231

F2:
WOA | 14907.66827 | 7290.613484 | 5245.501087 | 0.000753
GWO | 2.01843E-11 | 4.84067E-11 | 2.37886E-15 | 0.001054
PSO | 6.159248452 | 3.416561147 | 2.274329699 | 0.003053
TLBO | 1.49173E-15 | 1.26326E-15 | 1.28014E-16 | 0.005033
SSA | 4.9592E-290 | 0 | 0 | 0.001873
CSSA | 0 | 0 | 0 | 0.001958
LSSA | 0 | 0 | 0 | 0.002093
GSSA | 1.03442E-88 | 1.13315E-87 | 0 | 0.002994
YSSA | 0 | 0 | 0 | 0.00206
IHSSA | 0 | 0 | 0 | 0.001953

F3:
WOA | 34.68527033 | 29.6668357 | 4.83669E-05 | 0.000001
GWO | 2.5165E-10 | 2.80518E-10 | 3.92494E-11 | 0.000071
PSO | 0.210838877 | 0.155290345 | 0.057831908 | 0.002197
TLBO | 2.30203E-34 | 1.48485E-34 | 3.57945E-35 | 0.004141
SSA | 2.9952E-193 | 0 | 0 | 0.000924
CSSA | 0 | 0 | 0 | 0.00118
LSSA | 0 | 0 | 0 | 0.001134
GSSA | 4.87917E-90 | 2.20121E-89 | 0 | 0.002007
YSSA | 0 | 0 | 0 | 0.001297
IHSSA | 0 | 0 | 0 | 0.001171

F4:
WOA | 28.72457614 | 0.198079568 | 27.98805166 | 0.000752
GWO | 1.62822E+35 | 6.51934E+35 | 38250907192 | 0.001087
PSO | 3.24434E+87 | 4.65939E+86 | 2.29319E+87 | 0.002976
TLBO | 420.4775663 | 1327.016075 | 21.95591078 | 0.004966
SSA | 1.3694E-05 | 3.77399E-05 | 1.36288E-09 | 0.001724
CSSA | 6.06157E-06 | 1.48998E-05 | 1.03023E-08 | 0.001897
LSSA | 5.93493E-06 | 1.14239E-05 | 0 | 0.001974
GSSA | 5.11701E-07 | 5.04167E-07 | 1.6666E-14 | 0.002881
YSSA | 1.50157E-05 | 3.02452E-05 | 2.37868E-09 | 0.002006
IHSSA | 2.40169E-07 | 8.20733E-07 | 0 | 0.001887

F5:
WOA | 0 | 0 | 0 | 0.000587
GWO | 1.581667461 | 2.939452422 | 0 | 0.001086
PSO | 45.60347387 | 12.12604405 | 24.87396229 | 0.003016
TLBO | 6.383023506 | 4.955050004 | 0 | 0.005033
SSA | 0 | 0 | 0 | 0.001767
CSSA | 0 | 0 | 0 | 0.001891
LSSA | 0 | 0 | 0 | 0.001917
GSSA | 0 | 0 | 0 | 0.0028
YSSA | 0 | 0 | 0 | 0.002885
IHSSA | 0 | 0 | 0 | 0.00188

F6:
WOA | 5.15143E-15 | 2.16807E-15 | 8.88178E-16 | 0.000032
GWO | 2.6823E-14 | 3.63147E-15 | 1.86517E-14 | 0.000032
PSO | 0.343589201 | 0.596942294 | 1.24699E-06 | 0.002345
TLBO | 0.031044156 | 0.170035847 | 4.44089E-15 | 0.004333
SSA | 8.88178E-16 | 0 | 8.88178E-16 | 0.001045
CSSA | 8.88178E-16 | 0 | 8.88178E-16 | 0.001243
LSSA | 8.88178E-16 | 0 | 8.88178E-16 | 0.001302
GSSA | 8.88178E-16 | 1.00293E-31 | 8.88178E-16 | 0.002071
YSSA | 8.88178E-16 | 1.00293E-31 | 8.88178E-16 | 0.001365
IHSSA | 8.88178E-16 | 0 | 8.88178E-16 | 0.001232

F7:
WOA | 0.00420253 | 0.012960719 | 0 | 0.000702
GWO | 0.001780922 | 0.004128695 | 0 | 0.000986
PSO | 0.017048805 | 0.019184635 | 6.22012E-11 | 0.002873
TLBO | 0 | 0 | 0 | 0.004866
SSA | 0 | 0 | 0 | 0.001613
CSSA | 0 | 0 | 0 | 0.001893
LSSA | 0 | 0 | 0 | 0.001803
GSSA | 0 | 0 | 0 | 0.002723
YSSA | 0 | 0 | 0 | 0.001993
IHSSA | 0 | 0 | 0 | 0.001885

F8:
WOA | 749.0614488 | 1053.51792 | 0.142228645 | 0.001887
GWO | 5984.325616 | 512.034691 | 5075.836656 | 0.001087
PSO | 5822.761982 | 781.2113436 | 3040.698318 | 0.002733
TLBO | 5050.58229 | 1189.147899 | 3306.384094 | 0.004777
SSA | 3958.182107 | 712.2420957 | 2389.812363 | 0.001487
CSSA | 1619.35044 | 977.2374638 | 217.1401425 | 0.001995
LSSA | 4878.962977 | 846.4505234 | 3517.371964 | 0.001683
GSSA | 678.1120161 | 1382.220669 | 0.000381827 | 0.002487
YSSA | 2178.636278 | 2045.618662 | 0.000381827 | 0.002057
IHSSA | 3377.438965 | 1620.549772 | 0.000815262 | 0.001987

F9:
WOA | 1.2342E-117 | 5.3288E-117 | 2.672E-142 | 0.000885
GWO | 0 | 0 | 0 | 0.001102
PSO | 7.15959E-48 | 2.6399E-47 | 6.0523E-53 | 0.002666
TLBO | 6.1741E-183 | 0 | 1.5723E-188 | 0.004666
SSA | 0 | 0 | 0 | 0.001424
CSSA | 0 | 0 | 0 | 0.001911
LSSA | 0 | 0 | 0 | 0.001614
GSSA | 0 | 0 | 0 | 0.002457
YSSA | 0 | 0 | 0 | 0.002011
IHSSA | 0 | 0 | 0 | 0.001902

F10:
WOA | 0.754881452 | 1.386931349 | 0.001459345 | 0.000757
GWO | 0.600742635 | 1.39373905 | 2.53554E-05 | 0.000965
PSO | 0.000892468 | 0.000922365 | 1.42838E-06 | 0.002883
TLBO | 3.78444E-06 | 9.24279E-06 | 3.07036E-08 | 0.004883
SSA | 4.82652E-08 | 1.02745E-07 | 2.87751E-13 | 0.001633
CSSA | 6.06071E-08 | 1.74794E-07 | 4.9792E-14 | 0.001969
LSSA | 1.87914E-06 | 3.82252E-06 | 0 | 0.001777
GSSA | 9.6035E-07 | 1.483E-06 | 2.07901E-28 | 0.002665
YSSA | 5.06037E-08 | 9.41366E-08 | 2.87751E-13 | 0.002075
IHSSA | 1.78222E-08 | 5.32296E-08 | 3.72464E-15 | 0.001965

F11:
WOA | 6.5603E-78 | 3.17192E-77 | 9.75895E-90 | 0.00132
GWO | 6.77957E-66 | 2.37196E-65 | 2.19507E-70 | 0.000106
PSO | 1.0187E+26 | 2.21398E+25 | 4.6921E+25 | 0.002255
TLBO | 4.82046E-85 | 5.52867E-85 | 1.68795E-86 | 0.004233
SSA | 0 | 0 | 0 | 0.000977
CSSA | 0 | 0 | 0 | 0.001212
LSSA | 0 | 0 | 0 | 0.001273
GSSA | 0 | 0 | 0 | 0.002087
YSSA | 0 | 0 | 0 | 0.001338
IHSSA | 0 | 0 | 0 | 0.001206

F12:
WOA | 6.6988E+16 | 9.54778E+16 | 39646.29751 | 0.000265
GWO | 2.61392E+15 | 3.73552E+15 | 1.47252E+14 | 0.000282
PSO | 2.38287E+43 | 1.23114E+43 | 4.17747E+42 | 0.002377
TLBO | 9.02546E+14 | 4.72246E+14 | 2.97075E+14 | 0.004306
SSA | 0 | 0 | 0 | 0.000983
CSSA | 0 | 0 | 0 | 0.001293
LSSA | 0 | 0 | 0 | 0.001166
GSSA | 0 | 0 | 0 | 0.001983
YSSA | 0 | 0 | 0 | 0.001412
IHSSA | 0 | 0 | 0 | 0.001282

F13:
WOA | 0.7 | 0 | 0 | 0.001367
GWO | 0.8 | 2.006884702 | 0 | 0.000958
PSO | 1.22453E+20 | 2.91996E+19 | 5.79476E+19 | 0.003133
TLBO | 4.87463304 | 0.988415075 | 2.761251106 | 0.005187
SSA | 0 | 0 | 0 | 0.001922
CSSA | 0 | 0 | 0 | 0.002064
LSSA | 0 | 0 | 0 | 0.002097
GSSA | 0 | 0 | 0 | 0.002965
YSSA | 0 | 0 | 0 | 0.002185
IHSSA | 0 | 0 | 0 | 0.002057

F14:
WOA | 591670353.7 | 3240707887 | 1.967264854 | 0.000187
GWO | 5.22374E+34 | 1.32705E+35 | 5.04003E+32 | 0.000466
PSO | 1.94575E+60 | 2.11926E+59 | 1.42718E+60 | 0.002175
TLBO | 0.320134144 | 0.127010931 | 0.093171452 | 0.00516
SSA | 5.14521E-10 | 1.83095E-09 | 2.07524E-20 | 0.000983
CSSA | 3.47974E-10 | 7.18343E-10 | 9.49837E-14 | 0.001377
LSSA | 1.00461E-06 | 2.21303E-06 | 1.49976E-32 | 0.001183
GSSA | 1.45943E-07 | 2.58657E-07 | 7.98889E-19 | 0.002022
YSSA | 2.20337E-07 | 6.82432E-07 | 1.49976E-32 | 0.001485
IHSSA | 1.96838E-10 | 7.70225E-10 | 2.39969E-14 | 0.001365
It can be seen from Table 4 that, compared with the other SSA variants, IHSSA achieves the same results on 7 functions, and on 6 of them (F1, F5, F7, F9, F11, and F13) the optimal solution 0 is found. On the remaining 7 functions there are obvious improvements, and the average optimization values on F2, F3, and F4 improve by multiple orders of magnitude. Compared with the WOA, GWO, and TLBO algorithms, the optimal solution 0 is found on F5, F9, and F7, respectively, and the results on the other functions are better. Compared with the PSO algorithm, the five functions F4, F11, F12, F13, and F14 show significant improvement, most prominently on F4. In addition, compared with the basic SSA, the best values found on three functions improve significantly, and compared with the other two improved SSA algorithms, the results on F4, F10, and F14 are better. On F8, apart from GSSA, the SSA-based algorithms do not perform as well as WOA, especially in the average value. Overall, the IHSSA proposed in this paper performs best on the 14 functions, while PSO performs worst. Figure 5 shows the convergence curves of 8 algorithms on 10 functions. Among the five functions F1, F2, F3, F6, and F12, IHSSA has the fastest convergence speed and higher convergence accuracy. On F4, F10, and F14, although IHSSA converges at about the same speed as the other SSA variants, it clearly obtains better solutions. On F8, WOA shows a clear advantage, and GSSA shows superior optimization ability over the other SSA variants; IHSSA outperforms SSA and LSSA in convergence accuracy, but compared with GSSA, CSSA, and YSSA its accuracy is still far from the theoretical optimum. In terms of running time, the SSA variants consume more time than the original SSA. However, among the variants, LSSA and IHSSA have relatively short running times, each being the most efficient on 7 of the functions.
Figure 5

Convergence curves of eight algorithms for ten representative test functions. Note. (a)–(j) correspond to F1, F2, F3, F4, F6, F8, F9, F10, F12, and F14, respectively. On the four functions F5, F7, F11, and F13 the algorithms reach the optimal value after only a few iterations; since these curves converge almost immediately, they are omitted for clarity of presentation.

In general, IHSSA has the fastest convergence speed and better convergence accuracy; that is, the quality of the algorithm's optimal solution is better.

4.5. Wilcoxon Rank Sum Test

Derrac et al. proposed that, for the performance evaluation of improved intelligent optimization algorithms, a data comparison based only on the average, standard deviation, and best values is not convincing enough; a statistical test is also needed to establish whether the improvement is significant. To judge whether the results of the improved IHSSA differ significantly from those of the other algorithms, the Wilcoxon rank sum test is performed at a significance level of 5% [23]. The test principle is briefly as follows: when P < 0.05, the two algorithms are considered significantly different; when P ≥ 0.05, their performance is considered equivalent and the difference is not obvious. In this article, values with P > 0.05 are denoted N/A. Table 5 shows the P values of the Wilcoxon rank sum tests between IHSSA and the other algorithms on the 14 selected benchmark functions. The results show that P < 0.05 accounts for the majority of cases: IHSSA improves considerably over the SSA algorithm, and its superiority is also statistically significant, which confirms the higher convergence accuracy of the improved algorithm.
Table 5

P values of the Wilcoxon rank sum test.

Function | WOA | GWO | PSO | TLBO | SSA | CSSA | LSSA | GSSA | YSSA
F1 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A | N/A | 0.049941793 | N/A | 0.006518796
F2 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 0.333710696 | N/A | N/A | 4.79E-08 | N/A
F3 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 0.002788006 | N/A | N/A | 1.93E-10 | N/A
F4 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 6.53E-08 | 9.26E-09 | 0.0239 | 9.13E-04 | 7.49E-08
F5 | N/A | 2.15E-06 | 1.21E-12 | 1.93E-10 | N/A | N/A | N/A | N/A | N/A
F6 | 2.53E-11 | 5.67E-13 | 1.21E-12 | 3.50E-13 | N/A | N/A | N/A | N/A | N/A
F7 | 0.081522972 | 0.021577192 | 1.21E-12 | N/A | N/A | N/A | N/A | N/A | N/A
F8 | 1.49E-06 | 5.57E-10 | 4.18E-09 | 4.12E-06 | 0.02920541 | 3.09E-06 | 1.61E-06 | 6.51E-07 | 0.1259
F9 | 1.21E-12 | N/A | 1.21E-12 | 1.21E-12 | N/A | N/A | N/A | N/A | N/A
F10 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 1.46E-10 | 0.222572896 | 0.620403721 | 0.003475701 | 0.0199 | 0.0271
F11 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | N/A | N/A | N/A | N/A | N/A
F12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | 1.21E-12 | N/A | N/A | N/A | N/A | N/A
F13 | 0.0815 | 0.0028 | 1.21E-12 | 1.21E-12 | N/A | N/A | N/A | N/A | N/A
F14 | 3.02E-11 | 3.02E-11 | 3.02E-11 | 3.02E-11 | N/A | N/A | 0.049941793 | N/A | 0.006518796
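The rank sum test of Table 5 compares the 30 run results of two algorithms without assuming normality. A minimal self-contained implementation using the normal approximation (adequate at 30 runs per sample; library routines such as SciPy's `ranksums` would also serve) is:

```python
import math
import numpy as np

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = a.size, b.size
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty_like(combined)
    ranks[order] = np.arange(1, n1 + n2 + 1)
    for v in np.unique(combined):        # average ranks over ties
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    w = ranks[:n1].sum()                 # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0        # mean of w under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

When the two samples of 30 run results are clearly separated, the p-value falls far below the 0.05 threshold used in Table 5; for statistically indistinguishable samples it approaches 1.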

4.6. Time Complexity Analysis

Time complexity is one of the important indicators for judging the performance of an algorithm and estimating its running cost. Whether the improved IHSSA increases the time complexity is analyzed from both the macro- and microperspectives. From the macroperspective, suppose the maximum number of iterations is M, the dimension is D, and the population size is P; then, according to the usual complexity analysis for intelligent optimization algorithms, the time complexity of SSA is O1 = P × M × D. Although IHSSA adds some inner loops, the structure of the algorithm is unchanged, so its complexity is O2 = P × M × D; obviously O1 = O2, and the complexity does not increase at the macroscopic level. From the microperspective, the cost of IHSSA does increase to a certain extent. Assuming the proportions of discoverers and joiners are A and B, respectively, the cost of lens-based reverse learning is O3 = A × P × M × D, that of generalized opposition-based learning is O4 = M, and the crisscross update of the alert phase adds O5 = B × P × M × D. The IICMIC initialization phase adds no complexity. In summary, the microscopic overhead is O = O3 + O4 + O5 = (A + B) × P × M × D + M, but no step changes the order of magnitude: the total time complexity remains P × M × D. From either point of view, the time complexity is unchanged, which supports the feasibility of the algorithmic improvements.

5. Application in Constrained Engineering Optimization Problem

5.1. I-Shaped Beam

The design optimization problem of the I-beam is one of the classic engineering optimization problems. The goal is to minimize the vertical deflection by optimizing the flange width x1, the section height x2, and the two thicknesses (x3, x4). The objective function and constraint conditions of this optimization problem are as follows: Minimize: Subject to: Variable range:

5.2. Three-Bar Truss Design Problem

The design problem of the three-bar truss is another classic problem in engineering case studies. In order to minimize the weight subject to stress, deflection, and buckling constraints, the optimal cross-sectional areas of the two bar groups A1 and A2 (x1, x2) must be determined. The specific mathematical formulation is as follows: Minimize: Subject to: Variable range:
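The equations are omitted in this extract; the standard formulation of this problem (with assumed parameters l = 100 cm, P = 2 kN/cm², σ = 2 kN/cm²) reproduces the constraint values reported in Table 7, so it can serve as a sketch of the objective and constraints:

```python
import math

def truss(x1, x2, l=100.0, P=2.0, sigma=2.0):
    """Standard three-bar truss formulation (assumed; the extract omits the
    equations): minimize weight f subject to three stress constraints g <= 0."""
    f = (2 * math.sqrt(2) * x1 + x2) * l
    g1 = (math.sqrt(2) * x1 + x2) / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma
    g2 = x2 / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma
    g3 = 1.0 / (math.sqrt(2) * x2 + x1) * P - sigma
    return f, (g1, g2, g3)

# Evaluate the IHSSA solution reported in Table 7
f, g = truss(0.788674, 0.408251)
```

Plugging in the IHSSA solution yields a weight of about 263.8958 with g2 ≈ −1.4641 and g3 ≈ −0.5359, matching the constraint columns of Table 7.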

5.3. Cantilever Beam

This application is a structural engineering design problem. The cantilever arm consists of five hollow blocks, and the purpose of the design is to maximize rigidity. Increasing the cross-sectional height of a block improves rigidity, but if the section height increases, the section width must be reduced in order to reduce, or at least maintain, the mass. Therefore, the cross-sectional sizes (heights or widths) are the design variables of this experiment. The modeling expression of this case is as follows: Minimize: Subject to: Variable range: The three classic constrained engineering optimization problems, the I-beam optimization problem, the three-bar truss design problem, and the cantilever beam problem, are representative for verifying the feasibility of the algorithm. The parameters and constraints of the three engineering problems are integrated in Tables 6–8, respectively. Over decades of research [14, 35–41], generations of researchers have designed many kinds of optimizers for these three nonlinear problems. The statistical results of these optimization methods, including the IHSSA proposed in this paper, are shown in Tables 6–8, with the optimal solutions denoted f(X). It can be seen from Tables 7 and 8 that the IHSSA algorithm can be applied to engineering optimization problems and performs better than the original SSA algorithm; compared with the other optimizers shown in [27], the overall result is also slightly superior.
Table 6

Best results for the optimal design of I-shaped beam problem.

Algorithm | x1 | x2 | x3 | x4 | g1(X) | g2(X) | f(X)
IARSM | 79.99 | 48.42 | 0.9 | 2.4 | 0.0869999 | −1.52454 | 0.0131
CS | 80 | 50 | 0.9 | 2.3216 | −0.012005 | −1.57002 | 0.01307
GWO | 80 | 50 | 0.9 | 2.3217 | −0.009059 | −1.570071 | 0.0131
EMGO-FCR | 80 | 50 | 0.9 | 2.32 | −0.176 | −1.567179 | 0.0131
SOS | 80 | 50 | 0.9 | 2.3217 | −0.000222 | −1.570224 | 0.01307
AEFA-C | 79.9671 | 49.99 | 0.9 | 2.3164 | −0.560371 | −1.559518 | 0.0131
SSA | 79.99992 | 49.99982 | 0.9 | 2.321795732 | −0.00058001 | −1.570210836 | 0.013074174
IHSSA | 80 | 50 | 0.9 | 2.32179226 | −2.06E−08 | −1.570228475 | 0.013074119
Table 7

Best results of the three-bar truss design problem.

Algorithm | x1 | x2 | g1(X) | g2(X) | g3(X) | f(X)
GA | 0.788915 | 0.407569 | 9.64E-07 | −1.464873605 | −0.53512542 | 263.8958857
PSO | 0.788669 | 0.408265 | 4.8650E-07 | −1.464082376 | −0.535917137 | 263.8958434
ICA | 0.788625 | 0.408389 | 8.42E-07 | −1.463941244 | −0.536057913 | 263.8958452
CS | 0.78867 | 0.40902 | −2.90E-04 | −0.26853 | −0.73176 | 263.9716
WCA | 0.788651 | 0.408316 | 0.00E+00 | −1.464024 | −0.535975 | 263.895843
GWO | 0.788648 | 0.408325 | 3.34E-08 | −1.464014397 | −0.535985569 | 263.8960063
ALO | 0.788663 | 0.408283 | −5.32E-12 | −1.464062005 | −0.53593799 | 263.8958434
MFO | 0.788245 | 0.409467 | 7.71E-12 | −1.462717072 | −0.537282927 | 263.8959796
WSA | 0.788683 | 0.408276 | 3.00E-10 | −1.46407036 | −0.53587454 | 263.8958434
SSA | 0.788628 | 0.408381 | 5.43E-07 | −1.463950108 | −0.536049349 | 263.8957734
IHSSA | 0.788674 | 0.408251 | 5.13E-09 | −1.464098378 | −0.535901617 | 263.8958427
Table 8

Best results of the cantilever beam design example.

Algorithm | x1 | x2 | x3 | x4 | x5 | g1(X) | f(X)
CS | 6.0089 | 5.3049 | 4.5023 | 3.5077 | 2.1504 | −6.45E-05 | 1.33999
MFO | 5.98487 | 5.31672 | 4.49733 | 3.51361 | 2.16162 | 4.18E-09 | 1.33998
ALO | 6.01812 | 5.31142 | 4.48836 | 3.49751 | 2.15832 | −3.00E-06 | 1.33995
SOS | 6.01878 | 5.30344 | 4.49587 | 3.49896 | 2.15564 | 1.39E-04 | 1.33996
SSA | 5.99215 | 5.28536 | 4.54216468 | 3.482721286 | 2.174334383 | −0.00014501 | 1.3401483
IHSSA | 5.99349 | 5.33819 | 4.501471252 | 3.4892014 | 2.152033962 | −1.92E-05 | 1.340002
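As a sanity check, the standard cantilever-beam formulation (assumed, since the extract omits the equations) reproduces the IHSSA row of Table 8:

```python
def cantilever(x):
    """Standard cantilever-beam formulation (assumed; the extract omits the
    equations): minimize 0.0624 * sum(x_i) subject to
    61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 <= 1."""
    f = 0.0624 * sum(x)
    g = (61 / x[0]**3 + 37 / x[1]**3 + 19 / x[2]**3
         + 7 / x[3]**3 + 1 / x[4]**3 - 1)
    return f, g

# Evaluate the IHSSA solution reported in Table 8
f, g = cantilever([5.99349, 5.33819, 4.501471252, 3.4892014, 2.152033962])
```

The objective evaluates to about 1.340002 with the constraint marginally satisfied (g ≈ −2 × 10⁻⁵), in agreement with the f(X) and g1(X) columns of Table 8.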

6. Conclusion

Based on the basic sparrow search algorithm, this paper proposes an improved sparrow search algorithm (IHSSA) that integrates infinitely folded iterative chaotic mapping and a hybrid reverse learning strategy to address its shortcomings. Firstly, an improved infinitely folded iterative chaotic map (IICMIC) is introduced in the population initialization stage to widen the search range of the population. Then, a hybrid reverse learning strategy is applied after the discoverer update and the joiner update to refresh the global optimal position and the current worst position, respectively; this improves the quality of the solutions and helps the algorithm avoid falling into local optima. Moreover, incorporating the crisscross (vertical and horizontal crossover) strategy into the scout stage helps balance the algorithm's exploration and exploitation capabilities. In general, IHSSA achieves better optimization accuracy, stronger exploitation ability, and an enhanced global search ability. The comparison results on the 14 standard test functions also show that the new algorithm is generally better than several well-known heuristic algorithms such as WOA, GWO, TLBO, and PSO, as well as the recently proposed SSA and its excellent variants, and that IHSSA has strong stability and robustness. In terms of running time, IHSSA takes the least time on seven of the functions, showing high computational efficiency. In addition, the quality of its convergence accuracy is confirmed by the Wilcoxon rank sum test, and the complexity analysis shows that the modifications do not increase the order of magnitude of the time complexity.
Moreover, the application of the improved algorithm to three constrained engineering optimization problems demonstrates its feasibility and effectiveness, with results better than those of other optimizers, which makes the research more meaningful. However, research on IHSSA is still in its infancy. In follow-up work, we will continue to improve the sparrow search algorithm and other swarm intelligence algorithms in order to obtain better accuracy and convergence speed, and we will apply the improved algorithm and its innovations to practical engineering optimization problems, so as to broaden the application field of the algorithm and further verify its feasibility and effectiveness.