
Gaussian Perturbation Specular Reflection Learning and Golden-Sine-Mechanism-Based Elephant Herding Optimization for Global Optimization Problems.

Yuxian Duan1,2, Changyun Liu1, Song Li1, Xiangke Guo1, Chunlin Yang2,3.   

Abstract

Elephant herding optimization (EHO) has received widespread attention due to its few control parameters and simple operation but still suffers from slow convergence and low solution accuracy. In this paper, an improved algorithm that addresses these shortcomings, called Gaussian perturbation specular reflection learning and golden-sine-mechanism-based EHO (SRGS-EHO), is proposed. First, specular reflection learning is introduced into the algorithm to enhance the diversity and ergodicity of the initial population and improve the convergence speed, while Gaussian perturbation is used to further increase the diversity of the initial population. Second, the golden sine mechanism is introduced to improve the way the position of the patriarch of each clan is updated, which makes the best-positioned individual of each generation move toward the global optimum and enhances the global exploration and local exploitation abilities of the algorithm. To evaluate the effectiveness of the proposed algorithm, tests are performed on 23 benchmark functions. In addition, Wilcoxon rank-sum tests and Friedman tests at the 5% significance level are invoked to compare it with eight other metaheuristic algorithms, and a parameter sensitivity analysis and experiments on the different modifications are also carried out. To further validate the effectiveness of the enhanced algorithm, SRGS-EHO is applied to solve two classic engineering problems with constrained search spaces (the pressure-vessel design problem and the tension/compression spring design problem). The results show that the algorithm can be applied to solve problems encountered in real production.
Copyright © 2021 Yuxian Duan et al.

Year:  2021        PMID: 34335728      PMCID: PMC8289615          DOI: 10.1155/2021/9922192

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Many challenging problems in applied mathematics and practical engineering can be considered as processes of optimization [1]. Optimization is the process of selecting or determining the best results from a set of limited resources [2]. In general, optimization problems involve several explicit decision variables, objective functions, and constraints. In the real world, however, optimization problems vary widely, from single- to multiobjective, from continuous to discrete, and from constrained to unconstrained. Optimization algorithms are used to obtain the values of the decision variables and optimize the objective function within a certain range of constraints and search domains. If the search domain is compared to a forest, the optimization algorithm must find the area where the prey is most likely to be found; in this way, the optimization problem can be solved easily rather than laboriously. Optimization algorithms are divided into two categories, namely, exact algorithms and heuristic algorithms. Traditional exact algorithms (e.g., branch-and-bound algorithms and dynamic programming), although capable of giving global optima in finite time, must rely on gradient information, and their runtime grows rapidly with the number of variables [3]. Therefore, it is difficult for them to achieve good results on the many nondifferentiable, noncontinuous, and complex high-dimensional problems found in the real world [4]. As research has progressed, the emergence of heuristic algorithms (local search, tabu search, simulated annealing, etc.) has provided ideas for solving complex problems. Owing to the introduction of greedy strategies and fixed search steps, the number of iterations of such algorithms is reduced [5]. For NP-hard problems, an approximate solution of acceptable accuracy can be given.
However, the drawback is that such algorithms are greedy and often fall into local optima when solving complex problems, thus narrowing the scope of their application [6]. The metaheuristic algorithm that emerged later is a higher-level heuristic strategy with a problem-independent algorithmic framework, providing a set of guidelines and strategies for developing heuristic algorithms. In addition, it has fewer control parameters, greater randomness, more flexibility, and greater simplicity, can effectively handle discrete variables, and is computationally less expensive [7]. Compared with exact methods and heuristic algorithms, metaheuristic algorithms are more applicable to solving complex optimization problems. Owing to these advantages, continuous and binary versions of many metaheuristics have been developed to suit continuous and discrete optimization problems, respectively. Following this trend, metaheuristic algorithms have in recent years become more popular among researchers and are widely used in various fields. Metaheuristic algorithms can be broadly classified into three categories, namely, evolutionary algorithms, physics-based algorithms, and swarm intelligence algorithms. Evolutionary algorithms were first proposed in the early 1970s and were mainly generated by simulating the concept of evolution in nature. Inspired by Darwinian biological evolution, Goldberg and Holland [8] proposed the first evolutionary algorithm, the genetic algorithm (GA), in 1988, which provides a stochastic and efficient method to perform a global search among a large number of candidate solutions. Similar algorithms include the differential evolution algorithm (DE) [9], the biogeography-based algorithm (BBO) [10], and the evolution strategy (ES) [11].
Physics-based algorithms are modeled using physical concepts and laws to update the agents in the search space, mainly including analytical modeling based on the laws of the universe, physical/chemical rules, scientific phenomena, and other ways [12]. For example, to overcome the defects that traditional GA algorithms tend to converge prematurely and have long running time, Hsiao et al. [13] proposed the space gravitational algorithm (SGA) based on the inspiration of Einstein's theory of relativity and the laws of motion of asteroids in the universe. Then, Rashedi et al. [14] proposed the gravitational search algorithm (GSA) by analyzing the interaction between gravity in the universe, which received wide attention. It has been slightly modified in the literature [15] to be adaptable to industrial applications. Inspired by the idea of GSA, Flores et al. [16] proposed the gravitational interactions optimization (GIO) algorithm, which mainly modified the nondecreasing constant, and the global search capability and local search optimization are stronger than those of the GSA. In 2019, Faramarzi et al. [17] proposed an equilibrium optimizer (EO) by simulating the mass balance model of physics to achieve the final equilibrium state (optimal result) by continuously updating the search agents. Also common are multiverse optimization (MVO) [18], electromagnetic field optimization (EFO) [19], the artificial electric field algorithm for global optimization (AEFA) [20], and lightning search algorithm (LSA) [21]. The swarm intelligence algorithms focus on artificially reproducing the social behavior and thinking concepts of various groups of organisms within nature so that the intelligence of the swarm surpasses the sum of individual intelligence. 
In such algorithms, multiple search agents perform the search process together, sharing location information among themselves and using different operators, dependent on the metaphor of each algorithm, to shift the search agents to new locations [22]. On that basis, the probability of finding the optimal solution is increased, so that the best solution can be found with low computational complexity. For example, Shi et al. [23, 24] proposed particle swarm optimization (PSO) based on the behavior of biological groups such as fish schools, insect swarms, and bird flocks. By simulating the local interactions between individuals and the environment, a globally optimal solution can be achieved. Notably, Liu et al. [25] extended PSO by introducing chaotic sequences; the exploration and exploitation capabilities of the algorithm were effectively balanced by introducing adaptive inertia weights. A novel algorithm, called the classic self-assembly algorithm (CSA), was proposed by Zapata et al. [26]; using PSO as a navigation mechanism, the search agents were guided to move continuously toward the constructive region. Based on the collective foraging behavior of honeybees, Karaboga [27] proposed artificial bee colony optimization (ABC) in 2005, which is simple and practical and is now one of the most cited next-generation heuristics [28]. However, during the operation of the algorithm, a stagnant state may occur, which tends to make the population fall into a local optimum [29]. In addition, to solve multiobjective problems under complex nonlinear constraints, Yang and Deb [30] replicated the reproductive behavior of the cuckoo and proposed cuckoo search (CS). Owing to the introduction of Levy flights and Levy walks [31], the convergence performance of the algorithm is improved by capturing the instantaneous movement of group members instead of a simple isotropic random walk.
Compared with algorithms such as PSO, the CS algorithm has fewer operating parameters, can satisfy the global convergence requirement, and has been widely used [32]. In the literature [33], a CS variant, called island-based CS with polynomial mutation (iCSPM), was proposed from the perspective of improving population diversity; the island-model strategy and the Levy flight strategy were introduced to enhance the search effectiveness of the algorithm. Furthermore, Yang and Gandomi [34] proposed the bat algorithm (BA) based on the predatory behavior of bats. It aims to solve single-objective and multiobjective optimization problems in continuous domain spaces by simulating echolocation. The slime mould algorithm (SMA), a highly competitive algorithm, was proposed by Li et al. [35] in 2020. Precup et al. [36] provided a more understandable version of SMA and introduced it for fuzzy controller tuning, extending the application of SMA. Moreover, researchers have proposed bacterial foraging optimization (BFO) (Passino) [37], krill herd (KH) (Gandomi and Alavi) [38], the artificial plant optimization algorithm (APO) (Cui and Cai) [39], the grey wolf optimizer (GWO) (Mirjalili et al.) [40], the crisscross optimization algorithm (CSO) (Meng et al.) [41], the whale optimization algorithm (WOA) (Mirjalili and Lewis) [42], the crow search algorithm (CSA) (Askarzadeh) [43], the salp swarm algorithm (SSA) (Mirjalili et al.) [44], Harris hawks optimization (HHO) (Heidari et al.) [45], the sailfish optimizer (SFO) (Shadravan et al.) [46], the manta ray foraging optimization algorithm (MRFO) (Zhao et al.) [47], and bald eagle search (BES) (Alsattar et al.) [48]. In 2016, Wang et al. [49] developed a novel metaheuristic algorithm named elephant herding optimization (EHO) for solving global unconstrained optimization problems by studying the herding behavior of elephants in nature.
According to the living habits of elephants, the activity trajectory of each baby elephant is influenced by its maternal lineage. Therefore, in EHO, the clan updating operator is used to update the position of each individual elephant in a clan relative to the position of the matriarch. Since in each generation adult male elephants must leave the clan, the separating operator is introduced to perform the separation operation. It has been experimentally demonstrated [50] that, for most benchmark problems, the EHO algorithm can achieve better results than the DE, GA, and BBO algorithms. It has thus aroused considerable research interest owing to its few control parameters, easy implementation, and good global optimization capability on multipeaked problems [51]. Scholars and engineers have applied EHO in various areas of practical engineering, including wireless sensor networks [52], bioinformatics [53], emotion recognition [54], character recognition [55], and cybersecurity [56]. Although EHO is a relatively effective optimization tool, it still has some shortcomings, such as the lack of a mutation mechanism, slow convergence, and a tendency to fall into local optima, which limit the algorithm in practical applications. In recent years, researchers have achieved numerous results in overcoming the deficiencies of EHO, and the research can be divided into three aspects. The first is to mix EHO with other algorithms or strategies to improve its performance. For example, Javaid et al. [57] combined EHO with the GA to develop a novel algorithm, GEHO, for smart grid scheduling, which reduces the maximum cost. Wang et al. [50] combined EHO with three different approaches, namely, cultural-based EHO, alpha-tuning EHO, and biased initialization EHO.
The three approaches were tested on benchmark functions from CEC 2016 and applied to engineering problems such as gear trains, continuous stirred-tank reactors, and three-bar truss design. Chakraborty et al. [58] proposed the IEHO algorithm, which combines EHO with opposition-based learning (OBL) and dynamic Cauchy mutation (DCM) to accelerate convergence and improve the performance of EHO. The second aspect applies a noise interference strategy [59]. To increase the population diversity of the algorithm, noise interference has become a mainstream technique; two of the most representative forms are Levy flight (LF) and the chaos strategy. Xu et al. [60] proposed a novel algorithm, LFEHO, that combines Levy flight with the EHO algorithm to overcome the defects of poor convergence performance and ease of falling into local optima in the original EHO. Tuba et al. [61] introduced two different chaotic maps into the original EHO for solving unconstrained optimization problems and tested them on the CEC 2015 benchmark functions. The third aspect improves the internal structure of EHO; this line of research has focused on proposing adaptive operators and stagnation-prevention mechanisms. Li et al. [62] introduced a global speed strategy based on EHO to assign a travel speed to each elephant and achieved good results on CEC 2014. Ismaeel et al. [63] addressed the problem of unreasonable convergence to the origin in EHO by improving the clan updating operator and separating operator, achieving a balance between exploration and exploitation. Li et al. [64] took an original approach by extracting the previous state information of the population to guide the subsequent search process; six variants were generated by updating the weights using random numbers and the fitness of the previous agent. The experimental results showed that the quality of the obtained solutions was higher than that of the original algorithm.
Most metaheuristics need to be enhanced because they do not apply to complex problems, such as intricate scheduling and planning problems, big data analysis, complicated machine learning structures, and arduous modeling and classification problems. Scholars such as Dokeroglu [28] have pointed out that a more fruitful research direction for metaheuristics is to optimize the internal structure of existing metaheuristics rather than to propose new algorithms similar to existing ones. This is one of the motivations why this paper attempts to strengthen an existing metaheuristic algorithm instead of developing a new one. Moreover, the efficiency of a metaheuristic algorithm depends on the balance between local exploitation ability and global exploration ability during the iterations [65]. In this regard, exploration is the process of exploring new search spaces, which requires search agents to be more diverse and traversable under the operation of the operators. Exploitation is characterized by the algorithm's ability to extract, from explored regions, solutions that are more promising in approximating the global optimum; in that stage, search agents play a role in converging quickly toward the optimal solution. To promote the performance of a metaheuristic algorithm, a desirable balance must be struck between these two conflicting properties. Like the two sides of a coin, every developed metaheuristic has advantages and disadvantages, which is exactly why no single algorithm can be applied to all problems. According to the no free lunch (NFL) theorem [66], no algorithm can be regarded as a universally optimal optimizer. In other words, the success of an algorithm on a specific set of problems does not carry over to all optimization problems. The NFL theorem thus encourages innovations that improve existing optimization algorithms to enhance their performance in use.
Given the constant emergence of new optimization problems and the exponential growth in the size and complexity of real-world and engineering design problems, the development and improvement of new optimizers are inevitable. Khanduja and Bhushan [67] provided evidence in their research that hybrid metaheuristic algorithms can obtain better solutions than classical metaheuristic algorithms, which greatly inspired the present work. From these perspectives, the study of hybrid metaheuristic algorithms has strong practical significance and value. Therefore, this paper mixes the EHO algorithm with other algorithmic mechanisms to exploit the advantages of each for collaborative search and to effectively improve optimization performance. Aiming to effectively balance the exploration and exploitation capabilities, a Gaussian perturbation specular reflection learning and golden-sine-mechanism-based elephant herding optimization for global optimization problems, called SRGS-EHO, is proposed in the present paper. Its main contributions are summarized as follows. First, the poor diversity and ergodicity of randomly generated initial populations affect the convergence performance of an algorithm; in this paper, the specular reflection learning strategy is used to generate high-quality initial populations, and Gaussian perturbation is added as a mutation to further enhance the diversity of the initial population. Second, to improve the global optimization capability, the golden sine mechanism is introduced to update the position of the clan leader, preventing the population from falling into local optima; at the same time, the leader is made to move toward the global optimum, achieving a balance between exploitation and exploration. Additionally, to fully verify the effectiveness of SRGS-EHO, 23 common benchmark functions are selected as tests, and the Wilcoxon rank-sum test and Friedman test are also invoked.
Compared with eight other recognized metaheuristics, the performance of SRGS-EHO in terms of accuracy, convergence, and statistics is completely evaluated. In addition, a parameter sensitivity analysis and experiments on the different modifications are conducted, the aim being to analyze the impact of different parameters and modules on the performance of the algorithm. Finally, SRGS-EHO is applied to solve two practical engineering design problems (the pressure-vessel design and tension/compression spring design problems), and the results are compared with those achieved using other algorithms. These experiments test the feasibility and applicability of the proposed algorithm for solving real-world problems. The rest of this paper is organized as follows: in Section 2, the principle of EHO is briefly introduced. A detailed introduction to the proposed SRGS-EHO method is given in Section 3. The experiments conducted are described in Section 4, which introduces the simulation procedure and the analysis. In Section 5, the experiments and analysis of SRGS-EHO for solving practical engineering problems are presented. Finally, conclusions and future work are presented in Section 6.

2. Elephant Herding Optimization (EHO)

Elephants are herd-dwelling creatures, and a herd usually consists of several clans. Each clan is headed by a female elephant, while male elephants undertake the task of defending the clan and usually operate outside it. In EHO, each clan contains an equal number of agents. According to the algorithm, the clan leader (patriarch) is identified as the individual with the best position. Depending on its relationship with the female clan leader, the position of every other agent is modified by the updating operator. Meanwhile, in each generation, a fixed number of male elephants are set to leave the clan, and these elephants are modeled using the separating operator. In general, the EHO algorithm is divided into the initialization operation, the clan updating operation, and the separating operation.

2.1. Initialization Operation

Assuming that there are N elephants in the D-dimensional search space, the kth agent in the population can be represented as X_k = (x_{k,1}, x_{k,2}, …, x_{k,D}), 1 ≤ k ≤ N. The initialized population is therefore defined as

x_{k,m} = l_m + (u_m − l_m) × rand,  (1)

where m indexes the dimensions, 1 ≤ m ≤ D, rand is a uniform random number in [0,1], and u_m and l_m are the upper and lower bounds of the mth dimension. The initial population can then be expressed as X(0) = {X_1(0), X_2(0), …, X_N(0)}. Next, the entire initial population is divided into the preset clans.
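As an illustrative sketch (not part of the original paper; the function name and the use of NumPy are assumptions), the uniform initialization above can be written as:

```python
import numpy as np

def initialize_population(N, D, lower, upper, rng=None):
    """Uniform random initialization: x_km = l + (u - l) * rand.

    `lower` and `upper` are the bounds l and u from the text (scalars here
    for simplicity, though per-dimension arrays also broadcast correctly).
    """
    rng = np.random.default_rng(rng)
    return lower + (upper - lower) * rng.random((N, D))

# Example: 20 elephants in a 5-dimensional search space bounded by [-10, 10].
pop = initialize_population(20, 5, -10.0, 10.0, rng=0)
```

The resulting array has one row per agent; splitting it into the preset clans is then a simple partition of the rows.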

2.2. Clan Updating Operator

At this stage, the position of each individual elephant is updated according to its relationship with the patriarch:

x_{new,ci,j} = x_{ci,j} + α × (x_{best,ci} − x_{ci,j}) × r,  (2)

where x_{new,ci,j} indicates the updated position of agent j in clan ci, x_{ci,j} represents its current position, and x_{best,ci} is the position of the current best agent (the patriarch) of clan ci. The scale factor α ∈ [0,1], and r ∈ [0,1] is a random number. Through this operation, the diversity of the population is enhanced. When x_{ci,j} = x_{best,ci}, the patriarch of the clan cannot be updated by equation (2). To avoid this situation, its position is instead updated as

x_{new,ci,j} = β × x_{center,ci},  (3)

where the scale factor β ∈ [0,1] determines the extent to which x_{center,ci} acts on x_{new,ci,j}. x_{center,ci} is the centre of clan ci, calculated from the positions of all its agents. For the dth dimension, it is given by

x_{center,ci,d} = (1/n_{ci}) × Σ_{j=1}^{n_{ci}} x_{ci,j,d},  (4)

where d is the dimension index, 1 ≤ d ≤ D, D represents the total dimensionality, n_{ci} is the number of agents in clan ci, and x_{ci,j,d} is the dth dimension of the jth agent in clan ci.
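A minimal NumPy sketch of the clan updating operator (illustrative only; the function name, default parameter values, and the convention of passing the patriarch's index are assumptions made here):

```python
import numpy as np

def clan_update(clan, best_idx, alpha=0.5, beta=0.1, rng=None):
    """One clan-updating step for a single clan.

    clan     : (n_ci, D) array of agent positions
    best_idx : row index of the patriarch (best agent) within `clan`
    """
    rng = np.random.default_rng(rng)
    best = clan[best_idx]
    r = rng.random(clan.shape)
    new = clan + alpha * (best - clan) * r   # move toward the patriarch
    center = clan.mean(axis=0)               # clan centre over all agents
    new[best_idx] = beta * center            # patriarch's special update
    return new

clan = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
new = clan_update(clan, best_idx=0, rng=1)
```

Note that ordinary agents move along the line toward the patriarch by a random fraction, while the patriarch itself is repositioned from the clan centre.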

2.3. Separating Operator

In EHO, a certain number of adult male elephants leave the clan. The separating operator acts on the elephant with the worst fitness in each clan, which is expressed as follows:

x_{worst,ci} = x_min + (x_max − x_min + 1) × rand,  (5)

where x_max and x_min are the upper and lower bounds of the agents in the population, respectively, x_{worst,ci} denotes the worst agent in clan ci, and rand ∈ [0,1] is a random number drawn from a uniform distribution on [0, 1].
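A sketch of the separating operator (illustrative, not the authors' code; note that equation (5) as written can produce values slightly above x_max, so this sketch clips the result back into the bounds, which is an assumption of this example):

```python
import numpy as np

def separating_operator(clan, fitness, x_min, x_max, rng=None):
    """Replace the worst-fitness agent in a clan with a random position.

    Assumes minimization, so the highest fitness value marks the worst agent.
    Follows x_worst = x_min + (x_max - x_min + 1) * rand, then clips.
    """
    rng = np.random.default_rng(rng)
    worst = int(np.argmax(fitness))
    clan = clan.copy()
    clan[worst] = x_min + (x_max - x_min + 1) * rng.random(clan.shape[1])
    return np.clip(clan, x_min, x_max), worst

clan = np.zeros((3, 4))
fit = np.array([1.0, 9.0, 2.0])
new, worst = separating_operator(clan, fit, -5.0, 5.0, rng=0)
```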

3. Proposed Algorithm

3.1. Motivations

EHO was proposed in 2016 with excellent global optimization capabilities, few control parameters, and ease of implementation, and its performance was verified in the original paper. Nevertheless, the original EHO suffers from the following deficiencies. First, the initialization of the original algorithm is completed randomly, which makes it difficult to guarantee diversity and ergodicity; as a result, the algorithm may be unable to converge to the best solution, while its runtime increases. Second, in the process of iteration, the position of the patriarch is determined by all the agents in the clan, which may break the balance between global exploration and local exploitation. Meanwhile, the algorithm easily falls into local optima when dealing with complex problems; once the population has stalled, the algorithm converges prematurely. The clan leader, being the best-positioned agent in the clan, should have a stronger exploration ability. These issues make EHO perform poorly when dealing with more complex problems. The efficiency of metaheuristic algorithms depends mainly on striking the right balance between the global exploration and local exploitation phases. Exploration is the process of exploring new search spaces, requiring search agents to be more diverse and traversable under the operation of the operators. Exploitation is characterized by the algorithm's ability to extract, from the explored region, solutions that are more promising in approximating the global optimum; search agents are therefore desired to converge quickly toward the optimal solution. To improve the performance of a metaheuristic algorithm, a desirable balance must be struck between these two conflicting properties; if the balance is broken, the algorithm will fall into a local optimum and fail to obtain the globally optimal solution. To deal with these problems, improvements are made in two aspects in this paper.
First, specular reflection learning is introduced to update the initialization scheme. Subsequently, Gaussian perturbation is introduced to further enhance the population diversity. Second, the golden sine mechanism is presented to modify the position of the patriarch in each generation of the clan, making it converge to the global optimum continuously, improving the convergence performance by balancing the local exploitation ability and global exploration ability. With these modifications, the aim is, on the one hand, to increase the population diversity and promote convergence efficiency and, on the other hand, to strengthen exploration and exploitation capabilities and establish a balance between the two phases.

3.2. Gaussian Perturbation-Based Specular Reflection Learning for Initializing Populations

In metaheuristic algorithms, the diversity of the initial population can significantly affect the convergence speed and solution accuracy [68]. However, in EHO, the lack of a priori information about the search space means that the initial population is generated by random initialization, which imposes some limitations on the update strategy of the search agents: supposing the optimal solution appears at the position opposite the randomly generated individuals, the direction of the population's advance will deviate from the optimal solution. Opposition-based learning (OBL) [69] is widely used to improve metaheuristic algorithms due to its excellent performance. In OBL, a candidate solution x and its opposite position are examined simultaneously to speed up convergence; for x in the range [lb, ub], the opposite point x̃ is defined as

x̃ = lb + ub − x.  (6)

Inspired by the phenomenon of specular reflection, Zhang [70] proposed specular reflection learning (SRL). In physics, there is an obvious correspondence between incident light and reflected light, as shown in Figure 1(a). Based on this phenomenon, the current solution and the reverse solution can be modeled as shown in Figure 1(b). Under this model, there is a correspondence between a solution and a neighbor of its opposite solution; if both solutions are examined simultaneously, a better solution can be obtained. It has been demonstrated that the solutions generated by the SRL strategy are better than those of OBL [71]. Therefore, in this paper, specular reflection learning is introduced for population initialization, and Gaussian perturbation is added to mutate the generated agents. According to the resulting fitness values, the better N individuals are retained to form the initial population.
Figure 1

Diagram of specular reflection learning. (a) Specular reflection phenomenon. (b) Specular reflection model.

Suppose a point X = (a, 0) exists on the horizontal plane, and its opposite point is X′ = (b, 0), with X, X′ ∈ [x_min, x_max]. When light is incident, the angles of incidence and reflection are α and β, respectively, and A_0 and B_0 denote the heights of the incident and reflected rays. O is the midpoint of [x_min, x_max], O = (x_0, 0). According to the law of reflection (α = β), the following correspondence is obtained:

(x_0 − a) / A_0 = (b − x_0) / B_0.  (7)

When B_0 = μA_0, equation (7) can be rewritten as

b = x_0 + μ(x_0 − a),  (8)

where μ is a preset scale factor. Substituting x_0 = (x_min + x_max)/2, b can be represented for different values of μ as

b = (1 + μ)(x_min + x_max)/2 − μa.  (9)

It can be observed that, as μ changes, b traverses all values of [x_min, x_max]. Therefore, SRL can be used to initialize the population and enhance the diversity and ergodicity of the initial population. Let X = (x_1, x_2, …, x_n) be a point in n-dimensional space, where x_i ∈ [x_{i,min}, x_{i,max}], i ∈ {1, 2, …, n}. According to the basic specular reflection model, the opposite point in equation (6) can be generalized componentwise:

x̃_i = (1 + μ)(x_{i,min} + x_{i,max})/2 − μx_i.  (10)

It is worth noting that the scale factor μ in equation (10) is set to a random number within [0,1] for convenience of operation. After that, the original agents x and their opposites x̃ are merged to form a candidate set of size 2N, {x, x̃}. The fitness of all candidates is then calculated, and the N agents with the best fitness values are selected as the initial population. SRL can be seen as a generalization of opposition-based learning (OBL is recovered when μ = 1): both examine the current solution and the reverse solution in order to select the better one, which provides more opportunities for discovering the globally optimal solution. It is well known that population diversity has a significant impact on metaheuristic algorithms [72]: increased diversity makes it more practical for the population to explore a larger search area and therefore helps it move away from local optima. From this perspective, two aspects constrain the increase of initial population diversity in SRL. First, the method does not adjust well in small spaces. Second, the SRL mapping itself is relatively fixed. Therefore, Gaussian perturbation is introduced in the present work to perform a mutation operation after generating the reverse solution:

x̃_new = x̃ + k × randn(1),  (11)

where x̃ is the current inverse solution, x̃_new is the newly generated inverse solution, k is a weight parameter (set to 1 in this paper), and randn(1) generates a 1 × 1 matrix drawn from the standard Gaussian distribution with mean 0 and variance 1. The elite solutions are then selected as the initialized population:

x_i = x̃_i if f(x̃_i) < f(x_i), and x_i otherwise,  (12)

where x_i denotes the ith finally generated initialized agent, i ∈ [1, N], and the final generated initialized population is X_0 = (x_1, x_2, …, x_N).
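The SRL-plus-Gaussian-perturbation initialization described above can be sketched as follows. This is an illustrative reading, not the authors' implementation: scalar bounds, the function name, and the clipping of perturbed opposites back into the bounds are assumptions of this example.

```python
import numpy as np

def srl_gaussian_init(f, N, D, lb, ub, k=1.0, rng=None):
    """Specular-reflection + Gaussian-perturbation initialization (sketch).

    f is the objective to minimize. Random candidates and their perturbed
    specular-reflection opposites are merged, and the best N are kept.
    """
    rng = np.random.default_rng(rng)
    x = lb + (ub - lb) * rng.random((N, D))       # random candidates
    x0 = (lb + ub) / 2.0                          # midpoint of the interval
    mu = rng.random((N, D))                       # scale factor in [0, 1]
    opp = (1 + mu) * x0 - mu * x                  # specular-reflection point
    opp = opp + k * rng.standard_normal((N, D))   # Gaussian perturbation
    opp = np.clip(opp, lb, ub)                    # keep inside the bounds
    merged = np.vstack([x, opp])                  # candidate set of size 2N
    fit = np.array([f(p) for p in merged])
    return merged[np.argsort(fit)[:N]]            # retain the elite N agents

# Example with the sphere function f(x) = sum(x_i^2):
pop = srl_gaussian_init(lambda p: float(np.sum(p * p)), N=10, D=3,
                        lb=-10.0, ub=10.0, rng=0)
```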

3.3. Golden Sine Mechanism

The golden sine algorithm [73] is a novel metaheuristic algorithm proposed by Tanyildizi in 2017. Its design was inspired by the sine function in mathematics, and its agents search the solution space to approximate the optimal solution according to the golden ratio. The sine curve has a range of [−1, 1] and a period of 2π and has a special correspondence with the unit circle, as shown in Figure 2. When the value of the independent variable x1 of the sine function changes, the corresponding dependent variable y1 also changes; in other words, traversing all the values of the sine function is equivalent to searching all the points on the unit circle. By introducing the golden ratio, the search space is continuously reduced and the search is conducted in the region most likely to produce the optimal value, thereby improving convergence efficiency. The solution process is shown in Figure 3.
Figure 2

Correspondence between sine function and unit circle.

Figure 3

Schematic of solution of golden sine mechanism.

When the clan updating operation is completed, the individual agent with the best fitness is screened out and its position is updated using the golden sine mechanism: x_new = x·|sin(r1)| + r2·sin(r1)·|m1·x_best − m2·x|, where x_best represents the global best individual, r1 is a random number in [0, 2π], r2 is a random number in [0, π], and m1 and m2 are coefficient factors obtained through the golden section of an interval [a, b]: m1 = a·τ + b·(1 − τ) and m2 = a·(1 − τ) + b·τ. Here, a and b are the initial values of the golden section interval, which can be adjusted according to the actual problem, and τ = (√5 − 1)/2 is the golden ratio coefficient. Next, the obtained agent is compared with the global optimal solution, and the coefficient factors m1 and m2 are updated according to the comparison result. When f(x_new) < f(x_best), the interval is contracted from the right: b = m2, m2 = m1, and m1 is recomputed from its defining equation. When f(x_new) > f(x_best), the interval is contracted from the left: a = m1, m1 = m2, and m2 is recomputed. When m1 = m2, the interval [a, b] is reinitialized to its starting values. The strategy of determining the patriarch's position by the clan's average position is thus replaced by a renewed position update strategy, which performs exploration with strong directionality. As a result, the agents with the best fitness values continuously approach the optimal solution, a better solution is obtained in each iteration, and a balance between global exploration and local exploitation is reached.
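The golden-sine position update can be sketched as follows. This is an illustrative Gold-SA-style step under our own naming (`golden_sine_step` is not from the paper), with the golden-section coefficients m1 and m2 computed from an interval [a, b] = [−π, π] and τ = (√5 − 1)/2; treat the exact signs and coefficients as assumptions consistent with the description above rather than a reference implementation:

```python
import math
import random

TAU = (math.sqrt(5) - 1) / 2           # golden ratio coefficient, ~0.618

def golden_sine_step(x, x_best, a=-math.pi, b=math.pi):
    """One golden-sine move of agent x toward the global best x_best."""
    m1 = a * TAU + b * (1 - TAU)       # golden-section points of [a, b]
    m2 = a * (1 - TAU) + b * TAU
    r1 = random.uniform(0, 2 * math.pi)
    r2 = random.uniform(0, math.pi)
    return [xi * abs(math.sin(r1)) + r2 * math.sin(r1) * abs(m1 * bi - m2 * xi)
            for xi, bi in zip(x, x_best)]
```

In a full implementation, the interval [a, b] would additionally be contracted by the golden-section rule after comparing f(x_new) with f(x_best), as described in the text.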

3.4. The Workflow of SRGS-EHO

The pseudocode of SRGS-EHO is given in Algorithm 1. The algorithm starts from initialization based on SRL and further enhances the diversity of the population through Gaussian perturbation. Next, the golden sine mechanism is introduced to optimize the position of the patriarch in each clan. The position of agents is evaluated by comparing fitness, and then continuous iteration ensues until the maximum number of iterations is reached. The flowchart of SRGS-EHO is shown in Figure 4.
Algorithm 1

SRGS-EHO.

Figure 4

Flowchart of SRGS-EHO.

4. Experimental Results and Discussion

To verify the effectiveness of SRGS-EHO for solving global optimization problems, experiments are conducted on 23 benchmark functions. Eight other metaheuristic algorithms are selected for comparison, namely, the aforementioned EHO [49], WOA [42], EO [17], HHO [45], CSO [41], GWO [40], SFO [46], and IEHO [58]. To make the experiment fair, each algorithm is run 30 times independently on each benchmark function to assess its stability. To better reflect the differences in performance between algorithms, the nonparametric Friedman test [74] and Wilcoxon rank-sum test [75] are invoked for statistical testing. Furthermore, different combinations of parameters and modifications are set up to analyze the impact of each parameter and module of SRGS-EHO on the performance of the algorithm. The experimental environment is an Intel® Core™ i5-9300H CPU @ 2.40 GHz with 16 GB RAM, running the Windows 10 operating system and the MATLAB R2019b simulation platform (MathWorks, USA). Specific details about the experiments are discussed in the following sections.

4.1. Benchmark Functions

Twenty-three commonly used benchmark functions are selected for testing, and their basic information is shown in Table 1. Among them, F1–F7 are single-peaked functions, which have only one global optimal solution within the defined bounds and are usually used to assess the convergence rate and exploitation capability of an algorithm. F8–F23 are multipeaked functions, among which F8–F13 are high-dimensional multipeaked functions and F14–F23 are fixed-dimensional multipeaked functions; these have multiple local extrema in their domains and can test an algorithm's global exploration ability and its resistance to premature convergence.
Table 1

Details of 23 benchmark functions.

No. | Function | Dimension | Range | f_min
F1 | f1(x) = ∑_{i=1}^{n} x_i^2 | 30 | [−100, 100] | 0
F2 | f2(x) = ∑_{i=1}^{n} |x_i| + ∏_{i=1}^{n} |x_i| | 30 | [−10, 10] | 0
F3 | f3(x) = ∑_{i=1}^{n} (∑_{j=1}^{i} x_j)^2 | 30 | [−100, 100] | 0
F4 | f4(x) = max_i {|x_i|, 1 ≤ i ≤ n} | 30 | [−100, 100] | 0
F5 | f5(x) = ∑_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 30 | [−30, 30] | 0
F6 | f6(x) = ∑_{i=1}^{n} (⌊x_i + 0.5⌋)^2 | 30 | [−100, 100] | 0
F7 | f7(x) = ∑_{i=1}^{n} i·x_i^4 + random[0, 1) | 30 | [−1.28, 1.28] | 0
F8 | f8(x) = ∑_{i=1}^{n} −x_i·sin(√|x_i|) | 30 | [−500, 500] | −418.9829 × dim
F9 | f9(x) = ∑_{i=1}^{n} [x_i^2 − 10·cos(2πx_i) + 10] | 30 | [−5.12, 5.12] | 0
F10 | f10(x) = −20·exp(−0.2·√((1/n)∑_{i=1}^{n} x_i^2)) − exp((1/n)∑_{i=1}^{n} cos(2πx_i)) + 20 + e | 30 | [−32, 32] | 0
F11 | f11(x) = (1/4000)∑_{i=1}^{n} x_i^2 − ∏_{i=1}^{n} cos(x_i/√i) + 1 | 30 | [−600, 600] | 0
F12 | f12(x) = (π/n){10·sin^2(πy_1) + ∑_{i=1}^{n−1} (y_i − 1)^2·[1 + 10·sin^2(πy_{i+1})] + (y_n − 1)^2} + ∑_{i=1}^{n} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a | 30 | [−50, 50] | 0
F13 | f13(x) = 0.1{sin^2(3πx_1) + ∑_{i=1}^{n−1} (x_i − 1)^2·[1 + sin^2(3πx_{i+1})] + (x_n − 1)^2·[1 + sin^2(2πx_n)]} + ∑_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [−50, 50] | 0
F14 | f14(x) = [1/500 + ∑_{j=1}^{25} 1/(j + ∑_{i=1}^{2} (x_i − a_{ij})^6)]^{−1} | 2 | [−65, 65] | 1
F15 | f15(x) = ∑_{i=1}^{11} [a_i − x_1(b_i^2 + b_i·x_2)/(b_i^2 + b_i·x_3 + x_4)]^2 | 4 | [−5, 5] | 0.00030
F16 | f16(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1·x_2 − 4x_2^2 + 4x_2^4 | 2 | [−5, 5] | −1.0316
F17 | f17(x) = (x_2 − (5.1/(4π^2))x_1^2 + (5/π)x_1 − 6)^2 + 10(1 − 1/(8π))·cos x_1 + 10 | 2 | [−5, 5] | 0.398
F18 | f18(x) = [1 + (x_1 + x_2 + 1)^2·(19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2·(18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)] | 2 | [−2, 2] | 3
F19 | f19(x) = −∑_{i=1}^{4} c_i·exp(−∑_{j=1}^{3} a_{ij}(x_j − p_{ij})^2) | 3 | [1, 3] | −3.86
F20 | f20(x) = −∑_{i=1}^{4} c_i·exp(−∑_{j=1}^{6} a_{ij}(x_j − p_{ij})^2) | 6 | [0, 1] | −3.32
F21 | f21(x) = −∑_{i=1}^{5} [(X − a_i)(X − a_i)^T + c_i]^{−1} | 4 | [0, 10] | −10.1532
F22 | f22(x) = −∑_{i=1}^{7} [(X − a_i)(X − a_i)^T + c_i]^{−1} | 4 | [0, 10] | −10.4028
F23 | f23(x) = −∑_{i=1}^{10} [(X − a_i)(X − a_i)^T + c_i]^{−1} | 4 | [0, 10] | −10.5363
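For reference, two of the benchmark functions in Table 1 (F1, the sphere function, and F10, Ackley's function) can be written directly from their definitions; a quick sanity check is that both evaluate to their minimum of 0 at the origin:

```python
import math

def sphere(x):
    """F1: sum of squares; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def ackley(x):
    """F10: Ackley's function; global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

origin = [0.0] * 30
print(sphere(origin))   # 0.0
print(ackley(origin))   # 0.0
```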

4.2. Experimental Parameter Settings

To make the experiments more credible, the values reported in the original papers or widely used in different studies are selected as parameters for the respective algorithms, which are shown in Table 2. The parameter settings are kept consistent except for those listed in the table.
Table 2

Parameter settings of different algorithms.

Algorithm | Parameter | Value
Elephant herding optimization (EHO) | α | 0.5
 | β | 0.1
 | N | 5
 | n_j | 7

Whale optimization algorithm (WOA) | a | Decreased from 2 to 0
 | a2 | Decreased from 2 to 1
 | b | 1

Equilibrium optimizer (EO) | a1 | 2
 | a2 | 1
 | GP | 0.5

Harris hawk optimization (HHO) | β | 1.5
 | E0 | [−1, 1]

Crisscross optimization algorithm (CSO) | c1 | [−1, 1]
 | c2 | [−1, 1]

Grey wolf optimizer (GWO) | a | Decreased from 2 to 0
 | A | 4

Sailfish optimizer (SFO) | e | 0.001
 | Initial population of sailfish | 9
 | Initial population of sardines | 21

Improved elephant herding optimization (IEHO) | α | 0.5
 | β | 0.1
 | N | 5
 | n_j | 7

4.3. Scalability Analysis

Since dimensionality is a significant factor affecting optimization accuracy, F1–F13 are extended from 30 to 100 dimensions to verify the solving ability of the algorithms in different dimensions. The results of each algorithm are then evaluated with two indexes, the mean (Ave) and the standard deviation (Std): the mean reflects the solution accuracy and quality of the algorithm, and the standard deviation reflects its stability. When solving a minimization problem, the smaller the mean value, the better the algorithm's performance; similarly, the smaller the standard deviation, the more stable the performance. In addition, the maximum number of iterations tmax for all algorithms is set to 500 and the population size N is set to 30. Table 3 shows the experimental results when d=30. As can be seen from the data, SRGS-EHO obtains the best solution on five of the seven single-peaked functions (F1–F7). It is noteworthy that SRGS-EHO achieves a particularly large margin over the other algorithms on F1–F4. This is due to the introduction of the golden sine mechanism, which strengthens the local search and thus the exploitation ability of the algorithm. On the multipeaked functions (F8–F23), SRGS-EHO achieves the best results on F8–F11, F17, and F21–F23 and the best mean value on F14. All of the results obtained by SRGS-EHO are better than those obtained by the original EHO, indicating that the global search capability has been boosted after introducing SRL and updating the clan updating operator. In addition, the performance on the fixed-dimensional multimodal functions shows that the algorithm achieves a sound balance between exploitation and exploration.
Table 3

Comparison results of 23 benchmark functions (d=30).

Function | Metric | SRGS-EHO | EHO | WOA | EO | HHO | CSO | GWO | SFO | IEHO
F1 | Mean | 1.73E−268 | 6.63E−05 | 8.47E−75 | 2.17E−41 | 2.32E−94 | 1.98E−09 | 2.11E−27 | 8.36E−11 | 1.06E−04
F1 | Std | 0.00E+00 | 1.46E−04 | 3.49E−74 | 4.68E−41 | 1.25E−93 | 4.41E−09 | 3.23E−27 | 1.78E−10 | 2.55E−04

F2 | Mean | 4.88E−134 | 4.04E−03 | 1.55E−50 | 6.19E−24 | 1.33E−50 | 3.02E−07 | 8.75E−17 | 4.18E−05 | 3.70E−03
F2 | Std | 2.67E−133 | 5.13E−03 | 6.56E−50 | 5.96E−24 | 5.77E−50 | 2.42E−07 | 5.79E−17 | 3.61E−05 | 3.22E−03

F3 | Mean | 4.05E−257 | 3.98E−02 | 4.83E+04 | 7.30E−09 | 9.22E−69 | 1.43E+03 | 9.27E−05 | 1.61E−08 | 1.97E−02
F3 | Std | 0.00E+00 | 8.32E−02 | 1.56E+04 | 2.53E−08 | 5.05E−68 | 7.91E+02 | 4.40E−04 | 1.90E−08 | 4.29E−02

F4 | Mean | 7.42E−125 | 1.08E−03 | 4.45E+01 | 1.42E−10 | 9.72E−50 | 9.79E−01 | 5.29E−07 | 1.44E−06 | 1.18E−03
F4 | Std | 4.06E−124 | 1.18E−03 | 2.64E+01 | 1.59E−10 | 3.20E−49 | 5.03E−01 | 4.31E−07 | 1.73E−06 | 1.28E−03

F5 | Mean | 1.27E+00 | 2.77E−01 | 2.81E+01 | 2.53E+01 | 9.95E−03 | 8.97E+01 | 2.70E+01 | 2.85E−02 | 1.15E−01
F5 | Std | 3.22E+00 | 8.89E−01 | 4.71E−01 | 1.91E−01 | 1.11E−02 | 1.93E+02 | 8.48E−01 | 3.03E−02 | 1.61E−01

F6 | Mean | 6.99E−01 | 1.65E−03 | 4.74E−01 | 8.90E−06 | 7.80E−05 | 9.47E−10 | 7.20E−01 | 3.38E−02 | 2.20E−03
F6 | Std | 1.83E+00 | 3.61E−03 | 2.43E−01 | 5.38E−06 | 1.27E−04 | 2.26E−09 | 3.31E−01 | 1.26E−01 | 3.68E−03

F7 | Mean | 7.40E−05 | 1.39E−03 | 3.39E−03 | 9.70E−04 | 1.27E−04 | 3.21E−03 | 1.98E−03 | 3.87E−04 | 1.69E−03
F7 | Std | 8.74E−05 | 1.45E−03 | 4.00E−03 | 4.55E−04 | 1.23E−04 | 2.76E−03 | 9.00E−04 | 3.08E−04 | 2.16E−03

F8 | Mean | −1.26E+04 | −1.26E+04 | −1.05E+04 | −8.94E+03 | −1.25E+04 | −1.17E+04 | −6.40E+03 | −3.83E+03 | −1.26E+04
F8 | Std | 2.05E−01 | 9.84E+00 | 1.78E+03 | 6.59E+02 | 3.76E+02 | 4.83E+02 | 1.01E+03 | 4.04E+02 | 1.85E+01

F9 | Mean | 0.00E+00 | 6.72E−05 | 3.79E−15 | 0.00E+00 | 0.00E+00 | 3.40E−03 | 3.85E+00 | 7.31E−07 | 5.11E−05
F9 | Std | 0.00E+00 | 9.93E−05 | 2.08E−14 | 0.00E+00 | 0.00E+00 | 1.53E−02 | 5.16E+00 | 1.94E−06 | 1.22E−04

F10 | Mean | 8.88E−16 | 2.16E−03 | 3.73E−15 | 8.47E−15 | 8.88E−16 | 7.19E−06 | 1.03E−13 | 4.23E−06 | 1.95E−03
F10 | Std | 0.00E+00 | 2.61E−03 | 2.70E−15 | 1.80E−15 | 0.00E+00 | 8.89E−06 | 2.05E−14 | 4.04E−06 | 2.64E−03

F11 | Mean | 0.00E+00 | 1.37E−04 | 1.17E−02 | 9.02E−04 | 0.00E+00 | 1.94E−01 | 7.24E−03 | 3.38E−12 | 1.64E−04
F11 | Std | 0.00E+00 | 2.39E−04 | 4.45E−02 | 4.94E−03 | 0.00E+00 | 2.72E−01 | 1.28E−02 | 4.87E−12 | 4.51E−04

F12 | Mean | 8.29E−09 | 2.77E−05 | 5.69E−02 | 5.74E−07 | 5.27E−06 | 6.43E−11 | 4.60E−02 | 8.74E−03 | 3.97E−05
F12 | Std | 2.24E−08 | 4.04E−05 | 1.09E−01 | 4.49E−07 | 5.65E−06 | 2.46E−10 | 2.00E−02 | 2.22E−02 | 7.42E−05

F13 | Mean | 6.81E−07 | 3.66E−04 | 5.08E−01 | 2.94E−02 | 1.42E−04 | 1.23E−10 | 6.25E−01 | 7.19E−05 | 6.70E−04
F13 | Std | 2.22E−06 | 4.42E−04 | 2.16E−01 | 5.94E−02 | 1.63E−04 | 1.68E−10 | 2.59E−01 | 5.05E−05 | 1.10E−03

F14 | Mean | 9.98E−01 | 9.98E−01 | 3.84E+00 | 9.98E−01 | 1.39E+00 | 2.23E+00 | 4.49E+00 | 7.76E+00 | 9.98E−01
F14 | Std | 1.02E−03 | 2.08E−04 | 4.03E+00 | 1.75E−16 | 9.56E−01 | 2.79E+00 | 4.08E+00 | 3.48E+00 | 7.26E−05

F15 | Mean | 1.64E−04 | 1.58E−03 | 1.22E−03 | 1.08E−03 | 3.96E−04 | 1.55E−03 | 3.09E−03 | 3.55E−04 | 1.66E−03
F15 | Std | 1.28E−04 | 2.75E−04 | 3.33E−03 | 3.65E−03 | 2.48E−04 | 3.28E−03 | 6.90E−03 | 3.66E−05 | 3.44E−04

F16 | Mean | −7.47E−01 | −7.33E−01 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −6.73E−01
F16 | Std | 4.27E−01 | 3.70E−01 | 2.86E−09 | 6.45E−16 | 3.10E−09 | 1.12E−02 | 2.40E−08 | 3.80E−03 | 4.28E−01

F17 | Mean | 3.98E−01 | 5.27E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 4.26E−01 | 3.98E−01 | 3.99E−01 | 5.37E−01
F17 | Std | 6.54E−06 | 2.00E−01 | 7.71E−06 | 4.83E−05 | 4.04E−05 | 8.96E−02 | 6.39E−04 | 1.03E−03 | 1.78E−01

F18 | Mean | 1.60E+01 | 2.65E+01 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 4.73E+00 | 3.00E+00 | 7.73E+00 | 1.83E+01
F18 | Std | 1.00E+01 | 8.84E+00 | 2.52E−04 | 1.55E−15 | 4.59E−07 | 6.38E+00 | 3.65E−05 | 7.82E+00 | 1.03E+01

F19 | Mean | −3.85E+00 | −3.47E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.85E+00 | −3.86E+00 | −3.84E+00 | −3.49E+00
F19 | Std | 2.16E−06 | 2.44E−01 | 6.43E−03 | 2.45E−15 | 3.15E−03 | 2.49E−02 | 1.34E−03 | 2.28E−02 | 2.94E−01

F20 | Mean | −2.24E+00 | −1.89E+00 | −3.24E+00 | −3.26E+00 | −3.10E+00 | −3.26E+00 | −3.25E+00 | −2.92E+00 | −2.24E+00
F20 | Std | 4.57E−01 | 5.59E−01 | 1.19E−01 | 6.70E−02 | 1.19E−01 | 5.63E−02 | 8.02E−02 | 2.12E−01 | 3.75E−01

F21 | Mean | −1.01E+01 | −1.01E+01 | −8.43E+00 | −8.97E+00 | −5.21E+00 | −8.98E+00 | −9.31E+00 | −1.01E+01 | −1.01E+01
F21 | Std | 1.65E−02 | 2.76E−02 | 2.43E+00 | 2.46E+00 | 8.81E−01 | 2.44E+00 | 1.92E+00 | 8.99E−02 | 2.71E−02

F22 | Mean | −1.04E+01 | −1.04E+01 | −7.64E+00 | −9.79E+00 | −5.43E+00 | −9.32E+00 | −1.02E+01 | −1.02E+01 | −1.04E+01
F22 | Std | 1.11E−02 | 2.21E−02 | 2.92E+00 | 1.89E+00 | 1.32E+00 | 2.50E+00 | 9.70E−01 | 2.49E−01 | 1.40E−02

F23 | Mean | −1.05E+01 | −1.05E+01 | −6.57E+00 | −9.45E+00 | −5.13E+00 | −9.05E+00 | −1.05E+01 | −1.04E+01 | −1.05E+01
F23 | Std | 6.54E−06 | 2.67E−02 | 3.39E+00 | 2.52E+00 | 1.20E+00 | 3.03E+00 | 8.85E−04 | 1.62E−01 | 3.24E−02
Tables 4 and 5 show the results when the dimension is increased to 50 and 100, respectively. The data in the tables indicate that the difficulty of obtaining optimal solutions rises as the size of the problem increases. It can be seen from Table 4 that SRGS-EHO achieves the optimal solutions on F1–F4 and F7–F11. When d=100, SRGS-EHO still achieves the best results on nine of the 13 benchmark functions. Combining the results from the two tables, it can be noted that the performance of SRGS-EHO does not degrade, proving that SRGS-EHO adapts well to high-dimensional problems. This indicates that the introduced Gaussian perturbation-based SRL can effectively enhance population diversity. Moreover, the clan positions updated by the golden sine mechanism continuously approach the global optimum, which effectively balances early exploration and later exploitation.
Table 4

Comparison results of 13 benchmark functions (d=50).

Function | Metric | SRGS-EHO | EHO | WOA | EO | HHO | CSO | GWO | SFO | IEHO
F1 | Mean | 9.11E−258 | 2.48E−04 | 1.95E−69 | 1.33E−34 | 1.42E−95 | 5.96E−04 | 9.71E−20 | 1.43E−10 | 2.37E−04
F1 | Std | 0.00E+00 | 4.10E−04 | 1.06E−68 | 1.93E−34 | 5.49E−95 | 1.54E−03 | 1.21E−19 | 2.38E−10 | 4.45E−04

F2 | Mean | 1.15E−124 | 4.05E−03 | 7.59E−50 | 1.54E−20 | 1.50E−50 | 1.47E−03 | 2.67E−12 | 6.00E−05 | 1.13E−02
F2 | Std | 6.32E−124 | 3.93E−03 | 3.10E−49 | 1.57E−20 | 6.49E−50 | 3.73E−04 | 1.43E−12 | 5.78E−05 | 1.29E−02

F3 | Mean | 3.91E−261 | 6.72E−02 | 1.98E+05 | 4.90E−04 | 9.04E−72 | 6.75E+03 | 1.93E−01 | 7.41E−08 | 1.58E−01
F3 | Std | 0.00E+00 | 1.15E−01 | 5.13E+04 | 9.56E−04 | 4.58E−71 | 2.98E+03 | 3.53E−01 | 1.04E−07 | 2.37E−01

F4 | Mean | 7.85E−134 | 1.10E−03 | 6.84E+01 | 3.79E−07 | 3.44E−47 | 5.92E+00 | 5.16E−04 | 1.26E−06 | 1.64E−03
F4 | Std | 3.72E−133 | 1.14E−03 | 2.61E+01 | 5.97E−07 | 1.73E−46 | 1.65E+00 | 4.48E−04 | 1.04E−06 | 2.42E−03

F5 | Mean | 2.79E+00 | 2.89E−01 | 4.83E+01 | 4.60E+01 | 3.18E−02 | 1.30E+02 | 4.75E+01 | 5.54E−02 | 1.91E−01
F5 | Std | 6.98E+00 | 6.62E−01 | 3.70E−01 | 8.85E−01 | 5.28E−02 | 6.72E+01 | 8.83E−01 | 8.38E−02 | 4.35E−01

F6 | Mean | 6.42E−01 | 2.63E−03 | 1.26E+00 | 3.89E−02 | 2.66E−04 | 2.58E−04 | 2.80E+00 | 2.43E−02 | 1.46E−03
F6 | Std | 2.35E+00 | 7.36E−03 | 5.01E−01 | 8.59E−02 | 4.51E−04 | 1.81E−04 | 7.33E−01 | 7.15E−02 | 1.97E−03

F7 | Mean | 6.27E−05 | 1.69E−03 | 3.77E−03 | 1.78E−03 | 1.50E−04 | 1.13E−02 | 3.65E−03 | 6.09E−04 | 2.06E−03
F7 | Std | 5.79E−05 | 2.34E−03 | 3.79E−03 | 6.59E−04 | 1.57E−04 | 5.78E−03 | 1.55E−03 | 5.35E−04 | 2.41E−03

F8 | Mean | −2.09E+04 | −2.09E+04 | −1.69E+04 | −1.44E+04 | −2.09E+04 | −1.88E+04 | −9.13E+03 | −4.94E+03 | −2.09E+04
F8 | Std | 2.76E+00 | 8.46E+00 | 3.35E+03 | 9.97E+02 | 1.90E+00 | 7.31E+02 | 8.56E+02 | 4.60E+02 | 4.33E+00

F9 | Mean | 0.00E+00 | 1.14E−04 | 1.89E−15 | 0.00E+00 | 0.00E+00 | 4.46E+00 | 4.15E+00 | 5.36E−07 | 1.37E−04
F9 | Std | 0.00E+00 | 2.00E−04 | 1.04E−14 | 0.00E+00 | 0.00E+00 | 2.95E+00 | 5.93E+00 | 1.21E−06 | 2.18E−04

F10 | Mean | 8.88E−16 | 1.95E−03 | 3.97E−15 | 1.57E−14 | 8.88E−16 | 2.86E−03 | 4.10E−11 | 5.68E−06 | 2.97E−03
F10 | Std | 0.00E+00 | 3.84E−03 | 2.76E−15 | 2.96E−15 | 0.00E+00 | 1.55E−03 | 2.45E−11 | 7.25E−06 | 3.62E−03

F11 | Mean | 0.00E+00 | 1.17E−04 | 7.33E−03 | 0.00E+00 | 0.00E+00 | 2.12E−01 | 4.31E−03 | 3.78E−12 | 3.07E−04
F11 | Std | 0.00E+00 | 3.43E−04 | 4.02E−02 | 0.00E+00 | 0.00E+00 | 3.10E−01 | 8.60E−03 | 7.52E−12 | 7.04E−04

F12 | Mean | 8.94E−06 | 3.25E−05 | 2.65E−02 | 2.83E−03 | 8.85E−06 | 1.12E−06 | 1.07E−01 | 1.71E−02 | 2.53E−05
F12 | Std | 1.86E−05 | 6.24E−05 | 1.50E−02 | 1.14E−02 | 1.42E−05 | 1.10E−06 | 4.05E−02 | 5.35E−02 | 5.19E−05

F13 | Mean | 7.92E−03 | 2.95E−04 | 1.12E+00 | 4.83E−01 | 7.44E−05 | 7.80E−05 | 2.10E+00 | 7.03E−05 | 1.75E−03
F13 | Std | 2.53E−02 | 3.92E−04 | 4.66E−01 | 2.40E−01 | 1.00E−04 | 1.59E−04 | 2.87E−01 | 6.56E−05 | 3.36E−03
Table 5

Comparison results of 13 benchmark functions (d=100).

Function | Metric | SRGS-EHO | EHO | WOA | EO | HHO | CSO | GWO | SFO | IEHO
F1 | Mean | 8.01E−272 | 3.23E−04 | 3.22E−74 | 3.51E−29 | 6.15E−94 | 3.98E+00 | 1.73E−12 | 3.34E−10 | 3.29E−04
F1 | Std | 0.00E+00 | 4.55E−04 | 1.11E−73 | 4.42E−29 | 2.89E−93 | 1.60E+00 | 1.25E−12 | 5.44E−10 | 8.21E−04

F2 | Mean | 1.02E−134 | 1.26E−02 | 1.23E−49 | 2.21E−17 | 1.31E−48 | 6.58E−01 | 4.34E−08 | 1.18E−04 | 1.58E−02
F2 | Std | 4.58E−134 | 1.98E−02 | 5.65E−49 | 3.19E−17 | 5.55E−48 | 1.07E−01 | 1.65E−08 | 1.33E−04 | 1.16E−02

F3 | Mean | 3.34E−250 | 1.37E+00 | 1.09E+06 | 6.74E+00 | 3.73E−58 | 3.21E+04 | 5.53E+02 | 2.48E−06 | 1.16E+00
F3 | Std | 0.00E+00 | 2.47E+00 | 3.08E+05 | 1.53E+01 | 1.90E−57 | 9.70E+03 | 4.84E+02 | 6.36E−06 | 1.65E+00

F4 | Mean | 8.88E−133 | 1.70E−03 | 7.37E+01 | 3.88E−03 | 3.06E−48 | 1.77E+01 | 6.85E−01 | 1.58E−06 | 1.37E−03
F4 | Std | 4.85E−132 | 5.40E−03 | 2.54E+01 | 9.63E−03 | 1.57E−47 | 2.71E+00 | 7.43E−01 | 1.44E−06 | 1.67E−03

F5 | Mean | 3.69E+00 | 9.01E−01 | 9.82E+01 | 9.67E+01 | 3.40E−02 | 8.12E+02 | 9.80E+01 | 5.79E−02 | 3.51E−01
F5 | Std | 1.78E−01 | 1.56E+00 | 1.70E−01 | 1.05E+00 | 4.68E−02 | 2.60E+02 | 5.16E−01 | 5.90E−02 | 1.05E+00

F6 | Mean | 3.08E+00 | 3.80E−03 | 4.25E+00 | 3.75E+00 | 3.06E−04 | 3.50E+00 | 9.89E+00 | 1.61E−01 | 4.90E−03
F6 | Std | 6.64E+00 | 6.58E−03 | 1.49E+00 | 6.54E−01 | 5.20E−04 | 1.43E+00 | 9.98E−01 | 4.87E−01 | 9.87E−03

F7 | Mean | 8.80E−05 | 1.78E−03 | 5.33E−03 | 2.61E−03 | 1.16E−04 | 1.06E−01 | 5.88E−03 | 5.07E−04 | 1.98E−03
F7 | Std | 1.06E−04 | 2.97E−03 | 6.58E−03 | 8.76E−04 | 1.18E−04 | 2.88E−02 | 2.19E−03 | 3.89E−04 | 2.85E−03

F8 | Mean | −4.19E+04 | −4.19E+04 | −3.52E+04 | −2.55E+04 | −4.17E+04 | −3.39E+04 | −1.63E+04 | −6.85E+03 | −4.19E+04
F8 | Std | 5.21E+00 | 2.16E+01 | 5.73E+03 | 1.99E+03 | 1.14E+03 | 1.92E+03 | 1.19E+03 | 6.50E+02 | 2.60E+01

F9 | Mean | 0.00E+00 | 2.73E−03 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.44E+01 | 9.52E+00 | 3.59E−07 | 8.80E−04
F9 | Std | 0.00E+00 | 1.34E−02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.23E+00 | 6.45E+00 | 5.06E−07 | 2.41E−03

F10 | Mean | 8.88E−16 | 1.70E−03 | 4.20E−15 | 3.57E−14 | 8.88E−16 | 7.01E−01 | 1.31E−07 | 4.56E−06 | 2.13E−03
F10 | Std | 0.00E+00 | 2.24E−03 | 2.79E−15 | 5.31E−15 | 0.00E+00 | 3.61E−01 | 4.74E−08 | 5.54E−06 | 2.19E−03

F11 | Mean | 0.00E+00 | 2.45E−04 | 8.45E−03 | 5.77E−04 | 0.00E+00 | 9.21E−01 | 4.21E−03 | 1.12E−11 | 1.02E−03
F11 | Std | 0.00E+00 | 2.96E−04 | 4.63E−02 | 3.16E−03 | 0.00E+00 | 1.21E−01 | 9.79E−03 | 1.63E−11 | 2.45E−03

F12 | Mean | 2.21E−05 | 1.47E−04 | 4.67E−02 | 4.15E−02 | 3.48E−06 | 4.38E−02 | 3.17E−01 | 1.44E−03 | 4.44E−05
F12 | Std | 7.09E−05 | 2.38E−04 | 2.68E−02 | 1.47E−02 | 5.45E−06 | 6.79E−02 | 7.60E−02 | 5.30E−03 | 1.13E−04

F13 | Mean | 5.59E−02 | 7.47E−04 | 2.76E+00 | 5.80E+00 | 1.61E−04 | 6.80E−01 | 6.80E+00 | 1.15E−04 | 1.71E−03
F13 | Std | 2.09E−01 | 1.16E−03 | 1.01E+00 | 8.37E−01 | 2.33E−04 | 2.45E−01 | 4.67E−01 | 1.18E−04 | 3.99E−03

4.4. Analysis of Convergence Curves

To further compare the convergence performance of various algorithms in solving optimization problems, the convergence curves of nine algorithms are plotted and shown in Figure 5. Among them, the dimensions of F1–F13 functions are set to 30. It is observed that the convergence accuracy of SRGS-EHO is more prominent on single-peaked functions (F1–F7), which is a great improvement compared with other algorithms. In the performance of multipeaked functions (F8–F23), SRGS-EHO converges to the global optimum on F8–F11, F14, F17, and F21–F23 and can maintain a better convergence rate. Compared with the original EHO, the convergence performance of SRGS-EHO has been significantly improved. The modifications for the initialized population and the strategy of introducing the golden sine mechanism are proved to be effective. The experimental results indicate that the optimization ability and convergence performance of SRGS-EHO are enhanced.
Figure 5

Convergence curves of different algorithms on 23 benchmark functions.

4.5. Statistical Tests

Garcia et al. [76] pointed out that, when evaluating the performance of metaheuristic algorithms, comparisons based only on the mean and standard deviation are not sufficient. Moreover, inevitable chance factors affect the experimental results during the iterative process [77]. Therefore, statistical tests are necessary to establish the superiority of the proposed algorithm over the other algorithms [78]. In this paper, the Wilcoxon rank-sum test and the Friedman test are chosen to compare the performance of the algorithms. The maximum number of iterations tmax of all algorithms is set to 500, the population size N is set to 30, and the other parameters are set as in Section 4.2. As before, F1–F13 are extended from 30 to 100 dimensions. In the Wilcoxon rank-sum test, the significance level is set to 0.05; when p < 0.05, the difference between the two algorithms is statistically significant. The results of the experiments are shown in Tables 6 and 7. The notation "+/=/−" indicates the number of functions on which the proposed method is superior to, equal to, or worse than the compared method. Since the best algorithm on a benchmark function cannot be compared with itself, it is marked as NaN ("not applicable") on that function.
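The rank-sum comparison can be reproduced with a small, self-contained implementation of the Wilcoxon rank-sum test using the normal approximation (a sketch, not the exact-permutation version; for 30 runs per algorithm, the normal approximation is customary):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):               # assign ranks, averaging over ties
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                  # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))
```

For example, comparing two clearly separated samples of three runs each yields p ≈ 0.05, the borderline of the 5% significance level used in the paper.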
Table 6

The statistical results of Wilcoxon's rank-sum test (F1–F13).

Function | Dimension | EHO | WOA | EO | HHO | CSO | GWO | SFO | IEHO
F1 | 30 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F1 | 50 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F1 | 100 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11

F2 | 30 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F2 | 50 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F2 | 100 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11

F3 | 30 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F3 | 50 | 3.01E−11 | 3.01E−11 | 3.01E−11 | 3.01E−11 | 3.01E−11 | 3.01E−11 | 3.01E−11 | 3.01E−11
F3 | 100 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11

F4 | 30 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F4 | 50 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F4 | 100 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11

F5 | 30 | 9.63E−02 | 3.02E−11 | 3.02E−11 | 8.15E−05 | 6.07E−11 | 3.02E−11 | 1.00E−03 | 2.15E−02
F5 | 50 | 9.00E−01 | 3.02E−11 | 3.02E−11 | 1.78E−04 | 3.34E−11 | 3.02E−11 | 5.94E−02 | 5.89E−01
F5 | 100 | 4.83E−01 | 3.02E−11 | 3.02E−11 | 1.03E−02 | 3.02E−11 | 3.02E−11 | 2.23E−01 | 8.65E−01

F6 | 30 | 2.89E−03 | 5.61E−05 | 5.96E−09 | 6.35E−05 | 3.02E−11 | 1.63E−05 | 8.14E−05 | 5.32E−03
F6 | 50 | 9.79E−05 | 1.61E−06 | 5.69E−01 | 3.96E−08 | 1.16E−07 | 1.11E−06 | 1.86E−03 | 1.68E−03
F6 | 100 | 6.28E−06 | 2.77E−05 | 4.94E−05 | 1.43E−08 | 7.20E−05 | 9.51E−06 | 1.61E−06 | 9.79E−05

F7 | 30 | 9.59E−01 | 1.41E−04 | 1.75E−05 | 1.17E−05 | 2.83E−08 | 3.08E−08 | 3.18E−01 | 1.09E−01
F7 | 50 | 1.05E−01 | 6.36E−05 | 7.20E−05 | 3.32E−06 | 4.98E−11 | 2.83E−08 | 4.55E−01 | 1.68E−03
F7 | 100 | 1.70E−02 | 1.25E−04 | 2.20E−07 | 6.05E−07 | 3.02E−11 | 3.69E−11 | 4.73E−01 | 4.04E−01

F8 | 30 | 3.56E−04 | 4.98E−11 | 3.02E−11 | 1.68E−04 | 1.09E−10 | 3.02E−11 | 3.02E−11 | 2.60E−05
F8 | 50 | 3.71E−01 | 2.87E−10 | 3.02E−11 | 6.20E−01 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 2.64E−01
F8 | 100 | 3.56E−04 | 1.21E−10 | 3.02E−11 | 9.00E−01 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.67E−03

F9 | 30 | 1.21E−12 | 8.15E−02 | 3.34E−01 | NaN | 1.21E−12 | 1.20E−12 | 1.21E−12 | 1.21E−12
F9 | 50 | 1.21E−12 | 3.34E−01 | 3.34E−01 | NaN | 1.21E−12 | 1.21E−12 | 1.21E−12 | 1.21E−12
F9 | 100 | 1.21E−12 | NaN | NaN | NaN | 1.21E−12 | 1.21E−12 | 1.21E−12 | 1.21E−12

F10 | 30 | 1.21E−12 | 8.07E−08 | 6.12E−14 | NaN | 1.21E−12 | 1.16E−12 | 1.21E−12 | 1.21E−12
F10 | 50 | 1.21E−12 | 2.74E−09 | 2.59E−13 | NaN | 1.21E−12 | 1.21E−12 | 1.21E−12 | 1.21E−12
F10 | 100 | 1.21E−12 | 7.78E−10 | 7.78E−13 | NaN | 1.21E−12 | 1.21E−12 | 1.21E−12 | 1.21E−12

F11 | 30 | 1.21E−12 | 3.34E−01 | NaN | NaN | 1.21E−12 | 5.58E−03 | 4.57E−12 | 1.21E−12
F11 | 50 | 1.21E−12 | 3.34E−01 | NaN | NaN | 1.21E−12 | 3.13E−04 | 1.21E−12 | 1.21E−12
F11 | 100 | 1.21E−12 | NaN | 3.34E−01 | NaN | 1.21E−12 | 1.21E−12 | 1.21E−12 | 1.21E−12

F12 | 30 | 9.82E−01 | 4.50E−11 | 6.77E−05 | 2.71E−01 | 3.02E−11 | 3.02E−11 | 8.77E−01 | 8.07E−04
F12 | 50 | 3.55E−01 | 3.02E−11 | 6.74E−06 | 1.37E−01 | 4.64E−03 | 3.02E−11 | 1.49E−01 | 9.12E−03
F12 | 100 | 1.09E−01 | 3.02E−11 | 3.02E−11 | 1.52E−03 | 6.70E−11 | 3.02E−11 | 2.12E−01 | 1.81E−04

F13 | 30 | 3.87E−01 | 3.02E−11 | 2.28E−01 | 1.67E−01 | 3.02E−11 | 3.02E−11 | 8.42E−01 | 5.11E−01
F13 | 50 | 9.00E−01 | 3.02E−11 | 4.50E−11 | 2.12E−01 | 7.29E−03 | 3.02E−11 | 3.33E−01 | 1.02E−01
F13 | 100 | 6.41E−01 | 3.02E−11 | 3.02E−11 | 2.42E−02 | 3.02E−11 | 3.02E−11 | 9.05E−02 | 8.42E−01

+/=/− | 30 | 9/0/4 | 11/0/2 | 10/1/2 | 8/3/2 | 13/0/0 | 13/0/0 | 10/0/3 | 11/0/2
+/=/− | 50 | 8/0/5 | 11/0/2 | 10/1/2 | 7/3/3 | 13/0/0 | 13/0/0 | 9/0/4 | 10/0/3
+/=/− | 100 | 10/0/3 | 11/2/0 | 11/1/1 | 9/3/1 | 13/0/0 | 13/0/0 | 9/0/4 | 10/0/3
Table 7

The statistical results of Wilcoxon's rank-sum test (F14–F23).

Function | EHO | WOA | EO | HHO | CSO | GWO | SFO | IEHO
F14 | 5.57E−03 | 3.67E−03 | 1.01E−11 | 8.77E−01 | 1.61E−01 | 5.19E−07 | 3.02E−11 | 4.68E−02
F15 | 6.36E−05 | 4.62E−10 | 1.07E−07 | 3.02E−11 | 7.74E−06 | 1.11E−06 | 3.02E−11 | 5.01E−02
F16 | 2.97E−01 | 3.02E−11 | 1.14E−11 | 3.02E−11 | 1.69E−09 | 3.02E−11 | 1.73E−07 | 6.63E−01
F17 | 2.07E−02 | 3.02E−11 | 1.21E−12 | 3.68E−11 | 1.01E−08 | 3.69E−11 | 1.31E−08 | 2.05E−03
F18 | 1.86E−03 | 3.02E−11 | 2.29E−11 | 3.02E−11 | 4.20E−10 | 5.57E−10 | 7.66E−05 | 3.18E−04
F19 | 1.08E−02 | 7.39E−11 | 6.32E−12 | 3.34E−11 | 1.17E−09 | 3.02E−11 | 1.86E−09 | 7.73E−02
F20 | 3.87E−01 | 3.02E−11 | 1.82E−11 | 3.34E−11 | 3.02E−11 | 3.02E−11 | 3.20E−09 | 9.82E−05
F21 | 2.81E−02 | 1.03E−06 | 2.45E−01 | 3.02E−11 | 7.66E−03 | 5.79E−01 | 4.62E−10 | 1.02E−05
F22 | 4.12E−01 | 3.09E−06 | 5.75E−05 | 3.02E−11 | 7.96E−03 | 6.31E−01 | 6.01E−08 | 3.18E−01
F23 | 1.71E−01 | 2.19E−08 | 2.83E−10 | 3.02E−11 | 6.77E−05 | 4.12E−01 | 1.70E−08 | 1.08E−02
+/=/− | 6/0/4 | 10/0/0 | 9/0/1 | 9/0/1 | 9/0/1 | 7/0/0 | 10/0/0 | 6/0/4
The results show that, when d=30, the proposed SRGS-EHO outperforms EHO, WOA, EO, HHO, CSO, GWO, SFO, and IEHO on 9, 11, 10, 8, 13, 13, 10, and 11 of the 13 benchmark functions, respectively, while it underperforms them on 4, 2, 2, 2, 0, 0, 3, and 2 functions. When the dimensionality is expanded to 50, SRGS-EHO performs better on 8, 11, 10, 7, 13, 13, 9, and 10 functions and worse on 5, 2, 2, 3, 0, 0, 4, and 3 functions in comparison with the other eight algorithms. When the dimensionality is further expanded to 100, SRGS-EHO remains superior: it outperforms the eight algorithms on 10, 11, 11, 9, 13, 13, 9, and 10 benchmark functions, respectively, and is inferior on 3, 0, 1, 1, 0, 0, 4, and 3. On the 10 fixed-dimensional benchmark functions F14–F23, SRGS-EHO performs better on 6, 10, 9, 9, 9, 7, 10, and 6 functions, respectively, and worse on 4, 0, 1, 1, 1, 0, 0, and 4. These results show that the advantage of SRGS-EHO in solution accuracy is statistically significant.

To make the experiment more convincing, the Friedman test is also performed to screen the differences between the proposed SRGS-EHO and the other algorithms. As one of the most well-known and widely used statistical tests, the Friedman test detects significant differences between the results of two or more algorithms on consecutive data [79]. Specifically, it allows multiple comparisons between different algorithms by computing the statistic F = (12N/(k(k + 1))) × [∑_{j=1}^{k} R_j^2 − k(k + 1)^2/4], where k is the number of algorithms involved in the comparison, N is the number of test cases, and R_j is the average ranking of the jth algorithm. The experimental results of the Friedman tests are shown in Table 8.
According to the results of the Friedman test, the algorithm with the lowest ranking is considered to be the most efficient algorithm. From the results in the table, the proposed SRGS-EHO is always ranked first in different cases (d=30,50,100). Compared with other metaheuristics, the SRGS-EHO has a greater competitive advantage.
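The Friedman statistic itself is straightforward to compute from the average ranks reported in Table 8; a minimal sketch (the function name `friedman_statistic` is ours):

```python
def friedman_statistic(avg_ranks, n_cases):
    """Friedman chi-square statistic from the average ranks R_j of
    k algorithms over n_cases test cases."""
    k = len(avg_ranks)
    return (12.0 * n_cases / (k * (k + 1))) * (
        sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4.0)
```

With identical average ranks for all algorithms the statistic is 0 (no detectable difference), and it grows as the rankings diverge.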
Table 8

Results of Friedman test on 23 benchmark functions.

Function | Dimension | Metric | SRGS-EHO | EHO | WOA | EO | HHO | CSO | GWO | SFO | IEHO
F1–F13 | 30 | Friedman value | 2.462 | 4.682 | 5.218 | 4.231 | 5.769 | 6.769 | 5.615 | 5.231 | 5.077
F1–F13 | 30 | Friedman rank | 1 | 3 | 5 | 2 | 8 | 9 | 7 | 6 | 4
F1–F13 | 50 | Friedman value | 2.615 | 4.846 | 5.308 | 4.692 | 5.385 | 5.462 | 6.154 | 4.154 | 6.385
F1–F13 | 50 | Friedman rank | 1 | 4 | 5 | 3 | 6 | 7 | 8 | 2 | 9
F1–F13 | 100 | Friedman value | 2.279 | 4.154 | 6.538 | 5.149 | 4.615 | 6.154 | 4.308 | 5.846 | 5.923
F1–F13 | 100 | Friedman rank | 1 | 2 | 9 | 5 | 4 | 8 | 3 | 6 | 7

F14–F23 | Fixed | Friedman value | 1.925 | 3.731 | 5.975 | 6.626 | 5.737 | 6.442 | 3.858 | 5.622 | 5.404
F14–F23 | Fixed | Friedman rank | 1 | 2 | 7 | 9 | 6 | 8 | 3 | 5 | 4

4.6. Sensitivity Analysis to Parameters

To examine the effect of different parameters on the performance of SRGS-EHO, a parameter sensitivity analysis is performed in this section. The initial values a and b of the golden section interval in equations (14) and (15), the maximum number of iterations tmax, and the population size N are set to different values. In this experiment, N is set to 5, 20, and 50, tmax is set to 100 and 500, and (a, b) is set to two different pairs of values, (−π, π) and (0, 1). Twelve variants of SRGS-EHO are thus created, each representing a combination of different parameters, as shown in Table 9. It should be noted that these parameters can be adapted to the actual problem.
Table 9

Combination of different parameters in SRGS-EHO.

ParametersSRGS-EHO1SRGS-EHO2SRGS-EHO3SRGS-EHO4SRGS-EHO5SRGS-EHO6SRGS-EHO7SRGS-EHO8SRGS-EHO9SRGS-EHO10SRGS-EHO11SRGS-EHO12
t max 100
500

N 5
20
50

a, b a = −π
b = π
a = 0
b = 1
Each variant is run 30 times, and the reported Friedman test results are shown in Table 10. Analysis of the data in the table shows that the quality of the obtained solutions varies when the parameters are changed. By comparison, the variant SRGS-EHO6, with N=50, tmax=500, a=−π, and b=π, outperforms the other variants and achieves the highest ranking.
Table 10

Comparison results by Friedman test for different versions of SRGS-EHO on 23 benchmark functions.

Functions | SRGS-EHO1 | SRGS-EHO2 | SRGS-EHO3 | SRGS-EHO4 | SRGS-EHO5 | SRGS-EHO6 | SRGS-EHO7 | SRGS-EHO8 | SRGS-EHO9 | SRGS-EHO10 | SRGS-EHO11 | SRGS-EHO12
F1 | 9.6000 | 3.2667 | 9.0667 | 3.5333 | 9.1333 | 3.4000 | 9.6333 | 3.4000 | 9.8000 | 3.8667 | 9.7667 | 3.5333
F2 | 9.2000 | 3.5667 | 9.3333 | 3.3333 | 9.6667 | 2.7667 | 9.6000 | 3.6333 | 9.4000 | 3.8667 | 9.8000 | 3.8333
F3 | 9.9667 | 3.2667 | 9.8000 | 3.5000 | 9.4000 | 3.7333 | 9.1000 | 3.4000 | 9.0333 | 3.4000 | 9.7000 | 3.7000
F4 | 9.4000 | 3.2000 | 9.7000 | 3.3333 | 9.6000 | 3.6667 | 9.0333 | 3.5333 | 9.2333 | 3.3667 | 10.0333 | 3.9000
F5 | 8.3333 | 6.3333 | 7.9667 | 4.7333 | 8.8000 | 3.4667 | 9.2667 | 5.1667 | 7.1333 | 5.2000 | 7.9667 | 3.6333
F6 | 7.5000 | 5.9000 | 8.5000 | 3.9333 | 7.6000 | 4.0333 | 8.2667 | 4.8667 | 8.5000 | 4.8000 | 9.1667 | 4.9333
F7 | 9.5333 | 5.8000 | 8.0000 | 4.3000 | 7.5000 | 3.5667 | 9.2000 | 5.7667 | 8.8000 | 4.1333 | 8.2333 | 3.1667
F8 | 8.8000 | 4.6333 | 7.3667 | 4.8000 | 8.9000 | 4.6667 | 8.2667 | 4.9000 | 8.7333 | 4.0667 | 8.9000 | 3.9667
F9 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
F10 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
F11 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
F12 | 9.4000 | 4.9333 | 7.9667 | 3.7333 | 8.3000 | 3.6000 | 9.1667 | 5.2000 | 8.6000 | 4.8000 | 8.8333 | 3.4667
F13 | 8.6000 | 5.5667 | 8.7667 | 4.4000 | 7.5667 | 3.0667 | 8.5333 | 5.6000 | 9.4333 | 4.4000 | 8.3333 | 3.7333
F14 | 4.7333 | 6.8333 | 5.4333 | 7.2333 | 6.5333 | 7.5000 | 5.6000 | 6.0000 | 7.7333 | 7.0333 | 6.5333 | 6.8333
F15 | 8.6667 | 5.4333 | 8.5333 | 4.2667 | 8.4000 | 4.2333 | 7.5667 | 5.0667 | 8.9000 | 4.4000 | 7.7333 | 4.8000
F16 | 8.2000 | 8.8667 | 6.6333 | 4.4000 | 5.4000 | 4.5000 | 9.0667 | 8.9000 | 8.0667 | 4.8333 | 5.1333 | 4.0000
F17 | 8.8000 | 5.9667 | 9.4000 | 5.1667 | 6.3000 | 4.3667 | 8.6333 | 5.7667 | 7.2000 | 4.6667 | 7.4667 | 4.2667
F18 | 9.2000 | 8.2000 | 6.2333 | 5.5667 | 4.2667 | 4.5333 | 9.6667 | 8.3667 | 6.8333 | 6.5000 | 4.0667 | 4.5667
F19 | 10.4333 | 8.8667 | 6.0333 | 5.4000 | 4.3667 | 3.7333 | 8.6000 | 9.5333 | 6.1000 | 6.4667 | 3.7667 | 4.7000
F20 | 9.4667 | 8.6667 | 7.5000 | 6.0000 | 3.6000 | 4.7333 | 9.2000 | 8.6333 | 6.0667 | 6.4333 | 3.3667 | 4.3333
F21 | 8.2000 | 4.6000 | 7.8333 | 4.5333 | 8.5333 | 4.7333 | 9.4667 | 5.4000 | 7.2333 | 4.7000 | 8.9000 | 3.8667
F22 | 8.0333 | 5.4000 | 8.1667 | 4.9000 | 7.2333 | 4.3000 | 9.4000 | 5.3667 | 8.2667 | 4.4667 | 8.7000 | 3.7667
F23 | 8.7667 | 4.9000 | 8.2000 | 4.7667 | 9.1333 | 3.7667 | 8.4000 | 4.6000 | 7.7333 | 4.3667 | 8.9667 | 4.4000
Average rank | 7.7319 | 5.0957 | 7.1058 | 4.1232 | 6.6623 | 3.7116 | 7.7681 | 5.0478 | 7.2087 | 4.2942 | 6.8855 | 3.7565
Overall rank | 11 | 6 | 9 | 3 | 7 | 1 | 12 | 5 | 10 | 4 | 8 | 2

4.7. Analysis of the Modifications

In order to analyze the impact of the newly introduced modules on algorithm performance, comparison experiments are conducted in this section. In SRGS-EHO, the initialized population is first generated by specular reflection learning with Gaussian perturbation (SR-GM); second, the golden sine operator (GSO) is introduced to optimize the positions of the patriarchs. For a simple analysis, four algorithms, EHO, SR-GM + EHO, GSO + EHO, and SRGS-EHO, are compared on different problems. The strategies are combined as shown in Table 11. Six representative benchmark functions are selected: F1, F5, F10, F14, F15, and F17. The population size N is set to 30 and the maximum number of iterations tmax is 500.
Table 11

Combination of different strategies in SRGS-EHO.

Algorithm | SR-GM | GSO
EHO | 0 | 0
SR-GM + EHO | 1 | 0
GSO + EHO | 0 | 1
SRGS-EHO | 1 | 1
Figure 6 shows the convergence curves of the four algorithms. It can be seen that the convergence rate of SR-GM + EHO is generally higher than that of EHO, owing to the optimized initialization based on Gaussian-perturbed specular reflection learning. The introduction of the golden sine operator gives GSO + EHO a significant improvement in search accuracy and breadth. By combining the two strategies, SRGS-EHO improves the convergence rate and the search accuracy simultaneously.
Figure 6

Convergence curves of different strategy combinations on 6 benchmark functions. (a) F1, (b) F5, (c) F10, (d) F14, (e) F15, and (f) F17.

The mean values and Friedman ranking results obtained by the different combinations of strategies are shown in Table 12, where bold values indicate the best solution obtained on each benchmark function. According to these results, all three enhanced versions outperform the original algorithm, and both SR-GM + EHO and GSO + EHO outperform EHO on five functions. On the one hand, SR-GM + EHO and GSO + EHO each improve the accuracy and breadth of the search in different respects compared with EHO. On the other hand, the performance of SRGS-EHO is enhanced comprehensively by the effective combination of SR-GM and GSO. These results verify that the modifications to EHO are effective, and SRGS-EHO is therefore adopted as the final optimized version.
Table 12

Comparison results for different combinations on 6 benchmark functions. 

Functions | EHO | SR-GM + EHO | GSO + EHO | SRGS-EHO
F1 | 3.54E−05 | 4.99E−07 | 4.73E−277 | 3.60E−313
F5 | 1.03E−01 | 1.99E−02 | 2.71E−02 | 3.32E−03
F10 | 7.08E−04 | 1.22E−03 | 8.88E−16 | 8.88E−16
F14 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.99E−01
F15 | 1.67E−03 | 1.67E−03 | 1.68E−03 | 1.67E−03
F17 | 4.18E−01 | 3.99E−01 | 3.99E−01 | 3.98E−01
Average rank | 2.8536 | 2.8 | 2.1444 | 1.7852
Overall rank | 4 | 3 | 2 | 1
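For reference, the ranking procedure behind the "Average rank" and "Overall rank" rows can be sketched as follows. The values below are the column means copied from Table 12; ranking these means per function reproduces the final ordering, although the table's exact average-rank values are computed from individual runs rather than means.

```python
# Means from Table 12; rows are F1, F5, F10, F14, F15, F17 and columns are
# EHO, SR-GM + EHO, GSO + EHO, SRGS-EHO.
means = [
    [3.54e-05, 4.99e-07, 4.73e-277, 3.60e-313],
    [1.03e-01, 1.99e-02, 2.71e-02,  3.32e-03],
    [7.08e-04, 1.22e-03, 8.88e-16,  8.88e-16],
    [9.98e-01, 9.98e-01, 9.98e-01,  9.99e-01],
    [1.67e-03, 1.67e-03, 1.68e-03,  1.67e-03],
    [4.18e-01, 3.99e-01, 3.99e-01,  3.98e-01],
]

def ranks(row):
    """Friedman-style ranks with ties shared (1 = smallest value)."""
    out = []
    for v in row:
        less = sum(1 for w in row if w < v)
        equal = sum(1 for w in row if w == v)
        out.append(less + (equal + 1) / 2)
    return out

# Average the per-function ranks down each column, then order the columns.
avg_rank = [sum(col) / len(means) for col in zip(*map(ranks, means))]
order = sorted(range(4), key=lambda j: avg_rank[j])  # best column first
```

On these means the ordering comes out as SRGS-EHO, GSO + EHO, SR-GM + EHO, EHO, matching the "Overall rank" row of Table 12.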

5. Applications of SRGS-EHO for Solving Engineering Problems

The applicability of SRGS-EHO is further tested on engineering design problems, and the results are described here. In this paper, two constrained practical engineering test problems, namely, the pressure-vessel design and tension/compression spring design problems, are selected, and the results obtained by SRGS-EHO are compared with those of other algorithms to highlight its superiority. Notably, these two cases include inequality constraints, so constraint handling methods must be used in SRGS-EHO and the compared methods. Constraint handling methods fall into five categories: penalty function methods, hybrid methods, separation of objective function and constraints, repair algorithms, and special operators [80]. Penalty functions themselves come in different types, including static, annealing, adaptive, coevolutionary, and death penalties. Among these, the death penalty is a popular and particularly simple constraint handling method: search agents that violate any constraint receive the same penalty, i.e., they are assigned a very poor fitness value. This approach requires no modification of the original algorithm; the constraints are folded into the fitness function and can be handled efficiently by most optimization algorithms. Therefore, in this study, SRGS-EHO is combined with the death penalty approach for solving constrained engineering problems. It is worth noting that the objective of solving real engineering problems is to reach the global optimum at the lowest possible cost. Based on this consideration, each compared algorithm is run 10 times in this section, and the best solution found, together with its fitness value, is reported as the final comparison result.
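A minimal sketch of the death-penalty scheme described above (hypothetical helper names, not the paper's code): infeasible candidates are simply assigned an infinite cost, so the underlying optimizer itself is untouched.

```python
import math

# Death-penalty wrapper: any constraint violation maps the candidate to an
# effectively infinite cost. `constraints` holds functions g_i with the
# convention g_i(x) <= 0 for feasible x.
def death_penalty(objective, constraints):
    def fitness(x):
        if any(g(x) > 0 for g in constraints):
            return math.inf  # the same penalty for any violation
        return objective(x)
    return fitness

# Toy usage: minimise x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = death_penalty(lambda x: x * x, [lambda x: 1 - x])
```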

5.1. Pressure-Vessel Design Problem

The pressure-vessel design problem [81] is a common engineering design problem first proposed by Kannan and Kramer in 1994, which is shown in Figure 7. The objective of this optimization problem is to minimize the manufacturing cost of the pressure vessel. Four variables are involved: the thickness of the shell Ts, the thickness of the head Th, and the inner radius R and length L of the cylindrical section. The first two variables are discrete. In addition, the problem contains four constraints, three of which are linear and one nonlinear. The mathematical form of the problem is expressed as follows:
Figure 7

Pressure-vessel design.
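The equation itself is not reproduced in this extract; the widely used literature formulation of this benchmark, in code form and with the variable order x = [Ts, Th, R, L] and constraints written as g_i(x) ≤ 0, is sketched below.

```python
import math

# Standard cost function of the pressure-vessel benchmark.
def vessel_cost(x):
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

# Constraints in the usual literature statement, g_i(x) <= 0.
vessel_constraints = [
    lambda x: -x[0] + 0.0193 * x[2],                          # shell thickness
    lambda x: -x[1] + 0.00954 * x[2],                         # head thickness
    lambda x: (-math.pi * x[2] ** 2 * x[3]
               - (4.0 / 3.0) * math.pi * x[2] ** 3
               + 1296000.0),                                  # working volume
    lambda x: x[3] - 240.0,                                   # length limit
]

# Sanity check against the DE row of Table 13 (reported cost 6059.7258).
de_cost = vessel_cost([0.8125, 0.4375, 42.098353, 176.637751])
```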

SRGS-EHO is applied to optimize the problem and compared with nine other algorithms, some of which were introduced earlier: EO [17], WOA [42], HHO [45], DE [9], evolution strategies (ESs) [82], PSO [23], the opposition-based sine cosine algorithm (OBSCA) [83], the improved sine cosine algorithm (ISCA) [22], and the enhanced whale optimization algorithm (EWOA) [84]. The obtained results are shown in Table 13. According to the results, SRGS-EHO obtains the best solution among the ten algorithms. The four variables are optimized to 0.850468, 0.420387, 44.065679, and 153.694517, and the optimum cost obtained is 6020.753071.
Table 13

Comparison results of pressure-vessel design problem.

Algorithm | Ts | Th | R | L | Optimum cost
SRGS-EHO | 0.850468 | 0.420387 | 44.065679 | 153.694517 | 6020.753071
EO | 0.898401 | 0.444080 | 46.549292 | 128.317630 | 6124.239342
WOA | 1.040632 | 0.511766 | 50.905188 | 91.322007 | 6775.807533
HHO | 1.079160 | 0.543392 | 55.469516 | 60.115203 | 6715.912450
DE | 0.812500 | 0.437500 | 42.098353 | 176.637751 | 6059.725800
ES | 0.812500 | 0.437500 | 42.098087 | 176.640518 | 6059.745600
PSO | 0.812500 | 0.437500 | 42.091266 | 176.746500 | 6061.077700
OBSCA | 3.000000 | 0.875000 | 66.148100 | 159.303600 | 6958.988200
ISCA | 0.8125 | 0.4375 | 42.09842 | 176.6382 | 6059.745738
EWOA | 1.0625 | 0.50 | 58.17399 | 44.38294 | 6177.754912

5.2. Tension/Compression String Design Problem

This problem was described by Arora [85] and Belegundu [86] for the purpose of minimizing the weight of a tension/compression spring. Three variables are shown in Figure 8, namely, the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). SRGS-EHO is applied to solve the problem and compared with nine other algorithms, namely, the salp swarm algorithm (SSA) [44], WOA [42], PSO [23], GA [87], moth-flame optimization (MFO) [88], GWO [40], the enhanced WOA (EWOA) [84], IEHO [89], and the reinforced variant of WOA (RDWOA) [90]. The results are presented in Table 14. The mathematical form of the tension/compression spring design problem is expressed as follows:
Figure 8

Tension/compression spring design problem.
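Again the equation is missing from this extract; the standard literature formulation of the benchmark, in code form with x = [d, D, N] and constraints g_i(x) ≤ 0, is sketched below.

```python
# Standard weight function of the tension/compression spring benchmark.
def spring_weight(x):
    d, D, N = x
    return (N + 2.0) * D * d ** 2

# Constraints in the usual literature statement, g_i(x) <= 0.
spring_constraints = [
    lambda x: 1.0 - x[1] ** 3 * x[2] / (71785.0 * x[0] ** 4),  # deflection
    lambda x: ((4.0 * x[1] ** 2 - x[0] * x[1])
               / (12566.0 * (x[1] ** 3 * x[0] - x[0] ** 4))
               + 1.0 / (5108.0 * x[0] ** 2) - 1.0),            # shear stress
    lambda x: 1.0 - 140.45 * x[0] / (x[1] ** 2 * x[2]),        # surge frequency
    lambda x: (x[0] + x[1]) / 1.5 - 1.0,                       # outer diameter
]

# Sanity check against the MFO row of Table 14 (reported weight 0.012667).
mfo_weight = spring_weight([0.051994, 0.364109, 10.868422])
```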

As can be observed from Table 14, the optimum weight obtained by SRGS-EHO is 0.012044 when d, D, and N are optimized to 0.061414, 0.638027, and 3.004913, respectively. This indicates that SRGS-EHO has a superior global optimization capability compared to other algorithms.
Table 14

Comparison results of the tension/compression spring design problem.

Algorithms | d | D | N | Optimum weight
SRGS-EHO | 0.061414 | 0.638027 | 3.004913 | 0.012044
SSA | 0.051207 | 0.345215 | 12.004032 | 0.012676
WOA | 0.053234 | 0.395036 | 9.351476 | 0.012708
PSO | 0.015728 | 0.357644 | 11.244543 | 0.012675
GA | 0.051480 | 0.351661 | 11.632201 | 0.012705
MFO | 0.051994 | 0.364109 | 10.868422 | 0.012667
GWO | 0.051690 | 0.356760 | 11.288110 | 0.012662
EWOA | 0.051961 | 0.363306 | 10.91296 | 0.012667
EHOI | 0.051594 | 0.354438 | 11.423880 | 0.012665
RDWOA | 0.0517112 | 0.35725 | 11.257788 | 0.012665

6. Conclusions and Future Work

In this paper, Gaussian perturbation-based specular reflection learning and the golden sine mechanism are introduced to address the shortcomings of the original EHO. With the proposed method, the population initialization and the clan-leader position-update strategy are optimized, which makes exploration and exploitation more efficient and thus enhances algorithm performance. Experiments on 23 benchmark functions show that the proposed SRGS-EHO achieves excellent optimization accuracy and stability compared with other metaheuristic algorithms, while the convergence rate is also improved. In addition, SRGS-EHO is applied to solve real-world engineering design problems, namely, the pressure-vessel design and tension/compression spring design problems; the comparisons with other algorithms demonstrate its superiority and applicability. At the same time, the algorithm has great potential for dealing with other complex problems. In the future, SRGS-EHO can be further developed and refined for practical problems, and it can be extended to discrete and multiobjective optimization problems, where more encouraging results may be achieved.
  3 in total

1.  Performance of Elephant Herding Optimization and Tree Growth Algorithm Adapted for Node Localization in Wireless Sensor Networks.

Authors:  Ivana Strumberger; Miroslav Minovic; Milan Tuba; Nebojsa Bacanin
Journal:  Sensors (Basel)       Date:  2019-06-01       Impact factor: 3.576

2.  Detection of dynamic protein complexes through Markov Clustering based on Elephant Herd Optimization Approach.

Authors:  R Ranjani Rani; D Ramyachitra; A Brindhadevi
Journal:  Sci Rep       Date:  2019-07-31       Impact factor: 4.379

3.  Performance up-gradation of Symbiotic Organisms Search by Backtracking Search Algorithm.

Authors:  Sukanta Nama; Apu Kumar Saha; Sushmita Sharma
Journal:  J Ambient Intell Humaniz Comput       Date:  2021-04-11
  1 in total

1.  A3C-TL-GTO: Alzheimer Automatic Accurate Classification Using Transfer Learning and Artificial Gorilla Troops Optimizer.

Authors:  Nadiah A Baghdadi; Amer Malki; Hossam Magdy Balaha; Mahmoud Badawy; Mostafa Elhosseini
Journal:  Sensors (Basel)       Date:  2022-06-02       Impact factor: 3.847

