
Modified Harris Hawks Optimization Algorithm with Exploration Factor and Random Walk Strategy.

Meijia Song1, Heming Jia2, Laith Abualigah3,4, Qingxin Liu5, Zhixing Lin1, Di Wu6, Maryam Altalhi7.   

Abstract

One of the most popular population-based metaheuristic algorithms is Harris hawks optimization (HHO), which imitates the hunting mechanisms of Harris hawks in nature. Although HHO can obtain optimal solutions for specific problems, it can stagnate in local optima. In this paper, an improved Harris hawks optimization named ERHHO is proposed for solving global optimization problems. Firstly, we introduce the tent chaotic map in the initialization stage to improve the diversity of the initial population. Secondly, an exploration factor is proposed to tune key parameters and improve the exploration ability. Finally, a random walk strategy is proposed to further enhance the exploitation capability of HHO and help search agents escape local optima. Results from systematic experiments conducted on 23 benchmark functions and the CEC2017 test functions demonstrate that the proposed method provides more reliable solutions than other well-known algorithms.
Copyright © 2022 Meijia Song et al.


Year:  2022        PMID: 35535189      PMCID: PMC9078797          DOI: 10.1155/2022/4673665

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

In recent years, because of their low computational cost, simplicity, flexibility, and gradient-free mechanisms [1], metaheuristic algorithms (MAs) [2-5] have attracted a rising amount of interest. In most cases, MAs are motivated by evolution [6-8], human behavior [9-12], animal behavior [13-16], or physics [17-21]. Kennedy et al. [22], inspired by the regularity of flock foraging behavior, proposed particle swarm optimization (PSO), which is characterized by fast convergence and few parameters. Mirjalili et al. [23] proposed grey wolf optimization (GWO), based on the grey wolves' hierarchy system and hunting strategies. The whale optimization algorithm (WOA) [24] simulates a whale's random search and predation, bubble-net attack, and shrinking encirclement behavior. The salp swarm algorithm (SSA) [25] is inspired by the leader guiding followers' behavior of salp chains moving toward food. The sine cosine algorithm (SCA) [26] was proposed to optimize an aircraft wing design problem by searching inward or outward with sine and cosine functions, which can deeply exploit the best position and finally reach the global optimum. The slime mould algorithm (SMA) [27] is motivated mainly by changes in slime mould morphology during foraging. In the field of MAs, the no free lunch (NFL) theorem [28] shows that there is no one-size-fits-all solution to all optimization challenges. Therefore, many researchers are interested not only in proposing new algorithms but also in improving classical ones. Zheng et al. [29] proposed the improved remora optimization algorithm (IROA), which improves the remora optimization algorithm (ROA) by introducing an autonomous foraging mechanism based on the way the remora finds food on its own. Abualigah et al.
[30] combined the arithmetic optimization algorithm (AOA) with SCA's operators to enhance the local search ability of AOA and verified its effectiveness experimentally. Jia et al. [31] proposed CMSRSSMA, an improvement of SMA that embeds a composite mutation strategy (CMS) and a restart strategy (RS); the CMS enhances population diversity, and the RS helps avoid local optima. Liu et al. [32] presented a modified remora optimization algorithm (MROA) with Brownian motion and lens opposition-based learning, which is suitable for multilevel thresholding image segmentation; the new algorithm also introduces nonlinear escaping-energy parameters and a random-opposition-based learning strategy to make it more competitive. Almotairi and Abualigah [33] proposed HRSA, which hybridizes the original reptile search algorithm (RSA) and ROA through a novel transition method, and applied it to data clustering problems. Zamfirache et al. [34] applied an algorithm hybridizing policy iteration (PI) and GWO to neural networks and obtained good results in NN training and complex optimization problems. Pozna et al. [35] proposed PF-PSO, which hybridizes the particle filter (PF) and PSO, and applied it to optimize the position control of integral-type servo systems. In 2019, Heidari et al. [36] proposed the HHO algorithm, inspired by the Harris hawks' predation behavior in nature, which includes three stages: exploration, the transition from exploration to exploitation, and exploitation. HHO has simple principles, few parameters, and strong local optimization ability, and it has been applied to image segmentation [37], neural networks [38], electric machine control [39], and other fields.
However, like other MAs, HHO suffers from limited optimization accuracy, slow convergence, and a tendency to fall into local optima. Therefore, many scholars have improved the HHO algorithm from different perspectives. Ma et al. [40] used the Chan algorithm to calculate the initial solution and replace an individual's position, reducing unnecessary exploration and improving the algorithm's convergence speed. Houssein et al. [41] introduced cooperative crossover and mutation operators, proposed the HHOCM optimization algorithm, and applied opposition-based learning to generate the initial population effectively, which enhanced the exploration ability. Tang et al. [42] introduced the tent chaotic map, an elite hierarchy system, a nonlinear escaping-energy strategy, and a Gaussian random walk strategy to improve the convergence speed and accuracy of the algorithm. Jia et al. [43] introduced a mutation strategy and a dynamic control parameter for calculating the escaping energy in the exploration stage and achieved good results by regulating different parameters. These improvement strategies have enhanced the performance of the HHO algorithm, but significant room for improvement remains. We propose the ERHHO algorithm to overcome some weaknesses of HHO. In different experiments, the proposed algorithm was evaluated against classic algorithms such as SMA and SSA and against HHO variants such as DHHO and CEHHO. The results show that the proposed algorithm outperforms the competitive algorithms and, with minor changes, enhances the ability to jump out of local optima.
In particular, this paper makes the following main contributions:
(i) The tent chaotic map is introduced to improve the quality of the initial population's locations
(ii) An exploration factor is proposed to improve the exploration ability
(iii) A random walk strategy is proposed to promote the convergence speed and accuracy
(iv) Simulation experiments are conducted on 23 standard test functions
(v) Real-world tests are based on the CEC2017 test functions and five classical engineering problems
The remainder of this document is structured as follows. A quick summary of HHO is given in Section 2. Section 3 describes the tent chaotic map, the exploration factor, the random walk strategy, and the ERHHO algorithm. The results and discussion are given in Section 4, covering the benchmark functions and real-world problems, including the CEC2017 functions and engineering design problems. Section 5 contains the conclusions and prospects.

2. Harris Hawks Algorithm

The HHO algorithm is motivated by the different strategies hawks use to explore and attack their prey. HHO is a population-based optimization technique that consists of three stages: exploration, the transition from exploration to exploitation, and exploitation. The different phases of HHO are shown in Figure 1.
Figure 1

Different phases of HHO [36].

2.1. Exploration Stage

At this stage, hawks perch in random places based on other members or the rabbit's location, modeled as follows:

X(t + 1) = Xrand(t) − r1|Xrand(t) − 2r2X(t)|, q ≥ 0.5
X(t + 1) = (Xrabbit(t) − Xm(t)) − r3(LB + r4(UB − LB)), q < 0.5

where X(t + 1) represents the new position of the hawks in the next iteration, Xrabbit(t) is the position of the prey, and X(t) is the current position of the hawks. | · | denotes the absolute value of the elements. r1, r2, r3, r4, and q are random numbers in the interval (0, 1). UB and LB are the upper and lower bounds of the variables. Xrand(t) is the position of a randomly selected hawk, and Xm(t) is the average position of the current hawk population.
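As an illustration, the two perching rules of the exploration stage can be sketched in Python. This is a minimal sketch: the array layout, function name, and random-number handling are our own assumptions, while the two update rules follow the standard HHO exploration equations.

```python
import numpy as np

def exploration_step(X, i, X_rabbit, lb, ub, rng):
    """One exploration-stage update for hawk i (a sketch of HHO's two perching rules)."""
    q, r1, r2, r3, r4 = rng.random(5)
    X_rand = X[rng.integers(len(X))]   # position of a randomly selected hawk
    X_mean = X.mean(axis=0)            # average position of the population
    if q >= 0.5:
        # perch relative to a random member of the population
        return X_rand - r1 * np.abs(X_rand - 2 * r2 * X[i])
    # perch relative to the rabbit and the population's mean position
    return (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
```

In a full optimizer this update would be applied to every hawk while |E| ≥ 1, with out-of-bound components clipped back into [LB, UB].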

2.2. Transformation of Exploration and Exploitation

The escape energy of the prey is a major factor in the transition stage and is evaluated as

E = 2E0(1 − t/T),

where t is the current iteration; E0 is the initial energy of the prey, varying randomly between −1 and 1; and T is the maximum number of iterations.
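In code, the energy decay is a one-liner (the function name is ours; the formula E = 2E0(1 − t/T) with E0 drawn from (−1, 1) is the standard HHO definition):

```python
import random

def escape_energy(t, T):
    """Escaping energy of the prey: E = 2 * E0 * (1 - t / T),
    with E0 drawn uniformly from (-1, 1) at each iteration."""
    E0 = 2 * random.random() - 1
    return 2 * E0 * (1 - t / T)
```

Since |E| shrinks linearly over the run, the algorithm gradually shifts from exploration (|E| ≥ 1) to exploitation (|E| < 1).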

2.3. Exploitation Stage

At this point, the hawks assault the prey with four chasing strategies, chosen according to the escaping energy (E) and the chance of escape (r). When r ≥ 0.5 and |E| ≥ 0.5, the prey has enough energy but fails to escape, and the hawks conduct a soft besiege:

X(t + 1) = ΔX(t) − E|J · Xrabbit(t) − X(t)|,
ΔX(t) = Xrabbit(t) − X(t),

where ΔX(t) is the difference between the position of the prey and the position of the hawk at iteration t, and J = 2(1 − r5) represents the prey's jump strength, which changes randomly at each iteration; r5 is a random number between 0 and 1. When r ≥ 0.5 and |E| < 0.5, the prey has low escaping energy and fails to escape, and the hawks apply a hard besiege:

X(t + 1) = Xrabbit(t) − E|ΔX(t)|.

When r < 0.5 and |E| ≥ 0.5, the hawks hunt through a smarter encirclement called soft besiege with progressive rapid dives:

Y = Xrabbit(t) − E|J · Xrabbit(t) − X(t)|,
Z = Y + S × LF(D),

where D is the problem's dimension, S denotes a random vector of size 1 × D, and LF is the Levy flight function, defined as

LF(x) = 0.01 × (u × σ)/|v|^(1/β), σ = [Γ(1 + β)sin(πβ/2)/(Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β),

where u and v are random normally distributed vectors of size 1 × D, β is a constant set to 1.5, and Γ is the standard Gamma function. The hawks' positions are then updated to Y if it yields a better fitness than X(t), and to Z otherwise. When the prey's energy is depleted (r < 0.5 and |E| < 0.5), a hard besiege with progressive rapid dives is established: Y and Z are calculated as above with the average position Xm(t) in place of X(t), and the same updating rule is applied.
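The Levy flight step used in the rapid dives is commonly generated with Mantegna's algorithm; the following sketch uses our own function name, while the σ formula and the 0.01 scale factor follow the original HHO paper:

```python
import math
import random

def levy_flight(dim, beta=1.5):
    """Levy flight step LF(D) via Mantegna's algorithm, as used in HHO's
    progressive rapid dives (a sketch; returns a list of length dim)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    step = []
    for _ in range(dim):
        u = random.gauss(0.0, sigma)   # numerator ~ N(0, sigma^2)
        v = random.gauss(0.0, 1.0)     # denominator ~ N(0, 1)
        step.append(0.01 * u / abs(v) ** (1 / beta))
    return step
```

A dive candidate is then obtained as Z = Y + S × LF(D), with S a random vector in (0, 1).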

3. Proposed Algorithm

3.1. Tent Chaotic Map

For the past few years, many scholars have proved that chaotic maps [44, 45] are capable of improving the search process of a population-based metaheuristic algorithm. In general, chaotic maps are usually introduced into one or several processes such as initial population, exploration, or exploitation stage. The images of the 10 most commonly used chaotic maps are shown in Figure 2.
Figure 2

Ten commonly used chaotic maps.

One of the main purposes of this paper is to enhance the diversity of the initial population. The initialization of the locations influences the diversity of the population and the stability of the algorithm. The HHO algorithm only guarantees the randomness of the population positions at the initialization stage, but randomness does not imply uniformity. A chaotic sequence has a certain ergodicity and high randomness, and chaotic mapping can generate uniformly distributed random numbers between 0 and 1. We tested the ten listed chaotic maps and verified that the tent chaotic map is the most appropriate for our modified algorithm; its characteristics and randomness can effectively improve performance by transforming the initial positions of the hawks. The tent map is described mathematically as

x(k + 1) = x(k)/a, 0 ≤ x(k) < a,
x(k + 1) = (1 − x(k))/(1 − a), a ≤ x(k) ≤ 1.

We enhance the diversity of the initial population by modifying the initial positions through this map, where Xnew represents the new position of the hawks after chaotic mapping, X is the current position of the hawks, and a is set to 0.7.
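A sketch of tent-map initialization is given below. The tent iteration itself is standard; the function name and the scaling of the chaotic values into [LB, UB] are our own reading of the paper.

```python
import numpy as np

def tent_init(n_hawks, dim, lb, ub, a=0.7, seed=None):
    """Initialize hawk positions from a tent chaotic sequence and scale
    them into the search space [lb, ub] (a sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_hawks, dim))                   # chaotic seeds in (0, 1)
    x = np.where(x < a, x / a, (1 - x) / (1 - a))    # one tent-map iteration
    return lb + x * (ub - lb)                        # map into [lb, ub]
```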

3.2. Exploration Factor

In the exploration stage, position updating in the HHO algorithm is mainly governed by equations (1) and (2), where r1 and r3 are random values in the range (0, 1). Although these settings randomize each step of the global search, they lack variability. In this phase, the original HHO algorithm assumes that the hawks can track and detect the prey with their powerful eyes, but occasionally the prey cannot be seen easily, and a hawk may detect prey only after several hours. Accordingly, the parameters should be made more flexible. We can regard r1 and r3 as the step length: the larger the values, the faster the hawks move, and vice versa. There are two possibilities for a hawk to find prey: immediate detection or a long search. For the first situation, the step length should be random; for the second, its overall trend should decrease. Because the probability of finding prey increases as time goes by, the hawks should explore a wide range with large steps at first, and a meticulous search should be applied in the late iterations. Thus, we update r1 and r3 with the exploration factor

(b × rand() − b/2) × cos((π/2) × (t/T)),

and equation (1) is updated accordingly, where b is set to 2, which achieves a pleasing effect in experimental tests. The term (b × rand() − b/2) provides the randomness of the step length by generating random numbers in the interval (−b/2, b/2), and the cos function forms a nonlinear convergence from 1 to 0 over the iterations (see Figure 3).
Figure 3

Cos function convergence curve.

In a word, the exploration factor expands the range of the step length from (0, 1) to (−b/2, b/2) and helps the exploration process gradually shift from a wide range to a small one as the number of iterations increases (see Figure 4), while preserving the randomness of the step length.
Figure 4

Curve of exploration factor.
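The factor described in Section 3.2 can be sketched as follows (the function name is ours; the expression combines the two components described in the text):

```python
import math
import random

def exploration_factor(t, T, b=2.0):
    """Exploration factor replacing r1 and r3 in the exploration stage:
    (b * rand() - b/2) * cos(pi/2 * t/T).  Early on it ranges over
    (-b/2, b/2); it decays nonlinearly toward 0 as t approaches T."""
    return (b * random.random() - b / 2) * math.cos(math.pi / 2 * t / T)
```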

3.3. Random Walk Strategy

In the exploitation stage of the HHO algorithm, the Harris hawks update their positions through four pursuit strategies. Although this increases the possibility of exploration, entering the next iteration without interference can easily lead the algorithm into a local optimum. To ameliorate this problem, common strategies such as the Gaussian random walk [42], the Levy flight function, and Brownian motion [16, 46] are often applied. These strategies stay within a stable range of values with high probability and produce drastic changes with low probability, and all of them improve a method by generating a deviation. In this paper, we argue that in the early iterations the deviation should be larger, giving more chances to jump out of local optima, while in the late iterations a smaller deviation helps locate the optimal result more precisely. We therefore designed the random walk step to taper gradually with the iterations. Accordingly, a random walk strategy is proposed that is activated when the fitness value is identical to that of the previous iteration. The strategy deviates a hawk's position according to a varying parameter whose value depends on (c × rand() − c/2) × cos((π/2) × (t/T)) and decreases with the iterations. With c = 6, we obtain the best performance in experimental tests. X(i) is the new position after applying the random walk strategy, and we retain the better result for the next iteration through a greedy strategy.
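Since the paper's exact perturbation equation is not reproduced here, the following is one plausible reading of the strategy: the decaying parameter (c × rand() − c/2) × cos((π/2) × (t/T)) and the greedy selection follow the text, while the multiplicative form of the deviation is an assumption of ours.

```python
import math
import random

def random_walk(X_i, fitness, t, T, c=6.0):
    """Stall-triggered random walk with greedy selection (a sketch;
    the multiplicative perturbation form is assumed, not from the paper)."""
    p = (c * random.random() - c / 2) * math.cos(math.pi / 2 * t / T)
    X_new = [x + p * x for x in X_i]   # deviate the stalled position
    # greedy strategy: keep whichever position has the better fitness
    return X_new if fitness(X_new) < fitness(X_i) else list(X_i)
```

The greedy step guarantees the hawk never moves to a worse position, so the walk can only help escape a stall.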

3.4. The Details of ERHHO

HHO possesses strong local exploitation ability but insufficient global exploration. The switch from exploration to exploitation is based on the prey's escaping energy. In the early iterations, which correspond to the exploration period, the population diversity is insufficient and the convergence speed is slow. As the number of iterations grows, the prey's energy drops and the algorithm enters the local exploitation period, where four different hunting techniques are applied based on the prey's energy and the likelihood of escape. Therefore, the tent chaotic map was introduced to enhance population diversity; then, the critical parameters of the exploration phase were optimized with the exploration factor; and, as a component of the exploitation phase, a random walk strategy was introduced to enhance the ability to jump out of local optima. Together, these approaches improve the convergence speed and accuracy and effectively enhance the algorithm's overall optimization performance. The new Harris hawks optimization algorithm is called ERHHO. Figure 5 depicts the summary flowchart, and Algorithm 1 presents the pseudocode of ERHHO.
Figure 5

Flowchart of ERHHO.

Algorithm 1

Pseudocode of ERHHO.

3.5. Computational Complexity Analysis

Initialization, position updating, and fitness evaluation are the three essential components of ERHHO. Generating the positions has a computational complexity of O(N × D), where N denotes the population size and D the dimension of the problem. Evaluating the fitness of the solutions takes O(N). The random walk strategy, employed to prevent the algorithm from entering local optima, has a computational complexity of O(2 × N × D × T). Hence, the proposed ERHHO algorithm has a total computational complexity of O(2 × N × D × T).

4. Experimental Results and Discussion

In this part, we validate the performance of ERHHO on 23 benchmark functions [47] by comparing it with several state-of-the-art metaheuristic algorithms (SMA, WOA, SSA, SCA, and HHO) and HHO-based optimization algorithms (DHHO/M and HHOCM). Meanwhile, we use the Wilcoxon signed-rank test to assess the differences between ERHHO and the comparative algorithms. Furthermore, we test ERHHO against the HHO-related algorithms HHO, DHHO/M, HHOCM, and CEHHO on the CEC2017 test functions to gauge its suitability for real-world applications. Five engineering design problems are also solved with the same algorithms as in the benchmark function tests.

4.1. Benchmark Functions and Parameter Settings

Benchmark functions are widely employed to evaluate metaheuristic algorithms. Details of the unimodal, multimodal, and fixed-dimension multimodal benchmark functions are shown in Table 1. The unimodal benchmark functions (F1–F7) have only one extreme point, which effectively tests the exploitation ability of ERHHO. The global search capacity of ERHHO is tested by the multimodal benchmark functions (F8–F23), which have many local optima.
Table 1

Benchmark function properties (Dim indicates dimension).

Function  Dim  Range          Fmin

Unimodal benchmark functions
F1        30   [−100, 100]    0
F2        30   [−10, 10]      0
F3        30   [−100, 100]    0
F4        30   [−100, 100]    0
F5        30   [−30, 30]      0
F6        30   [−100, 100]    0
F7        30   [−1.28, 1.28]  0

Multimodal benchmark functions
F8        30   [−500, 500]    −418.9829 × Dim
F9        30   [−5.12, 5.12]  0
F10       30   [−32, 32]      0
F11       30   [−600, 600]    0
F12       30   [−50, 50]      0
F13       30   [−50, 50]      0

Fixed-dimension multimodal benchmark functions
F14       2    [−65, 65]      0.998
F15       4    [−5, 5]        0.00030
F16       2    [−5, 5]        −1.0316
F17       2    [−5, 5]        0.398
F18       2    [−2, 2]        3
F19       3    [−1, 2]        −3.86
F20       6    [0, 1]         −3.32
F21       4    [0, 10]        −10.1532
F22       4    [0, 10]        −10.4028
F23       4    [0, 10]        −10.5363
To make the experimental findings more representative, ERHHO is compared with SMA [27], WOA [24], SSA [25], SCA [26], HHO [36], DHHO/M [43], and HHOCM [41]. Table 2 shows the parameter settings for each algorithm. All parameters are set according to the original articles except b and c (the analysis of b and c is given in Section 4.2). The maximum number of iterations (T) was set to 500, the population size (N) to 30, and the dimension (D) to 30 in all tests. We report average results and standard deviations over 30 independent runs and mark the best values in bold.
Table 2

Parameter settings for the comparative algorithms.

Algorithm      Parameters
ERHHO          β = 1.5, a = 0.7, b = 2, c = 6
SMA [27]       z = 0.03
WOA [24]       a1 = [2, 0], a2 = [−2, 1], b = 1
SSA [25]       c1 ∈ [0, 1], c2 ∈ [0, 1]
SCA [26]       a = 2
HHO [36]       β = 1.5
DHHO/M [43]    a = 2.5, F = 0.5
HHOCM [41]     β = 1.5

4.2. Sensitivity Analysis of b and c on ERHHO

The parameter settings of the comparative algorithms follow the original articles, and the values of β and a in ERHHO follow the original HHO algorithm. The new parameters are b and c, which can significantly impact the performance of ERHHO, so setting them appropriately is necessary. The purpose of b and c is to define a suitable search range. According to equation (17), the original range is between 0 and 1; to expand the scope, b is considered as 2, 4, and 6, and c is considered as 2, 4, and 6. To find the best-fitting b-c pair, we tested the nine combinations on the 23 benchmark functions, computing the mean result of each function over 30 independent runs; the results are listed in Table 3, with the lowest values highlighted in bold. Counting the number of best results for each combination, the ERHHO algorithm performs best with b = 2 and c = 6. These parameter values are used in the subsequent experiments.
Table 3

Parameters sensitivity analysis.

Function  b=2,c=2      b=2,c=4      b=2,c=6      b=4,c=2      b=4,c=4      b=4,c=6      b=6,c=2      b=6,c=4      b=6,c=6
F1   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
F2   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
F3   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
F4   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
F5   4.1086E−04   2.3293E−04   9.9757E−05   4.4925E−04   2.4795E−04   1.6512E−04   1.5639E−03   3.3876E−04   4.1522E−04
F6   9.1737E−05   3.8910E−05   1.4826E−05   8.5754E−05   5.2338E−05   4.6220E−05   1.7327E−04   1.1701E−04   3.8893E−05
F7   5.7338E−05   8.4463E−05   7.1416E−05   6.3548E−05   7.9739E−05   7.3938E−05   7.3475E−05   6.7715E−05   6.3104E−05
F8   −1.2569E+04  −1.2569E+04  −1.2569E+04  −1.2569E+04  −1.2569E+04  −1.2569E+04  −1.2569E+04  −1.2569E+04  −1.2569E+04
F9   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
F10  8.8818E−16   8.8818E−16   8.8818E−16   8.8818E−16   8.8818E−16   8.8818E−16   8.8818E−16   8.8818E−16   8.8818E−16
F11  0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
F12  1.9085E−06   6.3102E−07   2.9078E−07   4.8068E−06   1.8862E−06   1.8322E−06   6.5051E−06   2.0196E−06   2.2776E−06
F13  2.6364E−05   8.3301E−06   4.8525E−06   5.0282E−05   1.7997E−05   1.8399E−05   4.8097E−05   4.7190E−05   2.8779E−05
F14  9.9800E−01   9.9800E−01   9.9800E−01   9.9800E−01   1.1303E+00   1.0311E+00   9.9800E−01   1.3599E+00   9.9800E−01
F15  3.2234E−04   3.4356E−04   3.3924E−04   3.6062E−04   3.4211E−04   3.0770E−04   3.2145E−04   3.1080E−04   3.6977E−04
F16  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00
F17  3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01
F18  3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00
F19  −3.8628E+00  −3.8628E+00  −3.8628E+00  −3.8628E+00  −3.8628E+00  −3.8628E+00  −3.8628E+00  −3.8628E+00  −3.8628E+00
F20  −3.2268E+00  −3.2558E+00  −3.2689E+00  −3.1938E+00  −3.2562E+00  −3.2735E+00  −3.2015E+00  −3.2816E+00  −3.2741E+00
F21  −1.0152E+01  −1.0153E+01  −1.0153E+01  −1.0152E+01  −1.0153E+01  −1.0153E+01  −1.0153E+01  −1.0153E+01  −1.0153E+01
F22  −1.0402E+01  −1.0403E+01  −1.0403E+01  −1.0402E+01  −1.0403E+01  −1.0403E+01  −1.0402E+01  −1.0403E+01  −1.0403E+01
F23  −1.0536E+01  −1.0536E+01  −1.0536E+01  −1.0536E+01  −1.0536E+01  −1.0536E+01  −1.0535E+01  −1.0536E+01  −1.0536E+01
Sum  14           16           20           13           14           16           14           16           16

The best results are marked in bold.

4.3. Analysis of Benchmark Functions

4.3.1. Numerical Analysis

For the unimodal benchmark functions (F1–F7), ERHHO can obtain the best result except for F6 as shown in Table 4. Especially, for F1–F4, ERHHO can obtain the theoretical optimum. And the result of ERHHO ranks only second to SSA for F6. The aim of unimodal benchmark functions is to measure the exploitation ability. From the experimental tests of F1–F7, we can prove that the proposed algorithm possesses a strong local search ability.
Table 4

Results of algorithms on 23 benchmark functions.

Function     ERHHO        SMA          WOA          SSA          SCA          HHO          DHHO/M       HHOCM
F1   Mean    0.0000E+00   5.4841E−322  7.8586E−74   4.5572E−07   1.5650E+01   2.4246E−96   1.9672E−95   0.0000E+00
     Std     0.0000E+00   0.0000E+00   2.1680E−73   8.0901E−07   2.1910E+01   1.3093E−95   6.7432E−95   0.0000E+00

F2   Mean    0.0000E+00   1.9735E−150  4.7683E−51   2.4203E+00   2.4894E−02   2.9318E−51   1.3176E−48   1.2225E−203
     Std     0.0000E+00   1.0809E−149  2.0969E−50   1.7085E+00   3.5592E−02   1.1152E−50   6.0671E−48   0.0000E+00

F3   Mean    0.0000E+00   2.0792E−281  4.4988E+04   1.4407E+03   9.3775E+03   8.4362E−73   7.6735E−70   0.0000E+00
     Std     0.0000E+00   0.0000E+00   1.2599E+04   9.7094E+02   5.8048E+03   4.6060E−72   4.2027E−69   0.0000E+00

F4   Mean    0.0000E+00   2.1129E−137  3.8143E+01   1.1509E+01   3.2926E+01   5.4154E−49   3.9550E−43   4.5525E−197
     Std     0.0000E+00   1.1573E−136  2.7276E+01   3.7852E+00   1.3070E+01   1.4703E−48   2.1642E−42   0.0000E+00

F5   Mean    8.9883E−05   8.3666E+00   2.7925E+01   2.7609E+02   4.3940E+04   1.0142E−02   6.7040E−03   3.1438E−02
     Std     1.9452E−04   1.1685E+01   4.5946E−01   3.6619E+02   6.4024E+04   1.2554E−02   9.5798E−03   5.0191E−02

F6   Mean    1.6455E−05   5.2944E−03   4.4446E−01   2.0747E−07   1.9795E+01   1.5127E−04   7.3876E−05   3.1254E−04
     Std     2.5216E−05   4.1571E−03   2.8101E−01   4.6645E−07   1.8292E+01   1.7402E−04   1.0886E−04   3.8262E−04

F7   Mean    7.6595E−05   2.1094E−04   2.0569E−03   1.6531E−01   9.7607E−02   1.3707E−04   1.5754E−04   1.6184E−04
     Std     5.8914E−05   1.7724E−04   2.0834E−03   7.4193E−02   7.9845E−02   1.0626E−04   1.4448E−04   1.7591E−04

F8   Mean    −1.2569E+04  −1.2569E+04  −1.0181E+04  −7.4877E+03  −3.8109E+03  −1.2569E+04  −1.2469E+04  −1.2554E+04
     Std     5.0470E−02   3.5003E−01   1.6535E+03   6.3231E+02   3.4249E+02   8.9555E−01   5.4333E+02   5.4994E+01

F9   Mean    0.0000E+00   0.0000E+00   3.7896E−15   4.9483E+01   4.6372E+01   0.0000E+00   0.0000E+00   0.0000E+00
     Std     0.0000E+00   0.0000E+00   2.0756E−14   1.3277E+01   3.4053E+01   0.0000E+00   0.0000E+00   0.0000E+00

F10  Mean    8.8818E−16   8.8818E−16   5.3883E−15   2.7147E+00   1.2192E+01   8.8818E−16   8.8818E−16   8.8818E−16
     Std     0.0000E+00   0.0000E+00   2.0723E−15   9.8679E−01   9.4808E+00   0.0000E+00   0.0000E+00   0.0000E+00

F11  Mean    0.0000E+00   0.0000E+00   1.6696E−02   1.7711E−02   9.7236E−01   0.0000E+00   0.0000E+00   0.0000E+00
     Std     0.0000E+00   0.0000E+00   5.4084E−02   1.4171E−02   3.5476E−01   0.0000E+00   0.0000E+00   0.0000E+00

F12  Mean    3.1435E−07   4.9064E−03   2.1150E−02   6.6186E+00   2.4854E+04   1.3945E−05   8.5284E−06   1.5684E−05
     Std     5.0819E−07   5.8190E−03   1.3549E−02   2.8000E+00   8.6251E+04   2.0991E−05   1.0043E−05   2.2746E−05

F13  Mean    7.2242E−06   7.2277E−03   5.0105E−01   1.1788E+01   1.1416E+05   7.4801E−05   9.5150E−05   2.7612E−04
     Std     9.7389E−06   6.6267E−03   1.8495E−01   1.2856E+01   2.8677E+05   8.4403E−05   1.0966E−04   3.9597E−04

F14  Mean    9.9800E−01   9.9800E−01   2.7353E+00   1.1637E+00   1.7223E+00   1.7549E+00   1.2949E+00   1.1634E+00
     Std     7.4084E−11   5.6156E−13   2.8639E+00   3.7678E−01   1.8868E+00   1.6921E+00   9.3994E−01   5.2656E−01

F15  Mean    3.1210E−04   5.6703E−04   6.9856E−04   2.1817E−03   1.0610E−03   3.8687E−04   4.5821E−04   5.6125E−04
     Std     2.1452E−05   3.0981E−04   4.7771E−04   4.9506E−03   4.1995E−04   2.8444E−04   3.0067E−04   4.2913E−04

F16  Mean    −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00  −1.0316E+00
     Std     4.8085E−16   2.4615E−09   1.2442E−09   3.3423E−14   5.2063E−05   4.2516E−09   5.7528E−11   3.8734E−09

F17  Mean    3.9789E−01   3.9789E−01   3.9789E−01   3.9789E−01   3.9998E−01   3.9789E−01   3.9789E−01   3.9789E−01
     Std     1.1571E−15   2.0433E−07   8.4094E−06   1.8281E−14   2.1427E−03   7.6834E−06   3.9921E−06   1.6521E−06

F18  Mean    3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0001E+00   3.0000E+00   3.0000E+00   3.0000E+00
     Std     1.1106E−14   1.8331E−10   3.8495E−05   3.0811E−13   1.3607E−04   3.3143E−07   7.5272E−08   1.5483E−08

F19  Mean    −3.8628E+00  −3.8628E+00  −3.8532E+00  −3.8628E+00  −3.8543E+00  −3.8599E+00  −3.8608E+00  −3.8626E+00
     Std     8.0972E−15   3.3188E−07   1.1608E−02   7.2861E−11   2.5528E−03   4.2515E−03   3.0862E−03   5.1550E−04

F20  Mean    −3.2692E+00  −3.2621E+00  −3.2474E+00  −3.2178E+00  −2.9363E+00  −3.0985E+00  −3.1112E+00  −3.2615E+00
     Std     6.1618E−02   6.0867E−02   9.7561E−02   4.7860E−02   2.6267E−01   1.1844E−01   8.3931E−02   6.7841E−02

F21  Mean    −1.0153E+01  −1.0153E+01  −8.7154E+00  −7.0613E+00  −2.1979E+00  −5.0516E+00  −1.0032E+01  −5.0550E+00
     Std     3.0528E−05   3.2280E−04   2.2751E+00   3.4592E+00   1.7374E+00   3.1444E−03   1.2614E−01   2.6879E−04

F22  Mean    −1.0403E+01  −1.0403E+01  −8.8619E+00  −8.4885E+00  −2.7876E+00  −5.0047E+00  −1.0212E+01  −5.0876E+00
     Std     1.1586E−07   2.0872E−04   2.6262E+00   3.0306E+00   1.7662E+00   4.2878E−01   1.8693E−01   1.0644E−04

F23  Mean    −1.0536E+01  −1.0536E+01  −7.8012E+00  −8.6030E+00  −4.2432E+00  −5.4760E+00  −1.0404E+01  −5.1283E+00
     Std     4.5569E−07   3.6654E−04   3.0292E+00   3.0771E+00   1.7263E+00   1.3367E+00   1.5417E−01   1.7786E−04

The best results are marked in bold.

For the multimodal functions (F8–F23), ERHHO obtains the best mean values except for F9 and the best standard deviations except for F14 and F20. The goal of the multimodal functions is to evaluate the exploration capability, and the tests on F8–F23 verify that the global search ability of ERHHO is excellent. In conclusion, the ERHHO algorithm outperforms the other compared algorithms on the unimodal functions; owing to the random walk strategy, its capacity to jump out of local optima is effectively boosted. For the multimodal benchmark functions, the performance of the proposed ERHHO is competitive as well. The excellent performance of ERHHO in the exploration phase is mainly due to the exploration factor, which expands the scope of exploration and lets ERHHO find results faster and more accurately. We also experimented with adding only the exploration factor to the original HHO, and the results on some multimodal test functions improved; in particular, for F21–F23, the results improved remarkably and came close to the theoretical values (see Table 5).
Table 5

Results of HHO (hybrid exploration factor) on F21–F23.

Function      HHO           HHO + exploration factor
F21  Mean   −5.3410E+00   −1.0147E+01
     Std     1.1097E+00    7.5648E−03
F22  Mean   −5.2560E+00   −1.0394E+01
     Std     9.5700E−01    1.4686E−02
F23  Mean   −5.3039E+00   −1.0530E+01
     Std     9.8154E−01    6.1909E−03

The best results are marked in bold.

4.3.2. Convergence Analysis

Figure 6 depicts the convergence behavior of the proposed algorithm on various functions (F3, F5, F7–F10, F12–F15, and F19–F23). For the unimodal functions (F3, F5, and F7), the difference between the convergence curves of ERHHO and HHO is most visible on F5, where ERHHO reaches higher accuracy after several step-like descents. That is because, when the hawks get trapped, ERHHO activates the random walk strategy, which helps the hawks deviate from the trap and leads to a new position with more possibilities. These three functions show that ERHHO converges faster and more accurately than HHO and the other competitive algorithms on unimodal functions.
Figure 6

Convergence curves of 23 benchmark functions: (a) F3, (b) F5, (c) F7, (d) F8, (e) F9, (f) F10, (g) F12, (h) F13, (i) F14, (j) F15, (k) F19, (l) F20, (m) F21, (n) F22, and (o) F23.

The conclusion also holds for the multimodal functions. Many of the curves, such as those for F8–F10, F14–F15, and F19–F23, show that ERHHO reaches the optimal values directly after only a few iterations. This is because of the strong exploration ability of ERHHO, whose global search scope in the early period is twice as wide as before, so the algorithm can converge very quickly. On F12, ERHHO and HHO follow a similar trend before iteration 100, but after that the ERHHO curve dives suddenly and attains the best accuracy among all competitors. The reason is the coordination between the local strategy and the global search ability: the local strategy produces large deviations in the early stage and small deviations in the later period, and this cooperation helps ERHHO jump out of local optima and find better fitness values over a wide search scope. The same behaviour appears on F13, since F12 and F13 have similar landscapes (many close local optima). Whether for unimodal or multimodal functions, the proposed algorithm therefore provides a better convergence pattern on almost all functions.

4.4. Wilcoxon Signed-Rank Test

The Wilcoxon signed-rank (WSR) test is used to show that the results are statistically significant. The WSR test determines whether two sets of values differ significantly; a p value lower than 0.05 suggests that the method is significantly better than the compared algorithm. The Wilcoxon signed-rank test results for each benchmark function are shown in Table 6. As can be observed, ERHHO outperforms almost all of the other algorithms to varying degrees.
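The paired test described above can be reproduced with a short self-contained routine. The sketch below implements the two-sided Wilcoxon signed-rank test with the normal approximation (zero differences dropped, tied ranks averaged); the sample data are synthetic, standing in for 30 paired run results:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test p value (normal approximation)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    # rank the absolute differences, averaging tied ranks
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p value

# synthetic paired results: x is consistently worse (larger) than y
x = [1.0 + 0.01 * i for i in range(30)]
y = [0.5 + 0.01 * i for i in range(30)]
p = wilcoxon_signed_rank(x, y)  # well below 0.05: significant difference
```

With 30 runs per algorithm, a consistent one-sided improvement drives the p value far below 0.05, which is the criterion used in Table 6.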
Table 6

The results of the Wilcoxon signed-rank test.

ERHHO vs.
Function  SMA         WOA         SSA         SCA         HHO         DHHO/M      HHOCM
F1        NaN         6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  NaN
F2        6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07
F3        7.9725E−02  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  NaN
F4        6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07  6.8662E−07
F5        4.1432E−06  3.3918E−06  3.3918E−06  3.3918E−06  3.6093E−04  3.3568E−05  7.4772E−06
F6        3.3918E−06  3.3918E−06  3.3918E−06  3.3918E−06  5.3383E−01  1.5846E−01  7.9403E−03
F7        4.2111E−02  3.3918E−06  3.3918E−06  3.3918E−06  1.0574E−01  1.4397E−02  6.1898E−03
F8        1.9352E−05  3.3918E−06  3.3918E−06  3.3918E−06  9.6615E−05  1.1457E−04  5.4521E−03
F9        NaN         3.5065E−01  6.8662E−07  6.8662E−07  NaN         NaN         NaN
F10       NaN         2.1523E−05  6.8662E−07  6.8662E−07  NaN         NaN         NaN
F11       NaN         3.5065E−01  6.8662E−07  6.8662E−07  NaN         NaN         NaN
F12       3.3918E−06  3.3918E−06  3.3918E−06  3.3918E−06  7.0162E−03  4.2111E−02  2.8226E−03
F13       3.3918E−06  3.3918E−06  3.3918E−06  3.3918E−06  1.0992E−05  1.0500E−03  2.2289E−04
F14       5.2468E−03  3.0947E−05  2.7876E−01  3.0659E−06  6.3415E−05  5.3134E−05  3.5402E−03
F15       3.3918E−06  4.1432E−06  5.0527E−06  3.3918E−06  1.0992E−05  1.1457E−04  4.0200E−05
F16       1.2604E−06  1.2604E−06  1.2604E−06  1.2604E−06  1.2604E−06  1.5660E−06  1.2604E−06
F17       2.2038E−06  2.2038E−06  3.4080E−04  2.2038E−06  4.9642E−06  1.9074E−05  2.4387E−06
F18       5.0162E−06  3.3664E−06  1.0918E−05  3.3664E−06  3.3664E−06  3.9725E−05  3.3664E−06
F19       3.2092E−06  3.2092E−06  5.5823E−04  3.2092E−06  3.2092E−06  3.2092E−06  3.2092E−06
F20       2.4549E−01  5.6145E−01  1.0122E−02  3.3918E−06  4.7948E−03  2.4626E−03  8.6823E−01
F21       1.6053E−05  3.3918E−06  7.7155E−01  3.3918E−06  3.3918E−06  3.3918E−06  3.3918E−06
F22       7.4772E−06  3.3918E−06  1.2486E−01  3.3918E−06  3.3918E−06  3.3918E−06  3.3918E−06
F23       4.8063E−05  3.3918E−06  5.7371E−05  3.3918E−06  3.3918E−06  4.1432E−06  3.3918E−06
(W|L|T)   (17|2|4)    (20|3|0)    (20|3|0)    (23|0|0)    (18|2|3)    (19|1|3)    (17|1|5)

The best results are marked in bold.

4.5. The Performance of ERHHO on the CEC2017

To evaluate the algorithm further, we use the CEC2017 suite [48], one of the most challenging sets of test functions, which includes 30 functions of unimodal, multimodal, hybrid, and composition types (see Table 7). In this experiment, we set dim = 10 for all functions.
Table 7

Properties and summary of the CEC2017.

Type  No.  Function  Global min  Domain

Unimodal functions
F1   Shifted and rotated bent cigar function                 100    [−100, 100]
F2   Shifted and rotated sum of different power function     200    [−100, 100]
F3   Shifted and rotated Zakharov's function                 300    [−100, 100]

Multimodal functions
F4   Shifted and rotated Rosenbrock's function               400    [−100, 100]
F5   Shifted and rotated Rastrigin's function                500    [−100, 100]
F6   Shifted and rotated expanded Schaffer's function        600    [−100, 100]
F7   Shifted and rotated Lunacek bi-Rastrigin function       700    [−100, 100]
F8   Shifted and rotated noncontinuous Rastrigin's function  800    [−100, 100]
F9   Shifted and rotated Levy function                       900    [−100, 100]
F10  Shifted and rotated Schwefel's function                 1,000  [−100, 100]

Hybrid functions
F11  Hybrid function of Zakharov, Rosenbrock, and Rastrigin                                                         1,100  [−100, 100]
F12  Hybrid function of high conditioned elliptic, modified Schwefel, and bent cigar                                1,200  [−100, 100]
F13  Hybrid function of bent cigar, Rosenbrock, and Lunacek bi-Rastrigin                                            1,300  [−100, 100]
F14  Hybrid function of elliptic, Ackley, Schaffer, and Rastrigin                                                   1,400  [−100, 100]
F15  Hybrid function of bent cigar, HGBat, Rastrigin, and Rosenbrock                                                1,500  [−100, 100]
F16  Hybrid function of expanded Schaffer, HGBat, Rosenbrock, and modified Schwefel                                 1,600  [−100, 100]
F17  Hybrid function of Katsuura, Ackley, expanded Griewank plus Rosenbrock, modified Schwefel, and Rastrigin       1,700  [−100, 100]
F18  Hybrid function of high conditioned elliptic, Ackley, Rastrigin, HGBat, and Discus                             1,800  [−100, 100]
F19  Hybrid function of bent cigar, Rastrigin, expanded Griewank plus Rosenbrock, Weierstrass, and expanded Schaffer  1,900  [−100, 100]
F20  Hybrid function of HappyCat, Katsuura, Ackley, Rastrigin, modified Schwefel, and Schaffer                      2,000  [−100, 100]

Composition functions
F21  Composition function of Rosenbrock, high conditioned elliptic, and Rastrigin                                   2,100  [−100, 100]
F22  Composition function of Rastrigin, Griewank, and modified Schwefel                                             2,200  [−100, 100]
F23  Composition function of Rosenbrock, Ackley, modified Schwefel, and Rastrigin                                   2,300  [−100, 100]
F24  Composition function of Ackley, high conditioned elliptic, Griewank, and Rastrigin                             2,400  [−100, 100]
F25  Composition function of Rastrigin, HappyCat, Ackley, Discus, and Rosenbrock                                    2,500  [−100, 100]
F26  Composition function of expanded Schaffer, modified Schwefel, Griewank, Rosenbrock, and Rastrigin              2,600  [−100, 100]
F27  Composition function of HGBat, Rastrigin, modified Schwefel, bent cigar, high conditioned elliptic, and expanded Schaffer  2,700  [−100, 100]
F28  Composition function of Ackley, Griewank, Discus, Rosenbrock, HappyCat, and expanded Schaffer                  2,800  [−100, 100]
F29  Composition function of shifted and rotated Rastrigin, expanded Schaffer, and Lunacek bi-Rastrigin             2,900  [−100, 100]
F30  Composition function of shifted and rotated Rastrigin, noncontinuous Rastrigin, and Levy function              3,000  [−100, 100]
The HHO algorithm focuses on simple, classical, high-dimensional problems and performs poorly on complicated suites such as CEC2017, so it is more rational to compare the proposed algorithm with HHO and HHO-based optimizers. We tested ERHHO on the CEC2017 test functions (excluding F2, which is unstable) against HHO, DHHO/M, HHOCM, and CEHHO [42]. Each function was run 30 times with 1,000 iterations, and the dimension of every function was set to 10. The results, summarized with Friedman's mean rank, are listed in Table 8, where the best values are given in bold. According to Table 8, the proposed algorithm performed best on F1 and ranked second on F3; for the unimodal functions, the performance of ERHHO is therefore similar to HHOCM and better than the other algorithms. For the multimodal functions, ERHHO has a slight advantage over HHOCM and CEHHO and outperforms the others, while for the hybrid and composition functions ERHHO shows a large lead over all the other algorithms. We also evaluate the algorithms grouped by type, unimodal, multimodal, hybrid, and composition (see Figure 7). From Figure 7, ERHHO obtained the same Friedman rank value as HHOCM for the unimodal and multimodal types but clearly outperformed the other algorithms, and for the hybrid and composition types ERHHO achieved the best ranking among all competitors.
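Friedman's mean rank, used to summarize Table 8, simply ranks the algorithms on each function (1 = best mean fitness) and averages those ranks over all functions. A minimal self-contained sketch with synthetic data (the tie handling via averaged ranks is an assumption, since the excerpt does not state how ties were broken):

```python
def friedman_mean_ranks(means):
    """means[f][a] = mean fitness of algorithm a on function f (lower is better).
    Returns the Friedman mean rank of each algorithm, averaging tied ranks."""
    n_alg = len(means[0])
    totals = [0.0] * n_alg
    for row in means:
        ordered = sorted(range(n_alg), key=lambda a: row[a])
        for pos, a in enumerate(ordered):
            # average 1-based rank over the tie group containing this value
            tied = [p for p, b in enumerate(ordered) if row[b] == row[a]]
            totals[a] += sum(t + 1 for t in tied) / len(tied)
    return [t / len(means) for t in totals]

# toy table: algorithm 0 has the lowest mean on every function
ranks = friedman_mean_ranks([[1.0, 2.0, 3.0],
                             [1.5, 3.5, 2.5],
                             [0.2, 0.4, 0.3]])
```

With this convention, an algorithm that is best on every function gets a mean rank of exactly 1, as ERHHO nearly does for the hybrid functions in Table 8.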
Table 8

The results of CEC2017 test functions.

No.  Metric  ERHHO       HHO         DHHO/M      HHOCM       CEHHO

Unimodal function
F1   Mean  1.2052E+04  3.6019E+05  4.2939E+05  2.0170E+04  1.0067E+06
     Std   2.2216E+04  1.7863E+05  3.0607E+05  2.4823E+04  1.0026E+06
     Rank  1           3           4           2           5
F3   Mean  3.0063E+02  3.0180E+02  3.0231E+02  3.0019E+02  3.1873E+02
     Std   4.6904E−01  1.0312E+00  2.0173E+00  9.7727E−02  1.6569E+01
     Rank  2           3           4           1           5

Multimodal functions
F4   Mean  4.1270E+02  4.1578E+02  4.1392E+02  4.0689E+02  4.1280E+02
     Std   2.1283E+01  2.5703E+01  2.1889E+01  1.1731E+01  1.9551E+01
     Rank  2           5           4           1           3
F5   Mean  5.4272E+02  5.4457E+02  5.4797E+02  5.4413E+02  5.4567E+02
     Std   1.7715E+01  1.2083E+01  1.3398E+01  1.5229E+01  1.8236E+01
     Rank  1           3           5           2           4
F6   Mean  6.2997E+02  6.3151E+02  6.3265E+02  6.3046E+02  6.3753E+02
     Std   9.6724E+00  1.2635E+01  1.1154E+01  1.0670E+01  1.2120E+01
     Rank  1           3           4           2           5
F7   Mean  7.8617E+02  7.8076E+02  7.7761E+02  7.7481E+02  7.8152E+02
     Std   1.8646E+01  1.6252E+01  1.7448E+01  2.0940E+01  2.2054E+01
     Rank  5           3           2           1           4
F8   Mean  8.3287E+02  8.2779E+02  8.2843E+02  8.2989E+02  8.2428E+02
     Std   6.4227E+00  8.0354E+00  7.6805E+00  8.9310E+00  7.6576E+00
     Rank  5           2           3           4           1
F9   Mean  1.3612E+03  1.4738E+03  1.3986E+03  1.4321E+03  1.3168E+03
     Std   3.0596E+02  2.0219E+02  2.0034E+02  2.5386E+02  1.9220E+02
     Rank  2           5           3           4           1
F10  Mean  1.9272E+03  1.9801E+03  2.0096E+03  2.0094E+03  2.0899E+03
     Std   2.6782E+02  3.0179E+02  2.8965E+02  2.9359E+02  3.1406E+02
     Rank  1           2           4           3           5

Hybrid functions
F11  Mean  1.1567E+03  1.1797E+03  1.1939E+03  1.1647E+03  1.1809E+03
     Std   4.3639E+01  8.0529E+01  9.0864E+01  5.1621E+01  6.2806E+01
     Rank  1           3           5           2           4
F12  Mean  2.5268E+04  2.3910E+06  2.6341E+06  2.5694E+06  2.8299E+06
     Std   3.7776E+04  2.0613E+06  2.9949E+06  2.8868E+06  3.1678E+06
     Rank  1           2           4           3           5
F13  Mean  2.6880E+03  1.7767E+04  2.0431E+04  1.2350E+04  1.4179E+04
     Std   2.2783E+03  1.1560E+04  1.1932E+04  9.9645E+03  8.7188E+03
     Rank  1           4           5           2           3
F14  Mean  1.4865E+03  1.5791E+03  1.5321E+03  1.5857E+03  1.5634E+03
     Std   2.4693E+01  2.0948E+02  3.4605E+01  1.9928E+02  8.5603E+01
     Rank  1           4           2           5           3
F15  Mean  1.5942E+03  4.0799E+03  4.2891E+03  4.6614E+03  6.3234E+03
     Std   5.7397E+01  1.7066E+03  2.0041E+03  1.7606E+03  2.8184E+03
     Rank  1           2           3           4           5
F16  Mean  1.9278E+03  1.8955E+03  1.8830E+03  1.9312E+03  1.8949E+03
     Std   1.2443E+02  1.5078E+02  1.3381E+02  1.5608E+02  1.4804E+02
     Rank  4           3           1           5           2
F17  Mean  1.7674E+03  1.7769E+03  1.7841E+03  1.7664E+03  1.7967E+03
     Std   2.4383E+01  2.8229E+01  6.0168E+01  2.1723E+01  3.8642E+01
     Rank  2           3           4           1           5
F18  Mean  3.8251E+03  1.3844E+04  1.6139E+04  1.7430E+04  1.5003E+04
     Std   3.7128E+03  1.1177E+04  1.1613E+04  1.0382E+04  1.2422E+04
     Rank  1           2           4           5           3
F19  Mean  4.7207E+03  1.1593E+04  1.2784E+04  1.2917E+04  1.5813E+04
     Std   5.0189E+03  1.1274E+04  1.1131E+04  1.1182E+04  1.3094E+04
     Rank  1           2           3           4           5
F20  Mean  2.1509E+03  2.1761E+03  2.1553E+03  2.1520E+03  2.1662E+03
     Std   6.1600E+01  6.3993E+01  6.5874E+01  7.4938E+01  6.8566E+01
     Rank  1           5           3           2           4

Composition functions
F21  Mean  2.3271E+03  2.3298E+03  2.3117E+03  2.3291E+03  2.3016E+03
     Std   5.1674E+01  5.1905E+01  6.1347E+01  6.0511E+01  6.3126E+01
     Rank  3           5           2           4           1
F22  Mean  2.3123E+03  2.3776E+03  2.3159E+03  2.3580E+03  2.3759E+03
     Std   6.4041E+00  3.4445E+02  6.1462E+00  2.7117E+02  2.4397E+02
     Rank  1           5           2           3           4
F23  Mean  2.6610E+03  2.6651E+03  2.6565E+03  2.6833E+03  2.6664E+03
     Std   2.3793E+01  2.7693E+01  2.6149E+01  2.5521E+01  3.0349E+01
     Rank  2           3           1           5           4
F24  Mean  2.7871E+03  2.7876E+03  2.7977E+03  2.8124E+03  2.7837E+03
     Std   9.6226E+01  1.0482E+02  7.3836E+01  9.2546E+01  8.1668E+01
     Rank  2           3           4           5           1
F25  Mean  2.9141E+03  2.9389E+03  2.9331E+03  2.9168E+03  2.9448E+03
     Std   6.4027E+01  3.7666E+01  2.4771E+01  8.8852E+01  3.4255E+01
     Rank  1           4           3           2           5
F26  Mean  3.4378E+03  3.3932E+03  3.4342E+03  3.4730E+03  3.7968E+03
     Std   6.6555E+02  5.4030E+02  5.9523E+02  4.9775E+02  5.8494E+02
     Rank  3           1           2           4           5
F27  Mean  3.1341E+03  3.1412E+03  3.1451E+03  3.1499E+03  3.1616E+03
     Std   3.8294E+01  4.5307E+01  4.3369E+01  3.2091E+01  4.6269E+01
     Rank  1           2           3           4           5
F28  Mean  3.3287E+03  3.4277E+03  3.3626E+03  3.3302E+03  3.3703E+03
     Std   1.7005E+02  1.7236E+02  1.6371E+02  1.0408E+02  1.6007E+02
     Rank  1           5           3           2           4
F29  Mean  3.3221E+03  3.3466E+03  3.3369E+03  3.3284E+03  3.3642E+03
     Std   7.8055E+01  1.0081E+02  9.4273E+01  8.6562E+01  1.0507E+02
     Rank  1           4           3           2           5
F30  Mean  6.4195E+05  8.8875E+05  1.2359E+06  7.2195E+05  2.2614E+06
     Std   7.2364E+05  1.0667E+06  1.3062E+06  1.1924E+06  5.0252E+06
     Rank  1           3           4           2           5

Friedman mean rank  1.7241  3.2414  3.2414  2.9655  3.8276
Rank                1       3       3       2       5

The best results are marked in bold.

Figure 7

Friedman mean ranking for each type of CEC2017 test functions.

4.6. Experiments on Engineering Design Problems

We analyze the performance of ERHHO on five classic engineering design problems in this section. All experiments use a population size (N) of 30 and a maximum iteration count (T) of 500. The proposed algorithm is compared with SMA, WOA, SSA, SCA, HHO, DHHO/M, and HHOCM, with parameter settings identical to those listed in Table 2.
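The excerpt does not state how the constraints of these design problems are enforced inside the metaheuristics. A common choice in such studies is a static penalty that adds a large cost proportional to the total constraint violation; the sketch below illustrates that idea (the penalty coefficient `rho` is an illustrative assumption, not a value from the paper):

```python
def penalized(objective, constraints, x, rho=1e6):
    """Static-penalty fitness: objective plus rho * total violation of g(x) <= 0.

    rho is an illustrative coefficient; papers in this area rarely report it.
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + rho * violation

# toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x[0] ** 2
g = lambda x: 1.0 - x[0]
feasible = penalized(f, [g], [1.5])    # no violation, plain objective value
infeasible = penalized(f, [g], [0.5])  # objective plus a large penalty
```

Any feasible candidate then dominates every infeasible one, so an unconstrained optimizer such as ERHHO can be applied to the penalized fitness directly.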

4.6.1. Tension/Compression Spring Design Problem

This challenge [49] aims to find the optimal wire diameter (d), mean coil diameter (D), and number of active coils (N) that minimize the weight of a tension/compression spring under four constraints. The tension/compression spring structure is shown in Figure 8. The problem is formulated as follows:
Figure 8

Tension spring design problem [50].

Consider x = [x1, x2, x3] = [d, D, N].
Minimize f(x) = (x3 + 2) x2 x1^2.
Subject to
  g1(x) = 1 − (x2^3 x3)/(71785 x1^4) ≤ 0,
  g2(x) = (4 x2^2 − x1 x2)/(12566 (x2^3 x1 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0,
  g3(x) = 1 − (140.45 x1)/(x2^2 x3) ≤ 0,
  g4(x) = (x1 + x2)/1.5 − 1 ≤ 0.
Variable range: 0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.0.

Table 9 shows the results of ERHHO and the compared algorithms on this problem; ERHHO obtained the best performance among all competitors.
Table 9

Results for tension/compression spring design problem.

Algorithm  d         D         N        Optimum weight
ERHHO      0.054919  0.50031   5.2144   0.010886
SMA        0.056026  0.532     4.6974   0.011184
WOA        0.056172  0.53628   4.6338   0.011225
SSA        0.05      0.340474  11.3674  0.011378
SCA        0.051349  0.39299   8.7762   0.011166
HHO        0.057482  0.57567   4.1079   0.011618
DHHO/M     0.05624   0.53828   4.6045   0.011244
HHOCM      0.055303  0.51118   5.0275   0.010987

The best results are marked in bold.
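As a quick sanity check, the reported ERHHO solution can be substituted into the standard weight formula for this benchmark, f = (N + 2)·D·d² (this formula is the standard one from the literature on the problem, not restated in this excerpt):

```python
def spring_weight(d, D, N):
    """Weight of the tension/compression spring: (N + 2) * D * d^2
    (standard benchmark formulation, assumed here)."""
    return (N + 2) * D * d * d

# values reported for ERHHO in Table 9
w = spring_weight(d=0.054919, D=0.50031, N=5.2144)
```

The computed weight reproduces the tabulated 0.010886 to the printed precision, which confirms the table entries are internally consistent.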

4.6.2. Pressure Vessel Design Problem

The primary goal of this problem is to find the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section (L) that minimize the cost of a cylindrical pressure vessel [51] while meeting the pressure requirements. The structure of the pressure vessel is shown in Figure 9. The mathematical model is as follows:
Figure 9

Pressure vessel design problem [50].

Consider x = [x1, x2, x3, x4] = [Ts, Th, R, L].
Minimize f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3.
Subject to
  g1(x) = −x1 + 0.0193 x3 ≤ 0,
  g2(x) = −x2 + 0.00954 x3 ≤ 0,
  g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1296000 ≤ 0,
  g4(x) = x4 − 240 ≤ 0.
Variable range: 0 ≤ x1, x2 ≤ 99, 10 ≤ x3, x4 ≤ 200.

We can see from Table 10 that ERHHO found the solution with the lowest cost compared to the other competitor algorithms.
Table 10

Results for pressure vessel design problem.

Algorithm  Ts         Th         R         L         Optimum cost
ERHHO      0.8128337  0.414164   44.19005  152.3373  5,907.41
SMA        0.8498743  0.4168857  45.62804  137.3114  5,953.6101
WOA        0.7379761  0.5032129  40.31962  200       6,100.539
SSA        0.8724656  0.4266437  46.7582   126.3416  6,004.6857
SCA        0.6869396  0.3790506  40.36443  200       6,078.1605
HHO        0.8971188  0.4377937  47.79634  116.8499  6,055.0952
DHHO       0.8516057  0.3951695  44.15988  152.6637  5,997.743
HHOCM      0.850937   0.4147511  45.41652  139.4434  5,947.2608

The best results are marked in bold.

4.6.3. The Three-Bar Truss Design Problem

The three-bar truss design problem arises from civil engineering [52, 53]. The aim is to minimize the weight of the truss subject to stress, deflection, and buckling constraints. Two parameters, A1 and A2, are involved in this design problem, and the optimization algorithm must find the best values of A1 and A2 to achieve this goal. The design is shown in Figure 10. The model of the problem is as follows:
Figure 10

Three-bar truss design problem [50].

Consider x = [x1, x2] = [A1, A2].
Minimize f(x) = (2√2 x1 + x2) · l.
Subject to
  g1(x) = ((√2 x1 + x2)/(√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
  g2(x) = (x2/(√2 x1^2 + 2 x1 x2)) P − σ ≤ 0,
  g3(x) = (1/(x1 + √2 x2)) P − σ ≤ 0,
where l = 100 cm, P = 2 kN/cm^2, and σ = 2 kN/cm^2.
Variable range: 0 ≤ x1, x2 ≤ 1.

The results for the three-bar truss design problem are listed in Table 11. As we can see, the ERHHO algorithm achieves the best performance, matching the SSA algorithm, in solving this problem.
Table 11

Results for the three-bar truss design problem.

Algorithm  x1       x2       Optimum cost
ERHHO      0.78842  0.40811  263.8523
SMA        0.79893  0.37727  264.1663
WOA        0.80697  0.35801  264.0902
SSA        0.7884   0.40816  263.8523
SCA        0.80167  0.37081  264.052
HHO        0.79595  0.38721  263.893
DHHO       0.78073  0.43031  263.897
HHOCM      0.79599  0.38709  263.8935

The best results are marked in bold.

4.6.4. Cantilever Beam Design

Cantilever beam design is a problem in which the parameters of hollow square cross-sections (x1–x5) are optimized [54] to minimize the cantilever beam's weight. The architecture of this problem is depicted in Figure 11. The mathematical model is as follows:
Figure 11

Cantilever beam design [55].

Minimize f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5).
Subject to
  g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 − 1 ≤ 0.
Variable range: 0.01 ≤ xi ≤ 100, i = 1, …, 5.

Table 12 summarizes the findings, from which we can see that ERHHO found the optimal solution with the minimum total weight.
Table 12

Results for the cantilever beam design problem.

Algorithm  x1      x2      x3      x4      x5      Optimum weight
ERHHO      6.0509  5.2639  4.514   3.4605  2.1878  1.3402
SMA        5.984   5.3074  4.5136  3.4321  2.248   1.3407
WOA        6.5083  6.1653  4.679   3.1335  1.6882  1.3837
SSA        5.9772  5.3966  4.4619  3.4675  2.1753  1.3403
SCA        5.6445  5.9607  5.0894  3.0544  2.2917  1.3753
HHO        6.0578  5.6231  4.2325  3.4302  2.2035  1.3445
DHHO/M     5.6908  5.647   4.7009  3.3578  2.1859  1.3467
HHOCM      6.0932  5.2465  4.633   3.3748  2.1476  1.3413

The best results are marked in bold.
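The reported ERHHO solution can be checked against the standard cantilever model from the literature, in which the weight is 0.0624·(x1 + … + x5) and a single stiffness constraint 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ ≤ 1 must hold (this formulation is assumed here, not restated in the excerpt):

```python
def cantilever_weight(x):
    """Beam weight under the standard formulation: 0.0624 * sum(x_i)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Left-hand side of the stiffness constraint; feasible when <= 1."""
    coeffs = [61, 37, 19, 7, 1]
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x))

# values reported for ERHHO in Table 12
x = [6.0509, 5.2639, 4.514, 3.4605, 2.1878]
w = cantilever_weight(x)
g = cantilever_constraint(x)
```

The computed weight matches the tabulated 1.3402, and the constraint value sits essentially on the boundary (≈ 1), as expected for the optimum of an actively constrained problem.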

4.6.5. Speed Reducer Problem

The speed reducer problem [55] aims to minimize the reducer's weight by optimizing seven variables (x1–x7). The structure of this problem is depicted in Figure 12. The formulation and constraints are as follows:
Figure 12

Speed reducer problem [55].

Minimize
f(x) = 0.7854 x1 x2^2 (3.3333 x3^2 + 14.9334 x3 − 43.0934) − 1.508 x1 (x6^2 + x7^2) + 7.4777 (x6^3 + x7^3) + 0.7854 (x4 x6^2 + x5 x7^2).
Subject to
  g1(x) = 27/(x1 x2^2 x3) − 1 ≤ 0,
  g2(x) = 397.5/(x1 x2^2 x3^2) − 1 ≤ 0,
  g3(x) = 1.93 x4^3/(x2 x3 x6^4) − 1 ≤ 0,
  g4(x) = 1.93 x5^3/(x2 x3 x7^4) − 1 ≤ 0,
  g5(x) = sqrt((745 x4/(x2 x3))^2 + 16.9×10^6)/(110 x6^3) − 1 ≤ 0,
  g6(x) = sqrt((745 x5/(x2 x3))^2 + 157.5×10^6)/(85 x7^3) − 1 ≤ 0,
  g7(x) = x2 x3/40 − 1 ≤ 0,
  g8(x) = 5 x2/x1 − 1 ≤ 0,
  g9(x) = x1/(12 x2) − 1 ≤ 0,
  g10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0,
  g11(x) = (1.1 x7 + 1.9)/x5 − 1 ≤ 0.
Variable range: 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.

Table 13 shows the test results, which demonstrate the proposed algorithm's effectiveness in finding the optimal values for this problem.
Table 13

Results for the speed reducer design problem.

Algorithm  x1       x2   x3  x4       x5       x6       x7       Optimum weight
ERHHO      3.4976   0.7  17  7.3      7.8      3.35006  5.28553  2,995.4374
SMA        3.49767  0.7  17  7.3      7.8      3.35007  5.28554  2,995.4379
WOA        3.49441  0.7  17  8.10001  7.97957  3.35128  5.32567  3,033.2286
SSA        3.49762  0.7  17  7.37353  8.14154  3.35019  5.28555  3,003.6298
SCA        3.6      0.7  17  7.6079   7.98083  3.39127  5.30003  3,061.4356
HHO        3.52699  0.7  17  7.3      7.80055  3.3504   5.28416  3,007.2009
DHHO       3.49737  0.7  17  8.04541  7.8      3.35194  5.28554  3,002.6565
HHOCM      3.49777  0.7  17  7.48788  7.99086  3.35156  5.28524  3,001.7286

The best results are marked in bold.

5. Conclusion

In this paper, an improved HHO algorithm named ERHHO is proposed to overcome the shortcomings of the basic HHO algorithm. ERHHO starts by initializing the population positions, where a tent chaotic map is introduced to improve diversity. Three phases are then performed: exploration, the transition from exploration to exploitation, and exploitation. In the exploration phase, we proposed an exploration factor in the position-update formula to expand the global search range, and in the exploitation phase we proposed a random walk strategy, applied after the four hunting strategies, to effectively improve the ability to find more accurate results. Twenty-three standard benchmark functions evaluate the proposed algorithm's exploration and exploitation abilities; the results verify that ERHHO yields very effective outcomes and obtains almost the best results among the compared algorithms. The Wilcoxon signed-rank test then demonstrates the significant differences between ERHHO and the other competing algorithms, and the algorithm's superiority is further confirmed on the CEC2017 test functions and five engineering design problems drawn from real applications. Furthermore, some real-world challenges, such as feature selection, multithreshold image segmentation, and convolutional neural network tuning, can be addressed with ERHHO. In future work, we will investigate whether this hybrid method can improve the performance of other optimization algorithms.
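The tent-chaotic-map initialization mentioned above can be sketched as follows. The slope parameter `a = 0.7` and the seed `x0 = 0.37` are illustrative assumptions, since the excerpt does not give the paper's exact tent-map formula:

```python
def tent_map_population(n, dim, lb, ub, a=0.7, x0=0.37):
    """Generate n agents in [lb, ub]^dim from a tent chaotic sequence.

    Tent map: x <- x/a if x < a else (1 - x)/(1 - a), which keeps x in [0, 1].
    a and x0 are illustrative choices, not values stated in this excerpt.
    """
    pop, x = [], x0
    for _ in range(n):
        agent = []
        for _ in range(dim):
            x = x / a if x < a else (1 - x) / (1 - a)
            agent.append(lb + x * (ub - lb))  # map chaotic value onto [lb, ub]
        pop.append(agent)
    return pop

pop = tent_map_population(n=30, dim=5, lb=-100.0, ub=100.0)
```

Compared with uniform random sampling, the chaotic sequence is deterministic yet non-repeating, which is the property the paper exploits to spread the initial population more evenly.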
  7 in total

1.  Coronavirus Optimization Algorithm: A Bioinspired Metaheuristic Based on the COVID-19 Propagation Model.

Authors:  F Martínez-Álvarez; G Asencio-Cortés; J F Torres; D Gutiérrez-Avilés; L Melgar-García; R Pérez-Chacón; C Rubio-Escudero; J C Riquelme; A Troncoso
Journal:  Big Data       Date:  2020-07-22       Impact factor: 2.128

2.  An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problems.

Authors:  Rong Zheng; Heming Jia; Laith Abualigah; Shuang Wang; Di Wu
Journal:  Math Biosci Eng       Date:  2022-02-14       Impact factor: 2.080

3.  An improved hybrid Aquila Optimizer and Harris Hawks Optimization for global optimization.

Authors:  Shuang Wang; Heming Jia; Qingxin Liu; Rong Zheng
Journal:  Math Biosci Eng       Date:  2021-08-24       Impact factor: 2.080

4.  An Improved Teaching-Learning-Based Optimization Algorithm with Reinforcement Learning Strategy for Solving Optimization Problems.

Authors:  Di Wu; Shuang Wang; Qingxin Liu; Laith Abualigah; Heming Jia
Journal:  Comput Intell Neurosci       Date:  2022-03-24

5.  A comprehensive survey of sine cosine algorithm: variants and applications.

Authors:  Asma Benmessaoud Gabis; Yassine Meraihi; Seyedali Mirjalili; Amar Ramdane-Cherif
Journal:  Artif Intell Rev       Date:  2021-06-02       Impact factor: 8.139

6.  An effective solution to numerical and multi-disciplinary design optimization problems using chaotic slime mold algorithm.

Authors:  Dinesh Dhawale; Vikram Kumar Kamboj; Priyanka Anand
Journal:  Eng Comput       Date:  2021-05-30       Impact factor: 7.963

