Literature DB >> 35909648

A velocity-guided Harris hawks optimizer for function optimization and fault diagnosis of wind turbine.

Wen Long1,2, Jianjun Jiao3, Ximing Liang4, Ming Xu3, Tiebin Wu5, Mingzhu Tang6, Shaohong Cai1.   

Abstract

Harris hawks optimizer (HHO) is a relatively novel meta-heuristic approach that mimics the cooperative behavior of Harris hawks when preying on rabbits. Its simplicity and ease of implementation have attracted extensive attention from researchers. However, because its ability to balance exploration and exploitation is weak, HHO suffers from low precision and premature convergence. To tackle these disadvantages, an improved HHO called VGHHO is proposed by embedding three modifications. First, a modified position search equation for the exploitation phase is designed by introducing a velocity operator and an inertia weight to guide the search process. Second, a nonlinear escaping energy parameter E based on a cosine function is presented to achieve a good transition from the exploration phase to the exploitation phase. Third, a refraction-opposition-based learning mechanism is introduced to generate promising solutions and help the swarm escape from local optima. The performance of VGHHO is evaluated on 18 classic benchmarks, 30 benchmark tests from CEC2017, 21 benchmark feature selection problems, a fault diagnosis problem of a wind turbine, and a PV model parameter estimation problem. The simulation results indicate that VGHHO achieves higher solution quality and faster convergence than the basic HHO and several well-known algorithms from the literature on most of the benchmark and real-world problems.
© The Author(s), under exclusive licence to Springer Nature B.V. 2022.

Keywords:  Fault diagnosis; Function optimization; Harris hawks optimizer; Refraction-opposition learning; Wind turbine

Year:  2022        PMID: 35909648      PMCID: PMC9309607          DOI: 10.1007/s10462-022-10233-1

Source DB:  PubMed          Journal:  Artif Intell Rev        ISSN: 0269-2821            Impact factor:   9.588


Introduction

Optimization can be defined as the process of choosing the best scheme from an available group of alternatives (Gupta et al. 2020a; Long et al. 2020a; Dhiman 2021; Kumar and Dhiman 2021). By constructing an appropriate fitness function, many real-world applications in scientific research, management, and engineering can be formulated as function optimization problems (Long et al. 2020b, 2021a; Houssein et al. 2021c; Zhang et al. 2021). Many traditional gradient-based optimization methods have been developed to find solutions for optimization problems. However, in real-world applications, there is no guarantee that the fitness function is differentiable (Chatterjee 2021; Hassan et al. 2021; Vaishnav et al. 2021). Meta-heuristic optimization algorithms have advantages such as gradient-free operation, easy implementation, and the ability to escape from local optima, and they have been applied successfully to tackle this type of problem (Long et al. 2018a; Houssein et al. 2021a). Some popular or recently proposed meta-heuristic optimization algorithms are particle swarm optimizer (PSO) (Kennedy and Eberhart 1995), differential evolution (DE) (Storn and Price 1997), polar bear optimization (PBO) (Polap and Wozniak 2017), cuckoo search (CS) (Gandomi et al. 2013), grey wolf optimizer (GWO) (Long et al. 2018b), whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), spotted hyena optimizer (SHO) (Dhiman and Kumar 2017), sine cosine algorithm (SCA) (Mirjalili 2016), emperor penguin optimizer (EPO) (Dhiman and Kumar 2018), Harris hawks optimizer (HHO) (Heidari et al. 2019), seagull optimization algorithm (SOA) (Dhiman and Kumar 2019a), henry gas solubility optimization (HGSO) (Hashim et al. 2019), butterfly optimization algorithm (BOA) (Arora and Singh 2019), orientation search algorithm (OSA) (Dehghani et al. 2019), sooty tern optimization algorithm (STOA) (Dhiman and Kumar 2019b), marine predators algorithm (MPA) (Faramarzi et al. 
2020), spring search algorithm (SSA) (Dehghani et al. 2020a), bald eagle search (BES) (Alsattar et al. 2020), darts game optimizer (DGO) (Dehghani et al. 2020b), Lévy flight distribution (LFD) (Houssein et al. 2020b), mayfly optimization algorithm (MOA) (Zervoudakis and Tsafarakis 2020), tunicate swarm algorithm (TSA) (Kaur et al. 2020), red fox optimization (RFO) (Polap and Wozniak 2021), slime mould algorithm (SMA) (Houssein et al. 2021b), chaos game optimization (CGO) (Talatahari and Azizi 2021), rat swarm optimizer (RSO) (Dhiman et al. 2021a), archimedes optimization algorithm (AOA) (Hashim et al. 2021), honey badger algorithm (HBA) (Hashim et al. 2022), and many others. In this paper, we focus on the Harris hawks optimizer (HHO), which was first proposed by Heidari et al. (2019). HHO mimics the foraging behavior of hawks in nature. As a novel meta-heuristic algorithm, HHO can be easily implemented and has strong exploitation ability. Studies indicate that HHO shows excellent performance on benchmark optimization problems. Therefore, HHO has been widely utilized for dealing with real-world problems with satisfactory results (Alabool et al. 2021). Examples include image thresholding (Elaziz et al. 2020; Wunnava et al. 2020), photovoltaic model parameter extraction (Qais et al. 2020; Ridha et al. 2020), image segmentation (Rodríguez-Esparza et al. 2020), drug design and discovery (Houssein et al. 2020a), feature selection (Abdel-Basset et al. 2021), solar still productivity prediction (Essa et al. 2020), air pollution prediction (Du et al. 2020), PV array reconfiguration optimization (Yousri et al. 2020), data clustering (Singh 2020), neural network training (Ramalingam and Bakaran 2021), COVID-19 detection (Balaha et al. 2021), project scheduling and QoS-aware (Li et al. 2021), brain MRI segmentation (Bandyopadhyay et al. 2021b), slope stability prediction (Moayedi et al. 2021), breast cancer detection (Kaur et al. 
2021), cardiomyopathy smart supervision (Ding et al. 2021), load frequency control (Abd Elaziz et al. 2021), chemical descriptor selection (Houssein et al. 2021d), vehicle suspension system optimization (Issa and Samn 2022), and many others. Like other meta-heuristic optimization approaches, the conventional HHO still has shortcomings, such as an imbalance between exploration and exploitation, poor solution quality, and a tendency to fall into local optima. Therefore, to mitigate these shortcomings, many HHO variants have been developed over the past two years. Some of them are summarized as follows. The escaping energy parameter E of HHO plays an important role in the conversion from the exploration phase to the exploitation phase; thus, investigating this parameter is one of the hot research issues for the HHO algorithm. Several modified versions of the escaping energy parameter E have been suggested in the literature (Gupta et al. 2020b; Qu et al. 2020; Wunnava et al. 2020; Yousri et al. 2020) to achieve a good conversion from exploration to exploitation. Although these HHO variants have performed well on low-dimensional benchmark problems, in some cases, especially on high-dimensional and/or complex multimodal problems, they may easily fall into local optima. In Gupta et al. (2020b), the opposition-based learning (OBL) strategy was embedded into the basic HHO algorithm for escaping from local optima. The results indicated that the proposed approach obtains good performance, but only on benchmark test cases. Jiao et al. (2020) developed an enhanced HHO (EHHO) by introducing the orthogonal design (OD) operator and the general opposition learning (GOL) mechanism. In EHHO, the OD operator improves the solution accuracy and the convergence performance, while the GOL mechanism maintains the population diversity and the local search capability of HHO. Qu et al. 
(2020) put forward an improved HHO based on an information exchange technique to solve numerical and engineering optimization problems with satisfactory results; however, the results were reported only on benchmark test problems. In Al-Betar et al. (2021), three different selection mechanisms (namely, tournament, proportional, and linear rank-based approaches) were introduced into the basic HHO algorithm to improve its search performance. The experimental results on benchmark functions showed that the overall performance of HHO with tournament selection was better than that of the other two selection strategies. In Chen et al. (2020), a powerful variant of HHO was proposed by combining chaos, topological multi-population, and differential evolution strategies to optimize continuous functions. The comparison results indicated that the proposed technique performed well on the selected benchmark tasks. Fan et al. (2020) developed a modified version of HHO by introducing a quasi-reflection-based learning (QRBL) strategy; the proposed algorithm effectively accelerates convergence and improves precision on benchmark test problems. In Li et al. (2021), an enhanced HHO (called RLHHO) is proposed by incorporating three strategies (logarithmic spiral, opposition-based learning, and a modified Rosenbrock method) for solving global optimization problems. The results revealed that RLHHO shows better performance than the compared algorithms on most problems. Arini et al. (2022) put forward an improved HHO with a joint opposite selection (JOS) strategy for numerical optimization; in the proposed approach, two opposition learning mechanisms, selective leading opposition and dynamic opposite, are used to improve the performance of HHO. To utilize the advantages of different approaches, HHO has been hybridized with other meta-heuristic algorithms such as the salp swarm algorithm (SSA) (Elaziz et al. 2020), flower pollination algorithm (FPA) (Ridha et al. 
2020), SCA (Kamboj et al. 2020; Hussain et al. 2021), multi-verse optimizer (MVO) (Ewees and Elaziz 2020), moth-flame optimization (MFO) (Elaziz et al. 2020), simulated annealing (SA) (Bandyopadhyay et al. 2021a), and so on. These hybrid variants achieve satisfactory performance but fail to provide optimal values in some cases. The above-mentioned variants have tried to enhance the overall optimization ability of the basic HHO by introducing additional operators or mechanisms. However, no algorithm is perfect. According to the "No Free Lunch" (NFL) theorem (Wolpert and Macready 1997), there is no meta-heuristic approach best suited to solving all optimization problems. This theorem has kept the area of intelligent optimization very active, motivating researchers to improve existing algorithms and to develop new approaches. Furthermore, because the escaping energy parameter E of HHO is based on a randomized policy, it cannot fully reflect the actual iterative search process, and the transition from the exploration phase to the exploitation phase is insufficient. At the same time, the conventional HHO is good at local exploitation but poor at global exploration, and it may easily fall into a local optimum. Motivated by these considerations, this paper develops a novel HHO called the velocity-guided Harris hawks optimizer (VGHHO). More specifically, the primary contributions of this study are as follows. An improved variant of HHO (VGHHO) is proposed for solving function optimization, feature selection, and fault diagnosis of wind turbines. A velocity operator is embedded into the position search equation in the exploitation stage of HHO to guide the population toward promising regions of the solution space. A modified escaping energy parameter based on a cosine function is suggested to achieve a good transition from the exploration phase to the exploitation phase. A refraction-opposition-based learning mechanism is introduced to enhance the diversity of VGHHO. 
The comprehensive performance of VGHHO is investigated using 18 classical benchmark functions, 30 benchmark functions from CEC2017, 21 benchmark feature selection problems, a practical wind turbine fault diagnosis problem, and a PV model parameter estimation problem. The remainder of this study is organized as follows. Section 2 briefly presents the conventional HHO. In Sect. 3, the three modified strategies are explained and the framework of VGHHO is proposed. In Sect. 4, the feasibility of VGHHO is validated on classical benchmark functions, and comparisons are provided. In Sect. 5, the effectiveness of VGHHO is further verified on benchmark problems from CEC 2017. VGHHO is utilized for solving benchmark feature selection tests in Sect. 6. A practical fault diagnosis problem of a wind turbine is solved using VGHHO in Sect. 7. In Sect. 8, VGHHO is applied to the parameter estimation problem of a PV model. Finally, Sect. 9 summarizes the conclusions and provides future research directions.

The conventional HHO algorithm

In HHO, the hawks represent the candidate solutions, while the rabbit denotes the best position found so far. Figure 1 shows the main stages of HHO.
Fig. 1

Different stages of Harris hawks optimizer algorithm


Exploration phase

In HHO, the exploration phase is performed by using two strategies:

X(t+1) = X_rand(t) − r1·|X_rand(t) − 2·r2·X(t)|,  if q ≥ 0.5
X(t+1) = (X_rabbit(t) − X_m(t)) − r3·(LB + r4·(UB − LB)),  if q < 0.5

where X(t) is the current position of the hawk, t represents the iteration number, X_rand(t) indicates a randomly selected hawk from the population, X_rabbit(t) denotes the rabbit's position, r1, r2, r3, r4, and q are random numbers in [0, 1], LB and UB are the left and right endpoints of the interval, and X_m(t) indicates the average position of the hawks:

X_m(t) = (1/N)·Σ_{i=1}^{N} X_i(t)

where N indicates the swarm size.
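As a concrete illustration, the two exploration rules of the standard HHO (Heidari et al. 2019) can be sketched in Python (a minimal sketch; the variable names are ours):

```python
import numpy as np

def exploration_step(X, i, X_rabbit, lb, ub, rng):
    """Return a new position for hawk i during the HHO exploration phase."""
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # Perch based on a randomly selected hawk from the population.
        X_rand = X[rng.integers(len(X))]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # Perch based on the rabbit's position and the swarm's mean position.
    X_m = X.mean(axis=0)
    return (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))
```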

Transition from exploration to exploitation

In the iterative process, the transition between exploration and exploitation depends on the escaping energy coefficient E, which is calculated by:

E = 2·E0·(1 − t/T)

where E0 denotes a random number in (−1, 1), t is the current iteration, and T represents the total number of iterations.
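This linearly shrinking random coefficient can be computed as follows (a minimal sketch):

```python
import random

def escaping_energy(t, T):
    """Escaping energy E = 2*E0*(1 - t/T), with E0 drawn from (-1, 1)."""
    E0 = random.uniform(-1.0, 1.0)   # initial energy of the prey
    return 2.0 * E0 * (1.0 - t / T)
```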

Exploitation phase

After finding the target prey, the hawks wait for a chance to attack it. However, the actual attack behavior is complicated; for example, the prey may escape from the encirclement. Therefore, four strategies are designed in the exploitation phase to better mimic the attack characteristics of Harris hawks. The choice of strategy is determined by both the escaping energy coefficient E and a random number r.

Hard besiege strategy

In HHO, when |E| < 0.5 and r ≥ 0.5, the escaping energy is not enough and the prey has no chance to flee. So the hawks attack the prey via the hard besiege strategy as follows:

X(t+1) = X_rabbit(t) − E·|ΔX(t)|,  ΔX(t) = X_rabbit(t) − X(t)

Soft besiege strategy

When |E| ≥ 0.5 and r ≥ 0.5, the escaping energy is enough but the prey has no chance to escape. So the hawks attack the prey by using the soft besiege strategy:

X(t+1) = ΔX(t) − E·|J·X_rabbit(t) − X(t)|,  J = 2·(1 − r5)

where J indicates the jump strength of the prey and r5 denotes a random number in (0, 1).
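The two besiege updates for the r ≥ 0.5 cases can be combined in a small helper (a sketch of the standard HHO rules; the function name is ours):

```python
import numpy as np

def besiege(x, X_rabbit, E, rng):
    """Hard besiege (|E| < 0.5) or soft besiege (|E| >= 0.5), r >= 0.5 cases."""
    if abs(E) < 0.5:
        # Hard besiege: X(t+1) = X_rabbit - E*|X_rabbit - X|
        return X_rabbit - E * np.abs(X_rabbit - x)
    # Soft besiege: X(t+1) = (X_rabbit - X) - E*|J*X_rabbit - X|, J = 2*(1 - r5)
    J = 2.0 * (1.0 - rng.random())
    return (X_rabbit - x) - E * np.abs(J * X_rabbit - x)
```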

Soft besiege with progressive rapid dives

When |E| ≥ 0.5 and r < 0.5, the escaping energy is still sufficient and the prey has a chance to flee. In this phase, two steps are included. The first step is executed by the following equation:

Y = X_rabbit(t) − E·|J·X_rabbit(t) − X(t)|

After executing the first step, if the hawk's position is not improved, the second step based on the Lévy flight (LF) operator is executed by:

Z = Y + S × LF(D)

where D represents the dimension of the optimization problem, S denotes a 1 × D random vector, and LF is the Lévy flight function:

LF(x) = 0.01 × (u·σ)/|v|^(1/β),  σ = [Γ(1+β)·sin(πβ/2) / (Γ((1+β)/2)·β·2^((β−1)/2))]^(1/β)

where u and v are the random constants of LF and β is set to 1.5. In this phase, the positions of hawks are updated by the following equation:

X(t+1) = Y, if F(Y) < F(X(t));  Z, if F(Z) < F(X(t))
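The LF function is commonly implemented with normally distributed u and v (an implementation assumption; the text describes u and v only as random quantities):

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(dim, beta=1.5, rng=None):
    """Levy flight step LF(D) = 0.01 * u*sigma-scaled / |v|**(1/beta)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # u ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)     # v ~ N(0, 1)
    return 0.01 * u / np.abs(v) ** (1 / beta)
```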

Hard besiege with progressive rapid dives

When |E| < 0.5 and r < 0.5, the prey has a chance to flee but its remaining escaping energy is not enough. Hence, the positions of hawks are updated by:

X(t+1) = Y, if F(Y) < F(X(t));  Z, if F(Z) < F(X(t))

where Y and Z are calculated by the following equations:

Y = X_rabbit(t) − E·|J·X_rabbit(t) − X_m(t)|
Z = Y + S × LF(D)

Algorithm 1 introduces the step-wise explanation of the conventional HHO.
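The two dive-based strategies (r < 0.5), together with their greedy selection, can be sketched as follows (our variable names; the Lévy step uses the common normal-distributed implementation):

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, rng, beta=1.5):
    """Common Levy-flight implementation with normal u and v."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return 0.01 * rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

def dive_update(x, X_rabbit, X_m, E, f, rng):
    """Progressive rapid dives: soft if |E| >= 0.5, hard otherwise."""
    J = 2.0 * (1.0 - rng.random())
    ref = x if abs(E) >= 0.5 else X_m   # hard-besiege dives use the mean position
    Y = X_rabbit - E * np.abs(J * X_rabbit - ref)
    if f(Y) < f(x):
        return Y
    Z = Y + rng.random(len(x)) * levy(len(x), rng)  # second step: Levy-flight dive
    return Z if f(Z) < f(x) else x
```

Note that the update is greedy: the hawk only moves if Y or Z improves its fitness.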

Proposed velocity-guided HHO algorithm

Similar to other meta-heuristic optimization methods, the conventional HHO cannot effectively explore the entire search space when solving complex optimization problems (Gupta et al. 2020b; Kamboj et al. 2020; Qu et al. 2020). Therefore, the objective of our work is to develop a new variant of HHO. It should be emphasized that our study does not change the framework of the conventional HHO algorithm; rather, it improves HHO by embedding three strategies: a velocity-guided position search equation, a nonlinear escaping energy parameter, and a refraction-opposition-based learning strategy.

Velocity-guided position search equation

A meta-heuristic algorithm is developed to achieve a good trade-off between exploration and exploitation over the iterative process. This trade-off is very important to the successful implementation of an optimization method. Global exploration refers to the capability to search for the global optimum, while local exploitation refers to the capability to utilize existing information to seek better agents. The conventional HHO algorithm shows an imbalance between global exploration and local exploitation on multimodal optimization problems (Kamboj et al. 2020). The main challenging issue of the original HHO algorithm is that it may be trapped in local optima when handling multimodal optimization cases (Gupta et al. 2020b). The reason is the poor global exploration ability of HHO, although it has good local exploitation capability. Moreover, as seen in Eqs. (4) and (5), the new candidate search agent is generated by a difference operation between the global best solution and the current one, which may cause premature convergence. Therefore, modifying the position search equation in the exploitation phase to balance exploration and exploitation of HHO is one of the active research directions, and many HHO variants have been suggested to achieve this objective (Gupta et al. 2020b; Kamboj et al. 2020). PSO is an efficient and effective meta-heuristic technique proposed by Kennedy and Eberhart (1995), whose idea is derived from the foraging behavior of birds. In PSO, each particle has its own position X and velocity v. In the iterative process, the velocity and position of each particle are updated by (Shi and Eberhart 1998):

v_i(t+1) = w·v_i(t) + c1·r1·(P_i(t) − X_i(t)) + c2·r2·(G(t) − X_i(t))
X_i(t+1) = X_i(t) + v_i(t+1)

where v_i indicates the velocity of particle i, w denotes the inertia weight, P_i represents the personal best position of particle i, X_i denotes the position of particle i, G represents the global best position, c1 and c2 are the coefficient factors, and r1 and r2 are random numbers. 
Inspired by PSO, this paper designs a novel velocity-guided position search equation for the exploitation phase. In the hard besiege strategy, the position is updated by Eq. (15); in the soft besiege strategy, the position is updated by Eq. (17). In these equations, v denotes the velocity of each hawk, P is the personal best position of each hawk, c3 and c4 are the memory factors, r3 and r4 are random numbers in [0, 1], and w represents the inertia weight, computed from its initial value winitial and its end value wend over the iterations. The first part on the right side of Eqs. (15) and (17) denotes the dynamic flight velocity of each hawk, which provides the necessary motivation for hawks to search throughout the solution space. Similar to PSO, the second term of Eqs. (15) and (17) is called the "cognitive" component; it represents the personal thinking of each hawk and guides the hawk toward its own historical best position. Compared with the position search Eqs. (4) and (5) in the conventional HHO algorithm, the proposed position search Eqs. (15) and (17) have three different features: (1) the velocity term is added to enhance the global search capability of the position search equation; (2) each hawk learns not only from its own information but also from the knowledge of the other hawks in the population; and (3) the inertia weight is introduced to dynamically balance the optimization performance of HHO.
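Since the exact forms of Eqs. (15) and (17) are not preserved in this record, the following is only a PSO-style sketch of the structure described above (an inertia-weighted velocity plus a cognitive pull toward the personal best, added to the corresponding HHO besiege term); all names and the exact combination are our assumptions, not the paper's equations:

```python
import numpy as np

def velocity_guided_besiege(x, v, p_best, X_rabbit, E, w, c3, c4, rng, soft):
    """Illustrative sketch of a velocity-guided exploitation update.

    ASSUMPTION: the hawk's new position is the HHO besiege term plus a
    memory-weighted, inertia-damped velocity that is pulled toward the
    hawk's personal best p_best (the 'cognitive' component)."""
    r3, r4 = rng.random(2)
    v_new = w * v + c3 * r3 * (p_best - x)   # flight velocity + cognitive part
    if soft:
        J = 2.0 * (1.0 - rng.random())
        pos = (X_rabbit - x) - E * np.abs(J * X_rabbit - x) + c4 * r4 * v_new
    else:
        pos = X_rabbit - E * np.abs(X_rabbit - x) + c4 * r4 * v_new
    return v_new, pos
```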

Nonlinear escaping energy parameter

The search efficiency of a meta-heuristic algorithm depends on how well it achieves the transition from exploration to exploitation over the optimization process. In the original HHO algorithm, the escaping energy coefficient E plays a crucial role in transiting between exploration and exploitation: a large value of E (E ≥ 1) favors the exploration capability, while a small value (E < 1) is helpful to the exploitation ability. Therefore, it is quite important to select suitable values of the escaping energy parameter E for HHO. However, the escaping energy parameter E of the conventional HHO is random, and the range of this randomness decreases from 2 to 0 over the iterative process. This escaping energy coefficient has proved effective for some problems, but it is invalid in other cases (Gupta et al. 2020b; Qu et al. 2020). Because the iterative search of HHO is highly nonlinear and quite complicated, the linearly decreasing transition rule of E cannot truly reflect the actual optimization process. Thus, a potential research interest is to investigate new rules for the transition parameter E in HHO to achieve a good transition from global exploration to local exploitation. Many nonlinearly decreasing strategies for the escaping energy parameter E have been suggested to achieve this goal (Gupta et al. 2020b; Qu et al. 2020; Yousri et al. 2020). Different from previously proposed nonlinearly decreasing strategies, this paper proposes a novel nonlinearly increasing scheme for the escaping energy parameter E. The reasons are explained as follows. On the one hand, the population of hawks has good diversity in the early phase of the iterative search. Good diversity means that HHO has a powerful capability to explore throughout the search space, so the main goal of this phase is to accelerate convergence (i.e., E < 1). 
On the other hand, in the later phase of the iterative search, HHO may have converged to a certain point in the search space, which implies a loss of population diversity. Maintaining population diversity and escaping from local optima are the main purposes of this phase (i.e., E ≥ 1). Therefore, the proposed parameter E is calculated by a cosine-based formula, Eq. (20), where Emax and Emin are respectively the maximum and minimum values of E. Compared with the linearly decreasing strategy for the escaping energy parameter E, the nonlinearly increasing strategy described in Eq. (20) devotes a longer time to exploitation than to exploration. Figure 2 shows the graph of the proposed nonlinear escaping energy parameter E over the course of the iteration process.
Fig. 2

Proposed escaping energy parameter E

From Fig. 2 and Eq. (20), the values of the proposed nonlinear escaping energy parameter E are small (E < 1) in the early and middle phases of the iteration, which demonstrates that the algorithm concentrates on the local exploitation phase for a long time (about 67% of the total number of iterations) compared with the global exploration phase. Figure 2 also indicates that the proposed nonlinear escaping energy parameter E is large in the later phase, so the algorithm focuses on exploration for only about 33% of the iterations.
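The exact Eq. (20) is not preserved in this record. The following sketch shows one cosine-based increasing schedule that is consistent with the description (it is our reconstruction, not necessarily the paper's formula): with Emin = 0 and Emax = 2, E stays below 1 for roughly the first two-thirds of the iterations and exceeds 1 only in the final third:

```python
import math

def nonlinear_E(t, T, E_min=0.0, E_max=2.0):
    """Illustrative cosine-based, nonlinearly increasing energy schedule.

    ASSUMPTION: E rises slowly from E_min to E_max; with the defaults,
    E < 1 holds for about the first 67% of the iterations."""
    return E_min + (E_max - E_min) * (1.0 - math.cos(0.5 * math.pi * t / T))
```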

Refraction-opposition-based learning strategy

For meta-heuristic optimization algorithms, in the later stage of the search, the other individuals in the population are attracted by the current best individual obtained so far and gather towards it, thereby causing a loss of population diversity and making it easy to fall into local optima. This is an inherent shortcoming of meta-heuristic optimization algorithms. That is to say, enhancing the population diversity of meta-heuristic algorithms in the later stage of optimization is very important. To overcome this shortcoming, additional strategies such as the mutation operation (Gupta et al. 2020c), Lévy flight (Chawla and Duhan 2018), opposition-based learning (OBL) (Rahnamayan et al. 2008), and pinhole-imaging-based learning (Long et al. 2021b) have been introduced into meta-heuristic algorithms. The OBL strategy is a candidate technique to effectively improve the optimization ability of a meta-heuristic algorithm; however, premature convergence may still occur in the later phase. Thus, this paper improves the OBL strategy based on the theory of refraction and proposes a novel refraction-opposition learning (ROL) mechanism for the global best solution. The detailed implementation process is illustrated in Fig. 3.
Fig. 3

Proposed refraction-opposition-based learning process

In Fig. 3, according to the principle of light refraction, an incident ray slants from medium 1 into medium 2 and its direction changes; the refraction angle is smaller than the incidence angle. Based on Snell's law (Griffiths 1998), the following mathematical relation is obtained:

n = sin θ1 / sin θ2 = [((a + b)/2 − x*) / h] / [(x' − (a + b)/2) / h']

where n represents the refraction index, θ1 and θ2 are the incidence angle and refraction angle, a and b are the left and right endpoints of the interval, x* is the global best individual, x' is called the opposite individual of x*, and h and h' are the distances of x* and x' from the interval axis, respectively. Let k = h/h'; then Eq. (21) is modified as

k·n = ((a + b)/2 − x*) / (x' − (a + b)/2)

According to Eq. (22), the refraction-opposition solution is computed as

x' = (a + b)/2 + (a + b)/(2kn) − x*/(kn)

When k = 1 and n = 1, Eq. (23) reduces to

x' = a + b − x*

Eq. (24) is the mathematical formula of OBL in Rahnamayan et al. (2008). That is to say, OBL [Eq. (24)] is a special case of the ROL strategy [Eq. (23)]. Eq. (23) can be generalized to D-dimensional space:

x'_j = (a_j + b_j)/2 + (a_j + b_j)/(2kn) − x*_j/(kn),  j = 1, 2, …, D

where a_j and b_j represent the lower and upper bounds of the jth dimensional variable, and x*_j and x'_j are the jth components of x* and x'. Algorithm 2 provides the steps of the ROL strategy on x*. In summary, the flow chart of the VGHHO algorithm is provided in Fig. 4.
Fig. 4

The flow chart of the proposed VGHHO algorithm

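The ROL update of Eq. (23), x' = (a+b)/2 + (a+b)/(2kn) − x*/(kn), can be sketched as a small vectorized helper (a sketch with our names; k and n are treated as tunable constants):

```python
import numpy as np

def rol(x_best, lb, ub, k=1.0, n=1.0):
    """Refraction-opposition-based learning solution of the global best.

    Implements x' = (lb+ub)/2 + (lb+ub)/(2*k*n) - x_best/(k*n), applied
    element-wise; with k = n = 1 this reduces to standard OBL,
    x' = lb + ub - x_best."""
    mid = (lb + ub) / 2.0
    return mid + mid / (k * n) - x_best / (k * n)
```

In VGHHO this operator is applied only to the global best solution, so its per-iteration cost is O(D).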

Computational complexity analysis

The worst-case time complexity of VGHHO is calculated in big-O notation from its pseudo code. The step-wise description of the complexity of VGHHO is as follows. The population initialization of VGHHO requires O(N × D) time, where N is the population size and D is the dimension of the problem. Calculating the fitness value of each hawk requires O(N) time. Selecting the rabbit (the best solution obtained so far) requires O(N) time. The position update mechanism of each hawk requires O(N × D) time. The greedy selection technique requires O(N) time. The refraction-opposition-based learning strategy requires O(D) time, since it is applied only to the global best solution. In summary, the total computational time of VGHHO is O(N × D × tmax) for tmax iterations.

Experiments on classical benchmark functions

Classical benchmark test functions

Eighteen classical benchmark functions are applied in the experiments. Table 1 lists the detailed characteristics of these functions, which comprise unimodal (f1–f8) and multimodal (f9–f18) functions. A unimodal problem has only one global optimum and is used to test the exploitation ability of meta-heuristic approaches. Conversely, multimodal functions, which have many local optima, are usually utilized to investigate the exploration ability of a meta-heuristic (Long et al. 2018a). In Table 1, fmin represents the theoretical optimum value.
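A few of these benchmarks are easy to express directly; for example, f1 (sphere, unimodal), f9 (Rastrigin, multimodal), and f10 (Ackley, multimodal) from Table 1, each with fmin = 0 at the origin:

```python
import numpy as np

def sphere(x):          # f1: sum of squares, unimodal
    return float(np.sum(x ** 2))

def rastrigin(x):       # f9: many regularly spaced local optima
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):          # f10: nearly flat outer region, deep central basin
    d = len(x)
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)
```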
Table 1

The 18 classical benchmark test functions

Function equation | Domain | fmin
f1(x) = Σ_{i=1}^{D} x_i^2 | [−100, 100]^D | 0
f2(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i| | [−10, 10]^D | 0
f3(x) = max_i {|x_i|, 1 ≤ i ≤ D} | [−100, 100]^D | 0
f4(x) = Σ_{i=1}^{D} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | [−30, 30]^D | 0
f5(x) = Σ_{i=1}^{D} i·x_i^4 + random[0, 1) | [−1.28, 1.28]^D | 0
f6(x) = Σ_{i=1}^{D} i·x_i^2 | [−10, 10]^D | 0
f7(x) = Σ_{i=1}^{D} |x_i|^(i+1) | [−1, 1]^D | 0
f8(x) = Σ_{i=1}^{D} (10^6)^((i−1)/(D−1))·x_i^2 | [−100, 100]^D | 0
f9(x) = Σ_{i=1}^{D} [x_i^2 − 10cos(2π x_i) + 10] | [−5.12, 5.12]^D | 0
f10(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2π x_i)) + 20 + e | [−32, 32]^D | 0
f11(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1 | [−600, 600]^D | 0
f12(x) = Σ_{i=1}^{D} |x_i·sin(x_i) + 0.1·x_i| | [−10, 10]^D | 0
f13(x) = sin^2(π x_1) + Σ_{i=1}^{D−1} [x_i^2·(1 + 10 sin^2(π x_1)) + (x_i − 1)^2·sin^2(2π x_i)] | [−10, 10]^D | 0
f14(x) = 1 − cos(2π √(Σ_{i=1}^{D} x_i^2)) + 0.1 √(Σ_{i=1}^{D} x_i^2) | [−100, 100]^D | 0
f15(x) = 0.1(sin^2(3π x_1) + Σ_{i=1}^{D−1} (x_i − 1)^2·(1 + sin^2(3π x_{i+1})) + (x_D − 1)^2·(1 + sin^2(2π x_D))) | [−5, 5]^D | 0
f16(x) = Σ_{i=1}^{D} (0.2 x_i^2 + 0.1 x_i^2·sin(2x_i)) | [−10, 10]^D | 0
\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$f_{{{17}}} (x) = \sum\nolimits_{i = 1}^{D - 1} {\left( {x_{i}^{2} + 2x_{i + 1}^{2} } \right)^{0.25} \cdot \left( {\left( {\sin 50(x_{i}^{2} + x_{i + 1}^{2} )^{0.1} } \right)^{2} + 1} \right)}$$\end{document}f17(x)=i=1D-1xi2+2xi+120.25·sin50(xi2+xi+12)0.12+1[− 10, 10]D0
\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$f_{{{18}}} (x) = \sum\nolimits_{i = 1}^{D} {x_{i}^{6} \cdot \left( {2 + \sin \tfrac{1}{{x_{i} }}} \right)}$$\end{document}f18(x)=i=1Dxi6·2+sin1xi[− 1, 1]D0
The 18 classical benchmark test functions
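For reference, a few of the benchmarks above can be written out in plain Python; this is an illustrative sketch (the function names follow the table's numbering), not the authors' code:

```python
import math

def f9_rastrigin(x):
    # f9: sum(x_i^2 - 10*cos(2*pi*x_i) + 10); global minimum 0 at the origin
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def f10_ackley(x):
    # f10: -20*exp(-0.2*sqrt(mean(x_i^2))) - exp(mean(cos(2*pi*x_i))) + 20 + e
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def f11_griewank(x):
    # f11: sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))) + 1, with 1-based index i
    s = sum(xi * xi for xi in x) / 4000
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1
```

All three attain their theoretical optimum 0 at the origin, which is the value the zero entries in Tables 2-4 refer to.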

Comparison of VGHHO with other approaches on low-dimensional problems

The search capability of VGHHO is compared with six other meta-heuristic algorithms: BOA (Arora and Singh 2019), SOA (Dhiman and Kumar 2019a, b), HHO (Heidari et al. 2019), adaptive guided differential evolution (AGDE) (Mohamed et al. 2019), exploration-enhanced GWO (EEGWO) (Long et al. 2018a), and improved SCA (ISCA) (Long et al. 2019). BOA, SOA, and HHO are traditional meta-heuristic optimization techniques, while AGDE, EEGWO, and ISCA are state-of-the-art meta-heuristic algorithms. To ensure a fair comparison, the population size and the total number of iterations are fixed to 30 and 500, respectively, for VGHHO and the other six techniques. In VGHHO, c3 = c4 = 2, winitial = 1, wend = 0, Emax = 2, Emin = 0, k = 5, n = 5. In this experiment, the dimensions of the functions in Table 1 are set to 30. All approaches are implemented in MATLAB R2014a. To reduce random error, each algorithm is run independently for 30 trials on each function. Table 2 summarizes the average (Mean) and standard deviation (Std) values of the seven algorithms on the 18 functions, together with the Friedman ranking test values based on the "Mean" and "Std" results. The best value of each function is highlighted in bold in Table 2.
Table 2

Comparisons of VGHHO and other six approaches for 18 classical problems with 30D in Table 1

Function | Index | BOA | SOA | HHO | AGDE | EEGWO | ISCA | VGHHO
f1 | Mean | 2.64E-11 | 4.17E-13 | 6.59E-102 | 4.87E-03 | 0 | 0 | 0
f1 | Std | 3.10E-12 | 4.36E-13 | 2.94E-102 | 3.63E-03 | 0 | 0 | 0
f1 | Ranking | 6 | 5 | 4 | 7 | 1 | 1 | 1
f2 | Mean | 8.65E-09 | 1.30E-08 | 9.20E-50 | 1.09E-02 | 5.42E-240 | 8.99E-211 | 0
f2 | Std | 4.62E-09 | 1.14E-08 | 2.06E-49 | 5.10E-03 | 0 | 0 | 0
f2 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f3 | Mean | 1.27E-08 | 3.44E-03 | 1.46E-48 | 8.73E+00 | 8.80E-228 | 1.36E-207 | 0
f3 | Std | 2.22E-09 | 4.09E-03 | 3.26E-48 | 8.09E-01 | 0 | 0 | 0
f3 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f4 | Mean | 2.90E+01 | 2.81E+01 | 7.15E-02 | 6.19E+01 | 2.89E+01 | 2.89E+01 | 4.40E-03
f4 | Std | 2.58E-02 | 6.22E-01 | 6.67E-02 | 5.65E+01 | 2.61E-02 | 1.81E-02 | 4.43E-03
f4 | Ranking | 6 | 3 | 2 | 7 | 4 | 4 | 1
f5 | Mean | 1.23E-03 | 2.08E-03 | 2.27E-04 | 7.60E-02 | 2.56E-05 | 4.24E-05 | 5.17E-06
f5 | Std | 5.41E-04 | 1.55E-03 | 1.66E-04 | 4.53E-02 | 3.85E-05 | 5.55E-05 | 1.15E-05
f5 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f6 | Mean | 2.65E-11 | 8.30E-13 | 1.87E-107 | 3.55E-04 | 0 | 0 | 0
f6 | Std | 3.25E-12 | 6.74E-13 | 2.88E-107 | 1.46E-04 | 0 | 0 | 0
f6 | Ranking | 6 | 5 | 4 | 7 | 1 | 1 | 1
f7 | Mean | 2.87E-13 | 1.41E-47 | 2.12E-129 | 3.10E-22 | 0 | 0 | 0
f7 | Std | 2.19E-13 | 3.06E-47 | 4.63E-129 | 1.38E-22 | 0 | 0 | 0
f7 | Ranking | 7 | 5 | 4 | 6 | 1 | 1 | 1
f8 | Mean | 2.79E-11 | 3.71E-09 | 7.36E-96 | 8.33E+00 | 0 | 0 | 0
f8 | Std | 3.05E-12 | 5.66E-09 | 1.64E-95 | 6.30E+00 | 0 | 0 | 0
f8 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f9 | Mean | 1.20E+02 | 2.96E+00 | 0 | 4.26E+01 | 0 | 0 | 0
f9 | Std | 1.11E+02 | 4.62E+00 | 0 | 3.95E+00 | 0 | 0 | 0
f9 | Ranking | 7 | 5 | 1 | 6 | 1 | 1 | 1
f10 | Mean | 1.23E-08 | 2.00E+01 | 8.88E-16 | 1.53E-02 | 8.88E-16 | 8.88E-16 | 8.88E-16
f10 | Std | 1.40E-09 | 6.20E-04 | 0 | 1.83E-03 | 0 | 0 | 0
f10 | Ranking | 5 | 7 | 1 | 6 | 1 | 1 | 1
f11 | Mean | 1.55E-11 | 2.61E-02 | 0 | 1.42E-02 | 0 | 0 | 0
f11 | Std | 1.05E-11 | 3.64E-02 | 0 | 9.85E-03 | 0 | 0 | 0
f11 | Ranking | 5 | 7 | 1 | 6 | 1 | 1 | 1
f12 | Mean | 1.68E-09 | 4.24E-04 | 1.71E-57 | 2.57E-02 | 8.44E-239 | 2.03E-210 | 0
f12 | Std | 1.02E-09 | 7.29E-04 | 2.59E-57 | 3.38E-03 | 0 | 0 | 0
f12 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f13 | Mean | 4.20E-12 | 1.31E-13 | 3.17E-99 | 3.30E-01 | 0 | 0 | 0
f13 | Std | 4.36E-12 | 1.19E-13 | 1.42E-98 | 5.71E-01 | 0 | 0 | 0
f13 | Ranking | 6 | 5 | 4 | 7 | 1 | 1 | 1
f14 | Mean | 3.29E-01 | 1.42E-01 | 8.18E-49 | 1.10E+00 | 5.39E-63 | 0 | 0
f14 | Std | 4.26E-02 | 8.55E-02 | 1.82E-48 | 2.00E-01 | 4.52E-63 | 0 | 0
f14 | Ranking | 6 | 5 | 4 | 7 | 3 | 1 | 1
f15 | Mean | 2.38E-12 | 3.47E-16 | 2.34E-105 | 2.71E-06 | 0 | 0 | 0
f15 | Std | 8.81E-13 | 4.68E-16 | 3.92E-105 | 1.39E-06 | 0 | 0 | 0
f15 | Ranking | 6 | 5 | 4 | 7 | 1 | 1 | 1
f16 | Mean | 2.08E-11 | 1.30E-13 | 3.60E-100 | 9.69E-06 | 0 | 0 | 0
f16 | Std | 4.44E-12 | 2.81E-13 | 8.05E-100 | 7.84E-06 | 0 | 0 | 0
f16 | Ranking | 6 | 5 | 4 | 7 | 1 | 1 | 1
f17 | Mean | 5.01E-05 | 1.46E-02 | 2.28E-27 | 4.96E+00 | 0 | 0 | 0
f17 | Std | 8.06E-05 | 5.10E-03 | 7.11E-27 | 8.73E-01 | 0 | 0 | 0
f17 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f18 | Mean | 5.27E-10 | 3.53E-37 | 0 | 9.73E-16 | 0 | 0 | 0
f18 | Std | 4.76E-10 | 7.31E-37 | 0 | 8.57E-16 | 0 | 0 | 0
f18 | Ranking | 7 | 5 | 1 | 6 | 1 | 1 | 1
Average ranking | 5.72 | 5.44 | 3.22 | 6.72 | 1.50 | 1.61 | 1.00
Total ranking | 6 | 5 | 4 | 7 | 2 | 3 | 1

The best value of each function is highlighted in bold in the table

From Table 2, VGHHO attains the theoretical optimum (0) on all problems except f4, f5, and f10. Compared with the BOA, SOA, and AGDE algorithms, VGHHO performs better on all optimization tasks. With respect to HHO, VGHHO finds better values in fourteen cases, and the two approaches obtain the same values on f9–f11 and f18. Compared with EEGWO, VGHHO obtains better results on six problems (i.e., f2–f5, f12, and f14) and the same results on the remaining twelve. VGHHO is superior to ISCA on five benchmark functions (i.e., f2–f5 and f12), and the two algorithms obtain the same values on the remaining thirteen. According to the average Friedman ranking results in Table 2, VGHHO ranks first, followed by EEGWO, ISCA, HHO, SOA, BOA, and AGDE. To show convergence behavior intuitively, Fig. 5 plots the convergence curves of VGHHO and the other six techniques on six representative 30D cases. As seen in Fig. 5, VGHHO converges faster than the other approaches.
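The average Friedman ranks reported in the tables can be reproduced from the per-function "Mean" values by ranking the algorithms on each function and averaging over functions. A stdlib-only sketch (note: the standard Friedman procedure shares the average rank among tied algorithms, whereas the paper's per-function rows assign tied algorithms the same minimal rank):

```python
def ranks_with_ties(values):
    """Rank values ascending (1 = best); tied values share the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def friedman_average_ranks(table):
    """table: one row per function, one column per algorithm (smaller = better)."""
    n_alg = len(table[0])
    totals = [0.0] * n_alg
    for row in table:
        for a, r in enumerate(ranks_with_ties(row)):
            totals[a] += r
    return [t / len(table) for t in totals]
```

Feeding in the 18 "Mean" rows of Table 2 yields one average rank per algorithm, which is how the "Average ranking" row is ordered.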
Fig. 5

The iterative curves of seven approaches for six representative 30D functions


Scalability test

Furthermore, the VGHHO is used for dealing with the higher dimensions of the 18 classical benchmark tasks in Table 1 (i.e., D = 100 and 1000) to further investigate its scalability. For all algorithms, the same parameter settings are used as in Sect. 4.2. The mean and std results of VGHHO and other six algorithms on 18 problems with 100 and 1000 dimensions are outlined in Tables 3 and 4.
Table 3

Comparisons of VGHHO and six algorithms on 18 classical benchmark problems with 100 dimensions in Table 1

Function | Index | BOA | SOA | HHO | AGDE | EEGWO | ISCA | VGHHO
f1 | Mean | 2.81E-11 | 2.88E-05 | 1.05E-96 | 3.67E+02 | 0 | 0 | 0
f1 | Std | 8.26E-13 | 1.61E-05 | 1.91E-96 | 7.32E+01 | 0 | 0 | 0
f1 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f2 | Mean | 1.16E+47 | 7.31E-05 | 2.47E-48 | 1.58E+01 | 1.95E-227 | 2.68E-208 | 0
f2 | Std | 1.85E+47 | 5.90E-05 | 3.76E-48 | 1.17E+00 | 0 | 0 | 0
f2 | Ranking | 7 | 5 | 4 | 6 | 2 | 3 | 1
f3 | Mean | 1.46E-08 | 6.88E+01 | 8.44E-48 | 5.88E+01 | 4.36E-220 | 2.13E-204 | 0
f3 | Std | 1.40E-09 | 2.19E+01 | 6.62E-48 | 1.45E+00 | 0 | 0 | 0
f3 | Ranking | 5 | 7 | 4 | 6 | 2 | 3 | 1
f4 | Mean | 9.89E+01 | 9.81E+01 | 3.15E-01 | 7.65E+04 | 9.89E+01 | 9.90E+01 | 5.39E-02
f4 | Std | 2.95E-02 | 4.10E-01 | 4.41E-01 | 1.99E+04 | 2.78E-02 | 1.19E-02 | 8.93E-02
f4 | Ranking | 4 | 3 | 2 | 7 | 4 | 6 | 1
f5 | Mean | 1.82E-03 | 1.35E-02 | 7.20E-04 | 9.83E-01 | 3.72E-05 | 7.46E-05 | 2.00E-05
f5 | Std | 1.88E-03 | 1.30E-02 | 5.92E-04 | 3.01E-01 | 3.26E-05 | 1.59E-04 | 2.86E-05
f5 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f6 | Mean | 2.78E-11 | 2.24E-05 | 2.24E-100 | 1.60E+02 | 0 | 0 | 0
f6 | Std | 3.99E-12 | 1.73E-05 | 5.13E-100 | 3.62E+01 | 0 | 0 | 0
f6 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f7 | Mean | 3.91E-13 | 4.14E-06 | 2.02E-124 | 2.40E-12 | 0 | 0 | 0
f7 | Std | 4.73E-13 | 5.90E-06 | 3.25E-124 | 9.92E-12 | 0 | 0 | 0
f7 | Ranking | 5 | 7 | 4 | 6 | 1 | 1 | 1
f8 | Mean | 3.08E-11 | 1.69E-02 | 3.91E-93 | 6.59E+05 | 0 | 0 | 0
f8 | Std | 2.80E-12 | 3.79E-02 | 3.10E-93 | 1.21E+05 | 0 | 0 | 0
f8 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f9 | Mean | 1.68E+02 | 5.08E+00 | 0 | 5.94E+02 | 0 | 0 | 0
f9 | Std | 3.75E+02 | 5.53E+00 | 0 | 1.18E+01 | 0 | 0 | 0
f9 | Ranking | 6 | 5 | 1 | 7 | 1 | 1 | 1
f10 | Mean | 1.27E-08 | 2.00E+01 | 8.88E-16 | 4.29E+00 | 8.88E-16 | 8.88E-16 | 8.88E-16
f10 | Std | 1.68E-09 | 4.72E-04 | 0 | 1.71E-01 | 0 | 0 | 0
f10 | Ranking | 5 | 7 | 1 | 6 | 1 | 1 | 1
f11 | Mean | 2.95E-11 | 2.53E-02 | 0 | 4.22E+00 | 0 | 0 | 0
f11 | Std | 1.65E-11 | 2.84E-02 | 0 | 1.19E+00 | 0 | 0 | 0
f11 | Ranking | 5 | 6 | 1 | 7 | 1 | 1 | 1
f12 | Mean | 2.44E-09 | 1.11E-03 | 2.52E-49 | 3.71E+01 | 1.28E-228 | 2.14E-208 | 0
f12 | Std | 2.06E-09 | 8.77E-05 | 2.06E-49 | 5.48E+00 | 0 | 0 | 0
f12 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f13 | Mean | 3.14E-11 | 8.95E+03 | 1.34E-94 | 2.53E+02 | 0 | 0 | 0
f13 | Std | 1.50E-11 | 1.32E+02 | 2.22E-94 | 6.51E+01 | 0 | 0 | 0
f13 | Ranking | 5 | 7 | 4 | 6 | 1 | 1 | 1
f14 | Mean | 3.34E-01 | 3.40E-01 | 1.52E-47 | 8.37E+00 | 8.15E-52 | 0 | 0
f14 | Std | 4.55E-02 | 5.48E-02 | 1.06E-47 | 1.31E+00 | 1.62E-51 | 0 | 0
f14 | Ranking | 5 | 6 | 4 | 7 | 3 | 1 | 1
f15 | Mean | 7.01E-12 | 9.76E-09 | 5.50E-99 | 4.16E-01 | 0 | 0 | 0
f15 | Std | 5.36E-12 | 8.24E-09 | 7.71E-99 | 5.14E-02 | 0 | 0 | 0
f15 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f16 | Mean | 2.39E-11 | 5.85E-08 | 1.23E-98 | 2.38E+00 | 0 | 0 | 0
f16 | Std | 1.99E-12 | 5.72E-08 | 1.17E-98 | 1.11E+00 | 0 | 0 | 0
f16 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f17 | Mean | 3.14E-06 | 4.19E-01 | 2.11E-25 | 1.17E+02 | 0 | 0 | 0
f17 | Std | 1.50E-06 | 2.27E-01 | 1.32E-25 | 4.02E+00 | 0 | 0 | 0
f17 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f18 | Mean | 1.60E-10 | 5.20E-20 | 0 | 2.27E-04 | 0 | 0 | 0
f18 | Std | 1.63E-10 | 3.79E-20 | 0 | 2.67E-04 | 0 | 0 | 0
f18 | Ranking | 6 | 5 | 1 | 7 | 1 | 1 | 1
Average ranking | 5.17 | 5.89 | 3.22 | 6.72 | 1.50 | 1.72 | 1.00
Total ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1

The best value of each function is highlighted in bold in the table

Table 4

Comparisons of VGHHO and six approaches for 18 classical benchmark problems with 1000 dimensions in Table 1

Function | Index | BOA | SOA | HHO | AGDE | EEGWO | ISCA | VGHHO
f1 | Mean | 3.15E-11 | 8.02E-01 | 7.12E-95 | 5.17E+05 | 0 | 0 | 0
f1 | Std | 3.20E-12 | 2.80E-01 | 6.82E-95 | 4.78E+04 | 0 | 0 | 0
f1 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f2 | Mean | NA | 1.62E-02 | 2.12E-47 | NA | 0 | 1.26E-207 | 0
f2 | Std | NA | 4.43E-03 | 4.73E-47 | NA | 0 | 0 | 0
f2 | Ranking | 6 | 5 | 4 | 6 | 1 | 3 | 1
f3 | Mean | 1.50E-08 | 9.97E+01 | 8.92E-48 | 9.51E+01 | 2.55E-214 | 1.46E-203 | 0
f3 | Std | 8.56E-10 | 1.00E-01 | 6.99E-48 | 1.97E-01 | 0 | 0 | 0
f3 | Ranking | 5 | 7 | 4 | 6 | 2 | 3 | 1
f4 | Mean | 9.99E+02 | 1.21E+04 | 8.34E+00 | 8.03E+08 | 9.99E+02 | 9.99E+02 | 9.39E-01
f4 | Std | 2.87E-02 | 1.04E+04 | 7.02E+00 | 3.51E+07 | 4.99E-02 | 1.66E-02 | 6.05E-01
f4 | Ranking | 3 | 6 | 2 | 7 | 3 | 3 | 1
f5 | Mean | 2.33E-03 | 6.72E-01 | 1.23E-03 | 1.13E+04 | 9.13E-05 | 9.99E-05 | 4.80E-05
f5 | Std | 2.44E-04 | 2.45E-01 | 1.58E-03 | 2.39E+03 | 6.66E-05 | 1.91E-04 | 3.96E-05
f5 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f6 | Mean | 3.22E-11 | 5.23E+00 | 2.56E-93 | 2.42E+06 | 0 | 0 | 0
f6 | Std | 3.07E-12 | 1.29E+00 | 4.37E-93 | 2.30E+05 | 0 | 0 | 0
f6 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f7 | Mean | 7.50E-13 | 3.24E+00 | 4.85E-124 | 1.60E-06 | 0 | 0 | 0
f7 | Std | 6.08E-13 | 6.38E-01 | 5.33E-124 | 5.71E-07 | 0 | 0 | 0
f7 | Ranking | 5 | 7 | 4 | 6 | 1 | 1 | 1
f8 | Mean | 3.56E-11 | 4.27E+04 | 5.97E-90 | 7.75E+09 | 0 | 0 | 0
f8 | Std | 2.44E-12 | 4.30E+04 | 6.41E-90 | 7.38E+08 | 0 | 0 | 0
f8 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f9 | Mean | 1.89E+03 | 5.04E+01 | 0 | 1.15E+04 | 0 | 0 | 0
f9 | Std | 4.21E+03 | 7.96E+01 | 0 | 5.16E+02 | 0 | 0 | 0
f9 | Ranking | 6 | 5 | 1 | 7 | 1 | 1 | 1
f10 | Mean | 1.28E-08 | 2.00E+01 | 8.88E-16 | 1.72E+01 | 8.88E-16 | 8.88E-16 | 8.88E-16
f10 | Std | 7.25E-10 | 4.46E-05 | 0 | 2.81E-01 | 0 | 0 | 0
f10 | Ranking | 5 | 7 | 1 | 6 | 1 | 1 | 1
f11 | Mean | 3.12E-11 | 2.01E-01 | 0 | 4.22E+00 | 0 | 0 | 0
f11 | Std | 4.10E-12 | 1.63E-01 | 0 | 1.19E+00 | 0 | 0 | 0
f11 | Ranking | 5 | 6 | 1 | 7 | 1 | 1 | 1
f12 | Mean | 4.90E-09 | 1.27E-01 | 2.89E-48 | 1.34E+03 | 2.61E-218 | 8.06E-208 | 0
f12 | Std | 2.17E-09 | 2.19E-01 | 2.37E-48 | 4.61E+01 | 0 | 0 | 0
f12 | Ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1
f13 | Mean | 3.59E-11 | 9.96E+04 | 5.73E-94 | 4.57E+04 | 0 | 0 | 0
f13 | Std | 5.94E-12 | 6.95E+01 | 4.60E-94 | 7.07E+03 | 0 | 0 | 0
f13 | Ranking | 5 | 7 | 4 | 6 | 1 | 1 | 1
f14 | Mean | 3.60E-01 | 8.80E-01 | 3.53E-46 | 8.43E+01 | 3.15E-47 | 1.22E-58 | 0
f14 | Std | 4.30E-02 | 1.30E-01 | 3.27E-46 | 2.92E+00 | 6.52E-47 | 2.46E-58 | 0
f14 | Ranking | 5 | 6 | 4 | 7 | 3 | 2 | 1
f15 | Mean | 1.97E-11 | 2.18E-04 | 2.24E-97 | 2.24E+02 | 0 | 0 | 0
f15 | Std | 1.31E-11 | 2.49E-04 | 1.96E-97 | 2.48E+01 | 0 | 0 | 0
f15 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f16 | Mean | 2.88E-11 | 9.94E-03 | 7.85E-96 | 9.80E+02 | 0 | 0 | 0
f16 | Std | 2.25E-12 | 5.93E-03 | 6.53E-96 | 7.76E+01 | 0 | 0 | 0
f16 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
f17 | Mean | 5.28E-06 | 3.80E+03 | 3.72E-24 | 2.59E+03 | 0 | 0 | 0
f17 | Std | 6.03E-07 | 1.37E+00 | 4.10E-24 | 7.81E+01 | 0 | 0 | 0
f17 | Ranking | 5 | 7 | 4 | 6 | 1 | 1 | 1
f18 | Mean | 6.87E-14 | 4.80E-03 | 3.89E-291 | 5.49E+00 | 0 | 0 | 0
f18 | Std | 3.02E-14 | 5.59E-03 | 0 | 1.05E+00 | 0 | 0 | 0
f18 | Ranking | 5 | 6 | 4 | 7 | 1 | 1 | 1
Average ranking | 5.00 | 6.17 | 3.39 | 6.67 | 1.39 | 1.61 | 1.00
Total ranking | 5 | 6 | 4 | 7 | 2 | 3 | 1

NA represents no available solution

The best value of each function is highlighted in bold in the table

From Tables 3 and 4, VGHHO shows excellent scalability with respect to the search dimension on most problems in Table 1; in other words, its overall performance does not seriously deteriorate as the dimension grows. It should be emphasized that a function with 1000 dimensions is very challenging for HHO, because HHO does not employ search strategies specifically customized for high-dimensional optimization problems. Compared with BOA and SOA, VGHHO obtains better performance on all functions with 100 and 1000 dimensions. VGHHO provides better results than the conventional HHO algorithm on fourteen high-dimensional functions. On most high-dimensional functions, EEGWO, ISCA, and VGHHO obtain similar results. From the average Friedman ranking results in Tables 3 and 4, VGHHO ranks first, followed by EEGWO, ISCA, HHO, BOA, SOA, and AGDE. Furthermore, Figs. 6 and 7 plot the convergence curves of the seven algorithms on six representative 100D and 1000D problems.
Fig. 6

The iterative curves of seven approaches for six representative 100D functions

Fig. 7

The iterative curves of seven approaches for six representative 1000D functions

From Figs. 6 and 7, VGHHO converges much faster than the other techniques on the six representative high-dimensional problems. These comparison results show that VGHHO is a highly competitive meta-heuristic algorithm for solving high-dimensional optimization problems.

Statistical test analysis

In this section, the Wilcoxon rank sum test at a 5% significance level, based on the average values of 30 trials, is used to investigate the difference between VGHHO and BOA, SOA, HHO, AGDE, EEGWO, and ISCA on the 18 benchmark functions with 30, 100, and 1000 dimensions. Table 5 shows the Wilcoxon rank sum test results between VGHHO and the other six approaches. The p-value is the significance level that determines whether the null hypothesis should be rejected.
Table 5

Wilcoxon rank sum test results between VGHHO and other six algorithms

Dimension | Algorithm | Better | Equal | Worst | R+ | R− | p-value | α = 0.05
D = 30 | VGHHO versus BOA | 18 | 0 | 0 | 171 | 0 | 1.1614E-05 | Yes
D = 30 | VGHHO versus SOA | 18 | 0 | 0 | 171 | 0 | 6.2991E-06 | Yes
D = 30 | VGHHO versus HHO | 14 | 4 | 0 | 166 | 5 | 0.0022 | Yes
D = 30 | VGHHO versus AGDE | 18 | 0 | 0 | 171 | 0 | 5.4783E-07 | Yes
D = 30 | VGHHO versus EEGWO | 6 | 12 | 0 | 132 | 39 | 0.2216 | No
D = 30 | VGHHO versus ISCA | 5 | 13 | 0 | 125.5 | 45.5 | 0.3386 | No
D = 100 | VGHHO versus BOA | 18 | 0 | 0 | 171 | 0 | 9.9822E-06 | Yes
D = 100 | VGHHO versus SOA | 18 | 0 | 0 | 171 | 0 | 1.7630E-06 | Yes
D = 100 | VGHHO versus HHO | 14 | 4 | 0 | 166 | 5 | 0.0022 | Yes
D = 100 | VGHHO versus AGDE | 18 | 0 | 0 | 171 | 0 | 1.9314E-07 | Yes
D = 100 | VGHHO versus EEGWO | 6 | 12 | 0 | 132 | 39 | 0.2216 | No
D = 100 | VGHHO versus ISCA | 5 | 13 | 0 | 125.5 | 45.5 | 0.3386 | No
D = 1000 | VGHHO versus BOA | 18 | 0 | 0 | 171 | 0 | 1.1614E-05 | Yes
D = 1000 | VGHHO versus SOA | 18 | 0 | 0 | 171 | 0 | 5.4783E-07 | Yes
D = 1000 | VGHHO versus HHO | 15 | 3 | 0 | 168 | 3 | 0.0010 | Yes
D = 1000 | VGHHO versus AGDE | 18 | 0 | 0 | 171 | 0 | 1.6174E-07 | Yes
D = 1000 | VGHHO versus EEGWO | 5 | 13 | 0 | 125.5 | 45.5 | 0.3386 | No
D = 1000 | VGHHO versus ISCA | 6 | 12 | 0 | 132 | 39 | 0.2216 | No
The results in Table 5 indicate that VGHHO obtains larger "R+" than "R−" values in all cases. Furthermore, the p-values of VGHHO versus BOA, SOA, HHO, and AGDE on the classical benchmark problems with 30D, 100D, and 1000D are all less than 0.05; that is, the performance difference between VGHHO and these four algorithms is statistically significant.
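A test of this kind can be sketched with the standard library alone via the large-sample normal approximation of the rank-sum statistic (this is the textbook approximation, not necessarily the exact procedure or software the authors used; it applies average ranks to ties and omits the tie-variance correction):

```python
import math
from statistics import NormalDist

def _avg_ranks(values):
    # Ascending ranks, ties sharing the average rank
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j + 2) / 2
        i = j + 1
    return ranks

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (rank sum of x in the pooled sample, two-sided p-value)."""
    n1, n2 = len(x), len(y)
    ranks = _avg_ranks(list(x) + list(y))
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2            # null mean of the rank sum
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    return w, p
```

A p-value below 0.05 leads to rejecting the null hypothesis of equal performance, which is the "Yes" criterion in Table 5.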

Comparison of CPU runtime

In this subsection, the CPU runtimes of the basic HHO and the proposed VGHHO are compared to analyze the computational cost of the two algorithms. The 18 classical test functions in Table 1 are used, with their dimensions set to 30, 100, and 1000. Table 6 lists the average CPU runtime (in seconds) of the basic HHO and VGHHO.
Table 6

Comparisons of the average CPU runtime (in seconds) for two algorithms on 18 classical benchmark functions

Function | HHO (D = 30) | VGHHO (D = 30) | HHO (D = 100) | VGHHO (D = 100) | HHO (D = 1000) | VGHHO (D = 1000)
f1 | 1.0451 | 1.2615 | 1.1395 | 1.4812 | 1.7977 | 3.4857
f2 | 0.9950 | 1.2931 | 1.0226 | 1.4901 | 1.8860 | 3.6052
f3 | 1.0394 | 1.3604 | 1.1186 | 1.5773 | 2.2271 | 3.4668
f4 | 1.2772 | 1.2776 | 1.4510 | 1.4655 | 3.1194 | 2.8505
f5 | 1.0876 | 1.5038 | 1.5567 | 2.0870 | 7.6704 | 8.5438
f6 | 1.0112 | 1.3339 | 1.0953 | 1.5364 | 1.9348 | 3.7386
f7 | 1.1420 | 1.5690 | 1.4996 | 2.1386 | 5.7890 | 8.3051
f8 | 1.1014 | 1.5692 | 1.4959 | 2.1844 | 7.0387 | 10.265
f9 | 1.1616 | 1.4301 | 1.3188 | 1.6639 | 3.1826 | 4.2151
f10 | 1.2473 | 1.5421 | 1.3563 | 1.7880 | 3.2324 | 4.1397
f11 | 1.2148 | 1.4596 | 1.3398 | 1.7005 | 3.3207 | 4.2764
f12 | 1.0722 | 1.3912 | 1.1648 | 1.5633 | 2.0695 | 3.7206
f13 | 1.1760 | 1.5239 | 1.3851 | 1.8166 | 2.4075 | 4.8079
f14 | 1.1374 | 1.4903 | 1.2937 | 1.7207 | 2.0801 | 3.8852
f15 | 1.1023 | 1.4430 | 1.2356 | 1.6858 | 2.2886 | 4.8273
f16 | 1.1698 | 1.4916 | 1.2758 | 1.7256 | 2.2983 | 4.8760
f17 | 1.4360 | 1.9296 | 2.2013 | 2.7898 | 12.837 | 14.776
f18 | 1.3162 | 1.7422 | 2.1196 | 2.7616 | 12.777 | 16.458
As can be seen from Table 6, the CPU runtime of the basic HHO is less than that of VGHHO on all functions except f4 with D = 1000. This shows that introducing the three operators (i.e., the velocity-guided position search equation, the nonlinear escaping energy parameter, and the refraction-opposition-based learning strategy) into HHO increases the computational cost. However, this increase is acceptable, and the optimization performance of VGHHO is significantly better than that of the basic HHO.
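Average runtimes like those in Table 6 are typically collected by timing whole optimization runs; a minimal stdlib harness is sketched below (the `dummy_run` stand-in and trial count are illustrative, not the paper's setup):

```python
import time

def average_runtime(run, trials=5):
    """Average wall-clock time of run() over `trials` executions, in seconds."""
    total = 0.0
    for _ in range(trials):
        t0 = time.perf_counter()
        run()
        total += time.perf_counter() - t0
    return total / trials

# Example: time a cheap stand-in for one optimizer run on a 30-D sphere function.
def dummy_run():
    sum(x * x for x in range(30))

t = average_runtime(dummy_run, trials=3)
```

Absolute numbers depend on the machine and MATLAB/Python runtime, so only the HHO-to-VGHHO ratios in Table 6 are meaningful across rows.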

Comparison with existing studies

In this section, VGHHO is also compared with three HHO variants: Leader HHO (LHHO) (Naik et al. 2021), HHO with joint opposite selection (HHO-JOS) (Arini et al. 2022), and modified HHO (m-HHO) (Gupta et al. 2020b). The 18 benchmark test functions in Table 1 are used, with the dimensions of all functions set to 30. To ensure a fair comparison, the population size and the total number of iterations are fixed to 30 and 500, respectively, for VGHHO, LHHO, HHO-JOS, and m-HHO. Table 7 lists the "Mean" and "Std" results of the four HHO variants on the 18 classical benchmark functions with 30 dimensions, together with their Friedman ranking test results.
Table 7

Comparison results of four HHO variants on 18 classical benchmark functions with 30D

Function | LHHO | HHO-JOS | mHHO | VGHHO
(each algorithm: Mean | Std)
f1 | 3.23E-149 | 5.60E-149 | 2.64E-261 | 0 | 0 | 0 | 0 | 0
f2 | 1.37E-78 | 1.49E-78 | 7.26E-137 | 1.62E-136 | 0 | 0 | 0 | 0
f3 | 5.26E-73 | 9.11E-73 | 6.57E-122 | 1.46E-121 | 0 | 0 | 0 | 0
f4 | 7.23E-03 | 1.10E-02 | 4.82E-03 | 5.55E-03 | 7.68E-02 | 1.60E-01 | 4.40E-03 | 4.43E-03
f5 | 1.28E-04 | 1.74E-04 | 1.21E-04 | 1.27E-04 | 7.47E-05 | 7.17E-05 | 5.17E-06 | 1.15E-05
f6 | 3.64E-151 | 6.31E-151 | 6.58E-261 | 0 | 0 | 0 | 0 | 0
f7 | 1.01E-201 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f8 | 1.30E-140 | 2.25E-140 | 2.22E-247 | 0 | 0 | 0 | 0 | 0
f9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f10 | 8.88E-16 | 0 | 8.88E-16 | 0 | 8.88E-16 | 0 | 8.88E-16 | 0
f11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f12 | 3.24E-79 | 4.75E-79 | 8.99E-138 | 1.90E-137 | 0 | 0 | 0 | 0
f13 | 4.15E-156 | 5.12E-156 | 8.80E-244 | 0 | 0 | 0 | 0 | 0
f14 | 1.47E-72 | 2.53E-72 | 3.92E-130 | 8.76E-130 | 0 | 0 | 0 | 0
f15 | 1.06E-151 | 1.80E-151 | 7.36E-265 | 0 | 0 | 0 | 0 | 0
f16 | 1.26E-154 | 2.18E-154 | 6.66E-237 | 0 | 0 | 0 | 0 | 0
f17 | 1.27E-39 | 1.15E-39 | 8.88E-74 | 1.97E-73 | 0 | 0 | 0 | 0
f18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Average ranking | 3.28 | 2.39 | 1.22 | 1.00
Total ranking | 4 | 3 | 2 | 1

The best value of each function is highlighted in bold in the table

From Table 7, compared with the LHHO algorithm, VGHHO obtains better results on fourteen functions and the same results on four (i.e., f9–f11 and f18). VGHHO is superior to HHO-JOS on thirteen benchmark functions, and the two algorithms obtain the same results on five functions (i.e., f7 and f9–f11, f18). VGHHO and m-HHO achieve the same optimization performance on all functions except f4 and f5, on which VGHHO obtains the better values.

Parameters sensitivity analysis

The proposed VGHHO algorithm has eight parameters: c3, c4, winitial, wend, Emax, Emin, k, and n. Similar to PSO, the values of c3, c4, winitial, and wend are set to 2, 2, 1, and 0, respectively; extensive trials showed that the performance of VGHHO is not sensitive to these four parameters. The values Emax = 2 and Emin = 0 match the basic HHO for a fair comparison. In VGHHO, k and n are the two critical parameters that help the population escape from local optima. Therefore, a series of experiments is conducted in this subsection to investigate the sensitivity of k and n: their values are varied while the other parameters are kept fixed. Table 8 lists the results for different k and n values on the 18 classical benchmark functions with D = 30, including the setting k = 2.0 and n = 2.0.
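The nonlinear escaping energy parameter can be illustrated by one plausible cosine-shaped schedule that decays from Emax to Emin over the iteration budget; the exact expression is an assumption for illustration, not the paper's formula:

```python
import math

E_MAX, E_MIN = 2.0, 0.0  # the values used in the experiments above

def escaping_energy(t, t_max, e_max=E_MAX, e_min=E_MIN):
    """Cosine-shaped, nonlinear decay of the escaping-energy envelope from
    e_max at t = 0 to e_min at t = t_max (illustrative form only)."""
    return e_min + (e_max - e_min) * math.cos(math.pi * t / (2 * t_max))
```

In HHO, |E| > 1 triggers exploration and |E| <= 1 exploitation, so a schedule whose early decay is slower than the basic linear one keeps exploration active longer before handing over to exploitation.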
Table 8

Comparison results of VGHHO using different k and n values on 18 functions with 30D

Function | k = 0.1, n = 0.1 | k = 0.5, n = 0.5 | k = 1.0, n = 1.0 | k = 1.5, n = 1.5 | k = 2.0, n = 2.0
(each setting: Mean | Std)
f1 | 1.07E-92 | 2.40E-92 | 4.90E-185 | 0 | 1.91E-298 | 0 | 0 | 0 | 0 | 0
f2 | 3.89E-55 | 8.28E-55 | 2.32E-98 | 2.14E-98 | 2.19E-142 | 3.21E-142 | 2.79E-235 | 0 | 0 | 0
f3 | 1.43E-51 | 2.79E-51 | 8.55E-99 | 1.31E-98 | 6.82E-138 | 7.25E-138 | 5.88E-230 | 0 | 0 | 0
f4 | 6.62E-01 | 3.97E-01 | 7.32E-02 | 1.12E-01 | 2.30E-02 | 4.28E-02 | 1.71E-02 | 2.17E-02 | 4.40E-03 | 4.43E-03
f5 | 2.35E-03 | 1.83E-03 | 3.73E-04 | 7.04E-04 | 3.51E-04 | 2.53E-04 | 5.00E-05 | 4.61E-05 | 5.17E-06 | 1.15E-05
f6 | 1.17E-98 | 2.61E-98 | 2.96E-196 | 0 | 3.30E-293 | 0 | 0 | 0 | 0 | 0
f7 | 9.85E-135 | 1.78E-134 | 3.71E-262 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f8 | 7.44E-97 | 1.66E-96 | 3.94E-185 | 0 | 2.59E-288 | 0 | 0 | 0 | 0 | 0
f9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f10 | 8.88E-16 | 0 | 8.88E-16 | 0 | 8.88E-16 | 0 | 8.88E-16 | 0 | 8.88E-16 | 0
f11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
f12 | 8.98E-58 | 2.01E-57 | 1.08E-101 | 9.61E-100 | 2.20E-149 | 7.15E-150 | 2.66E-237 | 0 | 0 | 0
f13 | 1.10E-95 | 2.47E-95 | 4.00E-180 | 0 | 6.52E-288 | 0 | 0 | 0 | 0 | 0
f14 | 3.64E-36 | 8.14E-36 | 2.37E-93 | 3.75E-93 | 3.72E-144 | 3.89E-144 | 0 | 0 | 0 | 0
f15 | 1.12E-112 | 2.20E-112 | 2.48E-197 | 0 | 3.21E-292 | 0 | 0 | 0 | 0 | 0
f16 | 5.10E-106 | 1.14E-105 | 6.92E-194 | 0 | 8.59E-285 | 0 | 0 | 0 | 0 | 0
f17 | 2.06E-29 | 3.16E-29 | 2.79E-50 | 5.44E-50 | 3.36E-72 | 4.94E-72 | 0 | 0 | 0 | 0
f18 | 9.13E-238 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

The best value of each function is highlighted in bold in the table

From Table 8, the overall convergence accuracy of VGHHO with k = 2.0 and n = 2.0 is better than that obtained with the other values. We also ran VGHHO with larger values (k > 2.0, n > 2.0) and compared the results with the k = 2.0, n = 2.0 setting; the two exhibited similar performance on average. Therefore, considering all of the k and n values analyzed, the setting k = 2.0 and n = 2.0 is an appropriate choice for VGHHO.

Experiments on latest benchmark problems for CEC 2017

The overall performance of VGHHO is further evaluated on the 30 latest CEC 2017 benchmark problems, which are more complicated than the eighteen classical functions in Table 1. These problems fall into four types, namely unimodal (F01–F03), multimodal (F04–F10), hybrid (F11–F20), and composite (F21–F30) cases, so as to probe different search abilities of an algorithm (Awad et al. 2016). In this experiment, the dimension of each problem is set to 30. VGHHO is compared with BOA, SOA, HHO, HHO-JOS, EEGWO, AGDE, and ISCA. To maintain fairness, all algorithms use the same termination condition: a maximum number of fitness evaluations of 10^4 × D, where D denotes the problem dimension. The error value f(x) − f(x0) of each algorithm is computed over 30 independent trials on each problem, where f(x) is the best value obtained by the method and f(x0) is the theoretical best value of the problem. The comparison results are listed in Table 9. Owing to its unstable behavior, F02 has been removed from the test set and its results are not reported.
Table 9

Results of VGHHO and other seven optimization techniques for CEC 2017 problems

Problem | Index | BOA | SOA | HHO | HHO-JOS | EEGWO | AGDE | ISCA | VGHHO
F01 | Mean | 6.64E+10 | 8.34E+09 | 4.16E+07 | 1.38E+07 | 5.03E+09 | 1.42E-14 | 5.55E+09 | 5.25E+06
F01 | St.dev | 9.56E+09 | 1.72E+09 | 5.71E+06 | 9.46E+05 | 2.10E+08 | 1.00E-14 | 2.93E+08 | 1.86E+06
F02 | Mean | – | – | – | – | – | – | – | –
F02 | St.dev | – | – | – | – | – | – | – | –
F03 | Mean | 7.78E+04 | 3.47E+04 | 5.01E+04 | 5.46E+03 | 6.28E+04 | 5.68E-14 | 6.05E+04 | 5.76E+02
F03 | St.dev | 6.05E+03 | 9.16E+03 | 1.34E+04 | 2.66E+03 | 5.66E+03 | 0 | 7.37E+03 | 2.73E+01
F04 | Mean | 2.38E+04 | 3.17E+02 | 1.85E+02 | 1.15E+02 | 1.43E+04 | 1.17E+01 | 1.86E+04 | 1.04E+02
F04 | St.dev | 8.02E+03 | 1.09E+02 | 6.59E+01 | 1.38E+01 | 8.90E+01 | 2.62E+01 | 4.59E+03 | 1.95E+01
F05 | Mean | 4.23E+02 | 1.44E+02 | 2.60E+02 | 2.15E+02 | 3.96E+02 | 5.78E+01 | 4.15E+02 | 1.78E+02
F05 | St.dev | 2.11E+01 | 2.06E+01 | 3.83E+01 | 1.42E+01 | 3.66E+01 | 1.23E+01 | 1.86E+01 | 4.83E+01
F06 | Mean | 9.15E+01 | 3.34E+01 | 6.41E+01 | 4.11E+01 | 8.19E+01 | 1.14E-13 | 8.38E+01 | 4.92E+01
F06 | St.dev | 4.78E+00 | 9.36E+00 | 5.88E+00 | 2.22E+00 | 7.05E+00 | 0 | 3.31E+00 | 3.53E+00
F07 | Mean | 7.50E+02 | 3.71E+02 | 6.15E+02 | 5.49E+02 | 6.11E+02 | 1.03E+02 | 5.92E+02 | 3.94E+02
F07 | St.dev | 5.03E+01 | 3.36E+01 | 6.02E+01 | 1.01E+02 | 3.82E+01 | 5.19E+00 | 6.72E+01 | 3.46E+01
F08 | Mean | 3.41E+02 | 1.40E+02 | 1.80E+02 | 1.49E+02 | 3.24E+02 | 7.04E+01 | 3.39E+02 | 1.33E+02
F08 | St.dev | 2.38E+01 | 2.82E+01 | 2.38E+01 | 9.25E+00 | 1.53E+01 | 7.36E+00 | 1.08E+01 | 2.59E+01
F09 | Mean | 1.02E+04 | 3.54E+03 | 6.98E+03 | 5.36E+03 | 9.94E+03 | 0 | 8.46E+03 | 4.53E+03
F09 | St.dev | 1.28E+03 | 1.19E+03 | 1.26E+03 | 4.60E+02 | 7.95E+02 | 0 | 8.34E+02 | 1.31E+03
F10 | Mean | 7.78E+03 | 4.39E+03 | 5.70E+03 | 3.28E+03 | 6.58E+03 | 3.55E+03 | 7.25E+03 | 3.34E+03
F10 | St.dev | 4.41E+02 | 8.02E+02 | 9.21E+02 | 2.18E+02 | 8.82E+02 | 3.56E+02 | 3.62E+02 | 9.67E+02
F11 | Mean | 9.81E+03 | 3.82E+02 | 2.34E+02 | 1.50E+02 | 7.69E+03 | 2.75E+01 | 5.93E+03 | 9.48E+01
F11 | St.dev | 2.88E+03 | 1.21E+02 | 3.83E+01 | 4.79E+01 | 2.73E+03 | 2.61E+01 | 1.35E+03 | 7.65E+01
F12 | Mean | 1.60E+10 | 3.51E+08 | 5.11E+07 | 9.31E+06 | 1.55E+09 | 6.57E+03 | 1.37E+09 | 3.38E+06
F12 | St.dev | 4.38E+09 | 1.68E+08 | 5.67E+07 | 2.39E+06 | 3.18E+08 | 5.79E+03 | 3.06E+08 | 2.32E+06
F13 | Mean | 1.66E+10 | 1.11E+08 | 9.29E+05 | 3.34E+05 | 3.59E+09 | 3.74E+01 | 5.96E+09 | 1.79E+05
F13 | St.dev | 4.85E+09 | 3.30E+07 | 2.20E+05 | 7.83E+04 | 5.46E+08 | 1.69E+01 | 5.42E+09 | 1.96E+05
F14 | Mean | 2.62E+07 | 6.91E+04 | 1.67E+06 | 5.66E+04 | 3.29E+05 | 2.74E+01 | 3.35E+05 | 3.38E+04
F14 | St.dev | 2.81E+07 | 2.97E+04 | 1.37E+06 | 6.01E+04 | 2.36E+05 | 1.61E+01 | 2.53E+05 | 1.68E+03
F15 | Mean | 8.93E+08 | 2.41E+05 | 1.45E+05 | 2.09E+04 | 7.07E+07 | 2.85E+01 | 7.63E+07 | 1.67E+04
F15 | St.dev | 4.45E+08 | 2.66E+05 | 1.42E+05 | 9.56E+03 | 3.99E+07 | 3.06E+01 | 3.49E+07 | 3.82E+04
F16 | Mean | 5.08E+03 | 9.14E+02 | 2.12E+03 | 1.77E+03 | 4.88E+03 | 4.67E+02 | 4.58E+03 | 1.15E+03
F16 | St.dev | 1.37E+03 | 2.44E+02 | 3.96E+02 | 2.48E+02 | 7.79E+02 | 1.16E+02 | 8.42E+02 | 2.96E+02
F17 | Mean | 7.39E+03 | 3.83E+02 | 1.13E+03 | 9.08E+02 | 3.56E+03 | 6.49E+01 | 2.14E+03 | 5.11E+02
F17 | St.dev | 4.38E+03 | 2.20E+02 | 2.33E+02 | 6.88E+01 | 4.95E+02 | 1.04E+01 | 3.32E+02 | 2.27E+02
F18 | Mean | 2.51E+08 | 6.63E+05 | 4.04E+06 | 7.63E+05 | 1.16E+07 | 2.37E+03 | 3.53E+07 | 1.59E+05
F18 | St.dev | 1.56E+08 | 1.96E+06 | 3.51E+06 | 4.12E+05 | 1.81E+04 | 2.16E+03 | 2.10E+07 | 2.38E+05
F19 | Mean | 1.96E+09 | 5.23E+06 | 1.52E+06 | 3.42E+05 | 3.08E+08 | 9.78E+00 | 5.63E+08 | 2.17E+05
F19 | St.dev | 8.40E+08 | 6.72E+06 | 8.67E+05 | 1.41E+05 | 1.64E+08 | 5.11E+00 | 2.50E+08 | 1.04E+05
F20 | Mean | 1.10E+03 | 4.87E+02 | 5.95E+02 | 8.60E+02 | 1.04E+03 | 1.15E+02 | 7.98E+02 | 5.53E+02
F20 | St.dev | 1.20E+02 | 1.90E+02 | 1.59E+02 | 2.20E+02 | 1.62E+02 | 6.16E+01 | 1.30E+02 | 1.92E+02
F21 | Mean | 6.52E+02 | 3.66E+02 | 4.65E+02 | 4.06E+02 | 5.41E+02 | 2.68E+02 | 6.21E+02 | 2.93E+02
F21 | St.dev | 2.21E+01 | 3.36E+01 | 2.19E+01 | 2.32E+01 | 2.98E+01 | 9.09E+00 | 4.73E+01 | 7.62E+01
F22 | Mean | 6.59E+03 | 4.51E+03 | 5.52E+03 | 1.20E+02 | 5.24E+03 | 1.00E+02 | 7.09E+03 | 2.03E+03
F22 | St.dev | 1.12E+03 | 7.01E+02 | 7.73E+02 | 6.13E-01 | 1.16E+03 | 0.00E+00 | 2.62E+02 | 5.08E+02
F23 | Mean | 1.33E+03 | 4.95E+02 | 9.83E+02 | 6.47E+02 | 1.12E+03 | 4.10E+02 | 1.40E+03 | 6.12E+02
F23 | St.dev | 1.69E+02 | 2.30E+01 | 2.22E+02 | 7.46E+01 | 2.53E+02 | 1.04E+01 | 9.97E+01 | 7.96E+01
F24 | Mean | 1.50E+03 | 5.81E+02 | 1.12E+03 | 9.68E+02 | 1.46E+03 | 4.79E+02 | 1.38E+03 | 5.38E+02
F24 | St.dev | 9.01E+01 | 2.89E+01 | 1.69E+02 | 2.68E+01 | 1.90E+02 | 1.42E+01 | 2.04E+02 | 9.58E+01
F25 | Mean | 4.62E+03 | 6.11E+02 | 4.50E+02 | 4.16E+02 | 2.96E+03 | 3.87E+02 | 3.20E+03 | 3.85E+02
F25 | St.dev | 6.79E+02 | 8.78E+01 | 2.02E+01 | 2.31E+01 | 4.21E+02 | 3.01E+01 | 7.26E+02 | 2.01E+01
F26 | Mean | 1.01E+04 | 2.57E+03 | 5.29E+03 | 3.07E+02 | 9.38E+03 | 1.58E+03 | 9.46E+03 | 1.36E+03
F26 | St.dev | 1.05E+03 | 2.39E+02 | 6.36E+02 | 2.16E+01 | 7.07E+02 | 8.94E+01 | 5.29E+02 | 5.42E+02
F27 | Mean | 1.73E+03 | 5.63E+02 | 8.43E+02 | 5.79E+02 | 2.68E+03 | 4.97E+02 | 2.12E+03 | 5.23E+02
F27 | St.dev | 3.77E+02 | 3.05E+01 | 1.35E+02 | 3.20E+01 | 2.99E+02 | 1.16E+01 | 2.24E+02 | 3.27E+01
F28 | Mean | 5.92E+03 | 2.35E+03 | 5.15E+02 | 4.55E+02 | 3.24E+03 | 3.41E+02 | 3.65E+03 | 4.46E+02
F28 | St.dev | 3.83E+02 | 1.51E+03 | 3.82E+01 | 2.58E+01 | 6.71E+02 | 5.66E+01 | 8.85E+02 | 3.81E+01
F29 | Mean | 1.18E+04 | 1.34E+03 | 1.99E+03 | 1.33E+03 | 4.42E+03 | 5.10E+02 | 4.56E+03 | 1.11E+03
F29 | St.dev | 1.29E+04 | 2.28E+02 | 6.22E+02 | 5.34E+02 | 4.93E+02 | 2.34E+01 | 5.40E+02 | 2.98E+02
F30 | Mean | 2.32E+09 | 1.21E+07 | 1.24E+07 | 7.67E+05 | 1.98E+09 | 5.11E+03 | 1.44E+09 | 1.04E+06
F30 | St.dev | 1.27E+09 | 8.65E+06 | 9.62E+06 | 3.03E+05 | 8.34E+08 | 2.97E+03 | 7.58E+08 | 7.29E+05

The best value of each function is highlighted in bold in the table

The results in Table 9 show that VGHHO is much better than BOA, HHO, EEGWO, and ISCA on all problems. Compared with SOA, VGHHO finds better error values on 21 benchmark problems, while SOA obtains better results on the other eight (namely, F05–F07, F09, F16, F17, F20, and F23). With respect to HHO-JOS, VGHHO obtains better error values on 24 problems and worse values on five (i.e., F06, F10, F22, F26, and F30). AGDE is a state-of-the-art meta-heuristic algorithm on the CEC 2017 benchmark test suite: it obtains better results than VGHHO on 26 problems, while VGHHO obtains the better values on three (i.e., F10, F25, and F26). According to the non-parametric Friedman ranking test, Fig. 8 plots a column chart of the average ranking of the eight algorithms on the CEC 2017 benchmark problems. From Fig. 8, AGDE ranks first, followed by VGHHO, HHO-JOS, SOA, HHO, EEGWO, ISCA, and BOA.
Fig. 8

Friedman test ranks of seven approaches for CEC 2017 problems

Additionally, the Wilcoxon rank sum statistical test based on the "Mean" values in Table 9 is used to investigate the difference between VGHHO and the other seven optimization algorithms. Table 10 provides the statistical results. From Table 10, VGHHO obtains higher "R+" than "R−" values in all cases except VGHHO versus AGDE, and the p-values of VGHHO versus BOA, EEGWO, AGDE, and ISCA are less than 0.05.
Table 10

Wilcoxon’s rank sum test values are obtained by VGHHO versus other seven approaches

Algorithm | Better | Equal | Worst | R+ | R− | p-value | α = 0.05
VGHHO vs. BOA | 29 | 0 | 0 | 435 | 0 | 0.0054 | Yes
VGHHO vs. SOA | 21 | 0 | 8 | 373 | 62 | 0.4368 | No
VGHHO vs. HHO | 29 | 0 | 0 | 435 | 0 | 0.2078 | No
VGHHO vs. HHO-JOS | 24 | 0 | 5 | 359 | 76 | 0.7795 | No
VGHHO vs. EEGWO | 29 | 0 | 0 | 435 | 0 | 0.0103 | Yes
VGHHO vs. AGDE | 3 | 0 | 26 | 26 | 409 | 1.08E-04 | Yes
VGHHO vs. ISCA | 29 | 0 | 0 | 435 | 0 | 0.0075 | Yes
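The R+ and R− columns above are plain signed-rank sums, which can be recomputed directly. A minimal pure-Python sketch (the paired error values below are illustrative, not the paper's data):

```python
def signed_rank_sums(a, b):
    """Wilcoxon signed-rank sums R+ and R- for paired samples.

    R+ collects the ranks of pairs where a beats b (smaller error);
    zero differences are dropped and tied |differences| share their
    average rank, as in standard Wilcoxon practice."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):          # tied magnitudes share the mean rank
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d < 0)   # a smaller = a better
    r_minus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    return r_plus, r_minus

# Illustrative paired errors on five problems (not the values of Table 9)
print(signed_rank_sums([1.0, 2.0, 3.0, 4.0, 10.0],
                       [1.5, 1.0, 4.0, 6.0, 20.0]))  # prints: (12.5, 2.5)
```

For 29 problems with no zero differences, R+ + R− = 29 · 30 / 2 = 435, matching the row totals in Table 10.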

VGHHO for benchmark feature selection problems

In this section, the feasibility of VGHHO is further verified on feature selection (FS) problems. FS is a typical combinatorial optimization problem whose solution space is represented by binary values (Neggaz et al. 2020; Dhiman et al. 2021b; Hussain et al. 2021). However, VGHHO is a continuous optimization technique, so its search space must be transformed from continuous to binary when solving FS problems. One of the simplest ways to do so is to introduce a transfer function (Tubishat et al. 2020); its main advantage is that it leaves the framework of VGHHO unchanged. In this paper, an S-shaped transfer function with a constant parameter τ is used. In our experiments, twenty-one feature selection benchmark datasets from the UCI repository are used. These datasets have been widely used to verify the optimization performance of meta-heuristic algorithms, and their details are shown in Table 11.
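The transfer-function step can be sketched as follows. Since the equation itself is not reproduced in this excerpt, the common sigmoid form S(x) = 1/(1 + e^(−x/τ)) with τ = 1 is assumed here:

```python
import math
import random

def s_transfer(x, tau=1.0):
    """S-shaped (sigmoid) transfer function. The exact expression used in
    the paper is not shown in this excerpt; this form and tau = 1.0 are
    assumptions for illustration."""
    return 1.0 / (1.0 + math.exp(-x / tau))

def binarize(position, tau=1.0, rng=random):
    """Map a continuous hawk position to a binary feature mask:
    dimension d is selected (bit 1) with probability S(x_d)."""
    return [1 if rng.random() < s_transfer(x, tau) else 0 for x in position]

# A strongly positive coordinate is selected with high probability,
# a strongly negative one with low probability.
print(binarize([6.0, -6.0, 0.0]))
```

This keeps VGHHO's continuous update equations intact; only the evaluation step sees a binary mask.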
Table 11

The detailed information of 21 UCI datasets

Number | Dataset | Number of features | Number of samples
1 | Breastcancer | 9 | 699
2 | BreastEW | 30 | 569
3 | Clean1 | 166 | 476
4 | Clean2 | 166 | 6598
5 | CongressEW | 16 | 435
6 | Exactly | 13 | 1000
7 | Exactly2 | 13 | 1000
8 | HeartEW | 13 | 270
9 | IonosphereEW | 34 | 351
10 | KrvskpEW | 36 | 3196
11 | Lymphography | 18 | 148
12 | M-of-n | 13 | 1000
13 | PenglungEW | 325 | 73
14 | Semeion | 265 | 1593
15 | SonarEW | 60 | 208
16 | SpectEW | 22 | 267
17 | Tic-tac-toe | 9 | 958
18 | Vote | 16 | 300
19 | WaveformEW | 40 | 5000
20 | WineEW | 13 | 178
21 | Zoo | 16 | 101
For the twenty-one datasets, a wrapper technique with a k-Nearest Neighbors (KNN) classifier (k = 3) is used. The samples of each dataset are randomly split into two groups: 80% are used for training and 20% for testing. In this experiment, the same population size (N = 10) and maximum number of iterations (tmax = 100) are used for all approaches to obtain a fair comparison. Each algorithm is run for 30 independent trials on each dataset. Table 12 provides the average classification accuracy of the seven algorithms on the twenty-one datasets, and Table 13 lists the average numbers of selected features. Furthermore, the non-parametric Friedman test results are also provided in Tables 12 and 13.
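The wrapper evaluation described above can be sketched as follows. The paper's exact fitness function is not shown in this excerpt, so the common weighted sum of KNN classification error and selected-feature ratio (with an assumed weight alpha = 0.99) is used here, together with a plain 3-NN classifier:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-Nearest-Neighbors majority vote with Euclidean distance
    (k = 3, as in the experiments above)."""
    nearest = sorted(range(len(train_X)), key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

def mask_features(X, mask):
    """Keep only the columns whose mask bit is 1."""
    return [[v for v, m in zip(row, mask) if m] for row in X]

def fitness(mask, train_X, train_y, test_X, test_y, alpha=0.99):
    """Wrapper fitness: alpha * error rate + (1 - alpha) * feature ratio.
    This weighted form and alpha = 0.99 are common in the FS literature,
    not values stated in this excerpt. Lower is better."""
    if not any(mask):
        return 1.0                      # an empty subset classifies nothing
    tr, te = mask_features(train_X, mask), mask_features(test_X, mask)
    errors = sum(knn_predict(tr, train_y, x) != y for x, y in zip(te, test_y))
    return alpha * errors / len(test_y) + (1 - alpha) * sum(mask) / len(mask)
```

VGHHO would then minimize this fitness over binary masks produced by the transfer function, trading classification accuracy against subset size.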
Table 12

The average classification rates are obtained by seven algorithms on twenty-one selected datasets

Dataset | BOA | SOA | HHO | m-HHO | EEGWO | ISCA | VGHHO
Breastcancer | 0.9696 | 0.9710 | 0.9736 | 0.9760 | 0.9668 | 0.9768 | 0.9822
BreastEW | 0.9572 | 0.9750 | 0.9587 | 0.9617 | 0.9483 | 0.9519 | 0.9736
Clean1 | 0.8938 | 0.9363 | 0.8421 | 0.9333 | 0.8683 | 0.8875 | 0.9366
Clean2 | 0.9685 | 0.9783 | 0.9722 | 0.9740 | 0.9669 | 0.9626 | 0.9759
CongressEW | 0.9720 | 0.9720 | 0.9693 | 0.9770 | 0.9627 | 0.9604 | 0.9846
Exactly | 0.7818 | 0.9061 | 0.7167 | 0.7117 | 0.7596 | 0.6909 | 0.9178
Exactly2 | 0.7646 | 0.7606 | 0.7433 | 0.7617 | 0.7606 | 0.7576 | 0.7697
HeartEW | 0.8165 | 0.8277 | 0.8184 | 0.8272 | 0.8090 | 0.8352 | 0.8642
IonosphereEW | 0.8986 | 0.9478 | 0.9190 | 0.9190 | 0.8898 | 0.9275 | 0.9333
KrvskpEW | 0.9149 | 0.9813 | 0.9515 | 0.9588 | 0.8789 | 0.9408 | 0.9760
Lymphography | 0.8611 | 0.9097 | 0.7701 | 0.9080 | 0.8681 | 0.8750 | 0.9136
M-of-n | 0.8242 | 0.9586 | 0.8550 | 0.9067 | 0.8020 | 0.8858 | 0.9200
PenglungEW | 0.9028 | 0.9583 | 0.9286 | 0.9286 | 0.8750 | 0.9167 | 0.9583
Semeion | 0.9708 | 0.9853 | 0.9811 | 0.9895 | 0.9695 | 0.9803 | 0.9860
SonarEW | 0.9069 | 0.9461 | 0.9187 | 0.9593 | 0.8922 | 0.8529 | 0.9293
SpectEW | 0.8446 | 0.8598 | 0.8930 | 0.8868 | 0.8485 | 0.8788 | 0.8977
Tic-tac-toe | 0.7384 | 0.7700 | 0.7801 | 0.7853 | 0.7584 | 0.7690 | 0.8083
Vote | 0.9562 | 0.9596 | 1.0000 | 1.0000 | 0.9360 | 0.9528 | 1.0000
WaveformEW | 0.7400 | 0.7998 | 0.7440 | 0.7703 | 0.7384 | 0.7331 | 0.7739
WineEW | 0.9712 | 0.9827 | 0.9333 | 1.0000 | 0.9712 | 0.9712 | 1.0000
Zoo | 0.9596 | 0.9596 | 0.9500 | 0.9833 | 0.9596 | 0.8990 | 0.9866
Average ranking | 5.17 | 2.60 | 4.57 | 2.83 | 6.07 | 5.24 | 1.52
Total ranking | 5 | 2 | 4 | 3 | 7 | 6 | 1

The best value for each dataset is highlighted in bold in the table

Table 13

The average feature numbers are obtained by seven algorithms on twenty-one selected datasets

Dataset | BOA | SOA | HHO | m-HHO | EEGWO | ISCA | VGHHO
Breastcancer | 5.0000 | 3.3333 | 5.6667 | 4.0000 | 5.3333 | 5.3333 | 3.6667
BreastEW | 14.667 | 9.6667 | 16.000 | 12.000 | 16.000 | 16.000 | 12.000
Clean1 | 73.667 | 36.000 | 76.667 | 69.000 | 79.667 | 81.667 | 63.000
Clean2 | 68.667 | 44.333 | 89.667 | 80.667 | 88.667 | 85.333 | 39.333
CongressEW | 4.6667 | 2.0000 | 8.3333 | 3.3333 | 7.3333 | 2.0000 | 3.0000
Exactly | 5.0000 | 5.6667 | 9.3333 | 4.3333 | 3.3333 | 1.0000 | 1.0000
Exactly2 | 1.0000 | 2.3333 | 5.3333 | 3.0000 | 1.0000 | 1.0000 | 1.0000
HeartEW | 7.3333 | 4.3333 | 8.6667 | 6.0000 | 8.3333 | 3.6667 | 4.3333
IonosphereEW | 11.333 | 4.6667 | 17.333 | 4.3333 | 16.000 | 3.6667 | 3.3333
KrvskpEW | 19.333 | 14.333 | 30.333 | 29.333 | 20.667 | 15.667 | 14.667
Lymphography | 7.6667 | 4.3333 | 11.333 | 6.0000 | 10.333 | 5.6667 | 4.0000
M-of-n | 8.0000 | 6.0000 | 13.000 | 8.3333 | 7.3333 | 5.6667 | 7.6667
PenglungEW | 55.667 | 14.333 | 157.00 | 37.000 | 158.00 | 141.33 | 22.000
Semeion | 112.67 | 49.333 | 132.67 | 111.33 | 123.00 | 133.33 | 102.67
SonarEW | 25.667 | 13.667 | 29.000 | 23.333 | 28.000 | 27.333 | 24.667
SpectEW | 10.667 | 4.3333 | 14.000 | 8.3333 | 9.6667 | 7.3333 | 7.0000
Tic-tac-toe | 6.0000 | 4.6667 | 6.6667 | 6.6667 | 6.3333 | 4.3333 | 4.3333
Vote | 6.3333 | 3.3333 | 7.3333 | 1.3333 | 7.0000 | 2.3333 | 2.0000
WaveformEW | 23.333 | 18.667 | 34.667 | 21.000 | 20.000 | 8.0000 | 18.333
WineEW | 6.3333 | 5.0000 | 4.3333 | 5.6667 | 8.0000 | 4.0000 | 4.0000
Zoo | 7.0000 | 4.6667 | 7.0000 | 7.6667 | 8.0000 | 4.3333 | 4.0000
Average ranking | 4.38 | 2.38 | 6.43 | 4.05 | 5.38 | 3.24 | 2.00
Total ranking | 5 | 2 | 7 | 4 | 6 | 3 | 1

The best value for each dataset is highlighted in bold in the table

From Table 12, VGHHO obtains better classification accuracy than BOA, EEGWO, and ISCA on all the datasets. Compared with SOA, VGHHO obtains better accuracy on thirteen datasets and the same accuracy on one (PenglungEW), while SOA obtains better results on the BreastEW, Clean2, IonosphereEW, KrvskpEW, M-of-n, SonarEW, and WaveformEW datasets. With respect to the basic HHO algorithm, VGHHO achieves better performance on all the datasets except Vote, where the two algorithms reach the same accuracy. Compared with m-HHO, VGHHO provides better classification rates on seventeen datasets and the same rates on two (Vote and WineEW), while m-HHO obtains better results on the Semeion and SonarEW datasets. In addition, according to the Friedman test results in Table 12, the ranking order is VGHHO, SOA, m-HHO, HHO, BOA, ISCA, and EEGWO. From Table 13, VGHHO selects fewer features than BOA on all the datasets except Exactly2. Compared with SOA, VGHHO selects fewer features on eleven datasets and more on nine; for the M-of-n dataset, the two algorithms select equal numbers of features. HHO selects more features than VGHHO on all the datasets. VGHHO selects fewer features than m-HHO on all the datasets except SonarEW and Vote. With respect to EEGWO, VGHHO selects fewer features on nineteen datasets. VGHHO selects fewer and more features than ISCA on fifteen and four datasets, respectively. Additionally, VGHHO obtains the first rank based on the Friedman test.
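The Friedman-style average rankings reported in Tables 12 and 13 can be reproduced by ranking the algorithms on each dataset and averaging, as in this sketch:

```python
def average_ranks(scores, higher_is_better=True):
    """Friedman-style average ranks: rank algorithms on each row
    (rank 1 = best), give tied scores their mean rank, then average
    over rows. `scores[d][a]` is the score of algorithm a on dataset d."""
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a],
                       reverse=higher_is_better)
        row_rank, i = [0.0] * n_algs, 0
        while i < n_algs:
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            for k in range(i, j + 1):      # tied scores share the mean rank
                row_rank[order[k]] = (i + j) / 2 + 1
            i = j + 1
        for a in range(n_algs):
            totals[a] += row_rank[a]
    return [t / len(scores) for t in totals]
```

For accuracy tables (Table 12) higher is better; for feature-count tables (Table 13) pass higher_is_better=False. The algorithm with the lowest average rank receives total rank 1.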

VGHHO for fault diagnosis of wind turbine

Although VGHHO has shown excellent performance on benchmark problems, it is necessary to investigate its effectiveness on real-world problems. Therefore, in this section, a practical fault diagnosis problem of wind turbines is used to verify the effectiveness of VGHHO. Wind power is a kind of clean energy, and wind turbines have been widely deployed. Figure 9 plots the framework diagram of the connections between the components of a wind turbine.
Fig. 9

The components connection framework of a typical wind turbine

The pitch control system is an important part of a wind turbine, and its internal structure is complex. Operating in extremely harsh environments makes it prone to failure (Tang et al. 2020a). A fault in the variable pitch system directly affects the power generation efficiency of the wind turbine (Tang et al. 2020b). Therefore, research on fault diagnosis of the variable pitch system plays an important role in reducing the operating cost of wind turbines and improving power generation (Cho et al. 2018). One year of supervisory control and data acquisition (SCADA) records from a wind farm in East China provides the experimental fault data of the variable pitch system. Table 14 lists the fault features and sample numbers of the two datasets.
Table 14

The feature and sample numbers of two fault datasets

Dataset | Fault type | Number of features | Number of samples
Dataset-1 | Variable pitch system super capacitor voltage low fault | 106 | 2883
Dataset-2 | Variable pitch paddle 3 super capacitor voltage low fault | 110 | 3585
The wrapper method with the KNN classifier and VGHHO for feature selection is applied to the two fault datasets. Each dataset is randomly divided into two parts: 80% forms the training set and 20% the testing set. The performance of VGHHO is compared against BOA, SOA, HHO, OBL-HHO, EEGWO, and ISCA. For all algorithms, the swarm size is 10 and the maximum number of iterations is 10. Each algorithm is run for 30 independent trials on each dataset. Table 15 provides the average classification accuracy of the seven algorithms on the two datasets, and Table 16 lists the average numbers of selected features. In addition, the non-parametric Friedman test results for the seven algorithms are also provided in Tables 15 and 16.
Table 15

The average classification accuracy of seven algorithms on two fault datasets

Dataset | BOA | SOA | HHO | OBL-HHO | EEGWO | ISCA | VGHHO
Dataset-1 | 0.9968 | 0.9988 | 0.9993 | 1.0000 | 0.9979 | 1.0000 | 1.0000
Dataset-2 | 0.9944 | 0.9974 | 0.9970 | 0.9944 | 0.9972 | 0.9972 | 1.0000
Average ranking | 6.75 | 3.50 | 4.50 | 4.25 | 4.75 | 2.75 | 1.50
Total ranking | 7 | 3 | 5 | 4 | 6 | 2 | 1

The best value for each dataset is highlighted in bold in the table

Table 16

The average feature numbers are obtained by seven algorithms on two fault datasets

Dataset | BOA | SOA | HHO | OBL-HHO | EEGWO | ISCA | VGHHO
Dataset-1 | 53.333 | 46.000 | 12.667 | 8.0000 | 34.333 | 23.000 | 6.3333
Dataset-2 | 54.000 | 47.000 | 19.667 | 3.3333 | 31.667 | 19.000 | 2.6667
Average ranking | 7.00 | 6.00 | 3.50 | 2.50 | 5.00 | 3.50 | 1.00
Total ranking | 7 | 6 | 3 | 2 | 5 | 3 | 1

The best value for each dataset is highlighted in bold in the table

As seen in Table 15, the average classification accuracy of VGHHO is better than that of BOA, SOA, HHO, and EEGWO on Dataset-1, and equal to that of OBL-HHO and ISCA. For Dataset-2, the result of VGHHO is better than those of the other six algorithms. According to the statistical Friedman test results, VGHHO achieves the first rank. From Table 16, VGHHO selects fewer features than the other six algorithms on Dataset-1, and likewise achieves better results than BOA, SOA, HHO, OBL-HHO, EEGWO, and ISCA on Dataset-2. With respect to the non-parametric Friedman test results in Table 16, VGHHO obtains the first rank, followed by OBL-HHO, HHO, ISCA, EEGWO, SOA, and BOA.

VGHHO for parameter estimation of photovoltaic model

In recent years, low-carbon technology has developed rapidly around the world. Solar energy is considered one of the most promising renewable energy resources due to its abundance, cleanliness, and freedom from pollution. Photovoltaic (PV) power generation systems convert solar energy into electrical energy. Since PV cells are the main component of a PV power generation system, accurately estimating their parameters is of great significance for modeling PV systems. Parameters with low accuracy not only cause large errors but may even lead to failure of maximum power point tracking (MPPT). Therefore, establishing a reliable mathematical model from measured data that describes the nonlinear characteristics of solar cells, and accurately estimating its parameters, provides a foundation for solar cell fault diagnosis and MPPT control. In general, the single diode (SD) model is one of the most widely used models in solar PV power generation systems. The equivalent circuit structure of the SD model is shown in Fig. 10.
Fig. 10

The equivalent circuit structure of SD model

In Fig. 10, based on the Shockley equation, the output current I of the SD model is calculated as follows (Long et al. 2020a, 2021a):

I = I_ph − I_sd [exp(q(V + I·R_s)/(n·k·T)) − 1] − (V + I·R_s)/R_sh    (27)

where I_ph is the photo-generated current, I_sd represents the reverse saturation current, R_s denotes the series resistance, q is the electron charge (1.60217646 × 10^−19 C), k represents the Boltzmann constant (1.38 × 10^−23 J/K), V denotes the output voltage, R_sh is the shunt resistance, n represents the diode ideality factor, and T denotes the cell temperature in Kelvin. From Eq. (27), five parameters of the SD model (i.e., I_ph, I_sd, R_s, R_sh, and n) must be estimated from the measured I and V data. Over the past twenty years, many methods have been proposed to estimate the unknown parameters of the SD model; among them, meta-heuristic optimization algorithms are the most popular (Long et al. 2020a, 2021a). In this paper, VGHHO is used to estimate the unknown parameters of the SD model. The ranges of the five parameters are set as follows: 0 ≤ I_ph ≤ 1, 0 ≤ I_sd ≤ 1, 0 ≤ R_s ≤ 0.5, 0 ≤ R_sh ≤ 100, and 1 ≤ n ≤ 2. The measured current-voltage (I-V) data are taken from Easwarakhanthan et al. (1986). VGHHO is compared with the standard HHO and seven other algorithms, all with the same maximum number of fitness evaluations (50,000). The best estimated parameters and their corresponding RMSE values are listed in Table 17.
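A sketch of the resulting estimation objective, assuming RMSE over the measured I-V pairs with the measured current substituted on the right-hand side of Eq. (27) (a common simplification, since I appears on both sides); the cell temperature used below is an assumption, not a value from this excerpt:

```python
import math

Q = 1.60217646e-19   # electron charge (C)
K = 1.38e-23         # Boltzmann constant (J/K), as given in the text

def sd_residual(params, V, I_meas, T=306.15):
    """Residual of Eq. (27) with the measured current I_meas substituted
    on the right-hand side. T = 306.15 K (33 degC) is an assumed test
    temperature. Note I_sd must be in amperes here, although Table 17
    reports it in microamperes."""
    I_ph, I_sd, R_s, R_sh, n = params
    v_t = n * K * T / Q                              # n times thermal voltage
    diode = I_sd * math.expm1((V + I_meas * R_s) / v_t)
    return I_ph - diode - (V + I_meas * R_s) / R_sh - I_meas

def rmse(params, data, T=306.15):
    """The RMSE objective that an optimizer such as VGHHO minimizes
    over measured (V, I) pairs."""
    return math.sqrt(sum(sd_residual(params, V, I, T) ** 2
                         for V, I in data) / len(data))
```

The optimizer searches (I_ph, I_sd, R_s, R_sh, n) within the stated bounds to minimize this RMSE; the values in Table 17 are the best candidates each algorithm found.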
Table 17

The best estimated parameters and their corresponding RMSE values of various algorithms

Algorithm | Iph (A) | Isd (μA) | RS (Ω) | Rsh (Ω) | n | RMSE
CLPSO (Liang et al. 2006) | 0.7608 | 0.34302 | 0.0361 | 54.1965 | 1.4873 | 9.9633E-04
DE-BBO (Gong et al. 2010) | 0.7605 | 0.32477 | 0.0364 | 55.2627 | 1.4817 | 9.9922E-04
BSA (Civicioglu 2013) | 0.7609 | 0.37749 | 0.0358 | 56.5266 | 1.4970 | 1.0398E-03
GOTLBO (Chen et al. 2016) | 0.7608 | 0.32970 | 0.0363 | 53.3664 | 1.4833 | 9.8856E-04
IBSA (Nama et al. 2017) | 0.7607 | 0.35502 | 0.0361 | 58.2012 | 1.4907 | 1.0092E-03
GWOCS (Long et al. 2020a) | 0.760773 | 0.32192 | 0.03639 | 53.6320 | 1.4808 | 9.8607E-04
EABOA (Long et al. 2021a) | 0.760771077 | 0.322929 | 0.036379593 | 53.76600144 | 1.481153457 | 9.8602E-04
HHO (Heidari et al. 2019) | 0.7599465 | 0.358115 | 0.0373477 | 82.48671 | 1.4912567 | 2.4122E-03
VGHHO | 0.7607549 | 0.324388 | 0.0363521 | 53.94424 | 1.4816135 | 9.8628E-04
From Table 17, the best parameters and RMSE value of the SD model are obtained by EABOA. Compared with the CLPSO, DE-BBO, BSA, GOTLBO, IBSA, and HHO algorithms, VGHHO obtains a better RMSE value for the SD model; however, its RMSE value is worse than those of GWOCS and EABOA. Furthermore, based on the best estimated parameters, the fitting curves of the data calculated by VGHHO and the measured data are shown in Fig. 11. As can be seen from Fig. 11, the calculated data obtained by VGHHO are in very good agreement with the measured data for the SD model.
Fig. 11

The calculated values obtained by VGHHO and the measured values for SD model


Conclusions

The main purpose of this study was to develop an improved version of HHO (i.e., VGHHO) by introducing three modified strategies to overcome the drawbacks of HHO. A velocity operator and an inertia weight were added to the position search equation of the exploitation phase to guide the search direction of the algorithm, improving convergence speed and solution precision. To obtain a good transition from exploration to exploitation, a nonlinear escaping energy coefficient E based on a cosine function was proposed. A refraction-opposition-based learning mechanism was introduced to enhance population diversity and avoid premature convergence. To investigate the effectiveness of VGHHO, several experiments were conducted. In the first experiment, eighteen classical benchmark functions with different scales were selected to evaluate VGHHO against other algorithms; the results indicated that VGHHO had higher precision and better scalability on most classical benchmark functions. In the second experiment, 30 recent benchmark functions from CEC 2017 were tested, and the simulations indicated that VGHHO performed better than other algorithms on complex test functions. In the third experiment, twenty-one benchmark feature selection problems from UCI were used to further test the optimization ability of VGHHO; the investigation showed that VGHHO gave satisfactory results in terms of classification rate. Finally, a practical fault diagnosis problem of wind turbines and a parameter estimation problem of the PV model were used to verify the performance of VGHHO in real-world applications. The comprehensive results indicated that VGHHO is a feasible and promising technique for wind turbine fault diagnosis and PV model parameter estimation.
These experiments confirmed that VGHHO is more competitive than the other selected algorithms on benchmark functions, benchmark feature selection problems, and real-world problems. The main limitation of VGHHO is that different parameters need to be set for different optimization problems. Furthermore, for complex optimization problems such as the CEC 2017 benchmark functions and feature selection on complex UCI datasets, the solutions obtained by VGHHO are not yet fully satisfactory. In the future, VGHHO will be applied to more complex optimization problems and real-world applications, especially constrained, multi-objective, and combinatorial optimization problems, signal processing, pattern recognition, automatic control, and data mining. Moreover, the velocity-guided strategy can also be embedded into other meta-heuristic algorithms to further investigate its effectiveness and feasibility.
  6 in total

1.  Smart Supervision of Cardiomyopathy Based on Fuzzy Harris Hawks Optimizer and Wearable Sensing Data Optimization: A New Model.

Authors:  Weiping Ding; Mohamed Abdel-Basset; Khalid A Eldrandaly; Laila Abdel-Fatah; Victor Hugo C de Albuquerque
Journal:  IEEE Trans Cybern       Date:  2021-10-12       Impact factor: 11.448

2.  Optimizing quantum cloning circuit parameters based on adaptive guided differential evolution algorithm.

Authors:  Essam H Houssein; Mohamed A Mahdy; Manal G Eldin; Doaa Shebl; Waleed M Mohamed; Mahmoud Abdel-Aty
Journal:  J Adv Res       Date:  2020-10-17       Impact factor: 10.479

3.  An enhanced version of Harris Hawks Optimization by dimension learning-based hunting for Breast Cancer Detection.

Authors:  Navneet Kaur; Lakhwinder Kaur; Sikander Singh Cheema
Journal:  Sci Rep       Date:  2021-11-09       Impact factor: 4.379

4.  Harris Hawks optimisation with Simulated Annealing as a deep feature selection method for screening of COVID-19 CT-scans.

Authors:  Rajarshi Bandyopadhyay; Arpan Basu; Erik Cuevas; Ram Sarkar
Journal:  Appl Soft Comput       Date:  2021-07-14       Impact factor: 6.725

5.  Hybrid Harris hawks optimization with cuckoo search for drug design and discovery in chemoinformatics.

Authors:  Essam H Houssein; Mosa E Hosney; Mohamed Elhoseny; Diego Oliva; Waleed M Mohamed; M Hassaballah
Journal:  Sci Rep       Date:  2020-09-02       Impact factor: 4.379

