Literature DB >> 35634108

A new optimization algorithm based on mimicking the voting process for leader selection.

Pavel Trojovský1, Mohammad Dehghani1.   

Abstract

Stochastic optimization algorithms are effective approaches to addressing optimization challenges. In this article, a new optimization algorithm called the Election-Based Optimization Algorithm (EBOA) is developed that mimics the voting process to select a leader. The fundamental inspiration of EBOA is the voting process, the selection of the leader, and the impact of the public awareness level on that selection. The EBOA population moves through the search space under the guidance of the elected leader. EBOA's process is mathematically modeled in two phases: exploration and exploitation. The efficiency of EBOA is investigated in solving thirty-three objective functions of unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and CEC 2019 types. The results of implementing EBOA on these objective functions show its high exploration ability in global search, its exploitation ability in local search, and its ability to strike a proper balance between the two, which makes the proposed EBOA approach effective at optimizing and providing appropriate solutions. Our analysis shows that EBOA provides an appropriate balance between exploration and exploitation and, therefore, performs better than, and is competitive with, the ten other algorithms to which it was compared. ©2022 Trojovský and Dehghani.


Keywords:  Applied mathematics; Human-based metaheuristic algorithm; Leader selection; Optimization; Optimization problem; Population matrix; Population-based algorithms; Recurring process; Stochastic algorithms; Voting process

Year:  2022        PMID: 35634108      PMCID: PMC9138015          DOI: 10.7717/peerj-cs.976

Source DB:  PubMed          Journal:  PeerJ Comput Sci        ISSN: 2376-5992


Introduction

Optimization is an integral part of engineering, industry, technology, mathematics, and many other areas of science. Decision variables, constraints, and the objective function are the three main parts of any optimization problem, and the main challenge of optimization is to determine the values of the decision variables that optimize the objective function while respecting the constraints (Ray & Liew, 2003). Optimization problem-solving approaches fall into two groups: deterministic and stochastic (Kozlov, Samsonov & Samsonova, 2016). Deterministic approaches, which include gradient-based and non-gradient-based techniques, perform successfully on convex and linear optimization problems. However, these approaches fail on real-world challenges with features such as non-convex behavior, nonlinear search spaces, large numbers of variables, complex objective functions, large numbers of constraints, as well as NP-hard problems. Following the inability of deterministic approaches to address these types of issues, researchers have developed stochastic optimization techniques. Metaheuristic algorithms are among the most widely used stochastic techniques; they are effective in optimization applications through random operators, random scanning of the search space, and trial and error (Curtis & Robinson, 2019). Simplicity of concept, ease of implementation, problem independence, no need for derivative information, and efficiency on complex problems are some of the advantages that have made metaheuristic algorithms popular and widely applicable (Gonzalez et al., 2022). Metaheuristic algorithms follow a nearly identical problem-solving process that begins with the random generation of a certain number of candidate solutions. Then, in an iteration-based process, the steps of the algorithm improve these candidate solutions.
At the end of the run, the best candidate solution found is introduced as the solution to the problem (Toloueiashtian, Golsorkhtabaramiri & Rad, 2022). It is important to note that there is no guarantee that metaheuristic algorithms will provide the best possible solution, known as the global optimum; this is due to the random nature of their search process. For this reason, the solution obtained from metaheuristic algorithms is called quasi-optimal (Yu, Semeraro & Matta, 2018). Two indicators, exploration in the sense of global search and exploitation in the sense of local search, govern the performance of metaheuristic algorithms in handling optimization problems and providing better quasi-optimal solutions (Mejahed & Elshrkawey, 2022). What has led researchers to develop numerous optimization methods is the pursuit of solutions closer to the global optimum. The main research question is whether there is a need to develop new metaheuristic algorithms given that countless algorithms already exist. This question is answered by the No Free Lunch (NFL) theorem (Wolpert & Macready, 1997). The NFL theorem states that the effective performance of an algorithm on one set of optimization problems implies nothing about its performance on other optimization applications. In other words, no particular metaheuristic algorithm can be claimed to perform best on all optimization problems compared to all other methods. The NFL theorem is the main incentive for designing new optimization approaches that perform more effectively on optimization problems in a variety of applications.
The NFL theorem has motivated the authors of this article to develop a new metaheuristic algorithm, applicable to optimization challenges, that is effective in providing solutions closer to the global optimum. The novelty of this article is the introduction and design of a new metaheuristic algorithm named the Election-Based Optimization Algorithm (EBOA), whose fundamental inspiration is the simulation of the voting process and the popular movement. The main contributions of this study are as follows: A novel human-based Election-Based Optimization Algorithm (EBOA) is proposed. The process of public movement and the electoral voting process are examined and then mathematically modeled in the EBOA design. The efficiency of EBOA is tested on thirty-three objective functions (unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and CEC 2019). The quality of EBOA results is compared with ten state-of-the-art metaheuristic algorithms. The rest of the article is structured as follows: the literature review is presented in ‘Literature Review’. Then, in ‘Election-Based Optimization Algorithm’, the proposed EBOA is introduced and modeled. Simulation studies are presented in ‘Results’. The discussion is provided in ‘Discussion’. Conclusions and several research directions for future studies are presented in ‘Conclusions’.

Literature review

Natural phenomena, the behavior of living things in nature, the biological sciences, genetic sciences, the laws of physics, the rules of games, human behavior, and any evolutionary process that contains an optimization process have been sources of inspiration in the design and development of metaheuristic algorithms. Accordingly, metaheuristic algorithms fall into nine groups: swarm-based, biology-based, physics-based, human-based, sport-based, math-based, chemistry-based, music-based, and hybrid approaches (Akyol & Alatas, 2017). The behaviors of living organisms such as animals, birds, and insects have been the main source of ideas in the development of numerous swarm-based algorithms. The feature most commonly used in swarm-based methods is the ability of living organisms to search for food sources. The most popular methods developed based on modeling the food search process are Particle Swarm Optimization (PSO), based on the search behavior of birds and fish (Kennedy & Eberhart, 1995), Ant Colony Optimization (ACO), based on the ants' search for the shortest path to food (Dorigo, Maniezzo & Colorni, 1996), the Artificial Bee Colony (ABC), based on bee colony search behavior (Karaboga & Basturk, 2007), the Butterfly Optimization Algorithm (BOA), based on the search and mating behavior of butterflies (Arora & Singh, 2019), and the Tunicate Swarm Algorithm (TSA), based on the search behavior of tunicates (Kaur et al., 2020). The process of reproduction among bees and the scout bees' search mechanism for finding suitable new places for hives have been employed in designing the Fitness Dependent Optimizer (FDO) (Abdullah & Ahmed, 2019). The chimpanzee's hunting strategy, using operators such as emotional intelligence and sexual motivation, has been the main source of inspiration in designing the Chimp Optimization Algorithm (ChOA) (Khishe & Mosavi, 2020).
Modeling the hunting strategies of living organisms in the wild has been a source of inspiration in designing various optimization approaches, including the Grey Wolf Optimizer (GWO), based on the hunting strategy of grey wolves (Mirjalili, Mirjalili & Lewis, 2014), the Whale Optimization Algorithm (WOA) (Mirjalili & Lewis, 2016), based on the strategy of humpback whales, and the Pelican Optimization Algorithm (POA), based on pelican behavior (Trojovský & Dehghani, 2022). Applying the concepts of biology, genetics, and natural selection alongside random operators such as selection, crossover, and mutation has led to the development of biology-based algorithms. The process of reproduction, Darwin's evolutionary theory, and natural selection are key concepts in the development of two widely used methods, the Genetic Algorithm (GA) (Goldberg & Holland, 1988) and the Differential Evolution (DE) algorithm (Storn & Price, 1997). The mechanism of the immune system in the face of diseases, viruses, and microbes has been the major inspiration in the development of the Artificial Immune System (AIS) method (Hofmeyr & Forrest, 2000). Many phenomena, laws, and forces in physics have been employed as inspiration sources for the development of physics-based metaheuristic algorithms. The melting and slow cooling of metals, known in physics as the annealing process, has been the main inspiration in the development of the Simulated Annealing (SA) approach (Van Laarhoven & Aarts, 1987). The water cycle and the physical changes of water in nature have inspired the Water Cycle Algorithm (WCA) (Eskandar et al., 2012). Gravitational force and Newton's laws of motion have been the main concepts employed to introduce the Gravitational Search Algorithm (GSA) (Rashedi, Nezamabadi-Pour & Saryazdi, 2009). The application of Hooke's law and the tensile force of springs has been the main inspiration in the design of the Spring Search Algorithm (SSA) (Dehghani et al., 2020a).
Various physical theories and concepts have been the source of inspiration in the development of physics-based methods such as the Multiverse Optimizer (MVO), inspired by concepts from cosmology (Mirjalili, Mirjalili & Hatamlou, 2016), Big Bang-Big Crunch (BB-BC), inspired by the Big Bang and Big Crunch theories (Erol & Eksin, 2006), the Big Crunch Algorithm (BCA), inspired by the closed-universe theory (Kaveh & Talatahari, 2009), the Integrated Radiation Algorithm (IRA), inspired by the concept of gravitational radiation in Einstein's theory of general relativity (Chuang & Jiang, 2007), and the Momentum Search Algorithm (MSA), inspired by the concept of momentum (Dehghani & Samet, 2020). Human behavior, thought, interactions, and collaborations have been the design ideas behind human-based approaches. The most widely used human-based method is the Teaching-Learning-Based Optimization (TLBO) algorithm, which mimics the classroom learning environment and the interactions between students and teachers (Rao, Savsani & Vakharia, 2011). The competition between political parties and their efforts to seize control of parliament is the source of inspiration in designing the Parliamentary Optimization Algorithm (POA) (Borji & Hamidi, 2009). The economic activities of the rich and the poor to gain wealth in society have been the main inspiration for the Poor and Rich Optimization (PRO) approach (Moosavi & Bardsiri, 2019). The influence of the most successful people in a community on the rest of its members has been the main idea of the Following Optimization Algorithm (FOA) (Dehghani, Mardaneh & Malik, 2020). The mechanism of admitting high school graduates to university and the process of improving the educational level of students have been the main ideas in designing the Learner Performance-based Behavior (LPB) algorithm (Rahman & Rashid, 2021).
The cooperation of the members of a team to improve the team's performance in carrying out its tasks and achieving its goal has been the main inspiration of the Teamwork Optimization Algorithm (TOA) (Dehghani & Trojovský, 2021). The efforts of human society to achieve felicity by changing and improving the thinking of individuals have been employed in the design of the Human Felicity Algorithm (HFA) (Veysari, 2022). The strategic movement of army troops during war, using attack, defense, and troop relocation operations, has been the central idea in the design of War Strategy Optimization (WSO) (Ayyarao et al., 2022). The rules governing various games, both individual and team, along with the activities of players, referees, coaches, and influential individuals, have been the main source of inspiration in the development of sport-based methods. The efforts of players in tug-of-war competitions have been the main idea in designing the Tug of War Optimization (TWO) technique (Kaveh & Zolghadr, 2016). The interactions within volleyball clubs and the coaching process have been instrumental in designing the Volleyball Premier League (VPL) approach (Kaveh & Zolghadr, 2016). The players' effort to find a hidden object was the main idea used in Hide Object Game Optimization (HOGO) (Dehghani et al., 2020b). The strategy that players use to solve a puzzle, arranging the pieces to complete it, has been the source of inspiration in designing the Puzzle Optimization Algorithm (POA) (Zeidabadi & Dehghani, 2022). Matheuristics (Boschetti et al., 2009) and the Base Optimization Algorithm (BOA) (Salem, 2012) are among the math-based methods. Chemical Reaction Optimization (CRO) (Lam & Li, 2009) and the Artificial Chemical Reaction Optimization Algorithm (ACROA) (Alatas, 2011) are among the chemistry-based methods. The Harmony Search Algorithm (HSA) (Geem, Kim & Loganathan, 2001) is a music-based method.
In addition, by combining metaheuristic algorithms with each other, researchers have developed hybrid metaheuristic approaches, including the Sine-Cosine and Spotted Hyena-based Chimp Optimization Algorithm (SSC) (Dhiman, 2021) and the Hybrid Aquila Optimizer with Arithmetic Optimization Algorithm (AO-AOA) (Mahajan et al., 2022). The literature review shows that numerous metaheuristic algorithms have been developed so far. However, to the best of the authors' knowledge, the voting process for determining the leader of a community has not yet been used in the design of any algorithm. This research gap motivated the authors of this article to develop a new human-based metaheuristic algorithm based on mathematical modeling of the electoral process and public movement.

Election-based optimization algorithm

This section introduces the proposed Election-Based Optimization Algorithm (EBOA) and then presents its mathematical model.

Inspiration

An election is a process by which individuals in a community select a person from among the candidates. The person elected as the leader influences the situation of all members of that society, even those who did not vote for him. The more aware the community members are, the better they will be able to choose and vote for the better candidate. These expressed concepts of the election and voting process are employed in the design of the EBOA.

Algorithm initialization

EBOA is a population-based metaheuristic algorithm whose members are the individuals of a community. In the EBOA, each member of the population represents a proposed solution to the problem. From a mathematical point of view, the EBOA population is represented by a matrix called the population matrix according to Eq. (1), X = [X_1; …; X_i; …; X_N], where X refers to the EBOA population matrix, X_i = (x_{i,1}, …, x_{i,m}) refers to the ith EBOA member (i.e., a proposed solution), x_{i,j} refers to the value of the jth problem variable specified by the ith EBOA member, N refers to the EBOA population size, and m refers to the number of decision variables. The initial position of individuals in the search space is determined randomly according to Eq. (2), x_{i,j} = lb_j + r · (ub_j − lb_j), where lb_j and ub_j refer to the lower bound and upper bound of the jth variable, respectively, and r is a random number in the interval [0, 1]. Based on the values proposed by each EBOA member for the problem variables, a value of the objective function can be evaluated. These evaluated values are collected in a vector according to Eq. (3), OF = (OF_1, …, OF_N)^T, where OF refers to the vector of objective function values of the EBOA population and OF_i refers to the objective function value obtained for the ith EBOA member. The values of the objective function are the criterion for measuring the quality of the proposed solutions: the best value of the objective function identifies the best member, while the worst value identifies the worst member.
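The initialization described by Eqs. (1)–(3) can be sketched directly in Python. The `sphere` objective below is only an illustrative choice for the demonstration, not part of the algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def initialize_population(N, m, lb, ub):
    """Random initial population (Eq. (2)): x_ij = lb_j + r * (ub_j - lb_j)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    r = rng.random((N, m))        # r ~ U[0, 1], drawn per member and per variable
    return lb + r * (ub - lb)     # N x m population matrix X (Eq. (1))

def sphere(x):
    """Illustrative objective: F1(x) = sum of x_i^2."""
    return float(np.sum(x ** 2))

X = initialize_population(N=5, m=3, lb=[-100] * 3, ub=[100] * 3)
OF = np.array([sphere(x) for x in X])   # objective-value vector (Eq. (3))
```

Each row of `X` is one proposed solution `X_i`; the best member is simply the row with the minimal entry of `OF` (for minimization).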

Mathematical model of EBOA

The main difference between metaheuristic algorithms lies in how the members of the population are updated, i.e., in the process that improves the proposed solutions in each iteration. The population update process in EBOA has two phases, exploration and exploitation, which are discussed below. Phase 1: Voting process and holding elections (exploration). EBOA members, based on their awareness, participate in the election and vote for one of the candidates. People's awareness can be considered dependent on the quality of their objective function value. Accordingly, the awareness of individuals in the community is simulated using Eq. (4), A_i = (OF_i − OF_worst) / (OF_best − OF_worst), so that individuals with better values of the objective function are more aware. Here A_i is the awareness of the ith EBOA member, and OF_best and OF_worst are the best and worst values of the objective function, respectively. It should be noted that in minimization problems OF_best is the minimum and OF_worst the maximum value of the objective function, while in maximization problems OF_best is the maximum and OF_worst the minimum. Among the members of the society, the 10% most aware individuals are considered as election candidates. In the EBOA, it is assumed that the number of candidates N_C is at least 2 (i.e., N_C ≥ 2), meaning that at least two candidates register for the election. The voting process in EBOA is implemented by comparing each person's level of awareness with a random number: if the awareness is higher than that random number, the person votes for the best candidate (known as C_1); otherwise, that person votes randomly for one of the other candidates. This voting process is mathematically modeled in Eq. (5).
where V_i refers to the vote of the ith person in the community, C_1 refers to the best candidate, and C_k refers to the kth candidate, where k is a number selected randomly from the set {2, 3, …, N_C}. At the end of the voting process, based on the counting of votes, the candidate who has received the highest number of votes is elected as leader. This elected leader affects the situation of all members of the society, even those who did not vote for him. The position of individuals in the EBOA is updated under the influence and guidance of the elected leader. The leader directs the algorithm population to different areas of the search space and thereby increases the EBOA's exploration ability in the global search. The update process is leader-guided in the sense that first a new position is generated for each member; the newly generated position is accepted only if it improves the value of the objective function, otherwise the corresponding member remains in its previous position. This update process in the EBOA is modeled using Eqs. (6) and (7), where x_i^new refers to the newly generated position for the ith EBOA member, x_{i,j}^new is its jth dimension, OF_i^new is its objective function value, I is an integer selected randomly from the values 1 or 2, L refers to the elected leader, L_j is its jth dimension, and OF_L is its objective function value. Phase 2: Public movement to raise awareness (exploitation). The awareness of the people of a society has a great impact on their correct decisions in the election and voting process. In addition to the leader's influence on people's awareness, each person's own thoughts and activities can increase that person's awareness. From a mathematical point of view, a better solution may be identified by a local search adjacent to any proposed solution.
Thus, the activities of community members to increase their awareness raise the EBOA's exploitation ability in the local search and help find better solutions to the problem. To simulate this local search process, a random position is considered in the neighborhood of each member in the search space. The objective function is then evaluated at this new position to determine whether it is better than the member's existing position. If the new position has a better value of the objective function, the local search is successful and the position of the corresponding member is updated. Improving the value of the objective function also increases that person's awareness for better decision-making in the next election (i.e., in the next iteration). This update process to increase people's awareness is modeled using Eqs. (8) and (9), where x_i^new refers to the newly generated position for the ith EBOA member, x_{i,j}^new is its jth dimension, OF_i^new is its objective function value, R is a constant equal to 0.02, t refers to the iteration counter, and T refers to the maximum number of iterations.
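The exploitation phase can be sketched as follows. The record does not reproduce Eq. (8) itself, so the perturbation form here is an assumption: a random step scaled by R·(1 − t/T), which shrinks the search neighborhood as the iteration counter t approaches T, with the greedy acceptance of Eq. (9) as described:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

R = 0.02  # neighbourhood constant stated in the paper

def phase2_update(X, OF, f, t, T):
    """Local search (Eqs. (8)-(9)); the step form is an assumed sketch:
    a random perturbation proportional to R * (1 - t/T), so the
    neighbourhood tightens over iterations."""
    for i in range(len(X)):
        step = R * (1 - t / T) * (2 * rng.random(X.shape[1]) - 1)
        x_new = X[i] * (1 + step)      # random point near the current member
        f_new = f(x_new)
        if f_new < OF[i]:              # Eq. (9): keep only improving moves
            X[i], OF[i] = x_new, f_new
    return X, OF

# Demonstration on an illustrative quadratic objective.
f = lambda x: float(np.sum(x ** 2))
X = rng.random((8, 4)) * 10 - 5
OF = np.array([f(x) for x in X])
OF0 = OF.copy()
X, OF = phase2_update(X, OF, f, t=1, T=100)
```

As in phase 1, the acceptance rule guarantees the population's objective values never degrade.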

Repetition process, pseudocode, and flowchart of EBOA

An EBOA iteration is completed after updating the status of all members of the population. The EBOA enters the next iteration with the newly updated values, and the population update process is repeated based on the first and second phases according to Eqs. (4) to (9) until the last iteration. Upon completion, EBOA introduces the best solution found during the iterations as the solution to the problem. The EBOA steps are summarized as follows:
Start.
Step 1: Specify the optimization problem information: objective function, constraints, and number of decision variables.
Step 2: Set the number of iterations of the algorithm (T) and the population size (N).
Step 3: Initialize the EBOA population at random and evaluate the objective function.
Step 4: Update the best and worst members of the EBOA population.
Step 5: Calculate the awareness vector of the community.
Step 6: Determine the candidates from the EBOA population.
Step 7: Hold the voting process.
Step 8: Determine the elected leader based on the vote count.
Step 9: Update the position of EBOA members based on the elected leader's guidance in the search space.
Step 10: Update the position of EBOA members based on local search and the public movement to raise awareness.
Step 11: Save the best EBOA member as the best candidate solution so far.
Step 12: If the iterations of the algorithm are over, go to the next step; otherwise go back to Step 4.
Step 13: Print the best-obtained candidate solution.
End.
The flowchart of all steps of the EBOA is specified in Fig. 1 and its pseudocode is presented in Algorithm 1.
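The thirteen steps above can be condensed into one runnable sketch. The Eq. (6) move and the Eq. (8) neighborhood step below are reconstructions consistent with the textual description (leader guidance with I in {1, 2}; a perturbation shrinking as R·(1 − t/T)), not the authors' verified code:

```python
import numpy as np

def eboa(f, lb, ub, N=20, T=100, seed=0):
    """Condensed sketch of the EBOA loop (Steps 1-13)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    m = len(lb)
    X = lb + rng.random((N, m)) * (ub - lb)   # Step 3: random init (Eq. (2))
    OF = np.apply_along_axis(f, 1, X)
    for t in range(1, T + 1):
        # Steps 4-5: best/worst members and awareness vector (Eq. (4))
        best, worst = OF.min(), OF.max()
        A = np.ones(N) if best == worst else (OF - worst) / (best - worst)
        # Steps 6-8: candidates (top 10%, at least 2), voting, elected leader
        n_c = max(2, N // 10)
        cand = np.argsort(OF)[:n_c]
        votes = np.zeros(n_c, dtype=int)
        for i in range(N):
            votes[0 if A[i] > rng.random() else rng.integers(1, n_c)] += 1
        L = X[cand[np.argmax(votes)]].copy()
        for i in range(N):
            # Step 9 / phase 1: leader-guided move (Eqs. (6)-(7), reconstructed)
            x_new = X[i] + rng.random(m) * (L - rng.integers(1, 3) * X[i])
            if (f_new := f(x_new)) < OF[i]:
                X[i], OF[i] = x_new, f_new
            # Step 10 / phase 2: shrinking local step (Eqs. (8)-(9), assumed form)
            x_new = X[i] * (1 + 0.02 * (1 - t / T) * (2 * rng.random(m) - 1))
            if (f_new := f(x_new)) < OF[i]:
                X[i], OF[i] = x_new, f_new
    j = int(np.argmin(OF))
    return X[j], OF[j]                        # Step 13: best solution found

# Usage on an illustrative 5-dimensional sphere function.
best_x, best_f = eboa(lambda x: float(np.sum(x ** 2)),
                      lb=[-100] * 5, ub=[100] * 5, N=20, T=50, seed=3)
```

Because both phases accept only improving moves, the best objective value is monotonically non-increasing across iterations.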
Figure 1

Flowchart of EBOA.

Pseudocode of EBOA.

Algorithm 1.

Computational complexity of EBOA

This subsection is devoted to examining the computational complexity of the EBOA. The computational complexity of EBOA initialization, including random population generation and initial evaluation of the objective function, is O(Nm), where N is the size of the EBOA population and m is the number of problem variables. Holding the election and updating the EBOA population in the first phase has computational complexity O(NmT), where T is the number of iterations. The population update in the second phase of EBOA, to increase people's awareness, is likewise O(NmT). Accordingly, the total computational complexity of EBOA is O(Nm(1 + 2T)).

Results

This section is dedicated to analyzing EBOA's performance in optimization and its ability to provide solutions to problems. Thirty-three objective functions of different types have been selected to evaluate different aspects of the proposed approach. Information and details of these benchmark functions are specified in Tables 1, 2, 3 and 4. The reasons for selecting these objective functions are as follows: functions F1 to F7 are of the unimodal type. These functions have only one extremum in their search space and are therefore suitable for evaluating the EBOA's exploitation ability in local search and its convergence to this optimal position. Thus, the unimodal functions are chosen to evaluate the exploitation potential of EBOA. The high-dimensional multimodal functions F8 to F13 include numerous local extrema in their search space in addition to the main extremum. These local optima may cause an algorithm to fail. This feature makes functions F8 to F13 suitable for analyzing the EBOA's exploration ability in global search and determining whether the proposed approach is able to bypass local optima and identify the true optimum. Thus, the high-dimensional multimodal functions are chosen to evaluate the EBOA's exploration capability. Fixed-dimensional multimodal functions F14 to F23 have fewer local optima in their search space. These functions are good criteria for simultaneously measuring exploration and exploitation in optimization methods. Thus, the fixed-dimensional multimodal functions are chosen to evaluate the EBOA's ability to strike the balance between exploration and exploitation. In addition to the twenty-three classic objective functions F1 to F23, EBOA's performance is also tested on the ten complex test functions of the CEC 2019 suite (known as the 100-Digit Challenge test functions).
More information and details about CEC 2019 test functions are available in Price et al. (2018).
Table 1

Information of unimodal objective functions.

Objective function | Range | Dimensions | F_min
F1(x) = Σ_{i=1}^m x_i^2 | [−100, 100] | 30 | 0
F2(x) = Σ_{i=1}^m |x_i| + Π_{i=1}^m |x_i| | [−10, 10] | 30 | 0
F3(x) = Σ_{i=1}^m (Σ_{j=1}^i x_j)^2 | [−100, 100] | 30 | 0
F4(x) = max{|x_i|, 1 ≤ i ≤ m} | [−100, 100] | 30 | 0
F5(x) = Σ_{i=1}^{m−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | [−30, 30] | 30 | 0
F6(x) = Σ_{i=1}^m ([x_i + 0.5])^2 | [−100, 100] | 30 | 0
F7(x) = Σ_{i=1}^m i·x_i^4 + random(0, 1) | [−1.28, 1.28] | 30 | 0
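The unimodal benchmarks in Table 1 translate directly into code; a few of them (F1, F5, F6) as a sketch:

```python
import numpy as np

def F1(x):
    """Sphere function: unique minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def F5(x):
    """Rosenbrock function: minimum 0 at x = (1, ..., 1)."""
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2))

def F6(x):
    """Step function: [x_i + 0.5] is rounding toward the nearest integer."""
    return float(np.sum(np.floor(x + 0.5) ** 2))
```

Each function takes a 30-dimensional vector within the range given in Table 1 and returns a scalar to be minimized.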
Table 2

Information on high-dimensional multimodal objective functions.

Objective function | Range | Dimensions | $F_{\min}$
8. $F_{8}(x)=\sum_{i=1}^{m}-x_{i}\sin\left(\sqrt{|x_{i}|}\right)$ | $[-500,500]$ | 30 | −1.2569E+04
9. $F_{9}(x)=\sum_{i=1}^{m}\left[x_{i}^{2}-10\cos\left(2\pi x_{i}\right)+10\right]$ | $[-5.12,5.12]$ | 30 | 0
10. $F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{m}\sum_{i=1}^{m}x_{i}^{2}}\right)-\exp\left(\frac{1}{m}\sum_{i=1}^{m}\cos\left(2\pi x_{i}\right)\right)+20+e$ | $[-32,32]$ | 30 | 0
11. $F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{m}x_{i}^{2}-\prod_{i=1}^{m}\cos\left(\frac{x_{i}}{\sqrt{i}}\right)+1$ | $[-600,600]$ | 30 | 0
12. $F_{12}(x)=\frac{\pi}{m}\left\{10\sin\left(\pi y_{1}\right)+\sum_{i=1}^{m-1}\left(y_{i}-1\right)^{2}\left[1+10\sin^{2}\left(\pi y_{i+1}\right)\right]+\left(y_{m}-1\right)^{2}\right\}+\sum_{i=1}^{m}u\left(x_{i},10,100,4\right)$, where $u\left(x_{i},a,k,n\right)=\begin{cases}k\left(x_{i}-a\right)^{n}, & x_{i}>a;\\0, & -a\leq x_{i}\leq a;\\k\left(-x_{i}-a\right)^{n}, & x_{i}<-a.\end{cases}$ | $[-50,50]$ | 30 | 0
13. $F_{13}(x)=0.1\left\{\sin^{2}\left(3\pi x_{1}\right)+\sum_{i=1}^{m}\left(x_{i}-1\right)^{2}\left[1+\sin^{2}\left(3\pi x_{i}+1\right)\right]+\left(x_{m}-1\right)^{2}\left[1+\sin^{2}\left(2\pi x_{m}\right)\right]\right\}+\sum_{i=1}^{m}u\left(x_{i},5,100,4\right)$ | $[-50,50]$ | 30 | 0
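Three of the multimodal benchmarks above, as a minimal Python sketch (function names are ours); each has its global minimum of 0 at the origin:

```python
import math

def f9_rastrigin(x):
    # F9: many regularly spaced local minima
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def f10_ackley(x):
    # F10: flat outer region with a deep central basin
    m = len(x)
    rms = math.sqrt(sum(xi ** 2 for xi in x) / m)
    mean_cos = sum(math.cos(2.0 * math.pi * xi) for xi in x) / m
    return -20.0 * math.exp(-0.2 * rms) - math.exp(mean_cos) + 20.0 + math.e

def f11_griewank(x):
    # F11: quadratic bowl modulated by an oscillatory product term
    quad = sum(xi ** 2 for xi in x) / 4000.0
    prod = 1.0
    for i, xi in enumerate(x, start=1):
        prod *= math.cos(xi / math.sqrt(i))
    return quad - prod + 1.0
```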
Table 3

Information on fixed-dimensional multimodal objective functions.

Objective function | Range | Dimensions | $F_{\min}$
14. $F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}\left(x_{i}-a_{ij}\right)^{6}}\right)^{-1}$ | $[-65.53,65.53]$ | 2 | 0.998
15. $F_{15}(x)=\sum_{i=1}^{11}\left[a_{i}-\frac{x_{1}\left(b_{i}^{2}+b_{i}x_{2}\right)}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\right]^{2}$ | $[-5,5]$ | 4 | 0.00030
16. $F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4}$ | $[-5,5]$ | 2 | −1.0316
17. $F_{17}(x)=\left(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{\pi}x_{1}-6\right)^{2}+10\left(1-\frac{1}{8\pi}\right)\cos x_{1}+10$ | $[-5,10]\times[0,15]$ | 2 | 0.398
18. $F_{18}(x)=\left[1+\left(x_{1}+x_{2}+1\right)^{2}\left(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2}\right)\right]\times\left[30+\left(2x_{1}-3x_{2}\right)^{2}\left(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2}\right)\right]$ | $[-5,5]$ | 2 | 3
19. $F_{19}(x)=-\sum_{i=1}^{4}c_{i}\exp\left(-\sum_{j=1}^{3}a_{ij}\left(x_{j}-P_{ij}\right)^{2}\right)$ | $[0,1]$ | 3 | −3.86
20. $F_{20}(x)=-\sum_{i=1}^{4}c_{i}\exp\left(-\sum_{j=1}^{6}a_{ij}\left(x_{j}-P_{ij}\right)^{2}\right)$ | $[0,1]$ | 6 | −3.22
21. $F_{21}(x)=-\sum_{i=1}^{5}\left[\left(X-a_{i}\right)\left(X-a_{i}\right)^{T}+c_{i}\right]^{-1}$ | $[0,10]$ | 4 | −10.1532
22. $F_{22}(x)=-\sum_{i=1}^{7}\left[\left(X-a_{i}\right)\left(X-a_{i}\right)^{T}+c_{i}\right]^{-1}$ | $[0,10]$ | 4 | −10.4029
23. $F_{23}(x)=-\sum_{i=1}^{10}\left[\left(X-a_{i}\right)\left(X-a_{i}\right)^{T}+c_{i}\right]^{-1}$ | $[0,10]$ | 4 | −10.5364
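The three Shekel functions F21–F23 differ only in the number of summed terms (5, 7, and 10). A minimal sketch, assuming the standard Shekel coefficient matrices, which the table above does not reproduce:

```python
# Standard Shekel coefficients (assumed; not listed in the table above).
A = [[4.0, 4.0, 4.0, 4.0], [1.0, 1.0, 1.0, 1.0], [8.0, 8.0, 8.0, 8.0],
     [6.0, 6.0, 6.0, 6.0], [3.0, 7.0, 3.0, 7.0], [2.0, 9.0, 2.0, 9.0],
     [5.0, 5.0, 3.0, 3.0], [8.0, 1.0, 8.0, 1.0], [6.0, 2.0, 6.0, 2.0],
     [7.0, 3.6, 7.0, 3.6]]
C = [0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5]

def shekel(x, k):
    # F21, F22, F23 correspond to k = 5, 7, 10 terms, respectively.
    total = 0.0
    for i in range(k):
        dist_sq = sum((x[j] - A[i][j]) ** 2 for j in range(4))
        total -= 1.0 / (dist_sq + C[i])
    return total
```

Near (4, 4, 4, 4), `shekel(x, 5)` approaches the reported minimum of about −10.1532.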
Table 4

Information on complex CEC 2019 objective functions.

Objective function | Range | Dimensions | $F_{\min}$
1. Storn's Chebyshev Polynomial Fitting Problem | $[-8192,8192]$ | 9 | 1
2. Inverse Hilbert Matrix Problem | $[-16384,16384]$ | 16 | 1
3. Lennard-Jones Minimum Energy Cluster | $[-4,4]$ | 18 | 1
4. Rastrigin's Function | $[-100,100]$ | 10 | 1
5. Griewangk's Function | $[-100,100]$ | 10 | 1
6. Weierstrass Function | $[-100,100]$ | 10 | 1
7. Modified Schwefel's Function | $[-100,100]$ | 10 | 1
8. Expanded Schaffer's F6 Function | $[-100,100]$ | 10 | 1
9. Happy Cat Function | $[-100,100]$ | 10 | 1
10. Ackley Function | $[-100,100]$ | 10 | 1
The quality of the EBOA optimization results is compared with that of ten state-of-the-art metaheuristic algorithms: (i) the oldest and most widely used methods, GA and PSO; (ii) the most cited methods from 2009 to 2014, GSA, TLBO, and GWO; and (iii) recently published and widely used methods from 2016 to 2021, WOA, MPA, LPB, FDO, and TSA. As noted in the literature, numerous optimization methods have been developed to date. Comparing the proposed EBOA approach with all of these methods, while possible, would generate a huge amount of data. Among the metaheuristic algorithms developed, some have attracted more attention due to their high efficiency. For this reason, the ten metaheuristic algorithms above, which have received the most attention and use, were selected for comparison with EBOA. The values set for the control parameters of these metaheuristics are listed in Table 5.
Table 5

Adjusted values for competitor metaheuristic algorithms.

Algorithm | Parameter | Value
GA
Type | Real coded
Selection | Roulette wheel (proportionate)
Crossover | Whole arithmetic (probability = 0.8, $\alpha\in[-0.5,1.5]$)
Mutation | Gaussian (probability = 0.05)
PSO
Topology | Fully connected
Cognitive and social constants | $(C_{1},C_{2})=(2,2)$
Inertia weight | Linear reduction from 0.9 to 0.1
Velocity limit | 10% of dimension range
GSA
Alpha, G0, Rnorm, Rpower | 20, 100, 2, 1
TLBO
$T_{F}$: teaching factor | $T_{F}=\mathrm{round}(1+\mathrm{rand})$
Random number | rand is a random number in the interval $[0,1]$
GWO
Convergence parameter (a) | a: linear reduction from 2 to 0
WOA
Convergence parameter (a) | a: linear reduction from 2 to 0
r | r is a vector of random numbers in $[0,1]$
l | l is a random number in $[-1,1]$
TSA
Pmin and Pmax | 1 and 4
c1, c2, c3 | Random numbers in the range $[0,1]$
MPA
Constant number | P = 0.5
Random vector | R is a vector of uniform random numbers in the interval $[0,1]$
Fish Aggregating Devices (FADs) | FADs = 0.2
Binary vector | U = 0 or 1
FDO
Weight factor wf | wf is either 0 or 1
r | r is a vector of random numbers in $[0,1]$
LPB
Crossover percentage | pc = 0.6
Number of offspring (parents) | nc = 2·round(pc·nPop/2)
Mutation percentage | pm = 0.3
Number of mutants | nm = round(pm·nPop)
Mutation rate | mu = 0.03
Divide probability | dp = 0.5
Beta | beta = 8
Gamma | gamma = 0.05
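Several of the competitors in Table 5 rely on linearly decaying control parameters (PSO's inertia weight from 0.9 to 0.1, the convergence parameter a of GWO and WOA from 2 to 0). A minimal sketch of such a schedule:

```python
def linear_schedule(start, end, t, t_max):
    # Value of a control parameter at iteration t, decaying linearly
    # from `start` at t = 0 to `end` at t = t_max.
    return start + (end - start) * t / t_max

# e.g., GWO's convergence parameter a over a 1,000-iteration run:
a_values = [linear_schedule(2.0, 0.0, t, 1000) for t in range(1001)]
```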
The EBOA and the ten competitor metaheuristics are each employed in twenty independent runs to solve the objective functions F1 to F23, where each run contains 1,000 iterations. The termination condition can be based on various criteria, such as the number of iterations, the number of function evaluations, or the error between several consecutive iterations; in this study, it is based on the number of iterations. The experiments are performed in MATLAB R2020a on 64-bit Microsoft Windows 10 with a Core i7 processor at 2.40 GHz and 6 GB of memory. Simulation results and the performance of the metaheuristic algorithms are reported using five indicators: mean, best proposed solution, standard deviation (std), median, and rank.
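The evaluation protocol above (twenty independent runs, summarized by mean/best/std/median) can be sketched as follows; `random_search` is a toy stand-in for a stochastic optimizer, not the EBOA itself:

```python
import random
import statistics

def evaluate(optimizer, objective, runs=20, iterations=1000):
    # Repeat the stochastic optimizer independently and summarize the
    # best objective value found in each run, as in the paper's indicators.
    results = [optimizer(objective, iterations) for _ in range(runs)]
    return {
        "mean": statistics.mean(results),
        "best": min(results),
        "std": statistics.stdev(results),
        "median": statistics.median(results),
    }

def random_search(objective, iterations, dim=5, low=-100.0, high=100.0):
    # Toy stand-in optimizer: pure random sampling of the search space.
    best = float("inf")
    for _ in range(iterations):
        x = [random.uniform(low, high) for _ in range(dim)]
        best = min(best, objective(x))
    return best

sphere = lambda x: sum(xi ** 2 for xi in x)  # F1 (sphere function)
summary = evaluate(random_search, sphere)
```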

Evaluation of unimodal objective functions

The results of applying EBOA and the ten competitor metaheuristic algorithms to the unimodal functions F1 to F7 are reported in Table 6. The optimization outputs show that EBOA reached the global optimum of F1, F3, and F6, and is the best optimizer for F2, F4, F5, and F7. The simulation results show that, in handling F1 to F7, EBOA outperformed the ten competitor metaheuristic algorithms and ranked first.
Table 6

Optimization results of EBOA and competitor metaheuristics on the unimodal functions.

EBOA FDO LPB MPA TSA WOA GWO TLBO GSA PSO GA
F1Mean01.27E−310.60595391.71E−188.21E−331.59E−091.09E−581.34E−592.03E−171.77E−0513.24063
Best03.29E−340.16567265.92E−261.14E−621.09E−167.73E−619.36E−618.20E−182.00E−105.59388
std01.96E−310.28831186.76E−182.53E−323.22E−094.09E−582.05E−597.10E−185.86E−055.72729
Median02.61E−320.5527711.63E−196.83E−401.09E−091.08E−594.69E−601.78E−179.93E−0711.04546
Rank1510648327911
F2Mean1.30E−2613.26E−160.20889372.78E−095.02E−390.5381361.30E−345.55E−352.37E−080.3411392.479432
Best8.30E−2713.19E−170.10021844.25E−188.26E−430.4613081.55E−351.32E−351.59E−080.0017411.591248
std03.87E−160.07571051.08E−081.72E−380.0480652.20E−344.71E−353.96E−090.6695940.642843
Median3.50E−2651.79E−160.18311533.18E−118.26E−410.5450566.38E−354.37E−352.33E−080.1301142.463873
Rank1586210437911
F3Mean00.80160655116.42580.3770133.20E−199.94E−087.41E−157.01E−15279.3468589.49421536.915
Best00.01484242992.39720.0320387.29E−301.74E−124.75E−201.21E−1681.912421.6149371014.689
std01.85458931727.16520.2017589.90E−193.87E−071.90E−141.27E−14112.30571524.007367.2108
Median00.20279595275.20840.3786589.81E−211.74E−081.59E−161.86E−15291.44154.154451510.715
Rank1711625438910
F4Mean5.30E−2600.83090082.94532993.66E−082.01E−225.10E−051.26E−141.58E−153.25E−093.9634612.094271
Best3.10E−2660.2114672.04189943.42E−171.87E−527.34E−063.43E−166.42E−162.09E−091.6045221.389849
std00.54405580.50308386.45E−085.96E−225.74E−052.32E−147.14E−167.50E−102.2040830.337011
Med2.10E−2620.74761372.88207893.03E−083.13E−273.45E−057.30E−151.54E−153.34E−093.2607912.09854
Rank1810627435119
F5Mean25.9177145.546797163.7064242.4977828.7674641.1592326.86099145.666736.1072350.26311310.4313
Best24.9458119.18010595.10722841.5868228.5383139.308825.21377120.793225.838113.647051160.5013
std0.43326537.83693741.0432910.6155210.3647730.4895020.88408819.7390532.4625236.52379120.4473
Median26.1311222.222915166.0925642.4906828.5410241.308826.70967142.898726.0747528.69395279.5174
Rank1710635294811
F6Mean04.93E−210.050.3908723.84E−202.53E−090.6423340.45020.250214.5501
Best01.92E−2300.2745826.74E−261.95E−151.57E−050056.00042
std09.71E−210.22360680.0802821.50E−194.05E−090.3010880.510418012.773115.835177
Median01.41E−2100.4066486.74E−211.95E−090.621487001913.5
Rank125634871109
F7Mean4.77E−050.80722370.07180380.0021820.0002760.019460.0008190.003130.0206920.1134150.00568
Best9.87E−070.26797680.03186380.0014290.0001040.0020270.0002480.0013620.010060.0295930.002111
std4.40E−050.36258830.02214580.0004660.0001230.0041150.0005030.0013510.011360.0458680.002433
Median3.56E−050.82735150.07196650.002180.0003670.0202720.0006290.0029120.0169960.1078720.005365
Rank1119427358106
Sum rank | 7 | 45 | 63 | 40 | 18 | 46 | 28 | 32 | 40 | 66 | 67
Mean rank | 1 | 6.4285 | 9 | 5.7142 | 2.5714 | 6.5714 | 4 | 4.5714 | 5.7142 | 9.4285 | 9.5714
Total rank | 1 | 6 | 8 | 5 | 2 | 7 | 3 | 4 | 5 | 9 | 10

Evaluation of high-dimensional multimodal objective functions

The optimization results for the high-dimensional multimodal functions F8 to F13 obtained from the implementation of EBOA and the ten competing metaheuristic algorithms are reported in Table 7. EBOA converges to the global optimum of F9 and F11. The simulation results further show that EBOA is the best optimizer of F10, F12, and F13. In optimizing F8, after GA and TLBO, the proposed EBOA is the third-best optimizer. The results of Table 7 indicate that EBOA has a higher capability in optimizing high-dimensional multimodal functions than the ten competitor algorithms and ranks first overall on F8 to F13.
Table 7

Optimization results of EBOA and competitor metaheuristics on the high-dimensional multimodal functions.

EBOA FDO LPB MPA TSA WOA GWO TLBO GSA PSO GA
F8Mean−7149.45−6742.4711−11057.297−3652.09−5669.56−1633.55−5885.02−7803.47−2849.03−6908.54−8184.3
Best−8600.95−7688.9971−11972.938−4419.9−5706.3−2358.57−7227.05−9103.77−3969.23−8500.59−9717.68
std720.2391385.42421340.85937474.60821.84579374.5924984.4547986.5806540.379836.6452795.1826
Median−7123.95−6794.0493−11028.692−3632.65−5669.63−1649.72−5774.63−7735.22−2671.33−7098.95−8117.25
Rank4619811731052
F9Mean012.1061480.4418522152.69340.0058883.6660258.53E−1510.6776316.2677857.0618962.41204
Best05.79727990.0558848128.23060.0047761.7809909.8739634.97479527.8588336.86623
std04.36974760.262846515.183160.0006961.071772.08E−140.3971174.65881616.5173715.21563
Median010.9395760.3549152154.62140.0058713.78099010.8865715.4224255.2246861.67858
Rank1741135268910
F10Mean1.24E−158.76E−120.23789528.31E−106.38E−110.2791621.71E−140.2632083.57E−092.1546993.221863
Best8.88E−161.22E−130.11551221.68E−188.14E−150.0131281.51E−140.1563162.64E−091.1551512.757203
std1.09E−152.47E−110.09335062.80E−092.60E−100.1469613.15E−150.0728655.27E−100.5494460.361797
Median8.88E−162.53E−120.21366251.05E−111.10E−130.3128351.51E−140.2615413.64E−092.1700833.120322
Rank1375492861011
F11Mean00.01676470.504148201.55E−060.1057020.0037530.5876893.7375980.0462931.230221
Best000.240344204.23E−150.0810700.3101171.5192887.29E−091.140551
std00.01749950.1981803.38E−060.0073450.0073440.1691171.6702820.0518340.062756
Median00.00911830.482749708.77E−070.1070100.5820263.4242680.0294731.227231
Rank147126381059
F12Mean2.71E−070.0275680.00502440.0825590.0501641.5577460.0372110.0205510.0362830.4806720.047027
Best1.63E−079.02E−230.00037210.0779120.0354280.567260.0192950.0020315.57E−200.0001450.018364
std5.25E−080.04634170.0094980.0023860.0098550.45960.0138750.0286450.0608660.6025820.028483
Median2.70E−073.21E−070.00101680.0821110.0509351.567260.0329910.0151811.48E−190.15560.04179
Rank1429811635107
F13Mean3.88E−065.49E−050.03075450.5652542.6587780.3383920.5763270.3291240.0020850.5084121.208556
Best2.00E−064.55E−210.00648120.2802952.631750.3326880.2978220.0382661.18E−189.99E−070.49809
std9.01E−070.0002020.02155770.1878170.0097960.0013430.1703590.1989390.0054761.2516810.333754
Median3.79E−062.09E−170.02764890.5798742.661750.3386880.5783230.2827842.14E−180.0439971.218096
Rank1248116953710
Sum rank | 9 | 26 | 25 | 43 | 36 | 48 | 29 | 33 | 42 | 46 | 49
Mean rank | 1.5 | 4.3333333 | 4.1666667 | 7.1666667 | 6 | 8 | 4.8333333 | 5.5 | 7 | 7.6666667 | 8.1666667
Total rank | 1 | 3 | 2 | 8 | 6 | 10 | 4 | 5 | 7 | 9 | 11

Evaluation of fixed-dimensional multimodal objective functions

The results obtained from the implementation of EBOA and the ten competitor metaheuristic algorithms on the functions F14 to F23 are presented in Table 8. The simulation output shows that EBOA is the best optimizer for F14 to F23. Analysis and comparison of the results indicate that the proposed EBOA approach outperforms the ten competitor metaheuristic algorithms and ranks first among them.
Table 8

Optimization results of EBOA and competitor metaheuristics on the fixed-dimensional multimodal functions.

EBOA FDO LPB MPA TSA WOA GWO TLBO GSA PSO GA
F14Mean0.9980041.34591330.9980040.9989981.7987571.0437983.7408582.2642923.5914352.1736010.99867
Best0.9980040.99800380.9980040.9981370.9980040.9980040.9980040.9983910.9995080.9980040.998004
std9.23E−140.48643761.10E−100.0003240.5274140.2045283.9697261.1496212.7787912.9365360.002471
Median0.9980040.99800380.9980040.9991381.9126080.9980042.9821052.2752312.9866580.9980040.998027
Rank151364108972
F15Mean0.0003080.00031380.00852140.0039360.0004080.0037190.006370.0031690.0024020.0016840.005395
Best0.0003070.00030750.00048780.000270.0002640.0004410.0003070.0022060.0008050.0003070.000775
std3.30E−072.61E−050.00925680.0050517.59E−050.0012480.0094010.0003940.0011950.0049320.0081
Median0.0003080.00030750.00229420.00270.000390.004410.0003080.0031850.0023110.0003070.002074
Rank1211837106549
F16Mean−1.03163−1.0316285−1.0316273−1.03157−1.03158−1.03158−1.03161−1.03161−1.03161−1.03161−1.0316
Best−1.03163−1.0316285−1.0316285−1.0316−1.03161−1.0316−1.03163−1.03163−1.03163−1.03163−1.03163
std1.14E−107.75E−102.19E−064.42E−054.09E−053.78E−053.78E−053.78E−053.78E−053.78E−054.92E−05
Median−1.03163−1.0316285−1.0316281−1.0316−1.0316−1.0316−1.03163−1.03163−1.03163−1.03163−1.03162
Rank12376644445
F17Mean0.3978870.39788750.39790730.3993020.4000930.4050550.3978940.3978920.3978920.7854480.436972
Best0.3978870.39788740.39788750.397570.3980520.3994050.3978870.3978870.3978870.3978870.397888
std3.28E−093.04E−072.95E−050.0036720.004480.0036641.02E−051.02E−051.02E−050.7217520.140745
Median0.3978870.39788740.39789440.397820.3990520.404660.3978880.3978870.3978870.3979150.397925
Rank125678433109
F18Mean333.00002323.0000323.0931073.0002283.0000423.0000313.0000313.0000314.359425
Best333.000000332.9999743.00014933333.000001
std01.49E−112.64E−057.69E−050.0318510.0001267.76E−057.69E−057.69E−057.69E−056.035694
Median333.000010933.1034193.0001493.0000073333.001083
Rank11247653338
F19Mean−3.86278−3.86278−3.86278−3.86264−3.80654−3.8616−3.86211−3.86132−3.86272−3.86272−3.85428
Best−3.86278−3.86278−3.86278−3.8627−3.8366−3.86276−3.86278−3.8625−3.86278−3.86278−3.86278
std3.38E−075.74E−062.05E−070.0001420.0152570.0030620.0017040.0013740.0001420.0001420.014852
Median−3.86278−3.86278−3.86278−3.8627−3.8066−3.86266−3.86275−3.86187−3.86278−3.86278−3.86226
Rank11138546227
F20Mean−3.322−3.3219846−3.274437−3.32105−3.31947−3.23224−3.25234−3.20112−3.32195−3.2619−2.82386
Best−3.322−3.321995−3.3219951−3.3213−3.3212−3.31342−3.32199−3.26174−3.322−3.322−3.31342
std9.99E−081.16E−050.05975880.0001470.0030690.0356520.0765650.0318230.0001220.0706230.385958
Median−3.322−3.3219895−3.3219932−3.3211−3.32058−3.2424−3.26231−3.20744−3.322−3.32166−2.96828
Rank1264598103711
F21Mean−10.1532−10.150391−5.642384−9.95429−5.40202−7.40498−9.64509−9.19003−5.14855−5.38916−4.30394
Best−10.1532−10.153199−10.153196−10.1532−7.50209−7.48159−10.1532−9.66387−10.1532−10.1532−7.82781
std2.33E−060.0039423.49846090.5325570.9679220.033461.5619370.1207443.0544583.0197621.740798
Median−10.1532−10.152212−3.8690264−10.1532−5.50209−7.40159−10.1526−9.1532−3.64784−5.10077−4.16197
Rank1273864510911
F22Mean−10.4029−10.134399−7.01164−10.2858−5.9134−8.6996−10.4024−10.0485−10.0846−7.63218−5.11734
Best−10.4029−10.402939−10.402941−10.4029−9.06249−10.4029−10.4028−10.4029−10.4029−10.4029−9.11064
std2.18E−061.17835163.53586950.2453341.7549121.3561730.0004740.3983271.4231223.5416081.969599
Median−10.4029−10.401725−7.7657081−10.4027−5.06249−8.81649−10.4025−10.1836−10.4029−10.4019−5.0294
Rank1493107265811
F23Mean−10.5364−10.532661−6.4463091−10.1407−9.80971−10.0215−10.1301−9.26415−10.5363−6.16472−6.56203
Best−10.5364−10.536408−10.536409−10.5364−10.3683−10.5364−10.5363−10.534−10.5364−10.5364−10.2216
std3.63E−060.00628713.85556841.1401111.6064030.3558281.8143661.6765490.0003863.7348972.617187
Median−10.5364−10.535362−4.5055122−10.5364−10.3613−10.0003−10.5359−9.67172−10.5364−4.50535−6.5629
Rank1310476582119
Sum rank | 10 | 24 | 55 | 45 | 67 | 64 | 56 | 59 | 46 | 65 | 82
Mean rank | 1 | 2.4 | 5.5 | 4.5 | 6.7 | 6.4 | 5.6 | 5.9 | 4.6 | 6.5 | 8.2
Total rank | 1 | 2 | 5 | 3 | 10 | 8 | 6 | 7 | 4 | 9 | 11
The performance of EBOA and the ten competitor metaheuristic algorithms on the objective functions F1 to F23 is shown as boxplots in Fig. 2. For a visual analysis of the ability to reach the sought solution, Figs. 3 to 11 show the convergence curves of EBOA and the competing algorithms on a number of the objective functions.
Figure 2

(A–W) Boxplot diagram of the performances of EBOA and the ten metaheuristic algorithms on F1 to F23.

Figure 3

Convergence curves of EBOA and competitor algorithms on F1.

Figure 11

Convergence curves of EBOA and competitor algorithms on F23.

Statistical analysis

Capability analysis of metaheuristic algorithms in terms of the mean, best, std, median, and rank indices provides valuable information for comparing their performance. However, there remains a small probability that the superiority of one method over another arises by chance. In this study, the Wilcoxon rank sum test (Wilcoxon, 1992) and the non-parametric t-test (Kim, 2015) are used to determine whether the superiority of EBOA over each of the competing metaheuristic algorithms is statistically significant. The results of applying the Wilcoxon rank sum test and the non-parametric t-test to the performances of EBOA and the competitor metaheuristic algorithms are reported in Tables 9 and 10, respectively. Where the p-value is less than 0.05, it can be concluded that there is a significant difference between the two compared groups. The results of the Wilcoxon rank sum test and the non-parametric t-test show that EBOA has a statistically significant superiority over all ten competing algorithms in all objective function groups.
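The significance check can be reproduced with SciPy's rank-sum implementation; the two samples below are illustrative stand-ins for per-run best values, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical best-value samples from 20 independent runs of two algorithms.
alg_a = rng.normal(loc=0.001, scale=0.0005, size=20)
alg_b = rng.normal(loc=0.05, scale=0.01, size=20)

# Two-sided Wilcoxon rank sum test; p < 0.05 is the paper's threshold.
stat, p_value = stats.ranksums(alg_a, alg_b)
significant = p_value < 0.05
```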
Table 9

Results of applying Wilcoxon rank sum test on performances of EBOA and competitor metaheuristic algorithms.

Compared algorithm | Unimodal | High-dimensional multimodal | Fixed-dimensional multimodal
EBOA vs. GA | 1.01E−24 | 4.02E−18 | 1.04E−22
EBOA vs. PSO | 1.01E−24 | 2.42E−20 | 3.74E−34
EBOA vs. GSA | 9.78E−25 | 1.89E−21 | 1.28E−32
EBOA vs. TLBO | 9.3E−21 | 3.51E−12 | 4.35E−33
EBOA vs. GWO | 6.49E−23 | 6.96E−08 | 1.46E−24
EBOA vs. WOA | 1.07E−13 | 4.58E−11 | 0.018214
EBOA vs. TSA | 1.78E−20 | 2.37E−12 | 0.044185
EBOA vs. MPA | 1.01E−24 | 5.53E−06 | 1.44E−34
EBOA vs. LPB | 1.35E−21 | 0.0002 | 7.37E−31
EBOA vs. FDO | 6.98E−12 | 6.05E−07 | 6.24E−15
Table 10

Results of applying the t-test on the performances of EBOA and competitor metaheuristic algorithms (p-values by objective function type).

| Compared algorithm | Unimodal | High-dimensional multimodal | Fixed-dimensional multimodal |
| --- | --- | --- | --- |
| EBOA vs. GA | 0.018107 | 0.019062 | 0.034177 |
| EBOA vs. PSO | 6.78E−06 | 4.4E−06 | 1.45E−09 |
| EBOA vs. GSA | 1.25E−06 | 2.9E−06 | 0.015419 |
| EBOA vs. TLBO | 5.47E−06 | 3.01E−05 | 2.5E−13 |
| EBOA vs. GWO | 1.61E−06 | 4.42E−06 | 1.17E−11 |
| EBOA vs. WOA | 7.43E−06 | 0.00133 | 0.001565 |
| EBOA vs. TSA | 4.73E−06 | 0.031683 | 1.12E−10 |
| EBOA vs. MPA | 9.96E−06 | 5.93E−06 | 2.5E−07 |
| EBOA vs. LPB | 0.024004 | 0.075963 | 4.85E−11 |
| EBOA vs. FDO | 6.64E−08 | 0.001498 | 9.29E−13 |

Sensitivity analysis

The proposed EBOA approach is a population-based metaheuristic algorithm that addresses optimization problems through an iterative process. Thus, two EBOA parameters, the population size (N) and the maximum number of iterations (T), affect its performance. This subsection is dedicated to the sensitivity analysis of EBOA with respect to changes in N and T. The sensitivity of EBOA to the parameter N was studied by applying it to functions F1 to F23 for values of N equal to 20, 30, 50, and 80. The results are reported in Table 11, and the effect of changes in N on the EBOA convergence curves is shown in Fig. 12. The simulation results reveal that increasing the population size increases the search power of the algorithm: as N grows, the proposed EBOA achieves better solutions and, as a result, the values of all objective functions decrease.
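The protocol of this sweep can be sketched as follows. Since EBOA's own update rules (Eqs. (4)–(9)) are not reproduced in this section, the optimizer below is a plain random-search stand-in; the point is only the N-sweep structure, with the candidate pools nested so that a larger N can never do worse on the same random stream.

```python
# Sketch of the population-size sensitivity protocol behind Table 11.
import numpy as np

def sphere(x):
    """F1: the unimodal sphere function."""
    return float(np.sum(x ** 2))

def sweep_population_sizes(f, dim, sizes, n_iter, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    best = {n: float("inf") for n in sizes}
    n_max = max(sizes)
    for _ in range(n_iter):
        pop = rng.uniform(lb, ub, size=(n_max, dim))  # shared candidate pool
        vals = np.array([f(x) for x in pop])
        for n in sizes:
            # best over the first n candidates: a larger n can only help
            best[n] = min(best[n], float(vals[:n].min()))
    return best

results = sweep_population_sizes(sphere, dim=5, sizes=(20, 30, 50, 80), n_iter=200)
print(results)  # best objective value found for each population size N
```

The same sweep structure, with EBOA in place of the random-search stand-in and all 23 functions in place of the sphere, yields a table in the shape of Table 11.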
Table 11

Results of EBOA sensitivity analysis to the parameter N (population size).

| Objective function | N = 20 | N = 30 | N = 50 | N = 80 |
| --- | --- | --- | --- | --- |
| F1 | 0 | 0 | 0 | 0 |
| F2 | 2.4E−210 | 1.3E−261 | 1.2E−291 | 0 |
| F3 | 0 | 0 | 0 | 0 |
| F4 | 4.2E−214 | 5.3E−260 | 1.1E−284 | 3.4E−304 |
| F5 | 26.39773 | 25.91771 | 25.51165 | 24.81 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 6.73E−05 | 4.77E−05 | 3.04E−05 | 1.99E−05 |
| F8 | −7006.16 | −7149.45 | −7477.15 | −7491.36 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 2.49E−15 | 1.24E−15 | 8.88E−16 | 8.88E−16 |
| F11 | 0 | 0 | 0 | 0 |
| F12 | 4.4E−07 | 2.71E−07 | 1.77E−07 | 1.1E−07 |
| F13 | 0.001125 | 3.88E−06 | 2.87E−06 | 1.6E−06 |
| F14 | 2.432658 | 0.998 | 0.998 | 0.998 |
| F15 | 0.000379 | 0.000308 | 0.000307 | 0.000307 |
| F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F17 | 0.397887 | 0.397887 | 0.397887 | 0.397887 |
| F18 | 3 | 3 | 3 | 3 |
| F19 | −3.86278 | −3.86278 | −3.86278 | −3.86278 |
| F20 | −3.31004 | −3.322 | −3.322 | −3.322 |
| F21 | −9.64339 | −10.1532 | −10.1532 | −10.1532 |
| F22 | −10.1371 | −10.4029 | −10.4029 | −10.4029 |
| F23 | −9.98805 | −10.5364 | −10.5364 | −10.5364 |
Figure 12

(A–T) EBOA convergence curves in the sensitivity analysis to changes in the population size N.

The sensitivity of EBOA to the parameter T was tested by implementing it on functions F1 to F23 for values of T equal to 100, 500, 800, and 1,000. The outputs of this analysis are shown in Table 12. In addition, the EBOA convergence curves under changes in T are shown in Fig. 13. The simulation results show that increasing the number of iterations gives the EBOA more opportunity to identify the main optimal area accurately and to converge towards the global optimum, which reduces the values of the objective functions.
Table 12

Results of EBOA sensitivity analysis to the parameter T (maximum number of iterations).

| Objective function | T = 100 | T = 500 | T = 800 | T = 1,000 |
| --- | --- | --- | --- | --- |
| F1 | 3.52E−47 | 4.5E−263 | 0 | 0 |
| F2 | 1.09E−22 | 1.4E−125 | 8.9E−209 | 1.3E−261 |
| F3 | 1.08E−41 | 3.3E−231 | 0 | 0 |
| F4 | 9.21E−24 | 2.6E−127 | 3.9E−206 | 5.3E−260 |
| F5 | 28.41803 | 27.17727 | 26.53139 | 25.91771 |
| F6 | 0 | 0 | 0 | 0 |
| F7 | 0.000665 | 9.19E−05 | 5.67E−05 | 4.77E−05 |
| F8 | −6297.42 | −6741.15 | −6801.7 | −7149.45 |
| F9 | 0 | 0 | 0 | 0 |
| F10 | 2.15E−15 | 2.14E−15 | 2.13E−15 | 1.24E−15 |
| F11 | 0 | 0 | 0 | 0 |
| F12 | 0.001396 | 2.55E−06 | 1.02E−06 | 2.71E−07 |
| F13 | 0.068494 | 2.27E−05 | 2.04E−05 | 3.88E−06 |
| F14 | 0.998 | 0.998 | 0.998 | 0.998 |
| F15 | 0.000924 | 0.002341 | 0.001376 | 0.000308 |
| F16 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F17 | 0.397889 | 0.397887 | 0.397887 | 0.397887 |
| F18 | 3 | 3 | 3 | 3 |
| F19 | −3.8582 | −3.86278 | −3.86278 | −3.86278 |
| F20 | −3.27088 | −3.29773 | −3.30403 | −3.322 |
| F21 | −9.47147 | −9.53358 | −9.64336 | −10.1532 |
| F22 | −9.60492 | −10.4029 | −10.4029 | −10.4029 |
| F23 | −10.2011 | −10.4668 | −10.5364 | −10.5364 |
Figure 13

(A–W) EBOA convergence curves in the sensitivity analysis to changes in the maximum number of iterations T.

Evaluation of CEC 2019 suite objective functions

The implementation of EBOA on functions F1 to F23 indicated its high ability in optimization applications. In this subsection, the performance of the EBOA is evaluated on the CEC 2019 objective functions, which consist of ten functions, CEC01 to CEC10. The optimization results obtained by EBOA and the competitor algorithms are presented in Table 13. EBOA is the best optimizer for functions cec02, cec03, cec07, cec08, cec09, and cec10. The results of the Wilcoxon rank-sum test and the t-test are reported in Table 14; where the p-value in this table is less than 0.05, the proposed EBOA approach has a statistically significant superiority over the corresponding algorithm. Analysis of the simulation results shows that the proposed EBOA approach outperforms the competitor algorithms in most cases of the CEC 2019 test functions.
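The "Sum rank", "Mean rank", and "Total rank" rows at the bottom of Table 13 follow a standard aggregation: rank each algorithm on each function, sum and average those ranks per algorithm, then rank the averages. A sketch with a small made-up example (not the paper's data; note the double-argsort trick breaks ties by column order rather than averaging them):

```python
# Hypothetical rank aggregation in the style of Table 13's summary rows.
import numpy as np

algorithms = ["EBOA", "FDO", "MPA"]
# rows: benchmark functions, columns: algorithms (made-up mean objective values)
means = np.array([
    [1.0, 3.0, 2.0],
    [0.5, 0.7, 0.6],
    [7.0, 8.0, 9.0],
])

# rank 1 = lowest (best) mean on each function
per_function_ranks = means.argsort(axis=1).argsort(axis=1) + 1
sum_rank = per_function_ranks.sum(axis=0)
mean_rank = sum_rank / means.shape[0]
total_rank = mean_rank.argsort().argsort() + 1

for name, s, m, t in zip(algorithms, sum_rank, mean_rank, total_rank):
    print(f"{name}: sum rank {s}, mean rank {m:.2f}, total rank {t}")
```

In the example, EBOA is best on every function, so it takes sum rank 3, mean rank 1.00, and total rank 1.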
Table 13

Optimization results of EBOA and competitor metaheuristics on the CEC 2019 test suite.

| Function | Index | EBOA | FDO | LPB | MPA | TSA | WOA | GWO | TLBO | GSA | PSO | GA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| cec01 | Mean | 31397.63 | 15232289 | 2.89E+10 | 31395.7 | 1.65E+08 | 2.12E+10 | 79199216 | 4.33E+08 | 3.24E+12 | 4.6E+08 | 5.52E+10 |
| | Best | 31395.7 | 187157.2 | 1.57E+09 | 31395.7 | 39799.58 | 1773823 | 45782.62 | 101859.5 | 6.80E+11 | 4374858 | 6.23E+09 |
| | std | 5.282375 | 22511868 | 2.09E+10 | 0.000292 | 4.16E+08 | 2.76E+10 | 1.1E+08 | 7.95E+08 | 2.05E+12 | 1.36E+09 | 4.6E+10 |
| | Median | 31395.79 | 4646710 | 2.52E+10 | 31395.7 | 2266337 | 8.34E+09 | 35900762 | 1.01E+08 | 2.94E+12 | 69939138 | 4.1E+10 |
| | Rank | 2 | 3 | 9 | 1 | 5 | 8 | 4 | 6 | 11 | 7 | 10 |
| cec02 | Mean | 17.34286 | 17.34286 | 27.70728 | 17.34286 | 18.34769 | 17.34508 | 17.34332 | 17.37083 | 14774.91 | 17.34286 | 53.56418 |
| | Best | 17.34286 | 17.34286 | 17.34892 | 17.34286 | 17.34803 | 17.34302 | 17.34311 | 17.35692 | 7679.092 | 17.34286 | 19.23928 |
| | std | 0 | 2.61E−09 | 13.66182 | 7.89E−12 | 0.808127 | 0.001867 | 0.000124 | 0.008292 | 4550.232 | 3.05E−15 | 31.00016 |
| | Median | 17.34286 | 17.34286 | 21.74086 | 17.34286 | 18.17572 | 17.34464 | 17.3433 | 17.36932 | 14635.08 | 17.34286 | 41.51365 |
| | Rank | 1 | 4 | 9 | 3 | 8 | 6 | 5 | 7 | 11 | 2 | 10 |
| cec03 | Mean | 12.7024 | 12.7024 | 12.7024 | 12.7024 | 12.70296 | 12.7024 | 12.7025 | 12.70241 | 12.7024 | 12.7024 | 12.70241 |
| | Best | 12.7024 | 12.7024 | 12.7024 | 12.7024 | 12.7024 | 12.7024 | 12.7024 | 12.70241 | 12.7024 | 12.7024 | 12.7024 |
| | std | 3.65E−15 | 1.50E−11 | 5.55E−08 | 3.65E−15 | 0.001339 | 5.24E−07 | 0.000415 | 9.08E−06 | 3.65E−15 | 2.08E−15 | 1.85E−06 |
| | Median | 12.7024 | 12.7024 | 12.7024 | 12.7024 | 12.70247 | 12.7024 | 12.7024 | 12.70241 | 12.7024 | 12.7024 | 12.7024 |
| | Rank | 1 | 3 | 4 | 1 | 9 | 5 | 8 | 7 | 1 | 2 | 6 |
| cec04 | Mean | 29.40091 | 25.8849 | 79.20654 | 8.011005 | 4244.962 | 253.3125 | 264.7791 | 230.986 | 56.51698 | 67.58687 | 134.1774 |
| | Best | 12.93446 | 7.960353 | 20.97703 | 0.000177 | 4.234991 | 10.3315 | 13.37336 | 161.5526 | 2.984877 | 10.94454 | 38.25033 |
| | std | 14.08728 | 8.968979 | 47.08096 | 4.384481 | 2404.606 | 154.3524 | 581.9962 | 52.66575 | 2.247511 | 116.2151 | 59.15384 |
| | Median | 26.36631 | 25.30171 | 70.53092 | 8.470501 | 4468.768 | 215.8693 | 57.17001 | 225.0158 | 6.964711 | 29.8487 | 135.3556 |
| | Rank | 4 | 3 | 6 | 2 | 11 | 9 | 10 | 8 | 1 | 5 | 7 |
| cec05 | Mean | 1.160508 | 1.111998 | 1.226455 | 1.050818 | 2.999024 | 1.576271 | 1.266711 | 1.857917 | 1.00948 | 1.193098 | 1.602569 |
| | Best | 1.051728 | 1.040138 | 1.048986 | 1.014772 | 1.871421 | 1.201734 | 1.058202 | 1.658814 | 1 | 1.063978 | 1.288733 |
| | std | 0.065115 | 0.038701 | 0.133284 | 0.032407 | 1.072619 | 0.251901 | 0.208595 | 0.138394 | 0.010012 | 0.208163 | 0.214074 |
| | Median | 1.137767 | 1.121755 | 1.193437 | 1.044283 | 2.632057 | 1.576226 | 1.189335 | 1.8488 | 1.007396 | 1.145125 | 1.552622 |
| | Rank | 4 | 3 | 6 | 2 | 11 | 8 | 7 | 10 | 1 | 5 | 9 |
| cec06 | Mean | 2.106428 | 8.140997 | 5.623278 | 1.818019 | 10.42986 | 8.978288 | 10.42138 | 10.59729 | 1.000105 | 3.993241 | 8.647723 |
| | Best | 1.15137 | 5.874072 | 4.320987 | 1.114883 | 9.163213 | 7.276666 | 9.009652 | 9.853189 | 1.000073 | 1.228136 | 5.785451 |
| | std | 0.660714 | 0.661779 | 0.807592 | 0.744312 | 0.692189 | 1.124288 | 0.783399 | 0.434776 | 1.97E−05 | 1.907946 | 1.341339 |
| | Median | 1.929794 | 8.217592 | 5.516918 | 1.404095 | 10.3353 | 8.741017 | 10.26319 | 10.6044 | 1.000105 | 4.0335 | 9.229273 |
| | Rank | 3 | 6 | 5 | 2 | 10 | 8 | 9 | 11 | 1 | 4 | 7 |
| cec07 | Mean | 112.4807 | 143.7672 | 237.0939 | 174.3536 | 617.3397 | 616.2879 | 418.7292 | 623.5747 | 187.8737 | 165.1417 | 124.3327 |
| | Best | 14.75369 | 90.65738 | 61.10223 | 110.9085 | 246.6122 | 148.1279 | 79.03335 | 304.7375 | 82.15282 | 10.37837 | 12.09356 |
| | std | 79.22918 | 32.18216 | 140.8724 | 45.33848 | 259.8789 | 349.1442 | 300.5297 | 178.8176 | 90.5322 | 132.6008 | 92.31307 |
| | Median | 112.4599 | 144.3338 | 215.906 | 152.9829 | 597.9233 | 632.1518 | 334.0758 | 603.1046 | 180.2679 | 148.4302 | 120.9174 |
| | Rank | 1 | 3 | 7 | 5 | 10 | 9 | 8 | 11 | 6 | 4 | 2 |
| cec08 | Mean | 2.546949 | 4.300242 | 5.419414 | 3.869595 | 6.139661 | 5.839999 | 4.761854 | 5.360152 | 5.372261 | 5.021396 | 5.04118 |
| | Best | 1.289734 | 3.083263 | 3.559119 | 2.708939 | 4.895428 | 4.841095 | 2.925632 | 4.301028 | 4.363495 | 3.633744 | 4.04053 |
| | std | 0.727627 | 0.610021 | 0.598187 | 0.616791 | 0.472164 | 0.523359 | 0.942676 | 0.737183 | 0.499415 | 0.644997 | 0.466454 |
| | Median | 2.703282 | 4.336097 | 5.419597 | 4.034933 | 6.234195 | 5.892174 | 4.984088 | 5.222119 | 5.35066 | 5.112039 | 5.042585 |
| | Rank | 1 | 3 | 9 | 2 | 11 | 10 | 4 | 7 | 8 | 5 | 6 |
| cec09 | Mean | 2.343608 | 2.367272 | 3.13243 | 2.359083 | 440.234 | 4.575637 | 4.416888 | 19.60514 | 3.14272 | 2.549543 | 3.668416 |
| | Best | 2.33839 | 2.346984 | 2.720081 | 2.341292 | 2.896331 | 3.582951 | 3.596261 | 4.652537 | 2.576973 | 2.395394 | 2.819678 |
| | std | 0.005293 | 0.015551 | 0.288009 | 0.022234 | 591.7603 | 0.891569 | 0.651956 | 61.46722 | 0.494874 | 0.131925 | 0.509679 |
| | Median | 2.341382 | 2.362613 | 3.144558 | 2.350692 | 79.8453 | 4.582947 | 4.431976 | 5.863226 | 3.011512 | 2.514102 | 3.538552 |
| | Rank | 1 | 3 | 5 | 2 | 11 | 9 | 8 | 10 | 6 | 4 | 7 |
| cec10 | Mean | 5.313416 | 20.00304 | 20.03576 | 17.11663 | 20.4158 | 20.16707 | 20.43476 | 19.50824 | 18.64584 | 20.00118 | 19.3639 |
| | Best | 8.88E−16 | 19.91337 | 20.01024 | 0.000229 | 20.26639 | 20.04477 | 20.29121 | 9.348959 | 3.25E−09 | 19.99751 | 7.534284 |
| | std | 8.723522 | 0.041617 | 0.019828 | 7.057655 | 0.069939 | 0.102794 | 0.082589 | 2.869468 | 4.639721 | 0.006276 | 3.0762 |
| | Median | 1.15E−14 | 19.99981 | 20.03535 | 20 | 20.40561 | 20.14991 | 20.44391 | 20.38098 | 19.99088 | 19.99974 | 20.24951 |
| | Rank | 1 | 7 | 8 | 2 | 10 | 9 | 11 | 5 | 3 | 6 | 4 |
| | Sum rank | 19 | 38 | 68 | 22 | 96 | 81 | 74 | 82 | 49 | 44 | 68 |
| | Mean rank | 1.9 | 3.8 | 6.8 | 2.2 | 9.6 | 8.1 | 7.4 | 8.2 | 4.9 | 4.4 | 6.8 |
| | Total rank | 1 | 3 | 5 | 2 | 10 | 8 | 7 | 9 | 5 | 4 | 6 |
Table 14

Results of applying the Wilcoxon rank sum test and the t-test on the CEC 2019 test functions (p-values).

| Compared algorithm | Wilcoxon rank sum test | t-test |
| --- | --- | --- |
| EBOA vs. GA | 1.21E−23 | 0.000446 |
| EBOA vs. PSO | 1.4E−12 | 0.14247 |
| EBOA vs. GSA | 1.01E−08 | 0.000112 |
| EBOA vs. TLBO | 9.19E−34 | 0.028878 |
| EBOA vs. GWO | 5.17E−26 | 0.007451 |
| EBOA vs. WOA | 2.66E−34 | 0.005366 |
| EBOA vs. TSA | 1.44E−34 | 0.09192 |
| EBOA vs. MPA | 0.244915 | 0.048768 |
| EBOA vs. LPB | 3.89E−27 | 0.000215 |
| EBOA vs. FDO | 4.12E−16 | 0.010523 |

Discussion

Exploitation and exploration strongly influence the performance of metaheuristic algorithms in finding optimal solutions. Exploitation is the notion of local search capability around existing solutions, which enables the algorithm to converge to better solutions that may lie close to the existing ones. The impact of exploitation is especially evident in dealing with problems that have only one main peak. The results of optimizing functions F1 to F7 (which have only the main peak) show that the EBOA has high exploitation ability in local search and convergence to the globally optimal solution. The high exploitation of the EBOA is especially evident in handling functions F1, F3, and F6, for which it converged to the global optimum. Exploration is the concept of global search capability across all areas of the problem-solving space, which enables the algorithm to identify the main optimal area containing the global optimum in the presence of locally optimal areas. The effect of exploration is especially evident in handling problems that have several non-optimal peaks in addition to the main peak. The results of optimizing functions F8 to F13 (which have several non-optimal peaks) show that the EBOA has acceptable exploration power in the global search and in identifying the main optimal area. The high exploration capability of EBOA, especially in handling F9 and F11, has led to accurate identification of the main optimal area and the success of the algorithm in achieving the global optimum. In addition to high capability in exploration and exploitation, what predisposes metaheuristic algorithms to success in achieving solutions is a proper balance between these two indicators.
Objective functions F14 to F23 have fewer non-optimal peaks than functions F8 to F13 and are good criteria for analyzing the ability of optimization algorithms to maintain a proper balance between exploration and exploitation. The optimization results for F14 to F23 indicate that EBOA has a high potential for balancing exploration and exploitation in order to identify the main optimal region and converge towards the global optimum. An overall analysis of the results for the F1 to F23 objective functions supports the inference that the proposed EBOA approach has a high potential for exploration and exploitation, as well as a balance between these two capabilities.

Conclusions

Metaheuristic algorithms are among the most widely used and effective stochastic methods for solving optimization problems. In this study, a new human-based algorithm called the Election-Based Optimization Algorithm (EBOA) was proposed. The fundamental inspiration of the EBOA is the voting and election process, in which people vote for their preferred candidate to elect the leader of the population. The EBOA steps are mathematically modeled in two phases: (i) exploration, including holding the election, and (ii) exploitation, including raising public awareness for better decision-making. The efficiency of EBOA in providing solutions to optimization problems was tested on thirty-three standard benchmark functions of the unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and CEC 2019 types. The optimization results on the unimodal functions indicated the high exploitation ability of EBOA in local search. The results on the high-dimensional multimodal functions showed the exploration capability of EBOA in the global search of the problem-solving space. In addition, the results on the fixed-dimensional multimodal functions showed that EBOA, by striking a proper balance between exploration and exploitation, is effective at providing solutions to this type of problem. The implementation of EBOA on the complex CEC 2019 suite test functions indicated the effectiveness of the proposed approach in dealing with complex optimization problems. The quality of the results delivered by the EBOA was compared against the performance of ten state-of-the-art metaheuristic algorithms. From this comparison, it can be seen that EBOA provides better optimization results and is highly competitive with the ten metaheuristic algorithms. The findings of the simulations, statistical analysis, and sensitivity analysis indicate the high capability and efficiency of the EBOA in dealing with optimization problems.
The proposed EBOA approach enables several future directions, the most specific of which are the development of a binary version of EBOA for discrete-space applications and the design of a multi-objective version of EBOA for handling multi-objective optimization problems. Applying the EBOA to optimization problems in various sciences and in real-world applications is another suggestion for future work. The proposed EBOA approach is a stochastic solving method, so the main limitation of EBOA, as with all stochastic approaches, is that there is no guarantee it will achieve the globally optimal solution. In addition, EBOA may fail in some optimization applications because, according to the NFL theorem, no metaheuristic algorithm can be presumed successful on every problem. Another limitation of EBOA is that it is always possible to develop newer algorithms that perform better than existing algorithms, including EBOA. Nevertheless, the optimization results show that the EBOA provides solutions that are very close to the global optimum and, in some cases, exactly the global optimum. This capability of EBOA is particularly evident in optimizing F1, F3, F6, F9, F11, and F18, for which it attained the exact global optimum.

Matlab code of proposed algorithm EBOA

Code that implements the EBOA steps on the optimization of the objective functions and reports the optimal solution as output.

File of code for objective functions

Determines the information of the objective function and provides it to the main program.

Input of the EBOA optimizer

The main implementation code, which calls the information of the objective functions (including the number of variables, the upper and lower bounds of the variables, and the mathematical formula of the objective function) and then passes this information as input to the EBOA optimizer to determine the optimal solution.

Pseudocode of EBOA.

Algorithm 1.

Start EBOA.
Input problem information: variables, objective function, and constraints.
Set EBOA population size (N) and maximum number of iterations (T).
Generate the initial population matrix at random.
Evaluate the objective function.
For t = 1 to T
    Update the best and worst population members.
    Phase 1: Voting process and holding elections (exploration).
    Calculate A using Eq. (4).
    Determine candidates based on awareness criteria.
    Simulate holding the election and voting using Eq. (5).
    Count the votes and determine the election winner as the leader.
    For i = 1 to N
        Calculate X_i^(new,P1) using Eq. (6).
        Update X_i using Eq. (7).
        Phase 2: Public movement to raise awareness (exploitation).
        Calculate X_i^(new,P2) using Eq. (8).
        Update X_i using Eq. (9).
    end
    Save the best solution found so far.
end
Output the best quasi-optimal solution obtained with the EBOA.
End EBOA.
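As a rough illustration of how Algorithm 1 maps to code, the skeleton below mirrors its control flow in Python. It is a hypothetical sketch: the paper's actual update rules are Eqs. (4)–(9), which are not reproduced in this section, so the leader election is simplified to picking the fittest member, and the two phase updates are illustrative placeholders (a random move toward the leader, then a shrinking local step), each kept only when it improves the objective.

```python
# Hypothetical skeleton of the EBOA loop in Algorithm 1 (NOT Eqs. (4)-(9)).
import numpy as np

def eboa_sketch(f, dim, N=30, T=200, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(N, dim))   # initial population matrix
    fit = np.array([f(x) for x in X])
    for t in range(1, T + 1):
        # Phase 1: voting and holding elections (exploration); here the
        # elected leader is simply the best member so far (a simplification).
        leader = X[np.argmin(fit)].copy()
        for i in range(N):
            # placeholder for Eq. (6): random move toward the leader
            I = rng.integers(1, 3)           # step intensity, 1 or 2
            cand = np.clip(X[i] + rng.random(dim) * (leader - I * X[i]), lb, ub)
            fc = f(cand)
            if fc < fit[i]:                  # greedy update (Eq. (7) analogue)
                X[i], fit[i] = cand, fc
        # Phase 2: public movement to raise awareness (exploitation)
        for i in range(N):
            # placeholder for Eq. (8): local step whose radius shrinks with t
            cand = np.clip(X[i] + (1 - t / T) * (2 * rng.random(dim) - 1), lb, ub)
            fc = f(cand)
            if fc < fit[i]:                  # greedy update (Eq. (9) analogue)
                X[i], fit[i] = cand, fc
    best = int(np.argmin(fit))
    return X[best], fit[best]

# e.g. on the sphere function (F1); the sketch drives the objective close to zero
x_best, f_best = eboa_sketch(lambda x: float(np.sum(x ** 2)), dim=5)
print(f_best)
```

The greedy accept-if-better updates guarantee that the population's best objective value is non-increasing over iterations, matching the monotone convergence curves shown in the figures.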
References (7 in total)

1. Hofmeyr SA, Forrest S. Architecture for an artificial immune system. Evol Comput. 2000.

2. Gonzalez M, López-Espín JJ, Aparicio J, Talbi E-G. A hyper-matheuristic approach for solving mixed integer linear optimization models in the context of data envelopment analysis. PeerJ Comput Sci. 2022.

3. Dorigo M, Maniezzo V, Colorni A. Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern B Cybern. 1996.

4. Trojovský P, Dehghani M. Pelican Optimization Algorithm: a novel nature-inspired algorithm for engineering applications. Sensors (Basel). 2022.

5. Dehghani M, Trojovský P. Teamwork Optimization Algorithm: a new optimization approach for function minimization/maximization. Sensors (Basel). 2021.

6. Kim TK. T test as a parametric statistic. Korean J Anesthesiol. 2015.

7. Mejahed S, Elshrkawey M. A multi-objective algorithm for virtual machine placement in cloud environments using a hybrid of particle swarm optimization and flower pollination optimization. PeerJ Comput Sci. 2022.