
A new human-inspired metaheuristic algorithm for solving optimization problems based on mimicking sewing training.

Mohammad Dehghani1, Eva Trojovská2, Tomáš Zuščák1.   

Abstract

This paper introduces a new human-based metaheuristic algorithm called Sewing Training-Based Optimization (STBO), which has applications in handling optimization tasks. The fundamental inspiration of STBO is teaching the process of sewing to beginner tailors. The theory of the proposed STBO approach is described and then mathematically modeled in three phases: (i) training, (ii) imitation of the instructor's skills, and (iii) practice. STBO performance is evaluated on fifty-two benchmark functions consisting of unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and the CEC 2017 test suite. The optimization results show that STBO, with its high power of exploration and exploitation, has provided suitable solutions for benchmark functions. The performance of STBO is compared with eleven well-known metaheuristic algorithms. The simulation results show that STBO, with its high ability to balance exploration and exploitation, has provided far more competitive performance in solving benchmark functions than competitor algorithms. Finally, the implementation of STBO in solving four engineering design problems demonstrates the capability of the proposed STBO in dealing with real-world applications.
© 2022. The Author(s).

Year:  2022        PMID: 36253404      PMCID: PMC9574811          DOI: 10.1038/s41598-022-22458-9

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.996


Introduction

Optimization problems represent challenges with several possible solutions, one of which is the best choice. Accordingly, optimization is the process of achieving the best solution to such a problem. An optimization problem has three main parts: decision variables, objective function, and constraints[1]. Optimization aims to determine the values of the decision variables, subject to the constraints, so that the objective function is optimized[2]. Optimization problem-solving methods fall into two groups: deterministic and stochastic approaches. Deterministic approaches deal well with linear, continuous, differentiable, and convex optimization problems. However, their disadvantage is that they lose their effectiveness on nonlinear, non-convex, non-differentiable, high-dimensional, or NP-hard problems and on discrete search spaces. These characteristics, which defeat deterministic approaches, are typical features of real-world optimization problems. Stochastic algorithms, especially metaheuristic algorithms, have been introduced to meet this challenge. Metaheuristic algorithms can provide suitable solutions to optimization problems by performing a random search of the problem-solving space and relying on random operators[3]. A critical point about metaheuristic algorithms is that there is no guarantee that the solution obtained from these methods is the global optimum. This fact has led researchers to develop numerous metaheuristic algorithms in pursuit of better solutions. Metaheuristic algorithms are designed based on modeling ideas that exist in nature. Among the most famous metaheuristic algorithms are the Genetic Algorithm (GA)[4], Particle Swarm Optimization (PSO)[5], Ant Colony Optimization (ACO)[6], and Artificial Bee Colony (ABC)[7]. GA is based on modeling the reproductive process. PSO is developed based on modeling the swarm movement of birds and fish in nature.
ACO is designed based on simulating the natural behaviors of ants, and ABC is introduced based on modeling the activities of bee colonies in search of food. Metaheuristic algorithms must have an acceptable ability for exploration and exploitation to deliver suitable optimization performance. Exploration is the concept of global search in different parts of the problem-solving space to find the main optimal area. Exploitation means a local search around candidate solutions to find better solutions that may lie near them. In addition to high quality in exploration and exploitation, balancing these two indicators is the key to the success of metaheuristic algorithms[8]. The main research question is: despite the large number of metaheuristic algorithms introduced so far, is there still a need for newer methods? The answer lies in the No Free Lunch (NFL) theorem[9]. According to the NFL theorem, the excellent performance of an algorithm on one set of optimization problems does not guarantee the same performance on other optimization problems. This result is due to the random nature of metaheuristic algorithms in reaching a solution. In other words, the NFL theorem states that it is impossible to claim that a particular algorithm is the best optimizer for all optimization problems. As a result, the NFL theorem has encouraged researchers to design new algorithms that provide solutions closer to the global optimum. It has also motivated the authors of this study to design a new metaheuristic algorithm that solves optimization problems more effectively. The novelty and innovation of this paper lie in the design of a new algorithm called Sewing Training-Based Optimization (STBO) for optimization applications. The main contributions of this article are as follows:
- A new human-based metaheuristic algorithm, based on modeling sewing training, is introduced.
- STBO is modeled in three phases: (i) training, (ii) imitation of the instructor's skills, and (iii) practice.
- STBO performance is tested on fifty-two benchmark functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types and from the CEC 2017 test suite.
- STBO results are compared with the performance of eleven well-known metaheuristic algorithms.
- STBO's performance in solving real-world applications is evaluated on four engineering design problems.
The rest of the paper is organized as follows: a literature review is presented in the section "Literature review". Next, the proposed algorithm is introduced and modeled in the section "Sewing Training-Based Optimization". Simulations and analysis of their results are presented in the section "Simulation Studies and Results". The STBO's performance in solving real-world problems is shown in the section "STBO for real-world applications". Finally, conclusions and several study proposals are provided in the section "Conclusion and future works".

Literature review

Metaheuristic algorithms have been developed based on mathematical simulations of various natural phenomena, animal behaviors, biological sciences, physics concepts, game rules, human behaviors, and other evolution-based processes. Based on the source of inspiration used in the design, metaheuristic algorithms fall into five groups: swarm-based, evolutionary-based, physics-based, game-based, and human-based. Swarm-based algorithms are derived from the mathematical modeling of natural swarming phenomena, the behavior of animals, birds, aquatic animals, insects, and other living organisms. For example, ant colonies can find an optimal path to supply the required food resources. Simulating this behavioral feature of ants forms the basis of ACO. Fireflies' feature of emitting flashing light and the light communication between them has been a source of inspiration in the design of the Firefly Algorithm (FA)[10]. Swarming activities such as foraging and hunting among animals are intelligence processes that are employed in the design of various algorithms such as PSO, ABC, Grey Wolf Optimizer (GWO)[11], Whale Optimization Algorithm (WOA)[12], Marine Predator Algorithm (MPA)[13], Cat and Mouse based Optimizer (CMBO)[14], Tunicate Swarm Algorithm (TSA)[15,16], Reptile Search Algorithm (RSA)[17], and Orca Predation Algorithm (OPA)[18]. Other swarm-based methods are Farmland Fertility[19], African Vultures Optimization Algorithm (AVOA)[20], Artificial Gorilla Troops Optimizer (GTO)[21], Tree Seed Algorithm (TSA)[22], Spotted Hyena Optimizer (SHO)[23], and Pelican Optimization Algorithm (POA)[24]. Evolutionary-based algorithms are inspired by the biological sciences, the concept of natural selection, and random operators. 
For example, Differential Evolution (DE)[25] and GA are two of the most significant evolutionary algorithms, developed based on the mathematization of the reproductive process, concepts of Darwin's theory of evolution, and the random operators of selection, mutation, and crossover. Some other evolutionary-based algorithms are Genetic Programming (GP)[26], Memetic Algorithm (MA)[27], Evolution Strategy (ES)[28], Evolutionary Programming (EP)[29], and Cultural Algorithm (CA)[30]. Physics-based algorithms have been developed by simulating various laws, concepts, forces, and phenomena in physics. For example, the physical phenomenon of the water cycle was the main idea in the design of the Water Cycle Algorithm (WCA)[31]. The employment of physical forces to design metaheuristic algorithms has been successful in algorithms such as the Gravitational Search Algorithm (GSA)[32], Spring Search Algorithm (SSA)[33], and Momentum Search Algorithm (MSA)[34]. GSA is based on modeling the gravitational force that exists between masses at different distances from each other. SSA is inspired by the simulation of the spring tensile force and Hooke's law between weights connected by springs. MSA is developed based on the mathematization of the momentum of bullets that move toward the optimal solution. Simulated Annealing (SA)[35], Flow Regime Algorithm (FRA)[36], Equilibrium Optimizer (EO)[37], and Multi-Verse Optimizer (MVO)[38] are examples of other physics-based metaheuristic algorithms. Game-based algorithms are formed by mathematical modeling of various game rules. For example, the Volleyball Premier League (VPL) algorithm[39] and Football Game-Based Optimization (FGBO)[40] are game-based algorithms designed based on the simulation of club competitions in volleyball and football games, respectively. Likewise, the players' effort in the tug-of-war game has been the main inspiration for the Tug of War Optimization (TWO)[41] design.
Likewise, the skill and strategy of the players in completing the puzzle pieces have been the idea behind the Puzzle Optimization Algorithm (POA)[42] design. Human-based algorithms have emerged inspired by human behaviors and interactions. This group's most widely used and well-known algorithm is Teaching–Learning-Based Optimization (TLBO). TLBO is introduced based on the mathematization of educational interactions between teachers and students[43]. The treatment process that a doctor uses to treat patients has been the central idea in the design of Doctor and Patients Optimization (DPO)[44]. The relationships and collaboration of team members in performing teamwork and achieving the planned goal have been the source of inspiration for the Teamwork Optimization Algorithm (TOA) design[45]. Some other human-based metaheuristic algorithms are the Society Civilization Algorithm (SCA)[1], Seeker Optimization Algorithm (SOA)[46], Imperialist Competitive Algorithm (ICA)[47], Human-Inspired Algorithm (HIA)[48], Social Emotional Optimization Algorithm (SEOA)[49], Brain Storm Optimization (BSO)[50], Anarchic Society Optimization (ASO)[51], Human Mental Search (HMS)[52], Gaining Sharing Knowledge based Algorithm (GSK)[53], Coronavirus Herd Immunity Optimizer (CHIO)[54], Ali Baba and the Forty Thieves (AFT)[55], Multi-Leader Optimizer (MLO)[76], Poor and Rich Optimization (PRO)[56], Following Optimization Algorithm (FOA)[57], and Election-Based Optimization Algorithm (EBOA)[58]. Research on metaheuristic algorithms also includes improving existing algorithms[59-63], extending hybrid algorithms by combining different algorithms to increase their efficiency[64], developing binary versions of optimization algorithms[65-68], and comprehensive survey studies[69-71]. Several more recent or well-known metaheuristic algorithms published by researchers are listed in Table 1.
In addition, this table specifies these algorithms' categories and sources of inspiration.
Table 1

A brief review of metaheuristic algorithms.

Category | Algorithm | Inspiration
Swarm-based | Particle Swarm Optimization (PSO)[5] | Searching flocks of birds and fish for food
 | Ant Colony Optimization (ACO)[6] | Ant colony behavior in identifying the shortest path
 | Artificial Bee Colony (ABC)[7] | Colony behavior of honey bees in holding food resources
 | Firefly Algorithm (FA)[10] | Social behavior of fireflies
 | Grey Wolf Optimizer (GWO)[11] | Hierarchical behavior of gray wolves during hunting
 | Whale Optimization Algorithm (WOA)[12] | Social behavior of humpback whales
 | Marine Predator Algorithm (MPA)[13] | The strategy of marine predators in hunting
 | Cat and Mouse based Optimizer (CMBO)[14] | The process of chasing mice by cats
 | Tunicate Swarm Algorithm (TSA)[15,16] | Jet propulsion and swarm intelligence of tunicates during the search for a food source
 | Reptile Search Algorithm (RSA)[17] | Hunting behavior of reptiles
 | Orca Predation Algorithm (OPA)[18] | Predatory behavior of orcas
 | Farmland Fertility[19] | Farmland fertility in nature
 | African Vultures Optimization Algorithm (AVOA)[20] | African vultures' lifestyle
 | Artificial Gorilla Troops Optimizer (GTO)[21] | Gorilla troops' social intelligence in nature
 | Tree Seed Algorithm (TSA)[22] | Relations between trees and their seeds
 | Spotted Hyena Optimizer (SHO)[23] | Social behavior of spotted hyenas
 | Pelican Optimization Algorithm (POA)[24] | The strategy of pelicans when hunting prey
Evolutionary-based | Genetic Algorithm (GA)[4] | Evolutionary concepts
 | Differential Evolution (DE)[25] | Darwin's theory of evolution
 | Genetic Programming (GP)[26] | Biological evolution
 | Memetic Algorithm (MA)[27] | Darwinian principles and Dawkins's notion of a meme
 | Evolution Strategy (ES)[28] | Biological evolution
 | Evolutionary Programming (EP)[29] | Finite state machine
 | Cultural Algorithm (CA)[30] | Human cultural evolution process
Physics-based | Water Cycle Algorithm (WCA)[31] | The natural cycle of water
 | Gravitational Search Algorithm (GSA)[32] | Gravitational attraction force
 | Spring Search Algorithm (SSA)[33] | The tensile force of springs and Hooke's law
 | Momentum Search Algorithm (MSA)[34] | The momentum of the impact of bullets
 | Simulated Annealing (SA)[35] | Metal annealing process
 | Flow Regime Algorithm (FRA)[36] | Classical fluid mechanics and flow regimes
 | Equilibrium Optimizer (EO)[37] | Mass balance models
 | Multi-Verse Optimizer (MVO)[38] | Multi-verse theory
Game-based | Volleyball Premier League (VPL)[39] | Competition among volleyball teams during a season and the coaching process during a volleyball match
 | Football Game-Based Optimization (FGBO)[40] | Holding football league matches
 | Tug of War Optimization (TWO)[41] | The game of tug of war
 | Puzzle Optimization Algorithm (POA)[42] | The effort of the players in completing the puzzle
 | Ring Toss Game Based Optimizer (RTGBO)[72] | The effort of the players in throwing the ring towards the score rings
 | Orientation Search Algorithm (OSA)[73] | Changing the direction of movement of players on the playground to the direction determined by the referee
 | Dice Game Optimizer (DGO)[74] | Rules of the dice game
 | Darts Game Optimizer (DGO)[75] | The effort of the players to earn points in the darts game
Human-based | Teaching–Learning-Based Optimization (TLBO)[43] | Teaching and learning in a classroom
 | Society Civilization Algorithm (SCA)[1] | Leadership phenomena of humans
 | Seeker Optimization Algorithm (SOA)[46] | The action of human randomized search
 | Imperialist Competitive Algorithm (ICA)[47] | Imperialistic competition
 | Human-Inspired Algorithm (HIA)[48] | People's intelligence
 | Social Emotional Optimization Algorithm (SEOA)[49] | Human social behaviors
 | Brain Storm Optimization (BSO)[50] | Brainstorming process
 | Anarchic Society Optimization (ASO)[51] | A social group behaving in a chaotic way to improve its situation
 | Human Mental Search (HMS)[52] | Exploration strategies of the bid space in online auctions
 | Gaining Sharing Knowledge based Algorithm (GSK)[53] | Acquisition and exchange of knowledge during a person's lifespan
 | Coronavirus Herd Immunity Optimizer (CHIO)[54] | Herd immunity concept as a response to COVID-19
 | Ali Baba and the Forty Thieves (AFT)[55] | The tale of Ali Baba and the forty thieves
 | Doctor and Patients Optimization (DPO)[44] | Interactions between doctor and patients
 | Teamwork Optimization Algorithm (TOA)[45] | Teamwork of individuals in presenting their work
 | Multi-Leader Optimizer (MLO)[76] | The presence of several leaders to guide the society
 | Poor and Rich Optimization (PRO)[56] | Efforts of the poor and the rich to achieve wealth and improve their economic situation
 | Following Optimization Algorithm (FOA)[57] | People in society following the successful person of the society
 | Election-Based Optimization Algorithm (EBOA)[58] | The process of holding elections in society
To the best of the authors' knowledge from the literature review, modeling the sewing training process has not been applied to the design of any metaheuristic algorithm. However, the teaching of sewing by a training instructor to beginner tailors is an intelligent human activity that has the potential to serve as the basis of an optimizer. Therefore, to address this research gap, a new human-based metaheuristic algorithm based on mathematical modeling of sewing training is designed in this paper. The design of this algorithm is discussed in the next section.

Sewing training-based optimization

This section introduces the proposed Sewing Training-Based Optimization (STBO) algorithm and presents its mathematical model.

Inspiration and main idea of STBO

The activity of teaching sewing skills by a training instructor to beginner tailors is an intelligent process. The first step for a beginner is to choose a training instructor. Selecting the training instructor is essential in improving a beginner's sewing skills. Next, the instructor teaches sewing techniques to the beginner tailor. The second step in this process is the beginner tailor's efforts to mimic the skills of the training instructor. The beginner tailor tries to bring his skills to the level of the instructor as much as possible. The third step in the sewing training process is practice. The beginner tailors try to improve their skills in sewing by practicing. The interactions between beginner tailors and training instructors indicate the high potential of the sewing training process to be considered for designing an optimizer. Mathematical modeling of these intelligent interactions is the fundamental inspiration in the design of STBO.

Mathematical model of STBO

The proposed STBO algorithm is a population-based metaheuristic algorithm whose members are beginner tailors and training instructors. Each member of the STBO population is a candidate solution to the problem, representing proposed values for the decision variables. As a result, each STBO member can be modeled mathematically as a vector, and the STBO population as a matrix. The STBO population is specified by the matrix representation in Eq. (1),

X = [X_1, …, X_i, …, X_N]^T with X_i = (x_{i,1}, …, x_{i,j}, …, x_{i,m}),   (1)

where X is the STBO population matrix, X_i is the ith STBO member, N is the number of STBO population members, and m is the number of problem variables. At the beginning of the STBO implementation, all population members are randomly initialized using Eq. (2),

x_{i,j} = lb_j + r_{i,j} · (ub_j − lb_j),   (2)

where x_{i,j} is the value of the jth variable determined by the ith STBO member X_i, r_{i,j} is a random number in the interval [0, 1], and lb_j and ub_j are the lower and upper bounds of the jth problem variable, respectively. Each STBO member represents a candidate solution to the given problem; therefore, the problem's objective function can be evaluated based on the values specified by each candidate solution. Based on the placement of candidate solutions in the problem variables, the values calculated for the objective function can be modeled as a vector by Eq. (3),

F = (F_1, …, F_i, …, F_N)^T,   (3)

where F is the objective function vector and F_i = F(X_i) is the objective function value for the ith candidate solution. The values of the objective function are the main criterion for comparing candidate solutions with each other. The solution with the best value of the objective function is identified as the best candidate solution or the best member of the population, X_best. Updating the algorithm's population in each iteration leads to new objective function values; accordingly, the best candidate solution must be updated in each iteration. The design of the algorithm guarantees that the best candidate solution at the end of each iteration is also the best candidate solution of all previous iterations.
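As an illustrative sketch (Python with NumPy; the function names are ours, not from a reference implementation of the paper), the population matrix of Eq. (1), the random initialization of Eq. (2), and the objective vector of Eq. (3) can be expressed as:

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng=None):
    """Eq. (2): x_ij = lb_j + r * (ub_j - lb_j) with r ~ U[0, 1],
    giving the N x m population matrix X of Eq. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return lb + rng.random((N, m)) * (ub - lb)

# Example: N = 5 candidate solutions of an m = 3 variable problem
X = initialize_population(5, 3, [-10, -10, -10], [10, 10, 10])
objective = lambda x: float(np.sum(x ** 2))   # e.g. the sphere function
F = np.array([objective(x) for x in X])       # Eq. (3): objective vector
```

Each row of `X` is one candidate solution X_i; `F` holds the corresponding objective values F_i.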
The process of updating candidate solutions in STBO is performed in three phases: (i) training, (ii) imitation of the instructor’s skills, and (iii) practice.

Phase 1: Training (exploration)

The first phase of updating STBO members is based on simulating the process of selecting a training instructor and acquiring sewing skills from that instructor. For each STBO member, regarded as a beginner tailor, all other members with a better value of the objective function are considered possible training instructors for that member. The set of all candidate training instructors for each STBO member is defined by Eq. (4),

CI_i = {X_k : F_k < F_i, k ∈ {1, 2, …, N}} ∪ {X_best},   (4)

where CI_i is the set of all possible candidate training instructors for the ith STBO member. In the case where X_i is itself the best population member, the only possible candidate training instructor is X_best, i.e., CI_i = {X_best}. Then, for each X_i, a member of the set CI_i is randomly selected as the training instructor of the ith STBO member and denoted SI_i. This selected instructor teaches sewing skills to the ith STBO member. Guiding the members of the population under the direction of instructors allows the STBO population to scan different areas of the search space to identify the main optimal area; this update phase therefore demonstrates the exploration ability of the proposed approach in global search. First, a new position for each population member is generated using Eq. (5),

x^{P1}_{i,d} = x_{i,d} + r_{i,d} · (SI_{i,d} − I_{i,d} · x_{i,d}),   (5)

where x^{P1}_{i,d} is the dth dimension of the new position X^{P1}_i, F^{P1}_i is its objective function value, the I_{i,d} are numbers selected randomly from the set {1, 2}, and the r_{i,d} are random numbers from the interval [0, 1]. Then, if this new position improves the objective function value, it replaces the member's previous position. This update condition is modeled by Eq. (6),

X_i ← X^{P1}_i, if F^{P1}_i < F_i; otherwise, X_i is kept,   (6)

where X^{P1}_i is the new position of the ith STBO member based on the first phase of STBO.
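The instructor selection and greedy training update of this phase can be sketched as follows (Python/NumPy; the multiplier set {1, 2} for I is our reading of the description, and the helper names are illustrative):

```python
import numpy as np

def training_phase(X, F, objective, rng):
    """Phase 1 sketch: each member learns from a randomly selected better
    member (its instructor SI_i, Eq. (4)); the update follows the form of
    Eq. (5), and the greedy rule of Eq. (6) keeps only improvements."""
    N, m = X.shape
    best = int(np.argmin(F))
    for i in range(N):
        better = np.flatnonzero(F < F[i])       # candidate instructors CI_i
        k = rng.choice(better) if better.size else best
        r = rng.random(m)                       # r in [0, 1]
        I = rng.integers(1, 3, m)               # I in {1, 2} (assumption)
        x_new = X[i] + r * (X[k] - I * X[i])    # Eq. (5)
        f_new = objective(x_new)
        if f_new < F[i]:                        # Eq. (6): greedy replacement
            X[i], F[i] = x_new, f_new
    return X, F
```

The greedy acceptance guarantees that no member's objective value ever worsens during this phase.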

Phase 2: Imitation of the instructor skills (exploration)

The second phase of updating STBO members is based on simulating the beginner tailors' attempts to mimic the skills of their instructors. In the design of STBO, it is assumed that each beginner tailor tries to bring his or her sewing skills as close as possible to the level of the instructor. Given that each STBO member is a vector of dimension m and each component represents a decision variable, in this phase of STBO it is assumed that each decision variable represents a sewing skill. Each STBO member imitates skills of the chosen instructor SI_i. This process moves the population of the algorithm to different areas of the search space, which indicates the STBO exploration ability. The set of variables that each STBO member imitates (i.e., the set of skills of the training instructor) is specified in Eq. (7),

D_i ⊆ {1, 2, …, m},  |D_i| = q,   (7)

where D_i is a randomly selected q-combination of the set {1, 2, …, m}, representing the set of indexes of the decision variables (i.e., skills) that the ith member imitates from the instructor SI_i; q, which depends on the iteration counter t and the total number of iterations T, is the number of skills selected to mimic. The new position for each STBO member is calculated based on the simulation of imitating these instructor skills, using Eq. (8),

x^{P2}_{i,d} = SI_{i,d} if d ∈ D_i, and x^{P2}_{i,d} = x_{i,d} otherwise,   (8)

where X^{P2}_i is the newly generated position for the ith STBO member based on the second phase of STBO and x^{P2}_{i,d} is its dth dimension. This new position replaces the previous position of the corresponding member if it improves the value of the objective function, as modeled in Eq. (9),

X_i ← X^{P2}_i, if F^{P2}_i < F_i; otherwise, X_i is kept,   (9)

where F^{P2}_i is the objective function value of X^{P2}_i.
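A minimal sketch of the imitation phase (Python/NumPy; shrinking the number of imitated skills q linearly in t/T is our assumption, since the text only states that q depends on t and T):

```python
import numpy as np

def imitation_phase(X, F, objective, t, T, rng):
    """Phase 2 sketch: each member copies a random subset D_i of the
    instructor's decision variables ("skills", Eqs. (7)-(8)) and keeps
    the result only if it improves (Eq. (9))."""
    N, m = X.shape
    best = int(np.argmin(F))
    for i in range(N):
        better = np.flatnonzero(F < F[i])
        k = rng.choice(better) if better.size else best
        q = max(1, int(np.ceil(m * (1 - t / T))))   # skills to mimic (assumed)
        D = rng.choice(m, size=q, replace=False)    # Eq. (7): skill indexes
        x_new = X[i].copy()
        x_new[D] = X[k, D]                          # Eq. (8): imitate skills
        f_new = objective(x_new)
        if f_new < F[i]:                            # Eq. (9)
            X[i], F[i] = x_new, f_new
    return X, F
```

Early iterations copy many skills (large moves toward the instructor), while later iterations copy only a few, gradually shifting from exploration to refinement.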

Phase 3: Practice (exploitation)

The third phase of updating STBO members is based on simulating the beginner tailors' practice to improve their sewing skills. In this phase of the STBO design, a local search is performed around the candidate solutions with the goal of finding better solutions near them. This phase represents the exploitation capability of the proposed algorithm in local search. To model this STBO phase mathematically (with a correction that keeps all newly computed population members inside the given search space), a new position around each member of the STBO is first generated using Eq. (10), where X^{P3}_i is the newly generated position for the ith STBO member based on the third phase of STBO, x^{P3}_{i,d} is its dth dimension, F^{P3}_i is its objective function value, and r is a random number from the interval [0, 1]. Then, if the value of the objective function improves, this new position replaces the previous position of the STBO member according to Eq. (11),

X_i ← X^{P3}_i, if F^{P3}_i < F_i; otherwise, X_i is kept.   (11)
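Since the exact step rule of Eq. (10) is not reproduced here, the following sketch uses a generic shrinking random step around each member, clipped back into the search space, consistent with the description of this phase (Python/NumPy; the step formula is an assumption):

```python
import numpy as np

def practice_phase(X, F, objective, t, lb, ub, rng):
    """Phase 3 sketch: a small random perturbation around each member,
    shrinking as the iteration counter t grows and clipped back into
    [lb, ub], followed by the greedy acceptance of Eq. (11)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    N, m = X.shape
    for i in range(N):
        step = (1 - 2 * rng.random(m)) * (ub - lb) / t   # shrinking step (assumed)
        x_new = np.clip(X[i] + step, lb, ub)             # stay inside bounds
        f_new = objective(x_new)
        if f_new < F[i]:                                 # Eq. (11)
            X[i], F[i] = x_new, f_new
    return X, F
```

Dividing the step by t makes the neighborhood shrink over the run, which is the usual way such practice/local-search phases trade exploration for exploitation.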

Repetition process and pseudo-code of STBO

The first STBO iteration is completed after updating all candidate solutions based on the first to third phases. The update process is then repeated, based on Eqs. (4) to (11), until the last iteration of the algorithm. After the full run of STBO on the given problem, the best candidate solution recorded during the algorithm's iterations is returned as the solution to the problem. Finally, the STBO implementation steps are presented as pseudo-code in Algorithm 1.
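Putting the three phases together, the overall loop described above can be sketched end to end as follows (Python/NumPy; a toy illustration of the update scheme, not the authors' reference implementation — the phase-2 skill-count schedule and the phase-3 step rule are our assumptions):

```python
import numpy as np

def stbo(objective, lb, ub, N=20, T=100, seed=0):
    """Toy sketch of the STBO loop: initialization, then per-member
    training, imitation, and practice updates with greedy replacement."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    m = lb.size
    X = lb + rng.random((N, m)) * (ub - lb)        # Eq. (2): random start
    F = np.array([objective(x) for x in X])        # Eq. (3): objective vector

    def greedy(i, x_new):                          # Eqs. (6)/(9)/(11)
        f_new = objective(x_new)
        if f_new < F[i]:
            X[i], F[i] = x_new, f_new

    def instructor(i):                             # Eq. (4): a better member
        better = np.flatnonzero(F < F[i])
        return rng.choice(better) if better.size else int(np.argmin(F))

    for t in range(1, T + 1):
        for i in range(N):
            # Phase 1: training (multipliers I in {1, 2} assumed)
            k = instructor(i)
            r, I = rng.random(m), rng.integers(1, 3, m)
            greedy(i, X[i] + r * (X[k] - I * X[i]))
            # Phase 2: imitate q instructor skills (shrinking q assumed)
            k = instructor(i)
            q = max(1, int(np.ceil(m * (1 - t / T))))
            D = rng.choice(m, size=q, replace=False)
            x_new = X[i].copy()
            x_new[D] = X[k, D]
            greedy(i, x_new)
            # Phase 3: practice (shrinking step, clipped to bounds, assumed)
            step = (1 - 2 * rng.random(m)) * (ub - lb) / t
            greedy(i, np.clip(X[i] + step, lb, ub))

    i_best = int(np.argmin(F))
    return X[i_best], F[i_best]

# Usage: minimize the sphere function on [-10, 10]^5
x_best, f_best = stbo(lambda x: float(np.sum(x * x)), [-10] * 5, [10] * 5)
```

Because every phase uses greedy replacement, the best recorded solution can only improve from iteration to iteration, matching the guarantee stated above.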

Computational complexity of STBO

In this subsection, the computational complexity of STBO is investigated. Since the most time-consuming step of the entire algorithm is the evaluation of the objective function, which is very costly in most real applications, the computational complexity of STBO can be estimated from the number of population members generated by the algorithm. STBO initialization has a computational complexity equal to O(N · m), where N is the number of STBO members and m is the number of problem variables. In each STBO iteration, every candidate solution is updated in three phases; thus, the computational complexity of the STBO update process is equal to O(3 · N · m · T), where T is the number of iterations of the algorithm. As a result, the total computational complexity of STBO is equal to O(N · m · (1 + 3 · T)).

Simulation Studies and Results

In this section, the ability of the proposed STBO algorithm to solve optimization problems and provide suitable solutions is evaluated. In this regard, fifty-two standard benchmark functions, consisting of twenty-three objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types and twenty-nine benchmark functions from the CEC 2017 test suite[77], are employed to test the STBO optimization capability[29]. The performance of STBO is compared with that of eleven well-known metaheuristic algorithms: GA, PSO, GSA, MPA, WOA, TLBO, RSA, MVO, GWO, AVOA, and TSA. Each of the competing metaheuristic algorithms and STBO is executed in twenty independent runs, where each run contains 1000 iterations. The implementation results of the metaheuristic algorithms are reported using six statistical indicators: mean, standard deviation (std), best, worst, median, and rank. The mean rank is considered the ranking criterion of the performance of the optimization algorithms for each objective function. The values of the control parameters of the competitor metaheuristic algorithms are listed in Table 2.
Table 2

Assigned values to the control parameters of competitor algorithms.

Algorithm | Parameter | Value
AVOA | L1, L2 | 0.8, 0.2
 | W | 2.5
 | P1, P2, P3 | 0.6, 0.4, 0.6
RSA | Sensitive parameter β | 0.01
 | Sensitive parameter α | 0.1
 | Evolutionary sense (ES) | Randomly decreasing values between 2 and −2
MPA | Binary vector U | 0 or 1
 | Random vector R | Vector of uniform random numbers in [0, 1]
 | Constant number P | 0.5
 | Fish Aggregating Devices (FADs) | FADs = 0.2
TSA | c1, c2, c3 | Random numbers in the interval [0, 1]
 | Pmin | 1
 | Pmax | 4
WOA | l | Random number in [−1, 1]
 | r | Random vector in [0, 1]
 | Convergence parameter (a) | Linear reduction from 2 to 0
GWO | Convergence parameter (a) | Linear reduction from 2 to 0
MVO | Wormhole existence probability (WEP) | Min(WEP) = 0.2 and Max(WEP) = 1
 | Exploitation accuracy over the iterations (p) | p = 1
TLBO | rand | Random number from the interval [0, 1]
 | TF (teaching factor) | TF = round(1 + rand)
GSA | Alpha | 20
 | G0 | 100
 | Rnorm | 2
 | Rpower | 1
PSO | Velocity limit | 10% of dimension range
 | Topology | Fully connected
 | Inertia weight | Linear reduction from 0.9 to 0.1
 | Cognitive and social constants | (C1, C2) = (2, 2)
GA | Type | Real coded
 | Mutation | Gaussian (probability = 0.05)
 | Crossover | Whole arithmetic (probability = 0.8)
 | Selection | Roulette wheel (proportionate)

Evaluation of unimodal benchmark functions

The results of optimization of unimodal functions F1 to F7 using STBO and competitor algorithms are reported in Table 3. The optimization results show that STBO provides the exact optimal solution for functions F1 to F6. For optimization of function F7, STBO is the best optimizer compared to competing algorithms. The simulation results show that STBO has outperformed competitor algorithms in handling the F1 to F7 unimodal functions and has been ranked first among the compared algorithms.
Table 3

Evaluation results on unimodal functions.

Function | Statistic | GA | PSO | GSA | TLBO | MVO | GWO | WOA | TSA | MPA | RSA | AVOA | STBO
F1 | Mean | 33.35136 | 0.181599 | 1.09E−16 | 9.67E−74 | 0.160829 | 7.38E−59 | 1.50E−154 | 2.30E−47 | 8.66E−50 | 0 | 0 | 0
 | Best | 19.78438 | 3.14E−05 | 5.38E−17 | 4.45E−77 | 0.092366 | 4.54E−61 | 2.90E−169 | 4.07E−50 | 3.59E−52 | 0 | 0 | 0
 | Worst | 56.28509 | 3.274664 | 2.92E−16 | 1.07E−72 | 0.252881 | 5.46E−58 | 2.60E−153 | 1.26E−46 | 5.74E−49 | 0 | 0 | 0
 | Std | 8.395292 | 0.729657 | 5.21E−17 | 2.84E−73 | 0.040725 | 1.36E−58 | 5.80E−154 | 4.15E−47 | 1.43E−49 | 0 | 0 | 0
 | Median | 33.26899 | 0.007379 | 1.02E−16 | 2.37E−75 | 0.160607 | 1.43E−59 | 1.80E−161 | 3.45E−48 | 2.18E−50 | 0 | 0 | 0
 | Rank | 10 | 9 | 7 | 3 | 8 | 4 | 2 | 6 | 5 | 1 | 1 | 1
F2 | Mean | 3.188112 | 1.419726 | 5.33E−08 | 1.11E−38 | 0.252866 | 7.55E−35 | 7.60E−104 | 2.32E−28 | 5.72E−28 | 0 | 1.85E−267 | 0
 | Best | 1.869152 | 0.080876 | 3.98E−08 | 1.43E−39 | 0.150256 | 7.14E−36 | 7.70E−113 | 1.68E−30 | 3.30E−30 | 0 | 0 | 0
 | Worst | 4.515361 | 13.77933 | 8.14E−08 | 6.19E−38 | 0.471703 | 2.23E−34 | 5.80E−103 | 3.62E−27 | 3.98E−27 | 0 | 3.71E−266 | 0
 | Std | 0.693334 | 3.034488 | 1.02E−08 | 1.58E−38 | 0.069403 | 7.49E−35 | 1.60E−103 | 8.02E−28 | 9.79E−28 | 0 | 0 | 0
 | Median | 3.32271 | 0.434611 | 5.30E−08 | 4.37E−39 | 0.239383 | 3.76E−35 | 6.30E−107 | 2.23E−29 | 2.02E−28 | 0 | 1.61E−289 | 0
 | Rank | 11 | 10 | 8 | 4 | 9 | 5 | 3 | 6 | 7 | 1 | 2 | 1
F3 | Mean | 2125.752 | 1094.128 | 480.8968 | 2.60E−24 | 12.60085 | 9.67E−14 | 22,774.89 | 2.13E−11 | 8.45E−13 | 0 | 0 | 0
 | Best | 1111.139 | 29.63701 | 218.105 | 3.17E−28 | 4.866897 | 2.26E−19 | 8159.212 | 8.49E−20 | 1.16E−19 | 0 | 0 | 0
 | Worst | 2997.68 | 5273.106 | 804.0185 | 2.31E−23 | 20.88111 | 1.92E−12 | 37,690.63 | 2.08E−10 | 7.29E−12 | 0 | 0 | 0
 | Std | 495.7954 | 1618.416 | 149.9243 | 5.49E−24 | 4.434706 | 4.29E−13 | 8502.149 | 6.29E−11 | 1.75E−12 | 0 | 0 | 0
 | Median | 2238.566 | 428.5796 | 458.5233 | 2.42E−25 | 13.20446 | 3.96E−17 | 21,453.44 | 2.66E−14 | 3.20E−14 | 0 | 0 | 0
 | Rank | 9 | 8 | 7 | 2 | 6 | 3 | 10 | 5 | 4 | 1 | 1 | 1
F4 | Mean | 3.210794 | 6.507623 | 0.993225 | 1.93E−30 | 0.611151 | 2.22E−14 | 37.28855 | 0.003514 | 2.71E−19 | 0 | 1.29E−264 | 0
 | Best | 2.113563 | 3.511675 | 1.27E−08 | 1.32E−31 | 0.284853 | 6.78E−16 | 1.331001 | 1.18E−05 | 7.45E−20 | 0 | 6.83E−306 | 0
 | Worst | 4.712766 | 10.14096 | 4.183409 | 5.42E−30 | 1.408459 | 1.06E−13 | 80.18979 | 0.016565 | 6.16E−19 | 0 | 2.09E−263 | 0
 | Std | 0.64983 | 1.970793 | 1.230111 | 1.65E−30 | 0.293951 | 3.30E−14 | 29.04785 | 0.004762 | 1.37E−19 | 0 | 0 | 0
 | Median | 3.135406 | 6.130021 | 0.579869 | 1.67E−30 | 0.563237 | 7.57E−15 | 26.40247 | 0.001683 | 2.83E−19 | 0 | 3.21E−286 | 0
 | Rank | 9 | 10 | 8 | 3 | 7 | 5 | 11 | 6 | 4 | 1 | 2 | 1
F5 | Mean | 420.6 | 112.2916 | 43.52647 | 26.53115 | 308.8081 | 26.83281 | 27.0278 | 28.46818 | 23.58215 | 10.13576 | 1.667E−05 | 0
 | Best | 227.4906 | 30.07967 | 25.05162 | 26.03302 | 27.29038 | 25.32729 | 26.49484 | 27.13541 | 23.00782 | 1.72E−28 | 3.862E−07 | 0
 | Worst | 688.7775 | 400.1077 | 177.7903 | 28.75063 | 2557.854 | 28.5481 | 27.97937 | 29.2537 | 24.95884 | 28.99011 | 6.931E−05 | 0
 | Std | 122.0165 | 85.24413 | 9.12394 | 0.592674 | 622.6938 | 0.947923 | 0.328332 | 0.570757 | 0.516114 | 14.17161 | 1.81E−05 | 0
 | Median | 386.0621 | 84.06704 | 26.38317 | 26.35271 | 31.67272 | 26.5829 | 26.96864 | 28.6545 | 23.39152 | 5.80E−26 | 9.586E−06 | 0
 | Rank | 12 | 10 | 9 | 5 | 11 | 6 | 7 | 8 | 4 | 3 | 2 | 1
F6 | Mean | 34.0323 | 0.028587 | 1.13E−16 | 1.18022 | 0.159222 | 0.640991 | 0.119577 | 3.83522 | 2.08E−09 | 6.617444 | 4.418E−08 | 0
 | Best | 14.51884 | 9.98E−06 | 4.11E−17 | 0.572861 | 0.084059 | 1.14E−05 | 0.011797 | 2.823119 | 9.15E−10 | 3.624796 | 4.244E−09 | 0
 | Worst | 71.07024 | 0.324785 | 2.43E−16 | 1.754355 | 0.247305 | 1.7184 | 0.366934 | 4.774542 | 6.27E−09 | 7.498843 | 1.142E−07 | 0
 | Std | 15.19874 | 0.074692 | 4.95E−17 | 0.316997 | 0.046291 | 0.371946 | 0.110996 | 0.566697 | 1.13E−09 | 1.018988 | 2.515E−08 | 0
 | Median | 29.14039 | 0.00087 | 1.02E−16 | 1.223701 | 0.16068 | 0.748235 | 0.082192 | 3.934263 | 1.91E−09 | 7.154133 | 4.048E−08 | 0
 | Rank | 12 | 5 | 2 | 9 | 7 | 8 | 6 | 10 | 3 | 11 | 4 | 1
F7 | Mean | 0.009847 | 0.166943 | 0.067112 | 0.001589 | 0.00965 | 0.00083 | 0.001159 | 0.004861 | 0.000518 | 0.000103 | 6.378E−05 | 1.24E−05
 | Best | 0.006283 | 0.082498 | 0.019988 | 0.000338 | 0.004902 | 0.000262 | 2.80E−06 | 0.002251 | 0.000114 | 1.21E−05 | 2.674E−06 | 2.33E−06
 | Worst | 0.01905 | 0.289493 | 0.255451 | 0.004146 | 0.020283 | 0.002168 | 0.005668 | 0.010801 | 0.001654 | 0.000304 | 0.000242 | 3.79E−05
 | Std | 0.003084 | 0.052796 | 0.050227 | 0.000938 | 0.003826 | 0.00054 | 0.001539 | 0.002418 | 0.000365 | 8.26E−05 | 6.295E−05 | 9.87E−06
 | Median | 0.009485 | 0.159235 | 0.05478 | 0.001583 | 0.008695 | 0.000638 | 0.000632 | 0.004281 | 0.00041 | 9.22E−05 | 4.037E−05 | 8.63E−06
 | Rank | 10 | 12 | 11 | 7 | 9 | 5 | 6 | 8 | 4 | 3 | 2 | 1
 | Sum rank | 73 | 64 | 52 | 33 | 57 | 36 | 45 | 49 | 31 | 21 | 14 | 7
 | Mean rank | 10.428571 | 9.1428571 | 7.4285714 | 4.7142857 | 8.1428571 | 5.1428571 | 6.4285714 | 7 | 4.4285714 | 3 | 2 | 1
 | Total rank | 12 | 11 | 9 | 5 | 10 | 6 | 7 | 8 | 4 | 3 | 2 | 1
Evaluation results on unimodal functions.
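The "Sum rank", "Mean rank", and "Total rank" rows that close each results table aggregate the per-function ranks of every algorithm: the total rank simply re-ranks the algorithms by their mean rank. A minimal sketch of that aggregation (the function name and input layout are illustrative, not code from the paper):

```python
def aggregate_ranks(per_function_ranks):
    """Combine one rank per benchmark function into the 'Sum rank',
    'Mean rank', and 'Total rank' rows: the total rank re-ranks the
    algorithms by their mean rank (rank 1 = best overall).
    `per_function_ranks` maps an algorithm name to its list of ranks."""
    sum_ranks = {a: sum(r) for a, r in per_function_ranks.items()}
    mean_ranks = {a: sum(r) / len(r) for a, r in per_function_ranks.items()}
    ordered = sorted(mean_ranks, key=mean_ranks.get)  # best mean rank first
    total_ranks = {a: position + 1 for position, a in enumerate(ordered)}
    return sum_ranks, mean_ranks, total_ranks
```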

Evaluation of high dimensional multimodal benchmark functions

The results obtained using STBO and the competitor algorithms in optimizing the high-dimensional multimodal functions F8 to F13 are presented in Table 4. Based on the results, STBO has obtained the exact optimal solution for functions F9 and F11. Furthermore, in solving functions F8, F10, F12, and F13, STBO has performed better than all competitor algorithms. Analysis of the simulation results indicates the superiority of STBO over the competing algorithms in handling the high-dimensional multimodal functions F8 to F13.
Table 4

Evaluation results on high-dimensional multimodal functions.

GA    PSO    GSA    TLBO    MVO    GWO    WOA    TSA    MPA    RSA    AVOA    STBO
F8Mean− 8551.34− 6891.6− 2463.53− 5320.45− 7820.6− 6233.81− 11,107.1− 5989.44− 9641.09− 5392.18− 12,027.8769− 12,269.7
Best− 9693.59− 8047.43− 3421.12− 6267.63− 9486.17− 7747.76− 12,569.3− 6812.45− 10,493.2− 5729.76− 12,566.471− 12,569.3
Worst− 7299.22− 5394.56− 1914.24− 4471.98− 6625.33− 4899.26− 8492.23− 4918.97− 8788.53− 4359.71− 10,717.161− 11,262.6
Std761.0887874.6514363.8174473.6453741.5584812.34971587.459638.8189434.3437350.8895499.0349695362.218
Median− 8611.96− 6843.87− 2484.38− 5243.46− 7737.48− 6303.59− 11,928.6− 6010.79− 9651.7− 5542.27− 12,232.4668− 12,395.1
Rank46111057283921
F9Mean58.6814169.5759126.018160108.6830.4647050170.19130000
Best32.8075432.8546613.92943075.696920098.829440000
Worst79.7012114.419940.793270166.25955.021230249.76710000
Std12.7011820.948236.554449024.780611.310176044.864060000
Med56.3089565.8437625.371440100.124700169.02330000
Rank453162171111
F10Mean3.6590852.8692418.55E−094.26E−150.9406691.72E−144.09E−151.5200554.26E−158.88E−168.88178E−168.88E−16
Best3.0456160.9789486.02E−098.88E−160.0878741.15E−148.88E−161.51E−148.88E−168.88E−168.88178E−168.88E−16
Worst4.3667784.1215091.33E−084.44E−152.9152342.22E−147.99E−153.5003474.44E−158.88E−168.88178E−168.88E−16
Std0.4116620.789121.91E−097.94E−160.9384973.53E−152.55E−151.5729117.94E−16000
Median3.7203953.0254068.15E−094.44E−150.7030591.51E−144.44E−151.2700024.44E−158.88E−168.88178E−168.88E−16
Rank985364273111
F11Mean1.5248980.3085638.93467600.4072930.000970.0030620.0088660000
Best1.2513050.0124465.46395300.2591120000000
Worst1.7912884.09066917.8842300.6039160.0194080.061240.0173660000
Std0.1436640.8996223.18025600.077690.004340.0136940.0057760000
Med1.4882230.0718777.92333700.415251000.009820000
Rank758162341111
F12Mean0.1553491.3913280.217310.0810320.8077350.0387260.0243896.4098071.64E−101.4084873.3717E−091.57E−32
Best0.0422390.0005584.30E−190.0403710.0016230.0129460.0008130.3337817.55E−110.9401086.677E−101.57E−32
Worst0.3453473.2043171.4946570.1381413.0019190.1053210.33071913.963283.53E−101.6297011.12808E−081.57E−32
Std0.0759241.033660.3921160.0255740.8228070.0273690.0726233.7615998.51E−110.241392.36553E−092.81E−48
Median0.14591.5004850.0576470.07870.4400930.0294340.0063515.5332011.38E−101.514962.84526E−091.57E−32
Rank710869541221131
F13Mean2.1602873.089760.0154480.9059290.0319280.5939070.2276613.07850.0016740.262.1597E−081.35E−32
Best0.9936630.0568656.03E−180.5396640.0102810.2747150.0390341.8725471.24E−091.73E−312.0205E−091.35E−32
Worst4.18669212.95620.2540171.5773930.136471.1056150.4581683.932040.0109872.91.07149E−071.35E−32
Std0.7018653.2286470.0570340.2300280.0269550.2254430.1430380.5580130.0040150.806162.57234E−082.81E−48
Med2.1940292.1466091.23E−170.8863160.0260030.588150.2394233.0527453.12E−091.46E−301.02427E−081.35E−32
Rank101249586113721
Sum rank42474031382919501431106
Mean rank77.83346.66675.16666.33334.83333.16678.33332.33335.16661.66671
Total rank91086754113621

Evaluation of fixed dimensional multimodal benchmark functions

The results of applying STBO and the competitor algorithms to the fixed-dimensional multimodal functions F14 to F23 are reported in Table 5. The optimization results show that, compared to the competitor algorithms, STBO is the best optimizer for benchmark functions F14, F15, and F18. In optimizing functions F16, F17, and F19 to F23, the proposed STBO and some competitor algorithms reach the same value of the "mean" index; however, STBO is still the more efficient performer on these functions because it achieves better values of the "std" index. The simulation results show that STBO outperforms the competitor algorithms in solving the fixed-dimensional functions F14 to F23.
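The tie-breaking rule described above — rank by the "mean" index and fall back to the "std" index on exact ties — can be sketched as follows (the function name and input layout are illustrative):

```python
def rank_algorithms(results):
    """Rank algorithms on one benchmark function by their 'mean'
    indicator, breaking exact ties with the 'std' indicator.
    `results` maps an algorithm name to a (mean, std) pair;
    returns {name: rank} with rank 1 for the best algorithm."""
    # Sorting on the (mean, std) tuple compares means first and
    # falls back to the std only when the means are exactly equal.
    ordered = sorted(results, key=lambda name: results[name])
    return {name: position + 1 for position, name in enumerate(ordered)}
```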
Table 5

Evaluation results on fixed-dimensional multimodal functions.

GA    PSO    GSA    TLBO    MVO    GWO    WOA    TSA    MPA    RSA    AVOA    STBO
F14Mean0.9981023.2129193.5644560.9980070.9980044.2214223.885826.565280.9980044.4695221.146910.998004
Best0.9980040.9980040.9980040.9980040.9980040.9980040.9980040.9980040.9980041.0023090.9980040.998004
Worst0.99872110.763188.8408360.9980340.99800412.6705110.7631812.670510.99800410.763182.9821050.998004
Std0.0002132.885482.1896736.96E−067.30E−124.6528664.1468984.891711.82E−103.0846370.4856510
Median0.9980042.4870682.8068960.9980040.9980041.9900541.4950174.9485480.9980042.9821050.9980040.998004
Rank578421091231161
F15Mean0.012730.0016380.0021560.0033750.0075760.0013570.0006930.0085090.0003070.0013050.0004290.000307
Best0.0007670.0003070.0009230.0003090.0003360.0003070.0003080.0003080.0003070.0007270.0003080.000307
Worst0.0260920.0203630.003520.0203640.0203630.0203630.0022520.0209420.0003070.0026010.0012230.000307
Std0.0105790.0044420.0004830.0073250.0096290.0044780.0005230.0100562.92E−190.0005360.0002312.99E−19
Med0.0125940.0003070.0020760.000320.0007550.0003080.0004750.0007790.0003070.0011490.0003220.000307
Rank127891064112531
F16Mean− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.02846− 1.03163− 1.02844− 1.03163− 1.03163
Best− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03162− 1.03163− 1.03163
Worst− 1.03161− 1.03163− 1.03163− 1.03162− 1.03163− 1.03163− 1.03163− 1− 1.03163− 1− 1.03163− 1.03163
Std4.37E−061.14E−161.25E−162.49E−064.97E−083.27E−095.93E−110.0097352.10E−100.0072567.20E−157.75E−16
Median− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03163− 1.03067− 1.03163− 1.03163
Rank611754283911
F17Mean0.5244110.5390130.3978870.4222070.3978870.3978880.3978880.3979060.3978870.6383070.3978870.397887
Best0.3978870.3978870.3978870.397890.3978870.3978870.3978870.3978880.3978870.3979010.3978870.397887
Worst2.7911862.7911840.3978870.8822910.3978880.3978940.3978910.3979710.3978875.0401080.3978870.397887
Std0.5343430.53870100.1082936.52E−081.63E−068.40E−072.48E−052.71E−091.03618700
Med0.3978920.3978870.3978870.3979720.3978870.3978880.3978870.3978940.3978870.4025510.3978870.397887
Rank8917354621011
F18Mean5.729191333.00000133.0000113.00000515.1500536.1696423.0000023
Best3.00004433333333333
Worst30.53809333.0000033.0000013.0000333.00003284.00069339.235783.0000123
Std8.392912.76E−152.92E−158.96E−072.54E−079.56E−068.89E−0629.674265.39E−149.8657713.08E−061.81E−16
Median3.00162833333.0000093.0000013.00000833.0000863.0000013
Rank102365981241171
F19Mean− 3.86228− 3.86278− 3.86278− 3.86203− 3.86278− 3.86096− 3.8602− 3.86273− 3.86278− 3.80846− 3.86278− 3.86278
Best− 3.86278− 3.86278− 3.86278− 3.8627− 3.86278− 3.86278− 3.86278− 3.86278− 3.86278− 3.85487− 3.86278− 3.86278
Worst− 3.85745− 3.86278− 3.86278− 3.8548− 3.86278− 3.8549− 3.85378− 3.86256− 3.86278− 3.68429− 3.86278− 3.86278
Std0.0014312.09E−151.87E−150.0017167.41E−080.0029480.0029535.09E−052.28E−150.047893.22E−132.78E−15
Med− 3.86277− 3.86278− 3.86278− 3.86249− 3.86278− 3.86246− 3.86139− 3.86275− 3.86278− 3.82667− 3.86278− 3.86278
Rank511637841921
F20Mean− 3.19552− 3.29822− 3.322− 3.23822− 3.2446− 3.23965− 3.2753− 3.23237− 3.322− 2.63831− 3.28617− 3.322
Best− 3.3214− 3.322− 3.322− 3.31043− 3.32199− 3.32199− 3.32198− 3.32137− 3.322− 3.15625− 3.322− 3.322
Worst− 2.99692− 3.2031− 3.322− 3.08169− 3.20259− 3.02064− 3.10782− 2.84− 3.322− 1.30322− 3.19994− 3.322
Std0.0935310.0487933.95E−160.0652460.0582640.0957190.0742620.1469054.20E−160.4172280.0561512.49E−16
Median− 3.18946− 3.322− 3.322− 3.1998− 3.20302− 3.26252− 3.32127− 3.31998− 3.322− 2.73165− 3.322− 3.322
Rank9217564811031
F21Mean− 5.89083− 5.77879− 6.17737− 5.84595− 8.51163− 9.64743− 8.36636− 6.52198− 10.1532− 5.0552− 10.1532− 10.1532
Best− 9.0381− 10.1532− 10.1532− 9.44872− 10.1532− 10.1531− 10.1529− 10.138− 10.1532− 5.0552− 10.1532− 10.1532
Worst− 2.34247− 2.63047− 2.63047− 3.80037− 2.63047− 5.10034− 5.05374− 2.60298− 10.1532− 5.0552− 10.1532− 10.1532
Std2.5125643.7035663.6991261.7695662.6237261.555062.4930263.3372971.95E−152.48E−076.57E−153.65E−15
Med− 6.83679− 2.68286− 3.51696− 5.02319− 10.1531− 10.1527− 10.1469− 5.04462− 10.1532− 5.0552− 10.1532− 10.1532
Rank81079435611121
F22Mean− 7.21825− 6.31807− 10.4029− 8.09591− 9.6056− 10.4024− 8.0395− 7.53629− 10.4029− 5.08767− 10.4029− 10.4029
Best− 10.1952− 10.4029− 10.4029− 9.92173− 10.4029− 10.4028− 10.4028− 10.3998− 10.4029− 5.08767− 10.4029− 10.4029
Worst− 2.62184− 1.83759− 10.4029− 4.21215− 5.08765− 10.4018− 2.76572− 1.82822− 10.4029− 5.08767− 10.4029− 10.4029
Std2.4724413.8370312.97E−151.6992951.9472210.000283.0343413.4830423.65E−155.80E−072.41E−142.88E−15
Median− 7.89012− 4.40599− 10.4029− 8.81648− 10.4029− 10.4025− 10.3974− 10.1566− 10.4029− 5.08767− 10.4029− 10.4029
Rank91026547811131
F23Mean− 5.78525− 5.62285− 10.5364− 8.30576− 9.99556− 9.72457− 8.77684− 5.46265− 10.5364− 5.12847− 10.5364− 10.5364
Best− 10.417− 10.5364− 10.5364− 9.98248− 10.5364− 10.5364− 10.5364− 10.4691− 10.5364− 5.12848− 10.5364− 10.5364
Worst− 2.38428− 2.42173− 10.5364− 4.06348− 5.12846− 2.42172− 2.42169− 1.67573− 10.5364− 5.12847− 10.5364− 10.5364
Std2.9668293.7558171.73E−151.4918561.6645112.4975223.1963533.7536242.51E−151.29E−067.08E−156.93E−16
Med− 6.05259− 3.35328− 10.5364− 8.76487− 10.5364− 10.536− 10.5338− 2.84687− 10.5364− 5.12847− 10.5364− 10.5364
Rank89174561031121
Sum rank805833684659578521983010
Mean rank85.83.36.84.65.95.78.52.19.831
Total rank107495861121231
The performance of STBO and the competitor algorithms in optimizing functions F1 to F23 is presented as boxplots in Fig. 1. Visual analysis of these boxplots shows that the proposed STBO approach provides superior and more effective performance than the competing algorithms, with better values of the statistical indicators on most benchmark functions.
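The statistical indicators summarized in the tables and boxplots (mean, best, worst, std, median over independent runs) can be computed as below; `run_statistics` is an illustrative helper, assuming `best_values` holds the best objective value from each run of one algorithm on one function:

```python
import statistics

def run_statistics(best_values):
    """Summarize the best objective values collected over independent
    runs with the five indicators reported in Tables 3-5."""
    return {
        "Mean": statistics.mean(best_values),
        "Best": min(best_values),   # best run (minimization)
        "Worst": max(best_values),
        "Std": statistics.stdev(best_values),
        "Median": statistics.median(best_values),
    }
```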
Figure 1

Boxplot of performance of STBO and competitor algorithms in solving F1 to F23.


Statistical analysis

In this subsection, a statistical analysis is presented to further evaluate the performance of STBO against the competitor algorithms. The Wilcoxon rank-sum test[78] has been employed to determine whether the difference between the results obtained from STBO and each competitor algorithm is statistically significant. In this test, the p-value quantifies the significance of the difference between two data samples. The results of the Wilcoxon rank-sum test on the performance of STBO and the competitor algorithms are reported in Table 6. Wherever the p-value is less than 0.05, STBO has a statistically significant superiority over the corresponding competitor algorithm.
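The test compares the pooled ranks of the two result samples. A minimal pure-Python sketch of the two-sided rank-sum test using the standard normal approximation (in practice a library routine such as SciPy's would be used; this is only to show the mechanics):

```python
import math

def rank_sum_test(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum test (normal approximation).
    Returns the p-value for the null hypothesis that both samples
    come from the same distribution; p < 0.05 is read as a
    statistically significant difference."""
    n_a, n_b = len(sample_a), len(sample_b)
    pooled = sorted((v, i) for i, v in enumerate(sample_a + sample_b))
    # Assign average ranks, so tied values share the same rank.
    ranks = [0.0] * (n_a + n_b)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    w = sum(ranks[:n_a])                       # rank sum of sample A
    mu = n_a * (n_a + n_b + 1) / 2             # mean of W under H0
    sigma = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
    z = (w - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```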
Table 6

Wilcoxon rank-sum test results.

Compared algorithms    Unimodal    High-multimodal    Fixed-multimodal
STBO vs. AVOA          1.01E−24    1.96E−21           0.000145
STBO vs. RSA           1.01E−24    1.97E−21           0.001816
STBO vs. MPA           1.01E−24    1.97E−21           3.29E−11
STBO vs. TSA           1.01E−24    1.97E−21           0.000299
STBO vs. WOA           1.01E−24    1.04E−14           7.98E−21
STBO vs. MVO           1.01E−24    1.97E−21           4.09E−13
STBO vs. GWO           1.01E−24    7.8E−16            5.01E−07
STBO vs. TLBO          2.44E−24    9.08E−09           0.358845
STBO vs. GSA           1.01E−24    1.31E−20           1.44E−34
STBO vs. PSO           1.01E−24    1.04E−14           6.4E−10
STBO vs. GA            3.64E−11    1.63E−11           1.78E−12

Convergence analysis

In this subsection, the convergence behavior of the proposed STBO is analyzed in comparison with the competitor algorithms. The convergence curves of STBO and the competitor algorithms during the optimization of functions F1 to F23 are drawn in Fig. 2. In the optimization of the unimodal functions F1 to F7, which lack local optima, STBO converges towards better solutions through its strong local search and exploitation once it has identified the region of the optimal solution. In particular, in solving functions F1 to F6, STBO converges to the global optimum of these functions.
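A convergence curve of this kind is simply the best objective value found so far, recorded after every iteration. The sketch below uses a plain random search on the sphere function as a stand-in optimizer (it does not implement the STBO update rules); the list it records is what Fig. 2-style plots draw against the iteration counter:

```python
import random

def sphere(x):
    """Unimodal benchmark (F1-style): sum of squares, global optimum 0."""
    return sum(v * v for v in x)

def convergence_curve(dim=5, iterations=200, bound=10.0, seed=1):
    """Run a plain random search and record the best-so-far objective
    value after every iteration."""
    rng = random.Random(seed)
    best = float("inf")
    curve = []
    for _ in range(iterations):
        candidate = [rng.uniform(-bound, bound) for _ in range(dim)]
        best = min(best, sphere(candidate))
        curve.append(best)  # best value found up to this iteration
    return curve
```

By construction the recorded curve is non-increasing, which is why convergence plots of minimization runs always descend or plateau.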
Figure 2

Convergence curves of STBO and competitor algorithms in solving F1 to F23.

In the optimization of the high-dimensional multimodal functions F8 to F13, which have a large number of local optima, STBO's strong global search and exploration let it identify the region of the global optimum without getting stuck in local areas. As the iterations increase, STBO converges towards better solutions; in particular, for functions F9 and F11 the proposed approach, with its strong exploration and exploitation, converges to the global optimum. In the optimization of the fixed-dimensional multimodal functions F14 to F23, which have fewer local optima than F8 to F13, STBO's ability to balance exploration and exploitation yields good performance. In solving these functions, STBO first identifies the main optimal region through an effective global search and then, as the iteration count grows, converges towards suitable solutions using local search. The convergence analysis shows that the proposed STBO approach, by exploring, exploiting, and balancing the two during its iterations, handles functions F1 to F23 better than the competitor algorithms.

Scalability analysis

In this subsection, a scalability analysis is presented to evaluate the proposed STBO approach and the competitor algorithms under changes in the problem dimension. In this analysis, STBO and each competitor algorithm are applied to functions F1 to F13 for dimensions of 50, 100, 250, and 500. The results of the scalability analysis are reported in Table 7. The simulation results show that the efficiency of STBO does not decrease much as the problem dimension grows. Furthermore, the scalability analysis reveals that, compared to the competitor algorithms, the performance of the proposed STBO is least affected by increasing the dimensionality of the search space. This superiority stems from STBO's better balance of exploitation and exploration during the search process.
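The scalability experiment re-runs each algorithm on the same benchmark at several dimensions and reports one Mean/Std pair per dimension, mirroring the layout of Table 7. A minimal sketch with a random-search stand-in optimizer (not the STBO update rules; function names are illustrative):

```python
import random
import statistics

def sphere(x):
    return sum(v * v for v in x)

def random_search(objective, dim, iterations=300, bound=100.0, seed=None):
    """Stand-in optimizer: best value seen over uniform random samples."""
    rng = random.Random(seed)
    return min(
        objective([rng.uniform(-bound, bound) for _ in range(dim)])
        for _ in range(iterations)
    )

def scalability_table(dims=(50, 100, 250, 500), runs=5):
    """One Mean/Std pair of best values per problem dimension."""
    table = {}
    for m in dims:
        bests = [random_search(sphere, m, seed=run) for run in range(runs)]
        table[m] = {"Mean": statistics.mean(bests),
                    "Std": statistics.stdev(bests)}
    return table
```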
Table 7

The results of the scalability analysis of STBO.

m    Index    GA    PSO    GSA    TLBO    MVO    GWO    WOA    TSA    MPA    RSA    AVOA    STBO
F150Mean285.004334.269466.91E−165.57E−681.1710198.13E−441.80E−1519.37E−361.87E−46000
Std71.4438353.815184.58E−167.14E−680.2477241.43E−435.50E−1512.08E−354.78E−46000
100Mean2782.4962511.523673.41918.95E−6218.864532.57E−293.10E−1502.63E−253.16E−43000
Std322.43191007.265358.38971.83E−612.3052942.84E−299.10E−1503.68E−255.78E−43000
250Mean21,206.339,344.9413,078.493.85E−58798.46785.56E−182.00E−1481.69E−151.31E−40000
Std2180.1068289.2221581.3296.19E−5880.707533.34E−188.80E−1481.96E−152.05E−40000
500Mean69,817.39165,84540,569.36.55E−5612,935.451.39E−121.80E−1451.00E−104.64E−39000
Std3479.44512,186.532491.6121.55E−55846.74816.09E−135.50E−1451.08E−104.39E−39000
F250Mean12.20368.3598580.1526333.04E−3535.122524.52E−262.10E−1042.23E−222.14E−2601.10E−2020
Std1.3845194.6600240.4650523.51E−3569.378353.78E−264.00E−1046.44E−222.59E−26000
100Mean46.20699123.14236.5970082.04E−325.90E+154.70E−181.10E−1011.53E−165.52E−2501.50E−2080
Std4.24457388.202332.7180621.38E−322.49E+162.32E−183.00E−1011.72E−169.87E−25000
250Mean190.8026544.459977.087944.03E−306.06E+853.31E−111.07E−982.21E−111.03E−2307.90E−2110
Std9.9677720.692419.3520652.34E−302.63E+861.12E−114.63E−981.84E−112.51E−23000
500Mean485.24981112.2863.50E+2683.60E−293.10E+2115.96E−081.30E−992.02E−086.87E−2103.30E−2060
Std17.0865442.2750665,5352.37E−2965,5351.15E−085.90E−992.31E−083.02E−20000
F350Mean5048.5886169.9141821.1151.29E−19650.72836.38E−06124,414.60.0723522.50E−07000
Std1078.1294769.136448.10444.71E−19154.07392.16E−0529,498.350.1252046.11E−07000
100Mean18,325.0550,954.417927.233.10E−1529,633.4711.04603853,493.72220.5550.00407801.60E−2800
Std3075.16621,471.281361.3415.06E−154515.67219.81332168,872.81432.7560.011144000
250Mean136,756.2336,52857,829.843.66E−10322,673.69257.0316,878,229206,030.5107.533603.20E−2410
Std23,655.1475,834.618985.5851.56E−0921,061.85349.6611,098,41653,625.46202.0424000
500Mean668,892.51,223,667275,481.21.47E−081,317,906116,541.828,062,4411,035,7821525.20901.10E−1920
Std106,978.1359,536.268,207.095.34E−08103,418.559,137.889,301,240165,807.81673.877000
F450Mean5.96396619.538378.7779581.06E−274.0669821.82E−0966.507832.7316841.24E−1706.70E−1960
Std0.9789083.0719241.9282067.94E−281.3858322.88E−0927.761522.9124628.45E−18000
100Mean13.8037837.5155516.065941.35E−2541.922720.00337877.5996434.244774.35E−1609.20E−1930
Std1.3592873.832951.5037491.33E−256.7747040.00370623.4646312.327712.17E−16000
250Mean26.506353.8820122.085276.02E−2480.6653821.296177.1677796.014813.04E−1401.80E−2040
Std1.609055.3687871.8488651.18E−233.3414417.81648722.704113.3840942.07E−14000
500Mean36.9772467.4391426.760133.20E−2392.4321857.8216378.3449799.243455.46E−1301.80E−2010
Std1.9115974.6904841.6453122.84E−231.6463926.13867320.249630.223988.70E−13000
F550Mean2713.7191085.306131.546347.42296603.672347.4018347.4397248.3242144.0380846.492490.0001620
Std1305.1121169.98673.265520.9332920.30050.798050.542160.6566190.59063210.943660.0001660
100Mean125,110.4287,904.917,204.6697.811961237.21997.6369797.8016398.2933995.13598.963540.000460
Std43,089.33264,617.710,886.60.677834844.5550.6366750.379340.4886290.8439970.0666640.0005090
250Mean4,083,46821,300,7861,044,548248.078235,979.2247.8359247.1009247.9578245.7841248.98520.0075090
Std1,041,6845,177,053294,478.50.29910214,532.750.426630.3191890.5613260.7061120.0156490.0189460
500Mean21,818,1551.66E+085,695,001498.06253,316,483497.6689495.7569498.2494495.6662498.98860.0052120
Std2,517,88725,976,011825,086.10.269248471,390.80.1737360.4423750.486650.5287310.0030210.0060830
F650Mean305.61542.040670.0033163.6609921.0738962.2439010.3787366.4412740.00339511.465924.58E−050
Std77.128853.902690.0148310.5974250.2126810.4819150.2301270.9579140.0151411.6567542.24E−050
100Mean2716.1833158.425649.542412.3483118.441579.2868861.93771913.802420.81591924.706580.0146340
Std493.75572883.38352.07080.750962.2454340.7877820.9281621.008610.3629870.1669740.0557990
250Mean21,479.1440,123.412,867.1944.38734782.229838.471298.90879541.3349113.6531362.195090.1011210
Std1771.6786381.7231821.691.27103265.414191.2752852.2365281.6014171.084960.1201690.1831890
500Mean68,512.54166,510.440,708.14102.264713,141.7192.6148918.3225595.3828852.989124.72060.116780
Std4438.75814,455.793093.2431.840089725.78061.571355.1841061.8504981.9972870.063750.2343660
F750Mean0.0294690.5638450.1921320.0022510.0392890.0016230.0032680.0091270.0007868.80E−050.000121.21E−05
Std0.0060660.2456820.0708870.0015210.0121980.000810.0042750.0045420.0003959.50E−050.0001221.39E−05
100Mean0.2772416.3599272.0817730.0023650.1979490.0025090.0018040.0225330.000694.95E−050.0001281.04E−05
Std0.1092516.0514140.9595120.0013680.0280320.0012040.0016040.0080740.0002717.53E−050.0001091.01E−05
250Mean14.51263127.25569.683760.0030422.2681390.006450.0032570.0930880.0008858.38E−050.0001571.36E−05
Std2.09599748.3471315.682230.0013920.3435460.002410.0033050.0352960.0004217.11E−050.0001791.18E−05
500Mean152.93071238.266685.49450.00343141.511780.0107190.0022060.5358330.0009247.47E−050.0001711.67E−05
Std26.25327198.3499.688450.0023196.3511420.0033350.0029820.1229460.0004127.07E−050.000151.53E−05
F850Mean− 12,272.8− 10,802.7− 3403.91− 7204.86− 12,793.7− 9338.85− 19,090.3− 8745.76− 14,938.1− 9022.4− 20,353.6− 20,413
Std1103.9111130.198507.9481852.21931008.7641287.4062667.391807.3113715.9989202.8098524.941176.142
100Mean− 18,876.3− 18,284.3− 4539.53− 9438.36− 24,721− 16,403.4− 38,730.4− 14,103.8− 28,018.7− 17,227.2− 40,235− 41,150
Std1754.8962043.847746.9655961.53541632.2162974.6114599.436889.5166861.1371182.7741140.1191847.596
250Mean− 31,100.3− 36,587.6− 7604.78− 15,364.1− 55,415.3− 35,384.6− 87,802.5− 23,017.4− 57,688.4− 37,088.8− 96,183.8− 100,730
Std3266.8452315.2381353.5981984.6532685.692404.49414,060.161501.6851813.6122064.0866413.8154650.807
500Mean− 45,608.7− 55,847.5− 10,442.3− 22,573− 96,126.1− 61,887.1− 188,510− 32,328.1− 96,995.4− 66,008.1− 176,969− 201,718
Std3441.2773362.2342117.9962518.762539.4435243.80826,903.391972.4142350.4964893.54114,449.977420.386
F950Mean187.09116.643154.672950242.29610.497030359.07990000
Std25.4750930.121979.988632049.287471.530593059.853110000
100Mean591.5714340.1137133.93630572.94280.1892090897.13590000
Std39.832840.6896918.74462096.951930.846170115.53910000
250Mean2086.1631457.558693.86302156.6021.9525862.27E−142711.3630000
Std64.2299885.3137776.614620112.02583.1104181.02E−13271.960000
500Mean4646.9333828.0662231.81705422.0973.54494805356.4860000
Std112.9087150.4184112.01550163.68034.4026650486.94640000
F1050Mean5.3895975.5072520.2860754.44E−151.7548423.27E−143.91E−151.0133654.44E−158.88E−168.88E−168.88E−16
Std0.3791230.9495120.44991900.4795743.15E−152.09E−151.4248650000
100Mean7.9693410.843413.1556744.44E−155.0283151.11E−134.62E−157.55E−104.44E−158.88E−168.88E−168.88E−16
Std0.38311.366950.87076204.9733986.71E−152.15E−153.37E−090000
250Mean10.7428915.853867.1686244.44E−1518.977481.40E−104.62E−154.47E−094.44E−158.88E−168.88E−168.88E−16
Std0.3348660.6761880.37137404.0768665.24E−112.44E−153.19E−090000
500Mean12.2558917.352269.1458294.44E−1520.708525.21E−083.38E−155.11E−074.44E−158.88E−168.88E−168.88E−16
Std0.1749130.2157270.30323100.0708911.22E−082.33E−153.81E−070000
F1150Mean4.0524981.45734930.3382100.7613310.00097600.005810000
Std0.8035840.6486177.42728300.0610950.00436500.0074830000
100Mean26.3589721.7993297.6324601.1698930.00151400.0047430000
Std4.0943589.64788510.2071100.0240390.00475500.0075440000
250Mean193.6629353.57761104.1507.8949670.0013755.55E−180.0122570000
Std12.6307243.859247.367800.7130360.0061512.48E−170.0178720000
500Mean627.78521540.0374647.8650121.60260.00104700.0104330000
Std32.5443104.1384113.739309.1103490.00468300.021460000
F1250Mean1.7949427.9309161.7422680.1470942.7192250.0824470.0135836.8882168.49E−091.2548825.36E−079.42E−33
Std0.7112643.5282950.8946470.0366610.8411770.0316640.0142974.2827541.17E−080.2298053.80E−072.81E−48
100Mean8.18864832.715125.1339080.361748.2762080.241330.02139810.245390.0094441.2589810.0001984.71E−33
Std1.64097924.674391.3631920.0408641.8435430.047630.0229614.5404950.00390.1032010.0008791.40E−48
250Mean47,652.444,716,91721.886490.63351542.148830.5424160.02752265.16720.0726911.2250529.41E−051.88E−33
Std65,626.152,935,2106.0967150.0404277.2255880.049060.010868115.85120.0114640.0073450.0002843.51E−49
500Mean2,535,94490,916,183884.64640.824569116,501.90.7477530.03932646,7540.2034891.202299.54E−059.42E−34
Std1,052,76025,963,807986.51590.01669867,940.080.0332030.01802354,125.950.0190650.0028560.0002051.76E−49
F1350Mean12.0807739.2541614.538833.0174190.1842151.7746490.5854675.235220.0835631.6932851.48E−071.35E−32
Std3.2199588.19550311.086450.4448560.0543790.3352020.2639840.7163970.0689241.9794022.09E−072.81E−48
100Mean199.702129,222.48125.15328.16498880.097126.3242131.98603312.014466.0403189.8901242.54E−071.35E−32
Std225.071843,015.7644.230440.38157134.619980.4243630.9605951.0932162.5873550.030412.16E−072.81E−48
250Mean1,881,79429,219,35798,379.9624.24311546.753721.063855.335894137.225322.972824.900920.0050261.35E−32
Std1,046,9528,331,513103,971.40.37500571.907370.4195271.264721102.95330.3983690.0294120.0224762.81E−48
500Mean24,409,1014.10E+081,078,72649.817332,251,20645.9228311.089143764.66647.6337549.897660.0184811.35E−32
Std5,989,2481.06E+08392,617.90.067117781,787.80.5401983.4862423208.6320.4750350.0111920.0461672.81E−48

Evaluation of the CEC 2017 test suite benchmark functions

In this subsection, the performance of STBO in solving the complex optimization problems of the CEC 2017 test suite is evaluated. This test suite contains thirty standard benchmark functions: three unimodal functions (C17-F1 to C17-F3), seven multimodal functions (C17-F4 to C17-F10), ten hybrid functions (C17-F11 to C17-F20), and ten composition functions (C17-F21 to C17-F30). The C17-F2 function has been removed from the suite due to its unstable behavior. Full descriptions and details of the CEC 2017 test suite are available in the report[78]. The optimization results of the CEC 2017 test suite using the proposed STBO approach and the competitor algorithms are reported in Table 8. Based on these results, STBO is the best optimizer for functions C17-F1, C17-F4 to C17-F6, C17-F8, C17-F10 to C17-F21, C17-F23 to C17-F25, and C17-F27 to C17-F30. Analysis of the simulation results shows that the proposed STBO approach gives better results on most functions of the CEC 2017 test suite and therefore outperforms the competing algorithms in solving this suite. Also, the results of the Wilcoxon rank-sum test show that the superiority of STBO over the competitor algorithms on the CEC 2017 test suite is statistically significant. The performance of STBO and the competitor algorithms on the CEC 2017 test suite is presented as boxplot diagrams in Fig. 3. These diagrams intuitively show that STBO solves most benchmark functions of the CEC 2017 test suite more effectively, providing better results than the competitor algorithms.
Table 8

Evaluation results on the CEC 2017 test suite functions.

GA    PSO    GSA    TLBO    MVO    GWO    WOA    TSA    MPA    RSA    AVOA    STBO
C17-F1Mean1.00E+021.48E+031.32E+101.14E+052.87E+098.69E+069.43E+032.30E+057.67E+072.45E+023.33E+031.65E+07
Best1.00E+021.05E+029.16E+091.76E+023.93E+082.04E+064.43E+031.93E+045.38E+071.00E+027.03E+021.05E+07
Worst1.00E+023.34E+031.82E+104.52E+055.36E+092.43E+071.48E+046.81E+051.19E+085.29E+025.63E+032.48E+07
Std1.02E−051.38E+033.95E+092.30E+052.21E+091.07E+074.47E+033.18E+053.04E+072.03E+022.73E+036.38E+06
Median1.00E+021.24E+031.27E+101.82E+032.87E+094.22E+069.23E+031.11E+056.69E+071.76E+023.49E+031.53E+07
Rank131261185710249
C17-F3Mean3.00E+023.65E+021.01E+043.39E+021.27E+043.54E+033.00E+024.66E+037.63E+028.81E+033.00E+022.48E+04
Best3.00E+023.00E+026.88E+033.00E+029.43E+031.24E+033.00E+022.58E+035.86E+024.05E+033.00E+021.80E+04
Worst3.00E+024.17E+021.74E+043.95E+021.54E+047.10E+033.00E+028.46E+039.28E+021.50E+043.00E+024.00E+04
Std1.57E−104.91E+014.98E+034.48E+013.04E+032.66E+035.88E−022.76E+031.80E+024.65E+032.49E−121.04E+04
Med3.00E+023.72E+028.11E+033.30E+021.31E+042.91E+033.00E+023.80E+037.69E+028.11E+033.00E+022.06E+04
Rank251041173869112
C17-F4Mean4.00E+024.23E+021.09E+034.04E+026.38E+024.24E+024.05E+024.18E+024.13E+024.07E+024.07E+024.16E+02
Best4.00E+024.00E+026.81E+024.00E+024.08E+024.08E+024.04E+024.07E+024.10E+024.07E+024.01E+024.13E+02
Worst4.00E+024.79E+021.92E+034.06E+021.09E+034.41E+024.06E+024.39E+024.20E+024.07E+024.11E+024.24E+02
Std7.08E−093.83E+015.78E+022.78E+003.25E+021.83E+019.96E−011.52E+015.12E+001.79E−014.35E+005.41E+00
Median4.00E+024.06E+028.77E+024.05E+025.26E+024.22E+024.05E+024.12E+024.11E+024.07E+024.08E+024.14E+02
Rank191221110386457
C17-F5Mean5.09E+025.43E+025.71E+025.20E+025.55E+025.57E+025.17E+025.15E+025.39E+025.48E+025.39E+025.32E+02
Best5.08E+025.36E+025.61E+025.12E+025.26E+025.30E+025.11E+025.09E+025.31E+025.37E+025.24E+025.27E+02
Worst5.11E+025.62E+025.90E+025.24E+025.91E+025.96E+025.23E+025.20E+025.50E+025.62E+025.72E+025.38E+02
Std1.31E+001.28E+011.34E+015.77E+003.20E+012.94E+015.24E+004.78E+007.78E+001.10E+012.24E+014.70E+00
Med5.09E+025.37E+025.67E+025.23E+025.52E+025.51E+025.17E+025.16E+025.38E+025.47E+025.31E+025.32E+02
Rank181241011326975
C17-F6Mean6.00E+026.21E+026.49E+026.00E+026.28E+026.32E+026.01E+026.01E+026.05E+026.25E+026.03E+026.08E+02
Best6.00E+026.11E+026.43E+026.00E+026.13E+026.17E+026.00E+026.00E+026.04E+026.14E+026.01E+026.05E+02
Worst6.00E+026.36E+026.56E+026.01E+026.48E+026.49E+026.02E+026.05E+026.07E+026.39E+026.06E+026.11E+02
Std3.09E−041.17E+015.42E+006.82E−011.61E+011.52E+017.69E−012.33E+001.33E+001.07E+012.33E+003.41E+00
Median6.00E+026.19E+026.49E+026.00E+026.26E+026.31E+026.01E+026.00E+026.05E+026.24E+026.03E+026.08E+02
Rank181221011346957
Columns (left to right): GA, PSO, GSA, TLBO, MVO, GWO, WOA, TSA, MPA, RSA, AVOA, STBO.

C17-F7
  Mean    7.22E+02  7.65E+02  8.07E+02  7.26E+02  7.92E+02  7.93E+02  7.28E+02  7.41E+02  7.59E+02  7.18E+02  7.46E+02  7.37E+02
  Best    7.19E+02  7.51E+02  7.97E+02  7.14E+02  7.69E+02  7.66E+02  7.23E+02  7.32E+02  7.55E+02  7.14E+02  7.31E+02  7.29E+02
  Worst   7.24E+02  7.82E+02  8.12E+02  7.42E+02  8.23E+02  8.12E+02  7.37E+02  7.49E+02  7.66E+02  7.22E+02  7.76E+02  7.40E+02
  Std     2.01E+00  1.37E+01  7.26E+00  1.22E+01  2.42E+01  2.18E+01  6.17E+00  7.53E+00  4.73E+00  3.18E+00  2.14E+01  5.52E+00
  Median  7.22E+02  7.64E+02  8.09E+02  7.23E+02  7.89E+02  7.96E+02  7.26E+02  7.41E+02  7.57E+02  7.17E+02  7.39E+02  7.39E+02
  Rank    2  9  12  3  10  11  4  6  8  1  7  5
C17-F8
  Mean    8.08E+02  8.29E+02  8.59E+02  8.10E+02  8.49E+02  8.33E+02  8.31E+02  8.16E+02  8.33E+02  8.19E+02  8.25E+02  8.23E+02
  Best    8.05E+02  8.21E+02  8.56E+02  8.07E+02  8.44E+02  8.14E+02  8.20E+02  8.13E+02  8.28E+02  8.16E+02  8.13E+02  8.17E+02
  Worst   8.10E+02  8.44E+02  8.62E+02  8.12E+02  8.55E+02  8.53E+02  8.62E+02  8.21E+02  8.37E+02  8.23E+02  8.39E+02  8.37E+02
  Std     2.25E+00  1.06E+01  3.14E+00  2.67E+00  5.22E+00  1.61E+01  2.11E+01  3.76E+00  4.03E+00  3.03E+00  1.13E+01  9.46E+00
  Median  8.09E+02  8.25E+02  8.58E+02  8.10E+02  8.49E+02  8.32E+02  8.21E+02  8.15E+02  8.35E+02  8.19E+02  8.24E+02  8.20E+02
  Rank    1  7  12  2  11  9  8  3  10  4  6  5
C17-F9
  Mean    9.00E+02  1.35E+03  1.44E+03  9.26E+02  1.60E+03  1.56E+03  9.00E+02  9.01E+02  9.49E+02  9.00E+02  9.59E+02  9.05E+02
  Best    9.00E+02  9.42E+02  1.14E+03  9.01E+02  1.01E+03  1.04E+03  9.00E+02  9.00E+02  9.28E+02  9.00E+02  9.02E+02  9.02E+02
  Worst   9.00E+02  1.80E+03  1.85E+03  9.93E+02  2.52E+03  2.51E+03  9.01E+02  9.03E+02  9.89E+02  9.00E+02  1.03E+03  9.07E+02
  Std     2.65E−08  3.61E+02  3.04E+02  4.53E+01  7.29E+02  6.63E+02  4.44E−01  1.34E+00  2.76E+01  0.00E+00  5.22E+01  2.20E+00
  Median  9.00E+02  1.34E+03  1.38E+03  9.05E+02  1.43E+03  1.35E+03  9.00E+02  9.00E+02  9.39E+02  9.00E+02  9.54E+02  9.06E+02
  Rank    2  9  10  6  12  11  3  4  7  1  8  5
C17-F10
  Mean    1.45E+03  2.25E+03  2.47E+03  1.84E+03  2.33E+03  2.25E+03  1.61E+03  1.71E+03  2.16E+03  2.67E+03  1.98E+03  1.78E+03
  Best    1.34E+03  1.92E+03  2.20E+03  1.12E+03  1.52E+03  1.89E+03  1.50E+03  1.61E+03  2.07E+03  2.23E+03  1.84E+03  1.53E+03
  Worst   1.61E+03  2.49E+03  2.78E+03  2.22E+03  2.74E+03  2.75E+03  1.78E+03  1.79E+03  2.23E+03  3.04E+03  2.28E+03  2.02E+03
  Std     1.22E+02  2.73E+02  2.78E+02  5.23E+02  5.64E+02  4.17E+02  1.34E+02  7.88E+01  8.62E+01  3.58E+02  2.11E+02  2.25E+02
  Median  1.42E+03  2.31E+03  2.46E+03  2.01E+03  2.53E+03  2.19E+03  1.59E+03  1.71E+03  2.16E+03  2.70E+03  1.90E+03  1.78E+03
  Rank    1  9  11  5  10  8  2  3  7  12  6  4
C17-F11
  Mean    1.10E+03  1.25E+03  3.99E+03  1.12E+03  1.29E+03  1.22E+03  1.13E+03  1.14E+03  1.15E+03  1.17E+03  1.14E+03  5.16E+03
  Best    1.10E+03  1.17E+03  2.13E+03  1.11E+03  1.15E+03  1.12E+03  1.11E+03  1.13E+03  1.13E+03  1.13E+03  1.11E+03  1.36E+03
  Worst   1.10E+03  1.36E+03  5.72E+03  1.12E+03  1.49E+03  1.41E+03  1.14E+03  1.16E+03  1.17E+03  1.20E+03  1.15E+03  1.05E+04
  Std     1.11E+00  8.99E+01  1.76E+03  4.12E+00  1.44E+02  1.28E+02  1.49E+01  1.01E+01  1.77E+01  3.22E+01  1.71E+01  4.07E+03
  Median  1.10E+03  1.23E+03  4.05E+03  1.11E+03  1.26E+03  1.18E+03  1.12E+03  1.14E+03  1.15E+03  1.17E+03  1.14E+03  4.38E+03
  Rank    1  9  11  2  10  8  3  5  6  7  4  12
C17-F12
  Mean    1.21E+03  1.24E+06  6.77E+07  3.04E+03  8.96E+07  9.35E+06  5.18E+05  1.52E+05  2.30E+06  8.96E+05  1.47E+04  1.69E+06
  Best    1.20E+03  3.82E+04  3.05E+07  1.67E+03  3.30E+05  5.80E+04  8.01E+03  4.19E+04  4.93E+05  9.83E+03  1.56E+03  1.72E+05
  Worst   1.24E+03  4.05E+06  1.04E+08  5.43E+03  3.54E+08  2.04E+07  1.29E+06  4.67E+05  3.64E+06  2.59E+06  2.45E+04  5.80E+06
  Std     1.84E+01  1.92E+06  3.39E+07  1.70E+03  1.80E+08  9.95E+06  6.30E+05  2.15E+05  1.45E+06  1.21E+06  1.02E+04  2.79E+06
  Median  1.20E+03  4.39E+05  6.79E+07  2.53E+03  1.92E+06  8.48E+06  3.84E+05  4.93E+04  2.53E+06  4.92E+05  1.64E+04  4.01E+05
  Rank    1  7  11  2  12  10  5  4  9  6  3  8
C17-F13
  Mean    1.31E+03  1.22E+04  4.09E+07  1.34E+03  1.62E+04  2.04E+04  1.35E+04  1.18E+04  7.10E+03  1.25E+04  5.58E+03  7.05E+04
  Best    1.30E+03  4.63E+03  1.15E+05  1.31E+03  7.21E+03  8.27E+03  2.36E+03  7.43E+03  4.00E+03  7.36E+03  2.15E+03  1.22E+04
  Worst   1.31E+03  2.02E+04  1.23E+08  1.36E+03  2.19E+04  3.46E+04  2.77E+04  1.87E+04  1.17E+04  1.60E+04  9.59E+03  1.69E+05
  Std     3.93E+00  6.58E+03  5.78E+07  2.26E+01  7.12E+03  1.12E+04  1.07E+04  5.14E+03  3.46E+03  3.86E+03  3.12E+03  7.46E+04
  Median  1.31E+03  1.19E+04  2.05E+07  1.35E+03  1.79E+04  1.95E+04  1.20E+04  1.05E+04  6.35E+03  1.33E+04  5.29E+03  5.06E+04
  Rank    1  6  12  2  9  10  8  5  4  7  3  11
C17-F14
  Mean    1.40E+03  2.55E+03  5.47E+03  1.43E+03  4.77E+03  1.58E+03  1.44E+03  2.83E+03  1.52E+03  5.57E+03  7.00E+03  6.76E+03
  Best    1.40E+03  2.01E+03  2.19E+03  1.40E+03  2.57E+03  1.50E+03  1.43E+03  1.48E+03  1.48E+03  2.07E+03  3.68E+03  1.87E+03
  Worst   1.40E+03  2.91E+03  8.08E+03  1.45E+03  5.60E+03  1.69E+03  1.44E+03  4.80E+03  1.56E+03  9.48E+03  1.15E+04  1.16E+04
  Std     1.92E+00  3.95E+02  2.57E+03  2.26E+01  1.49E+03  8.89E+01  5.16E+00  1.61E+03  3.77E+01  3.12E+03  3.32E+03  5.26E+03
  Median  1.40E+03  2.63E+03  5.80E+03  1.43E+03  5.44E+03  1.58E+03  1.44E+03  2.53E+03  1.53E+03  5.36E+03  6.43E+03  6.80E+03
  Rank    1  6  9  2  8  5  3  7  4  10  12  11
C17-F15
  Mean    1.50E+03  5.27E+03  9.25E+03  1.51E+03  1.47E+04  5.77E+03  1.56E+03  5.41E+03  1.79E+03  1.54E+04  4.46E+03  4.36E+03
  Best    1.50E+03  1.95E+03  4.96E+03  1.50E+03  4.16E+03  2.03E+03  1.54E+03  1.81E+03  1.69E+03  6.55E+03  2.27E+03  1.87E+03
  Worst   1.50E+03  8.28E+03  1.65E+04  1.52E+03  2.43E+04  1.51E+04  1.58E+03  7.23E+03  2.01E+03  2.03E+04  6.87E+03  7.09E+03
  Std     6.65E−02  2.67E+03  5.30E+03  7.02E+00  1.11E+04  6.36E+03  1.87E+01  2.54E+03  1.51E+02  6.25E+03  1.95E+03  2.87E+03
  Median  1.50E+03  5.43E+03  7.77E+03  1.51E+03  1.52E+04  2.98E+03  1.56E+03  6.30E+03  1.73E+03  1.74E+04  4.35E+03  4.23E+03
  Rank    1  7  10  2  11  9  3  8  4  12  6  5
C17-F16
  Mean    1.60E+03  1.84E+03  2.05E+03  1.69E+03  2.14E+03  1.86E+03  1.88E+03  1.75E+03  1.70E+03  2.21E+03  1.87E+03  1.82E+03
  Best    1.60E+03  1.75E+03  2.02E+03  1.60E+03  1.99E+03  1.66E+03  1.72E+03  1.61E+03  1.64E+03  2.16E+03  1.72E+03  1.75E+03
  Worst   1.60E+03  1.96E+03  2.07E+03  1.84E+03  2.37E+03  2.09E+03  2.03E+03  2.00E+03  1.86E+03  2.30E+03  1.97E+03  1.85E+03
  Std     2.75E−01  9.69E+01  2.05E+01  1.17E+02  1.74E+02  2.12E+02  1.28E+02  1.78E+02  1.05E+02  6.08E+01  1.20E+02  4.95E+01
  Median  1.60E+03  1.82E+03  2.05E+03  1.66E+03  2.10E+03  1.85E+03  1.88E+03  1.70E+03  1.66E+03  2.19E+03  1.90E+03  1.85E+03
  Rank    1  6  10  2  11  7  9  4  3  12  8  5
C17-F17
  Mean    1.72E+03  1.76E+03  1.86E+03  1.74E+03  1.86E+03  1.87E+03  1.80E+03  1.77E+03  1.76E+03  1.97E+03  1.86E+03  1.75E+03
  Best    1.71E+03  1.72E+03  1.82E+03  1.73E+03  1.80E+03  1.82E+03  1.73E+03  1.74E+03  1.76E+03  1.76E+03  1.77E+03  1.75E+03
  Worst   1.72E+03  1.81E+03  1.93E+03  1.75E+03  1.97E+03  1.92E+03  1.86E+03  1.80E+03  1.76E+03  2.15E+03  1.98E+03  1.76E+03
  Std     7.86E+00  4.00E+01  5.17E+01  8.37E+00  7.34E+01  4.15E+01  6.32E+01  2.90E+01  1.03E+00  1.64E+02  9.71E+01  2.51E+00
  Median  1.72E+03  1.75E+03  1.85E+03  1.74E+03  1.84E+03  1.86E+03  1.80E+03  1.77E+03  1.76E+03  1.98E+03  1.85E+03  1.75E+03
  Rank    1  4  8  2  10  11  7  6  5  12  9  3
C17-F18
  Mean    1.80E+03  2.35E+04  9.48E+07  1.83E+03  2.89E+04  2.46E+04  1.74E+04  2.31E+04  4.03E+04  1.20E+04  1.13E+04  9.29E+03
  Best    1.80E+03  8.83E+03  1.57E+06  1.81E+03  1.13E+04  3.35E+03  4.18E+03  7.85E+03  1.54E+04  8.16E+03  3.07E+03  4.53E+03
  Worst   1.80E+03  3.76E+04  3.69E+08  1.85E+03  3.77E+04  3.86E+04  3.82E+04  3.58E+04  5.63E+04  1.85E+04  1.74E+04  1.76E+04
  Std     5.82E−01  1.52E+04  1.87E+08  1.77E+01  1.25E+04  1.59E+04  1.66E+04  1.19E+04  1.80E+04  4.91E+03  6.76E+03  6.04E+03
  Median  1.80E+03  2.37E+04  4.22E+06  1.84E+03  3.34E+04  2.82E+04  1.36E+04  2.43E+04  4.48E+04  1.06E+04  1.24E+04  7.52E+03
  Rank    1  8  12  2  10  9  6  7  11  5  4  3
C17-F19
  Mean    1.90E+03  9.13E+03  6.79E+05  1.91E+03  7.35E+04  2.93E+05  2.02E+03  6.14E+03  2.16E+03  2.89E+04  1.14E+04  2.01E+04
  Best    1.90E+03  3.26E+03  1.61E+05  1.90E+03  2.01E+03  7.90E+03  1.93E+03  1.93E+03  2.04E+03  8.74E+03  5.50E+03  8.22E+03
  Worst   1.90E+03  1.53E+04  1.87E+06  1.92E+03  2.76E+05  1.12E+06  2.27E+03  1.20E+04  2.36E+03  5.38E+04  2.00E+04  2.99E+04
  Std     6.62E−02  5.07E+03  8.20E+05  6.11E+00  1.38E+05  5.62E+05  1.71E+02  5.14E+03  1.46E+02  2.03E+04  6.42E+03  1.15E+04
  Median  1.90E+03  8.96E+03  3.41E+05  1.91E+03  8.08E+03  2.23E+04  1.95E+03  5.29E+03  2.11E+03  2.66E+04  1.01E+04  2.11E+04
  Rank    1  6  12  2  10  11  3  5  4  9  7  8
C17-F20
  Mean    2.01E+03  2.12E+03  2.30E+03  2.02E+03  2.21E+03  2.26E+03  2.16E+03  2.05E+03  2.08E+03  2.33E+03  2.24E+03  2.05E+03
  Best    2.00E+03  2.03E+03  2.25E+03  2.02E+03  2.09E+03  2.07E+03  2.03E+03  2.03E+03  2.06E+03  2.20E+03  2.20E+03  2.04E+03
  Worst   2.02E+03  2.16E+03  2.36E+03  2.04E+03  2.43E+03  2.35E+03  2.26E+03  2.08E+03  2.12E+03  2.42E+03  2.27E+03  2.08E+03
  Std     1.04E+01  6.44E+01  5.25E+01  9.05E+00  1.52E+02  1.32E+02  9.92E+01  2.46E+01  2.61E+01  9.56E+01  3.57E+01  1.88E+01
  Median  2.01E+03  2.14E+03  2.30E+03  2.02E+03  2.17E+03  2.30E+03  2.17E+03  2.06E+03  2.08E+03  2.35E+03  2.24E+03  2.05E+03
  Rank    1  6  11  2  8  10  7  4  5  12  9  3
C17-F21
  Mean    2.20E+03  2.30E+03  2.31E+03  2.29E+03  2.35E+03  2.34E+03  2.30E+03  2.29E+03  2.30E+03  2.36E+03  2.32E+03  2.31E+03
  Best    2.20E+03  2.20E+03  2.25E+03  2.21E+03  2.34E+03  2.32E+03  2.20E+03  2.20E+03  2.21E+03  2.36E+03  2.31E+03  2.22E+03
  Worst   2.20E+03  2.36E+03  2.39E+03  2.32E+03  2.38E+03  2.35E+03  2.34E+03  2.32E+03  2.34E+03  2.37E+03  2.34E+03  2.34E+03
  Std     1.24E−05  6.89E+01  6.02E+01  5.53E+01  1.93E+01  1.50E+01  6.49E+01  6.17E+01  6.81E+01  5.99E+00  1.16E+01  5.84E+01
  Median  2.20E+03  2.32E+03  2.30E+03  2.31E+03  2.34E+03  2.34E+03  2.33E+03  2.32E+03  2.34E+03  2.36E+03  2.32E+03  2.33E+03
  Rank    1  5  7  2  11  10  4  3  6  12  9  8
C17-F22
  Mean    2.30E+03  2.31E+03  2.96E+03  2.31E+03  2.39E+03  2.32E+03  2.30E+03  2.31E+03  2.32E+03  2.30E+03  2.31E+03  2.32E+03
  Best    2.30E+03  2.30E+03  2.76E+03  2.30E+03  2.31E+03  2.31E+03  2.30E+03  2.30E+03  2.32E+03  2.30E+03  2.30E+03  2.32E+03
  Worst   2.30E+03  2.31E+03  3.25E+03  2.31E+03  2.47E+03  2.33E+03  2.30E+03  2.32E+03  2.33E+03  2.30E+03  2.35E+03  2.33E+03
  Std     4.86E−01  3.68E+00  2.15E+02  3.55E+00  9.17E+01  7.91E+00  9.78E−01  6.71E+00  7.38E+00  1.81E−01  2.38E+01  5.84E+00
  Median  2.30E+03  2.31E+03  2.92E+03  2.31E+03  2.39E+03  2.32E+03  2.30E+03  2.30E+03  2.33E+03  2.30E+03  2.30E+03  2.32E+03
  Rank    2  6  12  5  11  8  3  4  10  1  7  9
C17-F23
  Mean    2.61E+03  2.67E+03  2.70E+03  2.65E+03  2.69E+03  2.66E+03  2.61E+03  2.62E+03  2.64E+03  2.73E+03  2.64E+03  2.66E+03
  Best    2.61E+03  2.65E+03  2.67E+03  2.62E+03  2.67E+03  2.61E+03  2.61E+03  2.61E+03  2.62E+03  2.72E+03  2.61E+03  2.65E+03
  Worst   2.61E+03  2.69E+03  2.72E+03  2.67E+03  2.72E+03  2.69E+03  2.62E+03  2.64E+03  2.65E+03  2.75E+03  2.66E+03  2.68E+03
  Std     2.10E+00  1.90E+01  2.05E+01  2.11E+01  1.81E+01  3.45E+01  6.24E+00  1.63E+01  1.11E+01  1.00E+01  2.06E+01  1.47E+01
  Median  2.61E+03  2.67E+03  2.70E+03  2.65E+03  2.69E+03  2.66E+03  2.61E+03  2.62E+03  2.64E+03  2.73E+03  2.65E+03  2.66E+03
  Rank    1  9  11  6  10  7  2  3  4  12  5  8
C17-F24
  Mean    2.50E+03  2.78E+03  2.89E+03  2.64E+03  2.82E+03  2.80E+03  2.75E+03  2.75E+03  2.77E+03  2.74E+03  2.78E+03  2.77E+03
  Best    2.50E+03  2.76E+03  2.84E+03  2.50E+03  2.80E+03  2.75E+03  2.75E+03  2.74E+03  2.76E+03  2.50E+03  2.77E+03  2.77E+03
  Worst   2.50E+03  2.80E+03  2.93E+03  2.78E+03  2.85E+03  2.82E+03  2.76E+03  2.78E+03  2.77E+03  2.86E+03  2.78E+03  2.79E+03
  Std     4.88E−05  1.46E+01  3.77E+01  1.66E+02  2.29E+01  3.01E+01  4.03E+00  2.06E+01  5.94E+00  1.66E+02  3.92E+00  1.05E+01
  Median  2.50E+03  2.77E+03  2.90E+03  2.64E+03  2.82E+03  2.81E+03  2.75E+03  2.74E+03  2.77E+03  2.80E+03  2.78E+03  2.77E+03
  Rank    1  8  12  2  11  10  5  4  6  3  9  7
C17-F25
  Mean    2.90E+03  2.94E+03  3.38E+03  2.93E+03  3.13E+03  2.95E+03  2.90E+03  2.94E+03  2.93E+03  2.93E+03  2.93E+03  2.95E+03
  Best    2.90E+03  2.90E+03  3.35E+03  2.90E+03  2.94E+03  2.95E+03  2.90E+03  2.92E+03  2.91E+03  2.90E+03  2.90E+03  2.95E+03
  Worst   2.90E+03  2.95E+03  3.46E+03  2.95E+03  3.50E+03  2.96E+03  2.90E+03  2.95E+03  2.95E+03  2.94E+03  2.95E+03  2.96E+03
  Std     3.10E−07  2.38E+01  5.00E+01  2.38E+01  2.59E+02  5.22E+00  2.51E−01  1.57E+01  1.63E+01  2.24E+01  2.41E+01  2.74E+00
  Median  2.90E+03  2.95E+03  3.36E+03  2.95E+03  3.03E+03  2.95E+03  2.90E+03  2.95E+03  2.93E+03  2.94E+03  2.95E+03  2.95E+03
  Rank    1  7  12  5  11  9  2  8  3  4  6  10
C17-F26
  Mean    2.88E+03  2.97E+03  4.24E+03  3.26E+03  3.88E+03  3.26E+03  2.90E+03  2.96E+03  3.29E+03  4.13E+03  2.85E+03  3.02E+03
  Best    2.80E+03  2.82E+03  3.82E+03  2.90E+03  2.91E+03  2.83E+03  2.90E+03  2.90E+03  2.99E+03  3.57E+03  2.60E+03  2.91E+03
  Worst   2.90E+03  3.14E+03  4.79E+03  3.96E+03  4.77E+03  3.97E+03  2.90E+03  2.98E+03  4.17E+03  4.43E+03  3.02E+03  3.13E+03
  Std     5.10E+01  1.84E+02  4.62E+02  4.81E+02  7.79E+02  5.06E+02  4.02E−02  3.84E+01  5.98E+02  3.89E+02  1.98E+02  1.01E+02
  Median  2.90E+03  2.97E+03  4.17E+03  3.10E+03  3.93E+03  3.12E+03  2.90E+03  2.97E+03  3.00E+03  4.25E+03  2.89E+03  3.03E+03
  Rank    2  5  12  8  10  7  3  4  9  11  1  6
C17-F27
  Mean    3.09E+03  3.10E+03  3.17E+03  3.11E+03  3.17E+03  3.17E+03  3.09E+03  3.09E+03  3.11E+03  3.30E+03  3.12E+03  3.15E+03
  Best    3.09E+03  3.10E+03  3.14E+03  3.10E+03  3.14E+03  3.13E+03  3.09E+03  3.09E+03  3.09E+03  3.22E+03  3.10E+03  3.13E+03
  Worst   3.09E+03  3.10E+03  3.23E+03  3.13E+03  3.20E+03  3.21E+03  3.10E+03  3.09E+03  3.15E+03  3.38E+03  3.14E+03  3.18E+03
  Std     2.28E−01  3.21E+00  4.20E+01  1.62E+01  3.29E+01  3.43E+01  3.14E+00  2.68E+00  3.02E+01  7.96E+01  1.92E+01  1.79E+01
  Median  3.09E+03  3.10E+03  3.17E+03  3.11E+03  3.17E+03  3.18E+03  3.09E+03  3.09E+03  3.10E+03  3.29E+03  3.11E+03  3.15E+03
  Rank    1  4  11  6  9  10  3  2  5  12  7  8
C17-F28
  Mean    3.03E+03  3.33E+03  3.91E+03  3.31E+03  3.47E+03  3.31E+03  3.33E+03  3.24E+03  3.44E+03  3.47E+03  3.32E+03  3.20E+03
  Best    2.80E+03  3.10E+03  3.87E+03  3.10E+03  3.40E+03  3.19E+03  3.15E+03  3.18E+03  3.23E+03  3.42E+03  3.18E+03  3.17E+03
  Worst   3.10E+03  3.41E+03  3.95E+03  3.44E+03  3.60E+03  3.41E+03  3.41E+03  3.40E+03  3.73E+03  3.52E+03  3.41E+03  3.21E+03
  Std     1.53E+02  1.59E+02  3.76E+01  1.54E+02  9.20E+01  1.23E+02  1.25E+02  1.07E+02  2.16E+02  4.14E+01  1.05E+02  2.22E+01
  Median  3.10E+03  3.41E+03  3.91E+03  3.34E+03  3.44E+03  3.32E+03  3.38E+03  3.20E+03  3.40E+03  3.47E+03  3.34E+03  3.20E+03
  Rank    1  8  12  4  11  5  7  3  9  10  6  2
C17-F29
  Mean    3.14E+03  3.38E+03  3.34E+03  3.17E+03  3.31E+03  3.32E+03  3.23E+03  3.19E+03  3.23E+03  3.52E+03  3.29E+03  3.28E+03
  Best    3.13E+03  3.30E+03  3.21E+03  3.15E+03  3.29E+03  3.26E+03  3.18E+03  3.17E+03  3.18E+03  3.33E+03  3.21E+03  3.22E+03
  Worst   3.15E+03  3.44E+03  3.46E+03  3.19E+03  3.34E+03  3.40E+03  3.30E+03  3.20E+03  3.32E+03  3.70E+03  3.34E+03  3.34E+03
  Std     8.71E+00  6.37E+01  1.17E+02  1.68E+01  2.48E+01  6.04E+01  5.29E+01  1.18E+01  6.16E+01  1.64E+02  6.07E+01  5.18E+01
  Median  3.15E+03  3.38E+03  3.35E+03  3.16E+03  3.31E+03  3.32E+03  3.22E+03  3.19E+03  3.22E+03  3.52E+03  3.30E+03  3.27E+03
  Rank    1  11  10  2  8  9  4  3  5  12  7  6
C17-F30
  Mean    3.41E+03  5.29E+05  5.68E+06  5.60E+03  4.72E+06  3.20E+06  3.79E+05  8.36E+05  3.11E+04  1.81E+06  6.31E+05  2.25E+06
  Best    3.40E+03  1.05E+04  9.84E+05  3.64E+03  2.49E+06  2.86E+04  1.47E+04  8.10E+03  2.12E+04  3.06E+05  3.87E+03  2.28E+05
  Worst   3.43E+03  1.20E+06  1.85E+07  1.10E+04  8.48E+06  6.05E+06  1.47E+06  1.70E+06  4.45E+04  5.57E+06  1.85E+06  4.21E+06
  Std     1.83E+01  5.90E+05  8.71E+06  3.70E+03  2.79E+06  2.95E+06  7.39E+05  9.73E+05  1.09E+04  2.57E+06  8.87E+05  2.04E+06
  Median  3.40E+03  4.52E+05  1.63E+06  3.87E+03  3.95E+06  3.37E+06  1.77E+04  8.19E+05  2.94E+04  6.93E+05  3.34E+05  2.29E+06
  Rank    1  5  12  2  11  10  4  7  3  8  6  9

Sum rank    34  200  320  96  298  261  125  141  181  228  177  201
Mean rank   1.1724  6.8965  11.0344  3.3103  10.2758  9.0000  4.3103  4.8620  6.2413  7.8620  6.1034  6.9310
Total rank  1  7  12  2  11  10  3  4  6  9  5  8
P-value (STBO vs. each competitor)  6.882E−21  1.972E−21  1.289E−19  1.972E−21  1.972E−21  3.406E−20  3.881E−21  1.972E−21  1.803E−20  7.408E−20  1.972E−21
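The P-value row reflects a non-parametric significance test on the per-run results of STBO against each competitor. The excerpt does not name the exact test, so as an illustration only, here is a self-contained two-sided Wilcoxon rank-sum test using the normal approximation; the sample data are hypothetical:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via normal approximation (no ties assumed)."""
    n, m = len(a), len(b)
    # pool both samples, rank them, and sum the ranks of the first sample
    ranked = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    w = sum(i + 1 for i, (_, grp) in enumerate(ranked) if grp == 0)
    mean = n * (n + m + 1) / 2.0
    sd = math.sqrt(n * m * (n + m + 1) / 12.0)
    z = (w - mean) / sd
    # two-sided p-value from the standard normal tail
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

stbo_runs = [900.0, 900.1, 900.0, 900.2, 900.1]        # hypothetical per-run bests
rival_runs = [1350.2, 1420.5, 1298.7, 1501.3, 1377.9]  # hypothetical competitor runs
print(rank_sum_p(stbo_runs, rival_runs))  # well below 0.05: unlikely to be chance
```

With completely separated samples like these, the rank sum is extremal and the p-value is small; overlapping samples give p-values near 1.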
Figure 3

Boxplot of performance of STBO and competitor algorithms in solving the CEC 2017 test suite.


STBO for real-world applications

This section evaluates STBO's ability to solve optimization problems drawn from real-world applications. To this end, STBO and the competitor algorithms were applied to four engineering optimization challenges: pressure vessel design (PVD)[79], speed reducer design (SRD)[80], welded beam design (WBD)[12], and tension/compression spring design (TCSD)[12]. Schematics of these problems are presented in Fig. 4.
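For context, the pressure vessel design problem has a standard formulation in the benchmark literature: minimize fabrication cost over shell thickness, head thickness, inner radius, and length, subject to four design constraints. A minimal penalty-based evaluation sketch (the penalty weight 1e6 is an arbitrary choice for illustration, not a value taken from the paper):

```python
import math

def pvd_cost(x):
    """Pressure vessel design (standard literature formulation):
    x = (Ts, Th, R, L) = shell thickness, head thickness, inner radius, length."""
    ts, th, r, l = x
    cost = (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)
    g = [  # constraints: feasible when every g_i <= 0
        -ts + 0.0193 * r,                                       # shell thickness vs. radius
        -th + 0.00954 * r,                                      # head thickness vs. radius
        -math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3 + 1_296_000,  # volume
        l - 240.0,                                              # length limit
    ]
    penalty = sum(max(0.0, gi) ** 2 for gi in g)
    return cost + 1e6 * penalty  # illustrative penalty weight

print(pvd_cost((0.8125, 0.4375, 42.0, 180.0)))  # ≈ 6121.7; feasible, so no penalty
```

A metaheuristic then minimizes `pvd_cost` directly; infeasible candidates are pushed away by the quadratic penalty.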
Figure 4

Schematics of four real-world applications: (A) PVD, (B) SRD, (C) WBD, (D) TCSD.

The optimization results for the four problems are reported in Table 9. The simulation results show that STBO outperforms the competitor algorithms on all four engineering challenges, indicating an effective capability for real-world optimization applications. The convergence curves of STBO on these problems are presented in Fig. 5. They show that STBO identifies the main optimal region in the initial iterations through a strong global search and then, as iterations increase, refines the solution through local search. This behavior indicates that STBO converges to suitable solutions with a high ability to balance exploration and exploitation.
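A convergence curve of this kind is simply the best-so-far objective value recorded at each iteration of a population-based loop. The sketch below uses a deliberately simple random-sampling update with a shrinking step as a stand-in for STBO's actual training/imitation/practice operators (which are defined earlier in the paper):

```python
import random

def track_convergence(objective, dim, iters=200, pop=20, seed=1):
    """Record the best-so-far value per iteration of a toy population loop.
    The update rule (uniform sampling around the incumbent with a shrinking
    step) is a placeholder, not STBO's actual operators."""
    rng = random.Random(seed)
    best_x = [rng.uniform(-100.0, 100.0) for _ in range(dim)]
    best_f = objective(best_x)
    curve = []
    for t in range(1, iters + 1):
        step = 100.0 / t  # wide steps early (global search), narrow later (local search)
        for _ in range(pop):
            cand = [xi + rng.uniform(-step, step) for xi in best_x]
            f = objective(cand)
            if f < best_f:
                best_f, best_x = f, cand
        curve.append(best_f)
    return curve

sphere = lambda x: sum(v * v for v in x)
curve = track_convergence(sphere, dim=5)
# curve is non-increasing: steep drops early, small refinements later
```

Plotting `curve` against the iteration index reproduces the qualitative shape described above: a fast initial descent followed by gradual refinement.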
Table 9

Evaluation results of four real-world applications.

Columns (left to right): GA, PSO, GSA, TLBO, MVO, GWO, WOA, TSA, MPA, RSA, AVOA, STBO.

PVD
  Mean    6645.562  6265.49  6842.164  6328.261  6478.841  6066.455  5892.921  5888.84  6117.763  6038.652  5963.405  5888.170
  Best    6581.043  5918.224  11,605  6166.438  6039.984  5919.289  5917.261  5913.451  6109.88  6031.364  5958.117  5884.882
  Worst   8007.337  7007.411  7160.988  6513.898  7252.635  7396.34  5896.021  5893.718  6129.069  6042.513  5968.94  5895.379
  Std     657.679  496.24575  791.998  126.639  327.084  666.63439  13.913312  8.936863  8.241613  1.186982  7.4516582  3.71639
  Median  7587.808  6114.139  6839.254  6319.815  6398.997  6417.635  5892.046  5887.624  6115.578  6036.744  5962.3195  5887.907
  Rank    11  8  12  9  10  6  3  2  7  5  4  1

SRD
  Mean    3190.666  3174.457  3069.904  3032.78  3109.29  3009.754  3003.541  3011.73  3001.864  3000.171  3000.197  3000.029
  Best    3070.629  3054.173  3033.594  3005.931  3008.769  3004.291  3001.55  3004.837  2996.216  2996.171  2995.7775  2995.39
  Worst   3317.508  3368.247  3108.816  3064.938  3215.349  3012.665  3007.795  3027.316  3007.093  3002.173  3001.897  3001.627
  Std     17.14086  92.69298  18.0977  13.03553  79.74166  5.845531  1.934443  10.36808  5.219098  2.015032  1.8193737  1.623719
  Median  3202.346  3160.857  3069.595  3030.968  3109.29  3008.426  3003.087  3010.34  3000.431  2999.836  2999.4455  2999.061
  Rank    12  11  9  8  10  6  5  7  4  2  3  1

WBD
  Mean    1.96595  2.123005  2.54876  1.820886  1.732754  2.234273  1.730198  1.728896  1.892096  1.725025  1.7248133  1.724605
  Best    1.83841  1.876176  2.175414  1.761242  1.727502  1.822536  1.729027  1.727691  1.866157  1.727296  1.7252098  1.723127
  Worst   2.038864  2.324247  3.008994  1.876738  1.744746  3.053648  1.730634  1.729132  2.016418  1.727726  1.7272073  1.726692
  Std     0.139733  0.034882  0.256314  0.027592  0.004875  0.325102  0.001159  0.000287  0.00796  0.005124  0.004724  0.004324
  Median  1.939188  2.100775  2.499548  1.823362  1.73049  2.248652  1.730157  1.728855  1.883578  1.725997  1.7249368  1.72388
  Rank    9  10  12  7  6  11  5  4  8  3  2  1

TCSD
  Mean    0.013192  0.014166  0.013564  0.01296  0.014599  0.014956  0.012816  0.012803  0.013898  0.0128  0.012737  0.012674
  Best    0.012889  0.013151  0.012987  0.012822  0.01293  0.013309  0.01279  0.012786  0.013218  0.012768  0.01271  0.012652
  Worst   0.015356  0.016403  0.014345  0.01312  0.018006  0.018029  0.01284  0.012834  0.01583  0.012812  0.0127475  0.012683
  Std     0.000378  0.002092  0.000289  0.007831  0.001637  0.002293  0.004193  0.005671  0.006141  0.007417  0.004219  0.001021
  Median  0.013073  0.013123  0.013492  0.012965  0.014151  0.013316  0.012819  0.012806  0.013776  0.01279  0.0127305  0.012671
  Rank    7  10  8  6  11  12  5  4  9  3  2  1

Sum rank    39  39  41  30  37  35  18  17  28  13  11  4
Mean rank   9.75  9.75  10.25  7.5  9.25  8.75  4.5  4.25  7  3.25  2.75  1
Total rank  8  11  12  7  10  9  5  4  6  3  2  1
Figure 5

Convergence curves of STBO on four real-world applications.


Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed consent

Informed consent was not required, as no humans or animals were involved.

Conclusion and future works

This paper introduced a new metaheuristic algorithm called Sewing Training-Based Optimization (STBO) for solving optimization problems. The interactions between a sewing instructor and beginner tailors are the main inspiration behind the design of STBO. The proposed STBO was modeled and designed in three phases: (i) training, (ii) imitation of the instructor's skills, and (iii) practice. STBO's performance was tested on fifty-two objective functions covering unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions as well as the CEC 2017 test suite. The optimization results showed that the proposed STBO approach has a good ability to explore, to exploit, and to balance the two during the search of the problem-solving space. Eleven well-known metaheuristic algorithms were employed for comparison. The simulation results showed that STBO has superior, competitive performance, providing better results on most of the objective functions studied in this paper. STBO's implementation on four engineering design challenges demonstrated the capability of the proposed algorithm in real-world applications. Although STBO performed well on most of the benchmark functions studied in this article, it has some limitations. First, it is always possible to devise newer algorithms that perform better than the proposed approach. Second, the proposed algorithm may fail in some optimization applications. Third, because STBO is based on random search, there is no guarantee that it will always find the globally optimal solution.
Also, based on the No Free Lunch (NFL) theorem, STBO is not claimed to be the best optimizer for all optimization applications. The introduction of STBO opens several directions for future research. Developing binary and multi-objective versions of STBO is one specific proposal. Employing STBO in various optimization applications in science, as well as in other real-world settings, is a further suggestion for future study.