
Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization.

Lei Xie1, Tong Han1, Huan Zhou1, Zhuo-Ran Zhang2, Bo Han3, Andi Tang1.   

Abstract

In this paper, a novel swarm-based metaheuristic algorithm called tuna swarm optimization (TSO) is proposed. The main inspiration for TSO is the cooperative foraging behavior of tuna swarms. The work mimics two foraging behaviors of tuna swarms, spiral foraging and parabolic foraging, to develop an effective metaheuristic algorithm. The performance of TSO is evaluated by comparison with other metaheuristics on a set of benchmark functions and several real engineering problems. Sensitivity, scalability, robustness, and convergence analyses were carried out and combined with the Wilcoxon rank-sum test and the Friedman test. The simulation results show that TSO performs better than the comparison algorithms.
Copyright © 2021 Lei Xie et al.


Year:  2021        PMID: 34721567      PMCID: PMC8550856          DOI: 10.1155/2021/9210050

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Real-world optimization problems have become more challenging, which requires more efficient solution methods. Scholars have studied various approaches to solve these complex and difficult real-world problems. Some researchers solve these optimization problems using traditional methods such as quasi-Newton, conjugate gradient, and sequential quadratic programming methods. However, owing to the nonlinear, nonconvex characteristics of most real-world optimization problems and the involvement of multiple decision variables and complex constraints, such problems are difficult for these traditional algorithms to solve effectively [1, 2]. The metaheuristic algorithm has the advantages of not relying on the problem model, not requiring gradient information, having strong search capability and wide applicability, and can achieve a good balance between solution quality and computational cost [3]. Therefore, metaheuristic algorithms have been proposed to solve real-world optimization problems, such as image segmentation [4, 5], feature selection [6, 7], mission planning [8, 9], parameter optimization [10, 11], job shop scheduling [12, 13], etc. Metaheuristic algorithms are usually classified into three categories [14]: evolution-based algorithms, physical-based algorithms, and swarm-based algorithms. Evolution-based algorithms are inspired by the laws of evolution in nature. Genetic algorithm (GA) [15], inspired by Darwin's theory of natural selection, is a well-known evolution-based algorithm. With the popularity of GA, several other widely used evolution-based algorithms have been proposed, including differential evolution (DE) [16], genetic programming (GP) [17], evolutionary strategies (ES) [18], and evolutionary programming (EP) [19]. In addition, several new evolution-based algorithms have been proposed, such as artificial algae algorithm (AAA) [20], biogeography-based optimization (BBO) [21], and monkey king evolutionary (MKE) [22]. 
The physical-based algorithms are inspired by various laws of physics. One of the most famous algorithms in this category is simulated annealing (SA) [23]. SA is inspired by the law of thermodynamics in which a material is heated up and then cooled slowly. Other physical-based algorithms have been proposed, including gravitational search algorithm (GSA) [24], nuclear reaction optimization (NRO) [25], water cycle algorithm (WCA) [26], and sine cosine algorithm (SCA) [27]. The swarm-based algorithms are inspired by the social behavior of different species in natural groups. Particle swarm optimization (PSO) [28] and ant colony optimization (ACO) [29] are two typical swarm-based algorithms. PSO and ACO mimic the aggregation behavior of bird colonies and the foraging behavior of ant colonies, respectively. Some other algorithms in this category include: grey wolf optimizer (GWO) [30], monarch butterfly optimization (MBO) [31], elephant herding optimization (EHO) [32], moth search algorithm (MSA) [33], manta ray foraging optimization (MRFO) [34], earthworm optimization algorithm (EOA) [35], etc. With the development of metaheuristics, a type of human-based metaheuristic algorithm is also emerging. These algorithms are inspired by the characteristics of human activity. Teaching-learning-based optimization (TLBO) [36], inspired by traditional teaching methods, is a typical example of this category among metaheuristic algorithms. Other human-based metaheuristics include: social evolution and learning optimization (SELO) [37], group teaching optimization algorithm (GTOA) [38], heap-based optimizer (HBO) [39], political optimizer (PO) [40], etc. All of these metaheuristic algorithms share a common feature: they rely on exploration and exploitation of the search space to find the optimal solution [41, 42]. 
Exploration means that the algorithm searches for promising regions in a wide search space, while exploitation is a further search for the best solution within the promising regions. The balance of the two search behaviors affects the quality of the solution: when exploration dominates, exploitation declines, and vice versa. Therefore, balancing exploration and exploitation is a major challenge for metaheuristics. Although new algorithms are constantly being developed, the no free lunch (NFL) theorem [43] states that no particular algorithm can solve all optimization problems perfectly. The NFL theorem has motivated researchers to develop effective metaheuristic algorithms for optimization problems in various fields. In this paper, a novel swarm-based metaheuristic called tuna swarm optimization (TSO) is presented. It is inspired by two types of swarm foraging behavior of tunas. TSO is evaluated on 23 benchmark functions and 3 engineering design problems. The test results reveal that the proposed method significantly outperforms several popular and recent metaheuristics. This paper is structured as follows: Section 2 describes the inspiration for TSO and builds the corresponding mathematical model. A benchmark function set and three engineering design problems are employed to check the performance of TSO in Sections 3 and 4, respectively. Section 5 concludes the overall work and provides an outlook for the future.

2. Tuna Swarm Optimization

2.1. Inspiration

Tuna (tribe Thunnini) are carnivorous marine fish. There are many species of tuna, and their sizes vary greatly. Tuna are top marine predators, feeding on a variety of midwater and surface fish. Tunas are continuous swimmers with a unique and efficient swimming style (fishtail-like) in which the body stays rigid while the long, thin tail swings rapidly. Although a single tuna swims very fast, it still cannot match the quick reactions of nimble small fish. Therefore, tuna use a "group travel" method for predation, applying their intelligence to find and attack prey. These creatures have evolved a variety of effective and intelligent foraging strategies. The first strategy is spiral foraging: when feeding, tuna swim in a spiral formation to drive their prey into shallow water, where the prey can be attacked more easily. The second strategy is parabolic foraging: each tuna swims after the previous individual, forming a parabolic shape to enclose the prey. Tuna successfully forage by these two methods. In this paper, a new swarm-based metaheuristic optimization algorithm, namely, tuna swarm optimization, is proposed based on modeling these natural foraging behaviors.

2.2. Mathematical Model

In this section, the mathematical model of the proposed algorithm is described in detail.

2.2.1. Initialization

Similar to most swarm-based metaheuristics, TSO starts the optimization process by generating an initial population uniformly at random in the search space:

X_i^int = rand · (ub − lb) + lb,  i = 1, 2, …, NP,

where X_i^int is the ith initial individual, ub and lb are the upper and lower boundaries of the search space, NP is the number of tuna, and rand is a uniformly distributed random vector with entries in [0, 1].
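As an illustration, the initialization step can be written directly from the formula above; this is a minimal sketch (names such as initialize_population are ours, not the paper's):

```python
import numpy as np

def initialize_population(NP, dim, lb, ub, rng=None):
    """X_i^int = rand * (ub - lb) + lb, i = 1..NP, with rand ~ U(0, 1)^dim."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.random((NP, dim)) * (ub - lb) + lb

# example: NP = 50 tuna in a 30-dimensional search space bounded by [-100, 100]
X = initialize_population(50, 30, -100.0, 100.0)
```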

2.2.2. Spiral Foraging

When sardines, herring, and other small schooling fish encounter predators, the entire school forms a dense formation and constantly changes swimming direction, making it difficult for predators to lock onto a target. At this point, the tuna group chases the prey by forming a tight spiral formation. Although most fish in the school have little sense of direction, when a small group of fish swims firmly in a certain direction, nearby fish adjust their direction one after another, finally forming a large group with the same goal that starts to hunt. In addition to spiraling after their prey, schools of tuna also exchange information with each other: each tuna follows the previous fish, enabling information sharing among neighboring tuna. Based on the above principles, the spiral foraging strategy is formulated as follows, where X_i^{t+1} is the ith individual at iteration t + 1, X_best^t is the current optimal individual (food), α1 and α2 are weight coefficients that control the tendency of individuals to move towards the optimal individual and the previous individual, a is a constant that determines the extent to which the tuna follow the optimal individual and the previous individual in the initial phase, t denotes the current iteration number, tmax is the maximum number of iterations, and b is a random number uniformly distributed in [0, 1]. When all tuna forage spirally around the food, they have good exploitation ability in the search space around the food. However, when the optimal individual fails to find food, blindly following it is not conducive to group foraging. Therefore, we consider generating a random coordinate in the search space as the reference point for the spiral search. This lets each individual search a wider space and gives TSO global exploration ability. The corresponding model replaces the optimal individual with X_rand^t, a randomly generated reference point in the search space. 
In particular, metaheuristic algorithms usually perform extensive global exploration in the early stage and then gradually transition to precise local exploitation. Therefore, as the iterations increase, TSO changes the reference point of spiral foraging from a random individual to the optimal individual, which yields the final mathematical model of the spiral foraging strategy.
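Since the printed equations did not survive extraction here, the following sketch illustrates the spiral update described above in code. The coefficient forms for α1, α2, and the spiral term β are plausible reconstructions chosen to match the text (α1 grows from a and α2 shrinks from 1 − a with t/tmax), not verbatim from the paper:

```python
import numpy as np

def spiral_step(X, X_best, t, t_max, a=0.7, rng=None):
    """One spiral-foraging update of the whole swarm (illustrative sketch).

    alpha1/alpha2 weight the pull toward the optimal individual vs. the
    previous individual; their exact forms here are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    NP, dim = X.shape
    alpha1 = a + (1 - a) * t / t_max        # pull toward the optimal individual
    alpha2 = (1 - a) * (1 - t / t_max)      # pull toward the previous individual
    X_new = np.empty_like(X)
    for i in range(NP):
        b = rng.random()
        beta = np.exp(b) * np.cos(2 * np.pi * b)   # spiral coefficient from b
        prev = X_new[i - 1] if i > 0 else X[0]     # the first tuna follows itself
        X_new[i] = alpha1 * (X_best + beta * np.abs(X_best - X[i])) + alpha2 * prev
    return X_new

# example: one update step for a 5-tuna swarm in 3 dimensions
updated = spiral_step(np.zeros((5, 3)), np.ones(3), t=1, t_max=100)
```

Replacing `X_best` by a random point in the search space gives the exploratory variant of the same update.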

2.2.3. Parabolic Foraging

In addition to feeding by forming a spiral formation, tunas also feed cooperatively by forming a parabolic formation, with the food as the reference point. In addition, tuna hunt for food by searching the area around themselves. The two approaches are performed simultaneously, each selected with an assumed probability of 50%. In the corresponding model, TF is a random number that takes the value 1 or −1. Tuna hunt cooperatively through these two foraging strategies and thereby find their prey.
For the optimization process of TSO, the population is first randomly generated in the search space. In each iteration, each individual randomly chooses one of the two foraging strategies to execute, or regenerates its position in the search space with probability z. The value of the parameter z is discussed in the parameter-setting experiments. Throughout the optimization process, all individuals of TSO are continuously updated and evaluated until the termination condition is met, and then the optimal individual and the corresponding fitness value are returned. The TSO pseudocode is shown in Algorithm 1. The detailed process of TSO is shown in Figure 1.
Algorithm 1

Pseudocode of TSO.

Figure 1

Flowchart of TSO.
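Putting the pieces together, the overall loop of Algorithm 1 can be sketched as follows. The strategy updates are simplified stand-ins for the paper's spiral and parabolic equations, so this is an illustration of the control flow, not a faithful reimplementation:

```python
import numpy as np

def tso_minimize(f, dim, lb, ub, NP=50, t_max=200, z=0.05, a=0.7, seed=0):
    """Skeleton of the TSO optimization loop (simplified, illustrative)."""
    rng = np.random.default_rng(seed)
    X = rng.random((NP, dim)) * (ub - lb) + lb        # uniform initialization
    fits = np.array([f(x) for x in X])
    best, best_fit = X[fits.argmin()].copy(), fits.min()
    for t in range(1, t_max + 1):
        alpha1 = a + (1 - a) * t / t_max              # weight toward the best
        alpha2 = (1 - a) * (1 - t / t_max)            # weight toward the individual
        p = (1 - t / t_max) ** (t / t_max)            # parabolic decay factor
        for i in range(NP):
            if rng.random() < z:                      # rebirth in the search space
                X[i] = rng.random(dim) * (ub - lb) + lb
            elif rng.random() < 0.5:                  # spiral foraging (simplified)
                b = rng.random()
                beta = np.exp(b) * np.cos(2 * np.pi * b)
                X[i] = alpha1 * (best + beta * np.abs(best - X[i])) + alpha2 * X[i]
            else:                                     # parabolic foraging (simplified)
                TF = rng.choice([-1.0, 1.0])
                X[i] = best + (rng.random(dim) + TF * p) * (best - X[i])
            X[i] = np.clip(X[i], lb, ub)
            fi = f(X[i])
            if fi < best_fit:                         # keep the best-so-far (elitism)
                best_fit, best = fi, X[i].copy()
    return best, best_fit

# example: minimize the sphere function in 5 dimensions
best, val = tso_minimize(lambda x: float(np.sum(x ** 2)), 5, -100.0, 100.0)
```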

3. Numerical Experiment and Discussion

3.1. Benchmark Function Set and Compared Algorithms

In this section, in order to evaluate the performance of the proposed TSO, a set of well-known benchmark functions is employed for testing. This set includes 7 unimodal functions, 6 multimodal functions, and 10 multimodal functions with fixed dimensions. The unimodal functions F1–F7 have only one global optimum and are therefore often used to evaluate the local exploitation capability of an algorithm. Besides the global optimum, the multimodal functions F8–F23 also have multiple local optima and are therefore used to challenge the global exploration capability and local-optimum avoidance capability of an algorithm. The mathematical formulas and characteristics of these functions are shown in Table 1. A three-dimensional visualization of these functions is given in Figure 2.
Table 1

Description of benchmark functions.

Test function | Name | Type | Dim | Range | Optimum
f01(x) = Σ_{i=1}^{D} x_i^2 | Sphere | US | 30 | [−100, 100] | 0
f02(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i| | Schwefel 2.22 | UN | 30 | [−10, 10] | 0
f03(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)^2 | Schwefel 1.2 | UN | 30 | [−100, 100] | 0
f04(x) = max_i {|x_i|, 1 ≤ i ≤ D} | Schwefel 2.21 | US | 30 | [−100, 100] | 0
f05(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | Rosenbrock | UN | 30 | [−30, 30] | 0
f06(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)^2 | Step | US | 30 | [−100, 100] | 0
f07(x) = Σ_{i=1}^{D} i·x_i^4 + random[0, 1) | Quartic | US | 30 | [−1.28, 1.28] | 0

f08(x) = Σ_{i=1}^{D} −x_i·sin(√|x_i|) | Schwefel 2.26 | MS | 30 | [−500, 500] | −418.9829·D
f09(x) = Σ_{i=1}^{D} [x_i^2 − 10cos(2πx_i) + 10] | Rastrigin | MS | 30 | [−5.12, 5.12] | 0
f10(x) = −20·exp(−0.2·√((1/D)Σ_{i=1}^{D} x_i^2)) − exp((1/D)Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | Ackley | MS | 30 | [−32, 32] | 8.8818e−16
f11(x) = (1/4000)·Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1 | Griewank | MN | 30 | [−600, 600] | 0
f12(x) = (π/D)·{10sin^2(πy_1) + Σ_{i=1}^{D−1} (y_i − 1)^2·[1 + 10sin^2(πy_{i+1})] + (y_D − 1)^2} + Σ_{i=1}^{D} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a | Penalized | MN | 30 | [−50, 50] | 0
f13(x) = 0.1·{sin^2(3πx_1) + Σ_{i=1}^{D−1} (x_i − 1)^2·[1 + sin^2(3πx_{i+1})] + (x_D − 1)^2·[1 + sin^2(2πx_D)]} + Σ_{i=1}^{D} u(x_i, 5, 100, 4) | Penalized 2 | MN | 30 | [−50, 50] | 0

f14(x) = [(1/500) + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)^6)]^{−1} | Foxholes | MS | 2 | [−65.53, 65.53] | 0.998004
f15(x) = Σ_{i=1}^{11} [a_i − x_1(b_i^2 + b_i·x_2)/(b_i^2 + b_i·x_3 + x_4)]^2 | Kowalik | MS | 4 | [−5, 5] | 0.0003075
f16(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1x_2 − 4x_2^2 + 4x_2^4 | Six-hump camel back | MN | 2 | [−5, 5] | −1.03163
f17(x) = (x_2 − (5.1/4π^2)x_1^2 + (5/π)x_1 − 6)^2 + 10(1 − 1/(8π))cos(x_1) + 10 | Branin | MS | 2 | [−5, 10] × [0, 15] | 0.398
f18(x) = [1 + (x_1 + x_2 + 1)^2·(19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2·(18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)] | Goldstein–Price | MN | 2 | [−5, 5] | 3
f19(x) = −Σ_{i=1}^{4} c_i·exp(−Σ_{j=1}^{3} a_ij·(x_j − p_ij)^2) | Hartman 3 | MN | 3 | [0, 1] | −3.8628
f20(x) = −Σ_{i=1}^{4} c_i·exp(−Σ_{j=1}^{6} a_ij·(x_j − p_ij)^2) | Hartman 6 | MN | 6 | [0, 1] | −3.32
f21(x) = −Σ_{i=1}^{5} [(X − a_i)(X − a_i)^T + c_i]^{−1} | Langermann 5 | MN | 4 | [0, 10] | −10.1532
f22(x) = −Σ_{i=1}^{7} [(X − a_i)(X − a_i)^T + c_i]^{−1} | Langermann 7 | MN | 4 | [0, 10] | −10.4029
f23(x) = −Σ_{i=1}^{10} [(X − a_i)(X − a_i)^T + c_i]^{−1} | Langermann 10 | MN | 4 | [0, 10] | −10.5364
Figure 2

3D visualization for 2D benchmark functions.
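To make the definitions in Table 1 concrete, three of the variable-dimensional benchmarks written as code (standard textbook forms, in NumPy):

```python
import numpy as np

def sphere(x):
    """F1: unimodal Sphere function, global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """F9: highly multimodal Rastrigin function, global minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):
    """F10: multimodal Ackley function; global minimum 0 at the origin
    (reported as 8.8818e-16 in Table 1 due to floating-point rounding)."""
    d = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

zero = np.zeros(30)  # the global optimum of all three functions
```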

3.2. Compared Algorithms and Experimental Setup

The results of the proposed TSO are compared with seven well-regarded and recent metaheuristics: particle swarm optimization (PSO) [28], grey wolf optimizer (GWO) [30], whale optimization algorithm (WOA) [44], and salp swarm algorithm (SSA) [45], which are among the more frequently used algorithms in the optimization field, and Harris hawks optimization (HHO) [46], equilibrium optimizer (EO) [47], and tunicate swarm algorithm (TSA) [48], three recently proposed algorithms. All algorithms were implemented in MATLAB R2016b on a computer running Windows 10 Professional 64-bit with 16 GB RAM. The population size and the maximum number of iterations for all optimizers were set to 50 and 1000, respectively. All results were recorded and compared based on the performance of each optimizer over 30 independent runs. It is well known that an algorithm's parameter settings have a large impact on its performance. For a fair comparison, the parameter settings of all compared algorithms follow the values used by the authors of the original articles. Table 2 lists the parameters used by each algorithm.
Table 2

Parameter settings for algorithms.

Algorithm | Parameters
PSO | c1 = c2 = 2, wMax = 0.9, wMin = 0.2
GWO | a = 2 (linearly decreased over iterations)
WOA | a1 = 2 (linearly decreased over iterations)
SSA | c1 = rand, c2 = rand
HHO | —
EO | a1 = 2, a2 = 1
TSA | Pmax = 4, Pmin = 1
TSO | z = 0.05, a = 0.7
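The per-algorithm comparisons over the 30 independent runs are supported by the Wilcoxon rank-sum test mentioned in the abstract. As a sketch of how such a comparison works (pure NumPy, normal approximation, ties not handled; the sample data below are hypothetical, not from the paper's tables):

```python
import math
import numpy as np

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test p-value via the normal approximation."""
    n1, n2 = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = np.empty(n1 + n2)
    ranks[combined.argsort()] = np.arange(1, n1 + n2 + 1)  # ranks 1..n1+n2
    W = ranks[:n1].sum()                                   # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2                            # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)        # std of W under H0
    z_stat = (W - mu) / sigma
    return math.erfc(abs(z_stat) / math.sqrt(2))           # two-sided p-value

rng = np.random.default_rng(1)
tso_runs = rng.normal(1e-4, 5e-5, 30)     # hypothetical TSO results, 30 runs
other_runs = rng.normal(1e-2, 5e-3, 30)   # hypothetical competitor results
p = rank_sum_p(tso_runs, other_runs)      # small p: the difference is significant
```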

3.3. Analysis of TSO for Variable-Dimensional Functions

Table 3 presents the results of TSO and the comparison algorithms on F1–F13 with Dim = 30. In addition, the performance of TSO is evaluated on the same test functions with higher dimensions, which helps assess the ability of TSO to solve high-dimensional problems. Table 4–Table 6 show the results of TSO and the comparison algorithms on F1–F13 with dimensions of 100, 500, and 1000.
Table 3

Comparison of results on F1–F13 with 30D.

Function | Metric | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | Ave | 0.00E+00 | 3.92E−193 | 5.36E−102 | 2.59E−52 | 4.76E−70 | 9.09E−09 | 1.78E−09 | 4.37E−172
F1 | Std | 0.00E+00 | 0.00E+00 | 7.54E−102 | 4.91E−52 | 1.16E−69 | 1.37E−09 | 8.10E−09 | 0.00E+00

F2 | Ave | 1.47E−235 | 3.54E−102 | 1.41E−57 | 1.15E−30 | 3.93E−41 | 4.75E−01 | 1.33E+00 | 1.13E−108
F2 | Std | 0.00E+00 | 1.17E−101 | 1.70E−57 | 4.74E−30 | 3.90E−41 | 6.71E−01 | 4.34E+00 | 4.69E−108

F3 | Ave | 0.00E+00 | 1.98E−155 | 4.95E−27 | 1.52E−15 | 5.97E−20 | 4.01E+01 | 1.02E+03 | 1.31E+04
F3 | Std | 0.00E+00 | 1.09E−154 | 2.01E−26 | 7.35E−15 | 1.95E−19 | 2.60E+01 | 1.89E+03 | 6.96E+03

F4 | Ave | 2.39E−236 | 3.11E−98 | 2.59E−25 | 2.91E−04 | 1.92E−17 | 4.00E+00 | 2.40E+00 | 2.60E+01
F4 | Std | 0.00E+00 | 1.14E−97 | 6.69E−25 | 7.90E−04 | 2.98E−17 | 2.61E+00 | 7.09E−01 | 2.81E+01

F5 | Ave | 1.22E−04 | 9.96E−04 | 2.39E+01 | 2.85E+01 | 2.65E+01 | 8.80E+01 | 3.16E+03 | 2.65E+01
F5 | Std | 3.16E−04 | 1.09E−03 | 1.51E−01 | 6.96E−01 | 6.99E−01 | 1.84E+02 | 1.64E+04 | 3.61E−01

F6 | Ave | 1.77E−08 | 9.32E−06 | 1.87E−13 | 3.59E+00 | 4.09E−01 | 9.62E−09 | 1.20E−09 | 4.17E−03
F6 | Std | 9.08E−08 | 1.44E−05 | 6.11E−13 | 7.04E−01 | 3.04E−01 | 2.38E−09 | 3.07E−09 | 2.22E−03

F7 | Ave | 1.15E−04 | 3.67E−05 | 4.28E−04 | 3.00E−03 | 4.73E−04 | 5.67E−02 | 2.13E−02 | 1.18E−03
F7 | Std | 7.56E−05 | 3.20E−05 | 2.34E−04 | 1.34E−03 | 2.14E−04 | 2.49E−02 | 8.59E−03 | 1.33E−03

F8 | Ave | −1.26E+04 | −1.26E+04 | −9.11E+03 | −6.39E+03 | −6.05E+03 | −7.61E+03 | −9.08E+03 | −1.19E+04
F8 | Std | 1.64E−06 | 8.89E−02 | 7.24E+02 | 6.83E+02 | 8.07E+02 | 8.70E+02 | 5.44E+02 | 1.24E+03

F9 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.51E+02 | 5.41E−01 | 4.78E+01 | 4.85E+01 | 0.00E+00
F9 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.54E+01 | 2.14E+00 | 1.13E+01 | 1.37E+01 | 0.00E+00

F10 | Ave | 8.88E−16 | 8.88E−16 | 4.56E−15 | 1.49E+00 | 1.28E−14 | 1.82E+00 | 8.86E−02 | 4.56E−15
F10 | Std | 0.00E+00 | 0.00E+00 | 6.49E−16 | 1.63E+00 | 2.87E−15 | 8.07E−01 | 3.40E−01 | 2.38E−15

F11 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.57E−03 | 1.40E−03 | 1.07E−02 | 1.11E−02 | 7.35E−03
F11 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.05E−03 | 4.58E−03 | 1.28E−02 | 1.43E−02 | 1.91E−02

F12 | Ave | 3.16E−10 | 8.06E−07 | 3.46E−03 | 8.00E+00 | 2.53E−02 | 3.90E+00 | 3.11E−02 | 1.17E−03
F12 | Std | 8.13E−10 | 1.06E−06 | 1.89E−02 | 4.19E+00 | 1.75E−02 | 1.97E+00 | 7.28E−02 | 1.53E−03

F13 | Ave | 1.93E−09 | 5.48E−06 | 1.78E−02 | 2.83E+00 | 2.97E−01 | 7.56E−03 | 2.93E−03 | 4.86E−02
F13 | Std | 4.41E−09 | 5.87E−06 | 3.38E−02 | 6.31E−01 | 1.06E−01 | 1.18E−02 | 4.94E−03 | 5.55E−02
Table 4

Comparison of results on F1–F13 with 100D.

Function | Metric | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | Ave | 0.00E+00 | 5.76E−190 | 1.08E−72 | 1.04E−27 | 3.53E−34 | 9.88E−03 | 7.10E+02 | 1.15E−168
F1 | Std | 0.00E+00 | 0.00E+00 | 4.12E−72 | 2.59E−27 | 5.28E−34 | 1.10E−02 | 2.55E+03 | 0.00E+00

F2 | Ave | 1.96E−231 | 3.05E−100 | 2.45E−42 | 7.52E−18 | 7.04E−21 | 1.41E+01 | 3.98E+01 | 6.29E−103
F2 | Std | 0.00E+00 | 1.30E−99 | 2.64E−42 | 1.09E−17 | 3.19E−21 | 4.47E+00 | 2.40E+01 | 3.45E−102

F3 | Ave | 0.00E+00 | 2.81E−145 | 2.12E−06 | 7.99E+02 | 1.67E−01 | 2.21E+04 | 8.11E+04 | 7.18E+05
F3 | Std | 0.00E+00 | 1.54E−144 | 5.59E−06 | 1.02E+03 | 3.53E−01 | 1.05E+04 | 1.91E+04 | 1.23E+05

F4 | Ave | 1.49E−229 | 1.30E−97 | 5.51E−11 | 2.86E+01 | 1.38E−04 | 1.96E+01 | 3.25E+01 | 7.08E+01
F4 | Std | 0.00E+00 | 4.59E−97 | 1.75E−10 | 9.58E+00 | 2.18E−04 | 2.29E+00 | 3.62E+00 | 2.99E+01

F5 | Ave | 1.15E−01 | 3.74E−03 | 9.41E+01 | 9.79E+01 | 9.70E+01 | 7.13E+02 | 1.05E+04 | 9.73E+01
F5 | Std | 4.32E−01 | 5.74E−03 | 3.34E−01 | 8.33E−01 | 8.86E−01 | 4.63E+02 | 2.31E+04 | 4.64E−01

F6 | Ave | 1.08E−03 | 3.41E−05 | 1.43E−01 | 1.38E+01 | 7.36E+00 | 8.23E−03 | 1.04E+03 | 5.30E−01
F6 | Std | 2.11E−03 | 5.76E−05 | 2.04E−01 | 9.64E−01 | 1.16E+00 | 8.37E−03 | 3.06E+03 | 1.70E−01

F7 | Ave | 1.17E−04 | 4.11E−05 | 6.99E−04 | 1.31E−02 | 1.56E−03 | 7.61E−01 | 4.21E+00 | 9.37E−04
F7 | Std | 1.62E−04 | 6.24E−05 | 3.25E−04 | 4.63E−03 | 7.36E−04 | 1.72E−01 | 7.59E+00 | 1.42E−03

F8 | Ave | −4.19E+04 | −4.19E+04 | −2.89E+04 | −1.45E+04 | −1.67E+04 | −2.41E+04 | −2.35E+04 | −3.79E+04
F8 | Std | 8.33E−02 | 3.21E−01 | 1.54E+03 | 7.90E+02 | 2.56E+03 | 1.51E+03 | 1.51E+03 | 4.58E+03

F9 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.20E+02 | 1.49E−01 | 1.34E+02 | 3.14E+02 | 0.00E+00
F9 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.38E+02 | 8.18E−01 | 3.79E+01 | 5.10E+01 | 0.00E+00

F10 | Ave | 8.88E−16 | 8.88E−16 | 7.52E−15 | 5.96E−12 | 7.03E−14 | 5.15E+00 | 4.61E+00 | 4.44E−15
F10 | Std | 0.00E+00 | 0.00E+00 | 1.23E−15 | 3.24E−11 | 4.73E−15 | 1.25E+00 | 3.08E+00 | 2.64E−15

F11 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.71E−03 | 7.62E−04 | 1.25E−01 | 7.31E+00 | 0.00E+00
F11 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.26E−03 | 2.94E−03 | 2.90E−02 | 2.28E+01 | 0.00E+00

F12 | Ave | 3.03E−06 | 3.47E−07 | 1.89E−03 | 9.22E+00 | 1.93E−01 | 9.90E+00 | 1.02E+01 | 6.08E−03
F12 | Std | 8.46E−06 | 5.00E−07 | 5.68E−03 | 3.74E+00 | 6.19E−02 | 2.58E+00 | 4.20E+00 | 3.07E−03

F13 | Ave | 1.44E−04 | 1.19E−05 | 2.12E+00 | 1.21E+01 | 5.68E+00 | 1.54E+02 | 4.54E+02 | 6.00E−01
F13 | Std | 2.32E−04 | 1.74E−05 | 1.15E+00 | 1.61E+00 | 3.89E−01 | 1.57E+01 | 4.27E+02 | 3.17E−01
Table 5

Comparison of results on F1–F13 with 500D.

Function | Metric | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | Ave | 0.00E+00 | 6.83E−192 | 5.35E−59 | 4.74E−12 | 2.61E−14 | 3.20E+04 | 1.32E+05 | 2.99E−165
F1 | Std | 0.00E+00 | 0.00E+00 | 6.41E−59 | 4.95E−12 | 1.42E−14 | 2.21E+03 | 2.05E+04 | 0.00E+00

F2 | Ave | 1.24E−230 | 4.93E−96 | 5.52E−35 | 9.54E−09 | 5.31E−09 | 3.39E+02 | 9.89E+02 | 1.12E−104
F2 | Std | 0.00E+00 | 2.70E−95 | 3.87E−35 | 7.88E−09 | 1.32E−09 | 1.38E+01 | 1.25E+02 | 5.58E−104

F3 | Ave | 0.00E+00 | 1.08E−87 | 4.43E+02 | 9.00E+05 | 8.42E+04 | 5.87E+05 | 2.20E+06 | 2.63E+07
F3 | Std | 0.00E+00 | 5.91E−87 | 1.54E+03 | 1.17E+05 | 3.48E+04 | 2.88E+05 | 2.95E+05 | 6.32E+06

F4 | Ave | 2.22E−228 | 7.26E−94 | 8.36E+01 | 9.91E+01 | 5.10E+01 | 3.24E+01 | 7.55E+01 | 7.29E+01
F4 | Std | 0.00E+00 | 3.97E−93 | 1.93E+01 | 2.47E−01 | 6.00E+00 | 2.31E+00 | 2.98E+00 | 2.37E+01

F5 | Ave | 9.10E−01 | 1.53E−02 | 4.96E+02 | 4.98E+02 | 4.97E+02 | 5.49E+06 | 1.81E+08 | 4.95E+02
F5 | Std | 1.41E+00 | 1.80E−02 | 7.74E−01 | 2.08E−01 | 4.29E−01 | 9.42E+05 | 1.14E+08 | 2.54E−01

F6 | Ave | 1.68E−01 | 1.01E−04 | 5.95E+01 | 9.17E+01 | 8.83E+01 | 3.31E+04 | 1.37E+05 | 9.03E+00
F6 | Std | 2.27E−01 | 1.27E−04 | 2.04E+00 | 1.92E+00 | 2.00E+00 | 2.02E+03 | 2.41E+04 | 1.83E+00

F7 | Ave | 1.20E−04 | 3.74E−05 | 1.44E−03 | 2.99E−01 | 7.95E−03 | 5.27E+01 | 1.91E+03 | 1.17E−03
F7 | Std | 1.13E−04 | 3.03E−05 | 5.54E−04 | 1.14E−01 | 2.42E−03 | 7.61E+00 | 7.33E+02 | 1.32E−03

F8 | Ave | −2.09E+05 | −2.09E+05 | −1.01E+05 | −3.44E+04 | −6.29E+04 | −8.68E+04 | −7.85E+04 | −1.99E+05
F8 | Std | 7.35E−01 | 1.36E+00 | 6.70E+03 | 2.48E+03 | 9.55E+03 | 5.27E+03 | 3.02E+03 | 1.66E+04

F9 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.66E+03 | 2.09E+00 | 1.92E+03 | 3.51E+03 | 0.00E+00
F9 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.74E+02 | 3.15E+00 | 1.15E+02 | 1.78E+02 | 0.00E+00

F10 | Ave | 8.88E−16 | 8.88E−16 | 8.70E−15 | 1.10E−07 | 6.67E−09 | 1.22E+01 | 1.62E+01 | 3.97E−15
F10 | Std | 0.00E+00 | 0.00E+00 | 2.17E−15 | 5.29E−08 | 1.33E−09 | 3.43E−01 | 7.40E−01 | 2.23E−15

F11 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.89E−03 | 1.79E−03 | 2.87E+02 | 1.15E+03 | 0.00E+00
F11 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.57E−02 | 6.96E−03 | 2.17E+01 | 2.43E+02 | 0.00E+00

F12 | Ave | 2.02E−05 | 2.78E−07 | 2.65E−01 | 1.06E+04 | 7.01E−01 | 1.01E+02 | 1.34E+08 | 1.42E−02
F12 | Std | 5.06E−05 | 3.95E−07 | 1.85E−02 | 1.22E+04 | 3.05E−02 | 1.52E+02 | 1.59E+08 | 3.73E−03

F13 | Ave | 4.21E−03 | 4.36E−05 | 4.86E+01 | 1.25E+03 | 4.51E+01 | 1.01E+06 | 5.88E+08 | 4.72E+00
F13 | Std | 1.21E−02 | 7.01E−05 | 3.40E−01 | 8.75E+02 | 5.29E−01 | 3.96E+05 | 4.43E+08 | 1.45E+00
Table 6

Comparison of results on F1–F13 with 1000D.

Function | Metric | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | Ave | 0.00E+00 | 5.76E−190 | 1.08E−72 | 1.04E−27 | 3.53E−34 | 9.88E−03 | 7.10E+02 | 1.15E−168
F1 | Std | 0.00E+00 | 0.00E+00 | 4.12E−72 | 2.59E−27 | 5.28E−34 | 1.10E−02 | 2.55E+03 | 0.00E+00

F2 | Ave | 1.96E−231 | 3.05E−100 | 2.45E−42 | 7.52E−18 | 7.04E−21 | 1.41E+01 | 3.98E+01 | 6.29E−103
F2 | Std | 0.00E+00 | 1.30E−99 | 2.64E−42 | 1.09E−17 | 3.19E−21 | 4.47E+00 | 2.40E+01 | 3.45E−102

F3 | Ave | 0.00E+00 | 2.81E−145 | 2.12E−06 | 7.99E+02 | 1.67E−01 | 2.21E+04 | 8.11E+04 | 7.18E+05
F3 | Std | 0.00E+00 | 1.54E−144 | 5.59E−06 | 1.02E+03 | 3.53E−01 | 1.05E+04 | 1.91E+04 | 1.23E+05

F4 | Ave | 1.49E−229 | 1.30E−97 | 5.51E−11 | 2.86E+01 | 1.38E−04 | 1.96E+01 | 3.25E+01 | 7.08E+01
F4 | Std | 0.00E+00 | 4.59E−97 | 1.75E−10 | 9.58E+00 | 2.18E−04 | 2.29E+00 | 3.62E+00 | 2.99E+01

F5 | Ave | 1.15E−01 | 3.74E−03 | 9.41E+01 | 9.79E+01 | 9.70E+01 | 7.13E+02 | 1.05E+04 | 9.73E+01
F5 | Std | 4.32E−01 | 5.74E−03 | 3.34E−01 | 8.33E−01 | 8.86E−01 | 4.63E+02 | 2.31E+04 | 4.64E−01

F6 | Ave | 1.08E−03 | 3.41E−05 | 1.43E−01 | 1.38E+01 | 7.36E+00 | 8.23E−03 | 1.04E+03 | 5.30E−01
F6 | Std | 2.11E−03 | 5.76E−05 | 2.04E−01 | 9.64E−01 | 1.16E+00 | 8.37E−03 | 3.06E+03 | 1.70E−01

F7 | Ave | 1.17E−04 | 4.11E−05 | 6.99E−04 | 1.31E−02 | 1.56E−03 | 7.61E−01 | 4.21E+00 | 9.37E−04
F7 | Std | 1.62E−04 | 6.24E−05 | 3.25E−04 | 4.63E−03 | 7.36E−04 | 1.72E−01 | 7.59E+00 | 1.42E−03

F8 | Ave | −4.19E+04 | −4.19E+04 | −2.89E+04 | −1.45E+04 | −1.67E+04 | −2.41E+04 | −2.35E+04 | −3.79E+04
F8 | Std | 8.33E−02 | 3.21E−01 | 1.54E+03 | 7.90E+02 | 2.56E+03 | 1.51E+03 | 1.51E+03 | 4.58E+03

F9 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.20E+02 | 1.49E−01 | 1.34E+02 | 3.14E+02 | 0.00E+00
F9 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.38E+02 | 8.18E−01 | 3.79E+01 | 5.10E+01 | 0.00E+00

F10 | Ave | 8.88E−16 | 8.88E−16 | 7.52E−15 | 5.96E−12 | 7.03E−14 | 5.15E+00 | 4.61E+00 | 4.44E−15
F10 | Std | 0.00E+00 | 0.00E+00 | 1.23E−15 | 3.24E−11 | 4.73E−15 | 1.25E+00 | 3.08E+00 | 2.64E−15

F11 | Ave | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.71E−03 | 7.62E−04 | 1.25E−01 | 7.31E+00 | 0.00E+00
F11 | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.26E−03 | 2.94E−03 | 2.90E−02 | 2.28E+01 | 0.00E+00

F12 | Ave | 3.03E−06 | 3.47E−07 | 1.89E−03 | 9.22E+00 | 1.93E−01 | 9.90E+00 | 1.02E+01 | 6.08E−03
F12 | Std | 8.46E−06 | 5.00E−07 | 5.68E−03 | 3.74E+00 | 6.19E−02 | 2.58E+00 | 4.20E+00 | 3.07E−03

F13 | Ave | 1.44E−04 | 1.19E−05 | 2.12E+00 | 1.21E+01 | 5.68E+00 | 1.54E+02 | 4.54E+02 | 6.00E−01
F13 | Std | 2.32E−04 | 1.74E−05 | 1.15E+00 | 1.61E+00 | 3.89E−01 | 1.57E+01 | 4.27E+02 | 3.17E−01
As shown by the results for the unimodal functions F1–F7 in Table 3–Table 6, TSO achieves the best results on most of the functions, significantly outperforming almost all the comparison algorithms, and it still outperforms them when dealing with high-dimensional problems. Moreover, the results obtained by TSO fluctuate little as the dimensionality increases, which can also be observed in the convergence curves in Figure 3. Specifically, TSO performs best on F1–F5 when Dim = 30. In particular, TSO consistently obtains the theoretical optimum on F1 and F3. On F7, HHO is the best optimizer, with TSO second. TSO performs relatively poorly on F6. For the high-dimensional functions, TSO and HHO rank in the top two: TSO gives the most satisfactory results on F1–F4, while HHO performs best on F5–F7 with TSO ranking just behind it. Overall, TSO shows the best exploitation ability among all tested algorithms on the unimodal functions across different dimensions.
Figure 3

Convergence curves of TSO on 13 test functions in 4 different dimensional cases.

The results of each algorithm for the multimodal functions F8–F13 in different dimensions are also given in Table 3–Table 6. The analysis shows that TSO performs best in all dimensions on F8–F11, and ranks behind HHO on F12 and F13. Notably, TSO stably obtains the theoretical optimum on F9–F11. As the convergence curves show, TSO's performance does not degrade much as the dimensionality increases, demonstrating its superior performance on high-dimensional multimodal functions.

3.4. Analysis of TSO for Fixed Dimensional Functions

The test results of TSO on the fixed dimensional functions are shown in Table 7. The means in the table show that TSO is highly competitive on the fixed dimensional functions, performing best on eight of the ten; it ranks second and third on F14 and F15. In order to analyze the distribution characteristics of TSO on the fixed dimensional functions, box plots of F14–F23 were drawn from the results of the 30 runs, as shown in Figure 4. It can be observed that TSO outperforms the comparison algorithms on most functions in terms of maximum, minimum, and median values, and its solutions are more concentrated; thus, TSO performs better than the other algorithms.
Table 7

Comparison of results on F14–F23.

Function | Metric | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F14 | Ave | 9.98E−01 | 9.98E−01 | 9.98E−01 | 8.12E+00 | 3.87E+00 | 9.98E−01 | 9.98E−01 | 1.49E+00
F14 | Std | 2.80E−16 | 8.98E−11 | 1.01E−16 | 4.59E+00 | 3.96E+00 | 2.08E−16 | 4.12E−17 | 1.97E+00

F15 | Ave | 3.99E−04 | 3.54E−04 | 1.17E−03 | 1.03E−02 | 1.71E−03 | 1.48E−03 | 1.90E−03 | 5.35E−04
F15 | Std | 2.79E−04 | 1.65E−04 | 3.65E−03 | 2.39E−02 | 5.08E−03 | 3.58E−03 | 5.03E−03 | 2.82E−04

F16 | Ave | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
F16 | Std | 5.61E−16 | 9.30E−14 | 6.58E−16 | 9.65E−03 | 3.20E−09 | 8.96E−15 | 6.78E−16 | 2.13E−11

F17 | Ave | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
F17 | Std | 0.00E+00 | 1.12E−08 | 0.00E+00 | 6.82E−06 | 1.64E−07 | 3.83E−15 | 0.00E+00 | 4.78E−07

F18 | Ave | 3.00E+00 | 3.00E+00 | 3.00E+00 | 9.30E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
F18 | Std | 1.75E−15 | 2.51E−09 | 9.69E−16 | 2.09E+01 | 5.86E−06 | 3.90E−14 | 1.39E−15 | 1.96E−06

F19 | Ave | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00
F19 | Std | 2.46E−15 | 6.38E−04 | 2.70E−15 | 1.41E−03 | 2.73E−03 | 1.84E−14 | 2.71E−15 | 2.35E−03

F20 | Ave | −3.30E+00 | −3.18E+00 | −3.26E+00 | −3.21E+00 | −3.25E+00 | −3.23E+00 | −3.26E+00 | −3.22E+00
F20 | Std | 4.84E−02 | 8.68E−02 | 6.37E−02 | 1.79E−01 | 7.38E−02 | 4.87E−02 | 6.68E−02 | 8.34E−02

F21 | Ave | −1.02E+01 | −5.22E+00 | −9.81E+00 | −6.78E+00 | −9.65E+00 | −9.06E+00 | −6.98E+00 | −9.81E+00
F21 | Std | 5.68E−15 | 9.28E−01 | 1.29E+00 | 3.19E+00 | 1.54E+00 | 2.26E+00 | 3.53E+00 | 1.29E+00

F22 | Ave | −1.04E+01 | −5.44E+00 | −1.04E+01 | −8.24E+00 | −1.04E+01 | −9.62E+00 | −8.42E+00 | −9.00E+00
F22 | Std | 8.08E−16 | 1.35E+00 | 9.90E−16 | 3.21E+00 | 1.39E−04 | 2.06E+00 | 3.14E+00 | 2.54E+00

F23 | Ave | −1.05E+01 | −5.31E+00 | −1.00E+01 | −7.87E+00 | −1.05E+01 | −9.49E+00 | −8.84E+00 | −9.15E+00
F23 | Std | 1.98E−15 | 9.78E−01 | 1.65E+00 | 3.68E+00 | 1.46E−04 | 2.43E+00 | 3.18E+00 | 2.79E+00
Figure 4

Boxplot analysis for fixed dimensional functions.

3.5. Wall-Clock Time Analysis of TSO

Computational efficiency is also an important measure of algorithm performance. Table 8 records the average computation time each algorithm consumed over 30 independent runs on each function. TSO is inexpensive: it takes longer only than WOA and TSA, while performing better than both, and it takes less time than the remaining comparison algorithms while also performing better, so TSO has a clear efficiency advantage. Figure 5 illustrates the ranking of computation time for each algorithm, and it can be seen visually that WOA, TSA, and TSO rank in the top three.
Table 8

Wall-clock time costs of TSO and other algorithms on 23 benchmarks (unit: s).

Function | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | 0.0436 | 0.0792 | 0.0726 | 0.0466 | 0.0549 | 0.0700 | 0.0923 | 0.0365
F2 | 0.0386 | 0.0711 | 0.0698 | 0.0483 | 0.0536 | 0.0679 | 0.0785 | 0.0349
F3 | 0.1878 | 0.6471 | 0.2078 | 0.1938 | 0.2025 | 0.2168 | 0.2187 | 0.1804
F4 | 0.0315 | 0.0765 | 0.0655 | 0.0442 | 0.0486 | 0.0613 | 0.0687 | 0.0286
F5 | 0.0387 | 0.1070 | 0.0723 | 0.0494 | 0.0574 | 0.0679 | 0.0745 | 0.0366
F6 | 0.0313 | 0.0896 | 0.0633 | 0.0434 | 0.0514 | 0.0743 | 0.0732 | 0.0301
F7 | 0.0502 | 0.1198 | 0.0853 | 0.0619 | 0.0690 | 0.0813 | 0.0953 | 0.0468
F8 | 0.0433 | 0.1270 | 0.0738 | 0.0535 | 0.0613 | 0.0789 | 0.0840 | 0.0428
F9 | 0.0341 | 0.0987 | 0.0652 | 0.0474 | 0.0543 | 0.0658 | 0.0785 | 0.0305
F10 | 0.0402 | 0.1096 | 0.0703 | 0.0521 | 0.0558 | 0.0745 | 0.0764 | 0.0354
F11 | 0.0478 | 0.1335 | 0.0753 | 0.0557 | 0.0608 | 0.0805 | 0.0820 | 0.0436
F12 | 0.1055 | 0.2735 | 0.1359 | 0.1154 | 0.1291 | 0.1372 | 0.1494 | 0.1062
F13 | 0.1040 | 0.2881 | 0.1338 | 0.1197 | 0.1350 | 0.1367 | 0.1482 | 0.1064
F14 | 0.2817 | 0.7229 | 0.3301 | 0.2855 | 0.2796 | 0.3102 | 0.3252 | 0.2855
F15 | 0.0337 | 0.0792 | 0.0633 | 0.0281 | 0.0294 | 0.0465 | 0.0651 | 0.0271
F16 | 0.0263 | 0.0619 | 0.0521 | 0.0202 | 0.0211 | 0.0385 | 0.0525 | 0.0195
F17 | 0.0229 | 0.0577 | 0.0501 | 0.0181 | 0.0177 | 0.0343 | 0.0505 | 0.0181
F18 | 0.0220 | 0.0585 | 0.0519 | 0.0175 | 0.0181 | 0.0343 | 0.0520 | 0.0171
F19 | 0.0453 | 0.1182 | 0.0745 | 0.0405 | 0.0420 | 0.0581 | 0.0735 | 0.0397
F20 | 0.0465 | 0.1143 | 0.0739 | 0.0427 | 0.0431 | 0.0604 | 0.0756 | 0.0407
F21 | 0.0739 | 0.1803 | 0.1041 | 0.0680 | 0.0684 | 0.0880 | 0.1028 | 0.0682
F22 | 0.0946 | 0.2264 | 0.1166 | 0.0822 | 0.0844 | 0.1053 | 0.1166 | 0.0820
F23 | 0.1161 | 0.2873 | 0.1502 | 0.1145 | 0.1115 | 0.1313 | 0.1482 | 0.1133
Figure 5

Time cost ranking result.

3.6. Parameter Sensitivity Analysis

This section analyzes the two control parameters of TSO, z and a. The parameter z controls the probability of regenerating individuals at random positions, while a controls the extent to which each individual follows the optimal individual and its neighboring individuals. The 13 variable-dimension functions (F1–F13) and 10 fixed-dimension functions (F14–F23) are used to analyze the effect of the parameter values on TSO's performance. The candidate values are a = {0.1, 0.2, 0.3, ..., 0.9} and z = {0, 0.01, 0.02, ..., 0.09, 0.1}, giving 9 × 11 = 99 combinations in total. Each combination solves each test function 30 times independently, yielding 68,310 results in total. Because of the volume of data, the raw results are not compared directly; instead, the results under the different parameter settings are ranked with the Friedman test. The Friedman rankings for the unimodal functions F1–F7, the multimodal functions F8–F23, and all functions F1–F23 are given in Tables 9–11, respectively. Table 9 shows that, on the unimodal functions, the smaller the value of z and the larger the value of a, the better the TSO performance. This is because a smaller z reduces the probability of randomly generating new individuals, while a larger a increases the degree to which each individual follows the optimal individual; both favor exploitation and accelerate convergence. For the multimodal functions F8–F23, Table 10 leads to almost the opposite conclusion. The rankings over all functions are given in Table 11: TSO performs best with z = 0.05 and a = 0.7.
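The grid search plus Friedman-style ranking described above can be sketched as follows. The `mean_best_fitness` placeholder and its values are synthetic stand-ins (the real study runs TSO 30 times per combination), shown only to illustrate how the 99 combinations would be ranked:

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)

# Parameter grid matching the paper's setup: 11 values of z, 9 values of a.
z_vals = np.round(np.linspace(0.0, 0.1, 11), 2)   # 0, 0.01, ..., 0.1
a_vals = np.round(np.linspace(0.1, 0.9, 9), 1)    # 0.1, 0.2, ..., 0.9
combos = [(z, a) for z in z_vals for a in a_vals]  # 99 combinations

# Hypothetical stand-in for "mean best fitness of TSO(z, a) on function f
# over 30 independent runs"; a real study would run the optimizer here.
def mean_best_fitness(z, a, f):
    return rng.random()

n_funcs = 23
scores = np.array([[mean_best_fitness(z, a, f) for (z, a) in combos]
                   for f in range(n_funcs)])       # shape (23, 99)

# Friedman-style ranking: rank the 99 combos on each function (1 = best,
# i.e. lowest error), then average the ranks across functions.
ranks = rankdata(scores, axis=1)
avg_rank = ranks.mean(axis=0)
best = combos[int(np.argmin(avg_rank))]
print("best (z, a):", best)
```

The lowest average rank identifies the most robust parameter pair, which is how Tables 9–11 arrive at z = 0.05, a = 0.7.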
Table 9

Ranking of results for F1–F7 with varied values of parameter z and a.

z \ a | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | Average
0 | 61.26 | 64.72 | 57.17 | 55.07 | 54.46 | 55.54 | 58.02 | 57.00 | 56.04 | 57.70
0.01 | 52.78 | 49.13 | 47.09 | 51.02 | 46.57 | 48.15 | 46.93 | 48.74 | 49.30 | 48.86
0.02 | 50.13 | 49.72 | 54.24 | 50.89 | 55.35 | 45.78 | 46.65 | 44.26 | 45.61 | 49.18
0.03 | 55.24 | 51.96 | 54.41 | 48.65 | 45.33 | 57.91 | 51.15 | 46.11 | 45.26 | 50.67
0.04 | 55.50 | 52.15 | 51.50 | 49.39 | 47.24 | 46.37 | 43.83 | 46.89 | 44.74 | 48.62
0.05 | 50.74 | 49.22 | 47.52 | 46.11 | 49.54 | 43.46 | 41.83 | 47.91 | 45.74 | 46.90
0.06 | 53.11 | 52.50 | 47.65 | 46.15 | 45.96 | 53.13 | 48.48 | 43.63 | 48.30 | 48.77
0.07 | 48.30 | 50.50 | 48.50 | 46.43 | 51.96 | 50.91 | 45.02 | 46.48 | 52.70 | 48.98
0.08 | 50.57 | 53.11 | 43.61 | 45.02 | 51.07 | 45.35 | 45.17 | 53.35 | 46.15 | 48.15
0.09 | 51.57 | 55.91 | 47.93 | 50.98 | 52.35 | 54.89 | 52.02 | 52.50 | 48.52 | 51.85
0.1 | 51.50 | 51.83 | 57.41 | 51.72 | 54.61 | 49.33 | 53.39 | 43.28 | 39.83 | 50.32
Average | 52.79 | 52.79 | 50.64 | 49.22 | 50.40 | 50.08 | 48.41 | 48.20 | 47.47 |
Table 10

Ranking of the results for F8–F23 with varied values of parameter z and a.

z \ a | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | Average
0 | 36.57 | 43.14 | 36.14 | 28.71 | 38.86 | 35.86 | 40.57 | 33.57 | 39.00 | 36.94
0.01 | 37.00 | 45.00 | 34.00 | 37.00 | 32.00 | 30.71 | 36.00 | 30.86 | 34.71 | 35.25
0.02 | 41.43 | 43.29 | 48.86 | 44.57 | 50.29 | 43.71 | 40.14 | 39.14 | 36.57 | 43.11
0.03 | 54.43 | 39.43 | 50.86 | 40.86 | 34.00 | 47.86 | 51.43 | 44.71 | 37.71 | 44.59
0.04 | 47.57 | 49.57 | 49.71 | 44.71 | 49.00 | 42.71 | 46.14 | 39.29 | 35.29 | 44.89
0.05 | 52.00 | 42.14 | 52.43 | 58.14 | 57.57 | 40.71 | 43.29 | 53.29 | 40.29 | 48.87
0.06 | 58.57 | 51.71 | 47.00 | 46.00 | 49.43 | 57.43 | 44.57 | 39.14 | 52.00 | 49.54
0.07 | 57.86 | 60.71 | 57.29 | 49.14 | 72.14 | 62.86 | 47.86 | 53.71 | 48.57 | 56.68
0.08 | 63.43 | 60.86 | 45.86 | 56.57 | 67.43 | 57.86 | 62.14 | 62.43 | 52.57 | 58.79
0.09 | 58.43 | 73.14 | 59.57 | 68.00 | 69.86 | 72.29 | 66.57 | 60.14 | 58.43 | 65.16
0.1 | 62.71 | 71.86 | 73.57 | 69.14 | 67.00 | 65.57 | 63.14 | 67.71 | 54.86 | 66.17
Average | 51.82 | 52.81 | 50.48 | 49.35 | 53.42 | 50.69 | 49.26 | 47.64 | 44.55 |
Table 11

Ranking of results for F1–F23 with varied values of parameter z and a.

z \ a | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | Average
0 | 72.06 | 74.16 | 66.38 | 66.59 | 61.28 | 64.16 | 65.66 | 67.25 | 63.50 | 66.78
0.01 | 59.69 | 50.94 | 52.81 | 57.16 | 52.94 | 55.78 | 51.72 | 56.56 | 55.69 | 54.81
0.02 | 53.94 | 52.53 | 56.59 | 53.66 | 57.56 | 46.69 | 49.50 | 46.50 | 49.56 | 51.84
0.03 | 55.59 | 57.44 | 55.97 | 52.06 | 50.28 | 62.31 | 51.03 | 46.72 | 48.56 | 53.33
0.04 | 58.97 | 53.28 | 52.28 | 51.44 | 46.47 | 47.97 | 42.81 | 50.22 | 48.88 | 50.26
0.05 | 50.19 | 52.31 | 45.38 | 40.84 | 46.03 | 44.66 | 41.19 | 45.56 | 48.13 | 46.03
0.06 | 50.72 | 52.84 | 47.94 | 46.22 | 44.44 | 51.25 | 50.19 | 45.59 | 46.69 | 48.43
0.07 | 44.13 | 46.03 | 44.66 | 45.25 | 43.13 | 45.69 | 43.78 | 43.31 | 54.50 | 45.61
0.08 | 44.94 | 49.72 | 42.63 | 39.97 | 43.91 | 39.88 | 37.75 | 49.38 | 43.34 | 43.50
0.09 | 48.56 | 48.38 | 42.84 | 43.53 | 44.69 | 47.28 | 45.66 | 49.16 | 44.19 | 46.03
0.1 | 46.59 | 43.06 | 50.34 | 44.09 | 49.19 | 42.22 | 49.13 | 32.59 | 33.25 | 43.39
Average | 53.22 | 52.79 | 50.71 | 49.16 | 49.08 | 49.81 | 48.04 | 48.44 | 48.75 |

3.7. Statistical Analysis of TSO

This section further analyzes the differences between TSO and the other algorithms statistically, using the Wilcoxon rank-sum test and the Friedman test. The Wilcoxon rank-sum test is a pairwise test that checks for significant differences between two algorithms. Tables 12–16 give the test results between TSO and each algorithm at significance level α = 0.05, where the symbols "+", "=", and "−" indicate that TSO performs significantly better than, statistically similar to, or significantly worse than the comparison algorithm; NaN entries arise where both algorithms return identical results in every run, so the rank-sum statistic cannot be computed. Table 17 summarizes, for each dimension and function set, the number of functions on which TSO is better than, similar to, and worse than each comparison algorithm. TSO outperforms the comparison algorithms in most cases, with overall records of 32/15/15, 42/13/7, 62/0/0, 61/1/0, 61/1/0, 51/6/5, and 55/7/0 against HHO, EO, TSA, GWO, SSA, PSO, and WOA, respectively, confirming the significant superiority of TSO in most cases.
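The pairwise classification used in Tables 12–16 can be illustrated with SciPy's rank-sum implementation. The `compare` helper and the synthetic run data below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.stats import ranksums

def compare(tso_runs, rival_runs, alpha=0.05):
    """Classify TSO vs. a rival as '+', '=', or '-' (minimization problems)."""
    if np.array_equal(tso_runs, rival_runs):
        return "NaN", float("nan")   # identical samples in every run: no
                                     # rank differences to test (cf. the
                                     # NaN cells in Tables 12-16)
    _, p = ranksums(tso_runs, rival_runs)
    if p >= alpha:
        return "=", p                # no significant difference
    better = np.median(tso_runs) < np.median(rival_runs)
    return ("+" if better else "-"), p

rng = np.random.default_rng(1)
tso = rng.normal(0.0, 1e-3, 30)      # 30 independent runs (synthetic)
rival = rng.normal(0.5, 1e-3, 30)
sym, p = compare(tso, rival)
print(sym, p)
```

With two fully separated samples of 30 runs each, the normal-approximation p-value bottoms out near 3e−11, which is why that value recurs throughout the tables.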
Table 12

Wilcoxon rank-sum test on F1–F13 with Dim = 30.

Function | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F2 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F3 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F4 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F5 | 2.20E−07 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F6 | 5.49E−11 + | 7.39E−11 − | 3.02E−11 + | 3.02E−11 + | 8.48E−09 + | 0.02236 − | 3.02E−11 +
F7 | 1.34E−05 − | 1.43E−08 + | 3.02E−11 + | 6.72E−10 + | 3.02E−11 + | 3.02E−11 + | 1.47E−07 +
F8 | 4.62E−10 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F9 | NaN = | NaN = | 1.21E−12 + | 0.160802 + | 1.21E−12 + | 1.21E−12 + | NaN =
F10 | NaN = | 2.71E−14 + | 1.09E−12 + | 5.65E−13 + | 1.21E−12 + | 1.21E−12 + | 1.08E−09 +
F11 | NaN = | NaN = | 5.37E−06 + | 0.081523 + | 1.21E−12 + | 1.21E−12 + | 0.041926 +
F12 | 4.20E−10 + | 5.97E−09 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 0.340288 = | 3.02E−11 +
F13 | 3.02E−11 + | 0.137323 = | 3.02E−11 + | 3.02E−11 + | 7.22E−06 + | 0.030317 + | 3.02E−11 +
Table 13

Wilcoxon rank-sum test on F1–F13 with Dim = 100.

Function | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F2 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F3 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F4 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F5 | 1.37E−03 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F6 | 3.59E−05 − | 1.75E−05 + | 3.02E−11 + | 3.02E−11 + | 1.70E−08 + | 3.02E−11 + | 3.02E−11 +
F7 | 1.52E−03 − | 4.62E−10 + | 3.02E−11 + | 5.49E−11 + | 3.02E−11 + | 3.02E−11 + | 1.86E−06 +
F8 | 3.16E−05 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F9 | NaN = | NaN = | 1.21E−12 + | 7.58E−07 + | 1.21E−12 + | 1.21E−12 + | NaN =
F10 | NaN = | 8.64E−14 + | 1.00E−12 + | 9.19E−13 + | 1.21E−12 + | 1.21E−12 + | 1.22E−08 +
F11 | NaN = | NaN = | 6.51E−05 + | 0.160802 = | 1.21E−12 + | 1.21E−12 + | NaN =
F12 | 1.05E−01 = | 1.41E−09 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F13 | 2.96E−05 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
Table 14

Wilcoxon rank-sum test on F1–F13 with Dim = 500.

Function | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F2 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F3 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F4 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F5 | 1.07E−09 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F6 | 4.08E−11 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F7 | 1.29E−06 − | 3.69E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 7.74E−06 +
F8 | 1.67E−01 = | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F9 | NaN = | NaN = | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | NaN =
F10 | NaN = | 6.12E−14 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 9.16E−09 +
F11 | NaN = | NaN = | 1.21E−12 + | 1.20E−12 + | 1.21E−12 + | 1.21E−12 + | NaN =
F12 | 7.04E−07 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F13 | 8.88E−06 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
Table 15

Wilcoxon rank-sum test on F1–F13 with Dim = 1000.

Function | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F2 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F3 | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 +
F4 | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F5 | 5.87E−04 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F6 | 3.50E−09 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F7 | 1.86E−03 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.37E−05 +
F8 | 2.23E−01 = | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.69E−11 +
F9 | NaN = | NaN = | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 0.333711 =
F10 | NaN = | 2.90E−13 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 7.21E−07 +
F11 | NaN = | 8.99E−11 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 1.21E−12 + | 0.333711 =
F12 | 2.00E−05 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
F13 | 1.25E−05 − | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 + | 3.02E−11 +
Table 16

Wilcoxon rank-sum test on F14–F23.

Function | HHO | EO | TSA | GWO | SSA | PSO | WOA
F14 | 7.27E−11 + | 2.81E−04 − | 1.99E−11 + | 1.99E−11 + | 5.43E−02 = | 1.80E−05 − | 1.99E−11 +
F15 | 8.30E−08 + | 4.84E−08 + | 8.43E−09 + | 3.94E−08 + | 1.10E−08 + | 4.27E−08 + | 5.06E−08 +
F16 | 1.07E−09 + | 1.49E−04 − | 1.41E−11 + | 1.41E−11 + | 1.39E−11 + | 1.43E−06 − | 1.40E−11 +
F17 | 1.21E−12 + | NaN = | 1.21E−12 + | 1.21E−12 + | 5.07E−06 + | NaN = | 1.21E−12 +
F18 | 1.46E−10 + | 1.85E−02 − | 2.09E−11 + | 2.09E−11 + | 2.09E−11 + | 2.88E−03 − | 2.09E−11 +
F19 | 1.75E−11 + | 5.29E−05 + | 1.75E−11 + | 1.75E−11 + | 2.35E−11 + | 1.10E−05 − | 1.75E−11 +
F20 | 1.09E−08 + | 3.07E−01 = | 3.07E−08 + | 8.39E−08 + | 7.12E−10 + | 4.65E−01 = | 1.09E−08 +
F21 | 1.07E−11 + | 3.81E−01 = | 1.07E−11 + | 1.07E−11 + | 1.07E−11 + | 1.84E−01 = | 1.07E−11 +
F22 | 6.43E−12 + | 0.019605 − | 6.43E−12 + | 6.43E−12 + | 6.43E−12 + | 2.70E−01 = | 6.43E−12 +
F23 | 2.14E−11 + | 2.20E−02 − | 2.14E−11 + | 2.14E−11 + | 2.14E−11 + | 2.31E−01 = | 2.14E−11 +
Table 17

Statistical results of the Wilcoxon rank-sum test.

TSO vs. (+/=/−) | F1–F13 (Dim = 30) | F1–F13 (Dim = 100) | F1–F13 (Dim = 500) | F1–F13 (Dim = 1000) | F14–F23 | Sum
HHO | 9/3/1 | 5/4/4 | 4/4/5 | 4/4/5 | 10/0/0 | 32/15/15
EO | 8/3/2 | 11/2/0 | 11/2/0 | 12/1/0 | 2/3/5 | 42/13/7
TSA | 13/0/0 | 13/0/0 | 13/0/0 | 13/0/0 | 10/0/0 | 62/0/0
GWO | 13/0/0 | 12/1/0 | 13/0/0 | 13/0/0 | 10/0/0 | 61/1/0
SSA | 13/0/0 | 13/0/0 | 13/0/0 | 13/0/0 | 9/1/0 | 61/1/0
PSO | 11/1/1 | 13/0/0 | 13/0/0 | 13/0/0 | 1/5/4 | 51/6/5
WOA | 12/1/0 | 11/2/0 | 11/2/0 | 11/2/0 | 10/0/0 | 55/7/0
Table 18 shows the Friedman test statistics for F1–F13 in each dimension and for the fixed-dimension functions F14–F23. TSO ranks first in all cases, so TSO can be considered to have the best overall performance among the compared algorithms.
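The Friedman ranking reported in Table 18 can be reproduced in outline with SciPy. The per-function error values below are synthetic placeholders (with the first algorithm made deliberately best), used only to show the mechanics:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(2)
algs = ["TSO", "HHO", "EO", "TSA", "GWO", "SSA", "PSO", "WOA"]

# Synthetic mean errors: 13 functions x 8 algorithms; scale the first
# column down so one algorithm is clearly best (illustration only).
errors = rng.random((13, len(algs)))
errors[:, 0] *= 0.1

# Friedman test: is there any significant difference among the algorithms?
stat, p = friedmanchisquare(*errors.T)

# Average rank per algorithm across functions (1 = best), as in Table 18.
avg_ranks = rankdata(errors, axis=1).mean(axis=0)
order = [algs[i] for i in np.argsort(avg_ranks)]
print(f"p = {p:.3g}, ranking: {order}")
```

A small p-value says the algorithms differ somewhere; the average ranks then provide the ordering that the "Friedman rank" rows summarize.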
Table 18

Statistical results of the Friedman test.

Functions | Dim | Metric | TSO | HHO | EO | TSA | GWO | SSA | PSO | WOA
F1–F13 | 30 | Friedman value | 1.54 | 2.23 | 3.23 | 6.54 | 5.15 | 6.46 | 6.15 | 4.69
F1–F13 | 30 | Friedman rank | 1 | 2 | 3 | 8 | 5 | 7 | 6 | 4
F1–F13 | 100 | Friedman value | 1.65 | 1.73 | 3.38 | 6.31 | 5.00 | 6.31 | 7.54 | 4.08
F1–F13 | 100 | Friedman rank | 1 | 2 | 3 | 6 | 5 | 6 | 8 | 4
F1–F13 | 500 | Friedman value | 1.65 | 1.73 | 4.00 | 6.54 | 4.92 | 6.23 | 7.54 | 3.38
F1–F13 | 500 | Friedman rank | 1 | 2 | 4 | 7 | 5 | 6 | 8 | 3
F1–F13 | 1000 | Friedman value | 1.54 | 1.62 | 4.08 | 6.46 | 5.00 | 6.23 | 7.46 | 3.62
F1–F13 | 1000 | Friedman rank | 1 | 2 | 4 | 7 | 5 | 6 | 8 | 3
F14–F23 | Fixed | Friedman value | 1.75 | 5.90 | 2.35 | 7.30 | 5.40 | 4.30 | 3.60 | 5.40
F14–F23 | Fixed | Friedman rank | 1 | 7 | 2 | 8 | 5 | 4 | 3 | 5
F1–F23 | All | Friedman value | 1.62 | 2.48 | 3.46 | 6.60 | 5.08 | 5.98 | 6.60 | 4.18
F1–F23 | All | Friedman rank | 1 | 2 | 3 | 7 | 5 | 6 | 7 | 4

4. TSO for Engineering Design Problems

This section uses three engineering design problems to assess TSO's ability to solve real-world problems: the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem. TSO uses the same maximum number of iterations (1000) and the same population size (50) for all three problems. Each problem is solved in 30 independent runs, and the statistical results are compared with those of other algorithms reported in the literature.
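A minimal harness for this protocol might look as follows; `random_search` is a hypothetical stand-in optimizer with the same evaluation budget, not the TSO implementation:

```python
import numpy as np

def random_search(obj, lo, hi, dim, pop=50, iters=1000, seed=0):
    """Stand-in optimizer: sample `pop` points per iteration, keep the best."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(iters):
        cand = rng.uniform(lo, hi, (pop, dim))
        best = min(best, float(obj(cand).min()))
    return best

def sphere(pts):
    # Simple test objective: sum of squares per candidate row.
    return (pts ** 2).sum(axis=1)

# 30 independent runs; report best/mean/std as in the comparison tables.
runs = [random_search(sphere, -5.0, 5.0, dim=4, seed=i) for i in range(30)]
print(f"best={min(runs):.3e} mean={np.mean(runs):.3e} std={np.std(runs):.3e}")
```

Reporting statistics over independent seeded runs, rather than a single run, is what makes the comparisons in Tables 19–21 meaningful for stochastic optimizers.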

4.1. Pressure Vessel Design

The pressure vessel design problem shown in Figure 6 is a well-known benchmark design problem whose goal is to minimize the total cost, including forming, material, and welding costs. There are four design variables: shell thickness Ts (x1), head thickness Th (x2), inner radius R (x3), and length of the cylindrical section L (x4). The problem is formulated as minimizing the total cost subject to four design constraints.
Figure 6

Schematic of the pressure vessel design problem (Figure 6 is reproduced from [49]).

The results of TSO on this problem are compared with those of other algorithms, namely DDSCA, ISCA, MBA, CPSO, TEO, hHHO-SCA, HPSO, MVO, and AFA, in Table 19. TSO provides the best solution among the compared algorithms, with optimal variable values [0.7782, 0.3846, 40.3196, 199.9999] and a corresponding minimum cost of 5885.3327.
Table 19

Comparisons of the best solutions offered by reported optimizers for pressure vessel design.

Algorithm | x1 | x2 | x3 | x4 | Optimal cost
DDSCA [50] | 0.7782 | 0.3855 | 40.3198 | 176.6389 | 5888.3366
ISCA [51] | 0.8125 | 0.4375 | 42.0982 | 176.6389 | 6059.7410
MBA [52] | 0.7802 | 0.3856 | 40.4292 | 198.4964 | 5889.3216
CPSO [53] | 0.8125 | 0.4375 | 42.0912 | 176.7465 | 6061.0777
TEO [54] | 0.7791 | 0.3852 | 40.3698 | 199.3018 | 5887.5110
hHHO-SCA [55] | 0.9459 | 0.4471 | 46.8513 | 125.468 | 6393.0927
HPSO [56] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
MVO [57] | 0.8215 | 0.4375 | 42.0907 | 176.7386 | 6060.8066
AFA [58] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
TSO | 0.7782 | 0.3846 | 40.3196 | 199.9999 | 5885.3327
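The best solution in Table 19 can be sanity-checked against the standard pressure vessel formulation from the benchmark literature (an assumption here, since this excerpt does not reproduce the equations; x3 is treated as the inner radius R):

```python
from math import pi

def pv_cost(x):
    # Total cost: welding, material, and forming terms (standard coefficients).
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pv_constraints(x):
    # Feasible iff every g <= 0 (standard formulation, assumed).
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                          # shell stress
        -x2 + 0.00954 * x3,                                         # head stress
        -pi * x3 ** 2 * x4 - (4.0 / 3.0) * pi * x3 ** 3 + 1296000,  # min volume
        x4 - 240.0,                                                 # length bound
    ]

x_tso = [0.7782, 0.3846, 40.3196, 199.9999]   # TSO's best solution (Table 19)
print(pv_cost(x_tso))            # close to the reported 5885.3327
print(max(pv_constraints(x_tso)))  # near zero: the 4-decimal rounding of the
                                   # published variables leaves tiny residuals
```

The small gap between the recomputed cost and the reported 5885.3327 comes from the table rounding the design variables to four decimals.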

4.2. Tension/Compression Spring Design

The tension/compression spring design problem is a mechanical engineering design optimization problem. As shown in Figure 7, the goal is to minimize the weight of the spring subject to four nonlinear inequality constraints, with three continuous variables: wire diameter w (x1), mean coil diameter d (x2), and number of active coils L (x3).
Figure 7

Schematic of tension/compression spring design problem.

The solution found by TSO is compared with other methods reported in the literature, including GA3, CPSO, CDE, DDSCA, GSA, hHHO-SCA, AEO, and MVO. Table 20 shows the parameters and costs corresponding to the optimal solution of each algorithm. As can be seen from Table 20, TSO obtains the best result on this problem: the optimal parameter values [0.051642, 0.355609, 11.354247] yield the lowest cost of 0.0126652.
Table 20

Comparisons of best solutions offered by reported optimizers for tension/compression spring design.

Algorithm | x1 | x2 | x3 | Optimal cost
GA3 [59] | 0.051989 | 0.363965 | 10.890522 | 0.0126810
CPSO [53] | 0.051728 | 0.357644 | 11.244543 | 0.0126747
CDE [60] | 0.051609 | 0.354714 | 11.410831 | 0.0126702
DDSCA [50] | 0.052669 | 0.380673 | 10.0153 | 0.012688
GSA [24] | 0.050276 | 0.323680 | 13.525410 | 0.0127022
hHHO-SCA [55] | 0.054693 | 0.433378 | 7.891402 | 0.0128229
AEO [61] | 0.051897 | 0.361751 | 10.879842 | 0.0126662
MVO [57] | 0.05251 | 0.3762 | 10.33513 | 0.012970
TSO | 0.051642 | 0.355609 | 11.354247 | 0.0126652
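The best solution in Table 20 can likewise be checked against the standard spring formulation from the benchmark literature (again an assumption, as the excerpt omits the equations; x1 = wire diameter, x2 = mean coil diameter, x3 = number of active coils):

```python
def spring_weight(x):
    # Spring weight: (N + 2) * d * w^2 in the standard formulation.
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    # Feasible iff every g <= 0: deflection, shear stress, surge
    # frequency, and outer-diameter limits (standard formulation, assumed).
    x1, x2, x3 = x
    return [
        1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
            + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]

x_tso = [0.051642, 0.355609, 11.354247]   # TSO's best solution (Table 20)
print(spring_weight(x_tso))               # close to the reported 0.0126652
```

At the reported solution the first two constraints are active (near zero), which is typical for this problem's optimum.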

4.3. Welded Beam Design

The welded beam design problem is a classical structural optimization problem. As shown in Figure 8, the objective is to minimize the fabrication cost of the welded beam, subject to constraints on shear stress, bending stress, buckling load, and end deflection. The design variables are the weld thickness h (x1), the length of the welded joint l (x2), the beam height t (x3), and the beam thickness b (x4).
Figure 8

Schematic of the welded beam design problem.

This problem has been solved by different algorithms such as DDSCA, HGA, MGWO-III, IAPSO, TEO, hHHO-SCA, HPSO, CPSO, and WCA. Table 21 summarizes the results of these algorithms and compares them with the best result of TSO. TSO generates its best solution at design variables [0.205729, 3.470490, 9.036626, 0.205729] with a cost of 1.724854, lower than or comparable to the costs reported for the other algorithms.
Table 21

Comparisons of best solutions offered by reported optimizers for welded beam design problem.

Algorithm | x1 | x2 | x3 | x4 | Optimal cost
DDSCA [50] | 0.20516 | 3.4759 | 9.0797 | 0.20552 | 1.7305
HGA [62] | 0.205712 | 3.470391 | 9.039693 | 0.205716 | 1.725236
MGWO-III [63] | 0.205667 | 3.471899 | 9.036679 | 0.205733 | 1.724984
IAPSO [64] | 0.205729 | 3.470886 | 9.036623 | 0.205729 | 1.724852
TEO [54] | 0.205681 | 3.472305 | 9.035133 | 0.205796 | 1.725284
hHHO-SCA [55] | 0.190086 | 3.696496 | 9.386343 | 0.204157 | 1.779032
HPSO [56] | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852
CPSO [53] | 0.202369 | 3.544214 | 9.048210 | 0.205723 | 1.728024
WCA [26] | 0.205728 | 3.470522 | 9.036620 | 0.205729 | 1.724856
TSO | 0.205729 | 3.470490 | 9.036626 | 0.205729 | 1.724854
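The cost column of Table 21 can be reproduced from the fabrication-cost objective as it is usually given in the benchmark literature (an assumption here; the full model with its shear, bending, buckling, and deflection constraints is not reproduced in this excerpt):

```python
def welded_beam_cost(x):
    # Weld material cost + bar stock cost (standard coefficients, assumed):
    # x1 = weld thickness h, x2 = weld length l,
    # x3 = beam height t, x4 = beam thickness b.
    x1, x2, x3, x4 = x
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

x_tso = [0.205729, 3.470490, 9.036626, 0.205729]  # TSO's best (Table 21)
print(welded_beam_cost(x_tso))                    # close to the reported 1.724854
```

Evaluating the published solutions of HPSO or IAPSO with the same function reproduces their reported 1.724852, confirming that all of the top entries in Table 21 sit at essentially the same optimum.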

5. Conclusions

This work presents a novel swarm-based metaheuristic algorithm, tuna swarm optimization (TSO). The algorithm is inspired by the cooperative foraging mechanisms of tuna, namely spiral foraging and parabolic foraging. The method has few adjustable parameters and is easy to implement. TSO was comprehensively evaluated on a set of benchmark functions in different dimensions and compared with other state-of-the-art algorithms; the results show that TSO outperforms the comparison algorithms. In addition, the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem were investigated, and the statistical results show that TSO has high potential for solving real-world optimization problems compared to the reported methods. A major factor in TSO's success is the balance between exploitation and exploration achieved through the two foraging strategies. Moreover, the small number of iterative steps required leads to low time costs, which is another strength of TSO. However, although TSO performs excellently on most functions, there is still room for improvement on a small fraction of them; its ability to escape local optima could be further strengthened through methods such as algorithm hybridization and adaptive parameters. For future work, binary and multiobjective versions of TSO can be developed for discrete problems and multiobjective optimization problems. Moreover, TSO will be applied to UAV mission planning problems such as trajectory planning and target allocation. A further interesting direction is to investigate the performance of different constraint-handling methods in solving constrained optimization problems.