
A Modified Sine-Cosine Algorithm Based on Neighborhood Search and Greedy Levy Mutation.

Chiwen Qu1, Zhiliu Zeng2, Jun Dai3, Zhongjun Yi4, Wei He1.   

Abstract

To address the deficiencies of the basic sine-cosine algorithm on global optimization problems, namely low solution precision and slow convergence, an improved sine-cosine algorithm is proposed in this paper. The improvement involves three optimization strategies. First, an exponentially decreasing conversion parameter and a linearly decreasing inertia weight are adopted to balance the global exploration and local exploitation abilities of the algorithm. Second, random individuals near the optimal individual replace the optimal individual of the original algorithm, which helps the algorithm jump out of local optima and effectively enlarges the search range. Finally, a greedy Levy mutation strategy is applied to the optimal individual to enhance the local exploitation ability of the algorithm. The experimental results show that the proposed algorithm effectively avoids falling into local optima and achieves faster convergence and higher optimization accuracy.


Year:  2018        PMID: 30073023      PMCID: PMC6057408          DOI: 10.1155/2018/4231647

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Many problems in engineering practice and scientific research reduce to global optimization problems. Traditional methods that rely purely on exact mathematical models perform poorly on such problems: they require the objective to be continuous and differentiable, and they lack global optimization ability on multimodal, strongly nonlinear, and dynamically changing problems [1]. Accordingly, many scholars have begun to explore new solution methods. Swarm intelligence optimization algorithms are global optimization algorithms designed by simulating the cooperative behavior of gregarious organisms in nature. Compared with traditional optimization methods, they are characterized by simple principles and few adjustable parameters, and they require neither gradient information nor strong assumptions about the problem. They are therefore widely used in function optimization [2-4], combinatorial optimization [5], neural network training [6, 7], and image processing. Many swarm intelligence optimization algorithms have been proposed [2, 8-15], such as particle swarm optimization (PSO) [8], differential evolution (DE) [9, 10], the artificial bee colony algorithm (ABC) [2, 11], cuckoo search (CS) [12, 13], and the flower pollination algorithm (FPA) [14, 15]. The sine-cosine algorithm (SCA) is a swarm intelligence optimization algorithm proposed by Mirjalili in 2016 [16]. It has attracted considerable attention due to its simple implementation and few parameters: its optimization search is realized through simple variation of sine and cosine function values.
It has been successfully applied to parameter optimization of support vector regression [17], short-term hydrothermal scheduling [18], and other engineering problems. However, like other swarm intelligence algorithms, SCA suffers from low optimization precision and slow convergence. In the last two years, many scholars have proposed improved sine-cosine algorithms from different perspectives to overcome these drawbacks. Elaziz et al. [19] proposed a sine-cosine algorithm based on opposition-based learning, which obtains more accurate solutions. Nenavath et al. [20] adopted a hybrid of differential evolution and SCA for global optimization and target tracking; it converges faster and finds better solutions than either the basic SCA or differential evolution alone. Reddy et al. [21] applied a new binary variant of SCA to the profit-based unit commitment (PBUC) problem. Sindhu et al. [22] improved SCA with an elitism strategy and a new update mechanism, which improved classification accuracy in feature and attribute selection. Kumar et al. [23] proposed a sine-cosine optimization algorithm with hybrid Cauchy and Gaussian mutations to track the maximum power point (MPP) quickly and efficiently. Mahdad et al. [24] presented an SCA coordinated with an interactive process to improve power-system security with respect to loading-margin stability and faults at specified important branches. Bureerat et al. [25] adopted an adaptive differential sine-cosine algorithm for structural damage detection. Turgut et al. [26] combined the backtracking search algorithm (BSA) with SCA to obtain optimal designs for shell-and-tube evaporators. Attia et al. [27] embedded Levy flight into the original SCA to increase its local search ability and avoid entrapment in local optima. Tawhid et al. [28] used elite nondominated sorting to obtain nondomination grades and a crowding-distance method to maintain the diversity of the optimal solution set, yielding a multiobjective SCA. Issa et al. [29] presented an enhanced version of SCA that embeds particle swarm optimization in SCA (ASCA-PSO). ASCA-PSO makes full use of the exploitation ability of PSO in the search space, which is stronger than that of SCA; on several test functions, its search performance is clearly superior to SCA and other recently proposed metaheuristics. Rizk-Allah et al. [30] proposed a multiorthogonal sine-cosine algorithm (MOSCA) based on a multiorthogonal search strategy (MOSS) for engineering design problems, eliminating the basic SCA's weak exploitation and tendency to become trapped in local optima.
In this paper, a modified sine-cosine algorithm (MSCA) based on neighborhood search and greedy Levy mutation is proposed to better balance global exploration and local exploitation. The improved algorithm makes improvements in three aspects. First, a linearly decreasing inertia weight and an exponentially decreasing conversion parameter are used together to balance global exploration and local exploitation, achieving a smooth transition from exploration to exploitation. Second, the guidance of random individuals near the optimal solution helps the algorithm jump out of local optima, which effectively prevents premature convergence and increases population diversity. Third, a greedy Levy mutation strategy is applied to the optimal individual to enhance the local exploitation ability of the algorithm. Compared with other swarm intelligence algorithms, the improved sine-cosine algorithm performs better in search precision, convergence speed, and stability.

2. Basic Sine-Cosine Algorithm

In the basic sine-cosine algorithm, the optimization search is driven by simple variation of sine and cosine function values. In this paper, the population size is n, the dimension of the search space is d, and the ith individual in the population is p_i. In each iteration, p_i is updated as follows:

p_{i,j}(t+1) = p_{i,j}(t) + r1 · sin(r2) · |r3 · p*_j(t) − p_{i,j}(t)|,  if r4 < 0.5,
p_{i,j}(t+1) = p_{i,j}(t) + r1 · cos(r2) · |r3 · p*_j(t) − p_{i,j}(t)|,  if r4 ≥ 0.5,  (1)

where t is the current iteration, p*_j(t) is the jth dimension value of the optimal individual at iteration t, and p_{i,j}(t) is the jth dimension value of individual i at iteration t. r2, r3, and r4 are random numbers: r2 obeys a uniform distribution on [0, 2π], r3 on [0, 2], and r4 on [0, 1], while r1 is the conversion parameter updated by (2). In (1), r1 · sin(r2) and r1 · cos(r2) jointly control the global exploration and local exploitation abilities of the algorithm. When the value of r1 · sin(r2) or r1 · cos(r2) is greater than 1 or less than -1, the algorithm performs a global exploration search; when it lies within [-1, 1], the algorithm performs a local exploitation search. Since sin(r2) and cos(r2) always lie within [-1, 1], the control parameter r1 plays the crucial role in this balance: it governs the transition of the algorithm from global exploration to local exploitation. In the basic algorithm, r1 decreases linearly according to (2):

r1 = a − a · t / N_iter,  (2)

where a is a constant, t is the current iteration, and N_iter is the maximum number of iterations.
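The update rules (1)-(2) can be sketched as a minimal NumPy implementation. This is an illustrative sketch, not the authors' Matlab code; the function name `sca`, its default arguments, and the sphere test function are assumptions for demonstration only.

```python
import numpy as np

def sca(f, dim, lb, ub, n=20, n_iter=300, a=2.0, seed=0):
    """Basic sine-cosine algorithm (minimisation) following (1) and (2)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n, dim))              # initial population
    fits = np.array([f(x) for x in pop])
    best = pop[fits.argmin()].copy()                 # destination (optimal) individual
    best_fit = float(fits.min())
    for t in range(n_iter):
        r1 = a - a * t / n_iter                      # eq. (2): linear decrease a -> 0
        for i in range(n):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)  # per-dimension random draws
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.uniform(0.0, 1.0, dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - pop[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - pop[i]))
            pop[i] = np.clip(pop[i] + step, lb, ub)  # eq. (1) plus bound handling
            fi = f(pop[i])
            if fi < best_fit:                        # keep the best point found so far
                best, best_fit = pop[i].copy(), fi
    return best, best_fit

xbest, fbest = sca(lambda x: float(np.sum(x * x)), dim=5, lb=-100.0, ub=100.0)
```

Because r1 shrinks toward 0, the step sizes contract over the iterations, moving the search from exploration toward exploitation.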

3. Modified Sine-Cosine Algorithm

3.1. Exponential Decreasing Conversion Parameter

The parameter setting is crucial to search performance in the basic sine-cosine algorithm, where the control parameter r1 governs the transition from global exploration to local exploitation. A larger r1 improves the global search ability of the algorithm, and a smaller r1 enhances its local exploitation ability. Therefore, r1 follows the linear decreasing schedule of (2) in the basic algorithm to balance exploration and exploitation. In the literature [31], the linear, parabolic, and exponential decreasing schedules were compared experimentally in the basic algorithm, and the exponential schedule was found superior to the other two in search performance. At the same time, the inertia weight remains constant during the iterations of the basic algorithm, which can easily cause the population individuals to oscillate in the later stage of the search. In this paper, a linearly decreasing inertia weight ω(t) and an exponentially decreasing conversion parameter r1(t) are both applied on the basis of (1), which better balances the global exploration and local exploitation abilities of the algorithm. The update mode of individuals becomes

p_{i,j}(t+1) = ω(t) · p_{i,j}(t) + r1(t) · sin(r2) · |r3 · p*_j(t) − p_{i,j}(t)|,  if r4 < 0.5,
p_{i,j}(t+1) = ω(t) · p_{i,j}(t) + r1(t) · cos(r2) · |r3 · p*_j(t) − p_{i,j}(t)|,  if r4 ≥ 0.5,  (3)

where t is the current iteration, N_iter is the maximum number of iterations, p*_j(t) is the jth dimension value of the optimal individual at iteration t, p_{i,j}(t) is the jth dimension value of individual i at the current iteration, and ωmax and ωmin are the maximum and minimum inertia weights, respectively. It can be seen from (3) that the population individuals are driven jointly by the inertia weight ω(t) and the conversion parameter r1(t). Both ω(t) and r1(t) are large in early iterations, which favors global exploration; they are small in later iterations, which favors local exploitation and thus improves the search precision and convergence speed of the algorithm.
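The two schedules can be sketched as follows. The linear inertia-weight form (from ωmax at t = 0 to ωmin at t = N_iter, with the 0.9/0.4 bounds used in Section 4.2) is standard; the paper's exact exponential expression for r1(t) is not reproduced in this extraction, so `r1_exp` below uses an assumed form a · exp(−k · t/N_iter) with an assumed decay rate k.

```python
import math

def omega(t, n_iter, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t = 0 down to w_min at t = n_iter."""
    return w_max - (w_max - w_min) * t / n_iter

def r1_exp(t, n_iter, a=2.0, k=3.0):
    """Exponentially decreasing conversion parameter (assumed form a*exp(-k*t/n_iter))."""
    return a * math.exp(-k * t / n_iter)
```

Either way, both quantities start large (favoring exploration) and decay monotonically (favoring exploitation), which is the property the text relies on.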

3.2. The Neighborhood Search of the Optimal Individual

In the basic sine-cosine algorithm, the search direction of new individuals is determined solely by the optimal individual in the population. Once the global optimal individual falls into a local optimum, the whole algorithm easily suffers premature convergence. Therefore, to reduce the possibility of the algorithm getting stuck in a local optimum, the guiding role of better individuals that may exist near the optimal solution should be exploited. In this paper, random individuals near the optimal solution replace the current optimal individual to guide the search, improving the chance of jumping out of local optima. The neighborhood-search strategy replaces the guide term r3 · p*_j(t) in (3) by r3 · p*_j(t) · (1 + λ · unifrnd(−1, 1)),  (4)

where unifrnd(−1, 1) is a uniformly distributed random number on (−1, 1) and λ is the disturbance coefficient; the other parameters are as in (3). In the neighborhood search of the optimal individual, the current optimal individual is taken as the center and λ as the step size, so the algorithm searches the interval between r3 · p*_j(t) · (1 − λ · unifrnd(0, 1)) and r3 · p*_j(t) · (1 + λ · unifrnd(0, 1)). This effectively widens the search directions and increases the probability of the algorithm jumping out of local optima.
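The neighborhood perturbation of the guide can be sketched in a few lines (an illustrative sketch; the helper name `neighbourhood_guide` is an assumption, and λ = 0.01 is the value used later in Section 4.2):

```python
import numpy as np

rng = np.random.default_rng(1)

def neighbourhood_guide(best, lam=0.01):
    """Replace the optimal guide p*_j by a random neighbour within +/- lam of it,
    i.e. p*_j * (1 + lam * unifrnd(-1, 1)) applied per dimension."""
    return best * (1.0 + lam * rng.uniform(-1.0, 1.0, size=best.shape))

best = np.array([2.0, -3.0, 0.5])
guide = neighbourhood_guide(best)
```

Each dimension of the guide is moved by at most λ of its magnitude, so the perturbation stays small while still diversifying the search direction.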

3.3. Greedy Levy Mutation

In the basic sine-cosine algorithm, the optimal individual leads the search direction of the whole population. However, the optimal individual lacks experiential knowledge and self-learning ability, so it may hardly improve further and can anchor the population in a local optimum. To further prevent the basic sine-cosine algorithm from getting trapped and to eliminate its low efficiency in the later stage, a greedy Levy mutation strategy is proposed for the optimal individual. Through the mutation operation, the population can jump away from previously found optima, which preserves population diversity. In the mutation, the optimal individual is perturbed by a Levy-distributed step scaled by a self-adapting coefficient, where levy is a random number obeying the Levy distribution, θ(j) is the coefficient of self-adapting variation, and p*_j(t) is the jth dimension value of the optimal individual at iteration t (Algorithm 1).
Algorithm 1

The pseudocode of the basic sine-cosine algorithm.
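The greedy part of the mutation (keep the mutant only if it improves the objective) can be sketched as follows. The multiplicative mutation form and the helper names are assumptions, since the paper's exact mutation equation is not reproduced in this extraction; the sphere function is only a toy objective.

```python
import numpy as np

def sphere(x):
    """Toy objective for illustration."""
    return float(np.sum(x ** 2))

def greedy_mutate(best, f, theta_j, levy_step):
    """Greedy Levy mutation sketch (assumed form): perturb the best individual by a
    Levy step scaled by the self-adapting coefficient theta_j, and keep the mutant
    only if it improves f -- otherwise the current best is retained."""
    trial = best * (1.0 + theta_j * levy_step)
    return trial if f(trial) < f(best) else best

best = np.array([1.0, 1.0])
improved = greedy_mutate(best, sphere, 0.5, -0.5)   # small inward step: accepted
rejected = greedy_mutate(best, sphere, 0.5, 10.0)   # large outward step: rejected
```

The greedy acceptance guarantees that the mutation never degrades the best-so-far solution, while the heavy-tailed Levy step still allows occasional large escape jumps.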

3.3.1. Random Number Generated According to the Levy Distribution

Levy flight is characterized by many short-distance moves interspersed with occasional long-distance jumps, which suits the description of the activity patterns of many colonial organisms. In this paper, this characteristic of Levy flight is used to form a Levy mutation mechanism. The mechanism ensures that the proposed algorithm searches thoroughly near the optimal individuals while retaining a degree of mutation, which improves the global search ability of the algorithm. As the probability density function of the Levy distribution is difficult to integrate, it has been proved that Mantegna's algorithm can be used as an equivalent computation [32]. That is,

levy = u / |v|^(1/β),  u ~ N(0, σ_u²),  v ~ N(0, σ²),

where σ = 1 and β = 3/2, and σ_u can be calculated by

σ_u = [ Γ(1 + β) · sin(πβ/2) / ( Γ((1 + β)/2) · β · 2^((β−1)/2) ) ]^(1/β),

where Γ is the standard Gamma function.
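Mantegna's algorithm translates directly into code (a self-contained sketch; the function name and seed are illustrative):

```python
import math
import numpy as np

def mantegna_levy(size, beta=1.5, seed=0):
    """Levy-distributed step lengths via Mantegna's algorithm: s = u / |v|**(1/beta),
    with u ~ N(0, sigma_u^2), v ~ N(0, 1), beta = 3/2 and sigma_v = 1 as in the text."""
    rng = np.random.default_rng(seed)
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma_u = (num / den) ** (1.0 / beta)        # scale of u from the Gamma expression
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1.0 / beta)

steps = mantegna_levy(1000)
```

Most samples are small, but the heavy tail occasionally produces large jumps, which is exactly the behavior the mutation mechanism exploits.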

3.3.2. Coefficient of Self-Adapting Variation

The swarm intelligence optimization algorithm generally passes through two stages in the iterative process: global exploration in the earlier stage and local exploitation in the later stage. Therefore, to obtain large variations for global disturbance in the earlier stage and smaller variations to accelerate local search in the later stage, the proposed algorithm uses a self-adapting mutation strategy. The self-adapting variation control coefficient θ(j) depends on the current iteration t, the maximum iteration N_iter, a coefficient ε, the difference r(j) between the jth dimension value of the current optimal individual and the jth dimension average value of the population individuals, and the maximum distance rmax(j) of the jth dimension in the population. From (10)-(12), it can be seen that the coefficient θ(j) accounts for both the iterative process and the population diversity: the iterative part is controlled by the term −ε · t/N_iter, and the diversity is adjusted by the term 1 − r(j)/rmax(j). In early iterations, the individuals perform poorly and diversity is large, so a large coefficient causes enough disturbance to the population and enhances the global search ability. As iterations proceed, the individuals in the population improve and the coefficient gradually decreases, which ensures that the algorithm converges smoothly to the optimal value and reduces search oscillation around it. The solution method is shown in Algorithm 2.
Algorithm 2

The pseudocode of the greedy Levy mutation of the optimal individual.
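A sketch of the self-adapting coefficient, combining the two terms named in the text. The exact way the paper combines them in (10)-(12) is not reproduced in this extraction, so the product form below (and the use of distance-to-mean for r(j) and maximum spread for rmax(j)) is an assumption:

```python
import math
import numpy as np

def theta(t, n_iter, pop, best, eps=30.0):
    """Per-dimension self-adapting mutation coefficient (assumed combination):
    exp(-eps*t/n_iter) covers the iteration term, (1 - r/r_max) the diversity term."""
    mean = pop.mean(axis=0)
    r = np.abs(best - mean)                        # r(j): best-to-mean distance per dim
    r_max = np.abs(pop - mean).max(axis=0)         # rmax(j): largest spread per dim
    r_max = np.where(r_max == 0.0, 1.0, r_max)     # guard against a collapsed dimension
    return math.exp(-eps * t / n_iter) * (1.0 - r / r_max)

pop = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])
```

With ε = 30 (the value used in Section 4.2), the iteration term decays from 1 toward essentially 0, so late-stage mutations are tiny regardless of the diversity term.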

3.4. The Modified Sine-Cosine Algorithm Based on the Greedy Levy Variation

The procedure of the improved sine-cosine algorithm based on neighborhood search and the greedy Levy variation is shown in Algorithm 3.
Algorithm 3

Algorithm 3 is the pseudocode of the improved sine-cosine algorithm based on neighborhood search and the greedy Levy mutation.

For the basic SCA, the time complexity of creating the initial population is O(n), that of the sine and cosine updates is O(n_iter·n·d), and that of bound handling is O(n_iter·n), so the overall time complexity is O(n) + O(n_iter·n) + O(n_iter·n·d). In the MSCA algorithm, creating the initial population is O(n); computing ω(t) and r1(t) is O(2·n_iter); the sine and cosine updates based on neighborhood search are O(n_iter·n·d); bound handling is O(n_iter·n); and the greedy Levy mutation is O(n_iter·n·d). The total time complexity of MSCA is therefore O(n) + O(2·n_iter) + O(n_iter·n·d) + O(n_iter·n) + O(n_iter·n·d) = O(n) + O((n + 2)·n_iter) + O(2·n_iter·n·d). The time complexity of MSCA is thus somewhat higher than that of the standard SCA, but both are of the same order of magnitude, O(n_iter·n·d).

4. Experimental Simulation

In order to verify the performance of MSCA, experiments are conducted from three aspects: (1) a contrast experiment between MSCA and particle swarm optimization (PSO) [8], differential evolution (DE) [9], the bat algorithm (BA) [33, 34], teaching-learning-based optimization (TLBO) [35, 36], the grey wolf optimizer (GWO) [37], and the basic SCA; (2) an analysis of the effectiveness of the three improvement strategies; (3) an analysis of the parameter λ of the optimal-individual neighborhood search strategy and the parameter ε of the greedy Levy mutation strategy, together with a discussion of their effect on the algorithm, so that reference values for these parameters can be determined.

4.1. Test Function and Experimental Platform

4.1.1. Experimental Platform

In order to provide a comprehensive test environment, the simulation experiments are conducted on Windows 10, with an Intel(R) Core(TM) i5-4210U CPU (quad core) at a clock frequency of 2.4 GHz and 4 GB of RAM, using Matlab 2016b as the programming tool.

4.1.2. Benchmark Functions

In order to validate the performance of the presented algorithm, 20 benchmark test functions from the literature [38, 39], which have been widely used in testing, are selected as experimental subjects. They fall into three types: unimodal high-dimensional functions, multimodal high-dimensional functions, and multimodal low-dimensional functions. f1 ~ f7 are unimodal high-dimensional functions; they are used to investigate the optimization precision of an algorithm, since exact convergence to the global optimum is hard to achieve. f8 ~ f13 are multimodal high-dimensional functions with several local extreme points, which test the global search performance of an algorithm and its ability to avoid premature convergence. f14 ~ f20 are multimodal low-dimensional functions. As the optimal value of most test functions is zero, some test functions with nonzero optimal values are also included. The function name, expression, dimension, search range, and theoretical optimal value are shown in Table 1.
Table 1

Standard test functions.

No | Name | Benchmark test function | Dimension | Scope | Optimum
f1(x) | Sphere Model | f(x) = Σ_{i=1}^{D} x_i^2 | 30 | [-100, 100] | 0
f2(x) | Schwefel's Problem 2.22 | f(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i| | 30 | [-10, 10] | 0
f3(x) | Schwefel's Problem 1.2 | f(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)^2 | 30 | [-100, 100] | 0
f4(x) | Schwefel's Problem 2.21 | f(x) = max_{1≤i≤D} |x_i| | 30 | [-100, 100] | 0
f5(x) | Generalized Rosenbrock's Function | f(x) = Σ_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2] | 30 | [-30, 30] | 0
f6(x) | Step Function | f(x) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)^2 | 30 | [-100, 100] | 0
f7(x) | Quartic Function with Noise | f(x) = Σ_{i=1}^{D} i·x_i^4 + random[0, 1) | 30 | [-1.28, 1.28] | 0
f8(x) | Generalized Schwefel's Problem 2.26 | f(x) = Σ_{i=1}^{D} -x_i·sin(√|x_i|) | 30 | [-500, 500] | -418.9829·D
f9(x) | Generalized Rastrigin's Function | f(x) = Σ_{i=1}^{D} [x_i^2 - 10cos(2πx_i) + 10] | 30 | [-5.12, 5.12] | 0
f10(x) | Ackley's Function | f(x) = -20exp(-0.2√((1/D)Σ_{i=1}^{D} x_i^2)) - exp((1/D)Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | 30 | [-32, 32] | 0
f11(x) | Generalized Griewank Function | f(x) = (1/4000)Σ_{i=1}^{D} x_i^2 - Π_{i=1}^{D} cos(x_i/√i) + 1 | 30 | [-600, 600] | 0
f12(x) | Generalized Penalized Function | f(x) = (π/D){10sin^2(πy_1) + Σ_{i=1}^{D-1} (y_i - 1)^2[1 + 10sin^2(πy_{i+1})] + (y_D - 1)^2} + Σ_{i=1}^{D} u(x_i, 10, 100, 4), with y_i = 1 + (x_i + 1)/4 and u(x_i, α, k, m) = k(x_i - α)^m if x_i > α; 0 if -α ≤ x_i ≤ α; k(-x_i - α)^m if x_i < -α | 30 | [-50, 50] | 0
f13(x) | Generalized Penalized Function | f(x) = 0.1{10sin^2(3πx_1) + Σ_{i=1}^{D-1} (x_i - 1)^2[1 + 10sin^2(3πx_{i+1})] + (x_D - 1)^2} + Σ_{i=1}^{D} u(x_i, 10, 100, 4) | 30 | [-50, 50] | 0
f14(x) | Shekel's Foxholes Function | f(x) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i - a_{ij})^6)]^{-1} | 2 | [-65.536, 65.536] | 0.9980
f15(x) | Kowalik's Function | f(x) = Σ_{i=1}^{11} [a_i - x_1(b_i^2 + b_i·x_2)/(b_i^2 + b_i·x_3 + x_4)]^2 | 4 | [-5, 5] | 0.0003075
f16(x) | Six-Hump Camel-Back Function | f(x) = 4x_1^2 - 2.1x_1^4 + x_1^6/3 + x_1·x_2 - 4x_2^2 + 4x_2^4 | 2 | [-5, 5] | -1.0316285
f17(x) | Branin Function | f(x) = (x_2 - (5.1/(4π^2))x_1^2 + (5/π)x_1 - 6)^2 + 10(1 - 1/(8π))cos(x_1) + 10 | 2 | [-5, 10] × [0, 15] | 0.398
f18(x) | Goldstein-Price Function | f(x) = [1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)] · [30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)] | 2 | [-2, 2] | 3
f19(x) | Hartman's Function | f(x) = -Σ_{i=1}^{4} c_i·exp(-Σ_{j=1}^{3} a_{ij}(x_j - p_{ij})^2) | 3 | [0, 1] | -3.8628
f20(x) | Hartman's Function | f(x) = -Σ_{i=1}^{4} c_i·exp(-Σ_{j=1}^{6} a_{ij}(x_j - p_{ij})^2) | 6 | [0, 1] | -3.32
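A few of the entries in Table 1 can be written directly as runnable Python; the sketch below implements f1 (Sphere), f9 (Rastrigin), and f10 (Ackley) for a point of dimension D:

```python
import numpy as np

def sphere(x):
    """f1: unimodal, global optimum 0 at the origin."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """f9: multimodal, global optimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    """f10: multimodal, global optimum 0 at the origin."""
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

z = np.zeros(30)   # the known optimum of all three functions
```

Evaluating each function at the origin recovers the theoretical optima listed in the table.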

4.2. Contrastive Analysis of Sine-Cosine Algorithm Based on Greedy Levy Mutation

In order to evaluate the performance of the algorithm proposed in this paper, six algorithms are selected for comparison: PSO, DE, BA, TLBO, GWO, and SCA. The contrast algorithms use the same parameters as in their original literature; the settings are shown in Table 2. The parameters of the MSCA algorithm are set as follows: the population size is 100, the maximum inertia weight ωmax is 0.9, the minimum inertia weight ωmin is 0.4, ε is 30, and λ is 0.01; the other parameters are consistent with the basic SCA. For each test function, the number of iterations is 5000, and each algorithm is run independently 20 times. The performance of each algorithm is measured by four indexes: best value, mean, worst value, and standard deviation. The statistical results are shown in Tables 3-5.
Table 2

The parameters set of all other algorithms.

Algorithm | Parameters
PSO | population size 100, c1 = 1.49445, c2 = 1.49445, ω = 0.729
DE | population size 100, pCR = 0.2, βmin = 0.2, βmax = 0.8
BA | population size 100, Qmin = 0, Qmax = 2, R0 = 0.1, A = 0.9, α = 0.95, γ = 0.9
TLBO | population size 100, TF = 1 or 2
GWO | population size 100
SCA | population size 100, a = 2
Table 3

Test statistical results of functions f1 ~ f7.

Benchmark function | PSO | DE | BA | TLBO | GWO | SCA | MSCA
f 1(x)Best9.07650779E-011.65407264E-776.60911194E-06002.41227653E-990
Mean1.97056172E+012.62600749E-768.47162999E-06007.79692173E-860
Worst4.94364998E+016.31133690E-761.067648634E-05001.54558620E-840
Std1.21250926E+012.65771348E-761.08812519E-06003.45449900E-850

f 2(x)Best7.49555834E+004.70179283E-461.03934630E-0201.33605985E-2522.60844948E-630
Mean1.12174418E+017.29820866E-463.22204665E+0101.92526708E-2512.41170865E-570
Worst2.02022159E+011.09423091E-459.30092920E+0104.70878320E-2514.40324480E-560
Std3.24260462E+002.84907171E-464.25000382E+0100.00000000E+009.80901847E-570

f 3(x)Best4.98125395E+023.62807984E+039.12788809E-063.04791020E-2101.41705680E-1634.35895925E-520
Mean1.26469111E+037.46153322E+031.50873736E-051.59616421E-2031.49297576E-1474.66854445E-460
Worst2.34229374E+031.07658866E+042.06438516E-051.07878223E-2027.46124358E-1474.16089002E-450
Std5.55711673E+022.67341362E+033.88507836E-0603.33636341E-1471.08000440E-450

f 4(x)Best6.02122506E+003.83820416E-079.12788809E-0603.88068837E-1143.39775522E-340
Mean8.61122109E+005.60317284E-071.50873736E-0501.33829026E-1124.65163872E-260
Worst1.21184255E+017.68100603E-072.06438516E-0505.28812647E-1129.30221952E-250
Std1.92051215E+001.68313586E-073.88507836E-0602.22979531E-1122.08002707E-250

f 5(x)Best1.98124514E+022.30707248E+011.86517166E-011.78074032E-022.51787920E+012.62363000E+017.76116677E-12
Mean1.01254587E+033.61624708E+011.79820932E+001.16810392E+002.61197789E+012.70812117E+015.82095129E-06
Worst3.26561071E+038.05070422E+014.19411715E+004.16880641E+002.70635443E+012.86501385E+012.47408894E-05
Std9.62804586E+022.48173576E+012.18569786E+001.09797628E+009.14735948E-016.82466657E-016.12299693E-11

f 6(x)Best9.25831394E+000.00000000E+007.20895352E-063.69778549E-322.60632647E-073.37812908E+000
Mean2.87969807E+010.00000000E+007.97379259E-063.12154725E-311.00333382E-013.73341278E+000
Worst6.59256512E+010.00000000E+008.81647389E-061.35893617E-302.51573695E-014.00431209E+000
Std1.56322191E+010.00000000E+006.01088500E-073.40671290E-311.37387749E-011.64786441E-010

f 7(x)Best1.01845349E-013.67133207E-032.30691046E-023.05530711E-052.42052731E-052.03984969E-065.95321712E-07
Mean2.09157287E-014.53974219E-033.06138991E-028.56177810E-055.18444251E-051.75661500E-052.06970501E-05
Worst3.76016628E-015.50134798E-033.63388054E-021.35008643E-049.24177878E-055.33965433E-055.50713991E-05
Std8.23611787E-026.65597117E-044.92468230E-032.77562650E-052.84763595E-051.42228474E-051.31769905E-05
Table 4

Test statistical results of functions f8 ~ f13.

Benchmark function | PSO | DE | BA | TLBO | GWO | SCA | MSCA
f 8(x)Best-8.78252861E+3-12569.48662-8046.935903-9477.60-6861.79-5025.22-12569.5
Mean-6.77371061E+3-12569.48662-7449.534539-8504.46-6279.36-4343.67-12569.5
Worst-5.28468708E+3-12569.48662-6546.887655-7219.72-5690.82-3996.64-12569.5
Std8.52018857E+206.26E+026.27E+024.86E+022.59E+024.46E-09

f 9(x)Best7.14247120E+103.88050502E+10.00000000E+00000
Mean1.04062178E+206.88525758E+11.10328088E+01000
Worst1.39959354E+209.25326145E+11.89042170E+01000
Std1.67952026E+102.00896972E+15.03185256E+00000

f 10(x)Best7.14247120E+17.99360578E-151.27436962E+14.44089210E-154.44089210E-154.44089210E-158.88178420E-16
Mean1.04062178E+27.99360578E-151.40994295E+16.03961325E-156.57252031E-155.11344290E-018.88178420E-16
Worst1.39959354E+27.99360578E-151.52236570E+17.99360578E-157.99360577E-156.420849008.88178420E-16
Std1.67952026E+101.008145521.81336825E-151.94590142E-151.476958420

f 11(x)Best9.88503548E-0101.582880610000
Mean1.1497963905.237430480000
Worst1.5271662208.125401120000
Std1.32482422E-0102.858744820000

f 12(x)Best1.631175961.57054477E-325.12364688E-81.74802573E-326.59326073E-032.39498372E-011.57054477E-32
Mean5.100894441.57054477E-329.18135651E-14.42045801E-311.82811094E-023.34723047E-011.57094814E-32
Worst1.15654267E+011.57054477E-323.657725252.24517006E-303.24795465E-024.82802812E-011.57861209E-32
Std2.8350337901.539132917.59973303E-319.60865202E-034.90236201E-021.80390681E-35

f 13(x)Best6.875428491.34978380E-326.72918021E+13.32193607E-324.22190070E-071.828204151.34978380E-32
Mean2.47313999E+011.34978380E-327.67009096E+12.33370756E-022.37062320E-012.039227061.34978380E-32
Worst5.64417701E+011.34978380E-329.56157748E+11.41320020E-014.97953608E-012.221774981.34978380E-32
Std1.57138147E+0101.14084239E+13.71532141E-022.05206244E-011.06729486E-012.80801150E-48
Table 5

Test statistical results of functions f14 ~ f20.

Benchmark function | PSO | DE | BA | TLBO | GWO | SCA | MSCA
f 14(x)Best0.998003840.998003840.998003840.998003840.998003840.998003840.998003838
Mean1.295619040.998003843.368745140.998003840.998003840.998003870.998003838
Worst2.982105160.998003846.903335690.998003840.998003840.998004140.998003838
Std0.7268706502.26483904000.000000070

f 15(x)Best0.000307520.000307490.000307490.000307490.000307490.000310540.00030748598
Mean0.003477000.000307490.000307490.0003074860.000490620.000412040.00030748598
Worst0.020363340.000307490.000307490.0003074860.001223170.001266590.00030748598
Std0.00694358001.50786E-190.000409510.000290670

f 16(x)Best-1.03162845-1.03162845-1.03162845-1.03162845-1.03162845-1.03162842-1.03162845
Mean-1.03162842-1.03162845-1.03162845-1.03162845-1.03162845-1.03162684-1.03162845
Worst-1.03162833-1.03162845-1.03162845-1.03162845-1.03162845-1.03162424-1.03162845
Std0.00000004002.27813E-1600.000001220

f 17(x)Best0.397887370.3978873580.397887360.3978873580.397887360.397888380.397887358
Mean0.397887740.3978873580.397887360.3978873580.397891810.397990180.397887358
Worst0.397888820.3978873580.397887360.3978873580.397909610.398379080.397887358
Std0.000000410000.000009950.000139700

f 18(x)Best3.000000003.000000003.000000003.000000003.000000003.000000003.00000000
Mean3.000005533.000000003.000000003.000000003.000000063.000000033.00000000
Worst3.000021043.000000003.000000003.000000003.000000103.000000083.00000000
Std0.00000656007.62408E-160.000000040.000000030

f 19(x)Best-3.86278215-3.86278215-3.86278214-3.86278215-3.86278215-3.86226093-3.86278215
Mean-3.86278211-3.86278215-3.86278214-3.86278215-3.86278203-3.85557758-3.86278215
Worst-3.86278193-3.86278215-3.86278213-3.86278215-3.86278159-3.85462395-3.86278215
Std0.0000000500.000000012.27813E-150.000000240.002277180

f 20(x)Best-3.32199431-3.32199517-3.32199432-3.32199517-3.32199514-3.16816134-3.32199517
Mean-3.23885032-3.32199517-3.27443705-3.31604271-3.24969715-3.02251139-3.32199517
Worst-3.08390118-3.32199517-3.20310148-3.20310205-3.19844899-2.62970467-3.32199517
Std0.0750773200.065120142.6583E-020.066025150.144766960
It can be seen from Table 3 that, for the 7 unimodal high-dimensional functions, the MSCA algorithm finds 5 theoretical optimal values (f1, f2, f3, f4, and f6), and its search precision on the other two functions (f5 and f7) is also close to the theoretical optima. MSCA outperforms the PSO, DE, BA, and SCA algorithms in best value, mean, worst value, and standard deviation. For f1, f2, and f4, both the TLBO and MSCA algorithms find the global theoretical optimum; for f3, f5, f6, and f7, MSCA obtains better results than TLBO. MSCA also obtains better results than GWO except on f1, where both find the global optimum. This shows that MSCA has a clear advantage in search precision on unimodal high-dimensional problems. From the results for the multimodal high-dimensional functions in Table 4, MSCA obtains the globally optimal solution for 3 functions (f8, f9, and f11), and its results on the remaining functions are also better than those of the other 6 algorithms. The PSO algorithm performs poorly, while the DE algorithm performs better than the BA, TLBO, GWO, and SCA algorithms. Compared with TLBO, MSCA performs better in best value, mean, worst value, and standard deviation (except on f11), which indicates the superiority of MSCA on multimodal high-dimensional functions. From Tables 3 and 4, the search ability of MSCA is better than that of TLBO on most high-dimensional functions, and both find the global optimum on the remaining ones (i.e., f1, f2, f4, and f11). Most of the multimodal low-dimensional functions (f14 ~ f20) exhibit strong oscillations and are usually used to test an algorithm's ability to escape local optima. From the results in Table 5, the MSCA algorithm obtains the global optimal solution of all these functions, while the basic SCA has poor stability on such problems. MSCA, DE, TLBO, and GWO can all reach the theoretical optimum, illustrating that these four algorithms are able to jump out of local optima on multimodal low-dimensional functions. Figures 1-7 show the convergence curves of the optimal results of the 7 algorithms on some high-dimensional functions; the data in the figures are the best results over 20 independent runs. For convenience of plotting, the abscissa is the number of iterations, and the ordinate is the logarithm of the fitness value for f1, f3, f9, and f11 and the fitness value itself for f5, f7, and f13. It can be seen from Figures 1-7 that the MSCA algorithm converges faster and attains higher optimization precision than the other 6 intelligence algorithms.
Figure 1: Convergence rates for f1(x).
Figure 2: Convergence rates for f3(x).
Figure 3: Convergence rates for f5(x).
Figure 4: Convergence rates for f7(x).
Figure 5: Convergence rates for f9(x).
Figure 6: Convergence rates for f11(x).
Figure 7: Convergence rates for f13(x).

In order to verify that the proposed algorithm has a significant advantage over the other intelligence algorithms, statistics (best value, mean value, worst value, and variance) were computed for the 7 algorithms over 20 independent runs, and a t-test was applied to the optimization results for significance analysis. The experiments use the function ttest(x, y, 0.05, "left"), where x is the experimental result of the MSCA algorithm, y is the result of the contrast algorithm, the significance level is 0.05, and "left" denotes a left-tailed test. The test results are shown in Table 6. "+" indicates that the MSCA algorithm has a significant advantage over the contrast algorithm, "≈" indicates no significant difference, and "-" indicates that MSCA is inferior to the contrast algorithm. According to Table 6, MSCA shows significant advantages over the PSO, DE, BA, TLBO, GWO, and SCA algorithms on 20, 13, 19, 12, 15, and 17 test functions, respectively. For f18, the results of the MSCA algorithm are inferior to those of the DE, BA, and TLBO algorithms. For the remaining test functions (such as f6, f9, f11, f13, f14, f15, and f16 against DE), there is no significant difference between MSCA and the contrast algorithms, mainly because both MSCA and the contrast algorithms reach the global theoretical solution.
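The left-tailed test invoked above as ttest(x, y, 0.05, "left") can be reproduced with the standard library alone. The sketch below computes the paired t statistic directly; the fitness samples are made up for illustration, and the critical value t ≈ -1.729 is the tabulated one-tailed 0.05 quantile for 19 degrees of freedom (20 runs):

```python
import math
from statistics import mean, stdev

def paired_t_left(x, y, t_crit):
    """Left-tailed paired t-test: reject H0 (equal means) when t < t_crit,
    i.e. conclude mean(x) is significantly smaller than mean(y)."""
    d = [xi - yi for xi, yi in zip(x, y)]
    t = mean(d) / (stdev(d) / math.sqrt(len(d)))
    return t, t < t_crit

# Hypothetical best-fitness values from 20 independent runs (made-up numbers).
msca = [0.001 + 0.0001 * i for i in range(20)]
pso = [0.10 + 0.002 * i for i in range(20)]

# One-tailed critical value at alpha = 0.05 with df = 19 is about -1.729.
t, sig = paired_t_left(msca, pso, t_crit=-1.729)
print(t, sig)  # sig=True corresponds to a "+" entry in Table 6
```

A significant result here plays the role of a "+" cell; when the two algorithms return identical results on every run, the statistic is undefined, which matches the NaN entries in Table 6.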
Table 6

The comparisons of t-test for f1 ~ f20.

Algorithm | f1(x) | f2(x) | f3(x) | f4(x) | f5(x) | f6(x) | f7(x) | f8(x) | f9(x) | f10(x)
(each cell: H / P / Sig.)
MSCA/PSO | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/0.999/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+
MSCA/DE | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | NaN/NaN/≈ | 0/1.000/+ | 0/0.979/+ | NaN/NaN/≈ | 0/1.000/+
MSCA/BA | 0/1.000/+ | 0/0.980/+ | 0/1.000/+ | 0/1.000/+ | 0/0.999/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+
MSCA/TLBO | NaN/NaN/≈ | NaN/NaN/≈ | 0/1.000/+ | NaN/NaN/≈ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+ | 0/1.000/+
MSCA/GWO | NaN/NaN/≈ | 0/1.000/+ | 0/0.979/+ | 0/0.996/+ | 0/1.000/+ | 0/0.999/+ | 0/0.999/+ | 0/1.000/+ | NaN/NaN/≈ | 0/1.000/+
MSCA/SCA | 0/0.840/+ | 0/0.861/+ | 0/0.970/+ | 0/0.838/+ | 0/1.000/+ | 0/1.000/+ | 0/0.237/+ | 0/1.000/+ | NaN/NaN/≈ | 0/0.935/+

Algorithm | f11(x) | f12(x) | f13(x) | f14(x) | f15(x) | f16(x) | f17(x) | f18(x) | f19(x) | f20(x) | number of winners
MSCA/PSO | 0/0.999/+ | 0/1.000/+ | 0/1.000/+ | 0/–/+ | 0/0.976/+ | 0/0.999/+ | 0/0.999/+ | 0/0.999/+ | 0/0.999/+ | 0/1.000/+ | 20
MSCA/DE | NaN/NaN/≈ | 0/0.314/+ | NaN/NaN/≈ | NaN/NaN/≈ | 0/1.000/+ | NaN/NaN/≈ | 0/0.500/+ | 1/2e-4/- | 0/0.162/+ | 0/0.500/+ | 13
MSCA/BA | 0/1.000/+ | 0/0.996/+ | 0/1.000/+ | 0/–/+ | 0/1.000/+ | NaN/NaN/≈ | 0/0.500/+ | 1/0.001/- | 0/1.000/+ | 0/0.999/+ | 19
MSCA/TLBO | NaN/NaN/≈ | 0/0.999/+ | 0/0.979/+ | NaN/NaN/≈ | NaN/NaN/≈ | NaN/NaN/≈ | 0/0.500/+ | 1/2e-4/- | 0/0.500/+ | 0/0.500/+ | 12
MSCA/GWO | NaN/NaN/≈ | 0/1.000/+ | 0/1.000/+ | NaN/NaN/≈ | 0/0.979/+ | NaN/NaN/≈ | 0/0.979/+ | 0/0.933/+ | 0/0.987/+ | 0/1.000/+ | 15
MSCA/SCA | NaN/NaN/≈ | 0/1.000/+ | 0/1.000/+ | NaN/NaN/≈ | 0/0.942/+ | 0/1.000/+ | 0/0.999/+ | 0/0.997/+ | 0/1.000/+ | 0/1.000/+ | 17

4.3. Efficiency Analysis of the Improvement Strategy

In order to analyze the influence of the three improvement strategies on the performance of the SCA algorithm, experiments were run on the odd-numbered standard test functions in Table 1. In the C-SCA algorithm, the linear decreasing inertia weight and the exponential decreasing conversion parameter are combined with the basic SCA algorithm. In the N-SCA algorithm, the optimal-individual neighborhood search strategy is combined with the basic SCA algorithm. In the G-SCA algorithm, the greedy Levy mutation strategy is combined with the basic SCA algorithm. C-SCA, N-SCA, G-SCA, and the basic SCA are compared with the proposed algorithm, with experimental parameters consistent with those in Section 4.2. Table 7 summarizes the experimental results of the three strategies and the proposed algorithm. The results show that C-SCA, which uses a single strategy, yields only a limited improvement in search performance except on f1(x) and f3(x); N-SCA behaves essentially the same as the basic SCA; and G-SCA improves markedly on f1(x), f3(x), and f7(x) while matching the basic SCA on the other test functions. When the three improvement strategies act on the SCA algorithm together, however, the search performance of the proposed algorithm improves greatly. The main reasons are as follows. Firstly, the optimal-individual neighborhood search allows random individuals near the current optimal individual to play the role of the leader, which increases the probability of the proposed algorithm jumping out of a local optimum. Secondly, the greedy Levy mutation strategy increases the diversity of the population and the thoroughness of the local search.
Thirdly, with the linear decreasing inertia weight and the exponential decreasing conversion parameter, the algorithm uses larger inertia weight and conversion parameter values in early iterations, which favors global search, and smaller values in later iterations, which favors local search. The presented algorithm thus avoids falling into local optima, and the solution precision and convergence speed are significantly improved by the collaboration of the three improvement strategies.
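The two parameter schedules described above can be sketched as follows. This is only an illustration, not the authors' code: the bounds w_max/w_min and the exponential decay shape (constants a and mu) are assumptions.

```python
import math

def conversion_parameter(t, t_max, a=2.0, mu=5.0):
    """Exponentially decreasing conversion parameter.

    Starts near `a` and decays toward 0 as iteration t approaches t_max;
    `mu` controls the decay rate. The exact exponent is an assumption.
    """
    return a * math.exp(-mu * t / t_max)

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t=0, w_min at t=t_max."""
    return w_max - (w_max - w_min) * t / t_max

# Early iterations give large values (global search), late ones small (local search).
for t in (0, 50, 100):
    print(t, conversion_parameter(t, 100), inertia_weight(t, 100))
```

With these schedules, both parameters shrink monotonically over the run, which is the balancing behavior the paragraph above attributes to C-SCA.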
Table 7

Test statistical results with different strategies.

Algorithm | Metric | f1(x) | f3(x) | f5(x) | f7(x) | f9(x)
SCA | Best | 2.41227653E-99 | 4.35895925E-52 | 26.23629998 | 2.03984969E-06 | 0
SCA | Mean | 7.79692173E-86 | 4.66854445E-46 | 27.08121170 | 1.75661500E-05 | 0
SCA | Worst | 1.54558620E-84 | 4.16089002E-45 | 28.65013850 | 5.33965433E-05 | 0
SCA | Std | 3.45449900E-85 | 1.08000440E-45 | 0.68246666 | 1.42228474E-05 | 0

C-SCA | Best | 3.10050150E-187 | 6.28523597E-119 | 26.40050047 | 2.76012374E-06 | 0
C-SCA | Mean | 3.07519823E-173 | 6.04561631E-105 | 27.03944045 | 2.34677075E-05 | 0
C-SCA | Worst | 3.73083365E-172 | 1.20899644E-103 | 28.04482481 | 1.15810162E-04 | 0
C-SCA | Std | 0 | 2.70338330E-104 | 0.55108210 | 2.52889873E-05 | 0

N-SCA | Best | 1.32804165E-98 | 8.06533668E-57 | 25.94560968 | 8.07111724E-07 | 0
N-SCA | Mean | 1.14472110E-87 | 2.02781396E-44 | 26.90664795 | 1.80576054E-05 | 0
N-SCA | Worst | 2.23304334E-86 | 2.97707622E-43 | 28.65835507 | 8.44166161E-05 | 0
N-SCA | Std | 4.98734410E-87 | 6.76072976E-44 | 0.68607944 | 1.91972975E-05 | 0

G-SCA | Best | 1.53038991E-247 | 6.21923122E-69 | 25.07435867 | 9.33982351E-11 | 0
G-SCA | Mean | 1.38742519E-228 | 1.57837645E-56 | 25.43773646 | 2.82512818E-10 | 0
G-SCA | Worst | 2.75931538E-227 | 2.15594380E-55 | 25.96615750 | 4.971578236E-9 | 0
G-SCA | Std | 0 | 5.19044381E-56 | 0.22877237 | 1.12630342E-10 | 0

MSCA | Best | 0 | 0 | 7.76116677E-12 | 5.95321712E-07 | 0
MSCA | Mean | 0 | 0 | 5.82095129E-06 | 2.06970501E-05 | 0
MSCA | Worst | 0 | 0 | 2.47408894E-05 | 5.50713991E-05 | 0
MSCA | Std | 0 | 0 | 6.12299693E-11 | 1.31769905E-05 | 0

Algorithm | Metric | f11(x) | f13(x) | f15(x) | f17(x) | f19(x)
SCA | Best | 0 | 1.82820415 | 0.00031054 | 0.39788838 | -3.86226093
SCA | Mean | 0 | 2.03922706 | 0.00041204 | 0.39799018 | -3.85557758
SCA | Worst | 0 | 2.22177498 | 0.00126659 | 0.39837908 | -3.85462395
SCA | Std | 0 | 0.10672949 | 0.00029067 | 0.00013970 | 0.00227718

C-SCA | Best | 0 | 1.84451065 | 3.09612692E-04 | 0.39789121 | -3.86272470
C-SCA | Mean | 0 | 2.05252816 | 3.62441048E-04 | 0.39795214 | -3.85598742
C-SCA | Worst | 0 | 2.27441394 | 1.23237237E-03 | 0.39808342 | -3.85454062
C-SCA | Std | 0 | 0.09517819 | 2.04871436E-04 | 0.00006120 | 0.00279692

N-SCA | Best | 0 | 1.71770711 | 0.00030866 | 0.39789254 | -3.86269936
N-SCA | Mean | 0 | 1.99159860 | 0.00037283 | 0.39799489 | -3.85705467
N-SCA | Worst | 0 | 2.18349306 | 0.00124886 | 0.39841067 | -3.85470429
N-SCA | Std | 0 | 0.11810434 | 0.00020886 | 0.00015085 | 0.00349793

G-SCA | Best | 0 | 0.00384498 | 0.00030749 | 0.39788736 | -3.86278215
G-SCA | Mean | 0 | 0.01677159 | 0.00046474 | 0.39788736 | -3.86278199
G-SCA | Worst | 0 | 0.07222708 | 0.00122317 | 0.39788736 | -3.86278146
G-SCA | Std | 0 | 0.01375995 | 0.00030237 | 0 | 0.00000021

MSCA | Best | 0 | 1.349783804E-32 | 0.00030748598 | 0.397887358 | -3.86278215
MSCA | Mean | 0 | 1.349783804E-32 | 0.00030748598 | 0.397887358 | -3.86278215
MSCA | Worst | 0 | 1.349783804E-32 | 0.00030748598 | 0.397887358 | -3.86278215
MSCA | Std | 0 | 2.808011502E-48 | 0 | 0 | 0
The Wilcoxon rank-sum test results in Table 8 show that the C-SCA algorithm has a significant advantage over the basic SCA algorithm only on functions f1(x), f3(x), and f13(x); that there is no significant difference between the N-SCA algorithm and the basic SCA algorithm; and that the G-SCA algorithm has a significant advantage over the basic SCA algorithm on all functions except f9(x) and f11(x).
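For reference, the rank-sum statistic behind Table 8 can be approximated with the standard library alone. This is a sketch using the large-sample normal approximation without tie correction (library routines such as SciPy's `scipy.stats.ranksums` compute the same statistic); the sample values are made up:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns the p-value for H0: both samples come from the same
    distribution. Ties are not corrected for in this sketch.
    """
    n, m = len(x), len(y)
    combined = sorted((v, 0 if i < n else 1)
                      for i, v in enumerate(list(x) + list(y)))
    # Sum of the (1-based) ranks held by the first sample.
    r = sum(rank + 1 for rank, (v, g) in enumerate(combined) if g == 0)
    mu = n * (n + m + 1) / 2
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (r - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Clearly separated samples give a small p; interleaved samples a large one.
shifted = rank_sum_p(list(range(10)), [v + 100 for v in range(10)])
mixed = rank_sum_p([0, 2, 4, 6, 8, 10, 12, 14, 16, 18],
                   [1, 3, 5, 7, 9, 11, 13, 15, 17, 19])
print(shifted, mixed)
```

A p-value below 0.05 corresponds to a "+" in Table 8; when all runs of both algorithms return identical values, the statistic degenerates, matching the NaN entries for f9(x) and f11(x).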
Table 8

Test statistical results of Wilcoxon rank sum test.

Function | C-SCA/SCA P value | Sig. | N-SCA/SCA P value | Sig. | G-SCA/SCA P value | Sig.
f1(x) | 6.7956e-08 | + | 0.0909 | ≈ | 6.7956e-08 | +
f3(x) | 5.8923e-08 | + | 0.6359 | ≈ | 6.7956e-08 | +
f5(x) | 0.6554 | ≈ | 0.3369 | ≈ | 6.7956e-08 | +
f7(x) | 0.4094 | ≈ | 0.7557 | ≈ | 6.7956e-08 | +
f9(x) | NaN | ≈ | NaN | ≈ | NaN | ≈
f11(x) | NaN | ≈ | NaN | ≈ | NaN | ≈
f13(x) | 2.6898e-06 | + | 0.2184 | ≈ | 6.7956e-08 | +
f15(x) | 0.4570 | ≈ | 0.7972 | ≈ | 0.0123 | +
f17(x) | 0.6168 | ≈ | 0.9461 | ≈ | 8.0065e-09 | +
f19(x) | 0.3942 | ≈ | 0.7972 | ≈ | 5.3656e-08 | +
number of winners (+/≈) | 3/7 | | 0/10 | | 8/2 |

4.4. Parameter Sensitivity Analysis in the Algorithm

4.4.1. The Analysis of Parameter λ in the Optimal Individual Domain Search Strategy

In order to explore the influence of the parameter λ in the optimal-individual neighborhood search strategy, the even-numbered standard test functions in Table 1 are selected, and independent experiments are run with λ set to 0.005, 0.01, 0.02, 0.03, and 0.05 while the other parameters stay unchanged. The neighborhood search strategy acts on the SCA algorithm alone (N-SCA). Table 9 summarizes the results of the N-SCA algorithm for the different values of λ, with the winner of each comparison marked by "(+)". As the last row of Table 9 shows, λ = 0.01 yields 3 winners, more than any other setting, so λ = 0.01 is selected as the optimal parameter.
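A plausible reading of the neighborhood search strategy can be sketched as follows. The exact perturbation formula is an assumption made for illustration, not quoted from the paper: λ scales the neighborhood radius relative to the width of the search range, and a random point in that box replaces the best individual as the guide.

```python
import random

def neighborhood_guide(best, lam, lower, upper):
    """Pick a random individual in a small neighborhood of the best one.

    `lam` scales the neighborhood radius relative to the search range;
    the uniform box perturbation is an assumption for illustration.
    """
    guide = []
    for j, b in enumerate(best):
        radius = lam * (upper[j] - lower[j])
        v = b + random.uniform(-radius, radius)
        guide.append(min(max(v, lower[j]), upper[j]))  # clamp to the bounds
    return guide

random.seed(1)
g = neighborhood_guide([0.0, 0.0], lam=0.01, lower=[-100, -100], upper=[100, 100])
print(g)  # a point within +/-2 of the best in each dimension
```

Under this reading, a small λ keeps the guide close to the best individual while still breaking the exact-leader lock-in; a larger λ widens the search but weakens the pull toward the best region, which matches the sensitivity pattern in Table 9.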
Table 9

Statistical results for different values of λ.

Function | λ = 0.005 (mean) | λ = 0.01 (mean) | λ = 0.02 (mean) | λ = 0.03 (mean) | λ = 0.05 (mean)
f2(x) | 1.46279E-57 | 6.15031E-58 | 2.04298E-57 | 5.87904E-57 | 2.34908E-58 (+)
f4(x) | 2.45329E-29 | 3.29394E-30 (+) | 1.00414E-29 | 7.65892E-30 | 2.45329E-29
f6(x) | 3.68341 | 3.63003 (+) | 3.68868 | 3.72788 | 3.63373
f8(x) | -4363.10086 | -4312.00222 | -4446.01569 (+) | -4410.44537 | -4371.62707
f10(x) | 0.41008 | 0.08565 | 0.03147 (+) | 0.37909 | 0.07129
f12(x) | 0.34002 (+) | 0.35838 | 0.34005 | 0.34630 | 0.33576
f14(x) | 0.99800 | 0.99800 | 0.99800 | 0.99800 | 0.99800
f16(x) | -1.03163 | -1.03163 | -1.03163 | -1.03163 | -1.03163
f18(x) | 3.00000 | 3.00000 | 3.00000 | 3.00000 | 3.00000
f20(x) | -3.03411 | -3.07592 (+) | -3.05259 | -3.04242 | -3.03034
number of winners | 1 | 3 | 2 | 0 | 1

4.4.2. The Analysis of Parameter ε in the Greedy Levy Mutation Strategy

The value of the parameter ε has a great effect on algorithm performance in the self-adapting mutation mode adopted in (10). In order to explore its influence on search performance, the even-numbered standard test functions in Table 1 are selected, and independent experiments are run with ε set to 10, 30, 60, and 90 while the other parameters stay unchanged. The greedy Levy mutation strategy acts on the SCA algorithm alone (G-SCA). Table 10 summarizes the results of the G-SCA algorithm for the different values of ε, with the optimal results marked by "(+)". As Table 10 shows, when ε takes 10, 30, 60, and 90, G-SCA obtains 1, 5, 0, and 1 optimal search results, respectively. The results for ε = 30 are clearly better than those for the other values, so ε = 30 is chosen as a reasonable parameter.
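The greedy Levy mutation can be sketched as below. This is an illustration, not the paper's equation (10): Levy steps are drawn with Mantegna's algorithm, and a fixed `scale` stands in for the self-adapting factor that ε controls in the paper. The "greedy" part is that a mutant replaces the best individual only when it improves the fitness.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def greedy_levy_mutation(best, fitness, scale=0.01):
    """Mutate the best individual with a Levy step and keep the mutant
    only if it improves the fitness (greedy acceptance); `scale` is a
    placeholder for the self-adapting factor controlled by epsilon."""
    mutant = [x + scale * levy_step() for x in best]
    return mutant if fitness(mutant) < fitness(best) else best

random.seed(7)
sphere = lambda x: sum(v * v for v in x)  # simple test objective
best = [0.5, -0.3]
for _ in range(200):
    best = greedy_levy_mutation(best, sphere)
print(sphere(best))
```

Because of the greedy acceptance, the best fitness is monotonically non-increasing, which is why applying the mutation only to the best individual (as the conclusion notes) cannot degrade the solution while keeping the added cost bounded.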
Table 10

Statistical results for different values of ε.

Function | ε = 10 (mean) | ε = 30 (mean) | ε = 60 (mean) | ε = 90 (mean)
f2(x) | 2.1934E-150 | 1.7432E-152 (+) | 1.5276E-150 | 9.7415E-152
f4(x) | 6.19395E-36 | 5.81864E-37 | 9.3441E-38 | 2.0556E-38 (+)
f6(x) | 0.00152 | 0.00148 (+) | 0.00151 | 0.00153
f8(x) | -7566.98206 | -7598.33875 (+) | -7561.27736 | -7596.35448
f10(x) | 0.198701 (+) | 0.398512 | 0.398299 | 0.398304
f12(x) | 0.00026 | 0.00024 (+) | 0.00027 | 0.00025
f14(x) | 0.99800 | 0.99800 | 0.99800 | 0.99800
f16(x) | -1.03163 | -1.03163 | -1.03163 | -1.03163
f18(x) | 3.00000 | 3.00000 | 3.00000 | 3.00000
f20(x) | -3.25517 | -3.23962 (+) | -3.23028 | -3.23419
number of winners | 1 | 5 | 0 | 1

5. Conclusion

An improved sine-cosine algorithm based on greedy Levy mutation is proposed in this paper. The proposed algorithm adopts both an exponential decreasing conversion parameter and a linear decreasing inertia weight to better balance its global search and local development abilities. An update mode guided by a random individual near the optimal individual is introduced, which increases the probability of the algorithm jumping out of a local extremum. Inspired by the Levy flight pattern of long-term short-distance movement with occasional long-distance jumps, a self-adapting greedy Levy mutation strategy is designed to mutate the optimal individual; this strategy increases population diversity and reduces search oscillation, allowing the algorithm to converge smoothly to the global optimum. Twenty typical benchmark test functions are applied to verify the performance of the proposed algorithm. The results show that the search precision and convergence speed of the proposed algorithm are greatly improved through the collaboration of the three improvement strategies. The contribution of each improvement strategy is analyzed in detail, the influence of parameter selection on algorithm performance is discussed, and suggestions on parameter selection are given. However, the proposed algorithm is still in its infancy both theoretically and practically, and its parameters are set by empirical tests. Moreover, the greedy Levy mutation strategy greatly increases the time complexity of the algorithm; therefore, the proposed algorithm applies the greedy Levy mutation only to the best individual at each iteration.
Related articles: 3 in total

1.  Multi-Threshold Image Segmentation of Maize Diseases Based on Elite Comprehensive Particle Swarm Optimization and Otsu.

Authors:  Chengcheng Chen; Xianchang Wang; Ali Asghar Heidari; Helong Yu; Huiling Chen
Journal:  Front Plant Sci       Date:  2021-12-13       Impact factor: 5.753

2.  Boosted Sine Cosine Algorithm with Application to Medical Diagnosis.

Authors:  Xiaojia Ye; Zhennao Cai; Chenglang Lu; Huiling Chen; Zhifang Pan
Journal:  Comput Math Methods Med       Date:  2022-06-22       Impact factor: 2.809

3.  A comprehensive survey of sine cosine algorithm: variants and applications.

Authors:  Asma Benmessaoud Gabis; Yassine Meraihi; Seyedali Mirjalili; Amar Ramdane-Cherif
Journal:  Artif Intell Rev       Date:  2021-06-02       Impact factor: 8.139
