
A Novel Hybrid Meta-Heuristic Algorithm Based on the Cross-Entropy Method and Firefly Algorithm for Global Optimization.

Guocheng Li1,2, Pei Liu3, Chengyi Le4, Benda Zhou1,2.   

Abstract

Global optimization, especially on a large scale, is challenging to solve due to its nonlinearity and multimodality. In this paper, in order to enhance the global searching ability of the firefly algorithm (FA) inspired by bionics, a novel hybrid meta-heuristic algorithm is proposed by embedding the cross-entropy (CE) method into the firefly algorithm. With adaptive smoothing and co-evolution, the proposed method fully absorbs the ergodicity, adaptability and robustness of the cross-entropy method. The new hybrid algorithm achieves an effective balance between exploration and exploitation to avoid falling into a local optimum, enhance its global searching ability, and improve its convergence rate. The results of numerical experiments show that the new hybrid algorithm possesses more powerful global search capacity, higher optimization precision, and stronger robustness.

Keywords:  co-evolution; cross-entropy method; firefly algorithm; global optimization; meta-heuristic

Year:  2019        PMID: 33267208      PMCID: PMC7514982          DOI: 10.3390/e21050494

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

In many tasks or applications, global optimization plays a vital role, such as in power systems, industrial design, image processing, biological engineering, job-shop scheduling, economic dispatch and financial markets. In this paper, we focus our attention on unconstrained optimization problems, which can be formulated as $\min_{x \in \mathbb{R}^{n}} f(x)$, where $f: \mathbb{R}^{n} \to \mathbb{R}$ and n refers to the problem's dimension [1]. Traditional optimization methods such as gradient-based methods usually struggle to deal with these challenging problems because the objective function can be nonlinear, multimodal and non-convex [2,3]. Thus, for decades, researchers have explored many derivative-free optimization methods to solve them. Generally, these optimization methods can be divided into two main classes: deterministic algorithms and stochastic algorithms [3,4]. The former, such as Hill-Climbing [5], Newton–Raphson [6], the DIRECT Algorithm [7], and Geometric and Information Global Optimization Methods with local tuning or local improvement [8,9], produce the same final results whenever the same set of initial values is used at the beginning [10]. The latter, such as the two well-known algorithms Genetic Algorithm (GA) [11] and Particle Swarm Optimization (PSO) [12], use randomness in their strategies, which enables them to escape from local optima and search more regions on a global scale [10]; they have become very popular for solving real-life problems [3].
In the past two decades, meta-heuristics based on evolutionary computation and swarm intelligence have emerged and become prevalent, such as Ant Colony Optimization (ACO) [13], Differential Evolution (DE) [14], Harmony Search (HS) [15], Bacterial Foraging Optimization Algorithm (BFOA) [16], Honey Bees Mating Optimization (HBMO) [17], Artificial Bee Colony (ABC) [18], Biogeography-Based Optimization (BBO) [19], Gravitational Search Algorithm (GSA) [20], Firefly Algorithm (FA) [21], Cuckoo Search (CS) [22], Bat Algorithm (BA) [23], Grey Wolf Optimizer (GWO) [24], Ant Lion Optimizer (ALO) [25], Moth Flame Optimizer (MFO) [26], Dragonfly Algorithm (DA) [27], Whale Optimization Algorithm (WOA) [28], Salp Swarm Algorithm (SSA) [29], Crow Search Algorithm (CSA) [30], Polar Bear Optimization (PBO) [31], Tree Growth Algorithm (TGA) [32], and Butterfly Optimization Algorithm (BOA) [33]. Meta-heuristic algorithms have been widely adopted to deal with global optimization and engineering optimization problems, and have attracted much attention as effective tools for optimization. However, no single meta-heuristic performs best on all problems: an algorithm may perform well on certain optimization problems but be far from ideal on many others [34].
In order to overcome this shortcoming, many hybrid meta-heuristic algorithms combining meta-heuristics with exact algorithms or other meta-heuristics have been proposed to solve more complicated optimization problems, such as Hybrid Genetic Algorithm with Particle Swarm Optimization [35], Hybrid Particle Swarm and Ant Colony Optimization [36], Hybrid Particle Swarm Optimization with Gravitational Search Algorithm [37], Hybrid Evolutionary Firefly Algorithm [38], Hybrid Artificial Bee Colony with Firefly Algorithm [39], Hybrid Firefly-Genetic Algorithm [40], Hybrid Firefly Algorithm with Differential Evolution [10], Simulated Annealing Gaussian Bat Algorithm [41], Hybrid Harmony Search with Cuckoo Search [42], Hybrid Harmony Search with Artificial Bee Colony Algorithm [43], and Hybrid Whale Optimization Algorithm with Simulated Annealing [44]. These hybrid meta-heuristic algorithms have been successfully applied in function optimization, engineering optimization, portfolio selection, shop scheduling optimization, and feature selection. Based on co-evolution, this paper explores a new hybrid meta-heuristic algorithm combining the cross-entropy (CE) method and the firefly algorithm (FA). The cross-entropy method was proposed by Rubinstein [45] in 1997 to solve rare event probability estimation in complex random networks, while the firefly algorithm (FA) was developed by Yang [21] for multimodal optimization, inspired by the flashing pattern of tropical fireflies in nature. The motivation of the proposed hybrid algorithm is to improve the global search ability by embedding the cross-entropy method into the firefly algorithm to obtain an effective balance between exploration and exploitation. The rest of the paper is organized as follows. In Section 2, CE and FA are briefly introduced, and their hybridization is presented in Section 3. Numerical experiments and results are given in Section 4.
Further analysis and a discussion of the performance of the new method are conducted in Section 5. In Section 6, the conclusions of the paper are presented.

2. Preliminaries

2.1. The Cross-Entropy Method

The cross-entropy (CE) method was proposed by Rubinstein [45] in 1997. It is based on Monte Carlo technology and uses the Kullback–Leibler divergence to measure the cross-entropy between two sampling distributions; an optimization problem is solved by minimizing this divergence to obtain the optimal probability distribution parameters. CE has excellent global optimization capability, good adaptability, and strong robustness; thus, Yang regards it as a meta-heuristic algorithm [4]. However, due to the large sample size it requires, it has the disadvantages of high computational cost and a slow convergence rate. CE not only solves rare event probability estimation problems; it can also be used to solve complex optimization problems such as combinatorial optimization [46,47,48], function optimization [46,48,49], engineering design [50], vehicle routing problems [51], and problems from other fields [52,53,54]. Let us consider the optimization problem

$$\gamma^{*} = \min_{x \in X} S(x), \quad (1)$$

where S is a real-valued performance function on X. Now, we associate the above problem with a probability distribution estimation problem, and the auxiliary problem is obtained:

$$\ell(\gamma) = P_{u}\left( S(X) \le \gamma \right) = E_{u}\left[ I_{\{S(X) \le \gamma\}} \right], \quad (2)$$

where $E_{u}$ is the expectation operator under the sampling density $f(\cdot; u)$, $\gamma$ is a threshold or level parameter, and $I_{\{S(X) \le \gamma\}}$ is the indicator function, whose value is 1 if $S(X) \le \gamma$ and 0 otherwise. In order to reduce the number of samples, the importance sampling method is introduced in CE. Consequently, we can rewrite Equation (2) as

$$\hat{\ell} = \frac{1}{N} \sum_{i=1}^{N} I_{\{S(X_{i}) \le \gamma\}} \, \frac{f(X_{i}; u)}{g(X_{i})}, \quad (3)$$

where $X_{1}, \ldots, X_{N}$ is a random sample from the importance sampling density $g$. In order to obtain the optimal importance sampling density, the Kullback–Leibler divergence is employed to measure the distance between two densities, i.e., the cross-entropy, and it is minimized to obtain the optimal density $g^{*}$, which is equivalent to solving the program [45]

$$\max_{v} \frac{1}{N} \sum_{i=1}^{N} I_{\{S(X_{i}) \le \gamma\}} \ln f(X_{i}; v). \quad (4)$$

The main CE algorithm for optimization problems is summarized in Algorithm 1.
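As a concrete illustration (not the authors' MATLAB implementation), the CE iteration can be sketched in Python: sample from a parametric density, keep the elite samples below the current level, and smooth-update the distribution parameters. The Gaussian sampling density and all parameter values here are illustrative assumptions.

```python
import numpy as np

def cross_entropy_minimize(f, dim, n_samples=100, elite_frac=0.1,
                           n_iters=50, smoothing=0.7, seed=0):
    """Minimal CE minimization: sample from a Gaussian density, keep the
    elite samples (those below the current level gamma_t), and apply a
    smoothed update to the distribution parameters (mu, sigma)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)              # mean of the sampling density
    sigma = np.full(dim, 5.0)       # std-dev of the sampling density
    n_elite = max(1, int(elite_frac * n_samples))
    best_x, best_f = None, np.inf
    for _ in range(n_iters):
        X = rng.normal(mu, sigma, size=(n_samples, dim))
        scores = np.apply_along_axis(f, 1, X)
        order = np.argsort(scores)
        elite = X[order[:n_elite]]            # samples with S(x) <= gamma_t
        if scores[order[0]] < best_f:
            best_f = float(scores[order[0]])
            best_x = X[order[0]].copy()
        # adaptive smoothing: blend the new parameters with the old ones
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return best_x, best_f

sphere = lambda x: float(np.sum(x ** 2))
x_star, f_star = cross_entropy_minimize(sphere, dim=5)
```

The `smoothing` blend corresponds to the adaptive smoothing mentioned in the abstract; it prevents the sampling density from collapsing prematurely.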

2.2. Firefly Algorithm

The firefly algorithm (FA) was proposed by Yang [21], inspired by the unique light signal system of fireflies in nature. Fireflies use radiance as a signal to locate and attract the opposite sex, and even to forage. By idealizing the flashing characteristics of fireflies, the firefly algorithm was formulated for solving optimization problems. Using this algorithm, random search and optimization can be performed within a certain range, such as the solution space. Through the movement of the fireflies and the constant renewal of brightness and attraction, they continually approach the best position and ultimately obtain the best solution to the problem. FA has attracted much attention and has been applied to many applications such as global optimization [55], multimodal optimization [21], multi-objective optimization [56], engineering design problems [57], scheduling problems [58], and other fields [59,60,61,62]. In order to design FA properly, two important issues need to be defined: the variation of light intensity and the formulation of attractiveness [21]. The light intensity of a firefly can be approximated as follows:

$$I(r) = I_{0} e^{-\gamma r^{2}}, \quad (5)$$

where $I_{0}$ represents the original light intensity and $\gamma$ is a fixed light absorption coefficient. $r_{ij}$ indicates the distance between firefly i and firefly j and is defined as follows:

$$r_{ij} = \left\| x_{i} - x_{j} \right\| = \sqrt{\sum_{k=1}^{n} (x_{i,k} - x_{j,k})^{2}}. \quad (6)$$

The attractiveness of a firefly can be formulated as follows:

$$\beta(r) = \beta_{0} e^{-\gamma r^{2}}, \quad (7)$$

where $\beta_{0}$ represents the attractiveness at $r = 0$, which is the maximum attractiveness. Due to the attractiveness of firefly j, the position of firefly i is updated as follows:

$$x_{i}^{t+1} = x_{i}^{t} + \beta_{0} e^{-\gamma r_{ij}^{2}} \left( x_{j}^{t} - x_{i}^{t} \right) + \alpha \varepsilon_{i}, \quad (8)$$

where $x_{i}$ and $x_{j}$ are the positions of fireflies i and j, respectively. The step factor $\alpha$ is a constant satisfying $\alpha \in [0, 1]$, and $\varepsilon_{i}$ is a vector of uniformly distributed random numbers, which was later replaced by Lévy flight [55]. Based on the above, the main FA can be summarized in pseudo-code as Algorithm 2.
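A minimal Python sketch of the FA update rules described above (a generic textbook version, not the paper's code): scaling the absorption coefficient to the domain width is a common heuristic assumed here, and the Lévy-flight variant is omitted.

```python
import numpy as np

def firefly_minimize(f, dim, bounds=(-5.0, 5.0), n_fireflies=25,
                     n_iters=100, beta0=1.0, alpha=0.2, seed=0):
    """Minimal firefly algorithm: a dimmer firefly i moves toward every
    brighter firefly j with attractiveness beta0 * exp(-gamma * r_ij^2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    gamma = 1.0 / (hi - lo) ** 2          # absorption scaled to domain width
    X = rng.uniform(lo, hi, size=(n_fireflies, dim))
    light = np.apply_along_axis(f, 1, X)  # lower cost = brighter firefly
    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:   # firefly i is attracted by j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                             + alpha * (rng.random(dim) - 0.5))
                    np.clip(X[i], lo, hi, out=X[i])
                    light[i] = f(X[i])
        alpha *= 0.97                     # gradually damp the random step
    best = int(np.argmin(light))
    return X[best], float(light[best])

rastrigin = lambda x: float(10 * x.size + np.sum(x**2 - 10*np.cos(2*np.pi*x)))
x_best, f_best = firefly_minimize(rastrigin, dim=2)
```

Because the brightest firefly attracts no one brighter than itself, it stays put, while the damped random term lets the rest of the swarm explore before settling.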

3. Novel Hybrid Cross-Entropy Method and Firefly Algorithm

In this section, the details of the new hybrid algorithm are presented. A meta-heuristic algorithm has two main functions, exploration and exploitation, and an excellent meta-heuristic algorithm should balance them effectively to achieve better performance [63]. The cross-entropy method, based on the Monte Carlo technique, has the advantages of strong global optimization ability, good adaptability, and robustness [46]. It also has obvious disadvantages: a large sample size, high computational cost, and slow convergence. At the same time, the firefly algorithm, based on bionics, has the advantages of strong local search ability and fast convergence, but it tends to fall into a local optimum rather than obtaining a global optimal solution [21]. Based on a co-evolutionary technique, this paper constructs a new hybrid meta-heuristic algorithm, named the Cross-Entropy Firefly Algorithm (CEFA), by embedding the cross-entropy method into the firefly algorithm. The new method contains two optimization operators, the CE operator and the FA operator, which implement information sharing between the CE sample and the FA population through co-evolution in each iteration. While the FA operator updates its population using the elite sample from CE to improve population diversity, the CE operator uses the FA population to calculate its initial probability distribution parameters in order to speed up convergence. The new hybrid meta-heuristic algorithm based on this co-evolutionary technique preserves the fast local-search convergence of the swarm-intelligence bionic algorithm while making full use of the global optimization ability of the cross-entropy stochastic optimization method. The introduction of a co-evolutionary technique not only makes meta-heuristic algorithms from different backgrounds complement each other but also enhances their respective advantages.
Therefore, CEFA has strong global exploration capability and local exploitation capability, and can quickly converge to the global optimal solution, which provides powerful algorithmic support for complex function optimization and engineering optimization problems. The pseudo-code of CEFA is described in Algorithm 3. To show the co-evolutionary process between the FA operator and the CE operator more clearly, the flow chart of CEFA is presented in Figure 1.
Figure 1

The flow chart of the Cross-Entropy Firefly Algorithm (CEFA).
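Under the stated design, FA exploitation plus CE distribution updates with elite-sample injection, one plausible reading of the co-evolution loop can be sketched as follows. This is a simplified Python stand-in for Algorithm 3, not the authors' implementation; the operator schedule and the elite-replacement rule are assumptions.

```python
import numpy as np

def cefa_minimize(f, dim, bounds=(-5.0, 5.0), pop=30, n_outer=40,
                  fa_steps=5, ce_samples=50, elite_frac=0.2,
                  smoothing=0.7, seed=0):
    """Simplified co-evolution loop: FA attraction moves refine the
    population, then a CE step fits a Gaussian to that population,
    samples from it, and injects elite samples over the worst fireflies."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    gamma = 1.0 / (hi - lo) ** 2          # absorption scaled to domain width
    beta0, alpha = 1.0, 0.2
    X = rng.uniform(lo, hi, size=(pop, dim))
    cost = np.apply_along_axis(f, 1, X)
    n_elite = max(1, int(elite_frac * ce_samples))
    for _ in range(n_outer):
        # --- FA operator: local exploitation by attraction ---
        for _ in range(fa_steps):
            for i in range(pop):
                for j in range(pop):
                    if cost[j] < cost[i]:
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                                 + alpha * (rng.random(dim) - 0.5))
                        np.clip(X[i], lo, hi, out=X[i])
                        cost[i] = f(X[i])
        # --- CE operator: distribution parameters from the FA population ---
        mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
        S = rng.normal(mu, sigma, size=(ce_samples, dim))
        s_cost = np.apply_along_axis(f, 1, S)
        elite = S[np.argsort(s_cost)[:n_elite]]
        # elite CE samples replace the worst fireflies (information sharing)
        worst = np.argsort(cost)[-n_elite:]
        X[worst] = smoothing * elite + (1 - smoothing) * X[worst]
        cost[worst] = np.apply_along_axis(f, 1, X[worst])
        alpha *= 0.95                     # damp FA randomization over time
    best = int(np.argmin(cost))
    return X[best], float(cost[best])

sphere = lambda x: float(np.sum(x ** 2))
xb, fb = cefa_minimize(sphere, dim=5)
```

The two operators exchange information exactly as the text describes: the CE density is re-fitted from the FA population each cycle, and the elite CE samples refresh the weakest fireflies.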

4. Experiment and Results

4.1. Benchmark Functions

In this section, 23 standard test functions utilized by many researchers [20,24,25,27,28,29] were employed to evaluate the performance of the proposed hybrid algorithm CEFA on numerical optimization problems. The benchmark functions, comprising seven unimodal functions, six multimodal functions, and ten fixed-dimension multimodal functions, are described in Appendix A (Table A1). The unimodal functions were used to evaluate the exploitation and convergence of an algorithm, while the multimodal functions were used to benchmark exploration and local optima avoidance [25,27]. Further information on all the benchmark functions can be found in Yao et al. (1999) [64].
Table A1

The definition of benchmark functions.

Function;  Dim;  Range;  Fmin;  Type

$F_{1}(x)=\sum_{i=1}^{n}x_{i}^{2}$;  30, 50, 100;  [−100, 100];  0;  Unimodal
$F_{2}(x)=\sum_{i=1}^{n}|x_{i}|+\prod_{i=1}^{n}|x_{i}|$;  30, 50, 100;  [−10, 10];  0;  Unimodal
$F_{3}(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_{j}\right)^{2}$;  30, 50, 100;  [−100, 100];  0;  Unimodal
$F_{4}(x)=\max_{i}\{|x_{i}|,\ 1\le i\le n\}$;  30, 50, 100;  [−100, 100];  0;  Unimodal
$F_{5}(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}\right]$;  30, 50, 100;  [−30, 30];  0;  Unimodal
$F_{6}(x)=\sum_{i=1}^{n}([x_{i}+0.5])^{2}$;  30, 50, 100;  [−100, 100];  0;  Unimodal
$F_{7}(x)=\sum_{i=1}^{n}i\,x_{i}^{4}+\mathrm{random}[0,1)$;  30, 50, 100;  [−1.28, 1.28];  0;  Unimodal
$F_{8}(x)=\sum_{i=1}^{n}-x_{i}\sin(\sqrt{|x_{i}|})$;  30, 50, 100;  [−500, 500];  −418.9829×n;  Multimodal
$F_{9}(x)=\sum_{i=1}^{n}\left[x_{i}^{2}-10\cos(2\pi x_{i})+10\right]$;  30, 50, 100;  [−5.12, 5.12];  0;  Multimodal
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_{i}^{2}}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_{i})\right)+20+e$;  30, 50, 100;  [−32, 32];  0;  Multimodal
$F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_{i}^{2}-\prod_{i=1}^{n}\cos\left(\tfrac{x_{i}}{\sqrt{i}}\right)+1$;  30, 50, 100;  [−600, 600];  0;  Multimodal
$F_{12}(x)=\tfrac{\pi}{n}\left\{10\sin^{2}(\pi y_{1})+\sum_{i=1}^{n-1}(y_{i}-1)^{2}\left[1+10\sin^{2}(\pi y_{i+1})\right]+(y_{n}-1)^{2}\right\}+\sum_{i=1}^{n}u(x_{i},10,100,4)$;  30, 50, 100;  [−50, 50];  0;  Multimodal
  with $y_{i}=1+\tfrac{x_{i}+1}{4}$ and $u(x_{i},a,k,m)=k(x_{i}-a)^{m}$ if $x_{i}>a$; $0$ if $-a\le x_{i}\le a$; $k(-x_{i}-a)^{m}$ if $x_{i}<-a$
$F_{13}(x)=0.1\left\{\sin^{2}(3\pi x_{1})+\sum_{i=1}^{n}(x_{i}-1)^{2}\left[1+\sin^{2}(3\pi x_{i}+1)\right]\right\}+\sum_{i=1}^{n}u(x_{i},5,100,4)$;  30, 50, 100;  [−50, 50];  0;  Multimodal
$F_{14}(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_{i}-a_{ij})^{6}}\right)^{-1}$;  2;  [−65.536, 65.536];  1;  Multimodal
$F_{15}(x)=\sum_{i=1}^{11}\left[a_{i}-\tfrac{x_{1}(b_{i}^{2}+b_{i}x_{2})}{b_{i}^{2}+b_{i}x_{3}+x_{4}}\right]^{2}$;  4;  [−5, 5];  0.00030;  Multimodal
$F_{16}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\tfrac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4}$;  2;  [−5, 5];  −1.0316;  Multimodal
$F_{17}(x)=\left(x_{2}-\tfrac{5.1}{4\pi^{2}}x_{1}^{2}+\tfrac{5}{\pi}x_{1}-6\right)^{2}+10\left(1-\tfrac{1}{8\pi}\right)\cos x_{1}+10$;  2;  [−5, 5];  0.398;  Multimodal
$F_{18}(x)=\left[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})\right]\times\left[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})\right]$;  2;  [−5, 5];  3;  Multimodal
$F_{19}(x)=-\sum_{i=1}^{4}c_{i}\exp\left(-\sum_{j=1}^{3}a_{ij}(x_{j}-p_{ij})^{2}\right)$;  3;  [1, 3];  −3.86;  Multimodal
$F_{20}(x)=-\sum_{i=1}^{4}c_{i}\exp\left(-\sum_{j=1}^{6}a_{ij}(x_{j}-p_{ij})^{2}\right)$;  6;  [0, 1];  −3.32;  Multimodal
$F_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_{i})(X-a_{i})^{T}+c_{i}\right]^{-1}$;  4;  [0, 10];  −10.1532;  Multimodal
$F_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_{i})(X-a_{i})^{T}+c_{i}\right]^{-1}$;  4;  [0, 10];  −10.4028;  Multimodal
$F_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_{i})(X-a_{i})^{T}+c_{i}\right]^{-1}$;  4;  [0, 10];  −10.5363;  Multimodal
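For readers who want to reproduce the benchmarks, a few of the Table A1 entries transcribed into Python (F1, F9, F10; straightforward transcriptions with the dimension left free):

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares; unimodal, minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    """F9: Rastrigin; highly multimodal, minimum 0 at the origin."""
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def f10_ackley(x):
    """F10: Ackley; multimodal, minimum 0 at the origin."""
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

x0 = np.zeros(30)   # the 30-dimensional setting used in Test 1
```

Each function takes a NumPy vector, so the same definitions serve the 30-, 50-, and 100-dimensional tests.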

4.2. Experiment Setting

Three test experiments were performed using the proposed CEFA method, and the obtained numerical solutions were compared with those from FA [21], CE [45], GA [11], PSO [12], SSA [29], BOA [33], and the Hybrid Firefly Algorithm (HFA) [10] on the benchmark functions. Further information on the experiments is shown in Table 1. For these experiments, the algorithms were coded in MATLAB R2018b, running on a PC with an Intel Core i7-8700 (Gainesville, FL, USA), 3.19 GHz CPU, and 16 GB of RAM.
Table 1

Information about the three test experiments.

Name     Functions   Dimension   Comparisons
Test 1   F1–F23      2–30        FA, CE, GA, PSO, SSA, BOA, HFA, CEFA
Test 2   F1–F13      50          GA, PSO, SSA, BOA, HFA, CEFA
Test 3   F1–F13      100         GA, PSO, SSA, BOA, HFA, CEFA
Test experimental conditions and settings: (1) The population size of the FA operator in CEFA was set to 60 for Test 1 and 100 for Tests 2 and 3, while the sample size of the CE operator was 98. The maximum number of iterations of the FA operator in CEFA was 50, while that of the CE operator was 30 for Test 1 and 50 for Tests 2 and 3. (2) The population sizes of the other algorithms used for comparison were 100, and their maximum number of iterations was 1500 for Test 1 and 2500 for Tests 2 and 3. (3) All other parameters of each algorithm were set the same as in the original references. This experimental setup ensures fairness of comparison because the numbers of function evaluations (NFEs) were the same for all algorithms within the same test. Since all of these intelligent methods rely on stochastic sampling, 30 independent runs were carried out for each method on each test function in order to statistically evaluate the proposed hybrid algorithm. The average value and standard deviation of the best approximate solution in the last iteration are used to compare the overall performance of the algorithms.

4.3. Results and Comparisons

The results of Test 1 are shown in Table 2, with the winner (best value) identified in bold in the original typeset table. Among the results, the average value was used to evaluate the overall quality of the solutions, reflecting the average solution accuracy of the algorithm, and the standard deviation was used to evaluate the stability of the algorithm. From Table 2, we can see the following: (1) The proposed algorithm outperforms FA, CE, GA, PSO, and SSA on almost all seven unimodal functions and six multimodal functions, and is superior to BOA and HFA on the majority of them. This indicates that CEFA performs well in terms of exploitation, exploration, and local optima avoidance. (2) CEFA provides very competitive results on most of the ten fixed-dimension multimodal functions and tends to outperform the other algorithms, although its advantages are less pronounced on these low-dimensional problems.
Table 2

Comparison of the optimization results obtained in Test 1 (dimension 2–30).

Fun.  Meas.   FA           CE           GA           PSO          SSA          BOA          HFA          CEFA
F1    Aver.   1.23×10−03   5.45×10−01   1.10×10−09   3.18×10−23   5.92×10−09   3.09×10−16   1.64×10−63   3.04×10−68
      Stdev.  4.35×10−03   6.72×10−02   3.48×10−09   8.40×10−23   8.80×10−10   1.40×10−17   1.91×10−64   1.58×10−68
F2    Aver.   4.36×10−02   6.00×10−01   3.84×10−05   1.76×10−15   5.24×10−06   2.26×10−13   1.57×10−32   4.18×10−33
      Stdev.  4.80×10−02   6.86×10−02   1.22×10−04   2.33×10−15   6.71×10−07   9.69×10−15   1.55×10−33   1.14×10−33
F3    Aver.   8.59×10+01   5.19×10+02   2.39×10−02   3.48×10+00   7.03×10−10   3.27×10−16   5.02×10−18   1.08×10+02
      Stdev.  4.44×10+01   1.24×10+02   7.74×10−02   2.56×10+00   2.17×10−10   1.19×10−17   6.98×10−18   9.61×10+01
F4    Aver.   8.45×10−01   1.07×10+00   1.96×10−01   2.63×10−01   1.07×10−05   2.51×10−13   3.51×10−14   4.53×10−02
      Stdev.  1.01×10+00   8.09×10−02   6.15×10−01   1.12×10−01   1.86×10−06   1.34×10−14   1.31×10−13   1.77×10−01
F5    Aver.   3.85×10+01   3.89×10+01   7.45×10−01   3.81×10+01   3.37×10+01   2.89×10+01   6.02×10+00   2.73×10+01
      Stdev.  1.27×10+01   1.25×10+00   6.90×10+00   2.69×10+01   6.96×10+01   3.01×10−02   2.40×10+00   2.22×10−01
F6    Aver.   7.08×10−04   5.69×10−01   1.14×10−09   2.45×10−23   4.48×10−10   4.93×10+00   0            0
      Stdev.  3.05×10−03   9.65×10−02   3.65×10−09   6.62×10−23   1.60×10−10   6.58×10−01   0            0
F7    Aver.   4.61×10−02   1.20×10−03   4.26×10−02   3.20×10−03   1.32×10−03   2.74×10−04   5.55×10−04   3.09×10−03
      Stdev.  1.88×10−02   2.94×10−04   1.32×10−01   1.19×10−03   9.73×10−04   9.29×10−05   1.65×10−04   7.28×10−04
F8    Aver.   −4.08×10+03  −4.39×10+03  −1.07×10+03  −6.76×10+03  −2.96×10+03  −4.39×10+03  −1.04×10+04  −5.21×10+03
      Stdev.  2.53×10+02   3.47×10+02   3.25×10+03   7.70×10+02   2.25×10+02   3.04×10+02   5.77×10+02   2.06×10+03
F9    Aver.   1.49×10+02   1.57×10+02   1.99×10−01   3.28×10+01   1.30×10+01   5.69×10−15   2.47×10+01   5.44×10+00
      Stdev.  1.17×10+01   8.45×10+00   9.38×10−01   1.09×10+01   5.82×10+00   1.80×10−14   5.94×10+00   2.27×10+00
F10   Aver.   4.44×10−15   3.64×10−01   7.97×10−06   6.98×10−13   2.20×10+00   1.92×10−13   6.93×10−15   4.44×10−15
      Stdev.  0            2.80×10−02   2.43×10−05   1.14×10−12   7.19×10−01   4.17×10−14   1.66×10−15   0
F11   Aver.   2.84×10−03   7.13×10−01   9.86×10−05   1.12×10−02   3.04×10−01   0            0            0
      Stdev.  1.32×10−03   4.16×10−02   9.86×10−04   1.25×10−02   1.58×10−01   0            0            0
F12   Aver.   5.50×10−05   5.51×10−03   4.15×10−03   2.03×10−26   1.04×10−01   3.51×10−01   1.57×10−32   1.57×10−32
      Stdev.  7.78×10−05   8.54×10−04   2.52×10−02   5.27×10−26   3.20×10−01   9.59×10−02   3.16×10−02   5.57×10−48
F13   Aver.   5.46×10−03   6.17×10−02   4.39×10−04   1.10×10−03   7.32×10−04   1.98×10+00   1.35×10−32   1.35×10−32
      Stdev.  7.45×10−03   9.98×10−03   2.16×10−03   3.35×10−03   2.79×10−03   3.36×10−01   5.57×10−48   5.57×10−48
F14   Aver.   1.0037       1.0970       0.3948       1.3280       0.9980       0.9983       0.9980       1.2470
      Stdev.  3.12×10−02   5.42×10−01   1.56×10+00   9.47×10−01   1.51×10−16   1.34×10−03   0            9.44×10−01
F15   Aver.   6.89×10−04   3.07×10−04   3.83×10−04   3.69×10−04   6.94×10−04   3.18×10−04   3.07×10−04   7.31×10−04
      Stdev.  1.73×10−04   3.30×10−10   2.05×10−03   2.32×10−04   3.64×10−04   8.14×10−06   7.67×10−20   2.37×10−05
F16   Aver.   −1.0316      −1.0316      −0.1032      −1.0316      −1.0316      −1.0316      −1.0316      −1.0316
      Stdev.  6.78×10−16   6.20×10−07   3.11×10−01   6.78×10−16   1.04×10−15   5.73×10−06   6.78×10−16   6.78×10−16
F17   Aver.   0.3979       0.3979       0.3979       0.4665       0.3979       0.3979       0.3979       0.3979
      Stdev.  0            9.80×10−06   1.20×10−01   1.27×10−01   2.63×10−15   9.97×10−05   0            0
F18   Aver.   3.9000       6.4068       3.0000       3.0000       3.0000       3.0020       3.0000       3.9000
      Stdev.  4.93×10+00   1.09×10+01   1.245×10−10  1.31×10−15   3.80×10−14   1.41×10−03   1.76×10−15   4.93×10+00
F19   Aver.   −3.8628      −3.8593      −0.3863      −3.7727      −3.8628      −3.8619      −3.8628      −3.8064
      Stdev.  2.71×10−15   1.20×10−02   1.16×10+00   6.63×10−02   2.84×10−15   1.17×10−03   2.71×10−15   1.96×10−01
F20   Aver.   −3.2784      −3.2863      −0.3251      −2.3324      −3.2190      −3.1088      −3.27        −3.2900
      Stdev.  5.83×10−02   5.54×10−02   9.80×10−01   3.16×10−01   4.11×10−02   7.21×10−02   5.92×10−02   5.33×10−02
F21   Aver.   −10.1532     −6.1882      −0.638       −2.3449      −9.0573      −9.1254      −10.1532     −6.7096
      Stdev.  6.63×10−15   3.77×10+00   2.18×10+00   9.81×10−01   2.27×10+00   9.23×10−01   1.90×10+00   3.75×10+00
F22   Aver.   −9.5164      −10.1479     −0.7815      −2.2815      −9.8742      −9.7991      −10.4029     −10.4029
      Stdev.  2.58×10+00   1.40×10+00   2.57×10+00   9.73×10−01   1.61×10+00   5.03×10−01   1.75×10−15   1.65×10−15
F23   Aver.   −10.3130     −10.5364     −0.8559      −2.3258      −9.919       −10.0764     −10.5364     −10.5364
      Stdev.  2.88×10+00   2.22×10−09   2.76×10+00   9.13×10−01   1.91×10+00   2.96×10−01   1.62×10−15   1.81×10−15
The progress of the average best value over 30 runs for the benchmark functions F1, F2, F6, F10, F12, and F13 is shown in Figure 2; it shows that the proposed CEFA tends to find the global optimum significantly faster than other algorithms and has a higher convergence rate. This is due to the employed co-evolutionary mechanisms adopted between CE and FA to place emphasis on the local search and exploitation as the iteration number increases, which highly accelerate the convergence towards the optimum in the final steps of the iterations.
Figure 2

Convergence of algorithms on some of the benchmark functions in Test 1.

Tests 2 and 3 were intended to further explore the advantages of the CEFA algorithm in solving large-scale optimization problems. The test results are shown in Table 3 and Table 4. Both show that the proposed algorithm outperforms GA, PSO, and SSA on all test problems except one, where the difference from GA or PSO is slight, and that it provides very competitive results compared to BOA and HFA on the majority of the multimodal functions. The superior performance of the proposed method on large-scale optimization problems is attributed to a good balance between exploration and exploitation, which enhances CEFA's ability to focus on the high-performance areas of the search space.
Table 3

Comparison of the optimization results obtained in Test 2 (dimension 50).

F     Meas.   GA           PSO          SSA          BOA          HFA          CEFA
F1    Aver.   4.92×10−09   5.32×10−19   4.68×10−09   2.28×10−18   3.24×10−106  2.05×10−65
      Stdev.  1.91×10−08   8.24×10−19   7.90×10−10   8.51×10−20   1.64×10−59   7.97×10−66
F2    Aver.   1.16×10−02   1.96×10−12   4.58×10−06   3.47×10+20   4.08×10−54   1.25×10−31
      Stdev.  5.01×10−02   4.69×10−12   9.23×10−07   1.90×10+21   3.36×10−55   2.66×10−32
F3    Aver.   2.28×10−01   1.58×10+02   5.06×10−10   2.33×10−18   3.38×10−09   5.61×10+02
      Stdev.  7.19×10−01   5.26×10+01   1.55×10−10   7.58×10−20   2.92×10−09   2.98×10+02
F4    Aver.   2.34×10−01   2.48×10+00   1.03×10−05   1.98×10−15   1.43×10−02   1.91×10+00
      Stdev.  7.34×10−01   4.73×10−01   1.58×10−06   5.10×10−17   1.86×10−02   1.89×10+00
F5    Aver.   2.07×10+00   7.89×10+01   6.51×10+01   4.89×10+01   2.55×10+01   3.92×10+01
      Stdev.  1.14×10+01   3.40×10+01   6.03×10+01   3.00×10−02   2.27×10+01   5.17×10+00
F6    Aver.   1.28×10−08   5.97×10−19   3.36×10−10   9.52×10+00   2.47×10−33   0
      Stdev.  5.90×10−08   1.20×10−18   1.01×10−10   7.24×10−01   5.63×10−33   0
F7    Aver.   1.39×10−01   8.66×10−03   9.23×10−04   1.79×10−04   1.59×10−03   3.76×10−03
      Stdev.  4.26×10−01   2.33×10−03   8.29×10−04   6.52×10−05   4.09×10−04   9.56×10−04
F8    Aver.   −1.67×10+03  −1.13×10+04  −3.01×10+03  −5.98×10+03  −1.61×10+04  −7.42×10+03
      Stdev.  5.05×10+03   1.22×10+03   2.30×10+02   4.52×10+02   7.73×10+02   4.08×10+03
F9    Aver.   1.99×10−01   5.91×10+01   1.23×10+01   0            6.83×10+01   1.34×10+01
      Stdev.  7.75×10−01   1.34×10+01   4.20×10+00   0            1.55×10+01   3.20×10+00
F10   Aver.   1.76×10−02   1.20×10−10   2.26×10−01   4.20×10−15   8.70×10−15   1.98×10−15
      Stdev.  1.24×10−01   1.54×10−10   6.37×10−01   9.01×10−16   2.17×10−15   1.79×10−16
F11   Aver.   1.48×10−04   7.55×10−03   2.80×10−01   0            1.15×10−03   0
      Stdev.  1.48×10−03   8.76×10−03   1.20×10−01   0            3.09×10−02   0
F12   Aver.   3.73×10−03   8.29×10−03   2.07×10−02   6.73×10−01   2.08×10−02   9.42×10−33
      Stdev.  1.48×10−02   2.70×10−02   7.89×10−02   1.04×10−01   1.03×10−01   2.78×10−48
F13   Aver.   7.69×10−04   3.30×10−03   1.65×10−11   4.06×10+00   2.56×10−03   1.35×10−32
      Stdev.  4.75×10−03   5.12×10−03   5.72×10−12   7.17×10−01   4.73×10−03   5.57×10−48
Table 4

Comparison of the optimization results obtained in Test 3 (dimension 100).

F     Meas.   GA           PSO          SSA          BOA          HFA          CEFA
F1    Aver.   1.02×10−02   1.40×10−05   4.65×10−09   2.34×10−18   6.00×10−44   1.93×10−44
      Stdev.  3.45×10−02   1.18×10−05   8.94×10−10   6.49×10−20   9.67×10−44   6.54×10−45
F2    Aver.   6.88×10−01   7.58×10−04   4.69×10−06   3.76×10+46   1.81×10−29   1.70×10−21
      Stdev.  2.22×10+00   2.11×10−03   1.02×10−06   8.32×10+46   1.52×10−29   2.88×10−22
F3    Aver.   2.95×10+00   7.67×10+03   4.73×10−10   2.39×10−18   3.03×10+03   7.50×10+03
      Stdev.  9.16×10+00   1.70×10+03   1.95×10−10   6.96×10−20   4.07×10+03   1.84×10+03
F4    Aver.   2.53×10−01   8.39×10+00   1.01×10−05   2.00×10−15   5.89×10+01   1.51×10+01
      Stdev.  7.75×10−01   7.99×10−01   1.51×10−06   6.00×10−17   5.13×10+00   4.30×10+00
F5    Aver.   1.65×10+01   2.38×10+02   1.70×10+02   9.89×10+01   1.34×10+02   1.04×10+02
      Stdev.  5.37×10+01   9.49×10+01   7.30×10+01   2.74×10−02   5.26×10+01   2.47×10+01
F6    Aver.   3.08×10−02   8.74×10−06   3.57×10−10   2.23×10+01   2.17×10−31   0
      Stdev.  1.06×10−01   8.32×10−06   1.39×10−10   9.59×10−01   2.40×10−31   0
F7    Aver.   3.57×10−01   6.37×10−02   8.10×10−04   1.79×10−04   1.26×10−02   9.36×10−03
      Stdev.  1.11×10+00   1.09×10−02   5.87×10−04   6.49×10−05   2.91×10−03   1.46×10−03
F8    Aver.   −2.81×10+03  −2.10×10+04  −3.06×10+03  −8.52×10+03  −3.00×10+04  −9.24×10+03
      Stdev.  8.47×10+03   2.19×10+03   3.55×10+02   7.06×10+02   1.32×10+03   3.90×10+02
F9    Aver.   2.59×10+00   1.29×10+02   4.71×10+01   0            2.28×10+02   3.91×10+01
      Stdev.  8.08×10+00   2.03×10+01   1.43×10+01   0            4.64×10+01   5.09×10+00
F10   Aver.   1.12×10−01   1.54×10−02   2.66×10−01   4.44×10−15   2.43×10−01   4.44×10−15
      Stdev.  3.44×10−01   6.60×10−02   6.26×10−01   5.32×10−16   5.12×10−01   4.01×10−16
F11   Aver.   2.30×10−04   7.21×10−03   3.02×10−01   0            3.37×10−03   0
      Stdev.  7.63×10−04   1.39×10−02   1.03×10−01   0            5.62×10−03   0
F12   Aver.   2.18×10−03   1.66×10−02   6.22×10−02   9.51×10−01   7.61×10−02   2.92×10−04
      Stdev.  8.86×10−03   3.03×10−02   1.51×10−01   7.96×10−02   1.16×10−01   1.60×10−03
F13   Aver.   2.71×10−03   7.72×10−03   7.32×10−04   9.98×10+00   9.08×10−02   1.35×10−32
      Stdev.  9.66×10−03   1.04×10−02   2.79×10−03   6.30×10−03   3.28×10−01   5.57×10−48
In addition, the good convergence speed of the proposed CEFA algorithm when solving large-scale optimization problems can be seen from Figure 3 and Figure 4, in which the same six functions, F1, F2, F6, F10, F12, and F13, were selected from the benchmark functions for comparison. From these, we can see that the local optima avoidance of the algorithm is satisfactory, since it escapes the local optima and approximates the global optima on the majority of the multimodal test functions. These results reaffirm that the operators of CEFA appropriately balance exploration and exploitation to handle challenging, high-dimensional search spaces.
Figure 3

Convergence of algorithms on some of the benchmark functions in Test 2.

Figure 4

Convergence of algorithms on some of the benchmark functions in Test 3.

5. Discussion

5.1. Advantage Analysis of CEFA

The main reasons for the superior performance of the proposed hybrid meta-heuristic algorithm based on CE and FA in solving complex numerical optimization problems may be summarized as follows: (1) CE is a global stochastic optimization method based on Monte Carlo technology, with the advantages of randomness, adaptability, and robustness; this gives the FA population in the hybrid algorithm good diversity, so that the algorithm can effectively overcome its tendency to fall into a local optimum and improve its global optimization ability. (2) FA, mimicking the flashing mechanism of fireflies in nature, has the advantage of fast convergence; through co-evolution, CEFA uses the superior individuals obtained by the FA operator to update the probability distribution parameters of the CE operator during the iterative process, which improves the convergence rate of the CE operator. (3) By introducing the co-evolutionary technique to collaboratively update the FA population and the probability distribution parameters in CE, CEFA obtains a good balance between exploration and exploitation, and performs excellently in terms of exploitation, exploration, and local optima avoidance. (4) The proposed CEFA can effectively solve complex high-dimensional optimization problems, thanks to the superior performance of CE on such problems.

5.2. Efficiency Analysis of Co-Evolution

The proposed hybrid meta-heuristic algorithm CEFA employs a co-evolutionary technique to achieve a good balance between exploration and exploitation. Its application can be summarized in three aspects: (1) The CE operator and the FA operator collaboratively update the optimal solution and optimal value. (2) The initial probability distribution parameters of the CE operator are updated in each iteration from the population of the FA operator. (3) The result of each iteration of the CE operator updates the current population of the FA operator to obtain the best population. Figure 5 shows the specific process of co-evolution when the hybrid algorithm is used to solve F1 and F9, selected from the benchmark functions, where "o" marks an optimal function value updated by the FA operator and "." marks one updated by the CE operator. This demonstrates that the co-evolutionary technique is well implemented in the proposed method and that the optimal function value is collaboratively updated by the two operators, FA and CE, during the iterative process.
Figure 5

Efficiency analysis of co-evolution: (a,c) two-dimensional versions of F1 and F9; (b,d) FA and CE co-update the current best in CEFA’s iterative process.

5.3. Parameter Analysis of CEFA

In the proposed hybrid meta-heuristic algorithm, the numbers of iterations N1 and N2 of the CE and FA operators are two key parameters that affect its performance in solving numerical optimization problems. To this end, this paper took F1 (dimension d = 30) as an example and experimentally explored the influence of their different combinations on the optimization results. The specific experiment was set up as follows: the number of iterations N1 of the CE operator was set to 1, 5, 10, 30, 50, 100, 200, or 300, while the number of iterations N2 of the FA operator took values of 30, 50, 100, 200, 500, or 1000; all other parameters were the same as before. The results were averaged over 30 runs, and the average optimal function value and the time consumption are reported in Table 5.
Table 5

Experimental results of different numbers of iterations for FA and CE operators in CEFA.

N1    N2      30           50           100          200          500          1000
1     F1min   6.59×10+00   8.94×10−02   3.32×10−04   6.03×10−07   4.95×10−15   3.62×10−28
      T       0.01         0.02         0.05         0.10         0.24         0.48
5     F1min   8.49×10−11   6.98×10−21   3.08×10−45   4.43×10−95   0            0
      T       0.03         0.04         0.08         0.15         0.34         0.76
10    F1min   7.16×10−20   2.80×10−36   4.30×10−76   0            0            0
      T       0.04         0.05         0.11         0.31         0.76         2.10
30    F1min   8.83×10−40   6.80×10−69   0            0            0            0
      T       0.14         0.21         0.42         0.79         2.16         4.70
50    F1min   6.84×10−51   1.32×10−87   0            0            0            0
      T       0.24         0.31         0.57         1.15         3.43         6.35
100   F1min   8.11×10−68   0            0            0            0            0
      T       0.35         0.64         1.19         2.19         6.57         12.03
200   F1min   3.10×10−86   0            0            0            0            0
      T       0.54         1.07         2.10         4.61         12.75        24.10
300   F1min   8.97×10−98   0            0            0            0            0
      T       1.13         1.48         3.01         6.80         18.56        38.75
Table 5 shows that, for a specific optimization problem, the hybrid algorithm can adjust the numbers of iterations N1 and N2 of the two operators to achieve higher accuracy. The values of N1 and N2 are determined by the characteristics and complexity of the given optimization problem, and they generally lie between 30 and 100.
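The trade-off in Table 5 (more CE iterations buy accuracy at the price of runtime) can be reproduced in miniature with a self-contained CE loop. This is a hypothetical stand-in for the paper's setup: the sample size 98 matches Section 4.2, but the iteration schedule and every other value are illustrative.

```python
import numpy as np
import time

def ce_step(f, mu, sigma, n_samples, n_elite, rng, smoothing=0.7):
    """One CE iteration: sample, select elites, smooth-update (mu, sigma)."""
    X = rng.normal(mu, sigma, size=(n_samples, mu.size))
    s = np.apply_along_axis(f, 1, X)
    elite = X[np.argsort(s)[:n_elite]]
    mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
    sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return mu, sigma, float(s.min())

f1 = lambda x: float(np.sum(x ** 2))    # F1 with d = 30, as in Table 5
results = {}
for n1 in (1, 5, 10):                   # stand-in for the CE budget N1
    rng = np.random.default_rng(0)
    mu, sigma = np.zeros(30), np.full(30, 5.0)
    t0, best = time.perf_counter(), np.inf
    for _ in range(30 * n1):            # total CE iterations grow with N1
        mu, sigma, m = ce_step(f1, mu, sigma, 98, 10, rng)
        best = min(best, m)
    results[n1] = (best, time.perf_counter() - t0)
```

Printing `results` shows the same qualitative pattern as Table 5: the best F1 value shrinks rapidly as the CE budget grows, while the elapsed time rises roughly linearly.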

5.4. Performance of CEFA for High-Dimensional Function Optimization Problems

In order to further explore the influence of the search space dimension on the optimization performance and convergence rate of CEFA when solving high-dimensional function optimization problems, this paper selected the standard GA, PSO, SSA, BOA, and HFA as comparison objects on F1 from the benchmark functions. The dimension of the search space was increased from 10 to 200 in steps of 10. It can be seen from Figure 6 that the accuracy of the proposed CEFA is not greatly affected by the increase in the dimension of the search space, in clear contrast to GA, PSO, and SSA. BOA shares this advantage, but its solution accuracy is not as high as that of CEFA. Once the dimension of the search space exceeds 70 for F1, CEFA obtains more accurate results than HFA. This may provide a new and effective route for solving high-dimensional function optimization problems.
Figure 6

Comparison of optimization accuracy of different search space dimensions.

6. Conclusions

Global optimization problems are challenging to solve due to their nonlinearity and multimodality. In this paper, based on the firefly algorithm and the cross-entropy method, a novel hybrid meta-heuristic algorithm was constructed. In order to enhance the global search ability of the proposed method, the co-evolutionary technique was introduced to obtain an efficient balance between exploration and exploitation. Benchmark functions were employed to evaluate the performance of the proposed hybrid algorithm CEFA on numerical optimization problems. The results of the numerical experiments show that the new method provides very competitive results and possesses more powerful global search capacity, higher optimization precision, and stronger robustness. Furthermore, the new method exhibits excellent performance in solving high-dimensional function optimization problems. For future research, a discrete version of CEFA will be developed to solve combinatorial optimization problems.