Memetic Differential Evolution with an Improved Contraction Criterion.

Lei Peng1,2, Yanyun Zhang1, Guangming Dai1,2, Maocai Wang1,2.   

Abstract

Memetic algorithms with an appropriate trade-off between exploration and exploitation can obtain very good results in continuous optimization. In this paper, we present an improved memetic differential evolution algorithm for solving global optimization problems. The proposed approach, called memetic DE (MDE), hybridizes differential evolution (DE) with a local search (LS) operator and periodic reinitialization to balance exploration and exploitation. A new contraction criterion, based on an improved maximum distance in objective space, is proposed to decide when the local search starts. The proposed algorithm is compared with six well-known evolutionary algorithms on twenty-one benchmark functions, and the experimental results are analyzed with two kinds of nonparametric statistical tests. Moreover, sensitivity analyses for the parameters in MDE are performed. The experimental results demonstrate the competitive performance of the proposed method with respect to the six compared algorithms.


Year:  2017        PMID: 28473848      PMCID: PMC5394905          DOI: 10.1155/2017/1395025

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

In 1989, the term "memetic algorithms" (MAs) [1] was introduced for the first time. Over the last two decades, MAs have gradually become one of the fastest-growing areas of research in evolutionary computation. They combine various evolutionary algorithms (EAs) with different LS methods to balance exploration and exploitation. Existing examples of memetic algorithms are NM-BRO [2], MA-LSCh-CMA [3], LBBO [4], IMMA [5], and MPSO [6]. In the framework of MAs, LS operators perform further exploitation on the individuals generated by the common EA operations, which helps to enhance the EA's capacity for solving complicated problems. Differential evolution was first proposed by Storn and Price [7] in 1995 to solve global numerical optimization problems over continuous search spaces. It shares some similarities with other EAs: DE works with a population of solutions, called vectors; it uses recombination and mutation operators to generate new vectors; and, finally, it has a replacement process to discard the less fit vectors. DE represents solutions with real coding. Some of its differences with respect to other EAs are as follows: DE uses a special mutation operator based on the linear combination of three individuals, and it uses a uniform crossover operator. It has several attractive features: DE is relatively simple to implement and has been demonstrated to be very effective in a large number of cases. In the past few decades, DE has been successfully applied to many real-world problems, such as space trajectory design [8-10], hydrothermal optimization [11], underwater glider path planning [12], and the vehicle routing problem [13]. Despite its successful application to different classes of problems in different fields, DE was demonstrated to converge to a fixed point, a level set [10], or a hyperplane not containing the global optimum [14]. Furthermore, in some cases it was shown to have slow local convergence.
In order to overcome these shortcomings, some authors have proposed a hybridization of DE with some local search heuristics. dos Santos Coelho and Mariani [15] proposed a version of memetic DE which combines DE with the generator of chaos sequences and sequential quadratic programming technique (DEC-SQP). In this memetic algorithm, DE with chaos sequences is the global optimizer and SQP is applied to the best individual to find the local minimum. Noman and Iba [16] proposed an adaptive hill-climbing crossover-based local search operation for enhancing the performance of standard differential evolution (DEahcSPX). Muelas et al. [17] developed MDE-DC which combines DE with multiple trajectory search algorithm (MTS). Neri and Tirronen [18] proposed the scale factor local search differential evolution (SFLSDE). In SFLSDE, golden section search and unidimensional hill-climb local search are applied to detect an optimal value of the scale factor and generate a higher quality offspring. Wang et al. [19] proposed an adaptive MA framework called DE-LS. In DE-LS, self-adaptive differential evolution (SaDE) [20] is the global search method, while covariance matrix adaptation evolution strategy (CMA-ES) [21] and self-adaptive mixed distribution based univariate EDA (MUEDA) [22] are employed as the local search methods. Vasile et al. [10] proposed an inflationary differential evolution algorithm (IDEA), which hybridizes DE with the restarting procedure of Monotonic Basin Hopping (MBH), to solve space trajectory optimization problems. Minisci and Vasile [9] and Di Carlo et al. [8] proposed an adaptive version of inflationary differential evolution algorithm (AIDEA) and a multipopulation version of AIDEA (MP-AIDEA) which automatically adapt the values of four control parameters. Locatelli et al. [23] proposed a memetic differential evolution for disk-packing and sphere-packing problems. 
In this algorithm, two kinds of local searches (MINOS and SNOPT) are used to detect local minima. Asafuddoula et al. [24] proposed an adaptive hybrid DE algorithm (AH-DEa) with three features: the use of adaptive crossover rates drawn from a given set of discrete values, an adaptive crossover strategy at different stages of the evolution, and the inclusion of a local search strategy to further improve the best solution. Qin et al. [25] proposed an advanced SaDE, which incorporates two different local search chains (Lamarckian and Baldwinian) into SaDE to enhance its exploitation capability. Trivedi et al. [26] hybridized DE and GA to solve unit commitment scheduling problems, in which GA handles the binary unit commitment variables while DE optimizes the continuous power-dispatch-related variables. In the same year, Li et al. [27] proposed a new hybridization, named DEEP, based on the DE framework and the key features of CMA-ES, which generates a trial vector by first applying a DE/rand/1/bin strategy followed by an Evolution Path (EP) mutation from CMA-ES.

The focus of this paper is to optimally combine DE global search operators with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm to improve local search in continuous optimization. A new contraction criterion, based on the maximum distances in objective space and decision space, is proposed. When the contraction criterion is satisfied, BFGS starts from the best solution at the current generation. Furthermore, a restart mechanism is employed: if the best solution is not improved during the local search, the population is reinitialized to increase the chance of finding the global optimum. The paper is organized as follows: DE is briefly introduced in Section 2. The proposed DE algorithm with local search and reinitialization is presented in Section 3.
The design of the experiments, the results, and the corresponding discussions are included in Section 4. The last section, Section 5, is devoted to conclusions and the future work.

2. A Short Introduction to Differential Evolution

DE is a population-based stochastic parallel optimization method. Each vector (or individual) of the population at generation t is called the target vector, and it generates one offspring called the trial vector. For example, the ith vector x of the population generates one trial vector u. Trial vectors are generated by adding weighted difference vectors to the target vector; this process is referred to as the mutation operator, in which the target vector is mutated. A crossover step is then applied to produce an offspring, which is accepted only if it improves on the fitness of the parent individual. Many variants of standard DE have been proposed, using different learning strategies and/or recombination operations in the reproduction stage. A general DE variant may be written as DE/a/b/c, where "a" denotes the mutation strategy, "b" specifies the number of difference vectors used, and "c" specifies the crossover scheme, which may be binomial or exponential. DE/rand/1/exp is described in Algorithm 1.
Algorithm 1

DE with rand/1/exp.
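As a concrete illustration of the DE/rand/1/exp scheme in Algorithm 1, the following Python sketch implements one generation. This is not the paper's code: the population size and the control-parameter values F and CR are placeholders.

```python
import numpy as np

def de_rand_1_exp_generation(pop, fit, f, F=0.5, CR=0.8, rng=None):
    """One generation of DE/rand/1/exp on a minimization problem.

    pop: (M, D) array of target vectors; fit: (M,) array of their
    objective values; f: objective function. F and CR are placeholder
    control-parameter values.
    """
    rng = np.random.default_rng() if rng is None else rng
    M, D = pop.shape
    for i in range(M):
        # rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
        # mutually distinct and different from i
        r1, r2, r3 = rng.choice([j for j in range(M) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Exponential crossover: copy a contiguous block of components
        # from v, starting at a random position, while successive random
        # draws stay below CR (at least one component is always copied)
        u = pop[i].copy()
        j = int(rng.integers(D))
        copied = 0
        while True:
            u[j] = v[j]
            j = (j + 1) % D
            copied += 1
            if copied >= D or rng.random() >= CR:
                break
        # One-to-one selection: the trial replaces the target if not worse
        fu = f(u)
        if fu <= fit[i]:
            pop[i], fit[i] = u, fu
    return pop, fit
```

On a simple separable test function such as the sphere, repeating this generation drives the population toward the optimum.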

3. Proposed Algorithm

In this section, we describe four major operations of the proposed MDE algorithm in detail, including contraction criterion, BFGS search, reinitialization scheme, and boundary constraint handling. The detailed description of MDE is given in Algorithm 2.
Algorithm 2

Pseudocode of MDE.
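The overall flow of Algorithm 2 can be sketched as follows. This is an illustration assembled only from the textual description, and several details are assumptions: the DE operators are simplified to DE/rand/1/bin with fixed F and CR, the contraction test combines the two diversity measures with a logical OR, and only the uniform (C < Cmax) restart variant is shown. The paper's Algorithm 2 may differ in all of these respects.

```python
import numpy as np
from scipy.optimize import minimize

def mde(f, D, L, U, M=30, rho1_max=2.0, rho2_max=2.0,
        max_nfe=50000, rng=None):
    """Skeleton of the MDE main loop: DE global search, a diversity-based
    contraction check, BFGS local search from the current best, and a
    uniform restart when BFGS fails to improve the best individual."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.uniform(L, U, (M, D))
    fit = np.array([f(x) for x in pop])
    nfe = M
    best_x, best_f = pop[fit.argmin()].copy(), float(fit.min())
    while nfe < max_nfe:
        for i in range(M):  # one generation of (simplified) DE/rand/1/bin
            r1, r2, r3 = rng.choice([j for j in range(M) if j != i],
                                    size=3, replace=False)
            v = pop[r1] + 0.5 * (pop[r2] - pop[r3])
            mask = rng.random(D) < 0.8
            mask[rng.integers(D)] = True
            u = np.where(mask, v, pop[i])
            u = np.where((u < L) | (u > U), rng.uniform(L, U, D), u)  # repair
            fu = f(u); nfe += 1
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
        # Contraction check (assumed form): has the population collapsed
        # in objective space (rho1) or in decision space (rho2)?
        rho1 = float(fit.max() - fit.min())
        rho2 = max(float(np.linalg.norm(a - b)) for a in pop for b in pop)
        if rho1 < rho1_max or rho2 < rho2_max:
            b = int(fit.argmin())
            res = minimize(f, pop[b], method="BFGS")  # local search
            nfe += res.nfev
            if res.fun < fit[b]:
                pop[b], fit[b] = res.x, res.fun
            else:  # restart: reinitialize the whole population uniformly
                pop = rng.uniform(L, U, (M, D))
                fit = np.array([f(x) for x in pop]); nfe += M
        if fit.min() < best_f:
            best_f = float(fit.min()); best_x = pop[fit.argmin()].copy()
    return best_x, best_f
```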

3.1. Contraction Criterion

In order to design an effective and efficient hybrid algorithm for global optimization, we need to exploit both the exploration capabilities of the EA and the exploitation capabilities of the LS and combine them in a well-balanced manner. To incorporate BFGS into DE successfully, a triggering condition, called the contraction criterion, is needed to decide when the local search should start. There are several ways to define a contraction criterion. Qin and Suganthan [20] apply the local search method after a fixed number of generations (every 200 generations). Sun et al. [5] start the LS if the promising solution is not updated for t consecutive generations. Simon et al. [4] use the minimum fitness in the objective space as the contraction criterion. Di Carlo et al. [8-10] perform the LS when the maximum distance in decision space falls below a given threshold. In MDE, we propose a new contraction criterion that combines two measures: (a) ρ1, the improved maximum distance in objective space, and (b) ρ2, the maximum distance in decision space. The idea of ρ1 is derived from [28]; ρ1 is a measure of the diversity of the population in objective space. ρ2 is defined as the maximum pairwise distance between individuals in decision space, where ||·|| denotes the Euclidean distance; it is a measure of the diversity of the population in decision space.
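The two diversity measures can be illustrated as follows. Since the equations themselves are not reproduced in this excerpt, the exact formulas used here (a simple objective-value range for ρ1, the maximum pairwise Euclidean distance for ρ2, combined with a logical OR) are plausible assumptions, not the paper's definitions.

```python
import numpy as np

def contraction_reached(pop, fit, rho1_max, rho2_max):
    """Assumed form of the contraction test.

    rho1: spread of the population in objective space (here simply the
    range of fitness values; the paper's 'improved' measure from [28]
    may normalize this differently).
    rho2: maximum pairwise Euclidean distance in decision space.
    The local search is triggered once either diversity measure drops
    below its threshold (the OR-combination is also an assumption).
    """
    rho1 = float(fit.max() - fit.min())
    diffs = pop[:, None, :] - pop[None, :, :]  # all pairwise differences
    rho2 = float(np.sqrt((diffs ** 2).sum(axis=-1)).max())
    return rho1 < rho1_max or rho2 < rho2_max
```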

3.2. BFGS Search

In MDE, the local search exploits the better solutions obtained by the global search to update the population, and thus enhances MDE's ability to find the best solution. We use the BFGS algorithm as the local search method. BFGS is a quasi-Newton method: it does not need the exact Hessian matrix but approximates it from successive gradient evaluations. BFGS is considered the most effective and popular quasi-Newton method and has been shown to perform well even on nonsmooth optimization problems. The details can be found in [29].
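With SciPy, invoking BFGS from the best individual might look like the following sketch. The iteration limit and the use of finite-difference gradients are assumptions, since the paper does not report its BFGS settings.

```python
import numpy as np
from scipy.optimize import minimize

def bfgs_refine(f, x_best, max_iter=100):
    """Refine the best individual with BFGS via SciPy. BFGS builds an
    approximation of the (inverse) Hessian from successive gradients;
    with no analytic gradient supplied, SciPy falls back to finite
    differences. max_iter = 100 is an assumed setting."""
    res = minimize(f, x_best, method="BFGS", options={"maxiter": max_iter})
    return res.x, float(res.fun)
```

For example, starting from a rough DE solution of a convex quadratic, BFGS converges to the minimizer in a handful of iterations.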

3.3. Reinitialization Scheme

If the best solution has not been improved after the local search, a reinitialization of the whole population is used to give the algorithm more opportunities to find the global optimum. Simon et al. [4] proposed a partial reinitialization of the population: every 20 generations, the algorithm selects the best M individuals from a temporary population of 2M + 2 individuals as the reinitialization pool. Sun et al. [5] chose the individuals with the largest distances from the local optima from a temporary population to form the next population. Zamuda et al. [30] proposed a population size reduction method as the reinitialization scheme. In MDE, we apply the simple reinitialization scheme described in Algorithm 3. If the result of the local search does not improve the best individual in the population, a reinitialization of the population is triggered. A counter C keeps track of the number of restarts. For C < Cmax, where Cmax is user-defined, M individuals are generated randomly in the search space, drawing samples from a uniform distribution. For C ≥ Cmax, 2M/3 individuals are initialized randomly in the search space, while the rest are drawn from a normal distribution centered on the best individual with standard deviation (U − L)/50. Algorithm 3 summarises the reinitialization procedure: the function randreal draws samples from a uniform distribution, the function Gaussian draws samples from a normal distribution, and [L, U] are the lower and upper boundaries on x.
Algorithm 3

Pseudocode of reinitialization scheme.
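A sketch of the reinitialization scheme of Algorithm 3, following the description above. The clipping of Gaussian samples back into [L, U] is an added safeguard, not stated in the text.

```python
import numpy as np

def reinitialize(best, C, Cmax, M, L, U, rng=None):
    """Sketch of the reinitialization scheme of Algorithm 3.

    For C < Cmax, all M individuals are redrawn uniformly from [L, U].
    For C >= Cmax, 2M/3 individuals are uniform and the remainder are
    Gaussian samples centred on the best individual with standard
    deviation (U - L) / 50. Clipping the Gaussian samples back into
    [L, U] is an added safeguard, not part of the paper's description."""
    rng = np.random.default_rng() if rng is None else rng
    D = best.size
    if C < Cmax:
        return rng.uniform(L, U, (M, D))
    n_uniform = (2 * M) // 3
    uniform_part = rng.uniform(L, U, (n_uniform, D))
    gaussian_part = rng.normal(best, (U - L) / 50.0, (M - n_uniform, D))
    return np.vstack([uniform_part, np.clip(gaussian_part, L, U)])
```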

3.4. Boundary Constraint Handling

After mutation and crossover, each generated trial vector u undergoes a boundary constraint check. If some variables of u are outside the boundaries, a repair method is applied: each out-of-bound variable is replaced by randreal(L, U), where randreal(L, U) generates a random real number from [L, U].
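The repair rule just described amounts to resampling each violating component uniformly in [L, U]; a minimal sketch (the function name, vectorized form, and scalar bounds are illustrative choices, not the paper's code):

```python
import numpy as np

def repair(u, L, U, rng=None):
    """Boundary repair as described in the text: every component of the
    trial vector u outside [L, U] is replaced by a fresh uniform random
    value randreal(L, U)."""
    rng = np.random.default_rng() if rng is None else rng
    u = u.copy()
    out_of_bounds = (u < L) | (u > U)
    u[out_of_bounds] = rng.uniform(L, U, int(out_of_bounds.sum()))
    return u
```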

4. Experimental Results

In order to verify the performance of MDE, we select the 21 noise-free benchmark functions from the CEC2005 special session on real-parameter optimization (excluding the noisy functions F4, F17, F24, and F25), since MDE is not designed to handle noisy landscapes. The details of these functions can be found in [31]. We compare MDE with six peer algorithms: CLPSO [32], GL-25 [33], CMA-ES [21], LBBO [4], SFLSDE [18], and L-SHADE [34].

4.1. Experimental Setup

For each algorithm on each benchmark problem, we conduct 25 independent runs and limit each run to 10000 × D function evaluations, where D is the problem dimension (D = 10, 30, and 50). The performance of the algorithms is evaluated in terms of the function error value [31], defined as f(x) − f(x∗), where x∗ is the global optimum of the test function. The mean and the standard deviation of the function error values are recorded. The parameters of MDE are set as M = 30, ρ1,max = 2.0, ρ2,max = 2.0, Cmax = 3, CR ∼ N(0.8, 0.1), and F ∼ N(0.5, 0.1); the mutation and crossover strategies are the same as those in [24]. For the other six algorithms, we use the parameter settings from their original papers.

4.2. Performance Criteria

To analyze the results effectively, two nonparametric statistical tests, as similarly done in [35, 36], are used in the experiments: (i) Wilcoxon's signed-rank test at α = 0.05 is performed to test the statistical significance of the experimental results between two algorithms, in both single-problem and multiple-problem settings; (ii) Friedman's test is employed to obtain the average rankings of all the compared algorithms. Wilcoxon's signed-rank test on single problems is computed in Matlab, while the multiple-problem Wilcoxon test and the Friedman test are computed with the KEEL software [37].
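A multiple-problem comparison of the kind described here can be reproduced with SciPy's implementations of the Wilcoxon signed-rank and Friedman tests. The error vectors below are synthetic illustrations, not the paper's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Synthetic per-function mean errors for two algorithms over 21
# benchmark functions (illustrative numbers only, not the paper's data)
rng = np.random.default_rng(5)
err_mde = rng.random(21)
err_other = err_mde + 0.1  # the "other" algorithm is uniformly worse

# Multiple-problem Wilcoxon signed-rank test at alpha = 0.05
stat, p = wilcoxon(err_mde, err_other)
significant = p < 0.05

# Friedman's test needs three or more algorithms; add a third
# synthetic one to obtain average rankings across problems
err_third = err_mde + rng.normal(0.2, 0.05, 21)
chi2, p_friedman = friedmanchisquare(err_mde, err_other, err_third)
```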

4.3. Comparison between the Other Six Algorithms and MDE

Table 1 shows the results of MDE and the other six algorithms on the 10-dimensional benchmarks. MDE performs significantly better than CLPSO, GL-25, CMA-ES, LBBO, SFLSDE, and L-SHADE on 15, 16, 17, 7, 8, and 8 test functions, respectively, while CLPSO, GL-25, CMA-ES, LBBO, SFLSDE, and L-SHADE win on 4, 4, 3, 5, 8, and 8 test functions, respectively. MDE obtains results similar to those of the other six algorithms in 2, 1, 1, 9, 5, and 5 cases. Additionally, the results of the multiple-problem statistical analysis are shown in Table 4. MDE obtains higher R+ values than R− values in all cases, where R+ is the sum of ranks for the functions on which MDE outperformed the compared algorithm and R− is the sum of ranks for the opposite [36]. According to Wilcoxon's test at α = 0.05 and α = 0.1, there are significant differences in three cases (MDE versus CLPSO, MDE versus GL-25, and MDE versus CMA-ES), which means that in those cases MDE is significantly better than CLPSO, GL-25, and CMA-ES. In addition, Friedman's test is employed to evaluate the significant differences among all the compared algorithms. As shown in Figure 1, MDE obtains the second-best average ranking, while L-SHADE obtains the best average ranking on the 10-dimensional problems.
Table 1

Comparison of Mean Error and standard deviation between MDE and other six EAs over 25 independent runs on twenty-one 10-dimensional test functions.

Prob    CLPSO    GL-25    CMA-ES    LBBO    SFLSDE    L-SHADE    MDE
F013.32E + 00 ± 1.96E + 00−1.71E − 26 ± 4.40E − 26−3.17E − 27 ± 1.82E − 27−0.00E + 00 ± 0.00E + 00=6.50E − 09 ± 1.37E − 08−6.66E − 08 ± 1.05E − 07−0.00E + 00 ± 0.00E + 00
F027.42E − 02 ± 5.95E − 02−2.73E − 28 ± 4.79E − 28−6.17E − 27 ± 2.18E − 27−0.00E + 00 ± 0.00E + 00=0.00E + 00 ± 0.00E + 00=0.00E + 00 ± 0.00E + 00=0.00E + 00 ± 0.00E + 00
F033.54E + 05 ± 1.45E + 05−2.49E + 04 ± 1.12E + 04−3.94E − 23 ± 2.17E − 23−0.00E + 00 ± 0.00E + 00=9.54E + 03 ± 8.05E + 03−0.00E + 00 ± 0.00E + 00=0.00E + 00 ± 0.00E + 00
F056.29E + 00 ± 5.49E + 00−5.21E − 05 ± 2.28E − 04+4.82E − 11 ± 6.54E − 12+3.06E − 03 ± 3.68E − 03+0.00E + 00 ± 0.00E + 00+0.00E + 00 ± 0.00E + 00+1.13E − 02 ± 8.32E − 03
F069.25E − 01 ± 7.47E − 01−2.30E + 00 ± 6.43E − 01−9.57E − 01 ± 1.74E + 00−1.53E − 06 ± 4.89E − 06=1.59E − 01 ± 7.97E − 01=0.00E + 00 ± 0.00E + 00=0.00E + 00 ± 0.00E + 00
F072.82E − 01 ± 9.68E − 02−1.07E − 01 ± 2.80E − 02−1.21E − 02 ± 1.16E − 02+1.06E − 01 ± 2.15E − 01=1.27E + 03 ± 1.70E − 01−3.84E − 03 ± 6.20E − 03+1.87E − 02 ± 1.75E − 02
F082.04E + 01 ± 8.71E − 02−2.04E + 01 ± 8.87E − 02−2.00E + 01 ± 7.49E − 02=2.00E + 01 ± 2.95E − 09=2.04E + 01 ± 9.14E − 02−2.01E + 01 ± 1.03E − 01−2.00E + 01 ± 0.00E + 00
F095.47E + 00 ± 1.61E + 00−6.83E + 00 ± 4.58E + 00−1.26E + 02 ± 4.21E + 01−9.80E − 12 ± 1.31E − 11+2.91E + 00 ± 2.27E + 00+8.37E − 01 ± 7.90E − 01+4.54E + 00 ± 1.99E + 00
F108.53E + 00 ± 1.70E + 00−1.36E + 01 ± 9.53E + 00−7.51E + 01 ± 1.01E + 02−1.32E + 01 ± 3.76E + 00−7.41E + 00 ± 2.95E + 00−2.47E + 00 ± 1.04E + 00+4.10E + 00 ± 1.16E + 00
F115.01E + 00 ± 7.30E − 01+3.20E + 00 ± 9.45E − 01+2.42E + 00 ± 1.30E + 00+5.04E + 00 ± 1.17E + 00+1.77E + 00 ± 1.36E + 00+4.12E + 00 ± 7.50E − 01+6.69E + 00 ± 1.56E + 00
F121.38E + 02 ± 9.66E + 01−1.04E + 01 ± 8.81E − 01−6.92E + 03 ± 1.21E + 04−2.59E − 02 ± 1.20E − 01−1.15E + 02 ± 3.33E + 02−2.40E + 00 ± 4.36E + 00−0.00E + 00 ± 0.00E + 00
F134.56E − 01 ± 7.95E − 02−9.68E − 01 ± 4.21E − 01−1.02E + 00 ± 4.47E − 01−4.16E − 01 ± 1.52E − 01=3.66E − 01 ± 1.02E − 01+2.31E − 01 ± 3.67E − 02+4.05E − 01 ± 1.26E − 01
F143.15E + 00 ± 2.24E − 01+2.86E + 00 ± 4.76E − 01+4.87E + 00 ± 1.87E − 01−3.33E + 00 ± 3.55E − 01+2.86E + 00 ± 3.29E − 01+2.40E + 00 ± 2.93E − 01+3.77E + 00 ± 3.30E − 01
F159.30E + 00 ± 2.09E + 01+3.79E + 02 ± 6.21E + 01−5.30E + 02 ± 2.76E + 02−1.70E − 12 ± 1.89E − 12+2.72E + 02 ± 1.53E + 02−1.78E + 02 ± 1.90E + 02−6.04E + 01 ± 7.57E + 01
F161.15E + 02 ± 1.04E + 01=9.69E + 01 ± 1.10E + 01+2.03E + 02 ± 2.23E + 02−1.27E + 02 ± 1.51E + 01−1.11E + 02 ± 1.10E + 01+9.26E + 01 ± 2.66E + 00+1.15E + 02 ± 1.17E + 01
F186.57E + 02 ± 1.18E + 02−7.88E + 02 ± 5.80E + 01−8.26E + 02 ± 3.85E + 02−7.60E + 02 ± 1.61E + 02−5.20E + 02 ± 2.55E + 02+6.00E + 02 ± 2.50E + 02=5.51E + 02 ± 2.36E + 02
F196.11E + 02 ± 1.39E + 02−8.00E + 02 ± 1.86E − 02−7.73E + 02 ± 3.44E + 02−7.71E + 02 ± 1.49E + 02−5.33E + 02 ± 2.21E + 02=6.40E + 02 ± 2.38E + 02−4.95E + 02 ± 2.38E + 02
F206.64E + 02 ± 1.52E + 02−7.80E + 02 ± 1.00E + 02−7.06E + 02 ± 2.86E + 02−7.11E + 02 ± 1.97E + 02−4.54E + 02 ± 2.15E + 02+6.20E + 02 ± 2.45E + 02−5.17E + 02 ± 2.37E + 02
F214.49E + 02 ± 1.11E + 02=8.00E + 02 ± 8.37E − 14−8.54E + 02 ± 2.64E + 02−5.35E + 02 ± 2.66E + 02−5.64E + 02 ± 1.89E + 02−4.88E + 02 ± 1.90E + 02−4.45E + 02 ± 1.83E + 02
F227.47E + 02 ± 9.64E + 01−6.72E + 02 ± 1.90E + 02=7.69E + 02 ± 2.53E + 01−6.51E + 02 ± 2.24E + 02=7.63E + 02 ± 2.85E + 01=7.45E + 02 ± 1.20E + 01=6.90E + 02 ± 1.64E + 02
F235.58E + 02 ± 6.83E + 01+9.74E + 02 ± 1.63E + 01−9.89E + 02 ± 2.59E + 02−6.40E + 02 ± 9.69E + 01=6.62E + 02 ± 1.87E + 02=6.44E + 02 ± 1.24E + 02−6.42E + 02 ± 1.46E + 02

−    15    16    17    7    8    8    /
+    4     4     3     5    8    8    /
=    2     1     1     9    5    5    /

∗ indicates that when six algorithms obtain the global optimum, the intermediate results are reported at NFEs = 10000. “−,” “+,” and “=” denote that the performance of this algorithm is, respectively, worse than, better than, and similar to MDE according to the Wilcoxon signed-rank test at α = 0.05.

Table 4

Results obtained by the Multiple-Problem Wilcoxon test for twenty-one test functions at D = 10.

MDE versus    R+       R−       p value     At α = 0.05    At α = 0.1
CLPSO         169.0    41.0     0.016042    +              +
GL-25         184.0    47.0     0.016472    +              +
CMA-ES        192.0    18.0     0.001088    +              +
LBBO          141.0    90.0     0.366155    =              =
SFLSDE        147.0    63.0     0.112595    =              =
L-SHADE       136.5    73.5     0.232226    =              =
Figure 1

Average rankings of the seven algorithms by Friedman test for all functions at D = 10.

Table 2 shows that MDE performs significantly better than the other six compared algorithms on the majority of the test functions. For example, compared with SFLSDE, MDE wins in 12 cases, loses in only 3 cases, and ties in 6 cases. MDE also obtains much better solutions than LBBO and CLPSO. As for L-SHADE, MDE wins in 8 cases and loses in 10 cases. According to the results of the multiple-problem statistical analysis shown in Table 5, MDE obtains higher R+ values than R− values in all cases. According to Wilcoxon's test at α = 0.05 and α = 0.1, there are significant differences in four cases (MDE versus CLPSO, MDE versus GL-25, MDE versus CMA-ES, and MDE versus SFLSDE), which means that in those cases MDE is significantly better than CLPSO, GL-25, CMA-ES, and SFLSDE. Table 5 also shows that L-SHADE and MDE obtain comparable results. Moreover, Figure 2 shows that, by Friedman's test, MDE obtains the best average ranking and L-SHADE the second best on the 30-dimensional problems.
Table 2

Comparison of Mean Error and standard deviation between MDE and other six EAs over 25 independent runs on twenty-one 30-dimensional test functions.

Prob    CLPSO    GL-25    CMA-ES    LBBO    SFLSDE    L-SHADE    MDE
F012.81E + 01 ± 6.52E + 00−7.05E − 24 ± 3.18E − 23=2.05E − 25 ± 6.01E − 26=0.00E + 00 ± 0.00E + 00+1.36E − 11 ± 1.43E − 11−3.52E − 05 ± 2.87E − 05−1.36E − 14 ± 2.48E − 14
F028.66E + 02 ± 1.75E + 02−5.46E + 01 ± 8.48E + 01−6.36E − 25 ± 2.18E − 25+2.11E − 08 ± 8.29E − 08−3.78E − 09 ± 4.84E − 09−6.82E − 15 ± 1.89E − 14+2.07E − 13 ± 8.51E − 14
F031.62E + 07 ± 4.99E + 06−2.13E + 06 ± 8.45E + 05−5.38E − 21 ± 1.66E − 21−3.97E + 02 ± 6.54E + 02−3.97E + 05 ± 2.40E + 05−2.86E − 13 ± 5.04E − 13−0.00E + 00 ± 0.00E + 00
F053.97E + 03 ± 4.11E + 02−2.48E + 03 ± 2.08E + 02−3.34E − 10 ± 7.85E − 11+2.69E + 03 ± 7.88E + 02−1.07E + 03 ± 6.35E + 02−1.32E − 11 ± 7.71E − 12+3.05E + 02 ± 9.81E + 01
F066.09E + 00 ± 5.44E + 00−2.17E + 01 ± 1.53E + 00−4.78E − 01 ± 1.32E + 00=2.15E − 01 ± 1.03E + 00−4.78E − 01 ± 1.32E + 00−1.00E − 13 ± 5.51E − 14−0.00E + 00 ± 0.00E + 00
F074.85E − 01 ± 8.90E − 02−1.37E − 02 ± 1.10E − 02−1.58E − 03 ± 4.92E − 03−9.77E − 02 ± 2.66E − 01−4.70E + 03 ± 1.73E + 00−0.00E + 00 ± 0.00E + 00+6.71E − 14 ± 2.83E − 14
F082.10E + 01 ± 5.95E − 02−2.10E + 01 ± 5.19E − 02−2.03E + 01 ± 5.70E − 01−2.00E + 01 ± 1.25E − 04−2.10E + 01 ± 4.53E − 02−2.03E + 01 ± 3.47E − 01−2.00E + 01 ± 0.00E + 00
F093.21E + 01 ± 5.26E + 00−4.84E + 01 ± 3.62E + 01−4.28E + 02 ± 1.13E + 02−2.48E − 07 ± 3.31E − 07+3.37E + 01 ± 7.65E + 00−4.10E + 01 ± 5.83E + 00−2.69E + 01 ± 1.11E + 01
F101.05E + 02 ± 1.35E + 01−1.74E + 02 ± 1.22E + 01−4.95E + 01 ± 1.29E + 01−1.73E + 02 ± 3.10E + 01−4.34E + 01 ± 1.27E + 01=6.53E + 00±1.60E + 00+4.07E + 01 ± 7.91E + 00
F112.56E + 01 ± 1.55E + 00+3.31E + 01 ± 7.73E + 00−7.43E + 00 ± 2.36E + 00+2.63E + 01 ± 2.75E + 00+1.72E + 01 ± 3.30E + 00+2.64E + 01 ± 1.33E + 00+2.80E + 01 ± 3.08E + 00
F121.78E + 04 ± 5.59E + 03−7.13E + 03 ± 5.05E + 03−1.11E + 04 ± 9.70E + 03−1.55E + 01 ± 3.49E + 01=9.72E + 03 ± 8.88E + 03−8.97E + 02 ± 1.27E + 03−1.93E + 02 ± 4.79E + 02
F132.12E + 00 ± 2.68E − 01−5.28E + 00 ± 4.08E + 00−3.54E + 00 ± 7.71E − 01−1.94E + 00 ± 4.23E − 01−2.02E + 00 ± 5.91E − 01−1.24E + 00 ± 1.06E − 01+1.55E + 00 ± 3.51E − 01
F141.28E + 01 ± 1.80E − 01+1.29E + 01 ± 4.63E − 01+1.46E + 01 ± 2.91E − 01−1.29E + 01 ± 2.63E − 01+1.27E + 01 ± 5.54E − 01+1.18E + 01 ± 3.43E − 01+1.38E + 01 ± 3.18E − 01
F156.15E + 01 ± 5.70E + 01+3.04E + 02 ± 2.00E + 01=4.09E + 02 ± 2.21E + 02=7.90E + 01 ± 1.37E + 02+3.04E + 02 ± 9.27E + 01=3.84E + 02 ± 4.73E + 01−3.23E + 02 ± 1.02E + 02
F161.71E + 02 ± 2.88E + 01−1.28E + 02 ± 9.13E + 01=4.32E + 02 ± 3.57E + 02−1.74E + 02 ± 3.52E + 01−1.48E + 02 ± 1.19E + 02=2.39E + 01 ± 2.60E + 00+9.89E + 01±2.61E + 01
F188.97E + 02 ± 7.87E + 01−9.07E + 02 ± 1.37E + 00=9.28E + 02 ± 1.18E + 02=9.16E + 02 ± 3.51E + 01−9.05E + 02 ± 1.36E + 00=9.03E + 02 ± 1.81E − 01=8.91E + 02 ± 4.10E + 01
F199.14E + 02 ± 1.73E + 00−9.06E + 02 ± 1.48E + 00=9.04E + 02 ± 2.83E − 01=9.21E + 02 ± 2.53E + 01−9.06E + 02 ± 1.83E + 00=9.03E + 02 ± 1.94E − 01=8.93E + 02 ± 4.18E + 01
F209.14E + 02 ± 1.23E + 00−9.07E + 02 ± 1.49E + 00=9.21E + 02 ± 8.59E + 01=9.24E + 02 ± 3.14E + 00−9.06E + 02 ± 4.11E + 00=9.03E + 02 ± 2.07E − 01=9.02E + 02 ± 3.18E + 01
F215.00E + 02 ± 5.42E − 13−5.00E + 02 ± 4.27E − 13−5.12E + 02 ± 6.00E + 01−5.00E + 02 ± 1.07E − 08−5.04E + 02 ± 1.75E + 01−5.00E + 02 ± 2.89E − 13−5.00E + 02 ± 2.54E − 13
F229.70E + 02 ± 1.17E + 01−9.27E + 02 ± 8.14E + 00+8.27E + 02 ± 1.82E + 01+1.06E + 03 ± 3.65E + 01−8.71E + 02 ± 1.88E + 01+8.42E + 02 ± 1.81E + 01+9.33E + 02 ± 1.40E + 01
F235.34E + 02 ± 1.15E − 04+5.34E + 02 ± 4.63E − 04+5.37E + 02 ± 4.41E + 00−5.90E + 02 ± 9.52E + 00−7.07E + 02 ± 1.57E + 02−5.34E + 02±5.77E − 13+5.34E + 02 ± 5.38E − 04

−    17    12    11    15    12    8     /
+    4     3     4     5     3     10    /
=    0     6     6     1     6     3     /

∗ indicates that when six algorithms obtain the global optimum, the intermediate results are reported at NFEs = 30000. “−,” “+,” and “=” denote that the performance of this algorithm is, respectively, worse than, better than, and similar to MDE according to the Wilcoxon signed-rank test at α = 0.05.

Table 5

Results obtained by the Multiple-Problem Wilcoxon test for twenty-one test functions at D = 30.

MDE versus    R+       R−       p value     At α = 0.05    At α = 0.1
CLPSO         200.0    31.0     0.003004    +              +
GL-25         198.5    32.5     0.003705    +              +
CMA-ES        177.0    54.0     0.031164    +              +
LBBO          160.5    70.5     0.113770    =              =
SFLSDE        184.0    47.0     0.016008    +              +
L-SHADE       116.5    114.5    0.95842     =              =
Figure 2

Average rankings of the seven algorithms by Friedman test for all functions at D = 30.

Table 3 shows that MDE also performs significantly better than the five compared algorithms other than L-SHADE on the majority of the test functions. For example, compared with LBBO, MDE wins in 15 cases, loses in only 3 cases, and ties in 3 cases. When compared with L-SHADE, MDE wins in 8 cases and loses in 12 cases. Table 6 shows that MDE obtains higher R+ values than R− values in five of the six cases. According to Wilcoxon's test at α = 0.05 and α = 0.1, there are significant differences in two cases (MDE versus GL-25 and MDE versus LBBO), which means that in those cases MDE is significantly better than GL-25 and LBBO. Figure 3 shows that L-SHADE obtains a better average ranking than the other six algorithms on the 50-dimensional problems by Friedman's test.
Table 3

Comparison of Mean Error and standard deviation between MDE and other six EAs over 25 independent runs on twenty-one 50-dimensional test functions.

Prob    CLPSO    GL-25    CMA-ES    LBBO    SFLSDE    L-SHADE    MDE
F010.00E + 00 ± 0.00E + 00=1.48E − 23 ± 5.81E − 23−4.46E − 25 ± 8.18E − 26−1.46E − 10 ± 1.23E − 10−4.77E − 14 ± 2.13E − 14−0.00E + 00 ± 0.00E + 00=0.00E + 00 ± 0.00E + 00
F028.94E + 03 ± 1.24E + 03−1.54E + 03 ± 1.10E + 03−6.51E − 24 ± 1.70E − 24+4.42E − 07 ± 2.80E − 07−1.24E + 00 ± 8.43E − 01−2.11E − 13 ± 5.81E − 14+2.73E − 13 ± 9.14E − 14
F034.35E + 07 ± 1.05E + 07−5.50E + 06 ± 2.00E + 06−4.26E − 20 ± 8.09E − 21−1.57E + 04 ± 8.05E + 03−1.48E + 06 ± 5.79E + 05−1.26E + 03 ± 1.44E + 03−0.00E + 00 ± 0.00E + 00
F059.45E + 03 ± 8.94E + 02−5.70E + 03 ± 5.04E + 02−1.40E − 01 ± 6.98E − 01+8.53E + 03 ± 1.58E + 03−3.33E + 03 ± 7.16E + 02−2.09E + 02 ± 2.02E + 02+1.19E + 03 ± 4.08E + 02
F061.43E + 01 ± 1.54E + 01−4.95E + 01 ± 2.14E + 01−4.78E − 01 ± 1.32E + 00=3.97E + 01 ± 7.63E + 01−2.72E + 01 ± 3.19E + 01−1.30E − 01 ± 3.71E − 01−0.00E + 00 ± 0.00E + 00
F073.58E − 01 ± 5.69E − 02−6.40E − 02 ± 5.49E − 02−1.58E − 03 ± 4.38E − 03−4.78E − 01 ± 3.03E − 01−6.20E + 03 ± 7.88E − 13−2.84E − 14 ± 0.00E + 00+2.17E − 13 ± 5.31E − 14
F082.11E + 01 ± 4.65E − 02−2.11E + 01 ± 4.06E − 02−2.05E + 01 ± 7.13E − 01−2.00E + 01 ± 1.16E − 02−2.11E + 01 ± 3.53E − 02−2.04E + 01 ± 4.43E − 01−2.00E + 01 ± 0.00E + 00
F090.00E + 00 ± 0.00E + 00=5.37E + 01 ± 1.23E + 01−6.87E + 02 ± 1.74E + 02−9.05E − 05 ± 2.41E − 04=4.38E − 01 ± 1.12E + 00−5.56E − 09 ± 9.78E − 09−0.00E + 00 ± 0.00E + 00
F102.62E + 02 ± 3.16E + 01−2.41E + 02 ± 1.47E + 02−9.56E + 01 ± 1.92E + 01=3.70E + 02 ± 4.89E + 01−9.15E + 01 ± 4.17E + 01+1.28E + 01 ± 2.30E + 00+9.78E + 01 ± 1.51E + 01
F115.09E + 01 ± 2.13E + 00+6.49E + 01 ± 1.08E + 01−1.15E + 01 ± 3.69E + 00+5.45E + 01 ± 2.84E + 00+4.22E + 01 ± 1.62E + 01+5.23E + 01 ± 2.12E + 00+5.90E + 01 ± 4.90E + 00
F126.73E + 04 ± 1.39E + 04−4.40E + 04 ± 2.00E + 04−3.03E + 04 ± 2.59E + 04−1.01E + 03 ± 1.26E + 03=2.75E + 04 ± 2.29E + 04−6.89E + 03 ± 6.05E + 03−1.35E + 03 ± 2.45E + 03
F133.75E + 00 ± 4.16E − 01−1.13E + 01 ± 8.61E + 00−6.00E + 00 ± 1.57E + 00−4.03E + 00 ± 4.28E − 01−8.90E + 00 ± 4.83E − 01−2.61E + 00 ± 1.45E − 01+2.96E + 00 ± 4.38E − 01
F142.25E + 01 ± 2.06E − 01+2.27E + 01 ± 1.81E − 01+2.44E + 01 ± 3.08E − 01−2.23E + 01 ± 3.79E − 01+2.28E + 01 ± 3.26E − 01+2.11E + 01 ± 5.41E − 01+2.36E + 01 ± 2.62E − 01
F158.87E + 01 ± 6.41E + 01+3.84E + 02 ± 5.52E + 01−3.97E + 02 ± 2.27E + 02=2.26E + 01 ± 7.66E + 01+2.60E + 02 ± 8.66E + 01+3.52E + 02 ± 8.72E + 01−3.20E + 02 ± 5.52E + 01
F162.30E + 02 ± 4.41E + 01−1.66E + 02 ± 9.88E + 01−2.87E + 02 ± 2.48E + 02−2.15E + 02 ± 2.50E + 01−8.52E + 01 ± 3.88E + 01+1.85E + 01 ± 1.34E + 00+1.35E + 02 ± 5.19E + 01
F189.46E + 02 ± 5.26E + 00+9.24E + 02 ± 1.58E + 00+9.12E + 02 ± 4.86E − 01+9.65E + 02 ± 2.06E + 01−9.17E + 02 ± 4.10E + 00+9.13E + 02 ± 9.02E − 01+9.56E + 02 ± 3.41E + 01
F199.44E + 02 ± 6.29E + 00+9.19E + 02 ± 2.49E + 01+9.12E + 02 ± 4.87E − 01+9.59E + 02 ± 1.33E + 01−9.18E + 02 ± 3.14E + 00+9.13E + 02 ± 1.91E + 00+9.52E + 02 ± 3.31E + 01
F209.44E + 02 ± 4.61E + 00+9.24E + 02 ± 1.90E + 00+9.12E + 02 ± 4.54E − 01+9.63E + 02 ± 2.03E + 01=9.17E + 02 ± 3.92E + 00+9.14E + 02 ± 1.74E + 00+9.58E + 02 ± 7.97E + 00
F215.00E + 02 ± 7.54E − 13−5.00E + 02 ± 2.11E − 11−5.68E + 02 ± 1.98E + 02−5.00E + 02 ± 1.99E − 08−1.01E + 03 ± 2.11E + 00−1.00E + 03 ± 1.41E + 00−5.00E + 02 ± 3.28E − 13
F221.00E + 03 ± 7.79E + 00−9.69E + 02 ± 6.73E + 00+8.58E + 02 ± 1.06E + 01+1.09E + 03 ± 2.52E + 01−8.98E + 02 ± 1.31E + 01+8.75E + 02 ± 3.13E + 00+9.93E + 02 ± 1.51E + 01
F235.39E + 02 ± 8.67E − 05+5.39E + 02 ± 2.75E − 01−5.86E + 02 ± 1.52E + 02−5.95E + 02 ± 7.69E + 00−1.01E + 03 ± 1.87E + 00−1.01E + 03 ± 1.41E + 00−5.39E + 02 ± 1.62E − 02

−    12    16    11    15    12    8     /
+    7     5     7     3     9     12    /
=    2     0     3     3     0     1     /

“−,” “+,” and “=” denote that the performance of this algorithm is, respectively, worse than, better than, and similar to MDE according to the Wilcoxon signed-rank test at α = 0.05.

Table 6

Results obtained by the Multiple-Problem Wilcoxon test for twenty-one test functions at D = 50.

MDE versus    R+       R−       p value     At α = 0.05    At α = 0.1
CLPSO         155.5    75.5     0.157413    =              =
GL-25         179.5    51.5     0.024970    +              +
CMA-ES        136.0    95.0     0.465445    =              =
LBBO          175.5    55.5     0.035480    +              +
SFLSDE        138.0    93.0     0.424043    =              =
L-SHADE       94.0     116.0    1           =              =
Figure 3

Average rankings of the seven algorithms by Friedman test for all functions at D = 50.

In general, according to the analysis above, we can conclude that MDE and L-SHADE achieve the best average rankings among the seven algorithms on the 21 benchmark problems for all three dimensions. The performance of MDE is comparable to that of L-SHADE on the 10- and 30-dimensional problems, while L-SHADE is better than MDE on the 50-dimensional problems because of its larger initial population and linear population size reduction.

4.4. Influence of Contraction Criterion

In the previous experiments, the recommended initial values ρ1,max = 2.0 and ρ2,max = 2.0 were used. In order to test the influence of different initial contraction criterion values on the performance of MDE, in this section MDE is tested with different initial ρ1,max and ρ2,max values. The initial values are set as ρ1,max ∈ {1.0, 2.0, 3.0} and ρ2,max ∈ {1.0, 2.0, 3.0}. All other parameters are kept unchanged as described in Section 4.1. Nine groups of experiments with different combinations of ρ1,max and ρ2,max are conducted, where ρ1.0,1.0 denotes ρ1,max = 1.0 and ρ2,max = 1.0. The statistical results by Friedman's test for all initial values are shown in Table 7.
Table 7

Average rankings of contraction criterion combinations by Friedman test at D = 10, D = 30, and D = 50.

Parameters    D = 10    D = 30    D = 50
ρ1.0,1.0      4.5714    4.4286    4.2619
ρ1.0,2.0      4.9286    4.8810    4.4524
ρ1.0,3.0      4.9524    4.3810    5.0714
ρ2.0,1.0      4.9762    4.9286    5.0000
ρ2.0,2.0      3.5714    4.2381    5.0476
ρ2.0,3.0      6.0000    5.6667    5.0238
ρ3.0,1.0      5.1429    5.3571    5.2381
ρ3.0,2.0      5.5714    5.5952    5.6905
ρ3.0,3.0      5.2857    5.5238    5.2143
From Table 7, MDE achieves the best average ranking with ρ2.0,2.0 among the nine groups on both the 10- and 30-dimensional test functions, while ρ1.0,1.0 is the better choice on the 50-dimensional test functions. In general, we can conclude that smaller contraction criterion values are preferable as the dimension of the test functions increases.
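This excerpt does not reproduce the exact formula of the improved maximum-distance criterion, so the following is only a hypothetical sketch of the underlying idea: local search is triggered once the population's spread in objective space, normalized by its scale, contracts below a threshold ρ. `contraction_reached` and the normalization are assumptions, not the paper's definition:

```python
import numpy as np

def contraction_reached(fitness, rho):
    """Hypothetical contraction check: start local search once the
    maximum distance between population members in objective space,
    normalized by the objective scale, drops below the threshold rho."""
    f = np.asarray(fitness, dtype=float)
    spread = f.max() - f.min()          # max pairwise distance in objective space
    scale = max(abs(f).max(), 1.0)      # guard against dividing by ~0
    return bool(spread / scale < rho)

print(contraction_reached([1.001, 1.003, 1.002, 1.0005], rho=0.01))  # True
print(contraction_reached([0.0, 10.0, 5.0], rho=0.01))               # False
```

Under such a scheme, a larger ρ makes the criterion fire earlier (more exploitation), which is consistent with the sensitivity analysis above.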

4.5. Influence of Parameter Cmax

This experiment tests the influence of the parameter Cmax in MDE. Friedman's test results are shown in Table 8, where the values of Cmax are set as Cmax ∈ {3, 5, 7, 9, 11, 13, 15}. All other parameters are kept unchanged as described in Section 4.1. In addition, all experiments are conducted for 25 independent runs on each function.
Table 8

Average rankings of Cmax by Friedman test at D = 10, D = 30, and D = 50.

Parameters    D = 10    D = 30    D = 50
Cmax = 3      3.4286    3.5238    3.6905
Cmax = 5      3.5476    3.3095    4.2381
Cmax = 7      3.9762    4.0476    3.8810
Cmax = 9      3.8810    4.3333    3.8095
Cmax = 11     4.5238    4.2381    4.5238
Cmax = 13     3.7143    4.1429    4.1190
Cmax = 15     4.9286    4.4048    3.7381
It can be seen from Table 8 that MDE with Cmax = 3 achieves a better average ranking than the other six settings at D = 10. At D = 30, Cmax = 5 is the best choice and Cmax = 3 the second best. On the 50-dimensional test functions, Cmax = 3 is again the best choice. Generally speaking, a small Cmax value such as 3 or 5 enhances the performance of the MDE algorithm.

4.6. Influence of Population Size M

To analyze the influence of the population size M, different values of M are tested in a set of experiments. Friedman's test results are shown in Table 9, where the values of M are set as {30,60,90,120,150}. All other parameters are kept unchanged as described in Section 4.1.
Table 9

Average rankings of M by Friedman test at D = 10, D = 30, and D = 50.

Parameters    D = 10    D = 30    D = 50
M = 30        2.6429    2.3095    3.1905
M = 60        2.7143    2.6429    2.7857
M = 90        2.8333    2.9762    2.3810
M = 120       2.9762    3.0238    3.0952
M = 150       3.8333    4.0476    3.5476
From Table 9, MDE with M = 30 ranks first at both D = 10 and D = 30, while MDE with M = 90 performs best at D = 50. From these results, it can be concluded that, as the dimension increases, a moderately larger population size M can enhance the search capability of MDE.

5. Conclusion

A memetic differential evolution algorithm, MDE, has been introduced in this paper. MDE uses a new contraction criterion to decide when the local search starts, and it combines global and local search operators with periodic reinitialization to improve performance. To evaluate MDE, 21 benchmark functions with different characteristics were chosen for testing. The results show that (i) MDE obtains better or at least comparable results compared with the other six algorithms; (ii) small contraction criterion values and a small Cmax enhance the quality of the final results; (iii) a larger population size benefits MDE as the dimension increases. Some preliminary experiments have also been performed to verify the effect of each parameter on the performance of MDE. In future work, MDE will be tested on real-world application problems. Moreover, we believe that other local search algorithms and an adaptive population size strategy could also be incorporated into MDE.
References (4 in total)

1.  Completely derandomized self-adaptation in evolution strategies.

Authors:  N Hansen; A Ostermeier
Journal:  Evol Comput       Date:  2001       Impact factor: 3.277

2.  Memetic algorithms for continuous optimisation based on local search chains.

Authors:  Daniel Molina; Manuel Lozano; Carlos García-Martínez; Francisco Herrera
Journal:  Evol Comput       Date:  2010       Impact factor: 3.277

3.  Differential Evolution with an Evolution Path: A DEEP Evolutionary Algorithm.

Authors:  Yuan-Long Li; Zhi-Hui Zhan; Yue-Jiao Gong; Wei-Neng Chen; Jun Zhang; Yun Li
Journal:  IEEE Trans Cybern       Date:  2014-10-09       Impact factor: 11.448

4.  An intelligent multi-restart memetic algorithm for box constrained global optimisation.

Authors:  J Sun; J M Garibaldi; N Krasnogor; Q Zhang
Journal:  Evol Comput       Date:  2012-03-12       Impact factor: 3.277

Cited by (2 in total)

1.  Information Literacy Assessment with a Modified Hybrid Differential Evolution with Model-Based Reinitialization.

Authors:  Yuan Wang; Hui Li; Zhenguo Ding
Journal:  Comput Intell Neurosci       Date:  2018-10-22

2.  Application of bio-inspired optimization algorithms in food processing.

Authors:  Tanmay Sarkar; Molla Salauddin; Alok Mukherjee; Mohammad Ali Shariati; Maksim Rebezov; Lyudmila Tretyak; Mirian Pateiro; José M Lorenzo
Journal:  Curr Res Food Sci       Date:  2022-02-16
