
Multiscale Quantum Harmonic Oscillator Algorithm for Multimodal Optimization.

Peng Wang1, Kun Cheng2,3, Yan Huang4,5,6, Bo Li2,3, Xinggui Ye2,3, Xiuhong Chen6.   

Abstract

This paper presents a variant of multiscale quantum harmonic oscillator algorithm for multimodal optimization named MQHOA-MMO. MQHOA-MMO has only two main iterative processes: quantum harmonic oscillator process and multiscale process. In the two iterations, MQHOA-MMO only does one thing: sampling according to the wave function at different scales. A set of benchmark test functions including some challenging functions are used to test the performance of MQHOA-MMO. Experimental results demonstrate good performance of MQHOA-MMO in solving multimodal function optimization problems. For the 12 test functions, all of the global peaks can be found without being trapped in a local optimum, and MQHOA-MMO converges within 10 iterations.

Year:  2018        PMID: 29861714      PMCID: PMC5971293          DOI: 10.1155/2018/8430175

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Many real-world optimization problems are multimodal, such as classification problems in machine learning [1] and inversion of teleseismic waves [2]. Multimodal optimization problems typically contain several high-quality global or local solutions, all of which have to be identified so that the most appropriate one can be chosen. Global optimization of a continuous multimodal function aims at finding its several global minima, or the most appropriate solution, without being trapped in a local optimum. When facing complex multimodal optimization problems, traditional optimization methods such as gradient descent, the quasi-Newton method, and the Nelder–Mead simplex method, which exploit local information effectively, are easily trapped in a local optimum. If a point-by-point classical optimization approach is used for this task, it must be applied several times, each time hoping to find a different optimal solution. There are two main reasons to find as many such optima as possible. First, an optimal solution that is currently favorable may not remain so in the future; knowing another optimal solution to the problem, users can simply switch to it when such a predicament occurs. Second, the sheer knowledge of multiple optimal solutions in the search space may provide useful insight into the properties of the problem's optimal solutions. Evolutionary algorithms (EAs) and particle swarm optimization (PSO) are used to tackle multimodal optimization problems. Owing to their population-based approach, EAs have a natural advantage over classical optimization techniques: they maintain a population of candidate solutions that is processed in every generation. If several distinct solutions can be preserved over these generations, multiple good solutions are obtained rather than only the single best one.
In recent years, there have been several attempts to improve EAs to deal with multimodal fitness landscapes. Niching methods are widely used in genetic algorithms (GA), differential evolution (DE), and other evolutionary algorithms for multimodal optimization [1, 3–16]. Similar to EAs, PSO is an iterative, population-based optimization technique. Its principle is that each particle has learning ability: it can learn from its own best position (pbest) and from the best position of its neighborhood (gbest). According to the learning approach of the particles, PSO can be divided into two models, the global model and the local model. In the local model, each particle learns from the best particle in its neighborhood, while in the global model every particle learns from the best particle in the whole population. To ensure that different particles in the population converge to different optima in the solution space, the choice of neighborhood topology is crucial. This property has led to the application of PSO to multimodal optimization problems in recent years [17, 18]. Owing to its ease of implementation and robust adaptability, PSO converges quickly, but once it gets stuck in a local optimum, it is very difficult to escape. To overcome this problem, quantum theory has been introduced into the PSO system. Quantum-behaved Particle Swarm Optimization (QPSO) is the quantum model of PSO, in which individual particles have quantum behavior [19, 20]. Instead of position and velocity, a wavefunction ψ(x, t) [21, 22] is used to describe the state of a particle in QPSO [23]. Although QPSO performs better in global optimization than standard PSO, it also suffers from premature convergence. A novel optimization algorithm named the multiscale quantum harmonic oscillator algorithm (MQHOA) was proposed in 2013 [24]; its population parameter and sampling parameter are studied in [24].
The uncertainty principle, zero-point energy, and quantum tunneling effect of MQHOA are studied in [25]. MQHOA was inspired by the wavefunction of the quantum harmonic oscillator. It transforms the optimization problem into finding the low-energy state of the potential V(x) = f(x): the second-order Taylor approximation of a complex objective function is a harmonic oscillator potential. According to quantum theory, the wavefunction of the quantum harmonic oscillator represents the distribution of the optimal solution. Different spring coefficients of the quantum harmonic oscillator correspond to different search scales and vary inversely with them. MQHOA's structure is elegant and concise: it includes only two iteration processes, the quantum harmonic oscillator process (QHO process) and the multiscale process (M process). The goal of the optimization problem is to search for the lowest-energy position f(xbest), where xbest is the global minimum position. The QHO process simulates the quantum harmonic oscillator annealing from a high energy level to the ground state. In the M process, MQHOA decreases σ by a factor of 1/2 at each step, which yields an increasing series of spring coefficients. At a fixed σ, the QHO process uses a new wavefunction to obtain sufficient sampling points in the globally optimal area. The new wavefunction is defined as the sum of Gaussian probability density functions: MQHOA's wavefunction at scale σ is the sum of k Gaussian probability density functions centered at the k current optimal positions xi, and it describes the probability distribution of optimal solutions over the domain. The equation can be written as

ψσ(x)² = (1/k) ∑_{i=1}^{k} (1/(√(2π)σ)) exp(−(x − xi)²/(2σ²)).   (1)

The experimental results on 15 typical two-dimensional test functions show that MQHOA performs well in finding global optima [24]. In this paper, we present a variant of MQHOA for multimodal optimization named MQHOA-MMO.
Similar to the local version of PSO, in the proposed MQHOA-MMO, at each scale every sampling point only needs to be compared with the sampling points drawn from the same Gaussian distribution. This paper is organized as follows. Section 2 describes the framework of MQHOA-MMO. Test functions and comparison algorithms are presented in Section 3. The results of the experiments are discussed in Section 4. Finally, Section 5 concludes the paper.
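As a concrete illustration, the mixture-of-Gaussians wavefunction of equation (1) can be evaluated directly. The following is a minimal Python sketch; the function name `wavefunction_sq` and the sample centers are ours, not from the paper:

```python
import math

def wavefunction_sq(x, centers, sigma):
    """Probability density of equation (1): the mean of k Gaussian pdfs
    centered at the k current optimal positions, at scale sigma."""
    norm = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return sum(norm * math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))
               for c in centers) / len(centers)

# The density is highest near the swarm centers and decays away from them.
centers = [-2.0, 0.0, 3.0]
print(wavefunction_sq(0.0, centers, 0.5) > wavefunction_sq(1.5, centers, 0.5))  # True
```

Sampling from this density concentrates new candidate points around the current best positions, which is exactly how both iteration processes generate their samples.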

2. The Framework of MQHOA-MMO

This section presents the framework of MQHOA-MMO. We define the symbols as follows: k is the number of swarms and of Gaussian distributions; m is the number of sampling points of each Gaussian distribution; σmin is the accuracy of the optimization; σs is the standard deviation of all xi; σs′ is the standard deviation of all new xi; Δσ is the absolute value of the difference between σs and σs′; and σ is the current scale of the iteration, whose initial value is defined as the domain length. S = {x1, …, xi, …, xk} is the swarm of k particles, where xi denotes particle i (i = 1, 2, 3, …, k) and is randomly generated in the domain. For every xi, m samples xi,q (q = 1, 2, 3, …, m) are generated following the probability distribution N(xi, σ²), so k × m sampling positions are needed in every iteration. The k optimal positions are stored in the xi; xbest is the optimal position selected from the m sampling positions xi,q, and xbest′ is the best position the algorithm has found so far. MQHOA-MMO includes just two nested iteration processes, the QHO process and the M process, with the QHO process nested inside the M process. The convergence conditions of the QHO process and the M process are Δσ < σ and σ ≤ σmin, respectively. The framework of MQHOA-MMO is described in Algorithm 1.
Algorithm 1

The framework of MQHOA-MMO.

A step-by-step interpretation of the framework of MQHOA-MMO is as follows:

(1) Initialize σmin = 0.00001, dmin = −10, and dmax = 10, so σ = 20. Here we choose k = 20 and m = 200; the influence of the value of k is discussed in Section 4.1.
(2) Randomly generate xi (i = 1, …, 20) in [−10, 10] and calculate the standard deviation σs of all xi.
(3) For each xi, generate xi,q (q = 1, …, 200) following the probability distribution N(xi, σ²).
(4) For each xi, choose the optimal position xbest from its 200 samples xi,q and set xi = xbest.
(5) Calculate the standard deviation σs′ of all new xi and Δσ = |σs − σs′|.
(6) Compare Δσ with σ: if Δσ ≥ σ, return to step (3); if Δσ < σ, set σ = σ/2.
(7) Compare σ with σmin: if σ > σmin, return to step (3); if σ ≤ σmin, return xbest′ and fbest.

According to the framework above, only two parameters (k and m) need to be set; the selection of k is discussed in Section 4.1. In the framework, the superposition of the k Gaussian sampling areas constructs the wavefunction. The wavefunction, written as equation (1), describes the probability distribution of optimal solutions over the domain; its changes across iterations are shown in Figure 5. To reduce the energy of the system, the k optimal positions xbest are retained from the k × m sampling positions. The halving σ = σ/2 transforms the system from a high-energy state at scale σ to the ground state at scale σ/2.
Figure 5

Changes of wavefunction in iterations.

For high-dimensional test functions, MQHOA-MMO can use a two-dimensional array xi,j to store the k high-dimensional central positions of the Gaussian sampling areas N(xi,j, σ²), where i indexes the dimension and j indexes the Gaussian sampling area. For every dimension, MQHOA-MMO calculates the values of σs and Δσ, and the QHO process at scale σ does not end until Δσ ≤ σ in every dimension.
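The two nested processes described above can be sketched in a few lines. This is a minimal one-dimensional Python sketch under our own naming, with a simple iteration cap standing in for the paper's function-evaluation budget (the paper's implementation is in Matlab):

```python
import math
import random
import statistics

def mqhoa_mmo(f, d_min, d_max, k=20, m=200, sigma_min=1e-5, max_qho_iters=50):
    """Minimize f on [d_min, d_max] (one-dimensional sketch). Returns the k
    swarm centers, which gather around the (possibly multiple) global minima."""
    sigma = d_max - d_min                         # current scale
    xs = [random.uniform(d_min, d_max) for _ in range(k)]
    while sigma > sigma_min:                      # M process
        for _ in range(max_qho_iters):            # QHO process at scale sigma
            sigma_s = statistics.pstdev(xs)
            # Local comparison: each center keeps the best of its own
            # m Gaussian samples only, clamped to the domain.
            xs = [min((min(max(random.gauss(x, sigma), d_min), d_max)
                       for _ in range(m)), key=f)
                  for x in xs]
            if abs(sigma_s - statistics.pstdev(xs)) < sigma:
                break                             # QHO converged at this scale
        sigma /= 2.0                              # halve the scale
    return xs

# Example: -sin^6(5*pi*x) has five equal global minima on [0, 1].
random.seed(1)
peaks = mqhoa_mmo(lambda x: -math.sin(5 * math.pi * x) ** 6, 0.0, 1.0)
```

Because each center is compared only against its own samples, distinct centers can settle on distinct global optima instead of all collapsing onto one, which is the point of the local comparison rule in MQHOA-MMO.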

3. Experimental Setup

In this section, we give a brief description of the benchmark functions and comparison algorithms. The experimental setup is presented at the end of this section.

3.1. Test Functions

The benchmark functions we choose are widely used in multimodal optimization. The test functions have various characteristics, such as irregular landscapes and symmetric or equal distributions of optima. The goals are thus to evaluate the algorithm's ability to tackle complicated problems and to validate its capacity to detect all the global peaks of a function. A brief description of the functions is listed in Table 1.
Table 1

Benchmark function.

Function name | D | Range | Benchmark function | Optima
E1-F1: Equal maxima | 1 | x ∈ [0, 1] | F1(x) = sin⁶(5πx) | 5
E1-F2: Uneven maxima | 1 | x ∈ [0, 1] | F2(x) = sin⁶(5π(x^(3/4) − 0.05)) | 5
E1-F3: Himmelblau's | 2 | x, y ∈ [−4, 4] | F3(x, y) = 200 − (x² + y − 11)² − (x + y² − 7)² | 4
E1-F4: Six-Hump Camel | 2 | x ∈ [−1.9, 1.9], y ∈ [−1.1, 1.1] | F4(x, y) = (4 − 2.1x² + x⁴/3)x² + xy + (−4 + 4y²)y² | 2
E1-F5: Shekel's foxholes | 2 | x, y ∈ [−65.54, 65.54] | F5(x, y) = 500 − 1/(0.002 + ∑_{i=0}^{24} 1/(1 + i + (x − a(i))⁶ + (y − b(i))⁶)) | 1
E1-F6: Branin RCOS | 2 | x ∈ [−5, 10], y ∈ [0, 15] | F6(x, y) = (y − (5.1/(4π²))x² + (5/π)x − 6)² + 10(1 − 1/(8π))cos x + 10 | 3
E1-F7: Root function | 2 | x = x1 + ix2, x1, x2 ∈ [−2, 2] | F7(x) = 1/(1 + |x⁶ − 1|) | 6
E1-F8: Hansen | 2 | x, y ∈ [−10, 10] | F8(x, y) = ∑_{i=1}^{5} i cos((i − 1)x + i) · ∑_{j=1}^{5} j cos((j + 1)y + j) | 9
E1-F9: Holder Table | 2 | x1, x2 ∈ [−10, 10] | F9(x) = −|sin(x1)cos(x2)| exp(|1 − √(x1² + x2²)/π|) | 4
E1-F10: Rescaled Six-Hump Camel | 2 | x ∈ [−1.9, 1.9], y ∈ [−1.1, 1.1] | F10(x, y) = −[(4 − 2.1x² + x⁴/3)x² + 10xy + (−4 + 4(10y)²)(10y)²] | 2
E1-F11: 2D Inverted Shubert | 2 | x1, x2 ∈ [−10, 10] | F11(X) = −∏_{i=1}^{D} ∑_{j=1}^{5} j cos((j + 1)xi + j) | 18
E1-F12: Cross-in-Tray | 2 | x1, x2 ∈ [−10, 10] | F12(x) = −0.0001(|sin(x1)sin(x2) exp(|100 − √(x1² + x2²)/π|)| + 1)^0.1 | 4
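Several of these benchmarks are simple to state in code; for example F1 and F3 (a sketch; only the formulas and peak counts in Table 1 are from the paper):

```python
import math

def f1(x):
    """E1-F1, equal maxima: five peaks of height 1 at x = 0.1, 0.3, 0.5, 0.7, 0.9."""
    return math.sin(5.0 * math.pi * x) ** 6

def f3(x, y):
    """E1-F3, Himmelblau's: four global maxima of value 200."""
    return 200.0 - (x ** 2 + y - 11.0) ** 2 - (x + y ** 2 - 7.0) ** 2

print(round(f1(0.3), 6))   # 1.0, a global peak
print(f3(3.0, 2.0))        # 200.0 at the known optimum (3, 2)
```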

3.2. Comparison Algorithms

To evaluate the performance of MQHOA-MMO, it is compared with the following standard multimodal evolutionary algorithms; MQHOA-MMO itself is marked as AL0. AL1 (IWO-σ-GSO [26]): the invasion weed optimization-σ-group search optimizer; AL2 (scma [27]): CMA-ES with a self-adaptive niche radius; AL3 (cde [28]): the original crowding DE; AL4 (sde [29]): speciation-based DE; AL5 (ferpso [30]): fitness-Euclidean-distance-ratio PSO; AL6 (spso [31]): speciation-based PSO; AL7 (r2pso [31]): an lbest PSO with a ring topology in which each member interacts only with its immediate neighbor to its right; AL8 (r3pso [31]): an lbest PSO with a ring topology in which each member interacts with its immediate neighbors on both sides; AL9 (r2psolhc [32]): the same as r2pso, but with no overlapping neighborhoods; AL10 (r3psolhc [32]): the same as r3pso, but with no overlapping neighborhoods.

3.3. Experimental Environment and Criteria

MQHOA-MMO is coded in Matlab R2014, and the simulations are run on an i5 CPU at 2.9 GHz with 8 GB of memory. Results are averaged over 30 independent runs. If the difference between a computed solution and a known global optimum is less than ε, the peak is considered found. The performance of all multimodal algorithms is measured in terms of the following two criteria. Success rate: the percentage of runs in which an algorithm detects all the global peaks. Average peak number: the number of peaks found, averaged over the 30 runs for each function.
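The two criteria can be computed from per-run peak counts as follows (a sketch with illustrative data; the paper reports results over 30 runs):

```python
def success_rate(runs, total_peaks):
    """Percentage of runs in which every global peak was detected."""
    return 100.0 * sum(found == total_peaks for found in runs) / len(runs)

def average_peaks(runs):
    """Mean number of global peaks found per run."""
    return sum(runs) / len(runs)

# Illustrative data: peaks found in each of 5 runs on a 4-peak function.
runs = [4, 4, 3, 4, 2]
print(success_rate(runs, 4))   # 60.0
print(average_peaks(runs))     # 3.4
```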

4. Experimental Studies

The experimental studies and analyses are presented in this section. MQHOA-MMO is run until σ < σmin or the maximum number of function evaluations is exhausted.

4.1. Parameter Experiment

In this section, we examine the effectiveness and efficiency of MQHOA-MMO by applying it to four selected benchmark functions: the Six-Hump Camel Back function (E1-F4), Himmelblau's function (E1-F3), the Hansen function (E1-F8), and the 2D Inverted Shubert function (E1-F11), whose numbers of global optima are 2, 4, 9, and 18, respectively. In MQHOA-MMO, the initial population is generated using two parameters, k and m. We choose different initial values of k to test their impact on the ability to find the global optima, running the algorithm 30 times for each k while m = 200. Figure 1 contains four plots relating the parameter k to the number of global optima that MQHOA-MMO can find; the initial value of k begins at 5 and increases by 5 each time. According to Figures 1(c) and 1(d), when k is smaller than the number of global optima, MQHOA-MMO can find only part of the global optima. As k increases, most of the global optima are found once the initial k is close to the number of global optima. When the number of optimal solutions is small (for example, fewer than 10), increasing k suffices to find all of them. When the number of global optima is larger (more than 10), the number of optima found also increases with k, and once k reaches a certain value, it becomes stable. When the domain is large, k should increase on a larger scale, by more than ten times. We conclude that, to find all the global optima, the initial value of k should be larger than the number of global optima; moreover, as the domain grows, k should increase correspondingly.
Figure 1

Relation of k and optimal solution with m = 200, σmin = 10e − 6, and repeat time = 30.

4.2. Convergence Experiment

To make the convergence more intuitive, we apply the 12 benchmark test functions to demonstrate the convergence and verify the effectiveness of MQHOA-MMO. Figures 2, 3, and 4 show the relationship between fitness and the number of iterations, where fitness is the average of the k optimal function values of the test functions and the iteration count is the number of QHO processes. For the 12 benchmark test functions, we set the parameters as follows: k = 50, m = 200, and σmin = 10e − 6. Results are averaged over 30 independent runs.
Figure 2

Convergence of F1–F4, where k = 50, m = 200, σmin = 10e − 6, repeat time = 30.

Figure 3

Convergence of F5–F8, where k = 50, m = 200, σmin = 10e − 6, repeat time = 30.

Figure 4

Convergence of F9–F11, where k = 50, m = 200, σmin = 10e − 6, repeat time = 30.

From Figures 2–4, we find that almost all the functions have converged to several small areas before the tenth iteration; some, such as F1, F2, F3, F5, F6, F9, and F12, converge to several small areas by the fifth iteration. Table 2 presents the iteration counts of the 12 test functions when σ < σmin or the maximum number of function evaluations is exhausted. The results show that MQHOA-MMO converges quickly and that, for most test functions, there is only one QHO process in each M process.
Table 2

Iteration times (N) for 12 test functions, where k = 50, m = 200, σmin = 10e − 6, σ = dmax − dmin, and repeat time = 30.

F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12
20 | 20 | 20 | 20 | 27 | 24 | 22 | 25 | 25 | 22 | 25 | 25

4.3. Changes of Wavefunction

In this section, we choose the Hansen function (E1-F8) to present the changes of the wavefunction in MQHOA-MMO. We set k = 35, m = 200, dmin = −10, and dmax = 10. The function is written as follows:

F8(x, y) = ∑_{i=1}^{5} i cos((i − 1)x + i) · ∑_{j=1}^{5} j cos((j + 1)y + j).

MQHOA-MMO's wavefunction is written as (1). To describe the wavefunction clearly, we define three notions. The first is the incipient centers of the k swarms, which are the xi used in the QHO process's first iteration for each σ. The second is the incipient wavefunction, which is the wavefunction of the xi used in the QHO process's first iteration for each σ. The last is the last wavefunction, which is the wavefunction of the xi used in the QHO process's last iteration for each σ. After the last QHO iteration for each σ, σ is halved to σ/2. Figure 5 presents the changes of the wavefunction over the iterations. Figures 5(a)–5(d) show the incipient centers of the k swarms, Figures 5(e)–5(h) the incipient wavefunctions, and Figures 5(i)–5(l) the last wavefunctions, for σ = 20, 2.5, 1.25, and 0.3125, respectively. Figures with the same σ show the changes of the wavefunction across different QHO iterations; for example, Figures 5(g) and 5(k) show the changes within QHO iterations at σ = 1.25. As mentioned in Section 4.2, the number of QHO iterations is small in each M iteration, so the change of the wavefunction across QHO iterations at the same σ is not obvious.
Figures with different σ show the differences of the wavefunctions across M iterations; for example, Figures 5(i), 5(j), 5(k), and 5(l) show the changes of the wavefunction over M iterations. According to our definitions, Figures 5(j) and 5(g) have the same centers of the k swarms but different σ. From the wavefunctions, we can also see that wherever the probability density of the optimal solution is large at a certain point, sampling is drawn toward that area, in accordance with Gaussian sampling: the higher the density, the more likely the optimal solution lies in that region. With the decrease of σ, the probability distribution of the particles becomes more and more concentrated. This is the principle by which MQHOA-MMO uses multiple scales to achieve the precision of the algorithm.

4.4. Comparison Experiments

In this section, we present a detailed discussion of the performance of the various algorithms chosen for the comparative study. We use 6 challenging functions with various characteristics to evaluate MQHOA-MMO's performance. MQHOA-MMO runs until σ < σmin or the maximum number of function evaluations is exhausted. The experimental results of the other algorithms are quoted from [26]. All performances of MQHOA-MMO are calculated and averaged over 30 independent runs with k = 50, m = 200, and σmin = 10e − 6. From Table 3, we can see that MQHOA-MMO's success rate reaches 100%. For some complex test functions, such as F3, F5, and F6, MQHOA-MMO still achieves a 100% success rate, while some other algorithms cannot even find the global optima.
Table 3

Success rates for test functions; the parameters for MQHOA-MMO are as follows: k = 50, m = 200, σmin = 10e − 6, and repeat time = 30; the domain of definition is different with different functions.

Func | ε | AL0 | AL1 | AL2 | AL3 | AL4 | AL5 | AL6 | AL7 | AL8 | AL9 | AL10
F1 | 0.000001 | 100 | 100 | 92 | 28 | 72 | 84 | 88 | 92 | 88 | 100 | 92
F2 | 0.000001 | 100 | 100 | 88 | 28 | 60 | 100 | 92 | 88 | 72 | 92 | 92
F3 | 0.0005 | 100 | 88 | 72 | 0 | 72 | 72 | 0 | 24 | 28 | 24 | 24
F4 | 0.000001 | 100 | 60 | 60 | 0 | 100 | 96 | 0 | 60 | 56 | 52 | 60
F5 | 0.00001 | 100 | 100 | 88 | 52 | 32 | 100 | 56 | 88 | 76 | 72 | 60
F6 | 0.1 | 100 | 92 | 74 | 82 | 78 | 76 | 70 | 72 | 66 | 60 | 62
Table 4 shows the average number of global peaks detected by MQHOA-MMO and the other ten evolutionary multimodal optimization algorithms on the test functions. It further indicates that MQHOA-MMO is able to detect the global optima in the test cases and yields a good level of accuracy. IWO-σ-GSO is an excellent new combination algorithm and generates acceptable results over the test functions. SCMA can also generate good results on simple and low-dimensional functions. The performance of both gradually degrades as the dimension increases. The SDE algorithm shows very poor performance when a high accuracy is required for the peaks, and otherwise SDE is unable to generate satisfactory solutions. FERPSO generates relatively satisfactory results on many test functions. For MQHOA-MMO, given suitable parameters, all global optima can be found.
Table 4

Average number of peaks found for the test functions; the parameters for MQHOA-MMO are as follows: k = 50, m = 200, σmin = 10e − 6, and repeat time = 30, and the domain of definition is different with different functions.

Func | ε | AL0 | AL1 | AL2 | AL3 | AL4 | AL5 | AL6 | AL7 | AL8 | AL9 | AL10
F1 | 0.000001 | 5 | 5 | 4.92 | 3.84 | 4.72 | 4.84 | 4.88 | 4.92 | 4.88 | 5 | 4.92
F2 | 0.000001 | 5 | 5 | 4.88 | 3.96 | 4.6 | 5 | 4.92 | 4.88 | 4.72 | 4.92 | 4.88
F3 | 0.0005 | 4 | 3.88 | 3.72 | 0.32 | 3.72 | 3.68 | 0.84 | 2.92 | 2.76 | 3 | 3.12
F4 | 0.000001 | 2 | 1.6 | 1.6 | 0.04 | 2 | 1.96 | 0.08 | 1.44 | 1.56 | 1.56 | 1.48
F5 | 0.00001 | 1 | 1 | 0.88 | 0.52 | 0.32 | 1 | 0.56 | 0.88 | 0.76 | 0.72 | 0.6
F6 | 0.1 | 3 | 2.88 | 2.56 | 2.76 | 2.72 | 2.64 | 2.48 | 2.52 | 2.44 | 2.36 | 2.40

5. Conclusion

In this paper, we proposed a multimodal optimization algorithm named MQHOA-MMO, which uses the wavefunction to locate the possible positions of the optimal solutions. The experimental study used 12 distinct test functions with numbers of global peaks varying from 2 to 25. Comparing the results obtained by MQHOA-MMO and other optimization algorithms on several benchmark functions under the two criteria, together with the performance experiments, reveals that MQHOA-MMO can detect all the global optima quickly, effectively, controllably, and with high accuracy. Furthermore, the algorithm can find the global optima of multimodal functions without being trapped in a local optimum. The experimental study clearly indicated that, in most of the test cases, the performance of MQHOA-MMO remains statistically better than that of all the algorithms compared with it. For some complex functions, MQHOA-MMO could not achieve a good success rate; future research on MQHOA-MMO will focus on the optimization of such complex functions and on higher-dimensional functions.

1. A species conserving genetic algorithm for multimodal function optimization.
Authors: Jian-Ping Li; Marton E. Balazs; Geoffrey T. Parks; P. John Clarkson
Journal: Evol Comput       Date: 2002

2. Adaptive niche radii and niche shapes approaches for niching with the CMA-ES.
Authors: Ofer M. Shir; Michael Emmerich; Thomas Bäck
Journal: Evol Comput       Date: 2010

