
Fractional-order quantum particle swarm optimization.

Lai Xu, Aamir Muhammad, Yifei Pu, Jiliu Zhou, Yi Zhang

Abstract

Motivated by the concepts of quantum mechanics and particle swarm optimization (PSO), quantum-behaved particle swarm optimization (QPSO) was developed to achieve better global search ability. This paper proposes a new method to improve the global search ability of QPSO with fractional calculus (FC). Based on one of the most frequently used fractional differential definitions, the Grünwald-Letnikov definition, we introduce its discrete expression into the position updating of QPSO. Extensive experiments on well-known benchmark functions were performed to evaluate the performance of the proposed fractional-order quantum particle swarm optimization (FQPSO). The experimental results demonstrate its superior ability in achieving optimal solutions for several different optimizations.


Year:  2019        PMID: 31220152      PMCID: PMC6586292          DOI: 10.1371/journal.pone.0218285

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Particle swarm optimization (PSO) [1], inspired by animal social behaviors such as bird flocking, was first proposed by Kennedy and Eberhart as a population-based optimization technique. In PSO, the potential solutions, called particles, traverse the solution space by relying on their own experience and on the current best particle. PSO is competitive with the classical Genetic Algorithm (GA) [2], evolutionary programming (EP) [3], evolution strategies (ES) [4], genetic programming (GP) [5] and other classic algorithms, and it has attracted increasing attention in recent years thanks to its effectiveness on different optimization problems [6][7][8].

The quantum computer [9] was proposed about 30 years ago, and its formal definition was given in the late 1980s. Since the quantum computer has shown its potential on several special problems [10], many efforts have been dedicated to this field. Several well-known algorithms were proposed, Shor's quantum factoring algorithm being the most famous among them [11]. Inspired by a similar idea, quantum-behaved particle swarm optimization (QPSO) [12] was introduced in 2004 by Sun et al. to improve the convergence of classical PSO. In quantum space, particles search the complete solution space and convergence to the global optimum is guaranteed.

In recent decades, fractional calculus has drawn increasing interest and become an important branch of mathematical analysis. Furthermore, the random variables in a physical process can be regarded as a substitute for real stochastic motion, so fractional calculus can be introduced to analyze the physical states and processes of objects in Euclidean space. Fractional differential functions have two features: for primary functions, the fractional derivative is a power function, while for other functions it is an iterative sum or product of specific functions.
Meanwhile, it has been shown that many fractional-order models are more suitable for describing natural phenomena. Based on these observations, fractional calculus has been introduced into many fields such as viscoelastic theory [13], diffusion processing [14] and stochastic fractal dynamics [15]. Most research on fractional-order applications focuses on the transient states of physical changes; the evolutionary procedures of systems are rarely considered. In recent years, QPSO has attracted great attention from many researchers. To balance the global and local search abilities, Xi et al. proposed a weighted QPSO (WQPSO) [16]. Jiao et al. proposed a dynamic-context cooperative quantum-behaved particle swarm optimization (CQPSO) [17] for medical image segmentation. Although QPSO and its variants perform better in some respects, they do not make full use of the state information gathered during the search process and are inefficient at locating the global optimum. In this paper, a novel quantum particle swarm optimization with a fractional-order position is proposed. Owing to the nonlinear, non-causal and non-stationary characteristics of fractional calculus, the search for the global optimum can be significantly accelerated [18][19]. The rest of this paper is organized as follows: Section 2 introduces the mathematical background of fractional calculus. Section 3 presents the basic ideas of PSO and QPSO and details the proposed method. Section 4 demonstrates the experimental results of the proposed method. Finally, Section 5 outlines the conclusion.

Background theory for fractional calculus

The Grünwald-Letnikov (GL) [20], Riemann-Liouville (RL) [21], and Caputo [22] definitions are three different definitions of fractional calculus in Euclidean space. Due to its convenient computational form, the GL definition of the fractional derivative is commonly used for engineering problems. The GL derivative of order α of a function f is defined as

D^α f(x) = lim_{h→0} h^(−α) Σ_{k=0}^{[(x−a)/h]} (−1)^k [Γ(α+1) / (Γ(k+1) Γ(α−k+1))] f(x − kh),   (1)

where f(x) is a differintegrable function, [a,x] is the function duration, and Γ is the gamma function; D^α denotes the GL fractional differential operator. In (1), when the number of sample points N = (x−a)/h is big enough, the limit symbol can be neglected and we can rewrite (1) as

D^α f(x) ≈ ((x−a)/N)^(−α) Σ_{k=0}^{N−1} (−1)^k [Γ(α+1) / (Γ(k+1) Γ(α−k+1))] f(x − k(x−a)/N),   (2)

which is an approximate form substituting the fractional derivative with multiplication and addition operations [12]. For a 1D signal sampled with unit step, it has the following expression:

D^α f(x) ≈ f(x) − α f(x−1) + [α(α−1)/2!] f(x−2) − [α(α−1)(α−2)/3!] f(x−3) + …   (3)
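The truncated expansion in (3) is straightforward to compute numerically. The sketch below (function names `gl_coeffs` and `gl_derivative` are illustrative, not from the paper) generates the coefficients (−1)^k·C(α,k) with the standard recurrence c_k = c_{k−1}(k−1−α)/k, which avoids evaluating gamma functions directly, and applies them to a 1D signal with unit step:

```python
def gl_coeffs(alpha, n_terms):
    """Coefficients (-1)^k * C(alpha, k) of the discrete GL expansion,
    built with the recurrence c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c = [1.0]
    for k in range(1, n_terms):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

def gl_derivative(signal, alpha, n_terms=4):
    """Approximate GL fractional derivative of a 1D signal (unit step),
    truncated to the first n_terms terms of (3)."""
    c = gl_coeffs(alpha, n_terms)
    out = []
    for x in range(len(signal)):
        # sum over as many past samples as are available
        out.append(sum(c[k] * signal[x - k] for k in range(min(n_terms, x + 1))))
    return out
```

For α = 1 the coefficients reduce to [1, −1, 0, 0], so `gl_derivative` degenerates to the ordinary first difference, which is a quick sanity check of the truncation.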

Particle swarm optimization with fractional-order position

Quantum particle swarm optimization

Trajectory analyses in [23] demonstrated that each particle should converge to its corresponding local attractor C, which is given by

C_id = a·pb_id + (1 − a)·gb_d,

where a = c1r1/(c1r1 + c2r2). It can be seen that the local attractor is a stochastic attractor of particle i that lies in a hyper-rectangle with pb and gb being the two ends of its diagonal. Based on the convergence analysis of PSO [24] and inspired by the theory of quantum physics, Sun et al. studied the convergence behavior of PSO and proposed a novel PSO model derived from quantum mechanics, abbreviated as QPSO [25]. The quantum behavior of the particles is modeled with a Delta potential well centered at the attractor. In the framework of quantum time-space, the quantum state of a particle is defined by a wave function ψ(x,t), whose squared modulus Q = |ψ(x,t)|² is the probability density of the particle's location and satisfies the normalization condition ∫ Q dx = 1. For the Delta potential well, writing y = x − C, the normalized wave function is

ψ(y) = (1/√L) e^(−|y|/L),

so the probability density Q and the corresponding distribution function F are

Q(y) = (1/L) e^(−2|y|/L),   F(y) = e^(−2|y|/L),

where L(t) denotes the standard deviation of the distribution, which describes the search range of each particle. The position of the particle can be obtained by the Monte Carlo method: let s = (1/L)u, where u is a random number uniformly distributed on (0,1). Setting s = Q(y) gives u = e^(−2|y|/L), and solving for x yields

x = C ± (L/2) ln(1/u).

The convergence condition of PSO requires L(t) → 0 as t → ∞. Taking L as a function of time, the iterative version of the i-th multidimensional particle follows as

x_id(t+1) = C_id(t) ± (L_id(t)/2) ln(1/u).

To evaluate L(t), a global point called the mean best position is introduced. This point, denoted mbest, is computed as the mean of the pbest positions of all M particles:

mbest_d = (1/M) Σ_{i=1}^{M} pb_id.

The value of L(t) is then calculated by

L_id(t) = 2β |mbest_d − x_id(t)|,

so the position update finally reads

x_id(t+1) = C_id(t) ± β |mbest_d − x_id(t)| ln(1/u),

where the parameter β is the step size, which is utilized to control the convergence speed, and rand, a random number with a range of 0 to 1, is the deciding factor of the "±".
Table 1 illustrates the main steps of QPSO.
Table 1

The main steps of QPSO.

Algorithm 2
Initialize QPSO parameters and particle positions X_i
Repeat
    For all particles i do
        Compute f(X_i)
        If f(X_i) < f(pb_i) then pb_i = X_i
        If f(pb_i) < f(gb) then gb = pb_i
        Calculate mbest, C_id and L_id(t)
        If rand > 0.5
            X_id(t+1) = C_id(t) + β |mbest_d − X_id(t)| ln(1/u)
        Else
            X_id(t+1) = C_id(t) − β |mbest_d − X_id(t)| ln(1/u)
        End
    End
    t = t + 1
Until stopping criteria are met
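The steps of Algorithm 2 can be sketched as a minimal, self-contained Python routine. This is an illustrative reading of the algorithm, not the authors' reference implementation: the function name `qpso`, the fixed contraction-expansion coefficient β = 0.75, and the search bounds are assumptions for the sketch.

```python
import random
import math

def qpso(f, dim, n_particles=20, max_iter=1000, beta=0.75, bounds=(-100, 100)):
    """Minimal QPSO sketch: sample each coordinate around the stochastic
    attractor C with the delta-potential-well rule, contracted by mbest."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pb = [x[:] for x in X]               # personal best positions
    gb = min(pb, key=f)[:]               # global best position
    for _ in range(max_iter):
        # mean of all personal bests (mbest)
        mbest = [sum(p[d] for p in pb) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                a = r1 / (r1 + r2)       # attractor weight
                C = a * pb[i][d] + (1 - a) * gb[d]
                u = 1.0 - random.random()            # u in (0, 1]
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                # "+/-" decided by a fair coin, as in Algorithm 2
                X[i][d] = C + step if random.random() > 0.5 else C - step
            if f(X[i]) < f(pb[i]):
                pb[i] = X[i][:]
                if f(pb[i]) < f(gb):
                    gb = pb[i][:]
    return gb, f(gb)
```

On a low-dimensional Sphere function this sketch contracts to the origin within a few hundred iterations, matching the qualitative behavior reported for QPSO.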

QPSO with the fractional-order position

It is well known that fractional calculus has a remarkable long-term memory characteristic [26]. From the Grünwald-Letnikov definition in (1), it can be seen that the fractional derivative is computed from all historical states, which makes it naturally suitable for the iterative procedure of intelligent optimization algorithms. For example, Pires et al. introduced fractional calculus into the update formula of the particle swarm optimization algorithm [27]. To further improve the convergence speed and accuracy of QPSO, this section details the proposed QPSO with a fractional-order position. Initially, the position update of QPSO is rearranged so that its left side becomes a position difference:

x(t+1) − x(t) = C(t) + β |mbest − x(t)| ln(1/u) − x(t),   for rand > 0.5,
x(t+1) − x(t) = C(t) − β |mbest − x(t)| ln(1/u) − x(t),   for rand < 0.5,

which can be uniformly rewritten as

x(t+1) − x(t) = C(t) ± β |mbest − x(t)| ln(1/u) − x(t).

The left side of this expression is the discrete version of the derivative with α = 1, and we can extend it to a generalized version, leading to the fractional-order expression

D^α x(t+1) = C(t) ± β |mbest − x(t)| ln(1/u) − x(t).

Previous research has demonstrated that when the order α of the derivative is set within [0,1], it introduces a smoother variation and a prolonged memory effect, which may lead to better performance than the original integer-order method [12][13]. To study the behavior of the proposed fractional-order strategy, a set of functions is tested and the order α is set to range from 0 to 1 with a step size of Δα = 0.1. To reduce the computational complexity, we usually truncate (3) and only use the first four terms, so we have

D^α x(t+1) ≈ x(t+1) − α x(t) − (1/2)α(1−α) x(t−1) − (1/6)α(1−α)(2−α) x(t−2).

Substituting this truncation into the fractional-order expression, the position update becomes

x(t+1) = C(t) + β ln(1/u) mbest − (β ln(1/u) + 1 − α) x(t) + Σx(t)

when rand > 0.5, mbest > x(t) or rand < 0.5, mbest < x(t), and

x(t+1) = C(t) − β ln(1/u) mbest + (β ln(1/u) − 1 + α) x(t) + Σx(t)

when rand > 0.5, mbest < x(t) or rand < 0.5, mbest > x(t), where

Σx(t) = (1/2)α(1−α) x(t−1) + (1/6)α(1−α)(2−α) x(t−2)

denotes the sum of the historical terms. It can be seen that the position update of the particles depends not only on the previous position of the particle but also on its historical positions at different points in time. The position update is thus the result of long-term memory, which can protect the population distribution and diversity to a certain extent.
The flowchart of the proposed quantum-behaved swarm optimization with the fractional position (FQPSO) is shown in Table 2.
Table 2

The main steps of FQPSO.

Algorithm 3
Initialize FQPSO parameters; initialize population with random X_i
Repeat
    For each particle i ∈ [1, s] do
        Compute f(X_i)
        If f(X_i) < f(pb_i) then pb_i = X_i
        If f(pb_i) < f(gb) then gb = pb_i
        Calculate mbest and C_id
        If rand > 0.5 and mbest_d > X_id(t), or rand < 0.5 and mbest_d < X_id(t)
            X_id(t+1) = C_id(t) + β ln(1/u) mbest_d − (β ln(1/u) + 1 − α) X_id(t) + ΣX_id(t)
        Else
            X_id(t+1) = C_id(t) − β ln(1/u) mbest_d + (β ln(1/u) − 1 + α) X_id(t) + ΣX_id(t)
        End
    End
    t = t + 1
Until termination criterion is satisfied
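The fractional-order position step of Algorithm 3 can be sketched in isolation for a single coordinate. The helper names (`gl_history_terms`, `fractional_position_update`) are hypothetical, and the sign handling below follows the equivalent formulation of drawing the "±" with a coin and then replacing the integer-order memory X(t) with the fractional combination αX(t) plus history terms; it is a sketch under those assumptions, not the authors' code.

```python
import random
import math
from collections import deque

def gl_history_terms(alpha, history):
    """Sum of the truncated GL history terms:
       (1/2)a(1-a)X(t-1) + (1/6)a(1-a)(2-a)X(t-2).
    Keeping four terms of the GL expansion leaves two history terms."""
    coeffs = [alpha * (1 - alpha) / 2,
              alpha * (1 - alpha) * (2 - alpha) / 6]
    return sum(c * x for c, x in zip(coeffs, history))

def fractional_position_update(C, x_hist, mbest, alpha=0.8, beta=0.75):
    """One-coordinate FQPSO position update (sketch).
    x_hist: past positions, newest first: X(t), X(t-1), X(t-2), ..."""
    x_t = x_hist[0]
    u = 1.0 - random.random()                      # u in (0, 1]
    step = beta * abs(mbest - x_t) * math.log(1.0 / u)
    qpso_term = C + step if random.random() > 0.5 else C - step
    # replace the integer-order memory X(t) with alpha*X(t) + history
    return qpso_term - x_t + alpha * x_t + gl_history_terms(alpha, list(x_hist)[1:])
```

A useful sanity check: for α = 1 every history coefficient contains the factor α(1−α) = 0, so the update collapses exactly to the plain QPSO step, as the derivation requires.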

Experiments

Experimental setup

To validate the performance of the proposed FQPSO, the 8 benchmark functions [28-30] listed in Table 3 were used to compare FQPSO with PSO and QPSO under the same maximum number of function evaluations (FEs). For FQPSO, the order was set from 0.1 to 0.9 with a step of 0.1. First, to investigate the impact of the fractional-order position in the proposed algorithm, FQPSO with different fractional orders was compared with QPSO. Then, the best results of FQPSO were compared with other variants of PSO, including PSO [31], QPSO, PSO with both chaotic sequences and crossover operation (CCPSO) [32], naive PSO (NPSO) [33] and moderate-random-search-strategy PSO (MRPSO) [34]. The parameters of the compared algorithms were set as recommended in the original references. Since the impact of population size on the performance of PSO-based methods is of minor significance [35], all experiments in this research were performed with a population size of 20.
Table 3

Benchmark test functions.

F | Formula | Range | Xmax | fmin | X*
f1 | Σ_{i=1}^{n} x_i² | [-100, 100] | 100 | 0 | 0
f2 | Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)² | [-100, 100] | 100 | 0 | 0
f3 | Σ_{i=1}^{n} i·x_i² | [-100, 100] | 100 | 0 | 0
f4 | Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i| | [-10, 10] | 10 | 0 | 0
f5 | Σ_{i=1}^{n} (x_i)² + Σ_{i=1}^{n} (x_i)² | [-10, 10] | 10 | 0 | 0
f6 | Σ_{i=1}^{n} (x_i² − 10cos(2πx_i) + 10) | [-100, 100] | 100 | 0 | 0
f7 | Σ_{i=1}^{n} Σ_{k=0}^{20} 0.5^k cos(2π·3^k(x_i + 0.5)) − n·Σ_{k=0}^{20} 0.5^k cos(2π·3^k·0.5) | [-5.12, 5.12] | 5.12 | 0 | 0
f8 | −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i²)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e | [-5.12, 5.12] | 32 | 0 | 0

X* denotes the global optimum.
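A few of the benchmark functions in Table 3 can be written out directly; the sketch below covers the Sphere (f1), Rastrigin (f6) and Ackley forms as commonly defined, with function names chosen here for illustration:

```python
import math

def sphere(x):
    """f1: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """f6: sum(x^2 - 10*cos(2*pi*x) + 10), highly multimodal."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):
    """Ackley function, global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

All three evaluate to 0 at the origin, which is the global optimum X* listed in the table.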

β is computed according to the following linearly decreasing formula:

β = β1 + (β0 − β1)(tmax − t)/tmax,

where β0 = 0.8, β1 = 0.6, t is the current number of iterations and tmax is the maximum number of iterations [36].

Testing FQPSO with different fractional-order

Since QPSO is a stochastic algorithm, it produces a different convergence trajectory on every run. Therefore, the simulations were performed 50 times for each value in the parameter set α = {0, 0.1, 0.2, …, 1}. Figs 1 and 2 give the results for the adopted optimization functions f_j, j = 1, 2, …, 8. To show the gains achieved by the proposed algorithm, three groups of experiments were performed. In the unimodal (f1-f5, Group 1) and multimodal (f6-f8, Group 2) function tests, the maximum numbers of FEs were set to 10000, 30000 and 100000 for the 10-D, 30-D and 100-D problems, respectively. We report both the best and the mean results. The final results over 50 runs of FQPSO are summarized in Tables 4-7.
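The repeated-run protocol above (50 independent runs, reporting the best and the mean final objective value) can be sketched generically. The `run_trials` helper below is an assumption for illustration; it takes any zero-argument stochastic optimizer returning a final objective value:

```python
def run_trials(optimizer, n_runs=50):
    """Repeat a stochastic optimizer and report the Best and Mean
    final objective values over all runs, the statistics tabulated
    in Tables 4-7."""
    finals = [optimizer() for _ in range(n_runs)]
    return min(finals), sum(finals) / n_runs
```

In the experiments each table cell would then be produced by one such call, with the optimizer closed over a benchmark function, a dimension and a fractional order α.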
Fig 1

Comparison between FQPSO with different fractional-order on Group 1.

(a) f1, (b) f2, (c) f3, (d) f4, (e) f5.

Fig 2

Comparison between FQPSO with different fractional-order on Group 2.

(a) f6, (b) f7, (c) f8.

Table 4

Comparison between FQPSOs with different fractional-order on function 1–2.

Each cell lists Best / Mean over 50 runs.

Fractional order | f1: Dim=10, FEs=10000 | f1: Dim=30, FEs=30000 | f1: Dim=100, FEs=30000 | f2: Dim=10, FEs=10000 | f2: Dim=30, FEs=30000 | f2: Dim=100, FEs=30000
α = 1 | 5.6324e-267 / 4.5321e-265 | 2.3453e-241 / 1.5837e-239 | 2.9833e-230 / 3.5636e-231 | 3.7568e-258 / 1.8379e-255 | 1.7033e-237 / 6.8877e-234 | 1.4328e-163 / 4.5673e-158
α = 0.9 | 0 / 0 | 0 / 0 | 5.3234e-269 / 4.5639e-267 | 0 / 0 | 0 / 0 | 5.3293e-204 / 6.3214e-201
α = 0.8 | 0 / 0 | 0 / 0 | 7.3535e-248 / 6.4356e-246 | 0 / 0 | 0 / 0 | 8.5313e-198 / 4.2314e-197
α = 0.7 | 0 / 0 | 0 / 0 | 5.3623e-242 / 7.5323e-239 | 0 / 0 | 0 / 0 | 5.3241e-188 / 3.1235e-185
α = 0.6 | 0 / 0 | 0 / 0 | 7.4342e-197 / 6.4329e-194 | 0 / 0 | 0 / 0 | 8.4232e-163 / 5.3123e-161
α = 0.5 | 0 / 0 | 0 / 0 | 5.3252e-146 / 5.9753e-143 | 0 / 0 | 0 / 0 | 4.3242e-141 / 8.5223e-140
α = 0.4 | 0 / 0 | 0 / 0 | 9.5332e-108 / 5.4256e-99 | 0 / 0 | 0 / 0 | 5.3213e-111 / 6.4132e-108
α = 0.3 | 0 / 0 | 0 / 0 | 6.5352e-56 / 5.953e-50 | 0 / 0 | 0 / 0 | 8.5231e-66 / 5.3145e-49
α = 0.2 | 2.1613e-236 / 6.6966e-250 | 5.1257e-106 / 2.2847e-100 | 6.4235e-18 / 3.4562e-11 | 2.1552e-152 / 7.0445e-145 | 1.0355e-105 / 2.2098e-98 | 1.2345e-16 / 8.5242e-08
α = 0.1 | 2.613e-53 / 1.2778e-45 | 3.5677e-16 / 3.9080e-12 | 4.5712e-08 / 3.4564e-05 | 1.4244e-36 / 2.0523e-24 | 4.5673e-15 / 2.4097e-11 | 5.3113e-06 / 4.5313e-04
Table 7

Comparison between FQPSOs with different fractional-order on function 7–8.

Each cell lists Best / Mean over 50 runs.

Fractional order | f7: Dim=10, FEs=10000 | f7: Dim=30, FEs=30000 | f7: Dim=100, FEs=30000 | f8: Dim=10, FEs=10000 | f8: Dim=30, FEs=30000 | f8: Dim=100, FEs=30000
α = 1 | 7.1054e-15 / 9.9476e-15 | 2.5251e-10 / 4.8798e-07 | 8.5322e-08 / 5.4252e-07 | 1.4622e-248 / 1.2011e-215 | 7.9936e-15 / 7.9936e-15 | 4.5231e-06 / 3.4251e-05
α = 0.9 | 1.9257e-17 / 1.8609e-13 | 5.6302e-11 / 3.7460e-08 | 6.5232e-15 / 7.5323e-11 | 0 / 0 | 4.4409e-15 / 6.9278e-15 | 2.3451e-07 / 4.4123e-06
α = 0.8 | 4.9016e-25 / 5.6360e-18 | 5.4000e-10 / 1.1253e-07 | 7.4213e-19 / 9.3421e-15 | 0 / 0 | 1.6409e-15 / 5.1514e-15 | 8.4222e-07 / 5.3245e-06
α = 0.7 | 8.3079e-33 / 1.1100e-22 | 5.4000e-20 / 8.0348e-15 | 6.4231e-24 / 5.3313e-19 | 0 / 0 | 4.4409e-15 / 4.4708e-15 | 6.4134e-07 / 5.3121e-06
α = 0.6 | 5.0220e-44 / 2.4418e-26 | 3.4005e-12 / 1.0209e-08 | 1.2334e-14 / 5.4134e-13 | 0 / 0 | 4.4409e-15 / 4.4409e-15 | 4.3311e-07 / 4.2134e-06
α = 0.5 | 6.5183e-43 / 9.7154e-22 | 1.6486e-11 / 2.2779e-09 | 7.4231e-09 / 9.3134e-06 | 0 / 0 | 4.4409e-15 / 4.4409e-15 | 1.2134e-08 / 5.4131e-06
α = 0.4 | 4.765e-28 / 8.98017e-13 | 3.1412e-17 / 3.1149e-14 | 8.4255e-10 / 5.3424e-08 | 6.2616e-251 / 5.5247e-243 | 4.4409e-15 / 4.4409e-15 | 3.4131e-07 / 3.4111e-06
α = 0.3 | 2.0456e-16 / 1.1142e-14 | 3.9874e-13 / 1.0676e-10 | 8.4325e-12 / 4.5231e-10 | 2.6343e-147 / 9.3570e-141 | 3.7503e-15 / 2.8574e-15 | 5.3314e-08 / 4.3141e-07
α = 0.2 | 3.1005e-19 / 6.01e-13 | 4.2733e-17 / 2.6818e-12 | 3.4112e-05 / 1.2144e-03 | 1.5234e-74 / 2.0325e-68 | 6.7564e-15 / 6.9345e-15 | 6.5131e-03 / 4.5131e-01
α = 0.1 | 0.004777 / 0.0865 | 0.0813 / 0.7666 | 0.1133 / 1.2134 | 5.9892e-30 / 2.2309e-25 | 7.9835e-15 / 8.2343e-15 | 0.1314 / 1.2144

Fig 1 shows the performance of FQPSO with different fractional orders on Group 1. f1, the Sphere function, is the most widely used unimodal test function. Compared with the algorithms using an integer-order position, FQPSO achieves the best results for this function, and similar results were obtained for the other unimodal functions. The improvements achieved by FQPSO on these unimodal functions suggest that fractional-order methods are better at fine-grained search than integer-order ones. However, it is also worth noting that FQPSO with orders 0.1 and 0.2 did not outperform the integer-order version. The reason is that the truncated expansion is only an approximation of D^α, and the approximation accuracy for D^0.1 and D^0.2 is not good enough. From Fig 1, we can see that the convergence accuracy of most FQPSO variants is better than that of QPSO. For the 10-D and 30-D problems on functions 1, 2 and 3, shown in Fig 1A, 1B and 1C and Tables 4 and 5, the convergence accuracies are better than QPSO when 0.3 ≤ α ≤ 0.9. For the 10-D and 30-D problems on function 4, the convergence accuracies are better than QPSO when 0.4 ≤ α ≤ 0.9, and on function 5 when 0.2 ≤ α ≤ 0.9. Tables 4-6 also show that for the 100-D problems the convergence accuracies are better than QPSO on functions 1, 2, 3, 4 and 5 when 0.7 ≤ α ≤ 0.9.
Table 5

Comparison between FQPSOs with different fractional-order on function 3–4.

Each cell lists Best / Mean over 50 runs.

Fractional order | f3: Dim=10, FEs=10000 | f3: Dim=30, FEs=30000 | f3: Dim=100, FEs=30000 | f4: Dim=10, FEs=10000 | f4: Dim=30, FEs=30000 | f4: Dim=100, FEs=30000
α = 1 | 6.325e-259 / 7.424e-255 | 1.0111e-228 / 1.2325e-224 | 4.3529e-194 / 7.5332e-179 | 1.4622e-248 / 1.2011e-215 | 6.2412e-111 / 2.6011e-99 | 6.4353e-56 / 8.4224e-41
α = 0.9 | 0 / 0 | 0 / 0 | 8.4243e-223 / 6.5324e-215 | 0 / 0 | 3.5000e-323 / 5.9000e-323 | 7.5224e-75 / 6.3243e-71
α = 0.8 | 0 / 0 | 0 / 0 | 9.5352e-245 / 5.3563e-228 | 0 / 0 | 1.0000e-323 / 4.9000e-324 | 8.4242e-77 / 4.2412e-73
α = 0.7 | 0 / 0 | 0 / 0 | 7.5324e-237 / 6.4256e-229 | 0 / 0 | 0 / 0 | 6.3242e-65 / 5.4224e-61
α = 0.6 | 0 / 0 | 0 / 0 | 8.5363e-185 / 6.4353e-178 | 0 / 0 | 0 / 1.000e-323 | 9.4245e-39 / 6.2345e-36
α = 0.5 | 0 / 0 | 0 / 0 | 9.5363e-166 / 7.5352e-161 | 0 / 0 | 0 / 0 | 4.2135e-34 / 1.2356e-33
α = 0.4 | 0 / 0 | 0 / 0 | 8.4256e-154 / 9.5324e-151 | 6.2616e-251 / 5.5247e-243 | 0 / 0 | 5.2214e-27 / 5.6241e-21
α = 0.3 | 0 / 0 | 0 / 3.78e-321 | 6.3245e-134 / 5.4242e-131 | 2.6343e-147 / 9.3570e-141 | 3.7503e-228 / 2.8574e-221 | 7.5213e-18 / 5.2134e-15
α = 0.2 | 2.1613e-236 / 6.6966e-220 | 9.7128e-104 / 4.7649e-100 | 5.3523e-67 / 6.3213e-60 | 1.5234e-74 / 2.0325e-68 | 3.9143e-73 / 5.1233e-69 | 5.6231e-12 / 7.5234e-07
α = 0.1 | 1.2583e-45 / 5.6267e-39 | 7.5296e-15 / 7.9333e-12 | 3.4525e-06 / 5.3256e-03 | 5.9892e-30 / 2.2309e-25 | 6.6300e-13 / 4.2332e-10 | 4.3241e-08 / 3.3413e-05
Table 6

Comparison between FQPSOs with different fractional-order on function 5–6.

Each cell lists Best / Mean over 50 runs.

Fractional order | f5: Dim=10, FEs=10000 | f5: Dim=30, FEs=30000 | f5: Dim=100, FEs=30000 | f6: Dim=10, FEs=10000 | f6: Dim=30, FEs=30000 | f6: Dim=100, FEs=30000
α = 1 | 9.397e-189 / 3.5943e-185 | 1.1027e-154 / 1.1027e-155 | 5.4324e-109 / 4.5632e-110 | 0.9950 / 1.5919 | 11.9395 / 16.0250 | 32.4524 / 56.3234
α = 0.9 | 0 / 0 | 0 / 0 | 6.4242e-134 / 5.6321e-130 | 1.7764e-15 / 0.1090 | 0.9950 / 6.6484 | 1.7432 / 4.5252
α = 0.8 | 0 / 0 | 0 / 0 | 3.4245e-129 / 5.6324e-126 | 0 / 0 | 0.4517 / 2.2933 | 0.9985 / 10.4245
α = 0.7 | 0 / 0 | 0 / 0 | 8.7432e-119 / 4.5256e-115 | 0 / 0 | 0.0297 / 2.4696 | 0.5943 / 3.4255
α = 0.6 | 0 / 0 | 0 / 0 | 6.5241e-89 / 5.3241e-88 | 0 / 0 | 0.5342 / 12.4468 | 1.4252 / 9.3245
α = 0.5 | 0 / 0 | 0 / 0 | 8.5352e-82 / 5.4232e-77 | 2.456e-08 / 0.01413 | 4.5314e-06 / 6.4245e-04 | 0.0425 / 50.4256
α = 0.4 | 0 / 0 | 0 / 0 | 9.4256e-74 / 6.6322e-72 | 0.1389 / 1.5126 | 6.6789 / 56.6533 | 11.3352 / 57.3241
α = 0.3 | 4.2206e-251 / 2.3626e-231 | 5.6014e-317 / 1.6254e-286 | 5.3214e-66 / 5.3242e-65 | 0.3100 / 0.3394 | 0.6789 / 3.3932 | 16.4252 / 54.1343
α = 0.2 | 1.5239e-120 / 2.9750e-102 | 2.4207e-99 / 1.2928e-77 | 1.2345e-49 / 2.4562e-46 | 1.6976e-10 / 2.1109e-09 | 10.2080 / 185.937 | 26.4952 / 192.345
α = 0.1 | 4.5632e-42 / 2.9225e-30 | 1.2071e-16 / 1.6344e-06 | 9.5224e-08 / 3.4521e-04 | 1.0415 / 16.0174 | 36.4428 / 66.9430 | 86.4211 / 211.245
In general, we can always find an appropriate fractional order so that the convergence accuracy of the algorithm is better than that of the integer-order algorithm on Group 1. In Fig 2, for f6, f7 and f8, the number of local minima increases dramatically as the dimension of the function increases. In this part, we mainly investigated the global search capability. f6 is the generalized Rastrigin function, one of the most widely used multimodal test functions for PSO algorithms, which tends to trap optimizers in local minima. By drawing on more orders to search the solution space, FQPSO obtains more favorable results than the compared algorithms. f7 is the Weierstrass function, which is continuous everywhere but differentiable nowhere. f8 is the Ackley function; according to the results in Table 7, the performance of FQPSO changes little with the variation of dimension and achieves the best results in each dimension. In short, FQPSO reaches the global optimum on 10 and 30 dimensions. In Fig 2 and Tables 6 and 7, it can be observed that, except for FQPSO with orders 0.1 and 0.2, FQPSO always achieves better results than QPSO. Meanwhile, for functions 6, 7 and 8, the convergence accuracies are better than QPSO when 0.3 ≤ α ≤ 0.9.
Table 8

Time consumption.

Algorithm | f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8
QPSO | 0.8123 | 0.9245 | 0.8155 | 0.8934 | 0.8688 | 0.9355 | 0.9642 | 0.9942
FQPSO | 0.8471 | 0.9334 | 0.8358 | 0.9548 | 0.8899 | 0.9674 | 0.9856 | 1.032
In summary, FQPSO has a superior ability to tackle multimodal functions compared with the other algorithms. We can always find an appropriate fractional order for which the algorithm has better convergence accuracy than the integer-order one on Group 2. Table 8 shows the time consumption of FQPSO and QPSO in solving the function optimization problems; the time unit is seconds. The experimental results also confirm that the fractional-order method consumes only slightly more time in each iteration and does not introduce a significant overhead.

Compare with other variants of PSO

In this experiment, the best results of the FQPSO methods were used for comparison with other variants of PSO, including PSO, QPSO, CCPSO, NPSO and MRPSO. The parameters of the compared algorithms were set according to the recommendations in their original papers. The maximum numbers of FEs were set to 10000 and 30000 for the 10-D and 30-D problems, respectively. All experiments were performed with a population size of 20. Tables 9 and 10 show the statistical results of the different algorithms on the unimodal functions. From the results in the previous subsection, FQPSO with D^0.8 obtained the best results on functions 1-3 and FQPSO with D^0.7 achieved the best results on functions 4-5; we fixed these orders to compare against the other variants of PSO. The results on these five unimodal functions suggest that FQPSO is better at fine-grained search than all the other algorithms, and the rapid convergence of FQPSO visible in Fig 3 supports this observation. In summary, FQPSO performs best among all the algorithms in solving unimodal functions.
Table 9

Comparison between different PSO algorithms on function 1–3.

Each cell lists Best / Mean / std over 50 runs.

Algorithm | f1: Dim=10, FEs=10000 | f1: Dim=30, FEs=30000 | f2: Dim=10, FEs=10000 | f2: Dim=30, FEs=30000 | f3: Dim=10, FEs=10000 | f3: Dim=30, FEs=30000
FQPSO | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 1.6812e-313 / 1.9619e-296 / 5.3432e-307 | 0 / 0 / 0 | 0 / 0 / 0
QPSO | 5.6324e-267 / 4.5321e-265 / 3.4523e-266 | 2.3453e-241 / 1.5837e-239 / 6.3245e-241 | 3.7568e-258 / 1.8379e-255 / 2.5431e-256 | 1.7033e-237 / 6.8877e-234 / 2.9832e-237 | 1.5e-256 / 8.1e-254 / 5.31e-256 | 1.01e-243 / 1.86e-241 / 7.543e-243
PSO | 7.764e-20 / 1.17e-20 / 6.3e-20 | 6.7954e-14 / 1.58e-13 / 4.17e-13 | 5.8742e-16 / 4.9000e-15 / 1.3864e-16 | 4.3257e-10 / 1.2264e-09 / 2.4987e-10 | 5.8734e-21 / 9.3854e-20 / 3.5467e-19 | 3.876e-12 / 2.086e-06 / 6.4303e-11
CCPSO | 3.2341e-97 / 1.2313e-95 / 2.8734e-97 | 7.4324e-86 / 6.0851e-84 / 5.3241e-85 | 1.2353e-20 / 8.3483e-20 / 5.9834e-20 | 9.3557e-16 / 1.9619e-13 / 5.7934e-15 | 6.3258e-43 / 8.5423e-42 / 1.5425e-43 | 5.2134e-35 / 3.6880e-33 / 7.3424e-34
NPSO | 3.4653e-53 / 5.2356e-52 / 2.3456e-53 | 5.3789e-38 / 4.8357e-36 / 9.2134e-37 | 3.4453e-22 / 9.5151e-18 / 2.3456e-21 | 5.2223e-13 / 1.2580e-11 / 9.9863e-13 | 1.7431e-14 / 3.4564e-14 / 2.6731e-14 | 9.3452e-09 / 7.2875e-06 / 8.4245e-08
MRPSO | 1.5677e-110 / 1.239e-109 / 8.2345e-110 | 5.8723e-97 / 6.2391e-93 / 6.3394e-97 | 6.3434e-61 / 3.5639e-48 / 9.5332e-51 | 4.9053e-45 / 4.4788e-44 / 7.3421e-44 | 3.4546e-85 / 9.4671e-84 / 2.3546e-85 | 3.4546e-85 / 9.4671e-84 / 2.3546e-85
Table 10

Comparison between different PSO algorithms on function 4–6.

Each cell lists Best / Mean / std over 50 runs.

Algorithm | f4: Dim=10, FEs=10000 | f4: Dim=30, FEs=30000 | f5: Dim=10, FEs=10000 | f5: Dim=30, FEs=30000 | f6: Dim=10, FEs=10000 | f6: Dim=30, FEs=30000
FQPSO | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0 | 7.0106e-06 / 1.4384 / 1.234
QPSO | 1.4622e-248 / 1.2011e-215 / 5.6231e-235 | 6.2412e-146 / 1.2203e-137 / 4.562e-137 | 9.397e-189 / 3.5943e-185 / 6.3423e-187 | 1.1027e-154 / 1.1027e-155 / 6.3453e-155 | 1.7764e-10 / 2.1453e-08 / 5.4214e-09 | 15.9395 / 16.0250 / 5.3453
PSO | 5.324e-13 / 1.5242e-11 / 2.584e-11 | 1.0000e-09 / 7.5267e-07 / 6.3324e-08 | 5.4234e-19 / 2.534e-18 / 3.4532e-19 | 5.324e-07 / 0.0571 / 0.000424 | 4.34e-06 / 3.2e-05 / 3.2e-05 | 7.3474 / 7.8363 / 1.3456
CCPSO | 1.324e-30 / 8.3453e-27 / 4.5356e-29 | 7.4352e-23 / 6.3257e-21 / 5.6485e-22 | 6.4234e-55 / 4.5313e-51 / 8.5423e-54 | 5.3256e-44 / 9.4075e-42 / 3.4578e-43 | 0.1234 / 0.1423 / 0.0542 | 2.9768 / 4.475 / 6.4246
NPSO | 5.6354e-09 / 4.3563e-07 / 2.3446e-09 | 7.5242e-07 / 1.8913e-04 / 4.6452e-06 | 6.4312e-183 / 4.5683e-180 / 9.4563e-182 | 4.5353e-175 / 2.2115e-169 / 3.4532e-172 | 0.34561 / 0.40593 / 0.5289 | 5.323 / 5.354 / 0.0034
MRPSO | 6.4232e-154 / 7.4323e-151 / 4.2456e-154 | 5.3133e-150 / 2.9806e-147 / 4.5624e-149 | 8.6431e-212 / 6.4331e-210 / 6.7456e-210 | 4.3534e-209 / 1.0762e-203 / 3.5356e-206 | 2.456e-08 / 4.355e-06 / 5.342e-07 | 4.567 / 4.678 / 0.543
Fig 3

Comparison between different PSO algorithms on Group 1.

(a) f1, (b) f2, (c) f3, (d) f4, (e) f5.

Fig 4

Comparison between different PSO algorithms on Group 2.

(a) f6, (b) f7, (c) f8.

Tables 10 and 11 and Fig 4 show the performances of different algorithms on Group 2.
Table 11

Comparison between different PSO algorithms on function 7–8.

Each cell lists Best / Mean / std over 50 runs.

Algorithm | f7: Dim=10, FEs=10000 | f7: Dim=30, FEs=30000 | f8: Dim=10, FEs=10000 | f8: Dim=30, FEs=30000
FQPSO | 7.1054e-15 / 9.9476e-15 / 8.4534e-14 | 0 / 0 / 0 | 7.6438e-34 / 5.3222e-29 / 7.4345e-33 | 1.6409e-15 / 5.1514e-15 / 3.4356e-15
QPSO | 1.9257e-35 / 1.8609e-33 / 4.3456e-34 | 2.5251e-26 / 3.6749e-21 / 5.4245e-26 | 1.7059e-19 / 6.8877e-20 / 7.4352e-19 | 7.9936e-15 / 7.9936e-15 / 0
PSO | 4.9016e-08 / 8.345e-06 / 5.4345e-07 | 41.34 / 5.043 / 10.413 | 5.6534e-14 / 6.4523e-13 / 4.5634e-14 | 5.4326e-05 / 7.5331e-04 / 4.5674e-05
CCPSO | 8.3079e-05 / 1.1100e-04 / 4.5634e-05 | 0.0035 / 3.8253 / 6.4231 | 6.5341e-19 / 3.2145e-18 / 5.6313e-19 | 7.4243e-09 / 4.4509e-08 / 6.4234e-09
NPSO | 1.7431e-14 / 3.4564e-14 / 2.6731e-14 | 3.4531e-05 / 0.0801 / 0.0004 | 3.4356e-22 / 5.3561e-19 / 5.5546e-22 | 5.3563e-13 / 8.4356e-13 / 6.4325e-13
MRPSO | 6.5183e-11 / 9.7154e-09 / 4.5623e-10 | 1.6486e-08 / 4.678e-05 / 6.3245e-07 | 5.6356e-27 / 1.2343e-24 / 3.4465e-26 | 6.4382e-14 / 9.3435e-14 / 7.4231e-14
From the results in the previous subsection, it can be noticed that FQPSO with D^0.9 obtained the best result on function 6, FQPSO with D^0.7 on function 7, and FQPSO with D^0.9 on function 8. We again fixed these orders to compare against the other variants of PSO. It can be seen that FQPSO obtains the global optimum on 10 and 30 dimensions and deals with multimodal functions better than the other algorithms. The results of the different PSO variants on 30 dimensions also support the conclusion that FQPSO is well suited to multimodal functions. In summary, FQPSO performs best among all the algorithms in solving both unimodal and multimodal functions.

Conclusion

Inspired by the properties of fractional calculus, we presented a novel QPSO algorithm incorporating a fractional calculus strategy based on the long-term memory and non-locality of fractional derivatives. The goal is not only to accelerate convergence but also to avoid local optima. Since the properties of fractional calculus allow the quantum particles in FQPSO to draw on their historical states during the iterations, the global search ability is significantly improved, and the convergence rate of the quantum particles also increases. As a result, the proposed FQPSO method achieves more favorable results than all the compared algorithms.