
Subpopulation Particle Swarm Optimization with a Hybrid Mutation Strategy.

Zixuan Xie, Xueyu Huang, Wenwen Liu.

Abstract

As real-world large-scale optimization problems grow more complex, they demand optimization algorithms that keep pace with them. Particle swarm optimization (PSO) is a proven tool for a wide range of optimization problems. Conventional PSO algorithms learn from two particles: the best position found by the current particle and the best position found by the whole swarm. This makes PSO simple to implement and easy to understand, but it has a serious defect: it is hard for it to find the global optimal solution quickly and accurately. To address these defects of standard PSO, this paper proposes a particle swarm optimization algorithm with a subpopulation-based hybrid mutation strategy (SHMPSO) (code available from https://gitee.com/mr-xie123234/code/tree/master/). SHMPSO adopts subpopulation coevolution: an elasticity-based candidate strategy finds a candidate and realizes information sharing and coevolution among the populations, while a mean dimension learning strategy makes the population converge faster and improves the solution accuracy. Twenty-one benchmark functions and six industry-recognized PSO variants are used to verify the advantages of SHMPSO. The experimental results show that SHMPSO has good convergence speed and robustness and can obtain high-precision solutions.
Copyright © 2022 Zixuan Xie et al.


Year:  2022        PMID: 35251160      PMCID: PMC8890841          DOI: 10.1155/2022/9599417

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Optimization algorithms have long been a popular research topic. Particle swarm optimization (PSO) is a population-based optimization algorithm first proposed by Eberhart and Kennedy in 1995 [1, 2]. PSO is inspired by the foraging behavior of bird flocks: it is a search algorithm in which particles cooperate with each other, and it belongs to the family of population-based intelligent optimization algorithms. Besides PSO, common intelligent algorithms include the differential evolution algorithm (DE) [3], ant colony optimization (ACO) [4], the artificial bee colony algorithm (ABC) [5], fast evolutionary programming (FEP) [6], and simulated annealing [7], with related work on neural networks [8], text clustering [9], resource allocation [10], and task allocation [11]. PSO has been widely adopted for its rapid convergence, strong robustness, and conceptual simplicity. Optimization algorithms have achieved significant results in many fields, including optimal tuning of type-1 and type-2 fuzzy controllers [12], optimal tuning of interval type-2 fuzzy controllers [13], scheduling [14], undergraduate systems-engineering curriculum optimization [15], improving the performance of FinFET devices [16], brain models of fear processing and conflict modulation [17], multiobjective dynamic optimization [18, 19], mixed-variable newsvendor problems [20], and feature selection [21]. Owing to these advantages [22], PSO itself has been applied to image processing [23], neural networks [24], feature selection [25], data clustering [26], and mixed-variable optimization problems (MVOPs) [27].
Although PSO has been extensively optimized and improved, some problems remain unsolved. In particular, as the problem dimension grows, PSO becomes highly prone to premature convergence, which makes it hard to escape local optima in the final stage of the search. To keep improving PSO, researchers have pursued four main directions: parameter adjustment, learning-strategy improvement, topology selection, and integration with other algorithms. These four directions are as follows. Parameter control includes setting the inertia weight, the acceleration coefficients, and the population size. Shi and Eberhart proposed a PSO algorithm with the inertia weight set to a constant value to balance the algorithm's global exploration and local exploitation abilities [28]; a constant inertia weight, however, greatly limits the potential search ability of PSO. Zhan et al. proposed the APSO algorithm, which achieves a stable balance between local and global search through adaptive control parameters [29]. Learning-strategy improvement includes comprehensive learning strategies, biogeography-based learning strategies, segment-based predominant learning strategies, neighborhood-based learning strategies, and dynamic neighborhood learning strategies. Yang et al. 
proposed the SPLSO algorithm, which uses segment-based predominant learning: the dimensions of each poorly performing particle are first divided into segments, and each segment then learns from a better-performing particle, allowing the algorithm to exploit the information of the better particles and avoid premature convergence [30]. Liang and Suganthan proposed the DMS-PSO algorithm with a dynamic neighborhood structure, in which each particle learns not only from its own subpopulation but also from other subpopulations [31]. Topology: a topology can effectively exploit neighborhood information among particles, though it ignores the global best information to some extent. Common topologies include ring, star, network, dynamic tree, and dynamic competition topologies. Li et al. proposed an adaptive PSO using a scale-free network topology; exploiting the power-law degree distribution of scale-free networks, the algorithm constructs a corresponding neighborhood for each particle [32]. Janson et al. constructed a dynamically changing tree topology in which each particle learns from its parent, so that the information of every particle is used effectively [33]. Hybrid algorithm integration: hybridizing PSO with other algorithms is one of its main research directions and can effectively improve performance. Zhang et al. proposed the DSPSO algorithm by combining a differential mutation operation with the SLPSO algorithm [34]. Valdez et al. proposed a new hybrid optimization approach that combines particle swarm optimization (PSO) and genetic algorithms (GAs), using fuzzy logic to integrate the results [35]. From the above analysis, an excellent PSO algorithm needs both strong local exploitation and strong global exploration capabilities. 
To achieve both, population diversity is essential, since it prevents premature convergence. Because every optimization problem is different, a single evolutionary strategy can hardly suit them all; the literature shows that combining several strategies improves the chance that the swarm converges to the global optimum. To adapt to more optimization problems, and inspired by the MPCPSO algorithm [36], this paper proposes a subpopulation cooperative particle swarm optimization algorithm with a joint strategy. The swarm is initialized and the fitness values of all particles are recorded; according to these fitness values, the swarm is divided into two populations, a dominant population (DP) and a poor population (PP). The two populations adopt different evolutionary strategies: the poor population uses a candidate learning strategy, and the dominant population uses mean dimension learning. Comparisons with other algorithms show that the combination of the two strategies is effective and yields accurate solutions. The rest of this paper is organized as follows: Section 2 introduces the related work. Section 3 describes the two learning strategies and SHMPSO in detail. Section 4 evaluates SHMPSO on twenty-one benchmark functions against six well-known PSO variants. Section 5 gives the conclusions.

2. Related Work

This section introduces the background on which this work builds. Our approach is inspired by two particle swarm optimization algorithms: the classic PSO algorithm and the biogeography-based learning PSO. Essentially all PSO variants are improvements on classic PSO.

2.1. Classic PSO

The classic particle swarm optimization algorithm is an optimization algorithm based on a whole population, in which each particle represents a possible solution. All particles update their positions according to their own historical best position and the global historical best position. In general, the search space of the swarm has D dimensions; the position vector of particle i at time t is X_i = (x_i^1, x_i^2, …, x_i^D), and its velocity vector is V_i = (v_i^1, v_i^2, …, v_i^D). The next-generation population is generated from these position and velocity vectors according to the update equations

v_i^d(t + 1) = ω·v_i^d(t) + c1·r1·(pbest_i^d − x_i^d(t)) + c2·r2·(gbest^d − x_i^d(t)),  (1)
x_i^d(t + 1) = x_i^d(t) + v_i^d(t + 1),  (2)

where ω is the inertia weight, c1 and c2 are acceleration coefficients, and r1 and r2 are random numbers uniformly distributed in [0, 1]. Equations (1) and (2), which update the velocity and the position respectively, are the core of standard PSO. Although PSO has seen many improvements in recent decades, there is still an algorithmic barrier that is hard to break through: the algorithm struggles to find the extremum in high-dimensional problems and easily falls into local optima. Many strategies have been proposed to address these difficulties; the comprehensive learning strategy and the biogeography-based learning strategy are introduced below.
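The classic update above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code; the function name `pso_step` and the sphere-function demo are our own, and the parameter values are the standard ones listed later in Table 2.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One velocity/position update of classic PSO (equations (1)-(2)).

    x, v   : (N, D) arrays of current positions and velocities
    pbest  : (N, D) array of personal best positions
    gbest  : (D,) array, global best position
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # equation (1)
    return x + v, v                                            # equation (2)

# one step of a 5-particle, 3-D swarm minimising the sphere function
rng = np.random.default_rng(0)
x = rng.uniform(-5.12, 5.12, (5, 3))
v = np.zeros((5, 3))
f = (x ** 2).sum(axis=1)
pbest, gbest = x.copy(), x[f.argmin()]
x, v = pso_step(x, v, pbest, gbest, rng=rng)
```

In a full optimizer this step is wrapped in a loop that re-evaluates fitness and updates `pbest`/`gbest` after every move.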

2.2. Particle Swarm Optimization Algorithm with the Comprehensive Learning Strategy

Liang et al. proposed the CLPSO algorithm, which differs from the conventional PSO introduced above [37]. CLPSO does not learn from the global best particle; instead, each dimension of a particle learns from the personal best position of an exemplar particle chosen from the swarm, with a learning probability Pc. This comprehensive learning scheme helps CLPSO avoid falling into local optima on some multimodal problems. The revised velocity update of CLPSO is

v_i^d = ω·v_i^d + c·r·(pbest_{f_i(d)}^d − x_i^d),  (3)

where ω is the inertia weight, c is an acceleration coefficient, and r is a random number between (0, 1). f_i(d) denotes, for dimension d, the index of the particle whose historical best position pbest guides particle i, and f_i = [f_i(1), f_i(2), …, f_i(D)] defines the whole exemplar vector of particle i. Equation (3) shows that each particle can learn from different particles in different dimensions. The core exemplar-selection procedure of CLPSO is as follows: generate a random number between (0, 1) and compare it with the learning probability Pc; if it is greater than Pc, the current particle i learns from its own best position in that dimension; otherwise, an exemplar is selected by comparing the fitness values of candidate particles' personal bests, and the winner replaces particle i's personal best in that dimension to guide the update. With this comprehensive learning strategy, the best position of any other particle can serve as a paradigm, and each dimension of a particle can potentially learn from a different paradigm, giving particles more learning paradigms and a larger potential flight space.
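The exemplar-construction step can be sketched as below. This is a simplified illustration under our own naming (`clpso_exemplar`); it uses a size-2 tournament for the exemplar choice and omits CLPSO's refreshing gap and the rule that at least one dimension must come from another particle.

```python
import numpy as np

def clpso_exemplar(i, pbest, pbest_fit, Pc, rng):
    """Build the per-dimension exemplar for particle i (CLPSO-style sketch).

    For each dimension: with probability Pc learn from the fitter of two
    randomly chosen particles' personal bests (tournament of size 2),
    otherwise keep particle i's own personal best in that dimension.
    """
    N, D = pbest.shape
    exemplar = pbest[i].copy()
    for d in range(D):
        if rng.random() < Pc:
            a, b = rng.choice(N, size=2, replace=False)
            winner = a if pbest_fit[a] < pbest_fit[b] else b  # minimisation
            exemplar[d] = pbest[winner, d]
    return exemplar

rng = np.random.default_rng(1)
pbest = rng.uniform(-1, 1, (8, 4))
pbest_fit = (pbest ** 2).sum(axis=1)
ex = clpso_exemplar(0, pbest, pbest_fit, Pc=0.5, rng=rng)
```

The exemplar `ex` then plays the role of pbest_{f_i(d)} in the velocity update of equation (3).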

2.3. Learning Particle Swarm Optimization Algorithm Based on Biogeography (BLPSO)

BLPSO improves on CLPSO by using biogeography-based migration: it is a particle swarm optimization algorithm with a new, biogeography-based learning strategy [38]. Every particle in the population updates by combining its own best position with the best positions of other particles through biogeographic migration. Each particle has an immigration rate and an emigration rate, denoted α and β respectively, and the exemplar is constructed as follows: sort the particles and compute the migration rates α and β of each sorted particle; then, to generate the exemplar index vector f_i = [f_i(1), f_i(2), …, f_i(D)], generate for each dimension a random number r. When r < α, an index j is selected by roulette-wheel selection with probabilities proportional to the emigration rates β, and j is assigned to f_i; otherwise, i is assigned to f_i. If the resulting exemplar equals particle i itself, a particle j ≠ i and a dimension l are chosen at random, and dimension l of j is assigned to f_i(l). In contrast to CLPSO, this ranking-based biogeography learning strategy lets particles learn more from particles with high-quality personal best positions, which effectively enhances the exploitation ability of the original CLPSO.
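The migration-based exemplar construction can be sketched as follows. This is an illustrative simplification under assumed names (`blpso_exemplar`) and the common linear rate model (immigration rate growing with rank, emigration rate shrinking with it); the original BLPSO may scale its rates differently.

```python
import numpy as np

def blpso_exemplar(i, pbest, ranks, rng):
    """Biogeography-style exemplar for particle i (BLPSO-style sketch).

    ranks[i] in 1..N, with 1 = best.  Worse-ranked particles immigrate
    more (higher alpha); better-ranked particles emigrate more (higher beta).
    """
    N, D = pbest.shape
    alpha = ranks / N          # immigration rates
    beta = 1.0 - ranks / N     # emigration rates
    probs = beta / beta.sum()  # roulette wheel over emigration rates
    exemplar = pbest[i].copy()
    for d in range(D):
        if rng.random() < alpha[i]:
            j = rng.choice(N, p=probs)
            exemplar[d] = pbest[j, d]
    if np.array_equal(exemplar, pbest[i]):  # degenerate exemplar: force one
        j = rng.choice([k for k in range(N) if k != i])  # foreign dimension
        l = rng.integers(D)
        exemplar[l] = pbest[j, l]
    return exemplar

rng = np.random.default_rng(2)
pbest = rng.uniform(-1, 1, (6, 4))
fit = (pbest ** 2).sum(axis=1)
ranks = fit.argsort().argsort() + 1   # 1 = best (smallest fitness)
ex = blpso_exemplar(3, pbest, ranks, rng)
```

Note how the roulette wheel biases each borrowed dimension toward high-quality personal bests, which is the exploitation boost described above.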

3. Particle Swarm Optimization Algorithm with the Hybrid Strategy of Seed Swarm Optimization

This section introduces the particle swarm optimization algorithm with the hybrid strategy. Section 3.1 introduces the mean dimension learning strategy, Section 3.2 introduces the elasticity-based candidate generation strategy, and Section 3.3 gives the overall framework of the algorithm. At the end of the section, the whole SHMPSO procedure is presented as pseudocode.

3.1. Learning Strategy of Mean Dimension

Subpopulation-based particle swarm optimization has existed for a long time and has performed well on a number of problems. CMPSODMO, proposed by Liu et al. in 2017, handles multiobjective dynamic optimization problems in complex, changing environments and uses a multiswarm PSO framework to optimize in a dynamic environment [39], achieving excellent results. FTPSO, proposed by Yazdani et al. in 2013, explicitly uses the advantages of multiple swarms to cover multiple peaks of a multiobjective problem in a dynamic space [40]. TSLPSO, proposed by Xu et al. in 2019 and an inspiration for this work, adopts two populations and iterates them with different strategies in the search space, obtaining significant results [41]. Yen and Daneshyari proposed a method for exchanging information among multiple swarms [42]. In the SHMPSO algorithm proposed in this paper, a population is first initialized and then divided into two populations by a fitness-ranking mechanism with ascending order. In each iteration, the particles with smaller fitness values form the dominant population and the remaining particles form the poor population; the proportion of dominant particles is controlled by a scale coefficient s. The dominant population is optimized with the mean dimension learning strategy, while the poor population learns with the elasticity-based candidate generation strategy. Applying different evolutionary strategies to different populations effectively preserves population diversity and avoids falling into local optima, while the dominant population guides the search direction of the whole swarm and helps it converge to the solution quickly. 
PSO also exhibits a peculiar phenomenon called "spiral rise," that is, "two steps forward and one step back" [43]: although the fitness of a particle improves, a minority of its components actually get worse. Overcoming this would normally require frequent changes to the evaluation function. Moreover, almost all PSO variants converge slowly in the later stage of a run. In SHMPSO, all particles learn from the dominant population to achieve faster convergence. Classic PSO uses two guiding particles, the particle's historical best and the global historical best, but many algorithms use only one guiding particle, such as the CLPSO and BLPSO discussed in Section 2. The advantage is a velocity update formula with fewer parameters that is easy to understand; the difficulty lies in how to construct the guide particle. For the dominant population, this paper uses a single guide particle, as follows. When a particle of the dominant population updates its velocity, it is influenced only by the mbest generated by comprehensive dimension learning:

v_i^d(t + 1) = ω·v_i^d(t) + c·r·(mbest^d − x_i^d(t)),  (4)

where ω, c, and r have the same meanings as in equation (3). Inspired by MPCPSO, mbest is obtained from equation (5); its purpose is to help particles escape local optima. As equation (4) shows, mbest is the only learning paradigm used to guide particle motion, so if the algorithm falls into a local extremum, mbest seldom actively helps particles find better solutions. 
In mean dimension learning, each particle learns not only from other particles but also from other related dimensions, which greatly widens the scope of neighborhood learning. In equation (5), D denotes the dimension of the particles, r is a random number in [0, 1], and N is the total number of particles in the dominant population; ρ is a dynamically defined value given by equation (6). The position update is given by equation (7), in which φ is a dynamic parameter defined by equation (8), where N1 is the current size of the dominant population, ave1 is the average fitness of the dominant population after its first iteration, and iter is the current iteration number. Based on this iterative updating, the dominant population evolves in a good direction, and equation (7) shows that the convergence of the dominant population is greatly accelerated. With faster convergence, however, mean dimension learning easily falls into local optima. To avoid this, a differential mutation operator is introduced to increase population diversity [44, 45]: two particles are randomly selected from the dominant population, with indices different from the current particle; the difference of these two particles forms a differential vector, which is scaled by the mutation factor F and added to the global optimum:

X_i(t + 1) = P_g(t) + F·(X_a(t) − X_b(t)),  (9)

where P_g is the global optimal position of the current population, F is the mutation coefficient, and a and b are the indices of the randomly selected particles, satisfying i ≠ a ≠ b. 
This mutation operation not only improves the search ability of the algorithm within the dominant swarm but also expands diversity during the population search. Algorithm 1 lists the pseudocode of mean dimension learning.
Algorithm 1

Mean learning strategy.
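The differential mutation step described above can be sketched as follows. This is an illustrative sketch (the function name `differential_mutation` is ours): each dominant particle is replaced by the global best shifted by a scaled difference of two other randomly chosen dominant particles, in the style of equation (9).

```python
import numpy as np

def differential_mutation(dp, gbest, F=0.5, rng=None):
    """Differential mutation over a dominant population (equation (9) style).

    dp    : (N, D) array, positions of the dominant population (N >= 3)
    gbest : (D,) array, global best position P_g
    F     : scaling (mutation) factor
    """
    rng = rng or np.random.default_rng()
    N = len(dp)
    out = np.empty_like(dp)
    for i in range(N):
        # pick two distinct indices a, b, both different from i
        a, b = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
        out[i] = gbest + F * (dp[a] - dp[b])  # P_g + F * (X_a - X_b)
    return out

rng = np.random.default_rng(4)
dp = rng.uniform(-1, 1, (5, 3))
mutated = differential_mutation(dp, dp[0], F=0.5, rng=rng)
```

The difference vector injects population-scaled noise around P_g, which is what restores diversity when mean dimension learning starts to stagnate.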

3.2. Candidate Generation Strategy Based on Elasticity

The poor population uses the elasticity-based candidate generation strategy. SHMPSO evolves its populations with different strategies, which favors rapid convergence without losing population diversity. The key question is how to generate the candidate, which is the core of the whole strategy. Here the velocity update changes: instead of learning from the global optimum P_g as in traditional PSO, a candidate is introduced and learned from:

v_i^d(t + 1) = ω·v_i^d(t) + c1·r1·(pbest2_i^d − x_i^d(t)) + c2·r2·(Candidate^d − x_i^d(t)),  (10)

where ω, c1, c2, r1, and r2 have the same meanings as in equation (1), and the vector pbest2_i is the historical best of the current particle in the poor population. The Candidate is generated as follows. Inspired by the elastic force produced by compressing a spring, an elasticity coefficient prob is introduced; for the poor population it is set to prob = 0.5, a user-defined parameter. Like a spring, the mechanism can stretch or compress. Particles are sorted in ascending order of fitness, so larger values mean worse particles. A random number r in [0, 1] is generated; if the elasticity coefficient prob is greater than r, the Candidate is generated by an equation in which the vector gbest1 is the global best position of the dominant population, N is the number of particles, and stepsize is a D-dimensional vector obtained by equation (17) and generated by Levy flight. Levy flight is introduced below: generating a Levy-flight random number involves two parts, choosing a random direction and drawing from the Levy distribution. The random walk derives from the Levy stable distribution. 
This distribution follows a simple power law:

L(s) ∼ |s|^(−1−β),  0 < β < 2,

where β is an index.

Definition 1.

Mathematically, the Levy distribution is defined by

L(s, γ, μ) = sqrt(γ/(2π)) · exp(−γ/(2(s − μ))) / (s − μ)^(3/2),  0 < μ < s < ∞,

where the parameter μ is a control (location) parameter and γ > 0 is the scale parameter, which controls the scale of the distribution.

Definition 2.

Usually, the Levy distribution is defined through its Fourier transform:

F(k) = exp(−α|k|^β),

where α is a parameter in the interval [−1, 1], usually called the skewness or scale factor, and the stability index β, controlled within (0, 2), is commonly referred to as the Levy index. For most values of β, no analytical form of the inverse transform is known. For the random walk, the step S can be calculated by Mantegna's equation:

S = u / |v|^(1/β),  (13)

where u and v in equation (13) obey normal distributions,

u ∼ N(0, σ_u²),  v ∼ N(0, 1),  σ_u = { Γ(1 + β)·sin(πβ/2) / [Γ((1 + β)/2)·β·2^((β−1)/2)] }^(1/β).

The step size can then be calculated from S, where the factor 0.01 comes from the typical step factor L/100, L being a typical length scale; otherwise, the Levy flight would become too aggressive and jump out of the design space, wasting evaluations. When the elasticity factor prob < r, in order to ensure population diversity, two particles m and n are selected from the dominant population; these two particles and the global optimal particle P_g are required to be different from each other. The fitness values of X_m and X_n are compared, and the particle with the smaller fitness value is selected. The Candidate is then obtained by the corresponding generation equation, where stepsize and N have the same meanings as in (12). In this way the candidate improves the diversity of the population. Algorithm 2 lists the pseudocode of the elasticity-based candidate generation strategy.
Algorithm 2

Strategy of candidate generation based on elasticity.
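Mantegna's sampling scheme above can be sketched directly. This is an illustrative sketch under our own names (`levy_step`, `step_size`); the 0.01·S·(X − gbest) scaling follows the L/100 convention described in the text, as popularized by cuckoo-search-style updates, and is an assumption about the exact form used in the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(D, beta=1.5, rng=None):
    """Draw a D-dimensional Levy-flight step via Mantegna's algorithm:
    S = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, D)
    v = rng.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1 / beta)

def step_size(x, gbest, beta=1.5, rng=None):
    """Scale the raw Levy step by the 0.01 (= L/100) factor and by the
    distance to the global best, keeping jumps inside the design space."""
    return 0.01 * levy_step(len(x), beta, rng) * (x - gbest)

s = levy_step(1000, rng=np.random.default_rng(3))
st = step_size(np.ones(3), np.zeros(3), rng=np.random.default_rng(5))
```

The heavy tail of S produces occasional long jumps (exploration) amid many small moves (exploitation), which is exactly why the candidate strategy uses it.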

3.3. Overall Framework of the SHMPSO Algorithm

Based on the improved strategies of Sections 3.1 and 3.2, SHMPSO is constructed. The specific steps of the SHMPSO algorithm are as follows:
Step 1: initialize the population and set the parameters: mutation factor F = 0.5, population proportion scale factor s = 0.5, and local-optimum stagnation threshold M = 6.
Step 2: calculate the fitness value f(X) of all particles and the global optimal value P_g.
Step 3: sort the particles in ascending order of fitness; take the first s·N particles (N being the swarm size) as the dominant population (DP) and the remaining particles as the poor population (PP).
Step 4: optimize the poor population with the elasticity-based candidate strategy, using equations (2) and (10), and optimize the dominant population with mean dimension learning, using equations (4) and (7).
Step 5: when the number of times the swarm has fallen into a local optimum exceeds M, use equation (9) to update the positions of particles in the dominant population (DP).
Step 6: recalculate the fitness values of all particles and update the current global optimal value P_g.
Step 7: repeat Steps 3-6 until the maximum number of iterations is reached.
The algorithm flow chart of SHMPSO is shown in Figure 1. The algorithm mainly uses global search to find a better search space: it starts from the global picture, finds the current global optimal position, and can converge to the global optimal solution faster. The general framework of the algorithm is given in Algorithm 3.
Figure 1

Flow chart of the SHMPSO algorithm.

Algorithm 3

Grouping-mixed-based particle swarm optimization algorithm.
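Steps 1-7 can be outlined as a minimal loop. This is a high-level sketch only: the per-population update rules below are simplified stand-ins (the dominant population drifts toward the swarm mean in the spirit of mean dimension learning, the poor population toward the global best in place of the candidate), and the paper's exact equations (4)-(17), stagnation counter, and mutation step are not reproduced.

```python
import numpy as np

def shmpso_outline(f, D, N=40, max_iter=200, s=0.5, seed=0):
    """High-level skeleton of the SHMPSO loop (Steps 1-7), with
    simplified stand-in update rules for the two subpopulations."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (N, D))            # Step 1: initialize
    v = np.zeros((N, D))
    fit = np.apply_along_axis(f, 1, x)        # Step 2: evaluate
    gbest = x[fit.argmin()].copy()
    n_dp = int(s * N)
    w, c = 0.729, 1.49445
    for _ in range(max_iter):
        order = fit.argsort()                 # Step 3: ascending fitness
        dp, pp = order[:n_dp], order[n_dp:]   # dominant / poor split
        mbest = x[dp].mean(axis=0)            # stand-in mean-dimension guide
        r = rng.random((N, D))
        v[dp] = w * v[dp] + c * r[dp] * (mbest - x[dp])   # Step 4 (DP)
        v[pp] = w * v[pp] + c * r[pp] * (gbest - x[pp])   # Step 4 (PP)
        x += v
        fit = np.apply_along_axis(f, 1, x)    # Step 6: re-evaluate
        if fit.min() < f(gbest):
            gbest = x[fit.argmin()].copy()    # update global best
    return gbest, f(gbest)                    # Step 7 ends the loop

best, val = shmpso_outline(lambda z: float((z ** 2).sum()), D=5)
```

Because the global best is only ever replaced by a strictly better point, the returned value is monotonically non-increasing over iterations, which mirrors the elitist behavior of the full algorithm.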

4. Experiment

In this section, twenty-one widely used benchmark functions are adopted to verify the reliability and efficiency of the proposed SHMPSO, and the results are validated by comparing SHMPSO with other PSO variants. The experimental process is as follows.

4.1. Benchmark Function and Parameter Setting

The twenty-one benchmark functions listed in Table 1 are used to demonstrate the performance of SHMPSO. In Table 1, the first column gives the function number, the second the function name, the third the mathematical expression, the fourth the search range, and the fifth the minimum value of the function. The tested functions include 11 unimodal functions (f1, f2, f4-f12) and 10 multimodal functions (f3, f13-f21). The optimal value of every tested benchmark function is 0. For comparison, six PSO variants are selected: inertia weight PSO [40], ACPSO [46], SLPSO [47], CLPSO [37], BLPSO [38], and MPCPSO [36].
Table 1

Twenty-one benchmark functions.

Function | Function name | Test function | Search range | f_min
f1 | Brown | ∑_{i=1}^{n−1} (x_i²)^(x_{i+1}² + 1) + (x_{i+1}²)^(x_i² + 1) | [−1, 4] | 0
f2 | Exponential | −exp(−0.5·∑_{i=1}^{n} x_i²) | [−1, 2] | 0
f3 | Griewank | 1 + ∑_{i=1}^{n} x_i²/4000 − ∏_{i=1}^{n} cos(x_i/√i) | [−600, 600] | 0
f4 | Ridge | x_1 + d·(∑_{i=2}^{n} x_i²)^α, d = 1, α = 0.5 | [−5, 5] | 0
f5 | Schwefel 2.20 | ∑_{i=1}^{n} |x_i| | [−100, 100] | 0
f6 | Schwefel 2.21 | max_{i=1,…,n} |x_i| | [−100, 100] | 0
f7 | Schwefel 2.22 | ∑_{i=1}^{n} |x_i| + ∏_{i=1}^{n} |x_i| | [−100, 100] | 0
f8 | Schwefel 2.23 | ∑_{i=1}^{n} x_i^10 | [−10, 10] | 0
f9 | Sphere | ∑_{i=1}^{n} x_i² | [−5.12, 5.12] | 0
f10 | Sum squares | ∑_{i=1}^{n} i·x_i² | [−10, 10] | 0
f11 | Xin-She Yang N.3 | exp(−∑_{i=1}^{n} (x_i/β)^(2m)) − 2·exp(−∑_{i=1}^{n} x_i²)·∏_{i=1}^{n} cos²(x_i) | [−2π, 2π] | 0
f12 | Zakharov | ∑_{i=1}^{n} x_i² + (0.5·∑_{i=1}^{n} i·x_i)² | [−5, 10] | 0
f13 | Ackley 1 | 20 + e − 20·exp(−0.2·√((1/n)·∑_{i=1}^{n} x_i²)) − exp((1/n)·∑_{i=1}^{n} cos(2πx_i)) | [−35, 35] | 0
f14 | Alpine 1 | ∑_{i=1}^{n} |x_i·sin(x_i) + 0.1·x_i| | [0, 10] | 0
f15 | Happy Cat | ((∑_{i=1}^{n} x_i² − n)²)^α + (1/n)·(0.5·∑_{i=1}^{n} x_i² + ∑_{i=1}^{n} x_i) + 0.5, α = 0.5 | [−2, 2] | 0
f16 | Periodic | 1 + ∑_{i=1}^{n} sin²(x_i) − 0.1·exp(−∑_{i=1}^{n} x_i²) | [−10, 10] | 0
f17 | Rastrigin | 10n + ∑_{i=1}^{n} [x_i² − 10·cos(2πx_i)] | [−5.12, 5.12] | 0
f18 | Xin-She Yang N.2 | (∑_{i=1}^{n} |x_i|)·exp(−∑_{i=1}^{n} sin(x_i²)) | [−2π, 2π] | 0
f19 | Schwefel 1.2 | ∑_{i=1}^{n} (∑_{j=1}^{i} x_j)² | [−100, 100] | 0
f20 | Step 2 | ∑_{i=1}^{n} (⌊x_i + 0.5⌋)² | [−100, 100] | 0
f21 | Penalized 2 | 0.1·{sin²(3πx_1) + ∑_{i=1}^{n−1} (x_i − 1)²·[1 + sin²(3πx_{i+1})] + (x_n − 1)²·[1 + sin²(2πx_n)]} + ∑_{i=1}^{n} u(x_i, 5, 100, 4), where u(x_i, 5, 100, 4) = 100·(x_i − 5)^4 if x_i > 5; 0 if −5 ≤ x_i ≤ 5; 100·(−x_i − 5)^4 if x_i < −5 | [−50, 50] | 0

4.2. Selection of the Population Proportion Coefficient

In order to obtain a fair comparison, all parameters used in the experiments are the same, including the maximum number of independent runs (runNumber), the number of evaluations (maxFEs), and the maximum number of iterations (maxgen), where N denotes the number of particles in the population. All comparison algorithms were tested on the twenty-one benchmark functions, and the mean and standard deviation over 30 runs were recorded. The parameter settings of all PSO variants are given in Table 2. According to the experimental results, all functions are ranked and compared. To determine the proportion of the dominant population in the SHMPSO algorithm, the parameter s is tested; the experimental results are as follows.
Table 2

Parameter settings of different PSO variants.

AlgorithmParameter setting
PSO ω=0.729, c1=c2=1.49445, Vmax=0.2 × range
ACPSO ω=0.9 ~ 0.4, c1=c2=1.49445, Vmax=0.2 × range,alpha=0.1; beta=0.1
SLPSO m=M+floor(D/100), c=D/M∗0.01, M=100
CLPSO ω=0.9 ~ 0.2, gamp=5, c=1.49445, Vmax=0.2 × range
BLPSO ω=0.9 ~ 0.2, gamp=5, I=E=1, c=1.49445, Vmax=0.2 × range
MPCPSO ω=0.729, c1=c2=1.49445, F=0.5, t=0.5, Vmax=0.2 × range
SHMPSO ω=0.729, c1=c2=1.49445, F=0.5, s=0.5, prob=0.5, Vmax=0.2 × range
The overall analysis of Table 3 and the per-function rankings show that the population optimization effect is best when s = 0.5, that is, when the dominant population accounts for 50% of the swarm. All the following comparisons are based on a population scale coefficient of 0.5.
Table 3

Selection of parameter s.

s = 0.1 s = 0.2 s = 0.3 s = 0.4 s = 0.5 s = 0.6 s = 0.7
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 1 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean1.00E + 001.00E + 001.00E + 001.00E + 001.00E + 001.00E + 001.00E + 00
f 2 Std.6.78E – 166.78E − 166.78E – 166.78E − 166.78E − 166.78E − 166.78E − 16
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 3 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean4.70E – 031.34E − 053.38E – 043.10E − 132.28E − 2162.80E − 2023.74E − 193
f 4 Std.2.57E – 025.10E − 051.80E – 031.70E − 120.00E + 000.00E + 000.00E + 00
Rank7564123
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 5 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 6 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 7 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 8 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 9 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 10 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank0111111
Mean1.00E + 001.00E + 001.00E + 001.00E + 001.00E + 001.00E + 001.00E + 00
f 11 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 12 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean8.88E – 168.88E − 168.88E – 168.88E − 168.88E − 168.88E − 168.88E − 16
f 13 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 14 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean1.07E + 004.47E − 014.51E – 015.90E − 012.19E − 011.01E + 001.75E − 01
f 15 Std.7.80E – 012.73E − 014.98E – 011.43E + 001.59E − 013.47E + 001.20E − 01
Rank7345261
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 16 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 17 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean6.18E – 063.71E − 064.44E – 077.78E − 072.57E − 073.67E − 082.49E − 07
f 18 Std.7.85E – 067.21E − 061.54E – 063.19E − 066.04E − 071.10E − 071.34E − 06
Rank7645312
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 19 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
f 20 Std.0.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 000.00E + 00
Rank1111111
Mean2.62E + 002.89E + 003.10E + 002.81E + 002.85E + 002.83E + 002.96E + 00
f 21 Std.6.76E – 016.23E − 011.60E + 004.34E − 012.98E − 015.88E − 012.03E − 01
Rank1572436
Average rank1.861.711.811.571.291.381.38
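The "Average rank" row can be reproduced by ranking the candidate settings on each benchmark function (ties share the best rank) and then averaging per column. The helper names below are our own, not code from the paper:

```python
def rank_row(values):
    """Competition ranking of one function's results: 1 + count of strictly smaller values.
    Equal values (ties) therefore share the same, best possible rank."""
    return [1 + sum(v < u for v in values) for u in values]

def average_ranks(rows):
    """Average the per-function ranks column-wise over all benchmark rows."""
    n = len(rows[0])
    totals = [0] * n
    for row in rows:
        for i, r in enumerate(rank_row(row)):
            totals[i] += r
    return [t / len(rows) for t in totals]
```

For example, two functions where all settings tie on the first and differ on the second give average ranks of [1.0, 1.5, 2.0] for three settings with results [0, 0, 0] and [1, 2, 3].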

4.3. Comparison of Experimental Data between SHMPSO and Other PSO Variants

The given test functions are evaluated in 30, 50, and 100 dimensions. SLPSO defines its own total population size; all other algorithms are tested with a population size of N = 100. When the particle dimension is 30, maxgen is set to 3 × 10^3 and the number of function evaluations maxFEs to 3 × 10^5; when the dimension is 50, maxgen is 5 × 10^3 and maxFEs is 5 × 10^5; when the dimension is 100, maxgen is 1 × 10^4 and maxFEs is 1 × 10^6. Tables 4 and 5 show that the performance of the proposed SHMPSO is stable in 30 and 50 dimensions, with similar results in both, so SHMPSO converges well in both settings. On f1, f3, f5–f10, f12, f14, f16, f17, f19, and f20 (14 functions in total), the strategy proposed in this paper finds the optimal solution, which shows that it applies well to these functions. Of course, the results on f15 and f21 worsen as the dimension increases, so the optimization of some hard-to-optimize multimodal functions still needs improvement, and on f18 SHMPSO is not as good as CLPSO, BLPSO, and SLPSO. Nevertheless, over all 21 test functions the average rank of SHMPSO is the best, at 1.61 and 1.67, respectively. This analysis indicates that SHMPSO outperforms the other six comparison algorithms.
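The evaluation budget described above can be laid out as a small lookup table. This is our own structuring of the stated settings, not code from the paper's repository; with a swarm size of N = 100, maxFEs = N × maxgen holds for every dimensionality used.

```python
N = 100  # population size used for all variants except SLPSO

# D: (maxgen, maxFEs) as stated in the experimental setup
BUDGET = {
    30:  (3 * 10**3, 3 * 10**5),
    50:  (5 * 10**3, 5 * 10**5),
    100: (1 * 10**4, 1 * 10**6),
}

# Sanity check: the generation and evaluation budgets are mutually consistent.
assert all(fes == N * gen for gen, fes in BUDGET.values())
```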
Table 4

Comparison of results of benchmark functions on various PSO variants (30-D).

Function | Metric | PSO | CLPSO | BLPSO | ACPSO | SLPSO | MPCPSO | SHMPSO
f1 | Mean | 1.34E+01 | 9.66E-11 | 5.10E-30 | 1.00E-11 | 2.08E-140 | 0.00E+00 | 0.00E+00
f1 | Std. | 1.47E+01 | 3.51E-11 | 4.15E-30 | 1.27E-11 | 3.39E-140 | 0.00E+00 | 0.00E+00
f1 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f2 | Mean | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
f2 | Std. | 6.78E-16 | 9.38E-13 | 4.65E-16 | 0.00E+00 | 6.78E-16 | 6.78E-16 | 6.78E-16
f2 | Rank | 3 | 7 | 2 | 1 | 3 | 3 | 3
f3 | Mean | 1.05E+00 | 1.89E-07 | 0.00E+00 | 5.48E-02 | 5.75E-04 | 0.00E+00 | 0.00E+00
f3 | Std. | 6.97E-02 | 1.23E-07 | 0.00E+00 | 3.27E-02 | 2.21E-03 | 0.00E+00 | 0.00E+00
f3 | Rank | 7 | 4 | 1 | 6 | 5 | 1 | 1
f4 | Mean | 0.00E+00 | 2.66E-08 | 5.58E-07 | 8.65E-14 | 1.10E-06 | 3.49E-18 | 5.98E-184
f4 | Std. | 0.00E+00 | 4.12E-08 | 8.24E-07 | 1.22E-13 | 1.72E-06 | 1.91E-17 | 0.00E+00
f4 | Rank | 1 | 5 | 6 | 4 | 7 | 3 | 2
f5 | Mean | 3.13E+01 | 8.20E-05 | 1.24E-16 | 1.50E-04 | 2.70E-70 | 0.00E+00 | 0.00E+00
f5 | Std. | 1.44E+01 | 1.70E-05 | 6.60E-17 | 9.56E-05 | 3.09E-70 | 0.00E+00 | 0.00E+00
f5 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f6 | Mean | 1.50E+01 | 1.17E+01 | 6.12E-01 | 5.21E-03 | 1.14E-37 | 0.00E+00 | 0.00E+00
f6 | Std. | 3.55E+00 | 9.31E-01 | 2.62E-01 | 4.08E-03 | 1.09E-37 | 0.00E+00 | 0.00E+00
f6 | Rank | 7 | 6 | 5 | 4 | 3 | 1 | 1
f7 | Mean | 2.10E+02 | 1.17E-04 | 1.49E-16 | 1.70E-04 | 3.17E-69 | 0.00E+00 | 0.00E+00
f7 | Std. | 1.38E+02 | 2.44E-05 | 8.02E-17 | 6.14E-05 | 2.02E-69 | 0.00E+00 | 0.00E+00
f7 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f8 | Mean | 5.55E+00 | 3.54E-28 | 1.54E-47 | 7.56E-49 | 0.00E+00 | 0.00E+00 | 0.00E+00
f8 | Std. | 1.17E+01 | 6.44E-28 | 5.49E-47 | 2.67E-48 | 0.00E+00 | 0.00E+00 | 0.00E+00
f8 | Rank | 7 | 6 | 5 | 4 | 1 | 1 | 1
f9 | Mean | 0.00E+00 | 2.62E-11 | 2.75E-23 | 1.50E-11 | 0.00E+00 | 0.00E+00 | 0.00E+00
f9 | Std. | 0.00E+00 | 9.72E-12 | 3.65E-23 | 1.89E-11 | 0.00E+00 | 0.00E+00 | 0.00E+00
f9 | Rank | 1 | 7 | 5 | 6 | 1 | 1 | 1
f10 | Mean | 6.40E+01 | 2.97E-09 | 5.47E-27 | 9.22E-10 | 1.07E-138 | 0.00E+00 | 0.00E+00
f10 | Std. | 3.67E+01 | 1.08E-09 | 5.16E-27 | 9.09E-10 | 1.81E-138 | 0.00E+00 | 0.00E+00
f10 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f11 | Mean | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
f11 | Std. | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f11 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1
f12 | Mean | 6.44E+01 | 1.18E+01 | 4.00E-05 | 1.77E-05 | 2.11E+00 | 0.00E+00 | 0.00E+00
f12 | Std. | 3.60E+01 | 2.35E+00 | 4.30E-05 | 2.46E-05 | 2.27E+00 | 0.00E+00 | 0.00E+00
f12 | Rank | 7 | 6 | 4 | 3 | 5 | 1 | 1
f13 | Mean | 8.53E+00 | 1.48E-04 | 2.95E-14 | 1.88E-05 | 5.74E-15 | 8.88E-16 | 8.88E-16
f13 | Std. | 9.51E-01 | 2.87E-05 | 1.79E-14 | 9.74E-06 | 1.23E-15 | 0.00E+00 | 0.00E+00
f13 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f14 | Mean | 7.33E+00 | 5.68E-04 | 2.16E-10 | 1.54E-05 | 5.00E-17 | 0.00E+00 | 0.00E+00
f14 | Std. | 2.56E+00 | 1.13E-04 | 5.15E-10 | 9.12E-06 | 1.31E-16 | 0.00E+00 | 0.00E+00
f14 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f15 | Mean | 2.89E-01 | 5.33E-02 | 6.84E-03 | 8.89E-03 | 4.42E-02 | 7.36E+00 | 2.19E-01
f15 | Std. | 1.01E-01 | 6.61E-03 | 1.55E-03 | 3.70E-03 | 9.95E-03 | 1.03E+01 | 1.59E-01
f15 | Rank | 6 | 4 | 1 | 2 | 3 | 7 | 5
f16 | Mean | 5.43E-01 | 1.01E-01 | 1.00E-01 | 1.00E-01 | 1.00E-01 | 0.00E+00 | 0.00E+00
f16 | Std. | 2.53E-01 | 3.44E-04 | 2.94E-06 | 3.21E-06 | 1.13E-16 | 0.00E+00 | 0.00E+00
f16 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f17 | Mean | 6.91E+01 | 1.14E-04 | 0.00E+00 | 2.48E-07 | 1.34E+01 | 0.00E+00 | 0.00E+00
f17 | Std. | 1.20E+01 | 4.23E-05 | 0.00E+00 | 6.30E-07 | 3.87E+00 | 0.00E+00 | 0.00E+00
f17 | Rank | 7 | 5 | 1 | 4 | 6 | 1 | 1
f18 | Mean | 6.27E-11 | 3.52E-12 | 3.53E-12 | 3.51E-12 | 8.54E-12 | 9.64E-05 | 2.57E-07
f18 | Std. | 4.87E-11 | 1.33E-15 | 8.58E-14 | 5.05E-19 | 1.45E-12 | 2.81E-04 | 6.04E-07
f18 | Rank | 5 | 2 | 3 | 1 | 4 | 7 | 6
f19 | Mean | 1.68E+03 | 2.38E+03 | 1.33E+01 | 7.40E-02 | 1.72E-11 | 0.00E+00 | 0.00E+00
f19 | Std. | 1.09E+03 | 4.68E+02 | 6.45E+00 | 1.07E-01 | 2.36E-11 | 0.00E+00 | 0.00E+00
f19 | Rank | 6 | 7 | 5 | 4 | 3 | 1 | 1
f20 | Mean | 5.78E+02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f20 | Std. | 2.18E+02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f20 | Rank | 7 | 1 | 1 | 1 | 1 | 1 | 1
f21 | Mean | 8.23E+01 | 3.95E-08 | 2.71E-26 | 3.95E+07 | 1.35E-32 | 2.91E+00 | 2.82E+00
f21 | Std. | 9.32E+01 | 1.50E-08 | 1.97E-26 | 2.16E+08 | 5.57E-48 | 1.30E-01 | 3.37E-01
f21 | Rank | 6 | 3 | 2 | 7 | 1 | 5 | 4
Average rank | | 6 | 4.95 | 2.73 | 4.05 | 3.1 | 1.95 | 1.71
Table 5

Comparison of results of benchmark functions on various PSO variants (50-D).

Function | Metric | PSO | CLPSO | BLPSO | ACPSO | SLPSO | MPCPSO | SHMPSO
f1 | Mean | 4.38E+01 | 8.38E-11 | 8.18E-35 | 3.50E-12 | 8.09E-160 | 0.00E+00 | 0.00E+00
f1 | Std. | 2.67E+01 | 2.48E-11 | 8.13E-35 | 2.59E-12 | 1.44E-159 | 0.00E+00 | 0.00E+00
f1 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f2 | Mean | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
f2 | Std. | 2.26E-16 | 6.21E-16 | 3.66E-16 | 0.00E+00 | 4.46E-16 | 2.26E-16 | 2.26E-16
f2 | Rank | 2 | 7 | 5 | 1 | 6 | 2 | 2
f3 | Mean | 1.58E+00 | 2.19E-08 | 0.00E+00 | 2.43E-02 | 1.64E-03 | 0.00E+00 | 0.00E+00
f3 | Std. | 1.31E-01 | 9.27E-09 | 0.00E+00 | 2.76E-02 | 3.85E-03 | 0.00E+00 | 0.00E+00
f3 | Rank | 7 | 4 | 1 | 6 | 5 | 1 | 1
f4 | Mean | 0.00E+00 | 1.16E-08 | 2.99E-07 | 8.88E-17 | 1.52E-06 | 8.36E-11 | 2.24E-179
f4 | Std. | 0.00E+00 | 1.78E-08 | 2.24E-07 | 2.71E-16 | 1.11E-06 | 4.58E-10 | 0.00E+00
f4 | Rank | 1 | 5 | 6 | 3 | 7 | 4 | 2
f5 | Mean | 7.91E+01 | 9.12E-05 | 1.04E-19 | 1.34E-04 | 9.39E-81 | 0.00E+00 | 0.00E+00
f5 | Std. | 3.68E+01 | 1.24E-05 | 6.29E-20 | 2.81E-05 | 1.56E-80 | 0.00E+00 | 0.00E+00
f5 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f6 | Mean | 2.33E+01 | 1.50E+01 | 1.41E+00 | 6.37E-03 | 1.69E-28 | 0.00E+00 | 0.00E+00
f6 | Std. | 2.60E+00 | 1.10E+00 | 2.77E-01 | 2.11E-03 | 3.90E-28 | 0.00E+00 | 0.00E+00
f6 | Rank | 7 | 6 | 5 | 4 | 3 | 1 | 1
f7 | Mean | 9.10E+02 | 1.38E-04 | 5.29E-20 | 1.50E-04 | 2.00E-79 | 0.00E+00 | 0.00E+00
f7 | Std. | 1.89E+02 | 2.09E-05 | 2.52E-20 | 3.13E-05 | 2.02E-79 | 0.00E+00 | 0.00E+00
f7 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f8 | Mean | 1.20E+03 | 7.04E-29 | 1.15E-45 | 6.36E-54 | 0.00E+00 | 0.00E+00 | 0.00E+00
f8 | Std. | 1.44E+03 | 7.10E-29 | 4.92E-45 | 2.80E-53 | 0.00E+00 | 0.00E+00 | 0.00E+00
f8 | Rank | 7 | 6 | 5 | 4 | 1 | 1 | 1
f9 | Mean | 4.00E+01 | 8.82E-11 | 3.24E-23 | 6.60E-12 | 0.00E+00 | 0.00E+00 | 0.00E+00
f9 | Std. | 5.63E+01 | 2.56E-11 | 2.83E-23 | 3.31E-12 | 0.00E+00 | 0.00E+00 | 0.00E+00
f9 | Rank | 7 | 6 | 4 | 5 | 1 | 1 | 1
f10 | Mean | 4.92E+02 | 5.20E-09 | 2.40E-31 | 4.80E-10 | 8.33E-158 | 0.00E+00 | 0.00E+00
f10 | Std. | 1.75E+02 | 1.15E-09 | 3.14E-31 | 2.07E-10 | 2.17E-157 | 0.00E+00 | 0.00E+00
f10 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f11 | Mean | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
f11 | Std. | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f11 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1
f12 | Mean | 2.70E+02 | 5.06E+01 | 7.70E-03 | 6.54E-09 | 1.83E+02 | 0.00E+00 | 0.00E+00
f12 | Std. | 9.69E+01 | 7.82E+00 | 4.95E-03 | 3.77E-09 | 3.04E+01 | 0.00E+00 | 0.00E+00
f12 | Rank | 7 | 5 | 4 | 3 | 6 | 1 | 1
f13 | Mean | 1.18E+01 | 1.06E-04 | 6.45E-15 | 1.01E-05 | 6.34E-15 | 8.88E-16 | 8.88E-16
f13 | Std. | 8.40E-01 | 1.31E-05 | 9.01E-16 | 2.35E-06 | 6.49E-16 | 0.00E+00 | 0.00E+00
f13 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f14 | Mean | 2.17E+01 | 1.13E-03 | 9.04E-11 | 3.83E-05 | 1.85E-16 | 0.00E+00 | 0.00E+00
f14 | Std. | 5.10E+00 | 1.74E-04 | 4.94E-10 | 1.70E-05 | 2.62E-16 | 0.00E+00 | 0.00E+00
f14 | Rank | 6 | 5 | 4 | 7 | 3 | 1 | 1
f15 | Mean | 5.13E-01 | 7.88E-02 | 1.24E-02 | 1.01E-02 | 1.29E-01 | 6.22E+00 | 2.96E+00
f15 | Std. | 1.22E-01 | 7.61E-03 | 2.09E-03 | 3.16E-03 | 2.24E-02 | 1.41E+01 | 6.63E+00
f15 | Rank | 5 | 3 | 2 | 1 | 4 | 7 | 6
f16 | Mean | 2.73E+00 | 1.02E-01 | 1.00E-01 | 1.00E-01 | 1.00E-01 | 0.00E+00 | 0.00E+00
f16 | Std. | 8.90E-01 | 4.24E-04 | 2.40E-06 | 6.10E-05 | 7.06E-17 | 0.00E+00 | 0.00E+00
f16 | Rank | 7 | 6 | 4 | 5 | 3 | 1 | 1
f17 | Mean | 1.87E+02 | 2.30E-04 | 3.79E-15 | 1.87E-07 | 3.01E+01 | 0.00E+00 | 0.00E+00
f17 | Std. | 3.36E+01 | 7.25E-05 | 2.08E-14 | 1.08E-07 | 8.15E+00 | 0.00E+00 | 0.00E+00
f17 | Rank | 7 | 5 | 3 | 4 | 6 | 1 | 1
f18 | Mean | 1.17E-16 | 1.21E-20 | 1.27E-20 | 1.21E-20 | 3.40E-20 | 1.98E-08 | 7.73E-12
f18 | Std. | 2.25E-16 | 7.47E-24 | 6.13E-22 | 2.18E-26 | 4.04E-21 | 7.77E-08 | 3.83E-11
f18 | Rank | 5 | 2 | 3 | 1 | 4 | 7 | 6
f19 | Mean | 8.29E+03 | 1.21E+04 | 3.47E+02 | 1.26E-02 | 2.69E-03 | 0.00E+00 | 0.00E+00
f19 | Std. | 3.39E+03 | 1.99E+03 | 8.55E+01 | 7.02E-03 | 5.59E-03 | 0.00E+00 | 0.00E+00
f19 | Rank | 6 | 7 | 5 | 4 | 3 | 1 | 1
f20 | Mean | 3.38E+03 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.33E-02 | 0.00E+00 | 0.00E+00
f20 | Std. | 9.39E+02 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.83E-01 | 0.00E+00 | 0.00E+00
f20 | Rank | 7 | 1 | 1 | 1 | 6 | 1 | 1
f21 | Mean | 5.43E+04 | 2.43E-08 | 1.20E-30 | 6.50E-06 | 1.46E-03 | 4.88E+00 | 4.89E+00
f21 | Std. | 1.22E+05 | 5.25E-09 | 7.11E-31 | 3.55E-05 | 3.80E-03 | 1.61E-01 | 5.37E-01
f21 | Rank | 7 | 2 | 1 | 3 | 4 | 5 | 6
Average rank | | 5.9 | 4.71 | 3.52 | 4.29 | 3.71 | 1.95 | 1.81
To further verify the scalability and efficiency of SHMPSO, the proposed strategy is applied to the 100-dimensional problems with the same parameter settings as in Table 4. The experimental results, shown in Table 6, indicate that SHMPSO retains high convergence accuracy and good robustness when solving high-dimensional problems; SHMPSO and MPCPSO rank joint first on the 100-dimensional problems. Closer analysis shows that the performance on f15, f18, and f21 deteriorates sharply as the dimension increases, while the remaining functions are still solved very well.
Table 6

Comparison of results of benchmark functions on various PSO variants (100-D).

Function | Metric | PSO | CLPSO | BLPSO | ACPSO | SLPSO | MPCPSO | SHMPSO
f1 | Mean | 4.10E+02 | 2.01E-11 | 9.65E-40 | 2.73E-07 | 8.53E-179 | 0.00E+00 | 0.00E+00
f1 | Std. | 2.77E+02 | 3.26E-12 | 1.85E-39 | 7.60E-08 | 0.00E+00 | 0.00E+00 | 0.00E+00
f1 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f2 | Mean | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
f2 | Std. | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f2 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1
f3 | Mean | 3.82E+00 | 1.15E-09 | 1.11E-17 | 9.76E-03 | 9.86E-04 | 0.00E+00 | 0.00E+00
f3 | Std. | 4.56E-01 | 4.82E-10 | 3.39E-17 | 1.05E-02 | 3.08E-03 | 0.00E+00 | 0.00E+00
f3 | Rank | 7 | 4 | 3 | 6 | 5 | 1 | 1
f4 | Mean | 7.49E-01 | 1.49E-09 | 1.60E-07 | 0.00E+00 | 4.33E-07 | 1.51E-182 | 2.26E-217
f4 | Std. | 5.53E-01 | 1.88E-09 | 2.31E-07 | 0.00E+00 | 3.96E-07 | 0.00E+00 | 0.00E+00
f4 | Rank | 7 | 4 | 5 | 1 | 6 | 3 | 2
f5 | Mean | 3.31E+02 | 5.58E-05 | 1.94E-24 | 2.24E-02 | 2.67E-91 | 0.00E+00 | 0.00E+00
f5 | Std. | 2.00E+02 | 5.49E-06 | 3.15E-24 | 2.89E-03 | 4.86E-91 | 0.00E+00 | 0.00E+00
f5 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f6 | Mean | 2.96E+01 | 1.92E+01 | 4.20E+00 | 2.50E-01 | 1.43E-12 | 0.00E+00 | 0.00E+00
f6 | Std. | 2.80E+00 | 6.32E-01 | 5.58E-01 | 3.29E-02 | 1.02E-12 | 0.00E+00 | 0.00E+00
f6 | Rank | 7 | 6 | 5 | 4 | 3 | 1 | 1
f7 | Mean | 1.95E+03 | 9.77E-05 | 5.23E-25 | 2.21E-02 | 1.23E-89 | 0.00E+00 | 0.00E+00
f7 | Std. | 2.03E+02 | 1.21E-05 | 4.15E-25 | 3.14E-03 | 1.18E-89 | 0.00E+00 | 0.00E+00
f7 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f8 | Mean | 1.08E+05 | 1.73E-30 | 8.17E-41 | 7.07E-28 | 1.51E-317 | 0.00E+00 | 0.00E+00
f8 | Std. | 9.88E+04 | 1.04E-30 | 1.90E-40 | 6.30E-28 | 0.00E+00 | 0.00E+00 | 0.00E+00
f8 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f9 | Mean | 2.30E+02 | 1.64E-10 | 2.77E-17 | 5.44E-07 | 0.00E+00 | 0.00E+00 | 0.00E+00
f9 | Std. | 1.60E+02 | 2.82E-11 | 1.19E-16 | 1.60E-07 | 0.00E+00 | 0.00E+00 | 0.00E+00
f9 | Rank | 7 | 5 | 4 | 6 | 1 | 1 | 1
f10 | Mean | 4.74E+03 | 2.65E-09 | 1.29E-35 | 7.50E-05 | 5.73E-176 | 0.00E+00 | 0.00E+00
f10 | Std. | 1.36E+03 | 5.23E-10 | 4.69E-35 | 2.55E-05 | 0.00E+00 | 0.00E+00 | 0.00E+00
f10 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f11 | Mean | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00 | 1.00E+00
f11 | Std. | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f11 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1
f12 | Mean | 1.54E+03 | 2.38E+02 | 3.27E+00 | 9.07E-10 | 9.24E+02 | 0.00E+00 | 0.00E+00
f12 | Std. | 4.11E+02 | 1.98E+01 | 8.08E-01 | 4.46E-10 | 9.08E+01 | 0.00E+00 | 0.00E+00
f12 | Rank | 7 | 5 | 4 | 3 | 6 | 1 | 1
f13 | Mean | 1.44E+01 | 3.30E-05 | 1.46E-14 | 1.70E-03 | 1.38E-14 | 8.88E-16 | 8.88E-16
f13 | Std. | 5.44E-01 | 2.68E-06 | 2.87E-15 | 2.56E-04 | 4.04E-15 | 0.00E+00 | 0.00E+00
f13 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f14 | Mean | 6.11E+01 | 2.45E-03 | 6.56E-16 | 7.36E-03 | 1.16E-15 | 0.00E+00 | 0.00E+00
f14 | Std. | 7.72E+00 | 2.57E-04 | 1.47E-15 | 1.32E-03 | 1.02E-15 | 0.00E+00 | 0.00E+00
f14 | Rank | 7 | 5 | 3 | 6 | 4 | 1 | 1
f15 | Mean | 6.90E-01 | 1.49E-01 | 2.59E-02 | 2.29E-02 | 2.79E-01 | 1.88E+00 | 3.27E+00
f15 | Std. | 9.15E-02 | 1.46E-02 | 2.70E-03 | 3.60E-03 | 3.54E-02 | 4.41E-02 | 9.98E+00
f15 | Rank | 5 | 3 | 2 | 1 | 4 | 6 | 7
f16 | Mean | 1.23E+01 | 1.04E-01 | 1.00E-01 | 1.51E-01 | 1.00E-01 | 0.00E+00 | 0.00E+00
f16 | Std. | 2.63E+00 | 6.29E-04 | 4.18E-06 | 1.13E-02 | 1.07E-16 | 0.00E+00 | 0.00E+00
f16 | Rank | 7 | 5 | 4 | 6 | 3 | 1 | 1
f17 | Mean | 5.54E+02 | 3.63E-04 | 2.98E-01 | 1.14E-01 | 1.08E+02 | 0.00E+00 | 0.00E+00
f17 | Std. | 3.99E+01 | 7.70E-05 | 5.32E-01 | 4.63E-02 | 2.79E+01 | 0.00E+00 | 0.00E+00
f17 | Rank | 7 | 3 | 5 | 4 | 6 | 1 | 1
f18 | Mean | 7.45E-28 | 4.68E-42 | 6.04E-42 | 5.51E-42 | 1.62E-41 | 4.49E-11 | 1.68E-18
f18 | Std. | 3.03E-27 | 4.16E-45 | 3.89E-43 | 5.39E-43 | 1.49E-42 | 2.39E-10 | 9.22E-18
f18 | Rank | 5 | 1 | 3 | 2 | 4 | 7 | 6
f19 | Mean | 3.71E+04 | 6.84E+04 | 1.30E+04 | 9.40E+04 | 1.61E+04 | 0.00E+00 | 0.00E+00
f19 | Std. | 1.14E+04 | 5.96E+03 | 1.84E+03 | 5.15E+05 | 7.73E+03 | 0.00E+00 | 0.00E+00
f19 | Rank | 5 | 6 | 3 | 7 | 4 | 1 | 1
f20 | Mean | 1.67E+04 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.67E-01 | 0.00E+00 | 0.00E+00
f20 | Std. | 2.68E+03 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.66E+00 | 0.00E+00 | 0.00E+00
f20 | Rank | 7 | 1 | 1 | 1 | 6 | 1 | 1
f21 | Mean | 2.49E+06 | 6.38E-09 | 2.15E-30 | 1.67E+08 | 2.56E-03 | 9.82E+00 | 9.95E+00
f21 | Std. | 1.56E+06 | 1.11E-09 | 7.87E-30 | 9.17E+08 | 4.73E-03 | 2.31E-01 | 2.38E-01
f21 | Rank | 6 | 2 | 1 | 7 | 3 | 4 | 5
Average rank | | 5.1 | 3.9 | 3.29 | 4.38 | 3.57 | 1.76 | 1.76
Figures 1 and 2 show that SHMPSO reaches the global optimum on most problems. On f3, f8, f13, f19, and f20 in particular, it not only ranks first in convergence accuracy but also converges fastest. Three main reasons explain this convergence behavior. First, information is highly shared among the particles of the dominant population, and the guiding particles formed by this sharing drive the dominant population to converge quickly to the global optimum. Second, to prevent the population from falling into a local optimum prematurely, a mutation operation is applied to the global best particle. Third, using randomly chosen dominant-population particles to guide the particles of the poor population not only accelerates the convergence of the poor population but also increases population diversity. Dominant-population particles are chosen as guides rather than the poor population's own global best for two reasons: first, the fitness values of the poor-population particles are larger (worse) than those of the dominant population; second, relying on a single global best particle of the poor population does not help increase population diversity.
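The two mechanisms described above can be sketched in a few lines. This is our own simplification for illustration, not the paper's exact update formulas: (1) the global best is perturbed to escape local optima, and (2) each poor-population particle is pulled toward a randomly chosen dominant-population particle rather than a single global best.

```python
import random

def mutate_gbest(gbest, scale=0.1):
    """Gaussian mutation of the global best particle (scale is an assumed value)."""
    return [g + random.gauss(0.0, scale) for g in gbest]

def guide_poor(poor_pop, dominant_pop):
    """Move each poor-population particle a random step toward a random elite guide."""
    guided = []
    for p in poor_pop:
        guide = random.choice(dominant_pop)  # random elite, not one fixed gbest
        guided.append([x + random.random() * (g - x) for x, g in zip(p, guide)])
    return guided
```

Because each poor particle draws its own random guide, the guidance step preserves more diversity than steering the whole poor population toward one point.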
Figure 2

Convergence curves of 13–21 functions. (a) f13. (b) f14. (c) f15. (d) f16. (e) f17. (f) f18. (g) f19. (h) f20. (i) f21.

According to the experiments in this paper, SHMPSO performs well on most functions. Its success rests mainly on the strategies proposed here. First, the elastic candidate-based learning strategy selects elite particles from the dominant population for the poor-population particles to learn from, so the two populations share information, coevolve, and effectively escape local optima. In addition, the mean dimension learning strategy gives the population particles a better search range on complex, changeable multimodal functions, greatly enriches the learning samples of the particles, and provides more effective information to all particles. SHMPSO therefore achieves excellent convergence accuracy.
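One way to read the mean dimension learning strategy is that particles also learn from the dimension-wise mean of the population's personal bests, which aggregates information from every particle into one learning target. The sketch below reflects that reading and uses our own names; it is not the paper's exact update rule.

```python
def mean_dimension_target(pbests):
    """Dimension-wise mean of all personal-best positions: a learning target
    that carries information from every particle in the population."""
    n = len(pbests)
    dim = len(pbests[0])
    return [sum(p[d] for p in pbests) / n for d in range(dim)]
```

A particle's velocity update can then include a term pulling it toward this mean target instead of (or in addition to) a single exemplar, which widens the pool of learning samples.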

5. Conclusion

Inspired by MPCPSO, this paper proposes a particle swarm optimization algorithm based on a subpopulation hybrid strategy. The mean dimension learning strategy ensures the search ability of the algorithm and the breadth of learnable samples, providing search potential for the whole population, while the candidate learning strategy improves population diversity and prevents the population from falling into local optima. SHMPSO is compared with six well-known PSO variants to verify its effectiveness. Of course, SHMPSO still has shortcomings: on f2 and f11 it falls into local optima, and its global search ability needs improvement. Our future work will therefore focus on the global search ability of SHMPSO and, at the same time, study its practical applications in depth.