
A Diversity Model Based on Dimension Entropy and Its Application to Swarm Intelligence Algorithm.

Hongwei Kang1, Fengfan Bei1, Yong Shen1, Xingping Sun1, Qingyi Chen1.   

Abstract

The swarm intelligence algorithm has become an important method for solving optimization problems because of its excellent self-organization, self-adaptation, and self-learning characteristics. However, when a traditional swarm intelligence algorithm faces high-dimensional, complex multimodal problems, population diversity is quickly lost, which leads to premature convergence. To solve this problem, dimension entropy is proposed as a measure of population diversity, and a diversity control mechanism is proposed to guide the updating of the swarm intelligence algorithm. This mechanism maintains the diversity of the algorithm in the early stage and ensures its convergence in the later stage. Experimental results show that the performance of the improved algorithm is better than that of the original algorithm.


Keywords:  dimension entropy; diversity model; swarm intelligence algorithm

Year:  2021        PMID: 33801605      PMCID: PMC8065515          DOI: 10.3390/e23040397

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

1.1. Optimization Problems and Swarm Intelligence Algorithms

Optimization problems have a long history. They involve determining a performance criterion for a problem that has multiple alternative solutions, and selecting the solution that maximizes or minimizes that criterion [1]. In real life, optimization problems are widespread in engineering design [2], image segmentation [3], power systems [4], and other fields. To solve them better, evolutionary algorithms, which simulate the process and mechanisms of biological evolution, and swarm intelligence algorithms, which simulate the foraging behavior of biological populations, have become a research hotspot in recent years. Swarm intelligence refers to the cooperative behavior and collective intelligence exhibited by groups of many simple individuals in nature [5]. It is a population-based computing paradigm with self-organization, self-adaptation, and self-learning characteristics, developed by borrowing mechanisms from natural phenomena and organisms. After years of development, a large number of swarm intelligence optimization algorithms have emerged; classic examples include the artificial bee colony algorithm [6], the ant colony algorithm [7], and the particle swarm optimization algorithm [8].

1.2. An Overview of the Diversity of Swarm Intelligence Algorithms

A major problem in swarm intelligence algorithms is premature convergence [9,10,11,12], i.e., algorithms lose diversity too early. The root cause of premature convergence is an imbalance between exploitation and exploration [13]: too much exploitation leads the algorithm to converge prematurely into a local optimum, while too much exploration leaves the algorithm imprecise and slow to converge [14]. To some extent, exploration and exploitation are a pair of contradictory concepts: increasing one inevitably reduces the other [15]. Population size, search strategy, and restart strategy are all effective levers for controlling exploration and exploitation, but balancing them scientifically is the key to excellent results in a swarm intelligence algorithm. Evaluating the diversity of an algorithm makes its exploration and exploitation fully observable. As an important index of the swarm intelligence algorithm [16], diversity measures the richness of particle positions, cognition, directions, and other properties. Existing studies on diversity and entropy include the following: Folino et al. proposed a method to evaluate swarm intelligence algorithms using entropy [17]; González et al. proposed a nature-inspired strategy for optimization [18]; Muhammad et al. proposed a design with entropy evolution for the optimal power flow problem [19]; Da et al. proposed a simplex-crossover evolutionary algorithm aimed at genetic diversity [20]. Depending on the benchmark and representation chosen, many diversity models are possible, so the scope of this paper should be stated first: the diversity studied here is a measure related to the positions of individuals in a population.
We believe that a model that measures the true diversity of a population should, beyond being valid and effective, have the following properties: it is robust to parameters such as population size and problem dimension; it is repeatable across different populations; and it responds directly to changes in the population. For this reason, we have designed a new diversity model. The model calculates population entropy from particle positions and is named "dimension entropy". Compared with other methods, dimension entropy defines the diversity of a population clearly and intuitively and can thus control the iteration of the algorithm. This paper is organized as follows: this section introduces basic knowledge about the swarm intelligence algorithm and its diversity; Section 2 describes a variety of diversity models and presents the dimension entropy model; Section 3 presents the method of guiding the updates of the swarm intelligence algorithm with the dimension entropy model; Section 4 shows the results; Section 5 presents our conclusions.

2. Diversity Model Based On Dimension Entropy

2.1. General Concept

Generally speaking, diversity [21,22] can be defined as the degree of heterogeneity among the individuals of a population [23]. In swarm intelligence algorithms, there are many diversity evaluation models, which fall into two categories. The first type is measurement based on the distance between particles [24,25]. This distance can be measured from a central particle [26], as the maximum distance between two particles in space [27,28], or as the average distance between particles [29]. Euclidean distance is the common choice, because Euclidean space extends two- or three-dimensional space to any number of dimensions, and populations in higher dimensions exist and are defined in Euclidean space. The second type is based on an entropy measure. Entropy originated in thermodynamics; in 1948, Shannon introduced it into information theory [30]. The idea is that the probabilities of several independent random discrete events measure the uncertainty of an information system. When only one event can occur, the information entropy is minimal; when multiple discrete events occur with equal probability, the information entropy is maximal. When entropy is applied to the swarm intelligence algorithm, the population itself is continuous, so the first step is to segment, i.e., discretize, it. Each segmented interval is then abstracted as a random event, and the ratio of the number of particles in an interval to the total population is the probability of that "random event" occurring. Therefore, how to discretize, and by which standard, is a key difficulty in applying the entropy criterion to the swarm intelligence algorithm. At the same time, the number of intervals produced by the discretization has a direct impact on the diversity estimate: when the population size is too small, the entropy method cannot divide enough intervals.
Moreover, in high dimensions it is difficult to account for all dimensions when dividing the intervals, and the combination of entropy values over all particles must also be considered. For example, Gouvea Jr. and Araujo used one representative particle to stand for population diversity [28], i.e., the individual characteristics of a single particle represent the whole population's diversity; they noted that the choice of this particle is critical. Collins and Jefferson used average entropy values to determine population diversity [31].
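To make the discretization step concrete, here is a minimal sketch (an illustration of the general idea, not the paper's specific model) that bins one dimension of a population into equal-width intervals and computes Shannon entropy over the occupancy counts:

```python
import math

def shannon_entropy(positions, n_intervals):
    """Bin positions into equal-width intervals and return the Shannon
    entropy of the resulting occupancy distribution."""
    lo, hi = min(positions), max(positions)
    width = (hi - lo) / n_intervals or 1.0   # guard: fully converged population
    counts = [0] * n_intervals
    for x in positions:
        idx = min(int((x - lo) / width), n_intervals - 1)  # clamp x == hi into last bin
        counts[idx] += 1
    total = len(positions)
    return -sum(c / total * math.log(c / total) for c in counts if c)
```

A fully converged population (all particles in one interval) yields entropy 0; a population spread evenly over all intervals yields the maximum, log of the interval count.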

2.2. Research Status of Diversity

To save space, a summary of the symbols used in this section is first given in Table 1 below.
Table 1

Symbol summary.

Symbol | Definition
i, j | Variable indices
k | Dimension index, {1, 2, …, n}
m | Interval index
M | Total number of intervals
n | Total number of dimensions
X_{i,k} | Position of particle i in dimension k
X̄_k | Average value of the population in dimension k
p_{m,k} | Fraction of the N population particles that fall into interval m of dimension k
The most basic range-based diversity measure considers only the diameter of the population, i.e., the distance between the two farthest particles [30], as shown in Equation (1):

D_dp = max_{i,j} ‖X_i − X_j‖    (1)

The second method [32] replaces the farthest pairwise distance with the population radius, the distance between the farthest particle and the average position of the population, as shown in Equation (2):

D_rp = max_i ‖X_i − X̄‖    (2)

There are further extensions of this method, such as the average radius, which are not detailed here. The third method was proposed by Olorunda and Engelbrecht; the idea is to take the mean, over all particles, of each particle's mean distance to the others [27], as shown in Equation (3):

D_all = (1/N) Σ_{i=1}^{N} [ (1/(N−1)) Σ_{j≠i} ‖X_i − X_j‖ ]    (3)

This method has a large computational cost. To reduce it, Wineberg and Oppacher proposed a measure named "true diversity" [33], the mean standard deviation of the population over all dimensions, as shown in Equation (4):

D_td = (1/n) Σ_{k=1}^{n} σ_k,  where  σ_k = sqrt( (1/N) Σ_{i=1}^{N} (X_{i,k} − X̄_k)² )    (4)

The last distance-based diversity measure compared in this paper was proposed by Herrera and Lozano [26]. It requires the most suitable particle in the population to be determined in advance, because that particle serves as the reference from which distances to all other particles are measured, as shown in Equation (5):

D_ref = (1/N) Σ_{i=1}^{N} ‖X_i − X_best‖    (5)

For entropy-based measurement, Shannon's entropy is the most basic method [34]; it measures the degree of disorder of the population [35]. Its definition is shown in Equation (6):

H = − Σ_m p_m log p_m    (6)

To measure the entropy of a swarm intelligence algorithm, a discrete measurement model must first be established. Chen et al. put forward an entropy calculation based on fitness [36]; the idea is to examine the historical best positions.
The interval is defined over the range of current particle fitness values, which is divided evenly so that counting the particles in each fitness interval gives the current fitness distribution of the population [37], as shown in Equation (7):

E_fit = − Σ_m p_m log p_m,  p_m = N_m / N    (7)

where N_m is the number of particles whose fitness falls into interval m. Wang and Lei put forward intuitionistic fuzzy population entropy as a measure of diversity [38,39]. In this method, the historical best positions of each generation are selected as aggregation points; the distance from particle i to aggregation point j is d_{i,j}, and each aggregation point j has a scope radius r_j, scaled by a random number between (0, 1). If the distance d_{i,j} from particle i to aggregation point j is less than the scope radius r_j, then particle i is considered to belong to aggregation point j and the corresponding counter is increased by 1. If particle i belongs to no scope, it is considered a "lone point" and the global lone-point counter is increased by 1. These counts enter the entropy formula of Equation (8). The intuitionistic fuzzy population entropy reflects the aggregation degree of the particles during the solution process.
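The first three distance-based measures above (population diameter, population radius, and the mean of mean pairwise distances, Equations (1)–(3)) can be sketched directly from their definitions; a minimal illustration:

```python
import math

def diameter(pop):
    """Eq. (1): distance between the two farthest particles."""
    return max(math.dist(a, b) for a in pop for b in pop)

def radius(pop):
    """Eq. (2): distance from the population mean position to the farthest particle."""
    center = [sum(col) / len(pop) for col in zip(*pop)]
    return max(math.dist(x, center) for x in pop)

def avg_of_avg_distance(pop):
    """Eq. (3): mean over particles of their mean distance to all other particles."""
    n = len(pop)
    return sum(sum(math.dist(a, b) for b in pop) / (n - 1) for a in pop) / n
```

The O(N²) pairwise loops in `diameter` and `avg_of_avg_distance` illustrate the computational cost the text mentions.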

2.3. Some Fundamental Flaws in Current Metrics

First of all, population diameter is not an ideal diversity measure, because it considers only the two farthest particles and ignores the distribution of the rest, so it cannot describe the true diversity of a population. A similar deficiency exists in the population radius, where diversity is based on the particle farthest from the population center: for a fully diversified population this indicator takes a value close to 0.5, whereas a value approaching 1 describes a population mostly clustered in one corner with an outlier near the opposite corner, which is also misleading. The reference-particle distance measure requires a reference particle to be chosen in advance, but a suitable reference particle is in fact difficult to choose; moreover, for a linearly shrinking population its value remains unchanged, because numerator and denominator contract simultaneously, so the real decrease in diversity cannot be described. For entropy-based measures, it is difficult to determine an appropriate segmentation standard. Entropy calculated from particle fitness places particles with the same fitness into the same interval; this may work for a unimodal function, but for a multimodal function, where particles on different peaks can share the same fitness, a simple classification by fitness is not rigorous. Intuitionistic fuzzy population entropy considers the scope division of particles in space, but the difficulty of this division grows greatly with the dimension: in high dimensions, a particle is likely to belong to different scopes in different dimensions, so a clear division is hard to make. Finally, as we will see later, most of these metrics fail to deal with population dynamics.

2.4. Diversity Model Based on Dimension Entropy

Xu and Cui [40] confirmed that, during iteration, the dimensions of a swarm intelligence algorithm change relatively independently of one another. Inspired by this, this paper puts forward dimension entropy to measure the diversity of the swarm intelligence algorithm. Unlike other entropy-based methods, we abandon the spatial view and treat the entropy of each dimension independently. The relevant definitions are as follows. Dimension interval: in each dimension, the maximum and minimum values are taken as the upper and lower limits, and the range is divided evenly into N intervals (N is the total number of particles); each dimension is divided independently, without interference from the others. Dimension entropy: in each dimension, the number of particles falling into each dimension interval is counted independently, and the dimension entropy is calculated as shown in Equation (9):

E_dim = − (1/n) Σ_{k=1}^{n} Σ_{m=1}^{M} p_{m,k} log p_{m,k},  p_{m,k} = N_{m,k} / N    (9)

where N_{m,k} is the number of particles contained in interval m of dimension k. In the worst case, the population is trapped in a single interval in every dimension, i.e., the population has completely converged, and the dimension entropy is then 0. The larger the dimension entropy, the greater the diversity of the population. Compared with previous entropy methods, dimension entropy overcomes their usual disadvantages: with a small population, even if some dimensions cannot be divided effectively, the entropy can still be calculated as long as one dimension admits an interval division. At the same time, treating the dimensions independently lets us face the challenges of higher dimensions simply and intuitively.
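Under the definitions above, dimension entropy can be sketched as follows. The per-dimension entropies are averaged here, which is one natural aggregation choice (the exact normalization belongs to Equation (9) in the original), and the interval count is taken equal to the population size, as in the dimension-interval definition:

```python
import math

def dimension_entropy(pop):
    """Per-dimension Shannon entropy of interval occupancy, averaged
    over dimensions; each dimension is binned independently into
    N equal-width intervals (N = population size)."""
    n_particles = len(pop)
    n_dims = len(pop[0])
    total = 0.0
    for k in range(n_dims):
        values = [x[k] for x in pop]
        lo, hi = min(values), max(values)
        width = (hi - lo) / n_particles or 1.0  # guard: dimension fully converged
        counts = [0] * n_particles
        for v in values:
            counts[min(int((v - lo) / width), n_particles - 1)] += 1
        total += -sum(c / n_particles * math.log(c / n_particles)
                      for c in counts if c)
    return total / n_dims
```

A completely converged population returns 0, matching the worst case described in the text.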

2.5. A Comparative Study

In order to compare the differences between the diversity measures, the various models were first normalized so that they could be evaluated on the same scale. In this study, the maximum value was used as the normalization factor; after normalization, each model ranges over 0~1. In addition to dimension entropy, six diversity models were selected for comparison: four distance-based measures, namely the maximum distance (diameter) [32], the population radius [32], the individual average distance [27], and the average standard deviation [33], and two entropy-based methods, namely fitness entropy [36] and intuitionistic fuzzy entropy [38]. These are denoted, respectively, as Ddp, Drp, Dall, Dtd, Efit, and Efuzzy; our model is denoted Edim.
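The max-based normalization described above is straightforward; a one-function sketch:

```python
def normalize_by_max(series):
    """Scale a recorded diversity series into 0~1 using its maximum
    as the normalization factor."""
    peak = max(series)
    return [v / peak for v in series] if peak else list(series)
```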

2.5.1. Population Expansion Experiment

The practical application scenarios of the swarm intelligence algorithm are complex and inevitably involve population expansion or shrinkage. A good diversity model should respond correctly and promptly to changes in population size. To compare the robustness of the diversity models with respect to population size, we designed the following experiment. First, we designed a complete population of 20 particles over the range [−100, 100]. This population has two dimensions and the following characteristics: all particles are uniformly distributed in every dimension, and no two particles are identical. The complete population is shown in Figure 1.
Figure 1

The complete population.

From this complete population of 20 particles, we randomly selected an initial population and then simulated population expansion. The initial population consists of 10 particles. In each iteration, we randomly selected 1 of the 10 remaining particles and added it to the population; after 10 iterations, the entire population shown in Figure 1 was produced. The process of population change is shown in Figure 2.
Figure 2

Schematic diagram of population change.

Since every particle in the complete population differs significantly from the others, each new particle added during the expansion shown above differs from the existing particles, so the diversity of the population must keep increasing throughout this process. We used the diversity models above to evaluate the population diversity during this process. The fitness-based entropy method requires a fitness function; we used a simple Sphere function (f(x) = Σ x_i²) to calculate the entropy. Because intuitionistic fuzzy population entropy needs the historical PBest values as aggregation points during its calculation, it was not used in this experiment. The normalized results of the models are shown in Figure 3.
Figure 3

Results of population expansion experiments.

According to Figure 3, the value of Ddp increased only in the fourth iteration, because the particle added in that iteration happened to lie outside the existing population, while in the remaining iterations the new particles fell inside the population; the simple population-diameter method could therefore not judge changes within the population at all. Although Ddp shows an overall upward trend, its value always stays at a high level, which means that unless a new particle is a significant outlier, Ddp cannot respond to it clearly. Across all 10 expansion steps, the value of Drp decreased four times: the new particles did not enlarge the maximum radius of the population, while the enlarged denominator decreased the value of Drp. Dall has a similar problem: if the new particles do not increase the average distance between particles, Dall does not reflect the population change correctly. Dtd kept a downward trend during the iterations, because the added particles reduced the standard deviation, which shows that Dtd cannot be used to describe a changing population. Efit performs better overall, but still shows episodes of decreasing diversity. This is because the particle added in the eighth iteration lay on the right, symmetric about the zero point with an existing particle; the two particles are completely different, yet under our chosen Sphere fitness function their fitness values are almost identical. Evidently, fitness is not a unique attribute of a particle, and choosing a unique, accurate property as the division standard is a key part of any entropy method. Finally, among all the methods, only Edim keeps an upward trend at all times, visible in every iteration, which is sufficient to show that our proposed method can accurately observe and describe a changing population.

2.5.2. Dimensional Change Experiment

The swarm intelligence algorithm also faces the problem of varying dimensionality. An ideal diversity measure should be robust across dimensions; that is, if the population itself is fully diversified, its diversity value should remain stable regardless of the number of dimensions, otherwise it would be difficult to evaluate or apply in the swarm intelligence algorithm. To this end, we designed an experiment. First, we built a fully diversified population with 1 dimension and 100 particles, then repeatedly extended its dimensionality, ensuring that each added dimension was also fully diversified, until 30 dimensions were reached. During this process, we used the diversity models above to compare each model's value at the different dimensionalities; the results are shown in Figure 4.
Figure 4

Dimensional robustness testing.

As can be seen in Figure 4, all distance-based diversity models show increasing diversity as the dimension grows. This is because the spatial span on which the distance-based models are computed also grows with the dimension, which is an almost inevitable defect of distance-based measures. In contrast, the entropy methods resist dimensional change better, and compared with Efit, our measure Edim is more accurate and stable.

2.5.3. Practical Problem Testing

The purpose of this experiment is to test the performance of each model in an actual test-function environment. We used the Rastrigin function. It has a single global optimum at the zero point, so as the algorithm runs, almost all particles move toward the zero point, and in the late stage they cluster near it. The sum of the distances of all particles to the zero point therefore partly reflects the diversity of the population on this test function. The Rastrigin function is shown in Equation (10):

f(x) = Σ_{k=1}^{n} [ x_k² − 10 cos(2π x_k) + 10 ]    (10)

The standard PSO algorithm was used for the experiment, with 5 dimensions, 50,000 fitness evaluations, and a population size of 100; the diversity models involved were Ddp, Drp, Dall, Dtd, Efit, Efuzzy, and Edim. The PSO setup is described in the next section. The Spearman correlation coefficient is a nonparametric index of the dependence between two variables; it uses a monotone relation to evaluate their correlation. When the two variables are completely monotonically positively correlated, the coefficient is 1; when they are completely monotonically negatively correlated, it is −1. In other words, the closer the coefficient is to 1, the more closely the two variables track each other. After each iteration, we calculated and recorded the sum of the distances of all particles from the zero point, together with the values of the diversity models above.
At the end of the run, the per-iteration values of each diversity model were compared with the per-iteration sums of particle distances from the zero point to compute the Spearman correlation coefficient. The experiment was repeated ten times; the results are shown in Table 2.
Table 2

Rastrigin tests.

Time | Ddp | Drp | Dall | Dtd | Efit | Efuzzy | Edim
1 | 0.782 | 0.692 | 0.997 | 0.983 | 0.967 | 0.779 | 0.977
2 | 0.925 | 0.855 | 1.000 | 0.980 | 0.950 | 0.889 | 0.984
3 | 0.708 | 0.889 | 0.997 | 0.973 | 0.947 | 0.840 | 0.976
4 | 0.715 | 0.642 | 0.997 | 0.978 | 0.942 | 0.758 | 0.977
5 | 0.821 | 0.714 | 0.998 | 0.986 | 0.961 | 0.747 | 0.989
6 | 0.931 | 0.852 | 0.992 | 0.942 | 0.951 | 0.875 | 0.979
7 | 0.949 | 0.618 | 1.000 | 0.990 | 0.978 | 0.907 | 0.980
8 | 0.925 | 0.844 | 0.998 | 0.989 | 0.975 | 0.907 | 0.980
9 | 0.799 | 0.699 | 0.996 | 0.976 | 0.952 | 0.805 | 0.984
10 | 0.818 | 0.712 | 0.937 | 0.982 | 0.953 | 0.807 | 0.972
Mean | 0.837 | 0.752 | 0.991 | 0.978 | 0.958 | 0.831 | 0.980
Rank | 5 | 7 | 1 | 3 | 4 | 6 | 2
In the case of fixed population size and dimension, Dall performs best, followed by Edim. Among all entropy methods, Edim has a significant advantage.
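The Spearman coefficient used in this experiment can be computed without external libraries as the Pearson correlation of ranks (tied values receive their average rank); a sketch:

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0
```

Because it works on ranks, any strictly monotone relation (e.g. x versus x²) yields a coefficient of exactly 1.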

3. Swarm Intelligence Algorithm Control Method Based on Dimension Entropy

3.1. Introduction to Swarm Intelligence Algorithms

First, the comparison algorithms used in this study are briefly introduced. PSO is derived from an analogy with bird flight [41]. In PSO, each particle flies through an n-dimensional solution space by learning from its own experience and that of its neighbors. The position and velocity of particle i are denoted X_i and V_i, respectively. X_i represents the particle's current position, which is also a candidate solution of the optimization problem; V_i represents the particle's current velocity, which determines the direction and step size of its movement. Each particle learns from its own historical best position P_i and the global historical best position G. The velocity and position of each particle are dynamically adjusted according to Equations (11) and (12):

V_i = ω V_i + c1 r1 (P_i − X_i) + c2 r2 (G − X_i)    (11)

X_i = X_i + V_i    (12)

where ω is the inertia weight, c1 and c2 are the acceleration factors, and r1 and r2 are two uniformly distributed random numbers in the range [0, 1]. Bare-bones PSO (BBPSO) eliminates the velocity attribute of particles; the new position is obtained by random sampling from a Gaussian distribution, which can be described mathematically as follows. Particle i searches the n-dimensional space; its position is X_i, its historical best position is P_i, G is the global best position, and N(0, 1) denotes the standard Gaussian distribution. The position update of particle i is shown in Equation (13):

X_{i,k} = (P_{i,k} + G_k) / 2 + |P_{i,k} − G_k| · N(0, 1)    (13)
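The update rules of Equations (11)–(13) can be sketched for a single particle as follows; parameter names ω, c1, c2 follow the text, while the helper function names are ours:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.5, c1=2.0, c2=2.0):
    """Eqs. (11)-(12): standard PSO velocity and position update
    for one particle, with per-dimension random coefficients."""
    new_v = [w * vi
             + c1 * random.random() * (p - xi)
             + c2 * random.random() * (g - xi)
             for xi, vi, p, g in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v

def bbpso_step(pbest, gbest):
    """Eq. (13): bare-bones PSO resamples each coordinate from a Gaussian
    centred midway between pbest and gbest, with their gap as std. dev."""
    return [random.gauss((p + g) / 2, abs(p - g)) for p, g in zip(pbest, gbest)]
```

Note that in BBPSO the spread collapses as pbest and gbest agree, which is exactly the convergence behaviour the text describes.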

3.2. The Swarm Intelligence Algorithm Control Method Based on Dimension Entropy

The disadvantage of traditional swarm intelligence algorithms is the premature loss of diversity. We want a mechanism that makes the diversity decrease slowly: maintain high diversity in the early stage and reduce it in the late stage to promote convergence. We therefore propose a method to calculate and control population diversity via dimension entropy. First, the following definition is proposed. Redundant particle: divide each dimension k evenly into N intervals (N is the population size) and count the number of particles in each interval. If a particle lies in the most-populated interval of dimension k, its count is increased by 1; the particle with the largest count over all dimensions is the redundant particle. Note that the redundant particle is not necessarily a duplicated particle; however, from the definition of dimension entropy it follows that deleting the redundant particle increases dimension entropy, while copying it decreases dimension entropy. Based on this, we propose a strategy to control diversity: when the diversity is too high, we copy the redundant particle and delete the particle with the worst fitness to reduce diversity; conversely, when the diversity is too low, the redundant particle is deleted and a new random particle is added to increase diversity. The relevant pseudocode is shown in Algorithm 1. We also want the diversity to decrease along a prescribed trajectory; the entropy value E_expect that we expect to reach in each iteration is determined by a base curve. We experimented with four kinds of base curves: straight line, convex curve, concave curve, and broken line. These curves are shown in Figure 5.
Figure 5

Four different kinds of base curves.
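The control strategy described above (copy the redundant particle and drop the worst particle when diversity is too high; replace the redundant particle with a random one when diversity is too low) can be sketched as follows. The helper names, the `entropy_fn` argument, and the bounds `lb`/`ub` are assumptions of this sketch, not the paper's Algorithm 1:

```python
import random

def redundant_index(pop):
    """The particle that falls most often into the most-populated interval
    of each dimension (N intervals per dimension, N = population size)."""
    n = len(pop)
    scores = [0] * n
    for k in range(len(pop[0])):
        vals = [x[k] for x in pop]
        lo, hi = min(vals), max(vals)
        width = (hi - lo) / n or 1.0
        bins = [min(int((v - lo) / width), n - 1) for v in vals]
        counts = {}
        for b in bins:
            counts[b] = counts.get(b, 0) + 1
        fullest = max(counts, key=counts.get)
        for i, b in enumerate(bins):
            if b == fullest:
                scores[i] += 1
    return max(range(n), key=lambda i: scores[i])

def control_diversity(pop, fitness, target, entropy_fn, lb, ub):
    """One control step toward the target entropy (minimization assumed)."""
    r = redundant_index(pop)
    if entropy_fn(pop) > target:
        worst = max(range(len(pop)), key=lambda i: fitness[i])
        pop[worst] = list(pop[r])   # copying the redundant particle lowers entropy
    else:
        pop[r] = [random.uniform(lb, ub) for _ in pop[r]]  # random particle raises it
    return pop
```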

In order to compare the different convergence criteria, we set up a simple test-function set for the experiments. It contains 7 basic test functions; the information is shown in Table 3.
Table 3

Test functions. U: unimodal, M: multimodal.

Num | Function Name | Property | Best Value
1 | Sphere's Function | U | 0
2 | Rosenbrock's Function | M | 0
3 | Rastrigin's Function | M | 0
4 | Griewank's Function | M | 0
5 | Ackley's Function | M | 0
6 | Schwefel's Problem 2.22 | M | 0
7 | Schwefel's Problem 1.2 | M | 0
In the above test function, taking the linear diversity reduction standard as shown in Figure 5 as an example, the PSO algorithm uses the strategy of Algorithm 1, and its diversity changes are shown in Figure 6.
Figure 6

Comparison of diversity.

According to the diversity change curve, the diversity of the original algorithm drops to its lowest value after about 50 iterations, after which the algorithm stagnates. This premature loss of diversity is one reason why swarm intelligence algorithms fail to achieve better results. After applying our strategy, diversity declines slowly along the curve we defined. In the early stage there is still a sharp decrease in diversity, due to the rapid aggregation of particles at the start of the run; in the middle and late stages, our method achieves precise control. We took the four base curves as input to the algorithm, used PSO as the experimental algorithm with dimension 10, and repeated the experiment 30 times, comparing the optimization performance and population diversity produced by the different curves on the different functions. Advantageous test function results are marked in bold. The results are shown in Table 4.
Table 4

Comparison of the diversity and optimization results of the four curves.

No | | Line | Convex Curve | Concave Curve | Broken Line
1 | Min | 9.47 × 10−57 | 3.26 × 10−57 | 6.86 × 10−58 | 6.69 × 10−58
  | Max | 3.45 × 10−51 | 6.43 × 10−51 | 2.48 × 102 | 5.81 × 10−48
  | Mean | 1.93 × 10−52 | 3.51 × 10−52 | 8.27 × 100 | 1.94 × 10−49
  | DimEnt | 0.820 | 1.039 | 0.574 | 0.756
2 | Min | 8.83 × 10−1 | 9.40 × 10−1 | 9.00 × 10−1 | 9.45 × 10−1
  | Max | 4.80 × 101 | 2.26 × 102 | 1.69 × 105 | 6.99 × 102
  | Mean | 8.30 × 100 | 2.23 × 101 | 5.65 × 103 | 5.65 × 101
  | DimEnt | 0.831 | 1.019 | 0.596 | 0.744
3 | Min | 8.53 × 10−14 | 9.95 × 10−1 | 0.00 × 100 | 8.27 × 10−12
  | Max | 6.81 × 100 | 6.96 × 100 | 2.22 × 102 | 4.97 × 100
  | Mean | 1.94 × 100 | 2.89 × 100 | 2.00 × 101 | 2.04 × 100
  | DimEnt | 0.834 | 1.039 | 0.626 | 0.771
4 | Min | 1.23 × 10−2 | 1.23 × 10−2 | 1.97 × 10−2 | 1.23 × 10−2
  | Max | 1.26 × 10−1 | 1.43 × 10−1 | 1.65 × 10−1 | 1.68 × 101
  | Mean | 6.08 × 10−2 | 5.83 × 10−2 | 6.69 × 10−2 | 6.15 × 10−1
  | DimEnt | 0.792 | 1.012 | 0.540 | 0.750
5 | Min | 4.44 × 10−15 | 4.44 × 10−15 | 8.88 × 10−16 | 4.44 × 10−15
  | Max | 1.21 × 101 | 4.44 × 10−15 | 1.42 × 101 | 4.44 × 10−15
  | Mean | 4.03 × 10−1 | 4.44 × 10−15 | 8.08 × 10−1 | 4.44 × 10−15
  | DimEnt | 0.830 | 1.036 | 0.585 | 0.753
6 | Min | 3.98 × 10−33 | 1.33 × 10−32 | 1.09 × 10−32 | 3.11 × 10−32
  | Max | 1.20 × 10−27 | 3.46 × 10−29 | 1.32 × 101 | 4.30 × 10−29
  | Mean | 4.34 × 10−29 | 3.78 × 10−30 | 1.71 × 100 | 5.57 × 10−30
  | DimEnt | 0.818 | 1.047 | 0.664 | 0.751
7 | Min | 7.57 × 10−1 | 6.30 × 10−1 | 2.19 × 10−2 | 1.60 × 10−1
  | Max | 2.18 × 103 | 3.08 × 101 | 5.34 × 103 | 1.23 × 103
  | Mean | 9.04 × 101 | 7.58 × 100 | 5.60 × 102 | 5.27 × 101
  | DimEnt | 1.003 | 1.188 | 0.886 | 0.964
According to the table, the convex curve has the highest average diversity, followed by the straight line and then the broken line, while the concave curve has the lowest. In terms of optimization results, the convex curve maintains higher diversity in the early and middle stages, which guarantees wider exploration there; this ability allows it to better discover hidden global optima, which may explain its excellent performance on multimodal functions. For unimodal functions, appropriately accelerating convergence enhances the exploitation ability of the algorithm, so the straight line, whose diversity converges slightly faster, achieves better results. For any function, a rapid loss of diversity is not a good strategy, as the concave curve demonstrates.
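The four base curves are not given in closed form in the text; the following sketch shows one plausible family of decay schedules (line, convex, concave, broken line) from an initial entropy e0 down to 0, purely for illustration:

```python
def target_entropy(t, T, e0, curve="line"):
    """Expected entropy at iteration t of T, decaying from e0 to 0.
    The exact curve shapes are illustrative assumptions, not the paper's."""
    s = t / T  # progress in [0, 1]
    if curve == "line":
        return e0 * (1 - s)
    if curve == "convex":     # stays high early, drops late
        return e0 * (1 - s ** 2)
    if curve == "concave":    # drops quickly early
        return e0 * (1 - s) ** 2
    if curve == "broken":     # piecewise linear: slow decay, then fast
        return e0 * (1 - 0.5 * s / 0.7) if s < 0.7 else e0 * 0.5 * (1 - s) / 0.3
    raise ValueError(curve)
```

All four schedules start at e0 and end at 0; they differ only in how much diversity is retained mid-run, which is the property the comparison above isolates.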

4. Experiment and Discussion

In this section, we apply the improved strategy to the algorithms discussed in the previous chapter. CEC17, a test suite containing 29 functions, is selected as the benchmark. Among them, F01 and F03 are unimodal functions, F04–F10 are simple multimodal functions, F11–F20 are hybrid functions, and F21–F30 are composition functions. The CEC17 functions are shown in Table 5.
Table 5

CEC17 functions. U: unimodal, M: multimodal, H: hybrid, C: composition.

Num | Function Name | Property | Best Value
F01 | Shifted and Rotated Bent Cigar Function | U | 100
F03 | Shifted and Rotated Zakharov Function | U | 300
F04 | Shifted and Rotated Rosenbrock’s Function | M | 400
F05 | Shifted and Rotated Rastrigin’s Function | M | 500
F06 | Shifted and Rotated Expanded Scaffer’s F6 Function | M | 600
F07 | Shifted and Rotated Lunacek Bi_Rastrigin Function | M | 700
F08 | Shifted and Rotated Non-Continuous Rastrigin’s Function | M | 800
F09 | Shifted and Rotated Levy Function | M | 900
F10 | Shifted and Rotated Schwefel’s Function | M | 1000
F11 | Hybrid Function 1 (N = 3) | H | 1100
F12 | Hybrid Function 2 (N = 3) | H | 1200
F13 | Hybrid Function 3 (N = 3) | H | 1300
F14 | Hybrid Function 4 (N = 4) | H | 1400
F15 | Hybrid Function 5 (N = 4) | H | 1500
F16 | Hybrid Function 6 (N = 4) | H | 1600
F17 | Hybrid Function 7 (N = 5) | H | 1700
F18 | Hybrid Function 8 (N = 5) | H | 1800
F19 | Hybrid Function 9 (N = 5) | H | 1900
F20 | Hybrid Function 10 (N = 6) | H | 2000
F21 | Composition Function 1 | C | 2100
F22 | Composition Function 2 | C | 2200
F23 | Composition Function 3 | C | 2300
F24 | Composition Function 4 | C | 2400
F25 | Composition Function 5 | C | 2500
F26 | Composition Function 6 | C | 2600
F27 | Composition Function 7 | C | 2700
F28 | Composition Function 8 | C | 2800
F29 | Composition Function 9 | C | 2900
F30 | Composition Function 10 | C | 3000
The algorithms before and after the improvement are referred to as PSO and PSOG, and as BBPSO and BBPSOG, respectively. Experiments were run in 10 and 30 dimensions, and each experiment was repeated 30 times. The parameter settings of the algorithms are shown in Table 6.
Table 6

Parameter setting.

Algorithm | Parameter Setting
PSO | ω = 0.5, c1 = c2 = 2
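With the Table 6 settings (ω = 0.5, c1 = c2 = 2), a single particle update in standard PSO can be sketched as follows; this is the textbook velocity and position update, not necessarily the authors' exact implementation:

```python
import random

# Textbook PSO constants taken from Table 6: inertia weight and the
# cognitive/social acceleration coefficients.
W, C1, C2 = 0.5, 2.0, 2.0

def pso_step(x, v, pbest, gbest, rng=random):
    """One velocity/position update for a single particle.

    x, v, pbest, gbest are equal-length lists of floats: the current
    position, velocity, personal best, and global best.
    """
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = rng.random(), rng.random()
        vd = W * v[d] + C1 * r1 * (pbest[d] - x[d]) + C2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v

# With zero velocity and pbest = gbest ahead of the particle,
# every coordinate moves toward the best position.
random.seed(1)
x2, v2 = pso_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0])
assert all(c > 0 for c in x2)
```

BBPSO (bare bones PSO) removes the velocity term entirely and samples positions around the personal and global bests, which is why Table 6 lists no parameters for it.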
Considering that the swarm intelligence algorithm is a stochastic algorithm, we repeated each experiment 30 times and recorded the best value, the worst value, the mean, and the standard deviation. The better result on each test function is marked in bold. Statistical results are shown in Table 7, Table 8, Table 9 and Table 10.
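The per-function statistics reported in the tables can be reproduced with a few lines. `summarize` is a hypothetical helper; whether the paper reports the sample or population standard deviation is not stated, so the choice below is an assumption:

```python
import statistics

def summarize(results):
    """Min/max/mean/std over repeated runs, matching the four columns
    of Tables 7-10. pstdev (population std) is used here; the paper
    does not specify sample vs. population.
    """
    return {
        "min": min(results),
        "max": max(results),
        "mean": statistics.mean(results),
        "std": statistics.pstdev(results),
    }

# e.g. 30 final fitness values from 30 independent runs
runs = [100.0 + 0.5 * i for i in range(30)]
s = summarize(runs)
assert (s["min"], s["max"], s["mean"]) == (100.0, 114.5, 107.25)
```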
Table 7

PSO 10-dimensional improvement comparison.

fun | PSO (Dim = 10): min / max / mean / std | PSOG (Dim = 10): min / max / mean / std
f1 | 1.02 × 10^2 / 2.54 × 10^3 / 1.16 × 10^3 / 8.87 × 10^2 | 1.01 × 10^2 / 1.53 × 10^3 / 5.89 × 10^2 / 4.20 × 10^2
f3 | 3.00 × 10^2 / 3.00 × 10^2 / 3.00 × 10^2 / 0.00 × 10^0 | 3.00 × 10^2 / 3.00 × 10^2 / 3.00 × 10^2 / 0.00 × 10^0
f4 | 4.00 × 10^2 / 4.35 × 10^2 / 4.26 × 10^2 / 1.46 × 10^1 | 4.00 × 10^2 / 4.35 × 10^2 / 4.09 × 10^2 / 1.36 × 10^1
f5 | 5.07 × 10^2 / 5.34 × 10^2 / 5.18 × 10^2 / 6.35 × 10^0 | 5.04 × 10^2 / 5.17 × 10^2 / 5.13 × 10^2 / 3.95 × 10^0
f6 | 6.00 × 10^2 / 6.07 × 10^2 / 6.00 × 10^2 / 1.22 × 10^0 | 6.00 × 10^2 / 6.00 × 10^2 / 6.00 × 10^2 / 0.00 × 10^0
f7 | 7.13 × 10^2 / 7.38 × 10^2 / 7.21 × 10^2 / 5.54 × 10^0 | 7.12 × 10^2 / 7.21 × 10^2 / 7.18 × 10^2 / 2.12 × 10^0
f8 | 8.06 × 10^2 / 8.36 × 10^2 / 8.16 × 10^2 / 6.94 × 10^0 | 8.07 × 10^2 / 8.21 × 10^2 / 8.14 × 10^2 / 4.62 × 10^0
f9 | 9.00 × 10^2 / 9.00 × 10^2 / 9.00 × 10^2 / 1.63 × 10^−2 | 9.00 × 10^2 / 9.00 × 10^2 / 9.00 × 10^2 / 0.00 × 10^0
f10 | 1.13 × 10^3 / 1.85 × 10^3 / 1.48 × 10^3 / 1.96 × 10^2 | 1.13 × 10^3 / 1.45 × 10^3 / 1.31 × 10^3 / 9.45 × 10^1
f11 | 1.10 × 10^3 / 1.14 × 10^3 / 1.12 × 10^3 / 8.54 × 10^0 | 1.10 × 10^3 / 1.12 × 10^3 / 1.11 × 10^3 / 4.58 × 10^0
f12 | 2.05 × 10^3 / 4.35 × 10^5 / 2.75 × 10^4 / 7.79 × 10^4 | 1.88 × 10^3 / 2.13 × 10^4 / 9.67 × 10^3 / 6.94 × 10^3
f13 | 1.34 × 10^3 / 9.10 × 10^3 / 3.39 × 10^3 / 2.16 × 10^3 | 1.31 × 10^3 / 3.43 × 10^3 / 2.06 × 10^3 / 6.48 × 10^2
f14 | 1.43 × 10^3 / 1.77 × 10^3 / 1.49 × 10^3 / 6.32 × 10^1 | 1.44 × 10^3 / 1.47 × 10^3 / 1.45 × 10^3 / 7.40 × 10^0
f15 | 1.51 × 10^3 / 1.76 × 10^3 / 1.56 × 10^3 / 6.06 × 10^1 | 1.51 × 10^3 / 1.53 × 10^3 / 1.52 × 10^3 / 5.95 × 10^0
f16 | 1.60 × 10^3 / 1.86 × 10^3 / 1.72 × 10^3 / 6.14 × 10^1 | 1.60 × 10^3 / 1.72 × 10^3 / 1.64 × 10^3 / 5.27 × 10^1
f17 | 1.73 × 10^3 / 1.78 × 10^3 / 1.75 × 10^3 / 1.39 × 10^1 | 1.71 × 10^3 / 1.75 × 10^3 / 1.73 × 10^3 / 8.66 × 10^0
f18 | 1.84 × 10^3 / 1.29 × 10^4 / 5.07 × 10^3 / 2.91 × 10^3 | 1.93 × 10^3 / 5.67 × 10^3 / 3.66 × 10^3 / 1.04 × 10^3
f19 | 1.90 × 10^3 / 1.96 × 10^3 / 1.92 × 10^3 / 1.07 × 10^1 | 1.91 × 10^3 / 1.92 × 10^3 / 1.91 × 10^3 / 3.93 × 10^0
f20 | 2.01 × 10^3 / 2.20 × 10^3 / 2.07 × 10^3 / 5.31 × 10^1 | 2.00 × 10^3 / 2.04 × 10^3 / 2.03 × 10^3 / 7.21 × 10^0
f21 | 2.20 × 10^3 / 2.20 × 10^3 / 2.20 × 10^3 / 2.67 × 10^−13 | 2.20 × 10^3 / 2.20 × 10^3 / 2.20 × 10^3 / 2.09 × 10^−13
f22 | 2.30 × 10^3 / 2.30 × 10^3 / 2.30 × 10^3 / 2.39 × 10^−13 | 2.21 × 10^3 / 2.30 × 10^3 / 2.30 × 10^3 / 2.06 × 10^1
f23 | 2.40 × 10^3 / 2.82 × 10^3 / 2.71 × 10^3 / 7.80 × 10^1 | 2.40 × 10^3 / 2.67 × 10^3 / 2.62 × 10^3 / 9.54 × 10^1
f24 | 2.50 × 10^3 / 2.80 × 10^3 / 2.61 × 10^3 / 5.79 × 10^1 | 2.50 × 10^3 / 2.60 × 10^3 / 2.59 × 10^3 / 3.08 × 10^1
f25 | 2.89 × 10^3 / 2.95 × 10^3 / 2.94 × 10^3 / 2.04 × 10^1 | 2.90 × 10^3 / 2.95 × 10^3 / 2.93 × 10^3 / 2.32 × 10^1
f26 | 2.80 × 10^3 / 3.49 × 10^3 / 2.94 × 10^3 / 2.04 × 10^2 | 2.60 × 10^3 / 2.90 × 10^3 / 2.83 × 10^3 / 7.33 × 10^1
f27 | 3.10 × 10^3 / 3.50 × 10^3 / 3.29 × 10^3 / 1.15 × 10^2 | 3.10 × 10^3 / 3.23 × 10^3 / 3.16 × 10^3 / 4.45 × 10^1
f28 | 3.10 × 10^3 / 3.23 × 10^3 / 3.15 × 10^3 / 2.52 × 10^1 | 3.10 × 10^3 / 3.15 × 10^3 / 3.13 × 10^3 / 2.40 × 10^1
f29 | 3.15 × 10^3 / 3.30 × 10^3 / 3.18 × 10^3 / 3.45 × 10^1 | 3.14 × 10^3 / 3.17 × 10^3 / 3.16 × 10^3 / 1.05 × 10^1
f30 | 3.49 × 10^3 / 3.85 × 10^4 / 9.59 × 10^3 / 7.45 × 10^3 | 3.71 × 10^3 / 7.49 × 10^3 / 5.22 × 10^3 / 1.13 × 10^3
count | 0 | 24
Table 8

PSO 30-dimensional improvement comparison.

fun | PSO (Dim = 30): min / max / mean / std | PSOG (Dim = 30): min / max / mean / std
f1 | 1.00 × 10^2 / 1.22 × 10^9 / 1.37 × 10^8 / 3.26 × 10^8 | 1.00 × 10^2 / 1.01 × 10^2 / 1.00 × 10^2 / 2.95 × 10^−1
f3 | 3.05 × 10^2 / 3.93 × 10^2 / 3.34 × 10^2 / 2.29 × 10^1 | 3.09 × 10^2 / 3.53 × 10^2 / 3.31 × 10^2 / 1.51 × 10^1
f4 | 4.00 × 10^2 / 6.38 × 10^2 / 4.89 × 10^2 / 5.06 × 10^1 | 4.04 × 10^2 / 4.71 × 10^2 / 4.66 × 10^2 / 1.48 × 10^1
f5 | 5.73 × 10^2 / 6.71 × 10^2 / 6.05 × 10^2 / 2.42 × 10^1 | 5.64 × 10^2 / 5.95 × 10^2 / 5.80 × 10^2 / 9.27 × 10^0
f6 | 6.00 × 10^2 / 6.23 × 10^2 / 6.08 × 10^2 / 5.93 × 10^0 | 6.00 × 10^2 / 6.08 × 10^2 / 6.04 × 10^2 / 2.85 × 10^0
f7 | 7.68 × 10^2 / 8.46 × 10^2 / 8.10 × 10^2 / 2.12 × 10^1 | 7.77 × 10^2 / 8.22 × 10^2 / 7.99 × 10^2 / 1.40 × 10^1
f8 | 8.67 × 10^2 / 9.89 × 10^2 / 9.18 × 10^2 / 2.93 × 10^1 | 8.75 × 10^2 / 9.39 × 10^2 / 9.09 × 10^2 / 1.90 × 10^1
f9 | 9.08 × 10^2 / 4.85 × 10^3 / 2.61 × 10^3 / 1.02 × 10^3 | 9.31 × 10^2 / 2.88 × 10^3 / 1.87 × 10^3 / 6.15 × 10^2
f10 | 2.80 × 10^3 / 5.20 × 10^3 / 4.07 × 10^3 / 6.36 × 10^2 | 2.96 × 10^3 / 4.15 × 10^3 / 3.63 × 10^3 / 3.18 × 10^2
f11 | 1.18 × 10^3 / 1.43 × 10^3 / 1.25 × 10^3 / 5.91 × 10^1 | 1.17 × 10^3 / 1.27 × 10^3 / 1.23 × 10^3 / 3.01 × 10^1
f12 | 2.64 × 10^3 / 3.35 × 10^8 / 1.12 × 10^7 / 6.12 × 10^7 | 3.63 × 10^3 / 1.50 × 10^4 / 7.33 × 10^3 / 3.51 × 10^2
f13 | 1.35 × 10^3 / 1.02 × 10^4 / 2.05 × 10^3 / 1.80 × 10^3 | 1.41 × 10^3 / 2.71 × 10^3 / 1.85 × 10^3 / 4.06 × 10^2
f14 | 1.48 × 10^3 / 1.97 × 10^3 / 1.68 × 10^3 / 1.12 × 10^2 | 1.53 × 10^3 / 1.71 × 10^3 / 1.64 × 10^3 / 5.51 × 10^1
f15 | 1.53 × 10^3 / 1.92 × 10^3 / 1.59 × 10^3 / 7.02 × 10^1 | 1.52 × 10^3 / 1.61 × 10^3 / 1.57 × 10^3 / 2.55 × 10^1
f16 | 1.86 × 10^3 / 2.91 × 10^3 / 2.35 × 10^3 / 2.68 × 10^2 | 1.96 × 10^3 / 2.44 × 10^3 / 2.23 × 10^3 / 1.67 × 10^2
f17 | 1.80 × 10^3 / 2.52 × 10^3 / 2.09 × 10^3 / 1.83 × 10^2 | 1.79 × 10^3 / 2.02 × 10^3 / 1.89 × 10^3 / 7.44 × 10^1
f18 | 5.96 × 10^3 / 1.26 × 10^5 / 3.88 × 10^4 / 2.60 × 10^4 | 9.55 × 10^3 / 4.17 × 10^4 / 2.33 × 10^4 / 1.11 × 10^4
f19 | 1.98 × 10^3 / 2.93 × 10^4 / 6.50 × 10^3 / 6.01 × 10^3 | 1.95 × 10^3 / 4.41 × 10^3 / 2.77 × 10^3 / 8.62 × 10^2
f20 | 2.20 × 10^3 / 2.71 × 10^3 / 2.42 × 10^3 / 1.08 × 10^2 | 2.13 × 10^3 / 2.44 × 10^3 / 2.31 × 10^3 / 1.05 × 10^2
f21 | 2.20 × 10^3 / 2.20 × 10^3 / 2.20 × 10^3 / 2.25 × 10^−1 | 2.25 × 10^3 / 2.25 × 10^3 / 2.25 × 10^3 / 4.67 × 10^−13
f22 | 2.30 × 10^3 / 2.30 × 10^3 / 2.30 × 10^3 / 2.24 × 10^−1 | 2.35 × 10^3 / 2.35 × 10^3 / 2.35 × 10^3 / 4.55 × 10^−13
f23 | 3.04 × 10^3 / 4.20 × 10^3 / 3.54 × 10^3 / 3.24 × 10^2 | 2.83 × 10^3 / 2.88 × 10^3 / 2.87 × 10^3 / 1.30 × 10^1
f24 | 2.60 × 10^3 / 2.61 × 10^3 / 2.60 × 10^3 / 1.55 × 10^0 | 2.60 × 10^3 / 2.60 × 10^3 / 2.60 × 10^3 / 4.43 × 10^−13
f25 | 2.90 × 10^3 / 3.05 × 10^3 / 2.94 × 10^3 / 4.21 × 10^1 | 2.90 × 10^3 / 2.97 × 10^3 / 2.94 × 10^3 / 2.79 × 10^1
f26 | 2.80 × 10^3 / 2.90 × 10^3 / 2.80 × 10^3 / 1.83 × 10^1 | 2.80 × 10^3 / 2.80 × 10^3 / 2.80 × 10^3 / 5.00 × 10^−13
f27 | 3.78 × 10^3 / 5.06 × 10^3 / 4.39 × 10^3 / 3.41 × 10^2 | 3.38 × 10^3 / 3.59 × 10^3 / 3.51 × 10^3 / 5.25 × 10^1
f28 | 3.17 × 10^3 / 3.95 × 10^3 / 3.31 × 10^3 / 1.39 × 10^2 | 3.17 × 10^3 / 3.28 × 10^3 / 3.24 × 10^3 / 3.29 × 10^1
f29 | 3.35 × 10^3 / 4.11 × 10^3 / 3.59 × 10^3 / 2.12 × 10^2 | 3.29 × 10^3 / 3.65 × 10^3 / 3.49 × 10^3 / 1.10 × 10^2
f30 | 4.19 × 10^3 / 1.88 × 10^5 / 1.60 × 10^4 / 3.39 × 10^4 | 4.44 × 10^3 / 1.60 × 10^4 / 9.79 × 10^3 / 3.31 × 10^3
count | 2 | 24
Table 9

BBPSO 10-dimensional improvement comparison.

fun | BBPSO (Dim = 10): min / max / mean / std | BBPSOG (Dim = 10): min / max / mean / std
f1 | 1.28 × 10^2 / 2.54 × 10^3 / 1.28 × 10^3 / 6.85 × 10^2 | 1.50 × 10^2 / 2.12 × 10^3 / 1.21 × 10^3 / 5.45 × 10^2
f3 | 3.00 × 10^2 / 3.00 × 10^2 / 3.00 × 10^2 / 0.00 × 10^0 | 3.00 × 10^2 / 3.00 × 10^2 / 3.00 × 10^2 / 0.00 × 10^0
f4 | 4.00 × 10^2 / 5.21 × 10^2 / 4.30 × 10^2 / 2.23 × 10^1 | 4.00 × 10^2 / 4.35 × 10^2 / 4.17 × 10^2 / 1.65 × 10^1
f5 | 5.04 × 10^2 / 5.27 × 10^2 / 5.13 × 10^2 / 5.83 × 10^0 | 5.05 × 10^2 / 5.12 × 10^2 / 5.09 × 10^2 / 2.08 × 10^0
f6 | 6.00 × 10^2 / 6.01 × 10^2 / 6.00 × 10^2 / 1.76 × 10^−1 | 6.00 × 10^2 / 6.00 × 10^2 / 6.00 × 10^2 / 3.69 × 10^−14
f7 | 7.08 × 10^2 / 7.26 × 10^2 / 7.18 × 10^2 / 4.38 × 10^0 | 7.13 × 10^2 / 7.22 × 10^2 / 7.18 × 10^2 / 2.75 × 10^0
f8 | 8.05 × 10^2 / 8.22 × 10^2 / 8.12 × 10^2 / 4.40 × 10^0 | 8.05 × 10^2 / 8.13 × 10^2 / 8.09 × 10^2 / 2.63 × 10^0
f9 | 9.00 × 10^2 / 9.02 × 10^2 / 9.00 × 10^2 / 4.54 × 10^−1 | 9.00 × 10^2 / 9.00 × 10^2 / 9.00 × 10^2 / 0.00 × 10^0
f10 | 1.03 × 10^3 / 1.77 × 10^3 / 1.34 × 10^3 / 2.04 × 10^2 | 1.04 × 10^3 / 1.35 × 10^3 / 1.17 × 10^3 / 1.10 × 10^2
f11 | 1.10 × 10^3 / 1.12 × 10^3 / 1.11 × 10^3 / 5.17 × 10^0 | 1.10 × 10^3 / 1.11 × 10^3 / 1.10 × 10^3 / 1.91 × 10^0
f12 | 2.40 × 10^3 / 4.36 × 10^5 / 3.59 × 10^4 / 7.77 × 10^4 | 3.97 × 10^3 / 2.58 × 10^4 / 1.19 × 10^4 / 5.72 × 10^3
f13 | 1.31 × 10^3 / 9.37 × 10^3 / 4.41 × 10^3 / 2.86 × 10^3 | 1.32 × 10^3 / 4.04 × 10^3 / 2.27 × 10^3 / 9.62 × 10^2
f14 | 1.43 × 10^3 / 1.54 × 10^3 / 1.46 × 10^3 / 2.87 × 10^1 | 1.43 × 10^3 / 1.44 × 10^3 / 1.43 × 10^3 / 5.87 × 10^0
f15 | 1.51 × 10^3 / 1.69 × 10^3 / 1.59 × 10^3 / 5.23 × 10^1 | 1.51 × 10^3 / 1.59 × 10^3 / 1.54 × 10^3 / 2.34 × 10^1
f16 | 1.60 × 10^3 / 1.81 × 10^3 / 1.68 × 10^3 / 7.19 × 10^1 | 1.60 × 10^3 / 1.64 × 10^3 / 1.61 × 10^3 / 1.26 × 10^1
f17 | 1.71 × 10^3 / 1.85 × 10^3 / 1.75 × 10^3 / 3.17 × 10^1 | 1.72 × 10^3 / 1.74 × 10^3 / 1.73 × 10^3 / 5.67 × 10^0
f18 | 1.86 × 10^3 / 2.34 × 10^4 / 6.48 × 10^3 / 5.58 × 10^3 | 2.09 × 10^3 / 5.37 × 10^3 / 3.03 × 10^3 / 8.68 × 10^2
f19 | 1.90 × 10^3 / 2.12 × 10^3 / 1.94 × 10^3 / 5.16 × 10^1 | 1.90 × 10^3 / 1.92 × 10^3 / 1.91 × 10^3 / 4.37 × 10^0
f20 | 2.00 × 10^3 / 2.08 × 10^3 / 2.03 × 10^3 / 1.79 × 10^1 | 2.00 × 10^3 / 2.04 × 10^3 / 2.03 × 10^3 / 1.05 × 10^1
f21 | 2.20 × 10^3 / 2.20 × 10^3 / 2.20 × 10^3 / 2.53 × 10^−13 | 2.25 × 10^3 / 2.27 × 10^3 / 2.26 × 10^3 / 7.21 × 10^0
f22 | 2.21 × 10^3 / 2.30 × 10^3 / 2.30 × 10^3 / 1.70 × 10^1 | 2.24 × 10^3 / 2.39 × 10^3 / 2.35 × 10^3 / 4.83 × 10^1
f23 | 2.65 × 10^3 / 2.71 × 10^3 / 2.68 × 10^3 / 1.31 × 10^1 | 2.65 × 10^3 / 2.67 × 10^3 / 2.67 × 10^3 / 5.07 × 10^0
f24 | 2.50 × 10^3 / 2.82 × 10^3 / 2.74 × 10^3 / 1.21 × 10^2 | 2.50 × 10^3 / 2.81 × 10^3 / 2.72 × 10^3 / 1.39 × 10^2
f25 | 2.89 × 10^3 / 2.97 × 10^3 / 2.93 × 10^3 / 2.54 × 10^1 | 2.89 × 10^3 / 2.94 × 10^3 / 2.91 × 10^3 / 1.86 × 10^1
f26 | 2.90 × 10^3 / 3.62 × 10^3 / 3.21 × 10^3 / 2.42 × 10^2 | 2.60 × 10^3 / 3.37 × 10^3 / 3.00 × 10^3 / 1.92 × 10^2
f27 | 3.14 × 10^3 / 3.31 × 10^3 / 3.17 × 10^3 / 4.30 × 10^1 | 3.12 × 10^3 / 3.15 × 10^3 / 3.14 × 10^3 / 5.87 × 10^0
f28 | 3.10 × 10^3 / 3.37 × 10^3 / 3.18 × 10^3 / 6.51 × 10^1 | 3.10 × 10^3 / 3.15 × 10^3 / 3.13 × 10^3 / 2.50 × 10^1
f29 | 3.14 × 10^3 / 3.29 × 10^3 / 3.18 × 10^3 / 3.12 × 10^1 | 3.14 × 10^3 / 3.17 × 10^3 / 3.16 × 10^3 / 7.45 × 10^0
f30 | 3.97 × 10^3 / 2.32 × 10^5 / 1.58 × 10^4 / 4.10 × 10^4 | 4.66 × 10^3 / 1.03 × 10^4 / 7.68 × 10^3 / 1.52 × 10^3
count | 2 | 22
Table 10

BBPSO 30-dimensional improvement comparison.

fun | BBPSO (Dim = 30): min / max / mean / std | BBPSOG (Dim = 30): min / max / mean / std
f1 | 1.00 × 10^2 / 5.10 × 10^9 / 1.58 × 10^9 / 1.52 × 10^9 | 1.00 × 10^2 / 2.91 × 10^3 / 1.22 × 10^3 / 1.41 × 10^3
f3 | 9.72 × 10^3 / 3.63 × 10^4 / 2.08 × 10^4 / 6.19 × 10^3 | 6.85 × 10^3 / 2.74 × 10^4 / 2.05 × 10^4 / 5.65 × 10^3
f4 | 4.06 × 10^2 / 9.89 × 10^2 / 6.01 × 10^2 / 1.48 × 10^2 | 4.63 × 10^2 / 4.76 × 10^2 / 4.69 × 10^2 / 4.71 × 10^0
f5 | 5.52 × 10^2 / 7.29 × 10^2 / 6.31 × 10^2 / 3.50 × 10^1 | 5.51 × 10^2 / 5.91 × 10^2 / 5.76 × 10^2 / 1.04 × 10^1
f6 | 6.00 × 10^2 / 6.26 × 10^2 / 6.06 × 10^2 / 5.35 × 10^0 | 6.00 × 10^2 / 6.01 × 10^2 / 6.00 × 10^2 / 2.04 × 10^−1
f7 | 7.72 × 10^2 / 9.96 × 10^2 / 8.53 × 10^2 / 5.20 × 10^1 | 7.81 × 10^2 / 8.34 × 10^2 / 8.16 × 10^2 / 1.54 × 10^1
f8 | 8.69 × 10^2 / 1.02 × 10^3 / 9.28 × 10^2 / 3.52 × 10^1 | 8.53 × 10^2 / 8.85 × 10^2 / 8.71 × 10^2 / 1.00 × 10^1
f9 | 1.39 × 10^3 / 1.06 × 10^4 / 2.93 × 10^3 / 1.90 × 10^3 | 9.16 × 10^2 / 1.22 × 10^3 / 1.03 × 10^3 / 9.76 × 10^1
f10 | 2.63 × 10^3 / 5.80 × 10^3 / 4.24 × 10^3 / 7.55 × 10^2 | 2.76 × 10^3 / 3.82 × 10^3 / 3.40 × 10^3 / 2.87 × 10^2
f11 | 1.21 × 10^3 / 1.97 × 10^3 / 1.43 × 10^3 / 1.49 × 10^2 | 1.18 × 10^3 / 1.30 × 10^3 / 1.25 × 10^3 / 3.68 × 10^1
f12 | 2.25 × 10^4 / 5.70 × 10^8 / 5.68 × 10^7 / 1.31 × 10^8 | 6.28 × 10^3 / 9.46 × 10^4 / 3.70 × 10^4 / 2.61 × 10^4
f13 | 1.92 × 10^3 / 9.56 × 10^5 / 4.38 × 10^4 / 1.73 × 10^5 | 3.58 × 10^3 / 1.03 × 10^4 / 6.82 × 10^3 / 1.86 × 10^3
f14 | 1.46 × 10^3 / 1.38 × 10^6 / 4.90 × 10^4 / 2.51 × 10^5 | 1.50 × 10^3 / 1.64 × 10^3 / 1.55 × 10^3 / 4.16 × 10^1
f15 | 1.88 × 10^3 / 3.36 × 10^4 / 9.47 × 10^3 / 8.20 × 10^3 | 1.83 × 10^3 / 3.81 × 10^3 / 2.37 × 10^3 / 4.74 × 10^2
f16 | 1.83 × 10^3 / 3.09 × 10^3 / 2.53 × 10^3 / 3.53 × 10^2 | 1.97 × 10^3 / 2.53 × 10^3 / 2.28 × 10^3 / 1.73 × 10^2
f17 | 1.89 × 10^3 / 2.51 × 10^3 / 2.07 × 10^3 / 1.41 × 10^2 | 1.87 × 10^3 / 2.06 × 10^3 / 1.95 × 10^3 / 5.21 × 10^1
f18 | 2.58 × 10^4 / 7.94 × 10^5 / 1.65 × 10^5 / 1.55 × 10^5 | 2.09 × 10^4 / 1.25 × 10^5 / 7.43 × 10^4 / 2.93 × 10^4
f19 | 2.02 × 10^3 / 4.68 × 10^4 / 1.18 × 10^4 / 1.19 × 10^4 | 2.00 × 10^3 / 1.23 × 10^4 / 5.57 × 10^3 / 3.89 × 10^3
f20 | 2.09 × 10^3 / 2.57 × 10^3 / 2.30 × 10^3 / 1.22 × 10^2 | 2.09 × 10^3 / 2.29 × 10^3 / 2.21 × 10^3 / 6.24 × 10^1
f21 | 2.10 × 10^3 / 2.82 × 10^3 / 2.25 × 10^3 / 1.33 × 10^2 | 2.25 × 10^3 / 2.25 × 10^3 / 2.25 × 10^3 / 4.30 × 10^−13
f22 | 2.26 × 10^3 / 2.40 × 10^3 / 2.31 × 10^3 / 3.30 × 10^1 | 2.35 × 10^3 / 2.35 × 10^3 / 2.35 × 10^3 / 4.55 × 10^−13
f23 | 2.88 × 10^3 / 3.06 × 10^3 / 2.93 × 10^3 / 3.57 × 10^1 | 2.85 × 10^3 / 2.88 × 10^3 / 2.86 × 10^3 / 8.17 × 10^0
f24 | 3.43 × 10^3 / 3.55 × 10^3 / 3.47 × 10^3 / 3.19 × 10^1 | 3.38 × 10^3 / 3.41 × 10^3 / 3.40 × 10^3 / 9.35 × 10^0
f25 | 2.90 × 10^3 / 3.25 × 10^3 / 3.00 × 10^3 / 8.65 × 10^1 | 2.91 × 10^3 / 2.98 × 10^3 / 2.93 × 10^3 / 2.68 × 10^1
f26 | 5.26 × 10^3 / 6.77 × 10^3 / 5.93 × 10^3 / 3.56 × 10^2 | 3.54 × 10^3 / 5.50 × 10^3 / 5.13 × 10^3 / 5.39 × 10^2
f27 | 3.44 × 10^3 / 3.79 × 10^3 / 3.58 × 10^3 / 8.82 × 10^1 | 3.41 × 10^3 / 3.50 × 10^3 / 3.46 × 10^3 / 2.89 × 10^1
f28 | 3.28 × 10^3 / 5.56 × 10^3 / 4.34 × 10^3 / 9.07 × 10^2 | 3.18 × 10^3 / 5.16 × 10^3 / 3.80 × 10^3 / 9.12 × 10^2
f29 | 3.41 × 10^3 / 4.13 × 10^3 / 3.70 × 10^3 / 1.82 × 10^2 | 3.31 × 10^3 / 3.57 × 10^3 / 3.46 × 10^3 / 7.66 × 10^1
f30 | 7.84 × 10^3 / 9.17 × 10^5 / 1.52 × 10^5 / 2.42 × 10^5 | 5.25 × 10^3 / 4.65 × 10^4 / 2.07 × 10^4 / 1.24 × 10^4
count | 1 | 27
In summary, the algorithm using the population diversity control strategy achieves a better optimization effect than the original algorithm on most test functions, which indicates that our strategy of controlling population diversity according to dimension entropy, maintaining diversity in the early stage while ensuring convergence in the late stage, is effective. Comparing across dimensions, the improved algorithm shows an even more obvious advantage in the higher dimension, which indicates that dimension entropy remains robust and adaptable in high-dimensional problems.
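As a rough illustration of the idea summarized above, one plausible way to compute a dimension-entropy diversity measure is to bin each dimension of the population and average the per-dimension Shannon entropies. The paper's exact definition (given in an earlier section of the article) may differ in binning and normalization; this sketch only shows the general construction:

```python
import math

def dimension_entropy(population, bins=10):
    """Average per-dimension Shannon entropy of a population.

    population: list of positions (equal-length lists of floats).
    Each dimension is discretized into `bins` equal-width intervals,
    the entropy of the resulting frequency distribution is computed,
    and the values are averaged over dimensions.
    """
    n, dims = len(population), len(population[0])
    total = 0.0
    for d in range(dims):
        vals = [p[d] for p in population]
        lo, hi = min(vals), max(vals)
        width = (hi - lo) or 1.0  # collapsed dimension -> single bin
        counts = [0] * bins
        for v in vals:
            k = min(int((v - lo) / width * bins), bins - 1)
            counts[k] += 1
        total += -sum((c / n) * math.log(c / n) for c in counts if c)
    return total / dims

# A spread-out population has higher dimension entropy than a
# collapsed one, signalling more diversity.
spread = [[i / 9, i / 9] for i in range(10)]
collapsed = [[0.5, 0.5]] * 10
assert dimension_entropy(spread) > dimension_entropy(collapsed)
```

A diversity controller can then compare this value against a target schedule and perturb or re-sample particles when the measured entropy falls below the target.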

5. Conclusions

This paper proposes a population diversity measure based on dimension entropy, which combines dimension learning with entropy. Treating the entropy of each dimension independently affords strong robustness, while the entropy-based measure adapts well to changes in population size and responds to them clearly. Dimension entropy is then used to control population diversity within the update strategy of the swarm intelligence algorithm: by restraining the loss of particle diversity, the improved algorithm avoids the rapid diversity loss that causes traditional swarm intelligence algorithms to stagnate at local optima, while still ensuring convergence in the later stage. Experiments on the 29 CEC17 test functions verify the effectiveness of this strategy.