
A Multi-Strategy Adaptive Comprehensive Learning PSO Algorithm and Its Application.

Ye'e Zhang1, Xiaoxia Song1.   

Abstract

In this paper, a multi-strategy adaptive comprehensive learning particle swarm optimization algorithm is proposed by introducing comprehensive learning, multi-population parallelism, and parameter adaptation. In the proposed algorithm, a multi-population parallel strategy is designed to improve population diversity and accelerate convergence, and particle exchange and mutation between subpopulations ensure information sharing among the particles. Then, the global optimal value is added to the velocity update to design a new velocity update strategy that improves the local search ability. The comprehensive learning strategy is employed to construct learning samples, so as to effectively promote information exchange and avoid falling into local extrema. By linearly changing the learning factors, a new factor adjustment strategy is developed to enhance the global search ability, and a new adaptive inertia weight adjustment strategy based on an S-shaped decreasing function is developed to balance the search ability. Finally, some benchmark functions and the parameter optimization of photovoltaics are selected for evaluation. The proposed algorithm obtains the best performance on 6 out of 10 functions. The results show that the proposed algorithm greatly improves diversity, solution accuracy, and search ability compared with some variants of particle swarm optimization and other algorithms, and it provides a more effective parameter combination for the complex engineering problem of photovoltaics, improving the energy conversion efficiency.

Keywords:  CLPSO; multi-strategy; photovoltaic optimization; search ability

Year:  2022        PMID: 35885113      PMCID: PMC9317180          DOI: 10.3390/e24070890

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.738


1. Introduction

Many real-world problems can be transformed into optimization problems. These optimization problems have complex characteristics, such as multiple constraints, high dimensionality, nonlinearity, and uncertainty, making them difficult to solve with traditional optimization methods [1,2]. Therefore, efficient new methods are sought to solve these complex problems. Swarm intelligence optimization algorithms are a new evolutionary computing technology: intelligent optimization algorithms with distributed behavior, inspired by the swarm behavior of insects, herds, birds, fish, etc. [3,4,5]. They have become a research focus for more and more researchers, are closely related to artificial life, and include Harris hawk optimization (HHO), the slime mold algorithm (SMA), the artificial bee colony (ABC), firefly optimization, cuckoo search, and the brainstorming optimization algorithm [6,7,8,9], with applications in engineering scheduling, image processing, the traveling salesman problem, cluster analysis, and logistics location. PSO is a swarm intelligence optimization technique developed by Kennedy and Eberhart [10]. Its main idea is to solve the optimization problem through individual cooperation and information sharing. PSO has a simple structure and strong parallelism; therefore, it has been used in multi-objective optimization, scheduling optimization, vehicle routing problems, etc. Although PSO shows good optimization performance, it converges slowly on complex optimization problems. Thus, a variety of improvement strategies for PSO have been presented. Nickabadi et al. [11] presented a new adaptive inertia weight approach. Wang et al. [12] presented a self-adaptive learning model based on PSO for solving application problems. Zhan et al. [13] presented an orthogonal learning strategy for PSO. Li and Yao [14] presented a cooperative PSO.
Xu [15] presented an adaptive tuning for the parameters of PSO based on a velocity and inertia weight strategy to avoid the velocity approaching zero in the early stages. Wang et al. [16] presented a hybrid PSO using a diversity mechanism and neighborhood search. Chen et al. [17] presented an aging leader and challenger PSO. Qu et al. [18] presented a distance-based PSO. Cheng and Jin [19] presented a social learning PSO based on controlling dimension-dependent parameters. Tanweer et al. [20] presented a self-regulating PSO with the best human learning. Taherkhani et al. [21] presented an adaptive PSO approach. Moradi and Gholampour [22] presented a hybrid PSO based on a local search strategy. Gong et al. [23] developed a new hybridized PSO framework with another optimization method for "learning". Nouiri et al. [24] presented an effective and distributed PSO. Wang et al. [25] presented a hybrid PSO with adaptive learning to guarantee exploitation. Aydilek [26] presented a hybrid PSO with a firefly algorithm mechanism. Xue et al. [27] presented a self-adaptive PSO. Song et al. [28] presented a variable-size cooperative co-evolutionary PSO with the idea of "divide and conquer". Song et al. [29] presented a bare-bones PSO with mutual information. The comprehensive learning PSO (CLPSO) algorithm is a variant of PSO that performs well on multimodal problems. However, because the CLPSO algorithm uses only the current search velocity and individual optimal values to update the search velocity, the velocity becomes very small in later iterations, resulting in slow convergence and reduced computational efficiency. In order to improve the CLPSO algorithm, researchers have carried out useful work. Liang et al. [30] presented a variant of PSO (CLPSO) using a new learning strategy. Maitra et al. [31] presented a hybrid cooperative CLPSO by cloning fitter particles.
Mahadevan and Kannan [32] presented a learning strategy for PSO to develop a CLPSO that overcomes premature convergence. Ali and Khan [33] presented an attributed multi-objective CLPSO for solving well-known benchmark problems. Hu et al. [34] presented a CLPSO-based memetic algorithm. Zhong et al. [35] presented a discrete CLPSO with the acceptance criterion of simulated annealing (SA). Lin and Sun [36] presented a multi-leader CLPSO based on adaptive mutation. Zhang et al. [37] presented a local optima topology (LOT) structure with the CLPSO for solving various functions. Lin et al. [38] presented an adaptive mechanism to adjust the comprehensive learning probability of CLPSO. Wang and Liu [39] presented a novel saturated control method for a quadrotor to achieve three-dimensional spatial trajectory tracking with heterogeneous CLPSO. Cao et al. [40] presented a CLPSO with local search. Chen et al. [41] presented a grey-wolf-enhanced CLPSO based on the elite-based dominance scheme. Wang et al. [42] presented a heterogeneous CLPSO with a mutation operator and dynamic multi-swarm. Zhang et al. [43] presented a novel CLPSO using the Bayesian iteration method. Zhou et al. [44] presented an adaptive hierarchical update CLPSO based on strategies of weighted synthesis. Tao et al. [45] presented an enhanced CLPSO with dynamic multi-swarm. These improved CLPSO algorithms use the individual optimal information of particles to guide the whole iterative process, have better diversity and search range, and can solve complex multimodal problems. However, because the global optimal value does not participate in the particle velocity and position updates, the particle velocity becomes too small in the later search, and the convergence speed is slow. At the same time, due to the lack of measures for avoiding local optima, once the optimal values of most particles fall into a local optimum, the algorithm converges without finding the global optimal value, and the performance is unstable.
Therefore, to improve the optimization performance of CLPSO, a novel multi-strategy adaptive CLPSO (MSACLPSO) based on comprehensive learning, multi-population parallelism, and parameter adaptation is designed in this paper. MSACLPSO effectively promotes information exchange in different dimensions, ensures information sharing in the population, enhances convergence and stability, and balances the search ability compared with the other related algorithms. The main contributions and novelties of this paper are as follows. A novel multi-strategy adaptive CLPSO (MSACLPSO) based on comprehensive learning, multi-population parallelism, and parameter adaptation is presented. A multi-population parallel strategy is designed to improve population diversity and accelerate convergence. A new velocity update strategy is designed by adding the global optimal value of the population to the velocity update. A new adaptive adjustment strategy of the learning factors is developed by linearly changing the learning factors. A parameter optimization method for photovoltaics is designed to prove the practical applicability.

2. PSO

PSO is a population-based search algorithm that simulates the social behavior of birds within a range. In PSO, all individuals are referred to as particles, which are flown through the search space to imitate the success of other individuals. The position of a particle changes according to the individual's social and psychological tendencies; the change of one particle is influenced by the knowledge or experience of the others. As a model of this social behavior, the search is drawn back to previously successful areas of the search space. The particle's velocity (v_i) and position (x_i) are changed according to the particle best value (pBest_i) and the global best value (gBest). The formulas for updating velocity and position are given as follows:

v_i^(t+1) = ω v_i^t + c_1 r_1 (pBest_i − x_i^t) + c_2 r_2 (gBest − x_i^t)
x_i^(t+1) = x_i^t + v_i^(t+1)

where v_i^t is the velocity of particle i at iteration t, x_i^t is the position of particle i at iteration t, and the position of the particle is related to its velocity. ω is an inertia weight factor, which reflects the motion habits of particles and represents the tendency of particles to maintain their previous velocity. c_1 is a self-cognition factor, which reflects the particle's memory of its own historical experience and represents the particle's tendency to approach its own best position. c_2 is a social cognition factor, which reflects the population's historical experience of collaboration and knowledge sharing among particles and represents the particles' tendency to approach the best position in the population or neighborhood history. r_1 and r_2 are random numbers in [0, 1]. Generally, each component of the velocity can be clamped to the range [−v_max, v_max] in order to control the excessive roaming of particles outside the search space. PSO terminates when the maximal number of iterations is reached or the best particle position can no longer be improved. PSO achieves good robustness and effectiveness in solving optimization problems. The basic flow of the PSO is shown in Figure 1.
Figure 1

The basic flow of the PSO.
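The velocity and position updates above can be sketched as a minimal global-best PSO in Python. The function names and the default parameter values (ω = 0.72, c_1 = c_2 = 1.49445, velocity clamp at 20% of the range) are illustrative choices, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.72, c1=1.49445, c2=1.49445,
        xmin=-100.0, xmax=100.0):
    """Minimize f over [xmin, xmax]^dim with a basic global-best PSO."""
    vmax = 0.2 * (xmax - xmin)  # clamp velocity components to [-vmax, vmax]
    x = [[random.uniform(xmin, xmax) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + self-cognition + social cognition terms
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:  # update personal and global bests
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

Running this on the 2-D Sphere function drives the best value close to zero within a few hundred iterations.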

3. CLPSO

PSO can easily fall into local extrema, which leads to premature convergence. Thus, a new update strategy was presented to develop the CLPSO algorithm. In the basic PSO, each particle learns from its own optimal value and the global optimal value. In the velocity update formula of CLPSO, by contrast, the social part in which particles learn from the global optimal solution is not used. In addition, in the velocity update formula of the traditional PSO algorithm, each particle learns from all dimensions of its own optimal value, but its own optimal value is not optimal in every dimension. Therefore, the CLPSO algorithm introduces a comprehensive learning strategy that constructs learning samples using the pBest of all particles to promote information exchange, improve population diversity, and avoid falling into local extrema. The comprehensive learning strategy uses the individual historical optimal solutions of all particles in the population to update the particle positions, in order to effectively enhance the exploration ability of PSO and achieve excellent optimization performance on multimodal optimization problems. The velocity and position update of particle i is described as follows:

v_i^d = ω v_i^d + c r^d (pBest_{f_i(d)}^d − x_i^d)
x_i^d = x_i^d + v_i^d

where i = 1, 2, …, P and d = 1, 2, …, D; P is the size of the population and D is the search space dimension; x_i^d is the particle position; v_i^d is the velocity of particle i, limited to the velocity range [−v_max, v_max]; ω is the inertia weight; c is the learning factor; r^d is a uniformly distributed random number on (0, 1); and f_i(d) denotes the particle whose pBest particle i learns from in dimension d, which can be the optimal position of any particle. The determination of f_i(d) is as follows: for each dimension of each particle, a random probability is produced. If the random probability is greater than the learning probability Pc_i, then this dimension learns from the corresponding dimension of the particle's own individual optimal value. Otherwise, two particles are randomly selected, and the dimension learns from the better of their two individual optimal values.
To ensure the population's diversity, CLPSO also sets an update interval number m; that is, when the individual optimal value of particle i has not been updated for m successive iterations, its learning sample is regenerated.
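The exemplar-construction rule described above can be sketched as follows. The function name and the minimization convention are assumptions; the "at least one dimension must learn from another particle" fallback is the standard CLPSO detail when every dimension happens to pick the particle's own pBest:

```python
import random

def clpso_exemplar(i, pbest, pbest_val, pc):
    """Build the comprehensive-learning exemplar for particle i.

    For each dimension: with probability pc, learn from the fitter of two
    randomly chosen particles' pbest (minimization assumed); otherwise,
    learn from particle i's own pbest in that dimension.
    """
    n, dim = len(pbest), len(pbest[0])
    exemplar = []
    learned_from_other = False
    for d in range(dim):
        if random.random() < pc:
            a, b = random.sample(range(n), 2)
            src = a if pbest_val[a] < pbest_val[b] else b  # fitter of the two
            exemplar.append(pbest[src][d])
            if src != i:
                learned_from_other = True
        else:
            exemplar.append(pbest[i][d])
    if not learned_from_other:
        # force one random dimension to learn from some other particle
        d = random.randrange(dim)
        others = [j for j in range(n) if j != i]
        exemplar[d] = pbest[random.choice(others)][d]
    return exemplar
```

The resulting exemplar replaces pBest_{f_i(d)} in the velocity update, and is kept until the particle stagnates for m iterations.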

4. MSACLPSO

PSO is simple and practical, but its parameters are fixed, it easily falls into local optima, and its local search ability is weak. CLPSO has low velocity in the later search, slow convergence, and unstable performance. To solve these problems, a multi-strategy adaptive CLPSO (MSACLPSO) algorithm is proposed by introducing a comprehensive learning strategy, a multi-population parallel strategy, a new velocity update strategy, and a parameter adaptation strategy. In MSACLPSO, the comprehensive learning strategy is introduced to construct learning samples using the pBest of all particles, to promote information exchange, improve population diversity, and avoid falling into local extrema. To overcome the lack of local search ability in the later stage, the global optimal value of the population is used in the velocity update, and a new update strategy is proposed to enhance the local search ability. The multi-population parallel strategy divides the population into subpopulations, whose iterative evolution is carried out in parallel with particle exchange and mutation under appropriate conditions, enhancing the population diversity, accelerating the convergence, and ensuring information sharing between the particles. The linearly changing strategy of the learning factors realizes adaptive adjustment across the different stages of the iterative evolution, which enhances the global search ability early on and improves the local search ability later. The S-shaped decreasing function realizes adaptive adjustment of the inertia weight, ensuring that the population has high speed in the initial stage, a reduced search speed in the middle stage, so that the particles more easily converge to the global optimum, and a certain maintained speed for the final convergence in the later stage.

4.1. Multi-Population Parallel Strategy

The idea of multi-population parallelism is based on the natural phenomenon of the same species evolving in different regions. It divides the population into multiple subpopulations, and each subpopulation then searches for the optimal value in parallel to improve the search ability. The indirect exchange of optimal values and the dynamic recombination of the population enhance population diversity and accelerate convergence. A multi-population parallel strategy is proposed here. Its main ideas are as follows: the population is divided into N subpopulations during evolution; within each subpopulation the particles evolve iteratively, and particle exchange and particle mutation are executed under appropriate conditions according to certain rules, so that information sharing between the particles of the population is ensured through the exchange of particles between subpopulations. In addition, to enhance the local search ability of the CLPSO algorithm in the later stage, a new update strategy is applied after the first g0 generations are completed; that is, the global optimal value of the population is added to the velocity update, as shown in Equation (4), where c_1 and c_2 are learning factors, pBest_i is the optimal value of particle i in its subpopulation k, sBest_k is the optimal value of subpopulation k, and r_1 and r_2 are uniformly distributed random numbers on (0, 1).
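Equation (4) is garbled in this extraction, so its exact form is not recoverable; a plausible sketch, keeping the comprehensive-learning exemplar term and adding attraction toward the population's global best (the stated purpose of the new strategy), is shown below. The function name and the choice of terms are assumptions:

```python
import random

def velocity_update_late(v, x, exemplar, gbest, w, c1, c2, vmax):
    """Late-stage velocity update (a sketch of Equation (4)): the
    comprehensive-learning term is kept, and a global-best term is added
    to strengthen local search after the first g0 generations."""
    new_v = []
    for d in range(len(v)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (exemplar[d] - x[d])   # comprehensive-learning term
              + c2 * r2 * (gbest[d] - x[d]))     # added global-best term
        new_v.append(max(-vmax, min(vmax, vd)))  # clamp to [-vmax, vmax]
    return new_v
```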

4.2. Adaptive Learning Factor Strategy

In PSO, the values of c_1 and c_2 are set in advance according to experience, reducing the self-learning ability. Therefore, a linearly changing strategy is developed for the learning factors c_1 and c_2. In the early evolution stage, a large self-cognition term and a small social cognition term improve the global search ability; as the evolution proceeds, the self-cognition term is reduced and the social cognition term is increased, encouraging particles to converge towards the global optimum and guaranteeing the local search ability in the later stage. The adaptive learning factor strategy is described as follows:

c_1(t) = c_1,max − (c_1,max − c_1,min) t / T_max
c_2(t) = c_2,min + (c_2,max − c_2,min) t / T_max

where c_max and c_min denote the maximum and minimum values of the corresponding factor, t is the current iteration, and T_max is the maximum number of iterations.
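The linear schedule can be written as a small helper; the default bounds match the 2.5~0.5 and 0.5~2.5 ranges listed for MSACLPSO in Table 2, while the linear form itself is a sketch of the garbled equations:

```python
def learning_factors(t, t_max, c1_max=2.5, c1_min=0.5, c2_min=0.5, c2_max=2.5):
    """Linearly decrease the self-cognition factor c1 and increase the
    social cognition factor c2 over the run (t in [0, t_max])."""
    frac = t / t_max
    c1 = c1_max - (c1_max - c1_min) * frac  # 2.5 -> 0.5
    c2 = c2_min + (c2_max - c2_min) * frac  # 0.5 -> 2.5
    return c1, c2
```

At the start of the run this gives (2.5, 0.5), at the midpoint (1.5, 1.5), and at the end (0.5, 2.5).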

4.3. Adaptive Inertia Weight Strategy

In PSO, when the particles in the population tend to be the same, the last two terms in the particle velocity update formula—namely, the social cognition part and the individual's own cognition part—gradually tend towards 0. If the inertia weight is less than 1, the particle speed then gradually decreases, or the particles even stop moving, which results in premature convergence. When the optimal fitness of the population has not changed (i.e., has stagnated) for a long time, the inertia weight should be adjusted adaptively according to the degree of premature convergence. If the same adaptive operation is applied to the whole population, then once the population has converged to the global optimum, the probability of destroying excellent particles increases as their inertia weight increases, which degrades the performance of the PSO algorithm. To better balance the search ability, an S-shaped decreasing function is adopted to ensure that the population has high speed in the initial stage and a decreasing search speed in the middle stage, so that the particles can easily converge to the global optimum and, finally, converge at a certain speed in the later stage. The S-shaped decreasing function reduces the inertia weight ω from ω_max to ω_min over the course of the run, where ω_max = 0.95 and ω_min = 0.3 are the maximum and minimum values, respectively, and a is the control coefficient used to adjust the speed of the change, with a = 13.
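The paper's exact S-shaped function is lost to extraction; a logistic curve with the stated bounds and control coefficient is one plausible reconstruction. Only ω_max = 0.95, ω_min = 0.3, and a = 13 come from the text; the form below is an assumption:

```python
import math

def inertia_weight(t, t_max, w_max=0.95, w_min=0.3, a=13.0):
    """S-shaped decreasing inertia weight: stays near w_max early, drops
    through the middle of the run, and settles near w_min at the end.
    A logistic curve is used here as a representative S-shaped form."""
    return w_min + (w_max - w_min) / (1.0 + math.exp(a * (2.0 * t / t_max - 1.0)))
```

With a = 13 the weight is essentially flat near 0.95 for the first third of the run, crosses its midpoint (0.625) at t = t_max/2, and flattens near 0.3 at the end.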

4.4. Model of MSACLPSO

The flow of MSACLPSO is shown in Figure 2.
Figure 2

The flow of MSACLPSO.

The steps of MSACLPSO are described as follows:

Step 1: Divide the population into subpopulations and initialize all parameters.

Step 2: Execute the CLPSO algorithm for each subpopulation. The objective function is used to find the individual optimal value of each particle, the optimal value of each subpopulation, and the global optimal value of the population. To ensure high global search ability in the early stage, a threshold T0 is set for the early stage, and each subpopulation updates all particle states according to Equation (3). To enhance the local search ability of CLPSO in the later stage, after the first T0 iterations are completed, each subpopulation updates all particle states according to Equation (4).

Step 3: If the optimal value of one subpopulation is not updated for R1 successive iterations, the subpopulation may have fallen into a local optimum. To avoid this, a mutation strategy is used: each dimension of each particle in the subpopulation is mutated with probability p_m, where the mutation perturbs the coordinate by a random number r on (−1, 1).

Step 4: After T0 iterations are executed, to enhance population diversity, particles are randomly exchanged between subpopulations every R iterations to recombine the subpopulations. The recombination is carried out as follows: every subpopulation randomly selects 50% of its particles, which are randomly exchanged with particles of the other subpopulations; then, according to the fitness values of all particles in all subpopulations, the best 1/N of the particles are selected for each subpopulation to construct a new population. It is worth noting that an exchanged particle can be any particle in any other subpopulation.

Step 5: Determine whether the end conditions are met. If they are, output the optimal result; otherwise, return to Step 2.
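Steps 3 and 4 can be sketched as two helpers. The mutation's scaling of r by 10% of the search range, and the simplified "shuffle, rank by fitness, redistribute" recombination, are assumptions, since the paper's mutation equation and exchange bookkeeping are not fully recoverable from this extraction:

```python
import random

def mutate(particle, p_m, xmin, xmax):
    """Step 3 sketch: each coordinate is perturbed with probability p_m
    by a random number r on (-1, 1), scaled here by 10% of the search
    range (the scaling is an assumption), then clipped to bounds."""
    out = []
    for xd in particle:
        if random.random() < p_m:
            r = random.uniform(-1.0, 1.0)
            xd = min(xmax, max(xmin, xd + r * 0.1 * (xmax - xmin)))
        out.append(xd)
    return out

def recombine(subpops, fitness):
    """Step 4 sketch: pool all particles, shuffle them (the random
    exchange), rank by fitness (best first, minimization assumed), and
    redistribute so each subpopulation keeps its original size."""
    pool = [p for sp in subpops for p in sp]
    random.shuffle(pool)
    pool.sort(key=fitness)               # best particles first
    sizes = [len(sp) for sp in subpops]
    new_subpops, k = [], 0
    for s in sizes:
        new_subpops.append(pool[k:k + s])
        k += s
    return new_subpops
```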

5. Experiment Simulation and Analysis

5.1. Test Functions

To verify the performance of MSACLPSO, 10 famous benchmark functions were selected. The detailed description is shown in Table 1.
Table 1

The detailed description.

| Function Name | Function Expression | S | Fmin | fbias |
|---|---|---|---|---|
| Sphere | F1(x) = Σ_{i=1}^{D} x_i^2 | [−100, 100]^D | 0 | −450 |
| Schwefel 1.2 | F2(x) = Σ_{i=1}^{D} ( Σ_{j=1}^{i} x_j )^2 | [−100, 100]^D | 0 | −450 |
| High Conditioned Elliptic | F3(x) = Σ_{i=1}^{D} (10^6)^{(i−1)/(D−1)} x_i^2 | [−100, 100]^D | 0 | −450 |
| Schwefel 1.2 with Noise | F4(x) = [ Σ_{i=1}^{D} ( Σ_{j=1}^{i} x_j )^2 ] (1 + 0.4 \|N(0, 1)\|) | [−100, 100]^D | 0 | −450 |
| Schwefel 2.6 | F5(x) = max{ \|x_1 + 2x_2 − 7\|, \|2x_1 + x_2 − 5\| } | [−100, 100]^D | 0 | −310 |
| Rosenbrock | F6(x) = Σ_{i=1}^{D−1} [ 100 (x_i^2 − x_{i+1})^2 + (x_i − 1)^2 ] | [−100, 100]^D | 0 | 390 |
| Griewank | F7(x) = Σ_{i=1}^{D} x_i^2 / 4000 − Π_{i=1}^{D} cos(x_i / √i) + 1 | [−100, 100]^D | 0 | 390 |
| Ackley | F8(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | [−32, 32]^D | 0 | −140 |
| Rastrigin | F9(x) = Σ_{i=1}^{D} ( x_i^2 − 10 cos(2πx_i) + 10 ) | [−5, 5]^D | 0 | −330 |
| Expanded Schaffer | F10(x) = Σ_{i=1}^{D} F(x_i, x_{i+1}), x_{D+1} = x_1, with F(x, y) = 0.5 + ( sin^2(√(x^2 + y^2)) − 0.5 ) / (1 + 0.001(x^2 + y^2))^2 | [−100, 100]^D | 0 | −300 |

5.2. Experimental Environment and Parameter Setting

The experimental environment mainly included an Intel Core i5-4200H CPU, Windows 10, 16 GB of RAM, and MATLAB R2018b. The optimization performance of MSACLPSO was compared with other state-of-the-art algorithms, including the basic version of PSO (PSO) [46], self-organizing hierarchical PSO (HPSO) [47], fully-informed PSO (FIPS) [48], unified PSO (UPSO) [49], CLPSO [30], and static heterogeneous particle swarm optimization (sHPSO) [50]. In MSACLPSO, the population is divided into two subpopulations, and four main parameters are adjusted to balance exploration and exploitation: the population size, the acceleration coefficients, the iteration number, and the dimension. In our experiments, a large number of alternative values were tested, classical values were taken from other literature, and these parameter values were then experimentally modified until the most reasonable values were selected. The selected parameter values attained the optimal solution, so that they could accurately and efficiently verify the effectiveness of MSACLPSO in solving optimization problems. The tuned parameters included the population size N = 40, the number of subpopulations = 2, the learning factor bounds c_min = 0.5 and c_max = 2.5, the dimension D = 30, the number of runs T = 30, the maximum number of iterations G = 200, and the number of function evaluations FEs = 300,000. The specific settings are shown in Table 2.
Table 2

The parameter settings.

| Algorithms | ω | c | c1 | c2 | Pc_i | NP | FEs |
|---|---|---|---|---|---|---|---|
| PSO | 0.9~0.4 | | 2.0 | 2.0 | | 60 | 300,000 |
| HPSO | | | 2.5~0.5 | 0.5~2.5 | | 40 | 300,000 |
| FIPS | | 2 | | | | 40 | 300,000 |
| UPSO | | 1.49445 | | | | 40 | 300,000 |
| OLPSO | 0.9~0.4 | 2 | | | | 40 | 300,000 |
| CLPSO | 0.9~0.4 | 1.49445 | | | 0.5 | 40 | 300,000 |
| sHPSO | 0.72 | | 2.5~0.5 | 0.5~2.5 | | 40 | 300,000 |
| MSACLPSO | 0.95~0.3 | 3.0~1.5 | 2.5~0.5 | 0.5~2.5 | 0.5 | 40 | 300,000 |

5.3. Experimental Results and Analysis

The population was divided into two subpopulations, and different numbers of individuals were assigned to them. The error mean (Mean) and standard deviation (Std) were used to evaluate the optimization performance of MSACLPSO. The experimental results with different numbers of individuals on the 30-dimensional problems are shown in Table 3. The best results are shown in bold.
Table 3

The different numbers of individuals (N1 and N2) in the two subpopulations for MSACLPSO.

| Functions | Indices | 10 + 30 | 15 + 25 | 20 + 20 | 25 + 15 | 30 + 10 | 40 + 0 |
|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| F2 | Mean | 1.9862 × 10^−8 | 1.6547 × 10^−6 | 2.2056 × 10^−4 | 1.6547 × 10^−2 | 3.3274 × 10^−1 | 3.7543 × 10^−1 |
| | Std | 3.5974 × 10^−8 | 1.7430 × 10^−6 | 3.3469 × 10^−4 | 1.3275 × 10^−2 | 4.5401 × 10^−1 | 4.7341 × 10^−1 |
| F3 | Mean | 5.5673 × 10^5 | 6.1432 × 10^5 | 8.4102 × 10^5 | 1.3610 × 10^6 | 2.1977 × 10^6 | 2.3560 × 10^6 |
| | Std | 1.7703 × 10^5 | 2.4205 × 10^5 | 3.2359 × 10^5 | 6.6034 × 10^5 | 9.2605 × 10^5 | 6.6496 × 10^5 |
| F4 | Mean | 4.2485 × 10^2 | 5.0328 × 10^2 | 6.0462 × 10^2 | 8.1743 × 10^2 | 1.4058 × 10^3 | 1.4135 × 10^3 |
| | Std | 2.4452 × 10^2 | 2.8874 × 10^2 | 4.4718 × 10^2 | 4.0569 × 10^2 | 7.1673 × 10^2 | 6.4532 × 10^2 |
| F5 | Mean | 2.7830 × 10^3 | 2.5065 × 10^3 | 2.8913 × 10^3 | 3.1673 × 10^3 | 3.2137 × 10^3 | 3.5478 × 10^3 |
| | Std | 5.5702 × 10^2 | 4.1407 × 10^2 | 4.0531 × 10^2 | 5.9613 × 10^2 | 4.2715 × 10^2 | 5.5379 × 10^2 |
| F6 | Mean | 3.1637 | 2.2637 | 2.1975 | 7.1762 × 10^−1 | 2.9757 × 10^−1 | 3.3405 × 10^−1 |
| | Std | 3.3643 | 4.0623 | 3.4537 | 9.2546 × 10^−1 | 5.2504 × 10^−1 | 6.6492 × 10^−1 |
| F7 | Mean | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| | Std | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| F8 | Mean | 1.9745 × 10^1 | 1.9805 × 10^1 | 1.9832 × 10^1 | 1.9867 × 10^1 | 1.9835 × 10^1 | 2.0645 × 10^1 |
| | Std | 8.3746 × 10^−2 | 7.1485 × 10^−2 | 5.8043 × 10^−2 | 5.7903 × 10^−2 | 8.8562 × 10^−2 | 7.8530 × 10^−2 |
| F9 | Mean | 1.0673 | 1.0245 × 10^−1 | 0.0000 | 0.0000 | 0.0000 | 1.2473 |
| | Std | 1.0305 | 3.8672 × 10^−1 | 0.0000 | 0.0000 | 0.0000 | 1.2865 |
| F10 | Mean | 1.0782 × 10^1 | 1.0954 × 10^1 | 1.1065 × 10^1 | 1.1438 × 10^1 | 1.1714 × 10^1 | 1.1904 × 10^1 |
| | Std | 4.0645 × 10^−1 | 5.4680 × 10^−1 | 4.3591 × 10^−1 | 5.4613 × 10^−1 | 4.1527 × 10^−1 | 4.3681 × 10^−1 |
As can be seen from Table 3, the subpopulation sizes N1 = 10 and N2 = 30 obtained the best optimization performance on the 10 benchmark functions compared with the other subpopulation sizes, although MSACLPSO did not obtain satisfactory optimization performance on a few of the functions. Therefore, the subpopulation sizes N1 = 10 and N2 = 30 were selected for the performance evaluation of MSACLPSO. MSACLPSO was then compared with several variants of PSO. The optimization performance was measured by the Mean and Std of the 20 obtained results. The experimental results of the different algorithms on the test functions with 30 dimensions are shown in Table 4. The best results are highlighted in bold.
Table 4

The obtained experimental results using different algorithms.

| Functions | Indices | PSO | HPSO | FIPS | OLPSO | UPSO | sHPSO | CLPSO | HCLPSO | MSACLPSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | Std | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| F2 | Mean | 3.70 × 10^−1 | 3.79 × 10^−6 | 7.79 × 10^1 | 1.38 × 10^1 | 2.65 × 10^−7 | 1.44 × 10^−2 | 1.14 × 10^3 | 1.70 × 10^−6 | 1.99 × 10^−8 |
| | Std | 3.20 × 10^−1 | 2.82 × 10^−6 | 2.71 × 10^1 | 8.33 | 2.42 × 10^−7 | 7.10 × 10^−2 | 2.53 × 10^2 | 1.71 × 10^−6 | 3.60 × 10^−8 |
| F3 | Mean | 6.53 × 10^6 | 7.72 × 10^5 | 2.45 × 10^7 | 1.60 × 10^7 | 1.54 × 10^6 | 8.75 × 10^5 | 1.22 × 10^7 | 6.42 × 10^5 | 5.57 × 10^5 |
| | Std | 4.17 × 10^6 | 2.96 × 10^5 | 6.29 × 10^6 | 7.04 × 10^6 | 4.75 × 10^5 | 5.34 × 10^5 | 3.34 × 10^6 | 2.61 × 10^5 | 1.77 × 10^5 |
| F4 | Mean | 3.81 × 10^2 | 2.48 × 10^4 | 1.15 × 10^3 | 2.18 × 10^3 | 7.28 × 10^3 | 2.02 × 10^4 | 8.77 × 10^3 | 5.22 × 10^2 | 4.25 × 10^2 |
| | Std | 3.31 × 10^2 | 5.71 × 10^3 | 3.73 × 10^2 | 1.09 × 10^3 | 2.79 × 10^3 | 9.94 × 10^3 | 1.85 × 10^3 | 3.09 × 10^2 | 2.45 × 10^2 |
| F5 | Mean | 3.85 × 10^3 | 9.20 × 10^3 | 2.22 × 10^3 | 3.30 × 10^3 | 6.32 × 10^3 | 6.94 × 10^3 | 4.47 × 10^3 | 2.97 × 10^3 | 2.78 × 10^3 |
| | Std | 8.00 × 10^2 | 1.81 × 10^3 | 5.14 × 10^2 | 3.75 × 10^2 | 1.63 × 10^3 | 1.43 × 10^3 | 4.26 × 10^2 | 4.55 × 10^2 | 5.57 × 10^2 |
| F6 | Mean | 7.02 × 10^1 | 5.04 × 10^1 | 3.77 × 10^1 | 2.07 × 10^1 | 6.82 × 10^1 | 1.15 × 10^2 | 2.39 | 2.39 | 3.16 |
| | Std | 9.51 × 10^1 | 5.05 × 10^1 | 3.50 × 10^1 | 2.50 × 10^1 | 9.64 × 10^1 | 2.29 × 10^2 | 3.84 | 4.27 | 3.36 |
| F7 | Mean | 7.60 × 10^−1 | 1.00 × 10^−2 | 3.00 × 10^−2 | 1.00 × 10^−2 | 2.00 × 10^−2 | 4.00 × 10^2 | 7.00 × 10^−1 | 2.00 × 10^−2 | 0.00 |
| | Std | 1.41 | 1.00 × 10^−2 | 2.00 × 10^−2 | 1.00 × 10^−2 | 1.00 × 10^−2 | 4.00 × 10^2 | 1.50 × 10^−1 | 2.00 × 10^−2 | 0.00 |
| F8 | Mean | 2.09 × 10^1 | 2.07 × 10^1 | 2.09 × 10^1 | 2.10 × 10^1 | 2.10 × 10^1 | 2.02 × 10^1 | 2.10 × 10^1 | 2.09 × 10^1 | 1.97 × 10^1 |
| | Std | 7.00 × 10^−2 | 1.50 × 10^−1 | 6.00 × 10^−2 | 8.00 × 10^−2 | 5.00 × 10^−2 | 1.90 × 10^−1 | 6.00 × 10^−2 | 9.00 × 10^−2 | 8.37 × 10^−2 |
| F9 | Mean | 1.90 × 10^1 | 1.07 × 10^1 | 5.71 × 10^1 | 0.00 | 8.52 × 10^1 | 8.25 × 10^1 | 1.00 × 10^2 | 0.00 | 1.07 |
| | Std | 5.37 | 4.96 | 1.46 × 10^1 | 0.00 | 1.69 × 10^1 | 2.44 × 10^1 | 1.25 × 10^1 | 0.00 | 1.03 |
| F10 | Mean | 1.27 × 10^1 | 1.23 × 10^1 | 1.31 × 10^1 | 1.31 × 10^1 | 1.28 × 10^1 | 1.31 × 10^1 | 1.26 × 10^1 | 1.19 × 10^1 | 1.09 × 10^1 |
| | Std | 4.30 × 10^−1 | 3.70 × 10^−1 | 2.10 × 10^−1 | 2.00 × 10^−1 | 3.30 × 10^−1 | 3.90 × 10^−1 | 2.20 × 10^−1 | 5.80 × 10^−1 | 4.06 × 10^−1 |
As shown in Table 4, all algorithms performed equally on test function F1, PSO obtained the best solution on test function F4, and FIPS obtained the best solution on test function F5, while MSACLPSO performed well across the unimodal functions F1~F5. For the multimodal functions, MSACLPSO performed well on all of them: it obtained the best performance on test functions F7, F8, and F10, and the second-best performance on test functions F6 and F9. On the other hand, CLPSO and HCLPSO obtained the best solution on test function F6, and OLPSO and HCLPSO obtained the best solution on test function F9. Overall, MSACLPSO obtained the best performance on 6 out of 10 test functions. Therefore, MSACLPSO performs well and obtains the best optimization performance on multimodal problems. In our experiments, MSACLPSO used the strategies of comprehensive learning, multi-population parallelism, and parameter adaptation. Although the comprehensive learning and parameter adaptation strategies need more running time, the multi-population parallel strategy can reduce the running time; therefore, the time complexity of MSACLPSO is similar to that of the other compared algorithms. To test the statistical difference between MSACLPSO and the other variants of PSO, the non-parametric Wilcoxon signed-rank test was used to compare the results of MSACLPSO with those of the other variants of PSO. The results of MSACLPSO against the other algorithms are shown in Table 5.
Table 5

The test results under α = 0.05.

FunctionsPSOHPSOFIPSOLPSOUPSOsHPSOCLPSOHCLPSO
F1 = = = = = = = =
F2 + + + + + + + +
F3 + + + + + + + +
F4 + + + + + + +
F5 + + + + + + +
F6 + + + + + +
F7 + + + + + + + +
F8 + + + + + + + +
F9 + + + + + +
F10 + + + + + + + +
+/=/− 8/1/1 9/1/0 8/1/1 8/1/1 9/1/0 9/1/0 8/1/1 7/1/2
As shown in Table 5, MSACLPSO performs better than the other variants of PSO, as indicated by the (+/=/−) counts in the last row of the Wilcoxon signed-rank test results under α = 0.05. To sum up, the optimized parameter values of MSACLPSO for solving these complex optimization problems are ω = 0.43, c = 2.1, c_1 = 1.8, c_2 = 2.1, and Pc = 0.5.

6. Case Analysis

Renewable energy has long been the focus of efforts to address the key problems of traditional, nonrenewable energy consumption. Solar energy is an up-and-coming resource, in which photovoltaics (PV) plays a vital role. However, a PV device is usually placed in an exposed environment, which leads to its degradation and seriously affects its efficiency. Therefore, MSACLPSO was employed to effectively and accurately optimize the PV parameters and establish an optimized PV model. The parameter values of MSACLPSO were the same as given in Section 5.3.

6.1. Modeling for PV

Many PV models have been designed and applied to illustrate the I–V characteristics. The single-diode model (SDM) and double-diode model (DDM) are the most widely used [51]. The PV models are described in Table 6.
Table 6

The modelling for PV.

| PV model | I_L |
|---|---|
| SDM | I_L = I_ph − I_sd [ exp( q(V_L + I_L R_s) / (n k T) ) − 1 ] − (V_L + I_L R_s) / R_sh |
| DDM | I_L = I_ph − I_sd1 [ exp( q(V_L + I_L R_s) / (n_1 k T) ) − 1 ] − I_sd2 [ exp( q(V_L + I_L R_s) / (n_2 k T) ) − 1 ] − (V_L + I_L R_s) / R_sh |
It is crucial to search for the optimal parameter values in order to minimize the error of the PV models. The error function measures the difference between the measured current and the current given by the model. For the SDM, the decision vector is x = (I_ph, I_sd, R_s, R_sh, n), and the error function is f(V_L, I_L, x) = I_ph − I_sd [ exp( q(V_L + I_L R_s) / (n k T) ) − 1 ] − (V_L + I_L R_s) / R_sh − I_L. For the DDM, the decision vector is x = (I_ph, I_sd1, I_sd2, R_s, R_sh, n_1, n_2), and the error function is defined analogously with the two diode terms. To evaluate the PV model over N measured data points, the RMSE is described as follows: RMSE(x) = √( (1/N) Σ_{k=1}^{N} f(V_L^k, I_L^k, x)^2 ).
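Because the SDM equation is implicit in I_L, evaluating the error function requires solving for the current at each measured voltage. A minimal sketch using fixed-point iteration is shown below; the function names, the iteration scheme, and the illustrative parameter values in the test are assumptions, not the paper's method:

```python
import math

Q = 1.602176634e-19   # elementary charge (C)
K = 1.380649e-23      # Boltzmann constant (J/K)

def sdm_current(V, Iph, Isd, Rs, Rsh, n, T=300.0, iters=50):
    """Output current of the single-diode model at terminal voltage V,
    solved by fixed-point iteration on the implicit equation
    I = Iph - Isd*(exp(q(V + I*Rs)/(n*k*T)) - 1) - (V + I*Rs)/Rsh."""
    I = Iph  # start from the photocurrent
    for _ in range(iters):
        I = Iph - Isd * (math.exp(Q * (V + I * Rs) / (n * K * T)) - 1.0) \
                - (V + I * Rs) / Rsh
    return I

def rmse(params, data, T=300.0):
    """RMSE between measured and modelled currents; `data` is a list of
    (V_L, I_L) pairs and `params` = (Iph, Isd, Rs, Rsh, n)."""
    Iph, Isd, Rs, Rsh, n = params
    err = [(I - sdm_current(V, Iph, Isd, Rs, Rsh, n, T)) ** 2 for V, I in data]
    return math.sqrt(sum(err) / len(err))
```

An optimizer such as MSACLPSO would then minimize `rmse` over the five-dimensional parameter vector.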

6.2. Experimental Results and Analysis for PV Parameter Optimization

To validate the performance of MSACLPSO, the PSO, BLPSO, CLPSO, CPMPSO, IJAYA, GOTLBO, SATLBO, DE/BBO, DBBO, STLBO, WOA, CWOA, LWOA, GWO, EGWO, WDO, DE, JADE, and MPPCEDE [52,53,54,55,56,57,58,59,60,61,62,63,64] algorithms were used for comparison. The parameter values of MSACLPSO were the same as given in Section 5.2. The parameter values of the other compared algorithms were the same as in the literature. The maximum number of iterations was G = 200, and these algorithms were executed for 20 runs. Therefore, the statistical results of the SRE, LRE, MRE, and Std were obtained. The value of RMSE was employed to quantify the solution accuracy, while the Std of the RMSE described the reliability. The statistical results of the experiment with the SDM and DDM are shown in Table 7 and Table 8, respectively. The obtained best results are highlighted in bold.
Table 7

The obtained results of RMSE for the SDM.

| Algorithm | SRE | LRE | MRE | Std | Symbol |
| PSO | 2.44805 × 10^−3 | 9.86022 × 10^−4 | 1.31844 × 10^−3 | 5.24500 × 10^−4 | + |
| BLPSO | 1.74592 × 10^−3 | 1.03122 × 10^−3 | 1.31377 × 10^−3 | 1.90400 × 10^−4 | + |
| CLPSO | 1.25274 × 10^−3 | 9.92075 × 10^−4 | 1.06081 × 10^−3 | 7.04200 × 10^−5 | + |
| CPMPSO | 9.86022 × 10^−4 | 9.86022 × 10^−4 | 9.86022 × 10^−4 | 2.17556 × 10^−17 | + |
| IJAYA | 9.86841 × 10^−4 | 9.86022 × 10^−4 | 9.86051 × 10^−4 | 1.49300 × 10^−7 | + |
| GOTLBO | 1.39559 × 10^−3 | 9.86608 × 10^−4 | 1.08300 × 10^−3 | 9.70900 × 10^−5 | + |
| SATLBO | 1.00674 × 10^−3 | 9.86025 × 10^−4 | 9.88799 × 10^−4 | 4.81300 × 10^−6 | + |
| DE/BBO | 1.84123 × 10^−3 | 9.86022 × 10^−4 | 1.25173 × 10^−3 | 2.08225 × 10^−4 | + |
| DBBO | 2.36083 × 10^−3 | 9.86820 × 10^−4 | 1.38755 × 10^−3 | 2.70008 × 10^−4 | + |
| STLBO | 1.02033 × 10^−3 | 9.86022 × 10^−4 | 9.87207 × 10^−4 | 6.25700 × 10^−6 | + |
| WOA | 1.00397 × 10^−2 | 1.10759 × 10^−3 | 3.25587 × 10^−3 | 2.16463 × 10^−3 | + |
| CWOA | 3.28588 × 10^−2 | 9.98677 × 10^−4 | 5.44921 × 10^−3 | 6.33831 × 10^−3 | + |
| LWOA | 1.92042 × 10^−2 | 9.99621 × 10^−4 | 3.44545 × 10^−3 | 3.33774 × 10^−3 | + |
| GWO | 4.43070 × 10^−2 | 1.28030 × 10^−3 | 1.13440 × 10^−2 | 1.48470 × 10^−2 | + |
| EGWO | 5.24900 × 10^−3 | 2.11210 × 10^−3 | 3.50150 × 10^−3 | 1.59880 × 10^−3 | + |
| WDO | 4.42600 × 10^−3 | 1.22101 × 10^−3 | 2.18020 × 10^−3 | 7.63880 × 10^−4 | + |
| DE | 1.81059 × 10^−3 | 9.86022 × 10^−4 | 1.02116 × 10^−3 | 1.44688 × 10^−4 | + |
| JADE | 1.41030 × 10^−3 | 9.86060 × 10^−4 | 1.08330 × 10^−3 | 1.09000 × 10^−4 | + |
| MPPCEDE | 9.86022 × 10^−4 | 9.86022 × 10^−4 | 9.86022 × 10^−4 | 0.00000 | + |
| MSACLPSO | 9.86022 × 10^−4 | 9.864574 × 10^−4 | 9.83758 × 10^−4 | 7.52967 × 10^−18 | |
Table 8

The obtained results of the RMSE for the DDM.

| Algorithm | SRE | LRE | MRE | Std | Symbol |
| PSO | 4.34952 × 10^−2 | 9.82485 × 10^−4 | 4.37645 × 10^−3 | 1.01270 × 10^−2 | + |
| BLPSO | 1.93654 × 10^−3 | 1.08218 × 10^−3 | 1.53462 × 10^−3 | 2.45890 × 10^−4 | + |
| CLPSO | 1.38835 × 10^−3 | 9.94316 × 10^−4 | 1.13959 × 10^−3 | 9.39950 × 10^−5 | + |
| CPMPSO | 9.86022 × 10^−4 | 9.82485 × 10^−4 | 9.83137 × 10^−4 | 1.33980 × 10^−6 | + |
| IJAYA | 9.99410 × 10^−4 | 9.82494 × 10^−4 | 9.86860 × 10^−4 | 3.22120 × 10^−6 | + |
| GOTLBO | 1.53359 × 10^−3 | 9.85097 × 10^−4 | 1.16335 × 10^−3 | 1.51770 × 10^−4 | + |
| SATLBO | 1.23062 × 10^−3 | 9.82824 × 10^−4 | 1.00544 × 10^−3 | 5.02710 × 10^−5 | + |
| DE/BBO | 1.63508 × 10^−3 | 9.87990 × 10^−4 | 1.19281 × 10^−3 | 2.03849 × 10^−4 | + |
| DBBO | 9.84995 × 10^−4 | 2.29052 × 10^−3 | 1.22395 × 10^−3 | 3.08780 × 10^−4 | + |
| STLBO | 1.52433 × 10^−3 | 9.82561 × 10^−4 | 1.03435 × 10^−3 | 1.41980 × 10^−4 | + |
| WOA | 1.15633 × 10^−3 | 1.16011 × 10^−2 | 3.42961 × 10^−3 | 2.23226 × 10^−3 | + |
| CWOA | 8.86567 × 10^−3 | 1.13004 × 10^−3 | 3.50587 × 10^−3 | 2.15341 × 10^−3 | + |
| LWOA | 1.04935 × 10^−3 | 1.11900 × 10^−2 | 3.12337 × 10^−3 | 1.81559 × 10^−3 | + |
| GWO | 4.07970 × 10^−2 | 1.02742 × 10^−3 | 9.90850 × 10^−4 | 1.29040 × 10^−2 | + |
| EGWO | 5.00690 × 10^−3 | 1.80620 × 10^−3 | 3.06700 × 10^−3 | 1.70500 × 10^−3 | + |
| WDO | 4.93450 × 10^−3 | 1.68120 × 10^−3 | 3.29180 × 10^−3 | 8.41370 × 10^−4 | + |
| DE | 2.00941 × 10^−3 | 9.82936 × 10^−4 | 1.06862 × 10^−3 | 2.23325 × 10^−4 | + |
| JADE | 2.23830 × 10^−3 | 9.83510 × 10^−4 | 1.46570 × 10^−3 | 3.81000 × 10^−4 | + |
| MPPCEDE | 9.82908 × 10^−4 | 9.82485 × 10^−4 | 9.82504 × 10^−4 | 8.02951 × 10^−8 | + |
| MSACLPSO | 9.82743 × 10^−4 | 9.82368 × 10^−4 | 9.81463 × 10^−4 | 9.68924 × 10^−9 | |
As can be seen from Table 7, CPMPSO, MPPCEDE, and MSACLPSO obtained the best SRE, LRE, and MRE values, and MSACLPSO also performed well on the Std of the RMSE. Therefore, the optimization performance of MSACLPSO was better than that of the compared algorithms for the SDM. As can be seen from Table 8, MSACLPSO obtained the best results for the SRE, LRE, MRE, and Std of the RMSE; therefore, MSACLPSO is the best algorithm for the DDM. To sum up, the performance of MSACLPSO was demonstrated by optimizing the PV model parameters. All of the compared results, containing the optimized parameters along with the SRE, LRE, MRE, and Std values, show that MSACLPSO can obtain the optimal parameters. This provides a more effective parameter combination for the complex engineering problem of photovoltaics, so as to improve the energy conversion efficiency.
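To connect the tables back to the optimization itself, the sketch below shows a minimal, generic PSO loop minimizing an objective within box bounds. It is NOT the paper's MSACLPSO (none of the multi-population, comprehensive-learning, or adaptive-parameter strategies are included), and the SDM parameter bounds listed are illustrative assumptions only.

```python
import random

# Illustrative (assumed) SDM search bounds: Iph, Isd, Rs, Rsh, n
BOUNDS = [(0.0, 1.0), (0.0, 1e-6), (0.0, 0.5), (0.0, 100.0), (1.0, 2.0)]

def pso_minimize(objective, bounds, n_particles=30, iters=200, seed=0):
    """Generic global-best PSO with box-bound clamping."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.72, 1.49, 1.49   # common constriction-style settings
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = objective(pos[i])
            if f < pbest_f[i]:          # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:         # update global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the paper's setting, the objective passed to `pso_minimize` would be the RMSE of the SDM or DDM error function over the measured I–V data, evaluated within bounds such as `BOUNDS`.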

7. Conclusions

In this paper, a multi-strategy adaptive CLPSO (MSACLPSO) combining comprehensive learning, multi-population parallelism, and parameter adaptation was proposed. A multi-population parallel strategy was designed to improve population diversity and accelerate convergence. Then, a new velocity update strategy and a new adaptive adjustment strategy for the learning factors were developed. Additionally, a parameter optimization method for photovoltaics was designed to demonstrate the practical applicability. Ten benchmark functions were used to verify the effectiveness of MSACLPSO in comparison with different variants of PSO; MSACLPSO obtained the best performance on 6 out of 10 functions and performed especially well on multimodal problems. In addition, the actual SDM and DDM were selected for parameter optimization, and the experimental results confirmed the practical applicability of MSACLPSO in comparison with the other algorithms. MSACLPSO is therefore an alternative optimization technique for solving complex problems and actual engineering problems. However, MSACLPSO still has shortcomings on large-scale parameter optimization problems, such as high time complexity and a tendency to stagnate. In future work, further applications should be considered [65,66,67,68,69,70,71,72], the algorithm should be studied in greater depth, and the parameter adaptability of MSACLPSO at different stages and scales should be further explored.