
Differential Human Learning Optimization Algorithm.

Pinggai Zhang1,2, Ling Wang2, Jiaojie Du2, Zixiang Fei3, Song Ye1, Minrui Fei2, Panos M Pardalos4.   

Abstract

Human Learning Optimization (HLO) is an efficient metaheuristic algorithm in which three learning operators, i.e., the random learning operator, the individual learning operator, and the social learning operator, are developed to search for optima by mimicking the learning behaviors of humans. In real life, people learn not only from the globally best knowledge but also from the best solutions of other individuals, and the operators of Differential Evolution are updated based on the optima of other individuals. Inspired by these facts, this paper proposes two novel differential human learning optimization algorithms (DEHLOs), into which the Differential Evolution strategy is introduced to enhance the optimization ability of the algorithm. The two algorithms, which improve HLO at the individual level and the population level, are named DEHLO1 and DEHLO2, respectively. Multidimensional knapsack problems are adopted as benchmarks to validate the performance of the DEHLOs, and the results are compared with those of the standard HLO and Modified Binary Differential Evolution (MBDE) as well as other state-of-the-art metaheuristics. The experimental results demonstrate that the developed DEHLOs significantly outperform the other algorithms and that DEHLO2 achieves the best overall performance on various problems.
Copyright © 2022 Pinggai Zhang et al.


Year:  2022        PMID: 35535198      PMCID: PMC9078769          DOI: 10.1155/2022/5699472

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

In the past decades, traditional optimization algorithms have been widely used in science, engineering, economics, and industry to solve optimization problems [1]. However, traditional optimization algorithms require the mathematical characteristics of the problem to be known in advance, which complicates algorithm design. In addition, traditional algorithms cannot effectively escape the local optima of complex problems. With the development of technology, engineering problems with optimization objectives are becoming more and more complicated, and solving such NP-hard problems with conventional algorithms has become very difficult, which has pushed researchers to study metaheuristic algorithms [2]. Metaheuristics are general frameworks for building heuristics for combinatorial and global optimization problems [3]. The application of nature- or biology-inspired metaheuristics, such as the Genetic Algorithm [4], Particle Swarm Optimization [5], Harmony Search [6], Differential Evolution (DE) [7-10], Artificial Bee Colony [11], Fruit Fly Optimization [12], the Distributed Grey Wolf Optimizer (DGWO) [13], the Moth Search Algorithm (MSA) [14], the Slime Mould Algorithm (SMA) [15], Gaining Sharing Knowledge-Based Optimization [16, 17], Cuckoo Search with Exploratory (ECS) [18], Discrete Jaya with Refraction Learning and Three Mutation (DJRL3M) [19], Monarch Butterfly Optimization (MBO) [20], Hunger Games Search (HGS) [21], the Runge Kutta Method (RUN) [22], and Harris Hawks Optimization (HHO) [23], has been very successful in solving complex optimization problems such as feature selection [24-28], image segmentation [29], controller design [30], flow-shop scheduling [31, 32], and node placement in wireless sensor networks [33]. Human beings are the most intelligent creatures in the world owing to their learning ability, which is stronger than that of other living beings such as birds, ants, and fish.
To solve complex problems effectively, humans repeatedly learn to improve their skills and adapt better to the external environment. Many human learning activities are similar to the search process of metaheuristics. For example, when a person learns something new, he or she repeatedly practices to improve the new skill and evaluates his or her performance to guide the following study. The process of human learning, just like a metaheuristic algorithm, iteratively generates a new solution and calculates the corresponding fitness to adjust the following search. Therefore, it is reasonable to expect that a metaheuristic based on human learning mechanisms may have advantages over algorithms based on other biological systems on complicated problems. Inspired by this thought, Wang et al. [34] proposed the Human Learning Optimization algorithm (HLO) based on a simplified human learning model, in which three learning operators, i.e., the random learning operator (RLO), the individual learning operator (ILO), and the social learning operator (SLO), are developed to search for the optimal solution. They represent, respectively, that a person may learn randomly due to the lack of prior knowledge or to explore new strategies, learn from his or her previous experience, and learn from his or her friends and books. To strengthen the search efficiency of HLO, a few enhanced variants have subsequently been developed. An adaptive simplified human learning optimization algorithm (ASHLO) [35] was proposed in which pr and pi, two control parameters determining the rates of performing RLO, ILO, and SLO, are linearly adjusted to balance global search and local search.
Encouraged by the success of ASHLO, a sine-cosine adaptive human learning optimization algorithm (SCHLO) [36] was proposed in which pr and pi are dynamically tuned within a reasonable range by sine and cosine functions so that SCHLO can efficiently escape from local optima. Later, an improved adaptive human learning optimization algorithm (IAHLO) [37] was presented to accurately tune the control parameter pr so that IAHLO maintains diversity better at the early stage and performs the local search more efficiently at the later stages of the iterations. Besides, inspired by the intelligence quotient (IQ) of humans, a diverse human learning optimization algorithm (DHLO) [38] was presented in which the control parameter pi is initialized by a Gaussian distribution and dynamically adjusted according to the pi value of the best individual. To further extend HLO, a novel hybrid-coded HLO (HcHLO) [39] was proposed to tackle mixed-coded problems, in which real-coded parameters are optimized by a new continuous HLO (CHLO) [39] and the binary and discrete variables are handled by the binary learning operators of HLO. Until now, HLO has been successfully applied to engineering design problems [37], knapsack problems [40], optimal power flow calculation [41], extractive text summarization [42], financial market forecasting [43], furnace flame recognition [44], scheduling problems [45], and intelligent control [46]. In particular, HLO obtained the best-so-far results on two well-studied sets of multidimensional knapsack problems, i.e., 5.100 and 10.100 [40], as well as on a set of mixed-variable optimization problems [39], which implies the promising advantages of HLO. In HLO, social learning adopts a greedy strategy to generate a new candidate, i.e., simple yet efficient copying of bit values from the SKD, which makes the algorithm prone to falling into local optima.
Therefore, the relearning operator was introduced into HLO [40] to help the algorithm escape from local optima. However, the relearning operator may destroy existing optimal information, which can in turn reduce the performance of the algorithm. On the other hand, the social learning of HLO only learns from the globally best solutions, which is inconsistent with real society: in real life, people can also learn from the best solutions of other individuals in the population. The Modified Binary Differential Evolution (MBDE, our previous work) [47] revises the updating strategy of the standard Differential Evolution (DE) [7] so that DE can better keep the robustness of parameter settings and the diversity of the population to search for optimal bit information effectively. Therefore, this paper proposes two novel differential human learning optimization algorithms (DEHLOs), in which the strategy of MBDE is introduced into HLO to further improve performance by using the optimal information of other individuals. This paper is organized as follows. Section 2 gives a brief review of HLO and MBDE. Section 3 presents the concepts, operators, and implementation of the proposed DEHLO1 and DEHLO2 in detail. Section 4 verifies that the proposed DEHLOs have significant advantages over the compared algorithms on multidimensional knapsack problems. Finally, conclusions are drawn in Section 5.

2. Related Works

2.1. Human Learning Optimization

HLO adopts a binary-coding framework, so an individual in HLO is represented by a binary string as

x_i = [x_i1, x_i2, ..., x_iM], x_ij ∈ {0, 1}, 1 ≤ i ≤ N, 1 ≤ j ≤ M, (1)

where x_i denotes the i-th individual, N is the size of the population, and M is the dimension of solutions. Each bit of the binary string is randomly initialized as "0" or "1".

Random learning operator: At the beginning of the learning process, people keep exploring new strategies to solve problems because they lack prior knowledge [48]. Besides, an individual cannot fully replicate previous experience and social knowledge because of external disturbances and forgetting. To emulate these phenomena of human random learning, HLO executes the random learning operator (RLO) with a certain probability as

x_ij = Rand(0, 1) = 0 if r1 < 0.5, and 1 otherwise, (2)

where r1 is a stochastic number between 0 and 1.

Individual learning operator: Individual learning is defined as the ability to build knowledge through individual reflection on external stimuli and sources [49], which can be regarded as individual behavior in a trial-and-error process of continuous improvement. To mimic human individual learning, the best individual solutions are stored in the individual knowledge database (IKD) as

IKD_i = [ikd_i1; ikd_i2; ...; ikd_iK], 1 ≤ p ≤ K, (3)

where IKD_i denotes the individual knowledge database of person i, K is the predefined number of solutions saved in the IKD, and ikd_ip represents the p-th best experience of person i. When HLO conducts the individual learning operator, (4) is applied to generate a new candidate solution:

x_ij = ikd_ipj. (4)

Social learning operator: During social learning, people acquire knowledge and experience from other individuals, directly or indirectly, to further develop their ability [50], and the efficiency and effectiveness of learning are improved by sharing experience [51]. To simulate the social learning of humans, HLO adopts the social knowledge database (SKD) to store the best knowledge of the population as

SKD = [skd_1; skd_2; ...; skd_S], 1 ≤ q ≤ S, (5)

where S is the size of the SKD and skd_q is the q-th solution in the SKD.
Here q is a stochastic number that decides which solution of the SKD will be used. HLO performs the social learning operator as (6) to generate a new candidate solution during the search:

x_ij = skd_qj. (6)

In summary, the above operators are integrated and operated as

x_ij = Rand(0, 1) if 0 ≤ r ≤ pr; ikd_ipj if pr < r ≤ pi; skd_qj otherwise, (7)

where r is a stochastic number between 0 and 1, and pr and pi are the control parameters that determine the rates at which HLO performs the three learning operators. Specifically, pr, (pi − pr), and (1 − pi) are the probabilities of random learning, individual learning, and social learning, respectively. Algorithm 1 describes the implementation of HLO, and more details can be found in [35].
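The combined operator (7) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function names and the bit-wise decomposition are ours:

```python
import random

def hlo_new_bit(ikd_bit, skd_bit, pr, pi):
    """Generate one bit of a new candidate as in (7): with probability pr
    the bit is learned randomly (RLO), with probability (pi - pr) it is
    copied from the individual's best experience (ILO), and otherwise it
    is copied from the social knowledge database (SLO)."""
    r = random.random()
    if r < pr:                       # random learning operator
        return random.randint(0, 1)
    elif r < pi:                     # individual learning operator
        return ikd_bit
    else:                            # social learning operator
        return skd_bit

def hlo_new_individual(ikd, skd, pr, pi):
    """Apply (7) bit by bit to produce a whole candidate solution."""
    return [hlo_new_bit(ib, sb, pr, pi) for ib, sb in zip(ikd, skd)]
```

With pr = 0 and pi = 1 every bit is copied from the IKD; with pr = pi = 0 every bit comes from the SKD, which makes the operator easy to unit-test.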

2.2. Modified Binary Differential Evolution

The MBDE [47] adopts a binary-coding scheme and retains the updating formulas of the standard DE, i.e., the mutation operator, the crossover operator, and the selection operator, while a probability estimation operator is introduced to implement the mutation in binary space.

Probability estimation operator: The probability estimation operator builds the probability distribution vector f(p) from the parent individuals. The new mutant binary individual u′ is generated by randomly sampling from this vector as equations (8) and (9):

f(p_j) = 1 / (1 + e^(−2b(p_j − 0.5)/(1 + 2F))), with p_j = x_r1,j^G + F(x_r2,j^G − x_r3,j^G), (8)

u′_j = 1 if rand() < f(p_j), and 0 otherwise, (9)

where F is the scaling factor, b denotes the bandwidth factor, which is a positive real constant, x_r1,j^G, x_r2,j^G, and x_r3,j^G are the j-th bits of three randomly chosen individuals of generation G, and rand() is a uniform random number. Thus u′ is the mutant of the current target individual sampled according to the probability estimation vector f(p).

Crossover operator: The crossover operator produces the trial individual by mixing the target individual and its mutant individual. The trial vector v′ is obtained as

v′_j = u′_j if rand() ≤ CR or j = rand_i, and x_j otherwise, (10)

where v′_j is the j-th element of the trial individual v′, CR is the crossover probability in the range (0, 1), rand() is a stochastic number uniformly distributed within (0, 1), and rand_i is a random integer in {1, 2, ..., N}, where N is the length of the individual.

Selection operator: The selection operator is defined as

x^(G+1) = v if fitness(v) ≥ fitness(x), and x otherwise. (11)

As shown in (11), MBDE retains the selection operator of the standard DE: the trial individual v replaces the target individual x if its fitness value is better; otherwise, the target individual is kept for the next generation.
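The three MBDE operators described above can be sketched as follows. This is a sketch, not the reference implementation: the sigmoid shape of the probability estimation follows the description of (8) and (9) with F and b as stated, and the helper names are ours:

```python
import math
import random

def mbde_mutate(x1, x2, x3, F=0.8, b=20):
    """Probability-estimation mutation: combine the bits of three parent
    individuals DE-style, squash the result into a probability with a
    bandwidth-b sigmoid, and sample a mutant bit from that probability."""
    u = []
    for j in range(len(x1)):
        p = x1[j] + F * (x2[j] - x3[j])                      # DE combination
        fp = 1.0 / (1.0 + math.exp(-2 * b * (p - 0.5) / (1 + 2 * F)))
        u.append(1 if random.random() < fp else 0)
    return u

def mbde_crossover(x, u, CR=0.2):
    """Binomial crossover: take the mutant bit with probability CR and
    force at least one mutant bit via a randomly chosen index."""
    j_rand = random.randrange(len(x))
    return [u[j] if (random.random() < CR or j == j_rand) else x[j]
            for j in range(len(x))]

def mbde_select(x, v, fitness):
    """Greedy selection: keep the trial individual if it is no worse."""
    return v if fitness(v) >= fitness(x) else x
```

Setting CR = 1.0 makes the crossover return the mutant unchanged, which is convenient for testing the pipeline in isolation.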

3. Differential Human Learning Optimization Algorithm

The three operators of HLO represent humans learning randomly, from their own experience, and from the collective experience of the population. However, in real life people can also learn from other excellent individuals, and the operators of Differential Evolution (DE) are updated based on the optimal information of other individuals in the population. Inspired by this thought, this paper proposes the differential human learning optimization algorithm (DEHLO), in which the learning strategy of MBDE is introduced into HLO to develop a novel probability estimation operator for generating offspring individuals. HLO is modified at two levels, i.e., the individual level and the population level, and the resulting algorithms are named DEHLO1 and DEHLO2, respectively.

3.1. DEHLO1

During a real learning process, different teams often adopt different strategies to search for the optimal solution to the same complex problem. To emulate this phenomenon of dividing into groups, DEHLO1 uses the operators of both HLO and MBDE to generate new solutions, so that it can inherit the search behavior of both algorithms. In DEHLO1, half of the population is updated with the operators of HLO, as in (7), and the other half is updated with the mutation and crossover operators of MBDE, as in equations (8)-(10). Since DEHLO1 could inherit the shortcomings of HLO and MBDE along with their advantages, a dynamic competition strategy is used to mitigate these disadvantages. At the beginning of the search, the population is divided into two equal parts that adopt the strategies of HLO and MBDE, respectively. As the search progresses, the best fitness achieved by HLO and that achieved by MBDE are compared every specified number of iterations, and the proportion of individuals assigned to the algorithm with the better fitness is increased while that of the other algorithm is decreased correspondingly. Therefore, DEHLO1 can adaptively select the better learning strategy to search for the optimal solution, which effectively enhances its optimization ability. The procedure of DEHLO1 is illustrated in Figure 1.
Figure 1

The flowchart of DEHLO1.
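The dynamic competition between the two sub-populations can be sketched as below; the adjustment step and the floor size are illustrative assumptions, not values given in this section:

```python
def adjust_proportions(n_hlo, n_mbde, best_hlo, best_mbde, step=2, n_min=10):
    """Dynamic competition of DEHLO1 (a sketch): every Cn generations the
    sub-population whose best fitness is higher grows by `step` individuals
    and the other shrinks, down to a floor of n_min individuals so neither
    strategy disappears entirely. `step` and `n_min` are illustrative."""
    if best_hlo > best_mbde and n_mbde - step >= n_min:
        n_hlo, n_mbde = n_hlo + step, n_mbde - step
    elif best_mbde > best_hlo and n_hlo - step >= n_min:
        n_hlo, n_mbde = n_hlo - step, n_mbde + step
    return n_hlo, n_mbde
```

On a tie the split is left unchanged, so the competition only reacts to a strict fitness advantage.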

3.2. DEHLO2

In real society, the same problem can be solved with different approaches, but in a given period there is often a mainstream method, and the mainstream method may be replaced by another as the needs of the problem change. This mirrors the way humans learn, "practice, knowledge, again practice, and again knowledge" [52]: the cycle repeats endlessly, and with each cycle the content of practice and knowledge rises to a higher level, like a spiral. In DEHLO2, the HLO and MBDE strategies are applied to the whole population and executed alternately to mimic these learning behaviors. First, the entire population adopts the HLO strategy to search for the optimal solution. If the best solution cannot be updated within a specified number of iterations, the HLO learning process is considered to have encountered a bottleneck, and the MBDE strategy is executed instead, which may allow the algorithm to escape from the bottleneck, and vice versa: if MBDE cannot improve the best solution after a certain number of iterations, HLO is executed again to update the individuals of the population. The flowchart of DEHLO2 is shown in Figure 2.
Figure 2

The flowchart of DEHLO2.

The procedure of DEHLO2 can be described as follows:

Step 1: Set the control parameters, including the population size (popSize), the maximum generation (Gmax), the switching iterations of the search strategies, and the control parameters of HLO and MBDE.
Step 2: Initialize the population randomly, calculate the fitness of each individual, and initialize the IKD and SKD.
Step 3: Update the individuals of the population as equations (8)-(11) of the MBDE algorithm; when the global best of MBDE has not been updated after the set number of iterations, update the individuals of the population with the HLO algorithm as equation (7), and so forth, to generate the new population.
Step 4: Calculate the fitness of each new individual and update the IKD and SKD.
Step 5: If the termination conditions are met, stop the iteration; otherwise, go to Step 3.
Step 6: Output the optimal solution.
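The alternating search described above can be sketched as a single loop; `hlo_step` and `mbde_step` are assumed to take a population and return the updated population, standing in for the full operators of the two algorithms:

```python
def dehlo2(evaluate, init_pop, hlo_step, mbde_step, g_max, n_switch):
    """Skeleton of DEHLO2's alternating search (a sketch of Steps 1-6):
    run one strategy until the global best stalls for n_switch
    generations, then switch to the other strategy and reset the
    stall counter."""
    pop = init_pop
    best = max(evaluate(x) for x in pop)
    use_mbde, stall = True, 0
    for _ in range(g_max):
        step = mbde_step if use_mbde else hlo_step
        pop = step(pop)
        new_best = max(evaluate(x) for x in pop)
        if new_best > best:
            best, stall = new_best, 0
        else:
            stall += 1
            if stall >= n_switch:            # bottleneck: switch strategy
                use_mbde, stall = not use_mbde, 0
    return best
```

A toy run with a step that flips one bit and an identity step shows the switching logic without needing the full operators.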

3.3. Algorithm Complexity

DEHLO1 and DEHLO2 both have two phases, i.e., population initialization and the iterative search. The running times of generating the initial population X, the individual knowledge database (IKD), and the social knowledge database (SKD) are N × M, N × M, and (M + log N), respectively, where M and N denote the dimension of solutions and the size of the population. So the overall running time of the population initialization is ((2N + 1) × M + log N). During the iterative search of the DEHLOs, generating new individuals costs time N × M, performing the crossover operation costs time N × M, and updating the IKD and the SKD costs times of N × (M + log K) and (log N + log S + M), respectively, where K is the predefined number of solutions saved in the IKD and S denotes the size of the SKD. Therefore, the running time of each iterative step is ((3N + 1) × M + log(N × S × K)). Assuming that the maximum generation of the DEHLOs is G, the iterative search phase takes time G × ((3N + 1) × M + log(N × S × K)). In general, G is much greater than N, K, and S, and therefore the time complexity of the DEHLOs is O((3N + 1) × G × M).

4. Experimental Results and Discussions

To verify the performance of the two proposed algorithms, DEHLO1 and DEHLO2, the DEHLOs as well as six other binary-coding optimization algorithms, i.e., Improved Adaptive Human Learning Optimization (IAHLO) [37], Simple Human Learning Optimization (SHLO) [34], Modified Binary Differential Evolution (MBDE) [47], Novel Binary Differential Evolution (NBDE) [53], Improved Binary Particle Swarm Optimization (IBPSO) [54], and Novel Binary Gaining Sharing Knowledge-based optimization (NBGSK) [17], were applied to solve multidimensional knapsack problems [55]. The parameters pr, pi, CR, F, and b adopt the default values of HLO and MBDE, and a set of fair parameters, i.e., Cn and K of DEHLO1 and NM and NH of DEHLO2, was chosen by trial and error, namely Cn = 100, K = 5%, NM = 100, and NH = 50. For a fair comparison, the recommended parameters of all compared algorithms, listed in Table 1, were used. Since the DEHLOs are designed for solving single-objective problems, the sizes of the IKDs and the SKD are both set to 1 [35] to enhance search efficiency and reduce the computational cost. Besides, the IKD of the DEHLOs was reinitialized to further enhance diversity if it was not updated within 100 successive generations. The computations were carried out on a PC with an Intel Core i5-6402P @ 2.8 GHz CPU and 8 GB RAM running Java 1.7.0 on a Windows 8.1 64-bit operating system.
Table 1

The recommended parameter values of all the algorithms.

Algorithm | Parameter settings
DEHLO1 | pr = 5/M, pi = 0.85 + 2/M, CR = 0.2, F = 0.8, b = 20, Cn = 100, K = 5%
DEHLO2 | pr = 5/M, pi = 0.85 + 2/M, CR = 0.2, F = 0.8, b = 20, NM = 100, NH = 50
IAHLO [37] | prmin1 = 0.02, prmin2 = 0.05, prmax = 0.15, pi = 0.85 + 2/M, Sp = 0.2 × Gmax
SHLO [34] | pr = 5/M, pi = 0.85 + 2/M
MBDE [47] | CR = 0.2, F = 0.8, b = 20
NBDE [53] | F = 1.0, CR = 0.5, flip = 0.2, Umin = 0.1 × M, Umax = 0.9 × M
IBPSO [54] | ωmin = 0.0, ωmax = 2.0, c1 = 1.75, c2 = 2.00, Vmin = −6, Vmax = 6
NBGSK [17] | NPmin = 12, NPmax = 200, kf = 1.0, kr = 0.9, p = 0.1, δ = 100, λ = −100

Note. M is the dimension of solutions.

4.1. A Set of Multidimensional Knapsack Problems

Knapsack problems have been studied intensively in the last few decades, and multidimensional knapsack problems (MKPs) [55] are multiconstrained problems with more than one constraint. An MKP can be formulated as

maximize f(x) = Σ_{j=1..n} p_j x_j
subject to Σ_{j=1..n} w_ij x_j ≤ C_i, i = 1, 2, ..., m, x_j ∈ {0, 1}, (12)

where the binary decision variables x_j indicate whether item j is included in the knapsack. Without loss of generality, knapsack problems assume that all profits p_j and weights w_ij are positive and that every weight is smaller than the corresponding capacity C_i. Since the capacity of the knapsack is limited and the total weight of the packed items may exceed the constraints, such violations are unacceptable and must be checked. Thus, the penalty function method as (13) is adopted to handle infeasible solutions,

F(x) = Σ_{j=1..n} p_j x_j − β × Σ_{i=1..m} max(0, Σ_{j=1..n} w_ij x_j − C_i), (13)

where the penalty coefficient β is a large constant that drives the algorithm out of the infeasible region. For a comprehensive comparison, a total of 30 multidimensional knapsack problems, i.e., the instances 5.250.00-29, are adopted to test the performance of the DEHLOs as well as the other metaheuristics. The population size and the maximum generation of all the algorithms are set to 100 and 5000, respectively. Four indicators, i.e., the best fitness value (Best), the mean best fitness value (Mean), the worst fitness value (Worst), and the standard deviation (Std), are used to evaluate the performance. Each algorithm was run 100 times on each problem independently. The numerical results are given in Table 2.
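A minimal sketch of the penalized fitness evaluation described above; the linear form of the violation term and the value of β are illustrative assumptions:

```python
def mkp_fitness(x, profits, weights, capacities, beta=10**6):
    """Penalized MKP objective (a sketch): total profit minus beta times
    the summed constraint violation over the m knapsack constraints.
    `weights` is an m-by-n matrix of w_ij; beta is an illustrative
    large constant, not the paper's exact value."""
    profit = sum(p * xi for p, xi in zip(profits, x))
    violation = 0
    for row, cap in zip(weights, capacities):
        load = sum(w * xi for w, xi in zip(row, x))
        violation += max(0, load - cap)       # only overloads are penalized
    return profit - beta * violation
```

A feasible packing is scored by its plain profit, while any overloaded solution scores far below every feasible one, so the penalty steers the search back into the feasible region.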
Table 2

The results of all algorithms on the multidimensional knapsack problems.

Problem | Algorithm | Best | Mean | Worst | Std | t-test | W-test

5.250.0
DEHLO2 | 59208 | 59071.35 | 58968 | 45.81
DEHLO1 | 59196 | 59054.47 | 58941 | 46.52 | 1 | 1
IAHLO | 58541 | 58145.82 | 57831 | 130.21 | 1 | 1
SHLO | 59170 | 58990.19 | 58845 | 65.47 | 1 | 1
MBDE | 58900 | 58765.98 | 58643 | 47.17 | 1 | 1
NBDE | 58745 | 58269.03 | 57715 | 229.58 | 1 | 1
IBPSO | 58935 | 58521.45 | 57942 | 188.27 | 1 | 1
NBGSK | 57486 | 56579.44 | 55336 | 411.20 | 1 | 1

5.250.1
DEHLO2 | 61446 | 61381.94 | 61268 | 50.44
DEHLO1 | 61377 | 61308.04 | 61209 | 46.25 | 1 | 1
IAHLO | 60550 | 60117.68 | 59695 | 158.28 | 1 | 1
SHLO | 61435 | 61274.52 | 61138 | 62.09 | 1 | 1
MBDE | 61139 | 61096.41 | 60969 | 40.32 | 1 | 1
NBDE | 61078 | 60269.88 | 59566 | 380.60 | 1 | 1
IBPSO | 61213 | 60795.96 | 60073 | 214.59 | 1 | 1
NBGSK | 59324 | 58075.21 | 56888 | 516.38 | 1 | 1

5.250.2
DEHLO2 | 62057 | 61959.72 | 61876 | 45.92
DEHLO1 | 62028 | 61946.21 | 61855 | 43.06 | 1 | 1
IAHLO | 61013 | 60599.87 | 60309 | 154.33 | 1 | 1
SHLO | 62008 | 61865.90 | 61682 | 54.89 | 1 | 1
MBDE | 62057 | 61937.51 | 61850 | 41.77 | 1 | 1
NBDE | 61417 | 60780.85 | 60265 | 225.67 | 1 | 1
IBPSO | 61640 | 61166.56 | 60485 | 240.18 | 1 | 1
NBGSK | 60205 | 59110.06 | 58296 | 395.35 | 1 | 1

5.250.3
DEHLO2 | 59343 | 59235.19 | 59143 | 39.21
DEHLO1 | 59315 | 59233.84 | 59123 | 41.67 | 0 | 0
IAHLO | 58615 | 58294.18 | 58042 | 117.85 | 1 | 1
SHLO | 59304 | 59162.28 | 58988 | 61.14 | 1 | 1
MBDE | 59334 | 59238.46 | 59158 | 40.94 | 0 | 0
NBDE | 58760 | 58388.26 | 57986 | 184.56 | 1 | 1
IBPSO | 59168 | 58752.84 | 58406 | 163.98 | 1 | 1
NBGSK | 57855 | 57014.16 | 56243 | 340.69 | 1 | 1

5.250.4
DEHLO2 | 58913 | 58799.33 | 58665 | 44.29
DEHLO1 | 58935 | 58791.13 | 58696 | 47.27 | 0 | 0
IAHLO | 57865 | 57540.15 | 57145 | 143.82 | 1 | 1
SHLO | 58878 | 58703.24 | 58564 | 60.21 | 1 | 1
MBDE | 58877 | 58758.46 | 58631 | 44.55 | 1 | 1
NBDE | 58176 | 57666.00 | 57090 | 239.22 | 1 | 1
IBPSO | 58608 | 58171.05 | 57670 | 190.88 | 1 | 1
NBGSK | 56972 | 55896.45 | 55107 | 417.30 | 1 | 1

5.250.5
DEHLO2 | 60005 | 59884.27 | 59786 | 43.45
DEHLO1 | 59980 | 59865.34 | 59752 | 52.38 | 1 | 1
IAHLO | 58760 | 58457.12 | 57975 | 149.44 | 1 | 1
SHLO | 59969 | 59784.46 | 59645 | 65.58 | 1 | 1
MBDE | 59945 | 59842.43 | 59696 | 47.81 | 1 | 1
NBDE | 59220 | 58724.95 | 58246 | 209.78 | 1 | 1
IBPSO | 59714 | 59151.86 | 58576 | 258.90 | 1 | 1
NBGSK | 58032 | 56999.98 | 56025 | 441.22

5.250.6
DEHLO2 | 60363 | 60300.41 | 60222 | 29.38
DEHLO1 | 60358 | 60281.02 | 60199 | 32.38 | 1 | 1
IAHLO | 59378 | 58953.02 | 58536 | 163.95 | 1 | 1
SHLO | 60353 | 60221.83 | 59964 | 58.84 | 1 | 1
MBDE | 60341 | 60295.39 | 60216 | 31.27 | 0 | 0
NBDE | 59968 | 59306.75 | 58585 | 334.58 | 1 | 1
IBPSO | 60128 | 59697.42 | 58954 | 210.21 | 1 | 1
NBGSK | 58256 | 57192.39 | 55838 | 529.42 | 1 | 1

5.250.7
DEHLO2 | 61443 | 61364.97 | 61258 | 38.19
DEHLO1 | 61443 | 61354.31 | 61227 | 45.12 | 0 | 0
IAHLO | 60401 | 60031.70 | 59625 | 147.33 | 1 | 1
SHLO | 61443 | 61276.94 | 61141 | 61.80 | 1 | 1
MBDE | 61443 | 61329.16 | 61185 | 45.60 | 1 | 1
NBDE | 60741 | 60127.33 | 59586 | 285.43 | 1 | 1
IBPSO | 61195 | 60793.69 | 60209 | 183.45 | 1 | 1
NBGSK | 59397 | 58110.16 | 57055 | 496.41 | 1 | 1

5.250.8
DEHLO2 | 61885 | 61783.26 | 61698 | 37.56
DEHLO1 | 61873 | 61776.09 | 61688 | 38.60 | 0 | 0
IAHLO | 60832 | 60330.40 | 59847 | 192.49 | 1 | 1
SHLO | 61849 | 61711.02 | 61579 | 53.10 | 1 | 1
MBDE | 61831 | 61750.80 | 61627 | 36.37 | 1 | 1
NBDE | 61332 | 60640.49 | 59841 | 293.99 | 1 | 1
IBPSO | 61626 | 61116.24 | 60530 | 208.09 | 1 | 1
NBGSK | 59896 | 58378.40 | 57110 | 608.34 | 1 | 1

5.250.9
DEHLO2 | 58906 | 58825.17 | 58768 | 26.75
DEHLO1 | 58915 | 58818.13 | 58755 | 31.43 | 0 | 0
IAHLO | 58085 | 57822.15 | 57505 | 127.82 | 1 | 1
SHLO | 58865 | 58759.37 | 58618 | 51.08 | 1 | 1
MBDE | 58918 | 58831.57 | 58695 | 43.94 | 0 | 0
NBDE | 58651 | 58235.22 | 57531 | 240.98 | 1 | 1
IBPSO | 58803 | 58407.19 | 57940 | 165.90 | 1 | 1
NBGSK | 57454 | 56359.20 | 55279 | 444.44 | 1 | 1

5.250.10
DEHLO2 | 109031 | 108945.41 | 108878 | 35.61
DEHLO1 | 109051 | 108935.47 | 108850 | 37.12 | 0 | 0
IAHLO | 108164 | 107737.36 | 107401 | 157.60 | 1 | 1
SHLO | 109013 | 108879.42 | 108723 | 49.85 | 1 | 1
MBDE | 109047 | 108930.03 | 108875 | 29.74 | 1 | 1
NBDE | 108652 | 108235.63 | 107873 | 188.02 | 1 | 1
IBPSO | 108820 | 108358.03 | 107786 | 183.12 | 1 | 1
NBGSK | 107078 | 105016.71 | 102248 | 830.71 | 1 | 1

5.250.11
DEHLO2 | 109788 | 109724.02 | 109671 | 30.13
DEHLO1 | 109821 | 109715.09 | 109620 | 34.97 | 0 | 0
IAHLO | 108832 | 108389.65 | 108106 | 157.90 | 1 | 1
SHLO | 109778 | 109643.79 | 109526 | 55.61 | 1 | 1
MBDE | 109821 | 109731.71 | 109666 | 33.94 | 0 | 0
NBDE | 109407 | 109035.96 | 108574 | 182.36 | 1 | 1
IBPSO | 109498 | 109134.90 | 108575 | 203.18 | 1 | 1
NBGSK | 107415 | 105664.99 | 102848 | 960.86 | 1 | 1

5.250.12
DEHLO2 | 108480 | 108421.36 | 108341 | 31.26
DEHLO1 | 108481 | 108391.59 | 108271 | 44.11 | 1 | 1
IAHLO | 107602 | 107248.20 | 106838 | 147.38 | 1 | 1
SHLO | 108472 | 108308.74 | 108154 | 63.91 | 1 | 1
MBDE | 108504 | 108402.61 | 108317 | 36.50 | 1 | 1
NBDE | 108108 | 107752.60 | 107255 | 177.67 | 1 | 1
IBPSO | 108202 | 107802.48 | 107355 | 188.54 | 1 | 1
NBGSK | 106129 | 104260.07 | 101348 | 956.81 | 1 | 1

5.250.13
DEHLO2 | 109352 | 109291.79 | 109229 | 28.48
DEHLO1 | 109356 | 109279.64 | 109210 | 31.72 | 1 | 1
IAHLO | 108392 | 108113.52 | 107871 | 117.43 | 1 | 1
SHLO | 109325 | 109220.67 | 109081 | 45.88 | 1 | 1
MBDE | 109351 | 109276.32 | 109208 | 31.63 | 1 | 1
NBDE | 109124 | 108621.42 | 108222 | 192.78 | 1 | 1
IBPSO | 109113 | 108650.60 | 107755 | 230.00 | 1 | 1
NBGSK | 107356 | 105919.36 | 104001 | 825.83 | 1 | 1

5.250.14
DEHLO2 | 110654 | 110559.06 | 110476 | 37.70
DEHLO1 | 110639 | 110537.86 | 110459 | 35.69 | 1 | 1
IAHLO | 109510 | 109124.47 | 108774 | 150.24 | 1 | 1
SHLO | 110602 | 110469.79 | 110342 | 56.66 | 1 | 1
MBDE | 110632 | 110553.12 | 110462 | 33.98 | 0 | 0
NBDE | 110256 | 109752.20 | 109320 | 231.02 | 1 | 1
IBPSO | 110359 | 109948.59 | 109246 | 222.17 | 1 | 1
NBGSK | 108155 | 106374.74 | 104159 | 818.68 | 1 | 1

5.250.15
DEHLO2 | 110202 | 110108.40 | 110006 | 36.40
DEHLO1 | 110191 | 110092.18 | 109992 | 42.80 | 1 | 1
IAHLO | 109213 | 108875.59 | 108564 | 125.81 | 1 | 1
SHLO | 110136 | 110005.03 | 109797 | 58.11 | 1 | 1
MBDE | 110175 | 110078.90 | 110001 | 40.38 | 1 | 1
NBDE | 109892 | 109405.00 | 108941 | 221.35 | 1 | 1
IBPSO | 109885 | 109526.95 | 108827 | 227.84 | 1 | 1
NBGSK | 107897 | 106311.66 | 103800 | 828.51 | 1 | 1

5.250.16
DEHLO2 | 108990 | 108921.89 | 108852 | 29.26
DEHLO1 | 109002 | 108905.32 | 108811 | 33.75 | 1 | 1
IAHLO | 107916 | 107558.11 | 107196 | 146.05 | 1 | 1
SHLO | 108987 | 108837.38 | 108712 | 52.22 | 1 | 1
MBDE | 109002 | 108914.46 | 108837 | 25.72 | 1 | 0
NBDE | 108638 | 108251.11 | 107792 | 185.92 | 1 | 1
IBPSO | 108741 | 108383.70 | 107829 | 186.15 | 1 | 1
NBGSK | 106606 | 105029.12 | 103040 | 813.45 | 1 | 1

5.250.17
DEHLO2 | 108978 | 108880.64 | 108798 | 38.02
DEHLO1 | 108979 | 108875.64 | 108794 | 40.73 | 0 | 0
IAHLO | 107931 | 107553.41 | 107164 | 154.42 | 1 | 1
SHLO | 108942 | 108807.05 | 108662 | 58.16 | 1 | 1
MBDE | 108931 | 108861.85 | 108756 | 33.72 | 1 | 1
NBDE | 108555 | 108011.37 | 107658 | 197.88 | 1 | 1
IBPSO | 108695 | 108306.38 | 107821 | 190.34 | 1 | 1
NBGSK | 106414 | 104892.29 | 102497 | 910.07 | 1 | 1

5.250.18
DEHLO2 | 109944 | 109831.24 | 109759 | 33.43
DEHLO1 | 109908 | 109821.03 | 109746 | 37.57 | 0 | 0
IAHLO | 109171 | 108759.55 | 108514 | 122.98 | 1 | 1
SHLO | 109858 | 109722.03 | 109575 | 62.95 | 1 | 1
MBDE | 109956 | 109814.82 | 109654 | 57.86 | 1 | 1
NBDE | 109703 | 109325.19 | 108829 | 164.61 | 1 | 1
IBPSO | 109647 | 109241.38 | 108573 | 212.85 | 1 | 1
NBGSK | 108304 | 106184.02 | 103343 | 1013.96 | 1 | 1

5.250.19
DEHLO2 | 107023 | 106945.49 | 106871 | 27.69
DEHLO1 | 106999 | 106927.56 | 106833 | 27.89 | 1 | 1
IAHLO | 106167 | 105667.04 | 105270 | 154.47 | 1 | 1
SHLO | 107009 | 106872.17 | 106786 | 49.62 | 1 | 1
MBDE | 107023 | 106952.87 | 106844 | 27.00 | 0 | 0
NBDE | 106694 | 106226.87 | 105724 | 248.58 | 1 | 1
IBPSO | 106679 | 106364.73 | 105897 | 181.83 | 1 | 1
NBGSK | 104423 | 102663.29 | 99947 | 962.96 | 1 | 1

5.250.20
DEHLO2 | 149623 | 149543.31 | 149484 | 29.41
DEHLO1 | 149634 | 149533.39 | 149468 | 34.64 | 1 | 1
IAHLO | 148681 | 148320.02 | 147978 | 140.74 | 1 | 1
SHLO | 149573 | 149470.07 | 149382 | 41.14 | 1 | 1
MBDE | 149539 | 149342.64 | 149032 | 110.93 | 1 | 1
NBDE | 148884 | 148622.34 | 148307 | 123.10 | 1 | 1
IBPSO | 149306 | 148955.74 | 148331 | 181.61 | 1 | 1
NBGSK | 147760 | 146521.63 | 143993 | 672.83 | 1 | 1

5.250.21
DEHLO2 | 155940 | 155897.43 | 155838 | 23.65
DEHLO1 | 155944 | 155875.40 | 155806 | 30.46 | 1 | 1
IAHLO | 155065 | 154738.49 | 154326 | 144.47 | 1 | 1
SHLO | 155890 | 155820.99 | 155677 | 41.30 | 1 | 1
MBDE | 155898 | 155721.54 | 155461 | 99.55 | 1 | 1
NBDE | 155431 | 155258.09 | 154912 | 91.96 | 1 | 1
IBPSO | 155691 | 155382.19 | 154855 | 175.20 | 1 | 1
NBGSK | 154255 | 152302.20 | 150353 | 840.25 | 1 | 1

5.250.22
DEHLO2 | 149301 | 149239.94 | 149187 | 27.44
DEHLO1 | 149301 | 149218.06 | 149147 | 32.76 | 1 | 1
IAHLO | 148471 | 148143.82 | 147699 | 146.26 | 1 | 1
SHLO | 149301 | 149172.26 | 149075 | 45.63 | 1 | 1
MBDE | 149229 | 149013.95 | 148749 | 114.15 | 1 | 1
NBDE | 148639 | 148381.64 | 147994 | 137.00 | 1 | 1
IBPSO | 149091 | 148772.64 | 148339 | 160.97 | 1 | 1
NBGSK | 147441 | 146336.57 | 144699 | 605.23 | 1 | 1

5.250.23
DEHLO2 | 152130 | 152084.27 | 152009 | 20.64
DEHLO1 | 152124 | 152070.18 | 151999 | 24.23 | 1 | 1
IAHLO | 151098 | 150707.83 | 150292 | 169.01 | 1 | 1
SHLO | 152114 | 152007.41 | 151871 | 49.62 | 1 | 1
MBDE | 152073 | 151899.50 | 151719 | 90.65 | 1 | 1
NBDE | 151686 | 151389.37 | 150953 | 159.61 | 1 | 1
IBPSO | 151898 | 151463.97 | 151054 | 178.66 | 1 | 1
NBGSK | 150151 | 148785.67 | 146882 | 693.66 | 1 | 1

5.250.24
DEHLO2 | 150353 | 150297.60 | 150229 | 20.04
DEHLO1 | 150351 | 150277.77 | 150199 | 30.33 | 1 | 1
IAHLO | 149405 | 148986.69 | 148598 | 153.20 | 1 | 1
SHLO | 150310 | 150235.86 | 150136 | 40.98 | 1 | 1
MBDE | 150353 | 150096.92 | 149785 | 137.68 | 1 | 1
NBDE | 149678 | 149484.92 | 149221 | 103.73 | 1 | 1
IBPSO | 150095 | 149672.29 | 148886 | 212.06 | 1 | 1
NBGSK | 148524 | 146966.44 | 145005 | 709.20 | 1 | 1

5.250.25
DEHLO2 | 150045 | 149978.52 | 149870 | 31.92
DEHLO1 | 150045 | 149954.51 | 149868 | 38.76 | 1 | 1
IAHLO | 149308 | 148912.90 | 148632 | 131.50 | 1 | 1
SHLO | 149983 | 149871.36 | 149720 | 53.00 | 1 | 1
MBDE | 149918 | 149742.86 | 149387 | 99.89 | 1 | 1
NBDE | 149352 | 149183.69 | 148878 | 83.20 | 1 | 1
IBPSO | 149895 | 149532.97 | 148973 | 165.35 | 1 | 1
NBGSK | 148482 | 147229.26 | 144434 | 827.86 | 1 | 1

5.250.26
DEHLO2 | 148574 | 148507.49 | 148446 | 24.57
DEHLO1 | 148553 | 148499.85 | 148425 | 28.71 | 1 | 1
IAHLO | 147764 | 147416.29 | 147078 | 146.96 | 1 | 1
SHLO | 148542 | 148445.73 | 148306 | 46.31 | 1 | 1
MBDE | 148512 | 148362.46 | 148147 | 91.14 | 1 | 1
NBDE | 148199 | 147972.07 | 147504 | 106.02 | 1 | 1
IBPSO | 148405 | 148015.40 | 147518 | 206.74 | 1 | 1
NBGSK | 146709 | 145373.45 | 143358 | 782.85 | 1 | 1

5.250.27
DEHLO2 | 149767 | 149746.97 | 149714 | 14.04
DEHLO1 | 149782 | 149736.77 | 149684 | 20.57 | 1 | 1
IAHLO | 148940 | 148436.47 | 147929 | 186.75 | 1 | 1
SHLO | 149767 | 149694.35 | 149579 | 36.13 | 1 | 1
MBDE | 149767 | 149523.50 | 149257 | 103.60 | 1 | 1
NBDE | 148887 | 148601.80 | 148006 | 185.60 | 1 | 1
IBPSO | 149628 | 149230.72 | 148773 | 172.73 | 1 | 1
NBGSK | 147575 | 146086.06 | 144103 | 771.83 | 1 | 1

5.250.28
DEHLO2 | 155075 | 155012.04 | 154961 | 25.70
DEHLO1 | 155075 | 154993.48 | 154914 | 31.91 | 1 | 1
IAHLO | 154135 | 153707.79 | 153291 | 165.97 | 1 | 1
SHLO | 155029 | 154927.58 | 154814 | 38.04 | 1 | 1
MBDE | 155032 | 154900.12 | 154715 | 69.24 | 1 | 1
NBDE | 154664 | 154414.09 | 153963 | 144.58 | 1 | 1
IBPSO | 154806 | 154514.11 | 153986 | 160.66 | 1 | 1
NBGSK | 153292 | 151840.26 | 149513 | 704.28 | 1 | 1

5.250.29
DEHLO2 | 154668 | 154640.60 | 154590 | 17.70
DEHLO1 | 154668 | 154623.56 | 154542 | 21.97 | 1 | 1
IAHLO | 153751 | 153406.13 | 153011 | 140.95 | 1 | 1
SHLO | 154668 | 154562.83 | 154434 | 52.21 | 1 | 1
MBDE | 154653 | 154460.96 | 154239 | 76.42 | 1 | 1
NBDE | 154298 | 154056.73 | 153720 | 108.12 | 1 | 1
IBPSO | 154641 | 154136.17 | 153595 | 209.88 | 1 | 1
NBGSK | 152952 | 151403.32 | 148808 | 859.73 | 1 | 1
To better compare the performance of the DEHLOs with the other algorithms, the results of Student's t-test (t-test) and the Wilcoxon signed-rank test (W-test) are also listed in Table 2, where "1" indicates that DEHLO2 is significantly better than the compared algorithm at the 95% confidence level, "−1" indicates that DEHLO2 is significantly worse, and "0" denotes that the performance of DEHLO2 is equivalent to that of the compared algorithm. Note that the t-test, a parametric test, requires normality and homogeneity of variance, while the W-test, a nonparametric test, does not. Therefore, the t-test is more reliable when the Gaussian distribution assumption is met, while the W-test is more powerful when this assumption is violated [35]. For convenience, the results of the t-test and W-test are summarized in Table 3.
Table 3

The summary results of the t-test and W-test on multidimensional knapsack problems.

Metric | DEHLO2 | DEHLO1 | IAHLO | SHLO | MBDE | NBDE | IBPSO | NBGSK
t-test: 1 |  | 21 | 30 | 30 | 24 | 30 | 30 | 30
t-test: 0 |  | 9 | 0 | 0 | 6 | 0 | 0 | 0
t-test: −1 |  | 0 | 0 | 0 | 0 | 0 | 0 | 0
W-test: 1 |  | 21 | 30 | 30 | 23 | 30 | 30 | 30
W-test: 0 |  | 9 | 0 | 0 | 7 | 0 | 0 | 0
W-test: −1 |  | 0 | 0 | 0 | 0 | 0 | 0 | 0
Table 2 shows that the proposed DEHLO2 obtains the best numerical results on 26 out of 30 instances. Besides, the summary results of the t-test show that DEHLO2 is significantly better than DEHLO1, IAHLO, SHLO, MBDE, NBDE, IBPSO, and NBGSK on 21, 30, 30, 24, 30, 30, and 30 out of 30 instances, respectively. The W-test results likewise show that DEHLO2 is significantly superior to DEHLO1, IAHLO, SHLO, MBDE, NBDE, IBPSO, and NBGSK on 21, 30, 30, 23, 30, 30, and 30 out of 30 instances, respectively. Based on Tables 2 and 3, it is fair to say that DEHLO2 outperforms the other algorithms on the multidimensional knapsack problems.

4.2. Another Set of Multidimensional Knapsack Problems

To further verify the performance of the proposed algorithms, another set of multidimensional knapsack problems [53] is adopted as the test benchmark; the instances are listed in Table 4. The results of all algorithms on these MKPs are given in Table 5, where the best solutions have been highlighted in bold, and the summary results of the t-test and W-test are given in Table 6. To analyze the convergence behavior of the proposed DEHLOs, the convergence curves of all algorithms on the MKPs are drawn in Figure 3.
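For reference, a multidimensional knapsack instance assigns each of n items a profit and m resource consumptions, and a binary solution is feasible only if every one of the m capacity constraints holds. The evaluation can be sketched as follows; this is a toy illustration with hypothetical names, not the benchmark loader or the constraint-handling scheme used in the paper (returning −1 for infeasible solutions is just one simple penalty choice).

```python
def mkp_value(x, profits, weights, capacities):
    """Objective value of binary solution x for an MKP instance.

    weights[i][j] is the consumption of resource i by item j.
    Infeasible solutions return -1 so that a maximizer discards them.
    """
    for w_i, b_i in zip(weights, capacities):
        if sum(w * xj for w, xj in zip(w_i, x)) > b_i:
            return -1
    return sum(p * xj for p, xj in zip(profits, x))


# Toy instance: 4 items, 2 resources.
profits = [10, 7, 4, 3]
weights = [[5, 4, 3, 1],   # resource 1
           [4, 6, 2, 2]]   # resource 2
capacities = [8, 8]

# Items 1 and 3: consumption (8, 6) fits (8, 8), profit 10 + 4 = 14.
assert mkp_value([1, 0, 1, 0], profits, weights, capacities) == 14
# Items 1 and 2: consumption (9, 10) violates both constraints.
assert mkp_value([1, 1, 0, 0], profits, weights, capacities) == -1
```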
Table 4

The multidimensional knapsack problem benchmarks.

Benchmark NO. | Benchmark name | Best known | n | M
 1 | mknapcb1–5.100–00  |  24381 | 100 |  5
 2 | mknapcb1–5.100–01  |  24274 | 100 |  5
 3 | mknapcb2–5.250–00  |  59312 | 250 |  5
 4 | mknapcb2–5.250–01  |  61472 | 250 |  5
 5 | mknapcb3–5.500–00  | 120130 | 500 |  5
 6 | mknapcb3–5.500–01  | 117837 | 500 |  5
 7 | mknapcb4–10.100–00 |  23064 | 100 | 10
 8 | mknapcb4–10.100–01 |  22801 | 100 | 10
 9 | mknapcb5–10.250–00 |  59187 | 250 | 10
10 | mknapcb5–10.250–01 |  58662 | 250 | 10
11 | mknapcb6–10.500–00 | 117726 | 500 | 10
12 | mknapcb6–10.500–01 | 119139 | 500 | 10
13 | mknapcb8–30.250–29 | 150038 | 250 | 30
14 | mknapcb9–30.500–29 | 301021 | 500 | 30
Table 5

The results of all algorithms on the multidimensional knapsack problems.

Problem | Algorithm | Best | Mean | Worst | Std | t-test | W-test
NO.1  | DEHLO2 | 24381 | 24373.92 | 24337 | 8.95 | — | —
      | DEHLO1 | 24381 | 24364.37 | 24315 | 18.76 | 1 | 1
      | IAHLO  | 24381 | 24297.24 | 24187 | 41.30 | 1 | 1
      | SHLO   | 24357 | 24347.09 | 24292 | 14.41 | 1 | 1
      | MBDE   | 24332 | 24327.72 | 24288 | 6.59 | 1 | 1
      | NBDE   | 24381 | 24285.06 | 24185 | 42.22 | 1 | 1
      | IBPSO  | 24381 | 24177.04 | 23862 | 106.18 | 1 | 1
      | NBGSK  | 24047 | 23721.87 | 23395 | 140.08 | 1 | 1

NO.2  | DEHLO2 | 24274 | 24274.00 | 24274 | 0.00 | — | —
      | DEHLO1 | 24274 | 24262.90 | 24149 | 35.50 | 1 | 1
      | IAHLO  | 24274 | 24136.40 | 23911 | 89.93 | 1 | 1
      | SHLO   | 24250 | 24243.75 | 24125 | 27.38 | 1 | 1
      | MBDE   | 24225 | 24222.67 | 24101 | 16.43 | 1 | 1
      | NBDE   | 24274 | 24194.46 | 23878 | 95.50 | 1 | 1
      | IBPSO  | 24274 | 23964.76 | 23575 | 143.60 | 1 | 1
      | NBGSK  | 23893 | 23388.82 | 22930 | 174.22 | 1 | 1

NO.3  | DEHLO2 | 59208 | 59071.35 | 58968 | 45.81 | — | —
      | DEHLO1 | 59196 | 59054.47 | 58941 | 46.52 | 1 | 1
      | IAHLO  | 58541 | 58145.82 | 57831 | 130.21 | 1 | 1
      | SHLO   | 59170 | 58990.19 | 58845 | 65.47 | 1 | 1
      | MBDE   | 58900 | 58765.98 | 58643 | 47.17 | 1 | 1
      | NBDE   | 58745 | 58269.03 | 57715 | 229.58 | 1 | 1
      | IBPSO  | 58935 | 58521.45 | 57942 | 188.27 | 1 | 1
      | NBGSK  | 57486 | 56579.44 | 55336 | 411.20 | 1 | 1

NO.4  | DEHLO2 | 61446 | 61381.94 | 61268 | 50.44 | — | —
      | DEHLO1 | 61377 | 61308.04 | 61209 | 46.25 | 1 | 1
      | IAHLO  | 60550 | 60117.68 | 59695 | 158.28 | 1 | 1
      | SHLO   | 61435 | 61274.52 | 61138 | 62.09 | 1 | 1
      | MBDE   | 61139 | 61096.41 | 60969 | 40.32 | 1 | 1
      | NBDE   | 61078 | 60269.88 | 59566 | 380.60 | 1 | 1
      | IBPSO  | 61213 | 60795.96 | 60073 | 214.59 | 1 | 1
      | NBGSK  | 59324 | 58075.21 | 56888 | 516.38 | 1 | 1

NO.5  | DEHLO2 | 119661 | 119457.17 | 119243 | 75.81 | — | —
      | DEHLO1 | 119588 | 119409.80 | 119223 | 80.53 | 0 | 0
      | IAHLO  | 116330 | 115483.56 | 114961 | 249.75 | 1 | 1
      | SHLO   | 119582 | 119303.70 | 119008 | 110.02 | 1 | 1
      | MBDE   | 119372 | 119153.95 | 118985 | 93.96 | 1 | 1
      | NBDE   | 116080 | 115220.19 | 114501 | 406.61 | 1 | 1
      | IBPSO  | 118959 | 118292.17 | 117429 | 361.22 | 1 | 1
      | NBGSK  | 115208 | 112449.12 | 111021 | 919.05 | 1 | 1

NO.6  | DEHLO2 | 117579 | 117494.62 | 117356 | 44.63 | — | —
      | DEHLO1 | 117662 | 117498.59 | 117359 | 54.85 | 1 | 1
      | IAHLO  | 114647 | 113959.66 | 113396 | 248.20 | 1 | 1
      | SHLO   | 117543 | 117345.74 | 117099 | 89.98 | 1 | 1
      | MBDE   | 117501 | 117326.38 | 117141 | 80.53 | 1 | 1
      | NBDE   | 115477 | 113941.85 | 112855 | 586.89 | 1 | 1
      | IBPSO  | 116956 | 116314.68 | 115314 | 330.61 | 1 | 1
      | NBGSK  | 113416 | 111349.51 | 109234 | 887.34 | 1 | 1

NO.7  | DEHLO2 | 23064 | 23054.91 | 23026 | 3.19 | — | —
      | DEHLO1 | 23057 | 23052.57 | 22959 | 11.49 | 0 | 1
      | IAHLO  | 23055 | 23040.13 | 22901 | 36.68 | 1 | 1
      | SHLO   | 23041 | 23032.01 | 23027 | 1.17 | 1 | 1
      | MBDE   | 23018 | 23009.34 | 23009 | 1.36 | 1 | 1
      | NBDE   | 23064 | 23029.70 | 22845 | 51.32 | 1 | 1
      | IBPSO  | 23055 | 22863.90 | 22574 | 117.69 | 1 | 1
      | NBGSK  | 22876 | 22593.57 | 22282 | 113.85 | 1 | 1

NO.8  | DEHLO2 | 22801 | 22714.70 | 22541 | 60.08 | — | —
      | DEHLO1 | 22801 | 22713.56 | 22547 | 60.03 | 0 | 0
      | IAHLO  | 22739 | 22517.76 | 22344 | 78.27 | 1 | 1
      | SHLO   | 22801 | 22690.79 | 22502 | 79.50 | 1 | 1
      | MBDE   | 22755 | 22666.18 | 22539 | 53.80 | 1 | 1
      | NBDE   | 22801 | 22478.81 | 22323 | 77.18 | 1 | 1
      | IBPSO  | 22725 | 22386.50 | 21994 | 127.65 | 1 | 1
      | NBGSK  | 22422 | 22067.62 | 21844 | 113.60 | 1 | 1

NO.9  | DEHLO2 | 59071 | 58853.87 | 58679 | 73.36 | — | —
      | DEHLO1 | 59012 | 58796.65 | 58614 | 72.72 | 1 | 1
      | IAHLO  | 58309 | 58031.44 | 57679 | 128.93 | 1 | 1
      | SHLO   | 59071 | 58768.92 | 58551 | 95.55 | 1 | 1
      | MBDE   | 58438 | 58254.09 | 58112 | 54.10 | 1 | 1
      | NBDE   | 58410 | 57849.68 | 57416 | 212.98 | 1 | 1
      | IBPSO  | 58756 | 58337.24 | 57861 | 182.21 | 1 | 1
      | NBGSK  | 57378 | 56515.92 | 55741 | 420.44 | 1 | 1

NO.10 | DEHLO2 | 58637 | 58519.04 | 58359 | 62.07 | — | —
      | DEHLO1 | 58567 | 58449.57 | 58324 | 53.74 | 1 | 1
      | IAHLO  | 57946 | 57355.51 | 57014 | 155.67 | 1 | 1
      | SHLO   | 58599 | 58447.36 | 58292 | 70.06 | 1 | 1
      | MBDE   | 58596 | 58457.51 | 58348 | 54.78 | 1 | 1
      | NBDE   | 57715 | 57135.82 | 56790 | 177.76 | 1 | 1
      | IBPSO  | 58277 | 57812.49 | 57285 | 209.48 | 1 | 1
      | NBGSK  | 56931 | 55925.43 | 55228 | 289.04 | 1 | 1

NO.11 | DEHLO2 | 117149 | 116895.63 | 116606 | 103.48 | — | —
      | DEHLO1 | 117001 | 116672.01 | 116433 | 112.36 | 1 | 1
      | IAHLO  | 114617 | 114048.13 | 113553 | 230.22 | 1 | 1
      | SHLO   | 117194 | 116847.53 | 116390 | 130.52 | 1 | 1
      | MBDE   | 116734 | 116456.38 | 116209 | 118.63 | 1 | 1
      | NBDE   | 114440 | 113394.71 | 112891 | 300.95 | 1 | 1
      | IBPSO  | 116597 | 115690.33 | 114316 | 391.02 | 1 | 1
      | NBGSK  | 112953 | 111386.10 | 110305 | 639.62 | 1 | 1

NO.12 | DEHLO2 | 118732 | 118554.12 | 118281 | 98.71 | — | —
      | DEHLO1 | 118663 | 118426.25 | 118216 | 95.64 | 1 | 1
      | IAHLO  | 116171 | 115720.44 | 115233 | 236.82 | 1 | 1
      | SHLO   | 118768 | 118446.03 | 118100 | 122.62 | 1 | 1
      | MBDE   | 118501 | 118219.57 | 118029 | 103.17 | 1 | 1
      | NBDE   | 115669 | 114706.44 | 114207 | 314.98 | 1 | 1
      | IBPSO  | 118270 | 117310.97 | 116181 | 383.94 | 1 | 1
      | NBGSK  | 115125 | 112837.49 | 110855 | 878.62 | 1 | 1

NO.13 | DEHLO2 | 149595 | 149437.59 | 149346 | 42.40 | — | —
      | DEHLO1 | 149593 | 149432.14 | 149291 | 49.73 | 0 | 0
      | IAHLO  | 148784 | 148447.93 | 148047 | 151.62 | 1 | 1
      | SHLO   | 149496 | 149374.31 | 149222 | 63.78 | 1 | 1
      | MBDE   | 149510 | 149352.93 | 149270 | 60.66 | 1 | 1
      | NBDE   | 149204 | 148977.35 | 148506 | 128.08 | 1 | 1
      | IBPSO  | 149249 | 148737.54 | 147408 | 321.85 | 1 | 1
      | NBGSK  | 148428 | 146898.01 | 144999 | 821.84 | 1 | 1

NO.14 | DEHLO2 | 300152 | 299931.22 | 299756 | 68.23 | — | —
      | DEHLO1 | 300093 | 299889.33 | 299704 | 87.69 | 1 | 1
      | IAHLO  | 295779 | 295030.14 | 294131 | 367.14 | 1 | 1
      | SHLO   | 300070 | 299778.88 | 299484 | 117.69 | 1 | 1
      | MBDE   | 300107 | 299854.78 | 299698 | 71.82 | 1 | 1
      | NBDE   | 298960 | 298199.49 | 295981 | 600.02 | 1 | 1
      | IBPSO  | 299290 | 298355.63 | 296002 | 736.99 | 1 | 1
      | NBGSK  | 296573 | 293231.65 | 287069 | 2210.58 | 1 | 1
Table 6

The summary results of the t-test and W-test on multidimensional knapsack problems.

Metric | DEHLO2 vs | DEHLO1 | IAHLO | SHLO | MBDE | NBDE | IBPSO | NBGSK
t-test |  1 | 10 | 14 | 14 | 14 | 14 | 14 | 14
       |  0 |  4 |  0 |  0 |  0 |  0 |  0 |  0
       | −1 |  0 |  0 |  0 |  0 |  0 |  0 |  0

W-test |  1 | 11 | 14 | 14 | 14 | 14 | 14 | 14
       |  0 |  3 |  0 |  0 |  0 |  0 |  0 |  0
       | −1 |  0 |  0 |  0 |  0 |  0 |  0 |  0
Figure 3

The convergence curves of the MKP (maximum generation = 5000).

It can be seen from Tables 5 and 6 and Figure 3 that DEHLO2 provides the best results and the smallest error among all algorithms. Specifically, DEHLO2 attains the best numerical results on 13 out of 14 instances and is inferior to DEHLO1 only on instance 5.500-01. The summarized t-test and W-test results indicate that the proposed DEHLO2 significantly surpasses IAHLO, HLO, MBDE, NBDE, IBPSO, and NBGSK on all instances, while it is better than, competitive with, and worse than DEHLO1 on 10, 4, and 0 instances under the t-test and on 11, 3, and 0 instances under the W-test, respectively. Furthermore, Figure 3 shows that the proposed DEHLOs converge faster and reach higher solution accuracy than the compared algorithms. Therefore, the introduction of the MBDE strategy significantly enhances the optimization performance of the DEHLOs.

5. Conclusions and Future Work

Human Learning Optimization is a simplified model of human learning; it develops three learning operators, i.e., the random learning operator, the individual learning operator, and the social learning operator, to search for the optimal solution. However, the standard HLO learns only from the global best solution, which is inconsistent with reality: in real life, people also learn from the best solutions of other individuals, and the operators of Differential Evolution (DE) are likewise updated based on the best solutions of other individuals. Inspired by these facts, this paper introduces the optimization strategy of MBDE into HLO and presents two novel differential human learning optimization algorithms, based on the individual and the population, respectively. To evaluate the proposed algorithms comprehensively and fairly, the multidimensional knapsack problems were adopted as benchmark problems to test the DEHLOs against the standard HLO, MBDE, and other metaheuristics. The experimental results demonstrate that the proposed DEHLOs exploit the learning abilities of both algorithms to search for the optimal solution more efficiently and show robust search ability on different problems. It is well known that humans adaptively choose and adjust their learning approaches to solve problems efficiently and effectively; however, the impact of adaptive learning strategies on the algorithm parameters is not considered in this paper. Therefore, one of our future works is to develop adaptive switching learning strategies that better exploit the strengths of different learning strategies on different problems, which will be a challenging task.