
A hybrid salp swarm algorithm based on TLBO for reliability redundancy allocation problems.

Tanmay Kundu1, Pramod K Jain2.   

Abstract

A novel optimization algorithm called hybrid salp swarm algorithm with teaching-learning based optimization (HSSATLBO) is proposed in this paper to solve reliability redundancy allocation problems (RRAP) with nonlinear resource constraints. The salp swarm algorithm (SSA) is one of the newest meta-heuristic algorithms, mimicking the swarming behaviour of salps. It is an efficient swarm optimization technique that has been used to solve various kinds of complex optimization problems. However, SSA suffers from a slow convergence rate due to its poor exploitation ability. To address this inadequacy and achieve a better balance between exploration and exploitation, the proposed hybrid method HSSATLBO renovates the searching procedures of SSA based on the TLBO algorithm. The good global search ability of SSA and the fast convergence of TLBO help to maximize the system reliability through the choices of redundancy and component reliability. The performance of the proposed HSSATLBO algorithm has been demonstrated on seven well-known benchmark problems related to reliability optimization, which include the series system, complex (bridge) system, series-parallel system, overspeed protection system, convex system, mixed series-parallel system, and large-scale system with dimensions 36, 38, 40, 42 and 50. After illustration, the outcomes of the proposed HSSATLBO are compared with several recently developed competitive meta-heuristic algorithms and also with three improved variants of SSA. Additionally, the HSSATLBO results are statistically investigated with the Wilcoxon signed-rank test and multiple comparison tests to show the significance of the results. The experimental results suggest that HSSATLBO significantly outperforms the other algorithms and is a remarkable and promising tool for solving RRAP.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022.


Keywords:  Constrained optimization; Reliability redundancy allocation problem; Salp swarm algorithm; TLBO

Year:  2022        PMID: 36161208      PMCID: PMC9481865          DOI: 10.1007/s10489-021-02862-w

Source DB:  PubMed          Journal:  Appl Intell (Dordr)        ISSN: 0924-669X            Impact factor:   5.019


Introduction

Since 1950, reliability optimization has played a progressively decisive role because of its critical requirements in several engineering and industrial applications, and it has become a hot research topic in the engineering field. To remain competitive, the basic goal of a reliability engineer is always to improve the reliability of product components or manufacturing systems. Obviously, an excellent reliability design enables a system to run more safely and reliably. In general, reliability optimization problems can be classified into two classes: integer reliability problems (IRP) and mixed-integer reliability problems (MIRP). In IRP, the component reliabilities of the system are known and the main task is only to allocate the number of redundant components. In MIRP, both the component reliabilities and the redundancy allocation of the system are designed simultaneously. This kind of problem, in which the reliability of the system is maximized through the choices of redundancy and component reliability, is also known as the reliability-redundancy allocation problem (RRAP). In a RRAP, the redundancy levels are integer values and the component reliabilities are continuous values lying between zero and one. Several researchers have worked in this field to solve RRAP with the objective of maximizing system reliability under constraints such as system cost, volume, and weight [44–48, 66, 76]. RRAP is considered an NP-hard combinatorial optimization problem because of its complexity, and it has been the subject of much prior research across various optimization approaches. Recently, meta-heuristic algorithms (MHAs) have been successfully applied to deal with the computational difficulty of solving a wide range of practical optimization problems.
To solve optimization problems, these methods apply probabilistic rules and approximate the optimal solution using a random population in the search space, which makes them more flexible in finding better solutions than deterministic methods. Inspired by biological phenomena and human characteristics, several authors have developed a variety of population-based optimization techniques to address complex optimization problems; in terms of the inspiring source, these can be broadly classified into three categories. Swarm intelligence algorithms mimic the behaviours or intelligence of animals or plants in nature, such as artificial bee colony (ABC) [7], ant colony optimization (ACO) [18], grey wolf optimization (GWO) [60], particle swarm optimization (PSO) [42], whale optimization algorithm (WOA) [59], bat algorithm (BA) [85], cuckoo search (CS) [86], jellyfish search (JS) [16], mayfly algorithm (MA) [92], and salp swarm algorithm (SSA) [58]. Evolutionary algorithms mimic the mechanism of biological evolution, such as genetic algorithm (GA) [29], differential evolution (DE) [73], biogeography-based optimization (BBO) [71], and evolutionary programming (EP) [88]. Human-related algorithms are inspired by human nature or activities, such as passing vehicle search (PVS) [69], sine cosine algorithm (SCA) [57], teaching-learning based optimization (TLBO) [65], and coronavirus herd immunity optimizer (CHIO) [4].
According to the "No Free Lunch" (NFL) theorem [83], no MHA is best fitted to solve all optimization problems. A particular algorithm may give efficient solutions for some optimization problems but fail to perform well on another set of problems. Thus, no MHA is perfect, and its limitations affect its performance. Therefore, NFL provokes researchers to develop new MHAs or to upgrade existing methods for solving a wider range of complex optimization problems (COPs). Among all such strategies, the hybridization of two algorithms is a remarkable choice for upgrading an existing algorithm and overcoming its shortcomings. In this process, two different operators are merged to obtain better solutions. For example, an improved HHO method named HHO-DE, based on hybridization with the DE algorithm, was proposed for the multilevel image thresholding task by Bao et al. [11]. Later, Ibrahim et al. [38] presented a hybrid optimization method that combines the salp swarm algorithm (SSA) with particle swarm optimization for solving the feature selection problem. Again, in 2020, a hybrid method GNNA combining grey wolf optimization (GWO) and neural network algorithm (NNA) was proposed by Zhang [94]. Note that all of the above cited hybrid algorithms have been shown to be more competitive than the corresponding original methods.
Considering the efficiency of hybrid methods, this paper introduces a new hybrid algorithm based on SSA and TLBO to solve different kinds of reliability optimization problems. Mirjalili et al. (2017) [58] first proposed the salp swarm algorithm (SSA), an innovative population-based optimization method that mimics the swarming behaviour of salps. Important characteristics like a simple structure, robustness, and scalability make SSA an efficient method for solving various kinds of real-world problems (e.g., engineering design and optimization [80], feature selection [38], job shop scheduling [75], the optimal power flow problem [22], parameter optimization of power system stabilizers [21], power generation [68], image segmentation [39, 84], parameter estimation for soil water retention curves [95], PID controllers for AVR systems [20], and target localization [54]).
Also, SSA shows the following outstanding features: (1) it can be easily applied to different optimization problems without adjusting any parameters except the population size and stopping criterion, and it is worth mentioning that these parameters are essential for all MHAs; (2) it has a powerful neighbourhood search ability and can easily be fitted to wide search spaces [8]. Therefore, these advantages make SSA an efficient technique, and a rapid growth of SSA studies has been noticed recently. Despite this efficiency, the basic SSA has some major drawbacks in solving certain optimization problems. Firstly, the SSA algorithm suffers from local optima stagnation. Secondly, SSA is adequate in exploration, but its lack of exploitation forces an improper balance between exploration and exploitation. Finally, SSA has a poor convergence tendency and sometimes needs more time to evaluate a new solution for some optimization problems. To address these issues, researchers have applied different search mechanisms and adopted modified operators to upgrade the original SSA. To mention a few: Gupta et al. [30] introduced a new variant of the SSA called m-SSA. In this work, two different search strategies, Lévy-flight search and opposition-based learning, are utilized to increase the convergence speed and to establish an appropriate balance between exploration and exploitation. Abasi et al. [1] proposed a new hybrid algorithm named H-SSA, combining the SSA and the β-hill climbing algorithm (βHC) to enhance the convergence speed as well as the local searching ability of the conventional SSA. A new type of improved SSA based on an inertia-weight search mechanism was introduced by Hegazy et al. [33] to maximize reliability, accuracy, and convergence speed.
Kassaymeh et al. [41] embedded a backpropagation neural network (BPNN) in the salp swarm algorithm (SSA) to solve the software fault prediction (SFP) problem, which helps to enhance the prediction accuracy. Sayed et al. [70] proposed a new hybrid algorithm CSSA based on chaos theory and the basic SSA. Here, ten different types of chaotic maps are utilized to maximize the convergence speed and obtain more accurate optimization results. A simplex-method-based salp swarm algorithm was introduced by Wang et al. (2018) [80] that improves the local searching ability of the algorithm. To maintain a proper balance between exploration and exploitation, Wu et al. [82] introduced an improved SSA based on a dynamic weight factor and an adaptive mutation searching strategy. Further, Tubishat et al. [77] proposed an improved version of SSA based on the concepts of opposition-based learning and a new local search strategy; these two improvements help to enhance the exploitation capability of SSA. Singh et al. (2020) [72] developed a hybrid algorithm named HSSASCA that combines the sine-cosine algorithm (SCA) and the SSA to improve the convergence efficiency in both local and global search. Also, an enriched review of recent variants of SSA and its applications has been given by Abualigah et al. [3]. Apart from the SSA, inspired by the conventional teaching circumstances of the classroom, Rao et al. (2011) [65] proposed another significant algorithm for solving optimization problems, named TLBO. Teaching and learning are two common human social behaviours and also an important motivating process in which an individual tries to learn from others; a regular classroom teaching-learning environment is a motivational process that allows students to improve their cognitive levels. Since its appearance, several researchers [2, 12, 13, 15, 36, 62, 80, 87] have used this algorithm to solve real-world optimization problems.
However, in order to solve large, complex global optimization problems, TLBO often falls into a local optimum. The main advantages of the TLBO algorithm are that, without any effort spent calibrating initial parameters, it achieves fast convergence, and its computational complexity compares favourably with several existing algorithms like GA, ABC, CS, SCA and PSO. Motivated by the advantages of SSA and TLBO, a hybrid algorithm called HSSATLBO has been developed in this paper. Generally, searching processes of a similar nature may lead to a loss of diversity in the search space and a chance of getting trapped in a local optimum, but the different searching techniques of two different algorithms can maximize the capacity to escape from a local optimum. In this algorithm, the basic structure of the SSA has been renovated by embedding the features of the TLBO. In this context, a probabilistic selection strategy is defined, which determines whether the basic SSA or the TLBO is applied to construct a new solution. In the searching process, TLBO helps to accelerate the convergence speed of HSSATLBO, whereas the excellent global exploration ability of SSA helps to find a better global optimal solution. Therefore, in the search process of HSSATLBO, TLBO aims at the local search and SSA accentuates the global search, which helps to maintain a convenient balance between exploration and exploitation and produces efficient and effective results for solving RRAP. Accordingly, this study introduces a population diversity definition for the proposed method and performs an exploration-exploitation evaluation to investigate the search behaviour of both HSSATLBO and the conventional SSA. To reduce the computational complexity and improve the searching abilities, HSSATLBO can reduce its population when the diversity becomes too low.
The measurement of exploration and exploitation also helps to identify why the proposed HSSATLBO performs better on an optimization problem. Keeping all of the above points in mind, the basic objective of this study is to present an efficient and effective algorithm to solve various types of reliability optimization problems. The main contributions of the paper are listed as follows: A hybrid algorithm HSSATLBO is developed by combining the features of the SSA and TLBO algorithms. The proposed algorithm mainly retains the structure of the basic SSA and, meanwhile, has been reconstructed by embedding the searching strategy of TLBO. The proposed method strikes a proper balance between exploration and exploitation, in which the basic SSA looks after the exploration part and the searching strategy of TLBO increases the exploitation capability of the algorithm. Again, to generate a new solution, a new probabilistic selection strategy is introduced to determine whether to apply the original SSA or the TLBO algorithm. To validate the effectiveness and efficiency of the HSSATLBO algorithm, it is examined against seven well-known reliability redundancy optimization problems [9, 14, 23, 24, 28, 36, 43, 51, 56, 63, 78, 81, 89]. For a fair comparison, the test problems are also examined with the conventional SSA and its three different variants (LSSA, CSSA and GSSA), and a comparative study between HSSATLBO and these three variants of SSA is also performed in this study. To determine whether the results are statistically significant, a number of tests have been carried out on the results obtained from the proposed and existing algorithms, such as rank ties, the Wilcoxon signed-rank test, the Kruskal-Wallis test, and multiple comparison tests. From these computed results, it is verified that the proposed algorithm produces effective results and delivers superior performance compared to other existing algorithms in terms of the best optimal solutions.
The rest of the paper is organized as follows: Section 2 briefly describes the traditional SSA, three variants of SSA (i.e., LSSA, CSSA & GSSA), and the TLBO algorithm. Section 3 presents the proposed algorithm HSSATLBO, a probabilistic selection procedure, and the exploration-exploitation measurement. The reliability redundancy allocation problems are described in Section 4. Section 5 presents the computational results of the proposed HSSATLBO algorithm and compares them with several existing algorithms. The results obtained have also been validated through statistical test analysis. Finally, Section 6 concludes the paper.

Basic algorithms

In this section, the basic SSA, three improved variants of SSA, and the conventional TLBO algorithm are briefly described.

Salp swarm algorithm (SSA)

The salp swarm algorithm (SSA) is a population-based algorithm recently developed by Mirjalili et al. (2017) [58] to solve numerous kinds of optimization problems. A salp is a marine creature belonging to the family Salpidae, with a thin, barrel-shaped body with openings at the ends through which water is pumped as propulsion to move forward. The behaviour of these marine creatures that is of interest in this paper is their swarming behaviour: in oceans, salps form a swarm called a salp chain, which may support the salps in exploring and allow better movement through fast coordinated changes. Based on this conduct, a mathematical model of the salp chain was designed by the authors and examined on optimization problems. Firstly, the population of salps is divided into two groups: leaders and followers. The first salp of the chain is known as the leader, and the rest are called the followers. The positions of all N salps are stored in a two-dimensional matrix X given in equation (1). The salps search for a food source F that represents the target of the swarm. The salp with the best fitness (i.e., the leader) is found by calculating the fitness value of each salp. The position of the leader is updated regularly using equation (2):

x_j^1 = F_j + D1((ub_j − lb_j)D2 + lb_j),  if D3 ≥ 0.5
x_j^1 = F_j − D1((ub_j − lb_j)D2 + lb_j),  if D3 < 0.5        (2)

where x_j^1 is the position of the first salp (leader) in the j-th dimension and F_j is the food position in the j-th dimension. lb_j and ub_j represent the lower and upper bounds of the j-th dimension respectively, and D1, D2 and D3 are random numbers lying between 0 and 1. Equation (2) shows that the leader only updates its position with respect to the food source. Here, D1 is a very important parameter in this algorithm, as it plays a vital role in balancing the exploration and exploitation phases, and it is determined by equation (3):

D1 = 2 e^(−(4·it/T)^2)        (3)

where T and it represent the maximum number of iterations and the current iteration respectively.
The parameters D2 and D3 control the direction and the step size of the next position in the j-th dimension. After updating the leader's position, the followers' positions are updated using Newton's law of motion, equation (4):

x_j^i = (1/2) a t^2 + μ0 t        (4)

where i ≥ 2, x_j^i is the position of the i-th follower salp in the j-th dimension, μ0 is the initial speed, t is the time, and a = v_final/μ0 with v = (x − x_0)/t. In optimization, the time indicates the iteration, so the disparity between iterations is taken as 1 and, taking μ0 = 0, the following equation (5) is applied:

x_j^i = (x_j^i + x_j^(i−1))/2        (5)

It may happen that some salps leave the search space, but, using equation (6), they can be brought back into it. The detailed steps of the basic SSA are explained in Algorithm 1.
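The update rules above can be sketched in a few lines of Python. This is a minimal illustrative implementation of the basic SSA for minimization, not the authors' code; the D3 ≥ 0.5 branching convention and the greedy tracking of the food source are common readings of the original algorithm.

```python
import numpy as np

def ssa(obj, lb, ub, n_salps=30, max_iter=200, seed=0):
    """Basic salp swarm algorithm sketch for minimization (Mirjalili et al., 2017)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(n_salps, dim))    # eq. (1): salp positions
    fit = np.apply_along_axis(obj, 1, X)
    best = X[int(fit.argmin())].copy()              # food source F
    best_fit = float(fit.min())
    for it in range(1, max_iter + 1):
        D1 = 2 * np.exp(-(4 * it / max_iter) ** 2)  # eq. (3): balance coefficient
        for i in range(n_salps):
            if i == 0:                              # leader update, eq. (2)
                D2, D3 = rng.random(dim), rng.random(dim)
                step = D1 * ((ub - lb) * D2 + lb)
                X[i] = np.where(D3 >= 0.5, best + step, best - step)
            else:                                   # follower update, eq. (5)
                X[i] = (X[i] + X[i - 1]) / 2
            X[i] = np.clip(X[i], lb, ub)            # eq. (6): bring salps back in bounds
            f = obj(X[i])
            if f < best_fit:                        # track the food source greedily
                best_fit, best = float(f), X[i].copy()
    return best, best_fit
```

On a simple 2-D sphere function the chain quickly contracts around the best position as D1 decays toward zero.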

Improved variants of SSA with mutation strategies

Although the conventional SSA is a highly competitive and effective method for solving different kinds of complex optimization problems, it may become trapped in local optima and also suffers from an improper balance between exploration and exploitation, which leads to slow convergence. To overcome this difficulty and to explore the solution space more adequately, Nautiyal et al. (2021) [61] introduced improved variants of SSA named LSSA, CSSA, and GSSA, using three new mutation operators (Lévy-flight-based mutation, Cauchy mutation, and Gaussian mutation, respectively) to enhance the overall performance of SSA. These different mutation operators make the algorithm more competent in exploring and exploiting the search space. In these improved SSAs, the mutation scheme is performed after a greedy selection is completed between the two consecutive positions of each salp at iteration t and iteration (t + 1), as given by (7). After the completion of the greedy selection for each salp, the mutation strategy is performed with a mutation rate m; in this study, the value of m is taken as 0.7. During this process, the fitness value of each newly generated mutated salp is compared with that of the original salp; if found better, it replaces the original salp, and otherwise it is discarded. The detailed pseudo-code of the mutation-based SSA is presented in Algorithm 2.

I. Lévy flight based SSA (LSSA):

The concept of Lévy-flight-based mutation is used to increase salp diversity in the SSA. When the mutation rate allows, Lévy mutation can improve the global search ability more adequately by mutating the salps. Each mutated salp in the LSSA is generated using (8), where LF(δ) corresponds to a Lévy-distributed random number with variable size δ, which can be obtained using (9), where u and v follow standard normal distributions and β is a constant set to 1.5 by default.

II. Cauchy-SSA (CSSA):

In CSSA, a random number is generated, and if its value allows the new salps to be generated using the mutation scheme based on the mutation rate m, then each mutated salp of the swarm is generated using (10), where Cauchy(δ) is a random number generated from the Cauchy distribution function given by (11). The Cauchy density function is given by (12), where y is a uniformly distributed random number within (0,1) and η = 1 is a scale parameter.

III. Gaussian-SSA (GSSA):

In GSSA, the mutation follows (13), where Gaussian(δ) is a random number generated from the Gaussian distribution, whose density function is given by (14), where σ² is the variance for each salp. To generate random numbers, the above equation is simplified by taking the standard deviation σ as 1.
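The three mutation operators and the greedy replacement can be sketched as follows. The Lévy step uses Mantegna's algorithm with β = 1.5; since equations (8), (10) and (13) are not reproduced above, the exact way the random step perturbs a salp's position (here, added after scaling by the position itself) is an assumption, as is the function name `mutate`.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def levy_step(size, beta=1.5):
    """Lévy-distributed steps LF(δ) via Mantegna's algorithm (cf. eq. (9))."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)    # u ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, size)      # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)

def mutate(salp, kind, m=0.7):
    """Mutate a salp with rate m using one of the three operators."""
    if rng.random() >= m:
        return salp.copy()                       # mutation skipped this time
    if kind == "levy":
        step = levy_step(salp.size)              # LSSA
    elif kind == "cauchy":
        step = rng.standard_cauchy(salp.size)    # CSSA, scale η = 1
    else:
        step = rng.normal(0.0, 1.0, salp.size)   # GSSA, σ = 1
    return salp + step * salp

def greedy_select(salp, mutant, obj):
    """Keep the mutant only if it improves the fitness (minimization)."""
    return mutant if obj(mutant) < obj(salp) else salp
```

Heavier-tailed steps (Lévy, Cauchy) favour occasional long jumps out of local optima, while Gaussian mutation performs finer local perturbation.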

Teaching-Learning Based Optimization (TLBO)

Rao et al. (2011) [65] first developed TLBO, a population-based algorithm that reproduces the conventional teaching-learning aspects of a classroom. In TLBO, a group of learners is recognised as the population, and the various subjects taught to the learners represent the different design variables. The fitness value indicates a student's grade after learning, and the student with the best fitness value is regarded as the teacher. The algorithm describes two basic modes of learning: (1) through the teacher (known as the teacher phase) and (2) interacting with the other learners (known as the learner phase). The working procedure of the TLBO algorithm is explained below.

In the teacher phase, assume that at any iteration t the number of subjects or courses offered to the learners is d and N denotes the population size (i.e., the number of learners). In this phase, the basic intention of a teacher is to transfer knowledge to the students and to improve the average result of the class. The parameter Mean_j(t) indicates the mean result of the learners in subject j (j = 1,2,…,d) at generation t and is given by (15). X_teacher(t) indicates the learner with the best objective function value at iteration t and is recognised as the teacher. The teacher gives his or her maximum effort to increase the knowledge of each student in the class, but learners gain knowledge according to their talent and the quality of the teaching. The difference vector between the teacher and the average result of the students is then calculated by equation (16):

Diff_j(t) = rand · (X_teacher,j(t) − T_F · Mean_j(t))        (16)

where rand indicates a random number lying between 0 and 1 and T_F denotes the teaching factor, whose value is decided randomly as given in equation (17):

T_F = round(1 + rand)        (17)

The existing solution X_i(t) is now updated in the teacher phase, and the updated solution is given by equation (18):

X_new,j(t) = X_i,j(t) + Diff_j(t)        (18)

If the new learner X_new(t) at generation t is found to be better than X_i(t), it replaces X_i(t); otherwise, the previous solution is kept.
Interaction with other students is an effective way for learners to enhance their knowledge, since a learner can gain new information from other learners having more knowledge than him or her. In the learner phase, a student X_i randomly selects a classmate X_k (k ≠ i) to obtain more knowledge. If X_k performs better than X_i, X_i moves towards X_k; otherwise, X_i moves away from it. The following formulas (19) and (20) describe this process (stated here for a minimization problem):

X_new = X_i + rand · (X_i − X_k),  if f(X_i) < f(X_k)        (19)
X_new = X_i + rand · (X_k − X_i),  otherwise        (20)

where f(X_i) and f(X_k) are the fitness values of X_i and X_k respectively. The pseudocode of the basic TLBO is given in Algorithm 3.
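The two phases translate into a compact Python sketch for a minimization problem; this is an illustrative implementation of standard TLBO with greedy acceptance after each phase, not the authors' code, and the name `tlbo` is our own.

```python
import numpy as np

def tlbo(obj, lb, ub, n_learners=30, max_iter=200, seed=0):
    """Basic TLBO sketch for minimization (Rao et al., 2011)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    X = rng.uniform(lb, ub, size=(n_learners, d))   # learners; columns are subjects
    fit = np.apply_along_axis(obj, 1, X)
    for _ in range(max_iter):
        teacher = X[int(fit.argmin())].copy()       # best learner acts as the teacher
        mean = X.mean(axis=0)                       # eq. (15): class mean per subject
        for i in range(n_learners):
            # teacher phase, eqs. (16)-(18)
            TF = rng.integers(1, 3)                 # teaching factor: 1 or 2 (eq. (17))
            new = np.clip(X[i] + rng.random(d) * (teacher - TF * mean), lb, ub)
            f = obj(new)
            if f < fit[i]:                          # greedy acceptance
                X[i], fit[i] = new, f
            # learner phase, eqs. (19)-(20)
            k = int(rng.integers(n_learners))
            while k == i:
                k = int(rng.integers(n_learners))
            if fit[i] < fit[k]:                     # X_i already better: move away from X_k
                new = X[i] + rng.random(d) * (X[i] - X[k])
            else:                                   # otherwise move towards the better X_k
                new = X[i] + rng.random(d) * (X[k] - X[i])
            new = np.clip(new, lb, ub)
            f = obj(new)
            if f < fit[i]:
                X[i], fit[i] = new, f
    i = int(fit.argmin())
    return X[i].copy(), float(fit[i])
```

Note the parameter-light design: apart from population size and iteration budget, nothing needs tuning.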

Proposed method

Probabilistic selection procedure

It is very important for a well-organized and well-designed meta-heuristic optimization algorithm to maintain a proper balance between exploration and exploitation. Therefore, in this study, a hybrid algorithm HSSATLBO is introduced by modifying the basic structure of SSA. A probabilistic selection parameter (PSP) is implemented in the proposed algorithm to decide whether to apply the search equation of SSA (Algorithm 1) or TLBO (Algorithm 3) to generate the new solution. The parameter PSP is given by equation (21):

PSP = PSP_max − (PSP_max − PSP_min) · iter/MaxIter        (21)

where PSP_min and PSP_max denote the minimum and maximum values of the parameter PSP respectively, MaxIter indicates the maximum number of generations, and iter is the current generation. Equation (21) shows that at the early stage of the iterations the value of PSP is large, which forces the algorithm to choose the search equation of SSA; as the number of iterations increases, the probability of selecting the search equation of TLBO also increases. In this study, PSP_min and PSP_max take the values 0.3 and 0.9 respectively, i.e., the value of the parameter PSP decreases within the interval [0.3, 0.9].
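Assuming the selection parameter decreases linearly from PSP_max to PSP_min over the run (a reading consistent with the behaviour described above, since the exact form of equation (21) is not reproduced here), it can be sketched as:

```python
def psp(it, max_iter, psp_min=0.3, psp_max=0.9):
    """Probabilistic selection parameter: large early (favours the SSA update),
    small late (favours the TLBO update). Assumed linear decrease."""
    return psp_max - (psp_max - psp_min) * it / max_iter
```

A uniform random draw below `psp(it, max_iter)` selects the SSA search equation; otherwise the TLBO search equation is used.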

The proposed HSSATLBO

The framework of the proposed algorithm HSSATLBO, which combines the SSA and the TLBO algorithm, is demonstrated in this section. The conventional SSA shows excellent efficiency in exploration but suffers from poor exploitation; as a result, it fails to manage a convenient balance between exploitation and exploration and, most of the time, cannot generate a globally optimal solution. To avoid this situation, the updating phase of the salp positions is enhanced by reconstructing the basic formation of the SSA. During this modification, the searching mechanism of TLBO is implemented within the main structure of the SSA. The TLBO algorithm's fast convergence speed and favourable computational complexity compared with several existing algorithms make it an exceptional search algorithm. Thus, the inclusion of TLBO adds more flexibility to the SSA, and subsequently the exploration and exploitation abilities of the SSA algorithm are improved. The detailed framework of the HSSATLBO algorithm is presented in Fig. 1. The first step in HSSATLBO is to initialize the parameters for both the SSA and TLBO and to generate a random population that represents a set of salp positions. Then the fitness value for each solution is computed to evaluate its performance, and the best one is determined. After that, the current positions of both the leader and the followers are updated either by the searching technique of SSA or by that of TLBO, depending on the probabilistic selection parameter (PSP) (Section 3.1). This parameter is designed to control the probability of selecting between the two searching strategies: if a random number in [0,1] is less than PSP, then the SSA is used for updating the current salp positions; otherwise, the TLBO is used.
After that, the fitness of the current population is evaluated, the current best solution is compared with the previous best fitness value, and the best solution is updated accordingly. This procedure continues until the stopping criterion is satisfied; for the proposed HSSATLBO algorithm, the maximum iteration number is considered the stopping criterion.
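A compact sketch of an HSSATLBO-style loop is given below. It follows the outline of Fig. 1 (an SSA-style or TLBO-style update chosen per salp by a decreasing PSP, then greedy acceptance), but the exact ordering of steps and the form of equation (21) in the authors' implementation may differ; the linear PSP decrease and the function name `hssatlbo` are assumptions.

```python
import numpy as np

def hssatlbo(obj, lb, ub, n=30, max_iter=300, psp_min=0.3, psp_max=0.9, seed=0):
    """HSSATLBO-style hybrid sketch (minimization): each salp is updated either by
    an SSA-style move or by a TLBO teacher-phase move, chosen via the PSP."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    X = rng.uniform(lb, ub, (n, d))                           # initial salp positions
    fit = np.apply_along_axis(obj, 1, X)
    b = int(fit.argmin())
    best, best_fit = X[b].copy(), float(fit[b])               # food source / global best
    for it in range(1, max_iter + 1):
        psp = psp_max - (psp_max - psp_min) * it / max_iter   # selection probability
        D1 = 2 * np.exp(-(4 * it / max_iter) ** 2)            # SSA balance coefficient
        teacher = X[int(fit.argmin())].copy()                 # TLBO teacher
        mean = X.mean(axis=0)                                 # TLBO class mean
        for i in range(n):
            if rng.random() < psp:                            # SSA-style update
                if i == 0:                                    # leader moves around the food
                    step = D1 * ((ub - lb) * rng.random(d) + lb)
                    new = np.where(rng.random(d) >= 0.5, best + step, best - step)
                else:                                         # follower averages with neighbour
                    new = (X[i] + X[i - 1]) / 2
            else:                                             # TLBO teacher-phase update
                TF = rng.integers(1, 3)                       # teaching factor: 1 or 2
                new = X[i] + rng.random(d) * (teacher - TF * mean)
            new = np.clip(new, lb, ub)
            f = obj(new)
            if f < fit[i]:                                    # greedy acceptance
                X[i], fit[i] = new, f
                if f < best_fit:
                    best, best_fit = new.copy(), f
    return best, best_fit
```

Early iterations are dominated by the exploratory SSA moves; late iterations lean on the exploitative TLBO moves, which matches the intent described above.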
Fig. 1

The framework of HSSATLBO


Exploration and exploitation measurement

In this study, an in-depth empirical analysis is performed to examine the searching behaviour of the proposed HSSATLBO in terms of diversity. Through diversity measurement, it is possible to measure the explorative and exploitative capabilities of the algorithm. In the exploration phase, the differences between the values of each of the D dimensions within the population expand, and hence the swarm individuals are scattered in the search space. On the other hand, in the exploitation phase, the differences reduce and the swarm individuals cluster in a dense area. These two concepts are ubiquitous in any MHA. In searching for the globally optimal location, the exploration phase maximizes the efficiency of visiting unseen neighbourhoods of the search space; contrarily, through exploitation, an algorithm can successfully converge to a neighbourhood with a high possibility of containing the global optimal solution. A proper balance between these two abilities is a trade-off problem. For a better illustration of the exploration and exploitation concepts, see Fig. 2. According to Hussain [37], the diversity of the population is measured mathematically using equations (22) and (23):

Div_j = (1/N) Σ_{i=1..N} |median(x_j) − x_{ij}|        (22)

Div = (1/D) Σ_{j=1..D} Div_j        (23)

where x_{ij} denotes the j-th dimension of the i-th swarm individual in the population of size N at iteration t, median(x_j) is the median of dimension j, Div_j indicates the diversity in the j-th dimension, and Div is the average diversity over all dimensions.
After determining the population diversity Div for all iterations, it is possible to calculate the exploration and exploitation percentage ratios during the search process using equations (24) and (25) respectively:

Expl(%) = (Div / Div_max) × 100        (24)

Expt(%) = (|Div − Div_max| / Div_max) × 100        (25)

where Expl(%) and Expt(%) denote the exploration and exploitation percentages respectively for iteration t, and Div_max is the maximum population diversity over all iterations (T). The MATLAB code for measuring population diversity and exploration-exploitation for MHAs has been made publicly available at https://github.com/usitsoft/Exploration-Exploitation-Measurement.
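Equations (22)-(25) translate directly into a few lines of Python; this is a sketch of the idea behind the linked MATLAB code, not a port of that code itself.

```python
import numpy as np

def diversity(X):
    """Population diversity, eqs. (22)-(23): mean absolute deviation from the
    per-dimension median, averaged over all N individuals and D dimensions."""
    med = np.median(X, axis=0)             # median of each dimension
    return float(np.mean(np.abs(X - med)))

def expl_expt(div_history):
    """Exploration/exploitation percentages per iteration, eqs. (24)-(25)."""
    div = np.asarray(div_history, float)
    div_max = div.max()                    # maximum diversity over all iterations
    expl = 100.0 * div / div_max
    expt = 100.0 * np.abs(div - div_max) / div_max
    return expl, expt
```

Because Div never exceeds Div_max, the two percentages sum to 100 at every iteration, which is what makes them interpretable as a split of the search effort.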
Fig. 2

Candidate population representation for exploration-exploitation


Problem formulation

Reliability-redundancy allocation problem

The requirement of reliability analysis to evaluate the performance of products, equipment, and several engineering systems is increasing day by day. Reliability optimization can address these issues and is capable of finding high-quality products and equipment that perform efficiently and safely over a given period. In this section, seven reliability optimization problems are discussed to examine the performance of the HSSATLBO algorithm. The general form of the reliability-redundancy allocation problem is

Maximize R_s = f(r, n)
subject to g_k(r, n) ≤ b_k,  k = 1,2,…,m,
0 ≤ r_i ≤ 1,  n_i ∈ Z+.

The goal of the problem is to maximize the system reliability R_s by computing the number of redundant components n_i and the component reliabilities r_i in each subsystem.

Series system [Fig. 3(a)] [6, 9, 23, 27, 31, 35, 36, 43, 81, 89, 91, 93]

The series system is a non-linear mixed-integer programming problem whose formulation is given as follows, where the parameters β_i and α_i are physical features of the system components, and the constraints g1(r,n), g2(r,n), and g3(r,n) represent the volume, cost, and weight constraints respectively. The coefficients of the series system are shown in the literature (Garg, 2015a) [23] and in Table 1.
Table 1

Values of parameters used in the literature

i    10^5·αi    βi    vi    wi
Parameters used for 4.1.1 and 4.1.2 (C = 175, V = 110, W = 200)
1    2.330    1.5    1    7
2    1.450    1.5    2    8
3    0.541    1.5    3    8
4    8.050    1.5    4    6
5    1.950    1.5    2    9
Parameters used for 4.1.3 (C = 175, V = 180, W = 100)
1    2.500    1.5    2    3.5
2    1.450    1.5    4    4.0
3    0.541    1.5    5    4.0
4    0.541    1.5    8    3.5
5    2.100    1.5    4    3.5
Parameters used for 4.1.4 (C = 400, V = 250, W = 500)
1    1.0    1.5    1    6
2    2.3    1.5    2    6
3    0.3    1.5    3    8
4    2.3    1.5    2    7
Fig. 3  The schematic diagram of (a) series system, (b) complex (bridge) system, (c) series-parallel system, and (d) overspeed protection system for a gas turbine
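As a concrete illustration, the series-system objective and constraints with the Table 1 coefficients for 4.1.1 can be coded as follows. The operating time T = 1000 hours is an assumption commonly made in this literature, and the cost model α_i(−T/ln r_i)^β(n_i + e^(n_i/4)) follows the standard RRAP formulation rather than the equations omitted above; all function names are illustrative.

```python
import math

# Series-system RRAP sketch with the literature coefficients (Table 1, 4.1.1).
ALPHA = [2.330e-5, 1.450e-5, 0.541e-5, 8.050e-5, 1.950e-5]
BETA = 1.5
V_COEF = [1, 2, 3, 4, 2]
W_COEF = [7, 8, 8, 6, 9]
V_MAX, C_MAX, W_MAX = 110, 175, 200
T_OP = 1000  # operating time in hours (assumed, as is standard in this literature)

def system_reliability(r, n):
    """R_s = prod_i (1 - (1 - r_i)^n_i) for the series system."""
    return math.prod(1 - (1 - ri) ** ni for ri, ni in zip(r, n))

def feasible(r, n):
    """g1 (volume), g2 (cost), g3 (weight); all must stay within their limits."""
    g1 = sum(v * ni ** 2 for v, ni in zip(V_COEF, n))
    g2 = sum(a * (-T_OP / math.log(ri)) ** BETA * (ni + math.exp(ni / 4))
             for a, ri, ni in zip(ALPHA, r, n))
    g3 = sum(w * ni * math.exp(ni / 4) for w, ni in zip(W_COEF, n))
    return g1 <= V_MAX and g2 <= C_MAX and g3 <= W_MAX
```

For example, r_i = 0.8 and n_i = 2 for all five subsystems is a feasible design with R_s = 0.96^5 ≈ 0.8154, far below the best values reported in the literature, which illustrates why the joint optimization over r and n is worthwhile.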

Complex (bridge) system [Fig. 3(b)] [6, 9, 23, 25, 27, 31, 35, 36, 43, 63, 81, 89, 91, 93, 96, 97]

The complex (bridge) system consists of five subsystems and its formulation is as follows, subject to the same constraints given by equations (27), (28) and (29) respectively, with 0.5 ≤ r_i ≤ 1, 1 ≤ n_i ≤ 5, n_i ∈ Z+, i = 1, 2, ..., 5. The coefficients of the complex system are given in the literature (Garg, 2015a) [23] and Table 1.
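Assuming the standard five-component bridge-network decomposition (component 5 as the bridge link) with subsystem reliabilities R_i = 1 − (1 − r_i)^{n_i} — a sketch, since the system-reliability equation did not survive extraction — the best HSSATLBO solution reported in Table 7 can be verified as follows:

```python
def bridge_reliability(R):
    # Standard bridge-network formula; R = (R1, R2, R3, R4, R5), R5 the bridge link.
    R1, R2, R3, R4, R5 = R
    return (R1 * R2 + R3 * R4 + R1 * R4 * R5 + R2 * R3 * R5
            - R1 * R2 * R3 * R4 - R1 * R2 * R3 * R5 - R1 * R2 * R4 * R5
            - R1 * R3 * R4 * R5 - R2 * R3 * R4 * R5
            + 2 * R1 * R2 * R3 * R4 * R5)

def subsystem_reliabilities(r, n):
    # Parallel redundancy inside each subsystem: R_i = 1 - (1 - r_i)^n_i
    return [1.0 - (1.0 - ri) ** ni for ri, ni in zip(r, n)]

# Best HSSATLBO solution reported in Table 7
r_best = [0.8280051677, 0.8578130972, 0.9142533044, 0.6482662731, 0.7038807118]
n_best = [3, 3, 2, 4, 1]
```

Plugging the reported (r, n) in recovers RS ≈ 0.99988964, matching Table 7 to the printed precision.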

Series-Parallel System [Fig. 3(c)] [23, 25, 27, 31, 32, 35, 36, 40, 43, 51, 53, 63, 78, 79, 81, 89, 91]

The mathematical formulation is as follows, subject to the same constraints given by equations (27), (28) and (29) respectively, with 0.5 ≤ r_i ≤ 1, 1 ≤ n_i ≤ 5, n_i ∈ Z+, i = 1, 2, ..., 5. The coefficients of the series-parallel system are given in the literature (Garg, 2015a) [23] and Table 1.

Overspeed protection system for a gas turbine [Fig. 3(d)] [6, 19, 23, 24, 35, 36, 43, 51, 53, 63, 81, 93, 98]

This reliability problem is formulated as follows, where the coefficients of the overspeed protection system are given in the literature (Garg, 2015a) [23] and Table 1.

Convex quadratic reliability problem [28, 50, 63, 91]

The mathematical formulation of this problem is as follows, with n_i ∈ [1, 6], i = 1, 2, ..., 10 and j = 1, 2, 3, 4. The parameters r, a and C are generated from uniform distributions over [0.80, 0.99], [0, 10] and [0, 10] respectively. A randomly generated set of values for these coefficients is given as follows:

Mixed series-parallel system [28, 50, 63, 91]

The mathematical formulation of this problem is as follows: The coefficients of the mixed series-parallel system are taken from the literature (Gen et al., 1999) [26] and are listed in Table 2.
Table 2

Parameter used for 4.1.6

i    1     2     3     4     5     6     7     8     9     10    11    12    13    14    15
ri   0.90  0.75  0.65  0.80  0.85  0.93  0.78  0.66  0.78  0.91  0.79  0.77  0.67  0.79  0.67
ci   5     4     9     7     7     5     6     9     4     5     6     7     9     8     6
wi   8     9     6     7     8     8     9     6     7     8     9     7     6     5     7

Large-scale system reliability problem [27, 28, 63, 78, 79, 81, 97]

The mathematical formulation of this problem is as follows. Here, l indicates the lower bound of n. The parameter 𝜃 indicates the tolerance error, taken as 33% of the minimum requirement of each available resource at the lower bound l. The available resource for each constraint is then set from the average minimum resource requirement of a reliability system with m subsystems. In this way, the available system resources (Zou et al., 2010) [97] are set for reliability systems with 36, 38, 40, 42, and 50 subsystems, respectively, as shown in Tables 3 and 4.
Table 3

Available system resources for each system for 4.1.7

n    b1    b2    b3     b4
36   391   257   738    1454
38   416   278   778    1532
40   435   289   823    1621
42   458   306   870    1712
50   543   352   1040   2048
Table 4

Constant coefficients for 4.1.7

i    1−ri    αi   βi   γi   δi
1    0.005   8    4    13   26
2    0.026   10   4    16   32
3    0.035   10   4    12   23
4    0.029   6    3    12   24
5    0.032   7    1    13   26
6    0.003   10   4    16   31
7    0.020   9    2    19   38
8    0.018   9    3    15   29
9    0.004   7    4    12   23
10   0.038   6    4    16   31
11   0.028   6    5    14   28
12   0.021   10   3    15   30
13   0.039   9    1    17   34
14   0.013   10   4    20   39
15   0.038   7    4    14   28
16   0.037   10   2    13   25
17   0.021   10   1    15   29
18   0.023   8    3    19   38
19   0.027   10   5    18   36
20   0.028   7    4    13   26
21   0.030   6    2    15   30
22   0.027   6    2    12   24
23   0.018   7    2    20   40
24   0.013   8    5    19   38
25   0.006   9    5    15   29
26   0.029   8    1    18   35
27   0.022   8    3    16   32
28   0.017   9    3    15   29
29   0.002   10   1    18   35
30   0.031   9    2    19   37
31   0.021   7    5    15   28
32   0.023   9    5    11   22
33   0.030   6    3    15   29
34   0.026   7    3    14   27
35   0.009   6    5    15   29
36   0.019   10   5    17   33
37   0.005   9    5    19   37
38   0.019   10   5    11   22
39   0.002   6    2    17   34
40   0.015   8    3    17   33
41   0.023   10   5    17   33
42   0.040   8    3    18   35
43   0.012   8    1    18   35
44   0.026   6    4    19   38
45   0.038   6    4    13   26
46   0.015   8    1    19   37
47   0.036   7    4    14   28
48   0.032   10   2    19   37
49   0.038   8    3    15   30
50   0.013   10   2    11   22

Results & discussions

In this section, we present the results of all the above-mentioned reliability optimization problems obtained with the proposed HSSATLBO algorithm. This section is divided into six parts. Section 5.1 introduces the experiment settings, including parameter settings and the maximum possible improvement (MPI). Section 5.2 describes the results obtained by the proposed algorithm and compares its performance with a number of existing approaches that are presented in Table 5. The performance comparisons between HSSATLBO, SSA, and variants of SSA are presented in Section 5.3. A parameter sensitivity analysis for the parameter PSP is performed in Section 5.4. The performance in terms of population diversity and the exploration-exploitation measurement of HSSATLBO, SSA, and variants of SSA is described in Section 5.5. Finally, the statistical analysis of the proposed algorithm and all compared algorithms is illustrated in Section 5.6.
Table 5

Some existing meta-heuristic algorithms from the literature for solving reliability optimization problems (4.1.1 to 4.1.7)

No.  Algorithm      Method                                             Authors & published year
1.   SCa            Soft computing approach                            (Gen and Yun, 2006) [27]
2.   SAA            Simulated annealing algorithm                      (Kim et al., 2006) [43]
3.   GA             Genetic algorithm (GA)                             (Yokota et al., 1996) [91]
4.   IA             Immune based two-phase approach                    (Hsieh and You, 2011) [35]
5.   ABC1           Artificial bee colony algorithm                    (Yeh and Hsieh, 2011) [89]
6.   IPSO           Improved particle swarm optimization               (Wu et al., 2011) [81]
7.   CS1            Cuckoo search (CS) algorithm                       (Valian and Valian, 2013) [79]
8.   CS2            Cuckoo search algorithm                            (Garg, 2015a) [23]
9.   PSO/SSO/PSSO   Particle-based swarm optimization algorithm        (Huang, 2015) [36]
10.  ICS            Improved CS algorithm                              (Valian et al., 2013) [78]
11.  CS-GA          Hybrid CS and genetic algorithm                    (Kanagaraj et al., 2013) [40]
12.  ABC2           Artificial bee colony                              (Garg et al., 2013) [25]
13.  TS-DE          Hybrid TS-DE algorithm                             (Liu and Qin, 2014b) [52]
14.  INGHS          Improved novel global harmony search               (Ouyang et al., 2015) [63]
15.  MPSO           Modified particle swarm optimization               (Liu and Qin, 2014a) [51]
16.  EBBO           Efficient biogeography-based optimization          (Garg, 2015b) [24]
17.  EGHS           Effective global harmony search algorithm          (Zou et al., 2011) [96]
18.  NMDE           Novel modified DE                                  (Zou et al., 2011) [98]
19.  NGHS           Novel global HS algorithm                          (Zou et al., 2010) [97]
20.  CPSO           Co-evolutionary PSO                                (He and Wang, 2007a) [32]
21.  IABC           Improved ABC algorithm                             (Ghambari and Rahati, 2018) [28]
22.  NAFSA          Novel artificial fish swarm algorithm              (He et al., 2015) [31]
23.  MICA           Modified imperialist competitive algorithm         (Afonso et al., 2013) [6]
24.  GA-SRS         RRAP with cold-standby redundancy strategy         (Ardakan and Hamadani, 2014) [9]
25.  LXPM-IPSO-GS   IPSO-based hybrid approach                         (Zhang et al., 2013) [93]
26.  PSFSA          Penalty guided stochastic fractal search approach  (Mellal and Zio, 2016) [56]
27.  GA-PSO         Hybrid GA-PSO approach                             (Duan et al., 2010) [19]
28.  DE             DE algorithm combined with Levy flight             (Liu and Qin, 2015) [53]
29.  HDE            Hybrid DE algorithm                                (Liao, 2010) [50]
30.  NNA            Neural network algorithm                           (Sadollah et al., 2018) [67]
31.  HHO            Harris hawks optimization                          (Heidari et al., 2019) [34]
32.  SMA            Slime mould algorithm                              (Li et al., 2020) [49]
33.  SCA            Sine cosine algorithm                              (Mirjalili, 2016) [57]

Experiment settings

Parameter settings

The proposed algorithm is implemented in MATLAB (R2015a) on a personal laptop with an AMD Ryzen 3 2200U processor (Radeon Vega Mobile Gfx, 2.50 GHz) and 4.00 GB of RAM running Windows 10. The initial population size of ABC, NNA, TLBO, SSA, HHO, SMA, SCA and HSSATLBO was set to 100 for each, and the parameters of the compared algorithms were set as follows: ABC (maximum number of trials, i.e., limit = 100), NNA (modification factor, β = 1), TLBO (teaching factor, TF = 1 or 2), HHO (β = 1.5), SMA (control parameter, z = 0.03), SCA (parameter, a = 2). Due to the stochastic nature of meta-heuristic algorithms, results obtained in a single run may be unreliable. Therefore, 30 independent runs were performed for each of the applied algorithms (ABC, NNA, TLBO, SSA, HHO, SMA, SCA and HSSATLBO) on every reliability optimization problem. In each independent run, the maximum number of iterations for each algorithm was set to 300.

Maximum possible improvement (MPI)

For each reliability optimization problem, the system reliability is maximized by computing both the component reliability r and the number of redundant components n for each subsystem. During the computational procedure, the redundant components n are first treated as real variables; after the optimization process completes, the real values are rounded to their nearest integer values. In this study, we introduce the maximum possible improvement (MPI) index to evaluate the performance of HSSATLBO, expressed as MPI(%) = [R(HSSATLBO) − R(Others)] / [1 − R(Others)] × 100, (45) where R(HSSATLBO) denotes the best optimal solution obtained by the proposed algorithm and R(Others) the best result obtained by the other compared approach; a greater MPI indicates a greater improvement.
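The MPI index measures how much of the remaining reliability gap (1 − R) is closed by the proposed method; the one-liner below reproduces the MPI entries in Table 6 (e.g. the SCa row):

```python
def mpi(r_new, r_old):
    """Maximum possible improvement of r_new over r_old, as a percentage:
    the fraction of the remaining reliability gap (1 - r_old) that is closed."""
    return 100.0 * (r_new - r_old) / (1.0 - r_old)
```

For instance, comparing the HSSATLBO series-system result 0.93168238710 against the SCa result 0.931680 gives MPI ≈ 3.4940E-03%, matching Table 6.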

HSSATLBO comparison with existing optimizers

This section evaluates the performance of the proposed HSSATLBO in terms of the best solution and the maximum possible improvement value. The results obtained by the proposed algorithm are compared with those of the other existing optimizers, whose results are taken from their respective papers. The comparative analysis for the reliability problems is presented in Tables 6 to 11. For the series system (4.1.1), Table 6 shows that the best optimal solution obtained by the proposed method is 0.93168238710, which is preferable to all compared algorithms SCa (Gen & Yun, 2006), SAA (Kim et al., 2006), GA (Yokota et al., 1996), IA (Hsieh & You, 2011), ABC1 (Yeh & Hsieh, 2011), IPSO (Wu et al., 2011), CS2 (Garg, 2015a), PSO (Huang, 2015), NAFSA (He et al., 2015), SSO (Huang, 2015), PSSO (Huang, 2015), MICA (Afonso et al., 2013), GA-SRS (Ardakan & Hamadani, 2014), and LXPM-IPSO-GS (Zhang et al., 2013), with improvements of 3.4940E-03%, 4.6533E-01%, 3.2446E-01%, 6.8943E-05%, 5.6662E-04%, 3.4940E-03%, 4.1061E-04%, 3.8727E+01%, 1.7353E-04%, 2.6336E-01%, 1.3157E-04%, 4.3868E-03%, 3.6664E+00%, and 3.9668E-05% respectively.
Table 6

Comparison of the best result for the series system (4.1.1) with other results in the literature

Algorithm | (x1,x2,x3,x4,x5) | r1 | r2 | r3 | r4 | r5 | RS(R) | Slack(g1) | Slack(g2) | Slack(g3) | MPI(%)
SCa(3, 2, 2, 3, 3)0.7794270.8694820.9026740.7140380.7868960.931680271.21454E-017.5189183.4940E-03
SAA(3, 2, 2, 3, 3)0.7771430.8675140.8966960.7177390.7938890.931363270.000007.518918 4.6533E-01
GA(3, 2, 2, 3, 3)0.7823910.8667120.9017470.7172660.7837950.931460275.3194E-027.5189183.2446E-01
IA(3, 2, 2, 3, 3)0.779462300.871883450.902800870.711350160.787861580.93168234275.284E-077.5189186.8943E-05
ABC1(3, 2, 2, 3, 3)0.7793990.8718370.9028850.7114030.7878000.931682272.1836E-047.5189182415.6662E-04
IPSO(3, 2, 2, 3, 3) 0.78037307 0.871783430.902408900.711473560.787387600.931680271.0100E-047.5189183.4940E-03
CS2(3, 2, 2, 3, 3)0.7794397340.8719952120.9028730500.7111270880.7879863740.931682106274.42986E-077.5189182414.1061E-04
PSO(2, 3, 2, 4, 2)0.800592810.740493160.829143840.636861440.887042760.8885037416.77574.81461473.8727E + 01
NAFSA(3, 2, 2, 3, 3)0.7793884130.8717209820.9030333910.7114183620.7877892880.931682268276.7347E-097.5189181.7353E-04
SSO(3, 2, 2, 3, 3)0.782714840.87351990.902648930.713134770.777297970.93150199271.82140E-037.518918242.6336E-01
PSSO(3, 2, 2, 3, 3)0.779466450.871732780.902849510.711487800.787816440.93168229721274.9081E-57.518918241.3157E-04
MICA(3, 2, 2, 3, 3)0.7798740.8720570.9034260.7109600.7869020.93167939279.9E-057.5189184.3868E-03
GA-SRS(3, 2, 2, 3, 3)0.764593350.887528920.915395270.693505440.776031450.929082263568272.2694E-057.51891824113.6664E + 00
LXPM-IPSO-GS(3, 2, 2, 3, 3) 0.779509 0.8718590.9028910.7113450.7877390.93168236272.20E-077.5189183.9668E-05
HSSATLBO(3, 2, 2, 3, 3) 0.7793828940.8718337570.9028850370.7114168290.78779659640.93168238710274.949952767E-077.5189182411
Table 11

Comparison of results for Large Scale systems (4.1.7) with other results in the literature

Dim | Methods | VTV | RS(r,n) | Slack(g1) | Slack(g2) | Slack(g3) | Slack(g4)
36SCa(5, 10, 15, 21, 33)0.519976
NGHS(5, 10, 15, 21, 33)0.519976
IPSO(5, 10, 15, 21, 33)0.519976
ICS(5, 10, 15, 21, 33)0.519976
CS1(5, 10, 15, 21, 33)0.51997597
INGHS(5, 10, 15, 21, 33)0.51997596538026149.125763519460109301.353247018274
IABC(5, 10, 15, 21, 33)0.5199759653802567149.125763519460179109301.35324701827426
HSSATLBO(5, 10, 15, 21, 33)0.519975965380256149.12576351946018109291.3532470182740
38SCa(10,13,15,21 ,33)0.510989
NGHS(10,13,15,21 ,33)0.510989
IPSO(10,13,15,21 ,33)0.510989
ICS(10,13,15,21 ,33)0.51098860
INGHS(10,13,15,21 ,33)0.51098859649712153.638550812459115317.039538519290
IABC(10,13,15,21 ,33)0.5109885964971198153.6385508124589020115317.039538519289640
HSSATLBO(10,13,15,21 ,33)0.5109885964971198153.6385508124589020115317.039538519289640
40SCa(5, 10, 13, 15, 33)0.503292
NGHS(5, 10, 13, 15, 33)0.503292
IPSO(5, 10, 13, 15, 33)0.503292
ICS(5, 10, 13, 15, 33)0.5032926
CS1(4, 10, 11, 21, 22, 33)0.50599242
INGHS(4, 10, 11, 21, 22, 33)0.505992421241597051.04714167016368119333.24054864606615
IABC(4, 10, 11, 21, 22, 33)0.5059924212415972051.047141670163683119333.24054864606615
HSSATLBO(5, 10, 13, 15, 33)0.5032924930631358358.53406557447610128330.2821792064087
42SCa(4, 10, 11, 15, 21, 33)0.479664
NGHS(4, 10, 11, 15, 21, 33)0.479664
IPSO(4, 10, 11, 15, 21, 33)0.479664
ICS(4, 10, 11, 15, 21, 33)0.479664
CS1(4, 10, 11, 15, 21, 33)0.47966355
INGHS(4, 10, 11, 15, 21, 33)0.47966355148656252.718250389045129354.583694396574
IABC(4, 10, 11, 15, 21, 33)0.4796635514865568252.7182503890448400129354.583694396573720
HSSATLBO(4, 10, 11, 15, 21, 33)0.4796635514865568252.7182503890448400129354.583694396573720
50SCa(4, 10, 15, 21, 33, 45, 47)0.405390
NGHS(4, 10, 15, 21, 33, 45, 47)0.405390
IPSO(4, 10, 15, 21, 33, 45, 47)0.405390
ICS(4, 10, 15, 21, 33, 42, 45)0.40695474
CS1(4, 10, 15, 21, 33, 42, 45)0.40695474513707061.955982588824154.0433.914646838262
INGHS(4, 10, 15, 21, 33, 42, 45)0.40695474513707061.955982588824154.0433.914646838262
IABC(4, 10, 15, 21, 33, 42, 45)0.4069547451370713061.9559825888243270154.0433.914646838261660
HSSATLBO(4, 10, 15, 21, 33, 42, 45)0.4069547451370713061.9559825888243270154.0433.914646838261660

Here, VTV denotes the variables that take the value 2 at the optimum

It can be observed from Table 7 that the optimal solution for the complex system (4.1.2) produced by HSSATLBO is 0.9998896373815054, which is better than the best results given by the other compared algorithms, with improvements of 4.7596E+01%, 1.7777E+00%, 8.6705E+00%, 2.5927E-01%, 6.6414E+01%, 9.1343E-01%, and 2.5695E+00% over the results given by SCa (Gen & Yun, 2006), SAA (Kim et al., 2006), GA (Yokota et al., 1996), IA (Hsieh & You, 2011), PSO (Huang, 2015), SSO (Huang, 2015), and GA-SRS (Ardakan & Hamadani, 2014) respectively.
Table 7

Comparison of the best result for the complex system (4.1.2) with other results in the literature

Algorithm | (x1,x2,x3,x4,x5) | r1 | r2 | r3 | r4 | r5 | RS(R) | Slack(g1) | Slack(g2) | Slack(g3) | MPI(%)
SCa(3, 3, 2, 3, 2)0.8144830.8213830.8961510.7130910.8140910.9997894181.8540754.2647704.7596E + 01
SAA(3, 3, 3, 3, 1)0.8681160.8072630.8728620.7126670.7510340.99988764400.0073001.6092891.7777E + 00
GA(3, 3, 3, 3, 1)0.8140900.8646140.8902910.7011900.7347310.99987916180.3763474.2647708.6705E + 00
NGHS(3, 3, 2, 4, 1)0.829839990.857989110.913339260.646744790.703109720.9998896050.000005941.560466293.3860E-02
IA(3, 3, 3, 3, 1)0.816624170.868767390.858748780.710279370.753429200.9998893505184.0420871E-084.2647702.5927E-01
EGHS(3, 3, 2, 4, 1)0.829839990.857989110.913339260.646744790.703109720.9998896050.000005941.560466293.3860E-02
ABC1(3, 3, 2, 4, 1)0.8280870.8578050.7041630.6481460.9142400.99988962525.433925771.560466281.5747E-02
IPSO(3, 3, 2, 4, 1)0.828683610.858025670.913646160.648034070.702275950.9998896350.000003591.560466296.6880E-03
ABC2(3, 3, 2, 4, 1)0.8279702760.8578747580.9141864040.6483553860.7035753110.99988963580953.7463676E-041.560466281.4248E-03
INGHS(3, 3, 2, 4, 1)0.82798479110.85767968130.91415645220.64848140550.70486549880.999889636450.000001891.560466288.8934E-04
CS2(3, 3, 2, 4, 1)0.827855650.85762610540.9147529160.6482172080.7026703740.999889631951.06721E-101.56046624.8959E-03
EBBO(3, 3, 2, 4, 1)0.82806068920.85804045450.91414874860.64796890120.70420487960.999889636451.4541E-041.5604668.8934E-04
PSO(3, 3, 2, 2, 3)0.770615880.901092530.892786510.600830080.734510020.999671403716.545713121.411803226.6414E + 01
NAFSA(3, 3, 2, 4, 1)0.82832179180.85797450730.91422098820.64775717010.70300666180.999889636051.5485E-051.560466291.2427E-03
PSSO(3, 3, 2, 4, 1)0.827832920.857712410.914374580.648610020.702875540.99988963573852.502902E-051.5604662881.4891E-03
SSO(3, 3, 2, 4, 1)0.820083620.851196290.918548580.660720830.702758790.9998886250.004375991.5604662889.1343E-01
GA-SRS(3, 3, 3, 3, 1)0.804572340.857173050.867346830.727591620.764166660.999886726842186.7785491E-054.2647698042.5695E + 00
LXPM-IPSO-GS(3, 3, 3, 3, 1)0.8279740.8578180.9141660.6483480.7044270.99988963685253.351252E-051.5604662884.7989E-04
MICA(3, 3, 2, 4, 1)0.827642570.857478450.914196770.649273790.704092000.9998896350.000044281.560466296.6880E-03
HSSATLBO(3, 3, 2, 4, 1)0.82800516770.85781309720.91425330440.64826627310.70388071180.999889637381505451.1074633E-061.56046628802
Table 8 shows that the best result for the series-parallel system (4.1.3) obtained by the proposed method is 0.9999863373757, which is better than the results of SCa (Gen & Yun, 2006), SAA (Kim et al., 2006), GA (Yokota et al., 1996), IA (Hsieh & You, 2011), ABC1 (Yeh & Hsieh, 2011), IPSO (Wu et al., 2011), CPSO (He & Wang, 2007a), CS1 (Valian & Valian, 2013), ICS (Valian et al., 2013), CS-GA (Kanagaraj et al., 2013), ABC2 (Garg et al., 2013), TS-DE (Liu & Qin, 2014), INGHS (Ouyang et al., 2015), CS2 (Garg, 2015a), MPSO (Liu & Qin, 2015), EBBO (Garg, 2015b), PSO (Huang, 2015), PSFSA (Mellal & Zio, 2016), NAFSA (He et al., 2015), PSSO (Huang, 2015), SSO (Huang, 2015), DE (Liu & Qin, 2015), and IABC (Ghambari & Rahati, 2018), with improvements of 4.7085E+01%, 4.1488E+01%, 5.6280E+01%, 4.2327E+01%, 4.1490E+01%, 3.9786E+01%, 4.1513E+01%, 4.1490E+01%, 4.1490E+01%, 4.1488E+01%, 4.1490E+01%, 4.1490E+01%, 4.1490E+01%, 4.1491E+01%, 4.1490E+01%, 4.1490E+01%, 9.0348E+01%, 4.1490E+01%, 4.1493E+01%, 4.1491E+01%, 4.1687E+01%, 4.1490E+01%, and 3.2264E+01% respectively. Also, the optimal redundancy allocation found by HSSATLBO for this series-parallel system is (3, 2, 2, 2, 4), which is completely different from the other approaches.
Table 8

Comparison of the best result for the series-parallel system (4.1.3) with other results in the literature

Algorithm | (x1,x2,x3,x4,x5) | r1 | r2 | r3 | r4 | r5 | RS(R) | Slack(g1) | Slack(g2) | Slack(g3) | MPI(%)
SCa(2, 2, 2, 2, 4)0.7854520.8429980.8853330.9179580.8703180.99997418401.1944401.6092894.7085E + 01
GA(3, 3, 1, 2, 3)0.8381930.8550650.8788590.9114020.8503550.99996875530.000007.1108495.6280E + 01
SAA(2, 2, 2, 2, 4)0.8195960.8450000.8955140.8955190.8684560.99997665400.0000071.6092894.1488E + 01
ABC1(2, 2, 2, 2, 4)0.8195915610.8449510680.8954285480.8955223390.8684902290.999976649036405.9845376E-041.6092889664.1490E + 01
IA(2, 2, 2, 2, 4)0.8121610.8533460.8975970.9007100.8663160.99997631400.0073001.6092894.2327E + 01
CPSO(2, 2, 2, 2, 4)0.819185260.843664210.894729920.895376280.869127240.99997664400.0005611.6092894.1513E + 01
CS1(2, 2, 2, 2, 4)0.8199270870.8452676570.8954915540.8954406920.8683187750.999976649400.00001611.60928904.1490E + 01
ICS(2, 2, 2, 2, 4)0.8199270870.8452676570.8954915540.8954406920.8683187750.999976649400.00001611.60928904.1490E + 01
CS-GA(2, 2, 2, 2, 4)0.8196602560.8449816150.8955193050.8954922450.8684475870.99997665400.0000000171.609288974.1488E + 01
IPSO(2, 2, 2, 2, 4)0.81974570.84500800.89545810.90090320.86840690.99997731401.4695223381.6092889663.9786E + 01
ABC2(2, 2, 2, 2, 4)0.8197377534690.8449910997760.8955295438200.8954336872060.8684348244690.999976649054401.39152E-101.6092889664.1490E + 01
TS-DE(2, 2, 2, 2, 4)0.8196590.8449810.8955070.8955060.8684480.9999766491402.66935542E-041.6092889664.1490E + 01
INGHS(2, 2, 2, 2, 4)0.81981186260.84495068420.89567015850.89523270690.8684380574450.9999766489400.000053054141.6092889664.1490E + 01
CS2(2, 2, 2, 2, 4)0.8194832324880.8447830844550.8958105538870.8952202169150.8685424869730.999976648818402.7216628E-101.6092889664.1491E + 01
MPSO(2, 2, 2, 2, 4)0.819659320.844980740.895506420.895506430.868447750.9999766490660401.961642794E-071.6092889664.1490E + 01
EBBO(2, 2, 2, 2, 4)0.81965834480.84491014060.89548717130.89551489630.86846816130.9999766490488401.748541669E-051.6092889664.1490E + 01
PSO(4, 3, 2, 1, 2)0.840252820.888650990.623750550.939849500.751586910.99985845680.9169150884.0177036419.0348E + 01
PSFS(2, 2, 2, 2, 4)0.819659391180.844980852960.895506430760.895506451720.868447693460.9999766490661401.85082E-101.60928896674.1490E + 01
NAFSA(2, 2, 2, 2, 4)0.819787575270.845671943720.894868363310.895908268560.868295830550.999976648004403.1248E-081.6092894.1493E + 01
PSSO(2, 2, 2, 2, 4)0.819589390.844584120.895341340.895816260.868529020.9999766487381408.1179461E-051.6092889664.1491E + 01
SSO(2, 2, 2, 2, 4)0.813858030.839126590.893661500.898452760.871063230.99997657400.00241.6092889664.1687E + 01
DE(2, 2, 2, 2, 4)0.819659320.844980740.895506420.895506430.868447750.9999766490660401.96164279E-071.6092889664.1490E + 01
IABC(3, 3, 2, 2, 3)0.8277437120270.8471386117940.8568910353040.8566343480410.8759765310930.999979829614380.00001830670.7059016123.2264E + 01
HSSATLBO(3, 2, 2, 2, 4)0.77536185126280.87142414227730.89037022304150.89144387411160.86302615505950.9999863373757301.26363261E-071.794965001
It can be noticed in Table 9 that the best solution achieved by HSSATLBO for the overspeed protection system (4.1.4) is 0.99995467466432. The proposed algorithm dominates 15 competitive algorithms in terms of the best-known solution found so far. Table 9 shows that the proposed method has improvement indices of 1.759E+03%, 2.421E-02%, 1.029E+00%, 1.029E+00%, 1.240E-02%, 8.038E-02%, 1.750E-02%, and 1.419E-02% over the results of SAA (Kim et al., 2006), IPSO (Wu et al., 2011), NMDE (Zou et al., 2011), PSO (Huang, 2015), PSSO (Huang, 2015), SSO (Huang, 2015), DE (Liu & Qin, 2015) and GA-PSO (Duan et al., 2010) respectively. Table 10 indicates that HSSATLBO performs the same as or better than the other existing algorithms in this literature for solving the convex quadratic reliability problem (4.1.5) and the mixed series-parallel system (4.1.6) in terms of best results. Table 11 reports the test results of problem (4.1.7). It can be seen that the HSSATLBO algorithm gives equal or better results compared to the other algorithms in terms of the best objective function value for the large-scale problems of dimensions 36, 38, 40, 42 and 50, except that, for dimension 40, it yields a weaker objective value than the two existing algorithms INGHS and IABC.
Table 9

Comparison of the best result for the overspeed protection system (4.1.4) with other results in the literature

Algorithms | (x1,x2,x3,x4) | r1 | r2 | r3 | r4 | RS(R) | Slack(g1) | Slack(g2) | Slack(g3) | MPI(%)
SAA(5, 5, 5, 5)0.8956440.8858780.9121840.8877850.999945500.938028.80371.759E + 01
IA(5, 5, 4, 6)0.9015886280.8881923800.9481660220.8499697920.99995467455551.249537E-0415.36346302.421E-04
IPSO(5, 5, 4, 5)0.901631640.849970200.948218280.888128850.99995467550.00000924.0818831.029E-02
NMDE(5, 6, 4, 5)0.901614800.849921110.948141390.888222860.99995467551.057E-0524.801882721.029E-02
TS-DE(5, 6, 4, 5)0.9016150.8499210.9481410.8882230.9999546746081551.896705E-0424.801882721.240E-04
INGHS(5, 5, 4, 6)0.9015565830.8882438850.9481110970.8499817370.9999546743550.0000505424.8018827228.038E-04
CS2(5, 5, 4, 6)0.9015980770.8882261840.9481018610.8499807780.999954674558.824940E-1015.363463081.750E-04
EBBO(5, 5, 4, 6)0.901562920.888224940.948155950.8499528950.999954674552.7021E-0515.36346311.419E-04
PSO(4, 6, 5, 5)0.929523310.813703560.886637470.899871830.999904743711.526567711.64470775.242E + 01
PSSO(5, 5, 4, 6)0.901664610.888172960.948210330.849870840.99995467553.28722E-0515.36346301.029E-02
SSO(5, 6, 4, 5)0.902084350.854721070.946060180.886337280.99995416550.10923310424.80188271.123E + 00
LXPM-IPSO-GS(5, 5, 4, 6)0.901633170.8882510650.9481413770.8498540430.999954674599555.305493E-0615.363463081.423E-04
DE(5, 6, 4, 5)0.901614820.849921140.948141390.888222840.99995467551.005073E-0524.801882721.029E-02
MICA(5, 5, 4, 5)0.901489880.850035260.948129520.888238330.999954673550.0021378224.80188273.672E-03
GA-PSO(5, 5, 4, 6)0.9016280.8882300.9481210.8499210.99995467550.00000615.3634631.029E-02
HSSATLBO(5, 6, 4, 5) 0.9016238770.849936249 0.948146758 0.888204712 0.99995467466432554.96040115E-0724.80188272
Table 10

Comparison of the best result for the Convex quadratic (4.1.5 ) and Mixed series-parallel system (4.1.6) with other results in the literature

Problems | Methods | n | RS(r,n) | Slack(g1) | Slack(g2) | Slack(g3) | Slack(g4)
4.1.5GA(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.808844
HDE(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.808844
INGHS(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.808844189632730.9649307648E + 130.0202998323E + 134.3632406548E + 130.0871498795E + 13
IABC(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.808844189632730.9649307648E + 130.02029983232E + 134.36324065484E + 130.0871498795E + 13
HSSATLBO(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.80884418963273479.649307648E + 122.029983232E + 114.36324065484E + 138.71498795E + 11
4.1.6GA(3, 4, 5, 3, 3, 2, 4, 5, 4, 3, 3, 4, 5, 5, 5)0.9202
HDE(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.945613
INGHS(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.9456133574581480
IABC(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.9456133574581480
HSSATLBO(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.945613357458137180
To show the convergence performance of the stated algorithm against several existing algorithms such as ABC, NNA, TLBO and SSA, the best solution found so far is tracked for each considered problem, and the results are plotted in Fig. 4. This analysis shows that HSSATLBO has a better convergence rate than the other algorithms.
Fig. 4

Comparison of convergence curves of HSSATLBO with existing optimizers

Comparison of convergence curves of HSSATLBO with existing optimizers

HSSATLBO comparison with other variants of SSA

This section details the comparative study of results for the conventional SSA, the proposed HSSATLBO, and three SSA variants, namely the Levy-flight-based SSA (LSSA), the Cauchy salp swarm algorithm (CSSA) and the Gaussian salp swarm algorithm (GSSA). The results are presented in Tables 12-16 in terms of the best obtained value and the MPI values. Table 12 shows that the best optimal solution obtained by HSSATLBO for the series system (4.1.1) is better than the original SSA and the three variants LSSA, CSSA and GSSA, with improvements of 1.2529E-07%, 2.7509E-06%, 1.2566E-06%, and 6.8942E-07% respectively. It can be observed from Table 13 that the optimal solution achieved by HSSATLBO for the complex system (4.1.2) is better than the compared algorithms SSA, LSSA, CSSA and GSSA, with improvement percentages of 2.6556E-03%, 2.3872E-03%, 3.2317E-04%, and 1.1439E-05% respectively. Again, for the series-parallel system (4.1.3) and the overspeed system (4.1.4), Tables 14 and 15 show that HSSATLBO dominated all compared algorithms in both the best optimal value and the MPI values. Table 16 shows that HSSATLBO achieves the same or a better optimal value than the compared algorithms for both the convex system (4.1.5) and the mixed series-parallel system (4.1.6).
Table 12

Comparison of the best result for the series system (4.1.1) with SSA variants

Each row lists: algorithm; n = (x1,...,x5); r = (r1,...,r5); best reliability RS(R); constraint slacks (g1, g2, g3); MPI(%).

SSA:      n = (3, 2, 2, 3, 3); r = (0.77946493, 0.87181012, 0.90289318, 0.71139277, 0.78778637); RS = 0.93168237854; slacks = (27, 8.9782981E-06, 7.5189182); MPI = 1.2529E-07
LSSA:     n = (3, 2, 2, 3, 3); r = (0.77932026, 0.87186227, 0.90273431, 0.71160561, 0.78767127); RS = 0.93168219916; slacks = (27, 6.4467091E-06, 7.51891824); MPI = 2.7509E-06
CSSA:     n = (3, 2, 2, 3, 3); r = (0.77959711, 0.87181360, 0.90292374, 0.71134023, 0.78769105); RS = 0.93168230125; slacks = (27, 7.5055857E-06, 7.51891824); MPI = 1.2566E-06
GSSA:     n = (3, 2, 2, 3, 3); r = (0.77938544, 0.87191850, 0.90286873, 0.71130499, 0.78786502); RS = 0.93168234000; slacks = (27, 9.8571517E-06, 7.51891824); MPI = 6.8942E-07
HSSATLBO: n = (3, 2, 2, 3, 3); r = (0.779382894, 0.871833757, 0.902885037, 0.711416829, 0.7877965964); RS = 0.93168238710; slacks = (27, 4.9499527E-07, 7.518918241); MPI = –
Table 16

Comparison of the best result for the Convex quadratic (4.1.5 ) and Mixed series-parallel system (4.1.6) with SSA variants

Problems | Methods | n | RS(r,n) | Slack(g1) | Slack(g2) | Slack(g3) | Slack(g4)
4.1.5SSA(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.8088441896327349.649307648E + 122.029983232E + 114.36324065484E + 138.71498795E + 11
LSSA(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.8088441896327349.649307648E + 122.029983232E + 114.36324065484E + 138.71498795E + 11
CSSA(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.8088441896327349.649307648E + 122.029983232E + 114.36324065484E + 138.71498795E + 11
GSSA(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.8088441896327349.649307648E + 122.029983232E + 114.36324065484E + 138.71498795E + 11
HSSATLBO(2, 2, 2, 1, 1, 2, 3, 2, 1, 2)0.8088441896327349.649307648E + 122.029983232E + 114.36324065484E + 138.71498795E + 11
4.1.6SSA(3, 4, 5, 4, 3, 2, 4, 6, 4, 2, 3, 4, 5, 4, 5)0.945218008629980
LSSA(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.945613357458180
CSSA(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.945613357458180
GSSA(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.945613357458180
HSSATLBO(3, 4, 6, 4, 3, 2, 4, 5, 4, 2, 3, 4, 5, 4, 5)0.945613357458137180
Table 13

Comparison of the best result for the complex system (4.1.2) with SSA variants

Each row lists: algorithm; n = (x1,...,x5); r = (r1,...,r5); best reliability RS(R); constraint slacks (g1, g2, g3); MPI(%).

SSA:      n = (3, 3, 3, 3, 1); r = (0.81596668, 0.86859325, 0.85853828, 0.71089508, 0.75535162); RS = 0.999889343515; slacks = (18, 2.4358456E-07, 4.2647698); MPI = 2.6556E-03
LSSA:     n = (3, 3, 2, 4, 1); r = (0.82903789, 0.85736946, 0.91436033, 0.65047059, 0.67604199); RS = 0.999889373288; slacks = (5, 5.7066553E-06, 1.5604662); MPI = 2.3872E-03
CSSA:     n = (3, 3, 2, 4, 1); r = (0.82831707, 0.85747467, 0.91437566, 0.64931145, 0.69435926); RS = 0.999889601704; slacks = (5, 2.8128656E-06, 1.5604662); MPI = 3.2317E-04
GSSA:     n = (3, 3, 2, 4, 1); r = (0.82816367, 0.85782025, 0.91432496, 0.64813395, 0.70237521); RS = 0.999889636119; slacks = (5, 1.3054309E-06, 1.5604662); MPI = 1.1439E-05
HSSATLBO: n = (3, 3, 2, 4, 1); r = (0.82800516, 0.85781309, 0.91425330, 0.64826627, 0.70388071); RS = 0.9998896373815; slacks = (5, 1.1074633E-06, 1.5604662); MPI = –
Table 14

Comparison of the best result for the series-parallel system (4.1.3) with SSA variants

Each row lists: algorithm; n = (x1,...,x5); r = (r1,...,r5); best reliability RS(R); constraint slacks (g1, g2, g3); MPI(%).

SSA:      n = (3, 2, 2, 2, 4); r = (0.77494709, 0.87147146, 0.89201825, 0.89022079, 0.86300277); RS = 0.999986336949; slacks = (30, 2.4360319E-06, 1.7949650); MPI = 3.1230E-05
LSSA:     n = (3, 2, 2, 2, 4); r = (0.77509882, 0.87027453, 0.89110882, 0.89228253, 0.86319730); RS = 0.999986335794; slacks = (30, 5.4134949E-07, 1.7949650); MPI = 1.1575E-04
CSSA:     n = (3, 2, 2, 2, 4); r = (0.77585180, 0.87166330, 0.89161410, 0.89004200, 0.86283118); RS = 0.999986336878; slacks = (30, 2.9153687E-07, 1.7949650); MPI = 3.6426E-05
GSSA:     n = (3, 2, 2, 2, 4); r = (0.77104400, 0.86680377, 0.89522995, 0.89169037, 0.86470216); RS = 0.99998630333; slacks = (30, 3.4680749E-06, 1.7949650); MPI = 2.4856E-03
HSSATLBO: n = (3, 2, 2, 2, 4); r = (0.77536185, 0.871424142, 0.890370223, 0.891443874, 0.863026155); RS = 0.9999863373757; slacks = (30, 1.2636326E-07, 1.7949650); MPI = –
Table 15

Comparison of the best result for the overspeed protection system (4.1.4) with SSA variants

Each row lists: algorithm; n = (x1,...,x4); r = (r1,...,r4); best reliability RS(R); constraint slacks (g1, g2, g3); MPI(%).

SSA:      n = (5, 6, 4, 5); r = (0.90158641, 0.84985533, 0.94815323, 0.88826987); RS = 0.9999546746609; slacks = (55, 3.6119283E-06, 2.4801882E+01); MPI = 2.3015E-06
LSSA:     n = (5, 5, 4, 6); r = (0.90164911, 0.88819858, 0.94814849, 0.84991872); RS = 0.999954674643; slacks = (55, 1.5440297E-06, 1.5363463E+01); MPI = 4.7037E-07
CSSA:     n = (5, 5, 4, 6); r = (0.90167880, 0.88823897, 0.94810473, 0.84987261); RS = 0.99995467454; slacks = (55, 5.2320308E-07, 1.5363463E+01); MPI = 2.7428E-06
GSSA:     n = (5, 6, 4, 5); r = (0.90159816, 0.84992802, 0.94815898, 0.88821623); RS = 0.9999546746613; slacks = (55, 4.4184018E-06, 2.4801882E+01); MPI = 6.6630E-08
HSSATLBO: n = (5, 6, 4, 5); r = (0.901623877, 0.849936249, 0.948146758, 0.888204712); RS = 0.99995467466432; slacks = (55, 4.96040115E-07, 2.480188272E+01); MPI = –
Again, the convergence graphs of HSSATLBO are compared with those of LSSA, CSSA and GSSA for solving problems 4.1.1 to 4.1.6 and are given in Fig. 5. From these convergence graphs we can conclude that, as the iteration number increases, the proposed HSSATLBO algorithm maintains better performance than the existing algorithms.
Fig. 5

Comparison of convergence curves of HSSATLBO with SSA variants


Parameter sensitivity analysis

In this section, a parameter sensitivity analysis is performed to evaluate the impact of the probabilistic parameter PSP on the proposed algorithm. With all other conditions held fixed, different ranges of the parameter PSP are tested on reliability problems (4.1.1 - 4.1.6) and the results are presented in Table 17. PSP1 indicates that the lower and upper bounds of PSP take the values 0.05 and 0.95 respectively, i.e., PSP1 lies in [0.05, 0.95]. Similarly, PSP2, PSP3, PSP4 and PSP5 lie in [0.1, 0.9], [0.2, 0.9], [0.3, 0.9] and [0.4, 0.9] respectively. The mean values obtained by HSSATLBO and the corresponding ranking for each case are given in that table. Sorted by achievement in terms of mean value, the ranking order is: PSP4, PSP5, PSP3, PSP2, and PSP1. From the ranking order in Table 17, it can be observed that the proposed algorithm performs best when PSP lies in [0.3, 0.9], i.e., for the case of PSP4. Researchers can also choose different values of PSP for other sets of problems. Figure 6 provides a better visualization of the ranking of each case of HSSATLBO for solving reliability optimization problems.
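The selection rule itself is not reproduced in this excerpt; the sketch below is only a hypothetical reading of it, assuming PSP increases linearly from its lower to its upper bound over the run and gates the choice between the SSA and TLBO updates. The authors' exact scheme may differ.

```python
import random

def choose_operator(iteration, max_iter, psp_min=0.3, psp_max=0.9):
    """Pick which search operator to apply at this iteration.

    PSP grows linearly from psp_min to psp_max (the [0.3, 0.9] range
    that ranked best in Table 17); with probability PSP the SSA update
    is applied, otherwise the TLBO update.  This gating direction is an
    assumption for illustration only.
    """
    psp = psp_min + (psp_max - psp_min) * iteration / max_iter
    return "SSA" if random.random() < psp else "TLBO"

random.seed(1)
ops = [choose_operator(t, 500) for t in range(500)]
# Early iterations draw the TLBO-style move more often than late ones.
```

A rising PSP of this kind would favour TLBO-driven moves early (stronger exploitation pressure on the salp chain) and the plain SSA update later, which is consistent with the [0.3, 0.9] range performing best above.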
Table 17

Ranking of results with different values of parameter PSP

Problems | | PSP1 | PSP2 | PSP3 | PSP4 | PSP5
4.1.1 | Mean | 0.930607050 | 0.931293414 | 0.930909699 | 0.931549670 | 0.930698573
 | Rank | 5 | 2 | 3 | 1 | 4
4.1.2 | Mean | 0.999889220 | 0.999889278 | 0.999889346 | 0.999889391 | 0.999889348
 | Rank | 5 | 4 | 3 | 1 | 2
4.1.3 | Mean | 0.999984725 | 0.999984748 | 0.999984869 | 0.999985542 | 0.999985240
 | Rank | 5 | 4 | 3 | 1 | 2
4.1.4 | Mean | 0.999949560 | 0.999949559 | 0.999952968 | 0.999954106 | 0.999953538
 | Rank | 4 | 5 | 3 | 1 | 2
4.1.5 | Mean | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.808844189633
 | Rank | 1 | 1 | 1 | 1 | 1
4.1.6 | Mean | 0.944738729 | 0.944725261 | 0.945334852 | 0.945356368 | 0.945214695
 | Rank | 4 | 5 | 2 | 1 | 3
Average ranking | | 4 | 3.5 | 2.5 | 1 | 2.33
Ranking | | 5 | 4 | 3 | 1 | 2
Fig. 6

Ranking of results with different values of parameter PSP


Diversity and exploration-exploitation analysis

For an effective in-depth performance analysis, the population diversity and the exploration-exploitation measurements of HSSATLBO, SSA and the SSA variants (i.e., LSSA, CSSA and GSSA) while solving the reliability optimization problems are presented in Table 18. A graphical comparison of the diversity measurements of the proposed HSSATLBO and the SSA variants is given in Fig. 7, and the exploration-exploitation phases of the proposed algorithm are shown in Fig. 8. According to Table 18, the proposed hybrid method mostly maintains a lower population diversity than SSA, LSSA, CSSA and GSSA on all of the reliability problems. For example, on the series system (4.1.1), HSSATLBO maintained a population diversity of 0.12618, which is considerably lower than the diversity values 1.17742, 0.64185, 0.65301, and 0.65110 of SSA, LSSA, CSSA and GSSA respectively. Similarly, the diversity measurement of HSSATLBO on all other problems (4.1.2 - 4.1.6) remained lower than those of the original SSA and its variants. Moreover, Table 18 also reveals that HSSATLBO mostly kept its exploration percentage lower than its exploitation percentage on all of the reliability problems. For instance, HSSATLBO maintained exploitation percentages of 81%, 88%, 72%, 85%, 67% and 65% for the series, complex, series-parallel, overspeed, convex and mixed series-parallel systems respectively, and these values are higher than the exploitation measurements recorded for the compared algorithms. This discussion is further corroborated by Fig. 7 for the diversity measurement and Fig. 8 for the exploration and exploitation behaviour of the proposed algorithm.
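The paper does not restate its diversity formula in this excerpt. A commonly used formulation (assumed here for illustration; the authors' definition may differ) measures diversity as the dimension-wise average distance of individuals from the population median, and derives exploration/exploitation percentages from the diversity history:

```python
import statistics

def population_diversity(pop):
    """Dimension-wise mean absolute distance from the population
    median -- one common diversity measure (an assumption here)."""
    n, d = len(pop), len(pop[0])
    div = 0.0
    for j in range(d):
        med = statistics.median(row[j] for row in pop)
        div += sum(abs(med - row[j]) for row in pop) / n
    return div / d

def xpl_xpt(div_history):
    """Average exploration (XPL%) and exploitation (XPT%) percentages
    over a run, relative to the maximum observed diversity."""
    div_max = max(div_history)
    m = len(div_history)
    xpl = 100.0 * sum(div_history) / (m * div_max)
    xpt = 100.0 * sum(abs(dv - div_max) for dv in div_history) / (m * div_max)
    return xpl, xpt

# A shrinking diversity history reads as mostly exploitation:
print(xpl_xpt([1.0, 0.5]))
```

Under this definition, the low diversity values reported for HSSATLBO in Table 18 translate directly into the high exploitation percentages discussed above.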
Table 18

Comparison on Diversity and Exploration-Exploitation measurement with SSA and its variants

Problems | Measurement | SSA | LSSA | CSSA | GSSA | HSSATLBO
4.1.1 | Diversity | 1.17742 | 0.64185 | 0.65301 | 0.65110 | 0.12618
 | Expl% : Expt% | 66 : 34 | 40 : 60 | 41 : 59 | 41 : 59 | 19 : 81
4.1.2 | Diversity | 1.19247 | 0.63969 | 0.65583 | 0.66395 | 0.08692
 | Expl% : Expt% | 66 : 33 | 47 : 53 | 41 : 59 | 42 : 58 | 12 : 88
4.1.3 | Diversity | 1.20481 | 0.70349 | 0.67796 | 0.66555 | 0.19484
 | Expl% : Expt% | 68 : 31 | 44 : 56 | 43 : 57 | 42 : 58 | 28 : 72
4.1.4 | Diversity | 1.02167 | 0.54170 | 0.55105 | 0.56604 | 0.07521
 | Expl% : Expt% | 63 : 37 | 38 : 63 | 38 : 62 | 39 : 60 | 15 : 85
4.1.5 | Diversity | 2.67162 | 1.39743 | 1.37574 | 1.40417 | 0.30378
 | Expl% : Expt% | 90 : 9 | 61 : 39 | 60 : 40 | 62 : 38 | 33 : 67
4.1.6 | Diversity | 3.54413 | 1.69871 | 1.67273 | 1.72212 | 0.43632
 | Expl% : Expt% | 81 : 18 | 45 : 51 | 45 : 55 | 46 : 54 | 35 : 65
Fig. 7

Comparison of Diversity Measurement of HSSATLBO with SSA, LSSA, CSSA and GSSA

Fig. 8

Exploration-Exploitation measurement of HSSATLBO for solving 4.1.1 to 4.1.6


Statistical analysis

In addition, to analyze whether or not the results obtained by the proposed HSSATLBO algorithm are statistically significant, we consider the following quality indices:

The statistical results by Value-based method and tied ranking

The solution quality in terms of mean value and standard deviation is described here. A better mean value indicates a stronger global optimization capability, while a lower standard deviation indicates greater stability. Also, the tied rank (TR) (Rakhshani & Rahati, 2017) [64] is used here to compare the performance of the considered methods intuitively. In this study, the algorithm with the best mean value is assigned rank 1, the second best gets rank 2, and so on; two algorithms with the same result share the average of their ranks, and a smaller rank indicates a better algorithm. In view of these two quality measures, the statistical results achieved by HSSATLBO, all of the other existing algorithms (ABC, NNA, TLBO, SSA, HHO, SMA and SCA) and the three variants of SSA (LSSA, CSSA, and GSSA) are computed and summarized in Tables 19 and 20 for the considered problems. In these tables, the best, mean, SD and median of the best fitness values over the 30 independent runs of each algorithm are reported. From these tables, it is observed that the proposed algorithm is ranked first, followed by the other algorithms, which shows its stability and convergence on all of the benchmark problems. Sorted by achievement, the ranking orders are: HSSATLBO, TLBO, SMA, SSA, NNA, ABC, HHO and SCA in Table 19, and HSSATLBO, GSSA, LSSA, CSSA and SSA in Table 20. The ranking order in Table 19 indicates that the TLBO algorithm shows strong competitiveness and is the second best on all test problems except the overspeed system. Also, in Table 20, the Gaussian variant of SSA (GSSA) occupies the second-best position in most cases, except for the series-parallel system and the mixed system. It can therefore be argued that HSSATLBO is an efficient and effective method for solving various kinds of optimization problems.
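The tied-ranking rule described above can be sketched in a few lines (higher mean reliability is better; tied entries share the average of their rank positions):

```python
def tied_ranks(means):
    """Rank algorithms by mean value (higher is better for the
    reliability problems); tied entries share the average rank."""
    order = sorted(range(len(means)), key=lambda i: -means[i])
    ranks = [0.0] * len(means)
    i = 0
    while i < len(order):
        j = i
        # Extend the tie group while the next value is identical.
        while j + 1 < len(order) and means[order[j + 1]] == means[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Means of SSA, LSSA, CSSA, GSSA, HSSATLBO on problem 4.1.5 (Table 20):
# LSSA and GSSA tie, so they share rank (2 + 3) / 2 = 2.5.
means = [0.792425175286, 0.807698589866, 0.807469469913,
         0.807698589866, 0.808844189633]
print(tied_ranks(means))  # -> [5.0, 2.5, 4.0, 2.5, 1.0]
```

This reproduces the "5, 2.5, 4, 2.5, 1" row reported for problem 4.1.5 in Table 20.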
Table 19

Comparison of the statistical results obtained by HSSATLBO and the existing optimizers

Problems | Statistic | HHO | SMA | SCA | ABC | NNA | TLBO | SSA | HSSATLBO
4.1.1 | Best | 0.9232316599627 | 0.9316562323884 | 0.9197764827240 | 0.930382490513 | 0.931682365783 | 0.931682337961 | 0.93168237854 | 0.931682387100
 | Mean | 0.8972295012520 | 0.9277272900921 | 0.8944273654145 | 0.924381473736 | 0.927436092852 | 0.929590566068 | 0.924530590320 | 0.931379775783
 | Std | 1.9334365E-02 | 3.3659236E-03 | 1.8268221E-02 | 4.66145E-03 | 3.136224E-03 | 3.10656E-03 | 5.741864E-03 | 8.026681E-04
 | Median | 0.9062971654865 | 0.9283863318909 | 0.8961627775126 | 0.925846286649 | 0.927193504074 | 0.931681035689 | 0.924646713520 | 0.931680940554
 | Rank | 7 | 3 | 8 | 6 | 4 | 2 | 5 | 1
4.1.2 | Best | 0.999857265455 | 0.999889192967 | 0.999798858923 | 0.999882439262 | 0.999889619976 | 0.999889572955 | 0.999889343515 | 0.999889637382
 | Mean | 0.999677703132 | 0.999835962457 | 0.999673706031 | 0.999848278911 | 0.999831224137 | 0.999863109311 | 0.999851215328 | 0.999889356835
 | Std | 1.55695E-04 | 7.11217E-05 | 9.38780E-05 | 1.85770E-05 | 6.82623E-05 | 3.49079E-05 | 2.14278E-05 | 1.54451E-07
 | Median | 0.999721962723 | 0.999854265310 | 0.999685279461 | 0.999849258203 | 0.999844812853 | 0.999885734109 | 0.999851315177 | 0.999889331930
 | Rank | 7 | 5 | 8 | 4 | 6 | 2 | 3 | 1
4.1.3 | Best | 0.999985518490 | 0.999986245353 | 0.999963269496 | 0.999984489590 | 0.999984756234 | 0.999986321134 | 0.9999863369493 | 0.999986337376
 | Mean | 0.999957711886 | 0.999976964757 | 0.999915641671 | 0.999975636531 | 0.999931755895 | 0.999981341020 | 0.999969247658 | 0.999984950098
 | Std | 3.15903E-05 | 1.21419E-05 | 3.32342E-05 | 7.35266E-06 | 1.25126E-04 | 2.52450E-06 | 1.75994E-05 | 2.28012E-06
 | Median | 0.999972115709 | 0.999979723190 | 0.999922167085 | 0.999977378889 | 0.999979456599 | 0.999980456985 | 0.999979814445 | 0.999986321573
 | Rank | 6 | 3 | 8 | 4 | 7 | 2 | 5 | 1
4.1.4 | Best | 0.999948123242 | 0.999954664575 | 0.999872741449 | 0.999953683180 | 0.999954674620 | 0.999954674556 | 0.999954674661 | 0.999954674664323
 | Mean | 0.999795203405 | 0.999942731867 | 0.999568789845 | 0.999943724103 | 0.999945629094 | 0.999934231510 | 0.999940788377 | 0.999954104675
 | Std | 2.15848E-04 | 2.19727E-05 | 2.34285E-04 | 6.52910E-06 | 1.05988E-05 | 7.36309E-05 | 1.60499E-05 | 2.16403E-06
 | Median | 0.999889663319 | 0.999954382137 | 0.999594706829 | 0.999944385656 | 0.999946151065 | 0.999946151194 | 0.999946134330 | 0.999954674328
 | Rank | 7 | 4 | 8 | 3 | 2 | 6 | 5 | 1
4.1.5 | Best | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.785722876111 | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.808844189633
 | Mean | 0.803652411107 | 0.800204709330 | 0.799655173430 | 0.647607656192 | 0.779653224368 | 0.804485730959 | 0.792425175286 | 0.808844189633
 | Std | 9.35732E-03 | 1.00297E-02 | 1.02335E-02 | 7.73926E-02 | 2.06347E-02 | 7.64016E-03 | 1.76271E-02 | 5.64601E-16
 | Median | 0.808844189633 | 0.808844189633 | 0.805407390333 | 0.640077832082 | 0.780835652533 | 0.808844189633 | 0.794475538781 | 0.808844189633
 | Rank | 3 | 4 | 5 | 8 | 7 | 2 | 6 | 1
4.1.6 | Best | 0.944748484568 | 0.945613357458 | 0.921041824697 | 0.834752754620 | 0.945613357458 | 0.945613357458 | 0.945218008630 | 0.945613357458
 | Mean | 0.940268430460 | 0.942641142180 | 0.889585424484 | 0.714460886500 | 0.942471745286 | 0.944324914872 | 0.940979789237 | 0.945368142124
 | Std | 2.12239E-03 | 2.37562E-03 | 1.97222E-02 | 6.32905E-02 | 2.32059E-03 | 1.25456E-03 | 5.91813E-03 | 3.76312E-04
 | Median | 0.940127901001 | 0.943252733491 | 0.893075644595 | 0.700895471149 | 0.943252733491 | 0.944748484568 | 0.942552429269 | 0.945613357458
 | Rank | 6 | 3 | 7 | 8 | 4 | 2 | 5 | 1
Average ranking | | 6 | 3.67 | 7.33 | 5.5 | 5 | 2.67 | 4.83 | 1
Ranking | | 7 | 3 | 8 | 6 | 5 | 2 | 4 | 1
Table 20

Comparison of the statistical results obtained by HSSATLBO and the SSA variants

Problems | Statistic | SSA | LSSA | CSSA | GSSA | HSSATLBO
4.1.1 | Best | 0.931682378549 | 0.931681598466 | 0.931681812332 | 0.931681957580 | 0.931682387100
 | Mean | 0.924530590294 | 0.921589927882 | 0.921580640950 | 0.925015928556 | 0.931379775784
 | Std | 5.74186E-03 | 7.52792E-03 | 7.09996E-03 | 6.61257E-03 | 8.02668E-04
 | Median | 0.924646713521 | 0.921405053083 | 0.923357326677 | 0.928483055614 | 0.931680940554
 | Rank | 3 | 4 | 5 | 2 | 1
4.1.2 | Best | 0.999889343515 | 0.999889373288 | 0.999889636119 | 0.999889601704 | 0.999889637382
 | Mean | 0.999851215328 | 0.999836304693 | 0.999857891541 | 0.999858100930 | 0.999889356835
 | Std | 2.14278E-05 | 4.79475E-05 | 2.00489E-05 | 1.55662E-05 | 1.54451E-07
 | Median | 0.999851315177 | 0.999851302379 | 0.999864570775 | 0.999851368670 | 0.999889331930
 | Rank | 4 | 5 | 3 | 2 | 1
4.1.3 | Best | 0.999986336949 | 0.999986335794 | 0.999986336878 | 0.999986303326 | 0.999986337376
 | Mean | 0.999969247658 | 0.999976040265 | 0.999973838913 | 0.999971544907 | 0.999984950098
 | Std | 1.75994E-05 | 1.20043E-05 | 9.64372E-06 | 1.66063E-05 | 2.28012E-06
 | Median | 0.999979814445 | 0.999979814849 | 0.999979803291 | 0.999979776596 | 0.999986321573
 | Rank | 5 | 2 | 3 | 4 | 1
4.1.4 | Best | 0.999954674661 | 0.999954674643 | 0.999954674661 | 0.999954674540 | 0.999954674664
 | Mean | 0.999940503716 | 0.999941627064 | 0.999938408345 | 0.999941853887 | 0.999954104695
 | Std | 1.58699E-05 | 2.05199E-05 | 3.65211E-05 | 1.57548E-05 | 2.16403E-06
 | Median | 0.999946134330 | 0.999946124021 | 0.999946117229 | 0.999946114350 | 0.999954674352
 | Rank | 4 | 3 | 5 | 2 | 1
4.1.5 | Best | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.808844189633
 | Mean | 0.792425175286 | 0.807698589866 | 0.807469469913 | 0.807698589866 | 0.808844189633
 | Std | 1.76271E-02 | 2.60543E-03 | 2.79644E-03 | 2.60543E-03 | 5.64601E-16
 | Median | 0.794475538781 | 0.808844189633 | 0.808844189633 | 0.808844189633 | 0.808844189633
 | Rank | 5 | 2.5 | 4 | 2.5 | 1
4.1.6 | Best | 0.945218008630 | 0.945613357458 | 0.945613357458 | 0.945613357458 | 0.945613357458
 | Mean | 0.940979789237 | 0.943723468748 | 0.942826393320 | 0.940895208312 | 0.945368142124
 | Std | 5.91813E-03 | 1.40407E-03 | 2.78831E-03 | 4.43258E-03 | 3.76312E-04
 | Median | 0.942552429269 | 0.943943950607 | 0.943404680133 | 0.941624946698 | 0.945613357458
 | Rank | 4 | 2 | 3 | 5 | 1
Average ranking | | 4.17 | 3.09 | 3.83 | 2.92 | 1
Ranking | | 5 | 3 | 4 | 2 | 1
Apart from this analysis, a statistical test named the Wilcoxon signed-rank test is performed to check the statistical significance of the results obtained from the proposed algorithm.

The results analysis by Wilcoxon signed-rank test

This statistical test-based method [17] is used to compare the performance of the proposed HSSATLBO with the other algorithms. It has several advantages over the t-test: (1) the tested samples are not assumed to follow a normal distribution; and (2) it is less sensitive to outliers than the t-test. These advantages make it a more powerful test for comparing two algorithms (Mafarja et al., 2018 [55]; Sun et al., 2018 [74]; Yi et al., 2019 [90]). The Wilcoxon signed-rank test is performed here with a significance level α = 0.05 and the obtained results are shown in Tables 21 and 22. In these tables, “H” is scored “1” if there is a significant difference between HSSATLBO and the existing algorithm, and “0” if there is no significant difference. The sign of “S” is taken as “+” if the proposed algorithm is superior to the compared algorithm and “−” if HSSATLBO is inferior to it. It is noted that the proposed HSSATLBO dominates the compared algorithms on almost all reliability problems. Thus, from this analysis, we conclude that the proposed HSSATLBO can obtain better solutions than the comparative algorithms, which means that the proposed method has a better global optimization capability than the comparable algorithms.
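As a minimal illustration of how such a comparison is computed in practice (using SciPy with synthetic paired run data, since the per-run values are not listed here):

```python
from scipy.stats import wilcoxon

# Paired best-fitness values over 30 independent runs.
# These are placeholder numbers, not the paper's data.
hssatlbo = [0.9310 + 0.0001 * i for i in range(30)]
rival = [v - 0.005 - 0.0001 * i for i, v in enumerate(hssatlbo)]

stat, p = wilcoxon(hssatlbo, rival)   # two-sided signed-rank test
H = 1 if p < 0.05 else 0              # significance flag at alpha = 0.05
S = "+" if sum(hssatlbo) > sum(rival) else "-"
```

With every paired difference favouring the first sample, the test returns a very small p-value, giving H = 1 and S = "+" in the notation of Tables 21 and 22.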
Table 21

The comparison results of the applied algorithms by Wilcoxon signed-rank test (a level of significance α = 0.05)

Each cell reports the p-value followed by (H, S).

Problems | HSSATLBO vs ABC | vs NNA | vs TLBO | vs SSA | vs HHO | vs SMA | vs SCA
4.1.1 | 2.35342E-06 (1, +) | 4.44933E-05 (1, +) | 3.68261E-02 (1, +) | 9.31565E-06 (1, +) | 1.73440E-06 (1, +) | 3.8822E-06 (1, +) | 1.73440E-06 (1, +)
4.1.2 | 1.73439E-06 (1, +) | 4.86026E-05 (1, +) | 2.59671E-05 (1, +) | 1.92092E-06 (1, +) | 1.73440E-06 (1, +) | 1.92092E-06 (1, +) | 1.73440E-06 (1, +)
4.1.3 | 3.18167E-06 (1, +) | 4.28568E-06 (1, +) | 4.86026E-05 (1, +) | 1.23807E-05 (1, +) | 3.18168E-06 (1, +) | 8.46608E-06 (1, +) | 1.73440E-06 (1, +)
4.1.4 | 3.18167E-06 (1, +) | 4.19550E-04 (1, +) | 1.49356E-05 (1, +) | 2.61343E-04 (1, +) | 1.73440E-06 (1, +) | 1.73440E-06 (1, +) | 1.12654E-05 (1, +)
4.1.5 | 1.73439E-06 (1, +) | 8.1463E-06 (1, +) | 6.3103E-01 (0, +) | 2.4375E-05 (1, +) | 3.667E-01 (1, +) | 4.2859E-02 (1, +) | 6.1035E-05 (1, +)
4.1.6 | 1.73439E-06 (1, +) | 7.2533E-06 (1, +) | 2.7931E-04 (1, +) | 2.6017E-06 (1, +) | 2.5631E-06 (1, +) | 1.1181E-05 (1, +) | 1.7333E-06 (1, +)
Table 22

The comparison results of the SSA variants by Wilcoxon signed-rank test (a level of significance α = 0.05)

Each cell reports the p-value followed by (H, S).

Problems | HSSATLBO vs SSA | vs LSSA | vs CSSA | vs GSSA
4.1.1 | 9.32E-06 (1, +) | 5.22E-06 (1, +) | 1.92E-06 (1, +) | 4.73E-06 (1, +)
4.1.2 | 1.92E-06 (1, +) | 1.73E-06 (1, +) | 2.88E-06 (1, +) | 3.18E-06 (1, +)
4.1.3 | 1.24E-05 (1, +) | 8.47E-06 (1, +) | 2.88E-06 (1, +) | 1.49E-05 (1, +)
4.1.4 | 2.61E-04 (1, +) | 1.89E-04 (1, +) | 1.97E-05 (1, +) | 4.07E-05 (1, +)
4.1.5 | 2.44E-05 (1, +) | 4.04E-02 (1, +) | 1.38E-01 (0, +) | 4.04E-02 (1, +)
4.1.6 | 2.60E-06 (1, +) | 6.91E-06 (1, +) | 1.22E-05 (1, +) | 1.18E-05 (1, +)

Kruskal-Wallis and multiple comparison test

The MCT is performed here to justify whether the proposed HSSATLBO algorithm is better than the other optimizers (e.g., SSA, NNA, TLBO, ABC, HHO, SMA and SCA) and the other variants of SSA (LSSA, CSSA, and GSSA). For this purpose, we first perform a non-parametric Kruskal-Wallis test (KWT) on the best values obtained for each considered problem. This test investigates the hypothesis that the different independent samples do or do not come from distributions with the same estimates. The MCT is then used to determine the significant differences between the estimates by performing multiple comparisons based on one-way ANOVA. To address this, the significance of the proposed HSSATLBO algorithm's results is assessed against the results of the compared algorithms. The optimized results for the pairs of different algorithms are summarized in Tables 23 and 24. In these tables, the first column gives the considered problem, while the second column indicates the pair of samples being compared. The third and fifth columns give the lower and upper bounds of the true mean difference between the considered samples at a 5% level of significance, and the last column reports the p-value of the KWT corresponding to the null hypothesis of equal means.
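The KWT step can be sketched as follows, using SciPy and illustrative samples rather than the paper's run data:

```python
from scipy.stats import kruskal

# Best objective values from repeated runs of three optimizers
# (placeholder numbers for illustration only).
alg_a = [0.9316, 0.9315, 0.9317, 0.9314, 0.9316]
alg_b = [0.9245, 0.9250, 0.9240, 0.9248, 0.9243]
alg_c = [0.9290, 0.9295, 0.9288, 0.9292, 0.9291]

h_stat, p_value = kruskal(alg_a, alg_b, alg_c)
# A small p-value rejects the null hypothesis that all samples come
# from the same distribution; a multiple comparison test (MCT) is
# then needed to locate which pairs of algorithms actually differ.
```

Because the Kruskal-Wallis test only says that at least one sample differs, the pairwise bounds in Tables 23 and 24 are what identify the specific algorithms that HSSATLBO significantly outperforms.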
Table 23

Statistical results of the existing optimizers using MCT analysis

Problems | Comparing | Lower bound | Group mean | Upper bound | p-value
4.1.1 | HSSATLBO vs SSA | 22.86833 | 53.46667 | 84.06500 | 1.85E-05
 | HSSATLBO vs TLBO | -34.414125 | 19.916667 | 74.247459 | 9.547E-01
 | HSSATLBO vs NNA | -0.347459 | 53.983333 | 108.314125 | 5.291E-02
 | HSSATLBO vs ABC | 35.402541 | 89.733333 | 144.064125 | 1.535E-05
 | HSSATLBO vs HHO | 111.835875 | 166.166667 | 220.497459 | 5.988E-08
 | HSSATLBO vs SMA | 8.002541 | 62.333333 | 116.664125 | 1.189E-02
 | HSSATLBO vs SCA | 116.835875 | 171.166667 | 225.497459 | 5.988E-08
4.1.2 | HSSATLBO vs SSA | 43.80368 | 74.40000 | 104.99632 | 1.02E-08
 | HSSATLBO vs TLBO | 2.102494 | 56.433333 | 110.764172 | 3.514E-02
 | HSSATLBO vs NNA | 27.402494 | 81.733333 | 136.064172 | 1.386E-04
 | HSSATLBO vs ABC | 35.935828 | 90.266667 | 144.597506 | 1.316E-05
 | HSSATLBO vs HHO | 115.269161 | 169.600000 | 223.930839 | 5.988E-08
 | HSSATLBO vs SMA | 37.602494 | 91.933333 | 146.264172 | 8.103E-06
 | HSSATLBO vs SCA | 128.169161 | 182.500000 | 236.830839 | 5.988E-08
4.1.3 | HSSATLBO vs SSA | 28.33467 | 58.93333 | 89.53200 | 1.49E-06
 | HSSATLBO vs TLBO | -2.947164 | 51.383333 | 105.713831 | 7.950E-02
 | HSSATLBO vs NNA | 33.369503 | 87.700000 | 142.030497 | 2.736E-05
 | HSSATLBO vs ABC | 47.486169 | 101.816667 | 156.147164 | 4.293E-07
 | HSSATLBO vs HHO | 77.769503 | 132.100000 | 186.430497 | 5.988E-08
 | HSSATLBO vs SMA | 14.536169 | 68.866667 | 123.197164 | 3.075E-03
 | HSSATLBO vs SCA | 128.536169 | 182.866667 | 237.197164 | 5.988E-08
4.1.4 | HSSATLBO vs SSA | 10.83540 | 41.43333 | 72.03126 | 2.06E-03
 | HSSATLBO vs TLBO | 4.287419 | 58.616667 | 112.945914 | 2.390E-02
 | HSSATLBO vs NNA | -0.845914 | 53.483333 | 107.8125807 | 5.733E-02
 | HSSATLBO vs ABC | 30.97075 | 85.30000 | 139.62925 | 5.323E-05
 | HSSATLBO vs HHO | 95.17075 | 149.50000 | 203.82925 | 5.988E-08
 | HSSATLBO vs SMA | 12.80409 | 67.13333 | 121.46258 | 4.468E-03
 | HSSATLBO vs SCA | 127.07075 | 181.4 | 235.7292473 | 5.988E-08
4.1.5 | HSSATLBO vs SSA | 6.83147 | 35.26667 | 63.70186 | 6.44E-03
 | HSSATLBO vs TLBO | -82.732372 | -28.800000 | 25.132372 | 7.393E-01
 | HSSATLBO vs NNA | 32.967628 | 86.900000 | 140.832372 | 2.860E-05
 | HSSATLBO vs ABC | 86.467628 | 140.400000 | 194.332372 | 5.988E-08
 | HSSATLBO vs HHO | -67.432372 | -13.500000 | 40.432372 | 9.951E-01
 | HSSATLBO vs SMA | -37.999038 | 15.933333 | 69.865705 | 9.866E-01
 | HSSATLBO vs SCA | -15.799038 | 38.133333 | 92.065704 | 3.870E-01
4.1.6 | HSSATLBO vs SSA | 43.47869 | 74.00000 | 104.52131 | 1.03E-08
 | HSSATLBO vs TLBO | -18.10360 | 36.20000 | 90.50360 | 4.678E-01
 | HSSATLBO vs NNA | 20.32974 | 74.63333 | 128.93693 | 8.143E-04
 | HSSATLBO vs ABC | 144.69640 | 199.00000 | 253.30360 | 5.988E-08
 | HSSATLBO vs HHO | 58.89640 | 113.20000 | 167.50360 | 6.680E-08
 | HSSATLBO vs SMA | 18.72974 | 73.03333 | 127.33693 | 1.188E-03
 | HSSATLBO vs SCA | 114.62974 | 168.93333 | 223.23693 | 5.988E-08
Table 24

Statistical results of SSA variants using MCT analysis

Problems | Comparing | Lower bound | Group mean | Upper bound | p-value
4.1.1 | HSSATLBO vs SSA | 22.86833 | 53.46667 | 84.06500 | 1.85E-05
 | HSSATLBO vs LSSA | 41.93500 | 72.53333 | 103.13167 | 1.09E-08
 | HSSATLBO vs CSSA | 41.40166 | 72.00000 | 102.59834 | 1.12E-08
 | HSSATLBO vs GSSA | 23.56833 | 54.16667 | 84.76500 | 1.36E-05
4.1.2 | HSSATLBO vs SSA | 43.80368 | 74.40000 | 104.99632 | 1.02E-08
 | HSSATLBO vs LSSA | 47.97034 | 78.56667 | 109.16299 | 9.94E-09
 | HSSATLBO vs CSSA | 28.40368 | 59.00000 | 89.59632 | 1.44E-06
 | HSSATLBO vs GSSA | 26.93701 | 57.53333 | 88.12966 | 2.89E-06
4.1.3 | HSSATLBO vs SSA | 28.33467 | 58.93333 | 89.53200 | 1.49E-06
 | HSSATLBO vs LSSA | 21.46800 | 52.06667 | 82.66533 | 3.40E-05
 | HSSATLBO vs CSSA | 32.50134 | 63.10000 | 93.69866 | 1.94E-07
 | HSSATLBO vs GSSA | 30.63467 | 61.23333 | 91.83200 | 4.86E-07
4.1.4 | HSSATLBO vs SSA | 10.83540 | 41.43333 | 72.03126 | 2.06E-03
 | HSSATLBO vs LSSA | 7.835405 | 38.43333 | 69.03126 | 5.53E-03
 | HSSATLBO vs CSSA | 19.46873 | 50.06666 | 80.664594 | 7.90E-05
 | HSSATLBO vs GSSA | 15.30207 | 45.90000 | 76.49793 | 4.12E-04
4.1.5 | HSSATLBO vs SSA | 6.83147 | 35.26667 | 63.70186 | 6.44E-03
 | HSSATLBO vs LSSA | -70.26853 | -41.83333 | -13.39814 | 5.74E-04
 | HSSATLBO vs CSSA | -67.53519 | -39.10000 | -10.66481 | 1.65E-03
 | HSSATLBO vs GSSA | -70.26853 | -41.83333 | -13.39814 | 5.74E-04
4.1.6 | HSSATLBO vs SSA | 43.47869 | 74.00000 | 104.52131 | 1.03E-08
 | HSSATLBO vs LSSA | 18.62869 | 49.15000 | 79.67131 | 1.09304E-04
 | HSSATLBO vs CSSA | 25.07869 | 55.60000 | 86.12131 | 6.67E-06
 | HSSATLBO vs GSSA | 42.97869 | 73.50000 | 104.02131 | 1.04E-08
The box plots and the MCT graphs for the considered problems (4.1.1-4.1.6) are shown in Figs. 9 and 10. In these figures, the left graph shows boxes marking the first, second and third quartiles, while the vertical lines extending from the boxes, called whiskers, indicate the spread of the remaining values. On the right side of each figure, the MCT performs multiple comparisons between the different pairs and marks the significant differences between them. The blue line in these graphs represents the proposed HSSATLBO results, and the red lines indicate which algorithms' results (HHO, SMA, SCA, ABC, NNA, TLBO, and SSA, or LSSA, CSSA, GSSA and SSA) are statistically significantly different from those of HSSATLBO. For example, in the case of the series system (4.1.1), Fig. 9 shows that all of the existing algorithms (HHO, SMA, SCA, ABC, NNA, TLBO, and SSA) have results that are statistically significantly different from those of the HSSATLBO algorithm. Furthermore, the vertical lines (right/left, shown in black) around the HSSATLBO results (displayed in blue) describe the marginal area used to judge whether or not a method is statistically comparable. From this analysis and the results shown in Figs. 9-10 and Tables 23-24, we conclude that the performance of the proposed algorithm is statistically significantly different from the other algorithms, and the best results are therefore provided by HSSATLBO.
Fig. 9

Box plot of objective function using the reported optimizers

Fig. 10

Box plot of objective function using the SSA variants

Box plot of objective function using the reported optimizers Box plot of objective function using the SSA variants

Conclusions & Future work

In order to solve reliability-redundancy allocation problems (RRAP) with non-linear resource constraints, this paper introduces a hybrid algorithm, HSSATLBO, combining the SSA and TLBO algorithms. SSA has been successfully applied to various kinds of complex optimization problems due to its simple structure and outstanding performance. However, although SSA is adequate in exploration, it lacks exploitation, which causes slow convergence and reduces optimization accuracy. To address these issues, the basic structure of SSA has been renovated by embedding the features of TLBO. In this context, a probabilistic selection strategy is defined that determines whether the basic SSA or TLBO is applied to construct a new solution. To demonstrate the application of the HSSATLBO algorithm, we have considered several benchmark problems in the area of reliability optimization. All of the considered problems involve mixed variables: discrete, continuous and integer. The core idea of the proposed HSSATLBO algorithm is to make full use of the good global search ability of SSA and the fast convergence of TLBO, which helps to maximize the system reliability through the choices of redundancy and component reliability. The results obtained by the proposed algorithm have been tested and compared with those of a number of existing algorithms, and the comparison shows that it performs well. Also, the best, mean, median and SD of the considered problems are reported, indicating that the proposed algorithm obtains better results with a smaller SD and is therefore reliable and robust. In addition, to account for the stochastic nature of the algorithm, we perform several statistical tests, namely a ranking test and the Wilcoxon signed-rank test, for each problem.
All of the above discussions and evaluations in this study ensure that the proposed algorithm is a competitive approach: it not only performs well but also has a better global optimization capability than the comparable algorithms for solving reliability problems. In future work, the proposed algorithm can be applied to other problems such as airline recovery, integrated aircraft and passenger recovery, and flight perturbation problems. Flight irregularity is a well-known and widespread problem all over the world that seriously impacts the performance of airline companies. Over recent decades, different models for dealing with the aircraft recovery problem have been proposed that aim to optimize the schedule under different conditions. Airlines attempt to find the best timetables consistent with their various objectives, namely minimizing the number of interrupted passengers, the total number of aircraft needed to recover from the disruption, the number of irrecoverable flights and the interrupted passengers' cost, the ultimate goal of the airlines being to maximize their overall profit (Andersson, 2006 [5]; Arıkan et al., 2016 [10]). Analysts suggest that it is better to consider the recovery aspects jointly, with all essential constraints, instead of considering only one recovery at a time. Real situations involving erratic disruptions require more flexible solutions that can absorb changes in a better way. Based on our study of this field of disruption management, the proposed algorithm may be considered in further research to solve this problem. Being a dynamic field of research, the models may be extended to handle more complex variants of the combined recovery problem.
n = (n1, n2, ..., nm), the redundancy allocation vector for the system.
m  number of subsystems.
ni  number of components in subsystem i.
ni^max  maximum number of components in subsystem i.
ri  component reliability in subsystem i.
b  vector of resource limitations.
ci  component cost in subsystem i.
vi  component volume in subsystem i.
wi  component weight in subsystem i.
Ri = 1 - (1 - ri)^ni, the reliability of the ith subsystem.
C  upper limit on the system's cost.
V  upper limit on the system's volume.
W  upper limit on the system's weight.
RS  system reliability.
gj  the jth constraint function.
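Using the notation above, the subsystem reliability Ri and the system reliability RS can be evaluated directly. The series form shown here is the simplest case; the other configurations studied in the paper (bridge, series-parallel, mixed) combine the Ri differently.

```python
def subsystem_reliability(r, n):
    """R_i = 1 - (1 - r_i)^(n_i): reliability of subsystem i when it
    carries n_i redundant parallel components of reliability r_i."""
    return 1.0 - (1.0 - r) ** n

def series_system_reliability(r, n):
    """R_S of a series system: the product of subsystem reliabilities."""
    rs = 1.0
    for ri, ni in zip(r, n):
        rs *= subsystem_reliability(ri, ni)
    return rs

# Two subsystems, each holding two components of reliability 0.8:
# R_i = 1 - 0.2**2 = 0.96, so R_S = 0.96 * 0.96 = 0.9216.
```

In the RRAP, the optimizer searches over both the integer vector n and the continuous vector r to maximize RS subject to the cost, volume and weight constraints gj.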