
Application of the Elitist-Mutated PSO and an Improved GSA to Estimate Parameters of Linear and Nonlinear Muskingum Flood Routing Models.

Ling Kang1, Song Zhang1.   

Abstract

Heuristic search algorithms, which are characterized by faster convergence rates and can obtain better solutions than traditional mathematical methods, are extensively used in engineering optimizations. In this paper, a newly developed elitist-mutated particle swarm optimization (EMPSO) technique and an improved gravitational search algorithm (IGSA) are successively applied to parameter estimation problems of Muskingum flood routing models. First, the global optimization performance of the EMPSO and IGSA is validated on nine standard benchmark functions. Then, to further analyse the applicability of the EMPSO and IGSA to various forms of Muskingum models, three typical structures are considered: the basic two-parameter linear Muskingum model (LMM), a three-parameter nonlinear Muskingum model (NLMM) and a four-parameter nonlinear Muskingum model that incorporates the lateral flow (NLMM-L). The problems are formulated as optimization procedures to minimize the sum of the squared deviations (SSQ) or the sum of the absolute deviations (SAD) between the observed and the estimated outflows. Comparative results of the selected numerical cases (Case 1-3) show that the EMPSO and IGSA not only converge rapidly but also obtain the same best optimal parameter vector in every run. The EMPSO and IGSA exhibit superior robustness and provide two efficient alternative approaches that can be confidently employed to estimate the parameters of both linear and nonlinear Muskingum models in engineering applications.


Year:  2016        PMID: 26784900      PMCID: PMC4718656          DOI: 10.1371/journal.pone.0147338

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Accurate forecasting of flood wave movement in natural river channels is extremely important for the real-time monitoring, alerting and control of floods, which are effective non-engineering measures for preventing tremendous loss of lives and property. Two categories of approaches for flood routing exist: hydraulic and hydrologic methods [1]. The former routes floods by numerically solving the well-known Saint-Venant equations, which usually imposes strict requirements on the topographical data of the investigated stream channel (such as channel cross-sections and roughness) and involves complicated computations [2]. Conversely, the latter is based on the continuity and empirical storage equations and is more widely used in engineering applications due to its simplicity. The Muskingum flood routing model, developed by McCarthy [3], is the most frequently applied hydrologic technique. It is well known that the precise estimation of parameters is the key point in applying the Muskingum method to real-time flood forecasting [4]. This problem is usually formulated and solved by determining the values of the Muskingum parameters from historical inflow-outflow hydrograph data based on a specified optimization criterion (i.e., optimization objective). During the past decades, two diverse types of techniques have been developed to deal with the problem: traditional mathematical methods and heuristic optimization algorithms. The mathematical methods include the least-squares method (LSM) [5], the Hooke-Jeeves (HJ) pattern search in conjunction with linear regression (HJ+LR), the conjugate gradient (HJ+CG) or the Davidon-Fletcher-Powell (HJ+DFP) algorithms [6], the nonlinear least-squares regression (NONLR) [7], the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [8] and the Nelder-Mead simplex (NMS) algorithm [9]. However, most of the mathematical methods mentioned above inevitably have drawbacks, such as special derivation conditions, time-consuming computations or the need for initial parameter assumptions.
Therefore, in recent decades numerous researchers have focused on heuristic optimization algorithms, which are characterized by fast convergence and the ability to obtain better solutions, such as the harmony search (HS) [10], the genetic algorithm (GA) [11], the standard, improved or hybrid particle swarm optimization algorithms (PSOs) [12-15], the immune clonal selection algorithm (ICSA) [16], the differential evolution (DE) [17], and the cuckoo search (CS) algorithm [18]. The purpose of this research is to apply the newly developed elitist-mutated PSO (EMPSO) algorithm [19] and an improved gravitational search algorithm (IGSA) to solve parameter estimation problems of different forms of Muskingum models (one linear structure and two nonlinear structures). The proposed IGSA is based on the gravitational search algorithm (GSA) [20]. Neither of these improved algorithms has previously been applied to such problems. In the IGSA, a modified velocity updating rule and an elite strategy are introduced to enhance the global search ability and accelerate the convergence speed of the basic GSA, respectively. The experimental results on 9 widely used standard benchmark functions with diverse properties demonstrate the global optimization abilities of the EMPSO and IGSA. The application cases verify their validity and advantages in handling parameter estimation problems of both linear and nonlinear Muskingum models. The remainder of this paper is organized as follows. In Sect. 2, we provide the structures and the flood routing procedures of three important linear and nonlinear Muskingum models, whose structural complexity increases with the number of parameters from two to four. In Sect. 3, we briefly describe the newly developed EMPSO and the IGSA presented in this study, and then test them on 9 minimization benchmark functions. In Sect. 4, the EMPSO and the IGSA are successfully applied to numerical cases (three typical flood events), and the results and analysis are also presented. We present the conclusions of our research in Sect. 5.

Muskingum Models

In previous decades, various forms of Muskingum models have been investigated [5, 18, 21]. Three typical linear or nonlinear Muskingum models and their corresponding flood routing equations or procedures are briefly described in this section: the original two-parameter linear Muskingum model (LMM) [3], a three-parameter nonlinear Muskingum model (NLMM) [5] and a four-parameter nonlinear Muskingum model that incorporates the lateral flow (NLMM-L) [18].

LMM

The original LMM, which is based on the basic hypothesis that the storage within a river reach is a weighted function of the inflow and outflow rates, employs the following continuity and storage equations, where S = channel storage at time t; I and O = observed rates of inflow and outflow at time t, respectively; K = storage-time constant, whose value approximates the flow travel time through the routed river reach; and x = weighting factor, with x ∈ (0, 0.3] for stream channels and x ∈ (0, 0.5] for reservoir storage. The finite difference solution of Eqs (1) and (2) and the flood routing procedure of the LMM are given by Eqs (3)–(5), where Ô = estimated outflow at time t; T = total number of time intervals; and C0, C1 and C2 = the three coefficients of the LMM. Note that the LMM is a two-parameter (C0, C1) Muskingum model because C2 = 1 − C0 − C1.
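The LMM routing recursion of Eqs (3)–(5) can be sketched in Python as below; the explicit form O(t+1) = C0·I(t+1) + C1·I(t) + C2·O(t) with C2 = 1 − C0 − C1 follows the standard Muskingum finite-difference solution, while the function name and the initial condition O(0) = I(0) are illustrative assumptions, not taken from the paper.

```python
def route_lmm(inflow, c0, c1):
    """Linear Muskingum routing sketch: O[t+1] = C0*I[t+1] + C1*I[t] + C2*O[t],
    with C2 = 1 - C0 - C1, so only (C0, C1) are free parameters as in the LMM."""
    c2 = 1.0 - c0 - c1
    outflow = [inflow[0]]  # assumed initial condition: O[0] = I[0]
    for t in range(len(inflow) - 1):
        outflow.append(c0 * inflow[t + 1] + c1 * inflow[t] + c2 * outflow[t])
    return outflow
```

Given a calibrated (C0, C1) pair, the routed hydrograph is produced in a single forward pass over the inflow series.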

NLMM

However, the relationship between the channel storage S and the weighted flow [xI + (1 − x)O] is not essentially linear in many river reaches; thus, the use of the LMM may be inappropriate. Hence, an additional exponent parameter m was introduced to account for this nonlinearity, and the following form of nonlinear Muskingum model has been suggested [5]. As shown in Eq (6), the NLMM is a three-parameter (K, x and m) Muskingum model, and the LMM is a particular form of the NLMM with m = 1. The rate of outflow O can be calculated by rearranging Eq (6) to obtain Eq (7).

NLMM-L

The LMM and NLMM are frequently viewed and discussed in the literature. However, both disregard the lateral flow along the investigated reach, despite the fact that lateral flow exists along many river reaches in actual flood events. Assuming that the lateral flow (Qlat) varies linearly along a river reach and can be expressed as a ratio (α) of the inflow rate (Qlat = αI), O'Donnell [21] proposed, in 1985, another linear Muskingum model that considers lateral flow; it is expressed as Eqs (8) and (9). Inspired by the above assumptions of O'Donnell [21], Karahan and Gurarslan [18] proposed in 2014 a new nonlinear Muskingum model that takes the lateral flow into consideration by integrating the continuity Eq (8) with the NLMM of Eq (6). As expressed in Eq (10), the NLMM-L is a four-parameter (K, x, m and α) Muskingum model. By rearranging Eq (10), the rate of outflow O can be calculated using Eq (11).

Routing Procedures of the NLMM and NLMM-L

In contrast to the LMM, the flood routing procedures of the nonlinear Muskingum models NLMM and NLMM-L are highly complex. The routing procedures for the NLMM and the NLMM-L can be standardized using the following steps [2, 8, 18]: Step 1: Assume values of the Muskingum parameters (K, x and m for the NLMM; K, x, m and α for the NLMM-L). Step 2: Calculate the storage amount (S) using Eq (6) for the NLMM and Eq (10) for the NLMM-L. Step 3: After combining the continuity Eqs (1) and (8) with the corresponding outflow calculations in Eqs (7) and (11), the time rate of the storage change can be calculated using Eq (12) for the NLMM and Eq (13) for the NLMM-L. Step 4: Calculate the next storage using Eq (14), where Δt is assumed to represent the unit time. Step 5: Calculate the next estimated outflow using Eq (15) for the NLMM and Eq (16) for the NLMM-L. Note that, compared with Eqs (7) and (11), Eqs (15) and (16) use the observed inflow at the previous time point (It) instead of the observed inflow at the current time (It+1), because Eq (15) occasionally provides better estimated outflows, as suggested and reported by [6, 8]. Step 6: Repeat Steps 2–5 for all time steps.
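The Step 1–6 procedure above can be sketched as follows. The explicit forms of Eqs (12)–(16) are reconstructed here from the standard nonlinear Muskingum literature cited above, and the function signature and initial condition O(0) = I(0) are illustrative assumptions; setting α = 0 recovers the NLMM, while α > 0 gives the NLMM-L.

```python
def route_nlmm(inflow, K, x, m, alpha=0.0, dt=1.0):
    """Sketch of the Step 1-6 routing procedure for the NLMM (alpha = 0)
    and the NLMM-L (alpha > 0). Equation numbers follow the paper's text."""
    O = [inflow[0]]  # assumed initial condition: O[0] = I[0]
    # Step 2: initial storage from Eq (6)/(10)
    S = K * (x * (1 + alpha) * inflow[0] + (1 - x) * O[0]) ** m
    for t in range(len(inflow) - 1):
        # Step 3: time rate of storage change, Eq (12)/(13)
        dS = (-(1.0 / (1 - x)) * (S / K) ** (1.0 / m)
              + ((1 + alpha) / (1 - x)) * inflow[t])
        # Step 4: next storage, Eq (14)
        S = S + dS * dt
        # Step 5: next estimated outflow, Eq (15)/(16); note it uses
        # the previous-time inflow I[t], not I[t+1]
        O.append((1.0 / (1 - x)) * (S / K) ** (1.0 / m)
                 - (x * (1 + alpha) / (1 - x)) * inflow[t])
    return O
```

As a sanity check, with m = 1, α = 0 and a constant inflow starting at steady state, the routed outflow stays constant, as the linear theory requires.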

Two Improved Heuristic Algorithms

Elitist-mutated PSO

Standard PSO and Its Developments

The PSO algorithm, originally introduced by Kennedy and Eberhart [22], is a population-based stochastic search technique inspired by the social behavior of fish schooling or bird flocking. In PSO, each individual within the swarm is called a particle and represents a candidate solution to the optimization problem. For a D-dimensional search space, let Xi = (xi1, xi2, …, xiD) and Vi = (vi1, vi2, …, viD) denote the position vector and the velocity vector of the ith particle, respectively. The best previously visited position of the ith particle and the current global best position in the swarm are recorded as pbesti = (pi1, pi2, …, piD) and gbest = (pg1, pg2, …, pgD), respectively, where g is the index of the best particle in the swarm. During iterations, the swarm is manipulated according to the updating rules written as Eqs (17) and (18). Note that this process involves individual intelligence, i.e., the particles learn through their own experience (local search) and the experience of their peers (global search). Here d = 1, …, D represents the index of the decision variables; i = 1, …, pop, where pop = the number of particles in the swarm; n = iteration number; r1 and r2 = uniformly generated random numbers in [0, 1]; and c1 and c2 = cognitive and social parameters, respectively, which are referred to as acceleration constants. In addition, the velocity in each iteration should be limited within the range [−vmax,d, vmax,d], where vmax,d = υ(xmax,d − xmin,d), 0.05 ≤ υ ≤ 0.50, and xmin,d and xmax,d are the lower and upper bounds of dimension d. Eqs (17) and (18) yield the standard PSO, which may suffer from premature convergence and poor control of its search capability. To overcome these drawbacks, extended studies and developments have been reported [23-26]. Among these, two major variants that modify the model coefficients are as follows.
In 1998, Shi and Eberhart [26] introduced a linearly decreasing inertia weight parameter w into Eq (17) to balance the local and global search abilities, as expressed in Eqs (19) and (20), where wmax and wmin = the initial and final inertia weights, respectively; n = current iteration number; and N = maximum number of iterations. In 1999, Clerc [23] introduced the constriction factor χ to control the changes in velocity and ensure better convergence of the PSO, as shown in Eqs (21) and (22). Other developments for improving the performance of PSO fall into two categories [24]: (1) modifying the population structure [25, 27] and (2) altering the interaction modes between each particle and its neighbors [24, 28–30].
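The inertia-weight PSO update of Eqs (17)–(20) can be sketched as below; the function names, default weights wmax = 0.9 and wmin = 0.4, and the velocity-clamping convention are illustrative assumptions rather than the paper's exact settings.

```python
import random

def pso_update(x, v, pbest, gbest, w, c1, c2, vmax):
    """One PSO velocity/position update (Eqs 17-18 with inertia weight w):
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), clamped to [-vmax, vmax]."""
    new_x, new_v = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * random.random() * (pbest[d] - x[d])
              + c2 * random.random() * (gbest[d] - x[d]))
        vd = max(-vmax[d], min(vmax[d], vd))  # velocity limiting
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v

def inertia(n, N, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, Eq (20)."""
    return w_max - (w_max - w_min) * n / N
```

Each iteration recomputes w via `inertia(n, N)` and then applies `pso_update` particle by particle.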

Latest Velocity Updating Rule and Elitist Mutation Operator

A newly developed PSO, namely, elitist-mutated PSO (EMPSO), was proposed by Nagesh Kumar and Janga Reddy [19] for solving water resource problems. Two main improvements in the EMPSO algorithm are: (1) each particle calculates its velocity using the latest updating rule as shown in Eq (23), where χ is the constriction coefficient and w is the inertia weight; and (2) a new strategic mechanism called elitist mutation operator is introduced to enhance the diversity of the swarm and explore new regions in the whole search space. Pseudo-code of the EMPSO algorithm is presented in Fig 1, in which Fig 1a gives the implementation of the elitist mutation operator and Fig 1b describes the main steps involved in the EMPSO methodology.
Fig 1

Pseudo-code of the EMPSO algorithm.

As illustrated in Fig 1a, in each iteration, the elitist mutation operator is performed on a predefined number (NM) of the worst-fitness particles in the swarm. This process of random perturbation is described as follows: first, all particles are sorted in ascending order based on their fitness function values, and the index numbers of the respective particles are obtained (i.e., the index of the sorted swarm is recorded in ASF[i], i = 1, …, pop); second, the elitist mutation is performed on the NM worst particles (the selected number of lowest-ranked particles to be elitist-mutated), and their position vectors are replaced with the new mutated position vectors obtained by performing variable-wise mutation on the global best position vector (pem is the mutation probability), whereas the velocity vectors of these particles remain unchanged.
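The elitist mutation operator described above can be sketched as follows, assuming a minimization problem (so the "worst" particles are those with the largest objective values); the variable-wise mutation here draws a uniform random value within the variable's bounds, and the function name and bounds handling are illustrative assumptions.

```python
import random

def elitist_mutation(swarm, fitness, gbest, nm, pem, lb, ub):
    """Sketch of the elitist mutation operator (Fig 1a): the NM worst
    particles are replaced by variable-wise mutations of gbest, each
    variable mutated with probability pem; velocities are left unchanged."""
    # rank particles from worst to best fitness (minimization assumed)
    order = sorted(range(len(swarm)), key=lambda i: fitness[i], reverse=True)
    for i in order[:nm]:  # the NM worst-fitness particles
        mutated = []
        for d, g in enumerate(gbest):
            if random.random() < pem:
                mutated.append(random.uniform(lb[d], ub[d]))  # perturb variable d
            else:
                mutated.append(g)  # copy the gbest component
        swarm[i] = mutated  # position replaced; velocity vector kept as-is
    return swarm
```

With pem = 0 the operator simply copies gbest into the worst particles, which makes its effect easy to verify in isolation.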

Improved GSA

Basic GSA

Based on the law of gravity and mass interactions, in 2009, Rashedi and Nezamabadi-pour [20] proposed a novel heuristic algorithm, namely, the gravitational search algorithm (GSA). For an optimization problem, the searcher agents in GSA are a collection of masses whose values are proportional to their fitness functions. During the iterative process, the masses interact with each other according to Newtonian gravity and the laws of motion. A heavier mass has a higher attraction, which indicates greater efficiency (i.e., closeness to the global optimum), and a slower speed of movement. The basic GSA is mathematically described as follows. Consider a system with pop agents (masses), in which the position of the ith agent (candidate solution) is denoted by Xi = (xi1, …, xid, …, xiD), where xid represents the position of the ith agent in the dth dimension and D is the space dimension. According to the Newton law of gravity, the force acting on the ith mass from the jth mass at time t is defined as Eq (25). The total force acting on agent i in dimension d is considered to be a randomly weighted sum of the forces exerted by the other agents, as expressed in Eq (26), where Mpi and Maj = the passive and active gravitational masses of agent i and agent j, respectively; Mai = Mpi = Mii = Mi, i = 1, …, pop, with Mii the inertial mass of the ith agent; G(t) = gravitational constant at time t; ε = a small constant; Rij(t) = ||Xi(t), Xj(t)||2 is the Euclidian distance between agents i and j; randj = a uniformly generated random number in [0, 1]; and Kbest = the set of the first K agents with the best fitness values and the largest masses, which is a function of time t with initial value K0 = pop and linearly decreases to 1 over the iterations.
Note that the gravitational constant G is a decreasing function of its initial value (G0, a problem-dependent parameter) and time t: G(t) = G0·exp(−β·t/N). Based on the law of motion, the acceleration of the ith agent in the dth dimension at time t is given by aid(t) = Fid(t)/Mii(t). The next velocity of each agent i is considered to be a fraction of its current velocity added to its acceleration, i.e., vid(t+1) = randi·vid(t) + aid(t). The inertial mass values are calculated by mi(t) = [fiti(t) − worst(t)]/[best(t) − worst(t)] and Mi(t) = mi(t)/Σj mj(t), where fiti(t) = the fitness value of agent i at time t, and worst(t) and best(t) = the worst and the best fitness, respectively, among all agents.
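The mass normalization and the decaying gravitational constant can be sketched as below; this follows the standard GSA formulation for minimization (best = smallest fitness), and the function names and the equal-mass fallback for a degenerate population are illustrative assumptions.

```python
import math

def gsa_masses(fitness):
    """Inertial masses from fitness values (minimization):
    m_i = (fit_i - worst) / (best - worst), then M_i = m_i / sum(m)."""
    best, worst = min(fitness), max(fitness)
    if best == worst:  # degenerate case: all agents equal -> uniform masses
        return [1.0 / len(fitness)] * len(fitness)
    m = [(f - worst) / (best - worst) for f in fitness]
    total = sum(m)
    return [mi / total for mi in m]

def g_constant(t, N, g0=1.0, beta=20.0):
    """Exponentially decaying gravitational constant G(t) = G0*exp(-beta*t/N);
    g0 = 1.0 and beta = 20.0 match the parameter values used in this paper."""
    return g0 * math.exp(-beta * t / N)
```

The normalized masses sum to one, so the best agent always carries the largest share of the total mass and the worst agent carries none.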

Modified Velocity Updating Rule and Elite Strategy

The basic GSA may spend a significant amount of time converging to the global optimum due to the presence of heavy masses at the end of every run. Therefore, we propose an improved GSA (IGSA) that employs the following two strategies to overcome this drawback. The first strategy borrows the idea of memory and social information from PSO and defines a new velocity updating rule for the agents, written as Eq (32) [31], where rand, r1 and r2 = uniformly generated random numbers in [0, 1]; c1 and c2 = weighting factors; and gbest = the current best solution. The second strategy adds an elite strategy to the GSA to accelerate its convergence: a certain number of elite agents in the current generation are directly preserved and replace an equal number of the worst agents in the newly generated offspring generation; the top 5% of agents are preserved in each generation. Pseudo-code of the IGSA is shown in Fig 2.
Fig 2

Pseudo-code of the IGSA.
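One plausible reading of the modified velocity rule of Eq (32), given the symbols defined in the text (random weights on the current velocity and acceleration, plus a PSO-style social pull toward gbest), can be sketched as follows; the exact form is reconstructed, so treat this as an assumption-labeled illustration, not the paper's definitive rule.

```python
import random

def igsa_velocity(v, a, x, gbest, c1, c2):
    """Sketch of the IGSA velocity update (Eq 32, reconstructed):
    v_d <- rand*v_d + c1*r1*a_d + c2*r2*(gbest_d - x_d),
    where the last term adds PSO-like memory/social information to GSA."""
    return [random.random() * v[d]
            + c1 * random.random() * a[d]
            + c2 * random.random() * (gbest[d] - x[d])
            for d in range(len(v))]
```

When an agent sits at gbest with zero velocity and acceleration, every term vanishes, so the rule leaves converged agents at rest.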

Performance Test Using Benchmark Functions

Benchmark functions are commonly recognized as an important tool for validating the performance of optimization algorithms [24, 25, 32]. Many kinds of benchmark functions have been reported in the literature [33, 34]. However, only a comprehensive selection of benchmark functions with various characteristics can truly test new algorithms in an unbiased way. For this reason, a rich test suite of 9 standard minimization benchmark functions with diverse properties in terms of separability and modality was used to evaluate the EMPSO and IGSA. Table 1 gives a detailed description of these functions, where D is the dimension of the function and fmin is the optimum value of the function. Functions f1–f7 are high-dimensional problems. The first four functions (f1 to f4) are unimodal and relatively easy to solve. Functions f5–f9 are multimodal, so an algorithm can easily suffer from premature convergence. Functions f5–f7, in which the number of local minima increases exponentially with the problem dimension, appear to be the most difficult class of optimization problems. Functions f8 and f9 are two low-dimensional functions that have only a few local minima. We applied the EMPSO and IGSA to the above 9 benchmark functions and compared the experimental results with those obtained by the standard PSO and the basic GSA. The results are averaged over 50 independent runs, and the best-so-far solution, mean and standard deviation of the best solution in each run are listed in Table 2. In all cases, the population size is set to 50 (pop = 50) and the maximum number of iterations is 1000 (N = 1000). For the PSO and EMPSO, the acceleration constants are c1 = 1.0 and c2 = 0.5. The other parameters of the EMPSO use the following settings: constriction factor χ = 0.9, inertia weight w = 1.0, mutation probability pem = 0.2, and size of the elitist-mutated particles NM = 10. For the GSA and IGSA, G0 = 1.0 and β = 20.0, whereas c1 = 0.5 and c2 = 1.5 for the IGSA.
Table 1

Benchmark functions.

No. | Formula | D | Range | fmin | Separability | Modality
f1 | f1(X) = Σ(i=1..D) xi^2 | 30 | [-100, 100]^D | 0 | Separable | Unimodal
f2 | f2(X) = Σ(i=1..D) |xi| + Π(i=1..D) |xi| | 30 | [-10, 10]^D | 0 | Non-separable | Unimodal
f3 | f3(X) = Σ(i=1..D) (Σ(j=1..i) xj)^2 | 30 | [-100, 100]^D | 0 | Non-separable | Unimodal
f4 | f4(X) = Σ(i=1..D-1) [100(x(i+1) - xi^2)^2 + (xi - 1)^2] | 30 | [-30, 30]^D | 0 | Non-separable | Unimodal
f5 | f5(X) = -Σ(i=1..D) xi·sin(√|xi|) | 30 | [-500, 500]^D | -418.9829×D | Separable | Multimodal
f6 | f6(X) = -20·exp(-0.2·√((1/D)Σ(i=1..D) xi^2)) - exp((1/D)Σ(i=1..D) cos(2πxi)) + 20 + e | 30 | [-32, 32]^D | 0 | Non-separable | Multimodal
f7 | f7(X) = (1/4000)Σ(i=1..D) xi^2 - Π(i=1..D) cos(xi/√i) + 1 | 30 | [-600, 600]^D | 0 | Non-separable | Multimodal
f8 | f8(X) = [1/500 + Σ(j=1..25) 1/(j + Σ(i=1..2) (xi - aij)^6)]^(-1) | 2 | [-65.536, 65.536]^D | 1 | Non-separable | Multimodal
f9 | f9(X) = -Σ(i=1..10) [(X - ai)(X - ai)^T + ci]^(-1) | 4 | [0, 10]^D | -10.5 | Non-separable | Multimodal

The values of a in f8 are given in S1 Table.

The vectors a and c in f9 are given in S2 Table.
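Two of the benchmark functions in Table 1 can be written out directly as a concrete illustration: f1 (the sphere function, unimodal and separable) and f6 (the Ackley function, multimodal and non-separable); both have fmin = 0 at the origin. The function names are the standard ones from the benchmark literature.

```python
import math

def sphere(x):
    """f1: sum of squares; unimodal, separable, fmin = 0 at the origin."""
    return sum(xi * xi for xi in x)

def ackley(x):
    """f6: Ackley function; multimodal, non-separable, fmin = 0 at the origin."""
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```

Evaluating either function at the zero vector returns the listed optimum, which is a convenient correctness check before plugging them into any optimizer.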

Table 2

Minimization results of benchmark functions in Table 1.

Function | Statistic | PSO | EMPSO | GSA | IGSA
f1 | Best | 3.03E+03 | 9.46E-16 | 8.94E-18 | 1.43E-18
f1 | Mean | 7.45E+03 | 1.29E-05 | 1.98E-17 | 3.48E-18
f1 | Std. | 1.99E+03 | 8.8E-05 | 5.74E-18 | 9.70E-19
f2 | Best | 2.93E+01 | 1.73E-02 | 1.32E-08 | 5.43E-09
f2 | Mean | 8.40E+01 | 2.80E-01 | 2.30E-08 | 7.72E-09
f2 | Std. | 6.46E+01 | 2.12E-01 | 3.60E-09 | 1.33E-09
f3 | Best | 1.95E+04 | 7.97 | 5.77E+04 | 1.48E+02
f3 | Mean | 3.06E+04 | 7.13E+01 | 1.02E+05 | 2.54E+03
f3 | Std. | 5.98E+03 | 6.01E+01 | 2.95E+04 | 1.66E+03
f4 | Best | 1.81E+06 | 1.78E+01 | 2.57E+01 | 1.45E+01
f4 | Mean | 6.85E+06 | 6.76E+01 | 2.69E+01 | 4.96E+01
f4 | Std. | 3.17E+06 | 6.68E+01 | 5.29 | 4.21E+01
f5 | Best | -9091.9 | -11437.2 | -4249.3 | -9299.7
f5 | Mean | -7273.4 | -10205.5 | -2907.9 | -7604.1
f5 | Std. | 8.23E+02 | 4.51E+02 | 4.67E+02 | 6.26E+02
f6 | Best | 1.26E+01 | 9.31E-01 | 2.43E-09 | 1.07E-09
f6 | Mean | 1.45E+01 | 2.00 | 3.35E-09 | 8.45E-01
f6 | Std. | 1.05 | 4.79E-01 | 4.55E-10 | 1.27
f7 | Best | 3.43E+01 | 3.83E-14 | 1.25 | 4.72E-02
f7 | Mean | 6.86E+01 | 3.59E-02 | 4.10 | 1.61
f7 | Std. | 1.91E+01 | 4.38E-02 | 1.62 | 2.08
f8 | Best | 0.9980 | 0.9980 | 0.9980 | 0.9980
f8 | Mean | 0.9981 | 0.9980 | 3.4961 | 1.2553
f8 | Std. | 1.74E-04 | 3.33E-16 | 2.25 | 8.58E-01
f9 | Best | -10.0931 | -10.5364 | -10.5364 | -10.5364
f9 | Mean | -6.6513 | -5.5079 | -9.3011 | -10.5364
f9 | Std. | 1.53 | 3.58 | 2.83 | 8.88E-15

Best: best-so-far solution over 50 runs.

Mean: mean of the best solutions in 50 runs.

Std.: standard deviation of the best solutions in 50 runs.

In Table 2, the best results are indicated in bold font. Generally speaking, the EMPSO provides much better results than the PSO for all 9 benchmark functions according to the three statistics (Best, Mean and Std.). The IGSA finds better solutions than the GSA for functions f1–f7, and it strikingly improves the robustness of the GSA (smaller values of Std.) on all functions except f6. Considering the values of Best and Std. together, the EMPSO performs best on 5 functions (f1, f2, f4, f6, f9), and the IGSA is the best on the other 4 functions (f3, f5, f7, f8). The results in Table 2 also show that the EMPSO and IGSA have better global optimization abilities than the PSO and GSA on most of the 9 benchmark functions and obtain similar solutions to each other.

Numerical Cases

In the parameter optimization problems of Muskingum models, minimization of the sum of the squared deviations (SSQ) or the sum of the absolute deviations (SAD) between the observed and estimated outflows is usually adopted as the objective function f, where O = observed outflow at time t; Ô = estimated outflow at time t by the Muskingum routing equation, i.e., Eq (4) for the LMM, Eq (15) for the NLMM and Eq (16) for the NLMM-L; and P = the parameter vector to be calibrated, with P = (C0, C1) for the LMM, P = (K, x, m) for the NLMM and P = (K, x, m, α) for the NLMM-L. To evaluate the practicability of the EMPSO algorithm and the IGSA in engineering applications, we applied these two improved heuristic algorithms to seek the optimal parameter vector P for the three different Muskingum models and compared the results with those obtained by the RGA, the standard PSO and the basic GSA. The optimal parameter vectors obtained in this study are also compared with the best existing solutions reported in previous literature. For the above five algorithms, the iterations proceed until the stopping criterion is satisfied, where n is the iteration number, N is the maximum number of iterations (set to 5000), fbest(n) is the best value of f in the nth iteration and δ is the convergence accuracy. For the five algorithms, the population size pop was set to 50, and they were implemented on a PC with a 32-bit Windows 7 operating system, 4 GB RAM and a 2.93 GHz Core(TM) i3 processor. Each algorithm was performed over 50 runs on the three Muskingum models for the numerical examples. In the RGA, the tournament selection, simulated binary crossover (SBX) and polynomial mutation operators [35] are used; the crossover probability pc = 0.85 and the mutation probability pm = 0.05; the distribution index for SBX is 10 and that for the mutation operator is 100. Parameter settings of the PSO, EMPSO, GSA and IGSA are the same as in Sect. 3.
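The two objective functions can be written down directly from their definitions; only the function names below are illustrative.

```python
def ssq(observed, estimated):
    """Sum of squared deviations between observed and estimated outflows."""
    return sum((o - e) ** 2 for o, e in zip(observed, estimated))

def sad(observed, estimated):
    """Sum of absolute deviations between observed and estimated outflows."""
    return sum(abs(o - e) for o, e in zip(observed, estimated))
```

In the calibration loop, a candidate parameter vector P is scored by routing the observed inflow through the chosen Muskingum model and passing the resulting hydrograph, together with the observed outflow, to `ssq` or `sad`.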

Case 1: Application to LMM

Flood Data from the South Canal of China in August 1961

A flood that occurred in the south canal of China between the Linqing River and the Chenggou Bay in August 1961, in which the inflow and outflow hydrographs exhibit obvious linear characteristics [36], is employed as the numerical case for the LMM, with Δt = 12 h and T = 28. The search ranges for the two parameters of the LMM are set to C0, C1 ∈ (0.00, 0.50). One best existing solution from the literature [11], C0 = 0.4736, C1 = 0.0301 and SAD = 141.225, is used as a reference.

Results and Analysis

For comparison, the statistical results (Best, Worst, Mean and Std.) of the SAD, the model parameters, the iteration number and the CPU time for convergence (at convergence accuracy δ) for the five algorithms (RGA, PSO, GSA, EMPSO, IGSA) are listed in Table 3: (1) with the exception of the RGA, the other four algorithms find the same optimal solution (SAD = 141.194; C0 = 0.4729 and C1 = 0.0317) over 50 runs, which is better than the reference; however, only the GSA, EMPSO and IGSA can steadily converge to this optimal solution for the LMM in every run (the values of Std. for the SADs and the parameters are 0.0000E+00), whereas the optimal solutions obtained by the PSO fluctuate slightly between runs; (2) the GSA and IGSA require more time to converge than the other three algorithms; (3) compared with the GSA and IGSA, the EMPSO has a faster convergence speed (it requires only 120 iterations and an average of 0.0067 s of CPU time for convergence in every run) and exhibits better stability (the smallest values of Std.). The estimated outflow hydrograph of the LMM using the best parameter vector obtained in this study is shown in Fig 3. Fig 4 shows the comparison of the average convergence rate among the five algorithms on the LMM.
Table 3

Statistics of different algorithms performed on the LMM over 50 runs for the 1961 flood from the south canal of China.

Convergence accuracy: δ = 0.001.

Algorithm | Statistic | SAD | C0 | C1 | Iterations | CPU (s)
RGA | Best | 141.196 | 0.4729 | 0.0316 | 4367 | 0.3852
RGA | Worst | 141.301 | 0.4721 | 0.0327 | 4491 | 0.3977
RGA | Mean | 141.220 | 0.4731 | 0.0312 | 3260 | 0.2894
RGA | Std. | 2.1616E-02 | 4.8277E-04 | 1.0421E-03 | 992.01 | 8.6446E-02
PSO | Best | 141.194 | 0.4729 | 0.0317 | 2784 | 0.0668
PSO | Worst | 141.208 | 0.4730 | 0.0315 | 1111 | 0.0277
PSO | Mean | 141.200 | 0.4729 | 0.0316 | 2431 | 0.0586
PSO | Std. | 3.1968E-03 | 7.0425E-05 | 1.4650E-04 | 1410.09 | 3.3789E-02
GSA | Best | 141.194 | 0.4729 | 0.0317 | 2147 | 0.5549
GSA | Worst | 141.194 | 0.4729 | 0.0317 | 2813 | 0.7099
GSA | Mean | 141.194 | 0.4729 | 0.0317 | 2519 | 0.6291
GSA | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 153.99 | 3.1552E-02
EMPSO | Best | 141.194 | 0.4729 | 0.0317 | 50 | 0.0037
EMPSO | Worst | 141.194 | 0.4729 | 0.0317 | 165 | 0.0088
EMPSO | Mean | 141.194 | 0.4729 | 0.0317 | 120 | 0.0067
EMPSO | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 23.54 | 1.1736E-03
IGSA | Best | 141.194 | 0.4729 | 0.0317 | 1175 | 0.3817
IGSA | Worst | 141.194 | 0.4729 | 0.0317 | 2053 | 0.6155
IGSA | Mean | 141.194 | 0.4729 | 0.0317 | 1754 | 0.5420
IGSA | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 155.33 | 4.2196E-02
Fig 3

Fitting curve of outflow hydrograph of the LMM experiment.

Fig 4

Average best curves for the LMM. All results represent the means of the 50 runs.

Case 2: Application to NLMM

Data Set of Wilson (1974)

The data set from ref. [37], which has been demonstrated to have a nonlinear relationship between the storage and the weighted flow [7], is taken as the numerical case for the NLMM. It is a single-peak hydrograph that has been previously investigated by many researchers [2, 4, 7, 8, 10, 16–18, 21, 38], with Δt = 6 h and T = 21. The NLMM has three parameters, and their search ranges are set to K ∈ [0.01, 1.00], x ∈ [0.00, 0.30] and m ∈ [1.00, 3.00]. One best existing solution for the NLMM, reported by Xu and Qiu [17] using the differential evolution (DE) algorithm, is K = 0.5175, x = 0.2869, m = 1.8680 and SSQ = 36.77. The statistical results, which resemble those in Table 3, are listed in Table 4: (1) only the EMPSO and IGSA can steadily converge to the same optimal solution (SSQ = 36.7679; K = 0.5175, x = 0.2869 and m = 1.8680) for the NLMM in every run, whereas the optimal solutions obtained by the RGA, PSO and GSA fluctuate between runs; (2) for the GSA, EMPSO and IGSA, the average numbers of iterations required for convergence are approximately 2062, 191 and 780, respectively, and the corresponding CPU times are 0.9992 s, 0.0883 s and 0.4864 s; these data indicate that the EMPSO has a faster convergence speed and that the IGSA clearly improves the convergence performance of the GSA; (3) the EMPSO has the best performance in optimizing the NLMM (the lowest CPU time for every run and a steady iteration count, with the lowest Std. = 51.22). The estimated outflow hydrograph of the NLMM using the best parameter vector obtained in this study is shown in Fig 5. Fig 6 shows the average convergence rate for the five different algorithms on the NLMM.
Table 4

Statistics of different algorithms performed on the NLMM over 50 runs for the data set of Wilson (1974).

Convergence accuracy: δ = 0.0001.

Algorithm | Statistic | SSQ | K | x | m | Iterations | CPU (s)
RGA | Best | 36.7683 | 0.5184 | 0.2868 | 1.8677 | 3241 | 0.9613
RGA | Worst | 37.8397 | 0.6369 | 0.2881 | 1.8223 | 4548 | 1.3407
RGA | Mean | 36.9300 | 0.5266 | 0.2873 | 1.8646 | 1973 | 0.5888
RGA | Std. | 2.3267E-01 | 3.3579E-02 | 1.5334E-03 | 1.3702E-02 | 1844.47 | 5.4068E-01
PSO | Best | 36.7691 | 0.5168 | 0.2871 | 1.8684 | 233 | 0.0536
PSO | Worst | 36.8376 | 0.5438 | 0.2875 | 1.8569 | 4390 | 0.9927
PSO | Mean | 36.7905 | 0.5162 | 0.2869 | 1.8687 | 2203 | 0.5007
PSO | Std. | 1.5596E-02 | 9.3849E-03 | 6.8361E-04 | 4.0096E-03 | 1453.70 | 3.2952E-01
GSA | Best | 36.7694 | 0.5216 | 0.2870 | 1.8663 | 3022 | 1.5431
GSA | Worst | 38.0759 | 0.5514 | 0.2819 | 1.8555 | 40 | 0.0256
GSA | Mean | 37.0422 | 0.5492 | 0.2871 | 1.8553 | 2062 | 0.9992
GSA | Std. | 3.2174E-01 | 3.3700E-02 | 1.8361E-03 | 1.3696E-02 | 1669.06 | 7.9225E-01
EMPSO | Best | 36.7679 | 0.5175 | 0.2869 | 1.8681 | 104 | 0.0475
EMPSO | Worst | 36.7679 | 0.5175 | 0.2869 | 1.8681 | 321 | 0.1450
EMPSO | Mean | 36.7679 | 0.5175 | 0.2869 | 1.8681 | 191 | 0.0883
EMPSO | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 51.22 | 2.3747E-02
IGSA | Best | 36.7679 | 0.5175 | 0.2869 | 1.8681 | 366 | 0.2332
IGSA | Worst | 36.7679 | 0.5175 | 0.2869 | 1.8681 | 1051 | 0.6483
IGSA | Mean | 36.7679 | 0.5175 | 0.2869 | 1.8681 | 780 | 0.4864
IGSA | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 119.17 | 7.1140E-02
Fig 5

Fitting curve of outflow hydrograph of the NLMM experiment.

Fig 6

Average best curves for the NLMM.

Case 3: Application to NLMM-L

River Wyre Flood in October 1982

The River Wyre flood event in October 1982 [21], which exhibits a considerable increase in flood volume (lateral flow) between the inflow section and the outflow section (approximately 25 km apart), is selected as the numerical case for the NLMM-L. The flood data have a multi-peaked inflow, with Δt = 1 h and T = 31, and a major lateral flow contribution (which implies a large value of α). The search ranges for the four parameters of the NLMM-L are set to K ∈ [0.01, 6.00], x ∈ [0.00, 0.30], m ∈ [0.50, 3.00] and α ∈ [0.00, 3.00]. The best existing solution for the NLMM-L, obtained by the cuckoo search (CS) algorithm according to the literature [18], is K = 5.6765, x = 0.2271, m = 0.9800, α = 2.5298 and SSQ = 53.6574. For this numerical case, statistical results resembling those in Table 3 are presented in Table 5. As shown in Table 5: (1) only the EMPSO and IGSA can steadily find the global optimal solution in every run (the values of Std. for the SSQs and the parameters are 0.0000E+00), which is the same as the reference; (2) the EMPSO again has the best performance in optimizing the NLMM-L (the lowest average CPU time for every run and a fairly steady iteration count, with the lowest Std. = 39.82). The estimated outflow hydrograph of the NLMM-L using the best solution obtained in this study is shown in Fig 7. Fig 8 shows the average convergence rate among the five different algorithms on the NLMM-L.
Table 5

Statistics of different algorithms performed on the NLMM-L over 50 runs for the River Wyre flood in October 1982.

Convergence accuracy: δ = 0.0001.

Algorithm | Statistic | SSQ | K | x | m | α | Iterations | CPU (s)
RGA | Best | 53.8173 | 5.6300 | 0.2299 | 0.9821 | 2.5373 | 632 | 0.2863
RGA | Worst | 66.9455 | 4.1086 | 0.2332 | 1.0470 | 2.5230 | 5000 | 2.0693
RGA | Mean | 58.1929 | 4.8087 | 0.2310 | 1.0150 | 2.5253 | 4374 | 1.8169
RGA | Std. | 3.5486E+00 | 3.6741E-01 | 1.9815E-03 | 1.6023E-02 | 2.6082E-03 | 1285.31 | 5.2463E-01
PSO | Best | 53.7213 | 5.7644 | 0.2258 | 0.9770 | 2.5319 | 3195 | 1.0745
PSO | Worst | 56.0369 | 5.2233 | 0.2172 | 0.9960 | 2.5309 | 4903 | 1.6501
PSO | Mean | 54.3777 | 5.6507 | 0.2277 | 0.9810 | 2.5307 | 2907 | 0.9797
PSO | Std. | 4.9223E-01 | 2.1336E-01 | 4.6838E-03 | 7.7166E-03 | 4.7672E-03 | 1411.27 | 4.7516E-01
GSA | Best | 67.9146 | 4.0779 | 0.2294 | 1.0474 | 2.5204 | 145 | 0.1187
GSA | Worst | 156.8379 | 2.2746 | 0.2342 | 1.1702 | 2.5148 | 4203 | 2.6463
GSA | Mean | 101.8833 | 3.0891 | 0.2345 | 1.1070 | 2.5183 | 4178 | 2.6176
GSA | Std. | 1.4658E+01 | 3.1027E-01 | 7.9331E-04 | 2.0579E-02 | 1.2428E-03 | 601.50 | 3.6447E-01
EMPSO | Best | 53.6574 | 5.6765 | 0.2271 | 0.9800 | 2.5298 | 131 | 0.0886
EMPSO | Worst | 53.6574 | 5.6765 | 0.2271 | 0.9800 | 2.5298 | 298 | 0.2010
EMPSO | Mean | 53.6574 | 5.6765 | 0.2271 | 0.9800 | 2.5298 | 196 | 0.1330
EMPSO | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 39.82 | 2.6695E-02
IGSA | Best | 53.6574 | 5.6765 | 0.2271 | 0.9800 | 2.5298 | 277 | 0.2386
IGSA | Worst | 53.6574 | 5.6765 | 0.2271 | 0.9800 | 2.5298 | 780 | 0.6520
IGSA | Mean | 53.6574 | 5.6765 | 0.2271 | 0.9800 | 2.5298 | 618 | 0.5201
IGSA | Std. | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 97.18 | 7.9210E-02
Fig 7

Fitting curve of outflow hydrograph of the NLMM-L experiment.

Fig 8

Average best curves for the NLMM-L.

Conclusions

In this study, the EMPSO algorithm and the IGSA were applied to the parameter estimation problems of three forms of linear or nonlinear Muskingum models (LMM, NLMM and NLMM-L). The LMM has two parameters and the NLMM has three, whereas the NLMM-L considers the lateral flow along the river reach and has a more complex structure with four parameters. The EMPSO and IGSA were first tested on a rich set of 9 standard minimization benchmark functions. Then, three typical flood events used in previous literature were selected as numerical cases (Case 1–3) to evaluate the practicability of the EMPSO and IGSA in applications. The results obtained by the EMPSO and IGSA were compared with those obtained by the RGA, PSO and GSA, as well as the best reported solutions in the literature. The conclusions are summarized as follows: (1) compared with the RGA, the standard PSO and the basic GSA, only the EMPSO and IGSA steadily converge to the same optimal solution for the three Muskingum models in every run; (2) the GSA requires more iterations and CPU time than the IGSA to find the same optimal solution for the LMM, and the results obtained by the GSA for the NLMM and the NLMM-L are the worst among the five algorithms (the largest values of SSQ and Std. in Tables 4 and 5), which indicates that the proposed IGSA improves the performance (search efficiency, convergence speed and stability) of the GSA in optimizing the NLMM and the NLMM-L; (3) in terms of the specified convergence accuracy and the Std. values, the EMPSO has the fastest convergence rate and the best robustness among the five algorithms for all three Muskingum models.

Supporting Information

S1 Table. a in function f8. (DOCX)

S2 Table. Vectors a and c in function f9. (DOCX)

S3 Table. Original data of each case and optimal estimated outflow hydrographs by the EMPSO and IGSA (m3/s). (DOCX)
