Chnoor M. Rahman and Tarik A. Rashid.
Abstract
One of the most recently developed heuristic optimization algorithms is the dragonfly algorithm (DA) by Mirjalili. The DA has shown its ability to optimize different real-world problems, and it has three variants. In this work, an overview of the algorithm and its variants is presented, and the hybridized versions of the algorithm are discussed. Furthermore, the results of applications that utilized the DA in applied science are presented for the following areas: machine learning, image processing, wireless, and networking. The algorithm is then compared with some other metaheuristic algorithms and tested on the CEC-C06 2019 benchmark functions. The results show that the algorithm has great exploration ability and that its convergence rate is better than that of other algorithms in the literature, such as PSO and GA. In general, this survey discusses the strong and weak points of the algorithm and recommends future work that may help improve its weak points. This study is conducted with the hope of offering beneficial information about the dragonfly algorithm to researchers who want to study it.
Year: 2019 PMID: 31885533 PMCID: PMC6925939 DOI: 10.1155/2019/9293617
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Dynamic dragonfly swarming (a) versus static swarming (b) [9].
Algorithm 1. Pseudocode for DA [9].
Algorithm 2. Pseudocode for BDA [9].
Algorithm 3. Pseudocode for MODA [9].
Algorithm 4. Pseudocode for MHDA [26].
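The pseudocode listings above are not reproduced in this extraction, but the DA step they describe combines five swarming behaviours: separation (S), alignment (A), cohesion (C), attraction to food (F), and distraction from an enemy (E). A minimal NumPy sketch of one update step is given below; the fixed weights `s, a, c, f, e, w` are illustrative placeholders, not the adaptive schedules used in the original algorithm.

```python
import numpy as np

def da_step(X, dX, food, enemy, s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """One dragonfly-algorithm step (sketch).

    X, dX : (n, d) arrays of positions and step vectors.
    food, enemy : (d,) positions of the best and worst solutions found.
    Returns the updated (X, dX).
    """
    n = X.shape[0]
    # Separation: move away from the summed displacement to all neighbours.
    S = -np.sum(X[:, None, :] - X[None, :, :], axis=1)
    # Alignment: match the mean step vector of the swarm.
    A = np.tile(dX.mean(axis=0), (n, 1))
    # Cohesion: move toward the swarm centre.
    C = X.mean(axis=0) - X
    # Food attraction / enemy distraction (as defined in Mirjalili's DA).
    F = food - X
    E = enemy + X
    dX = s * S + a * A + c * C + f * F + e * E + w * dX
    return X + dX, dX
```

In the full algorithm the weights are decreased or increased over iterations to shift the swarm from exploration (static swarming) to exploitation (dynamic swarming), which is the distinction Figure 1 illustrates.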
The optimal parameters for the SVR model found by using DA [30].
| SVR parameter | IEEE 30-bus | Algerian 59-bus |
|---|---|---|
| — | 971.9378 | 985.561 |
| — | 0.1 | 0.1 |
| — | 0.0001 | 0.0001 |
Parameter settings for the binary hybrid HBDESPO [32].
| Parameter | Value |
|---|---|
| No. of iterations | 70 |
| No. of search agents | 5 |
| Dimension | No. of features in the data |
| Search domain | [0, 1] |
| No. of runs | 10 |
| — | 0.9 |
| — | 0.4 |
| Δx_max | 6 |
| — | 2 |
| — | 2 |
| — | 6 |
| — | 0.01 |
| — | 0.99 |
Parameter settings for INMDA [33].
Parameter settings for CDA; D stands for dimension [35].
| Parameter | Value |
|---|---|
| — | 1.5 |
| — | 31 |
| — | 50 |
| Lower bound | 1 |
| Upper bound | 31 |
| Maximum iteration | 50 |
Parameter settings for PSO, ABC, CSO, GWO, CSA, and SCA optimization algorithms [35].
| Algorithm | Parameter | Value |
|---|---|---|
| PSO | Inertia weight | 1 |
| | Inertia weight damping ratio | 0.9 |
| | Personal learning coefficient | 1.5 |
| | Global learning coefficient | 2.0 |
| ABC | Colony size | 10 |
| | No. of food sources | 5 |
| | Limit of trials | 5 |
| CSO | No. of chickens updated | 10 |
| | Roosters as a share of the population | 0.15 |
| | Hens as a share of the population | 0.7 |
| | Mother hens as a share of the population | 0.05 |
| GWO | a | 2 |
| SCA | b | 2 |
| CSA | Awareness probability | 0.1 |
| | Flight length | 0.01 |
The purposes of using the DA in various applications and its results.
| Reference | Purpose | Result |
|---|---|---|
| [—] | BDA helped search simultaneously for the optimal parameter set (kernel parameter and penalty factor) for KELM and for the optimal feature subset among the candidate features | BDA showed its superiority as a search technique for finding the optimal parameter set and the optimal feature subset |
| [—] | Multilevel segmentation of colour fundus images | Using the DA as the optimization algorithm produced better results for segmenting colour images |
| [—] | In a watermarking technique for medical images, the DA was utilized to select the effective pixels | The correlation coefficient values using the DA were greater than those of other techniques such as PSO, GA, and random selection |
| [—] | Exploring the pixels of images and discovering which pixels contain significant information about the object (the DA was used as a detection model) | The DA worked as an efficient and fast object extractor from images |
| [—] | The DA was used as a parameter optimizer for SVM; in addition, the effect of the number of solutions and generations on the accuracy of the result and on the computation time was investigated | The classification error rate of the proposed work was lower than that of PSO + SVM and GA + SVM because the DA parameters could be altered iteratively; moreover, increasing either the number of solutions or the number of generations decreased the misclassification rate but raised the computation time |
| [—] | A new updating mechanism and elitism were added to the binary dragonfly algorithm; the improved technique was then used to classify different signal types of infant cry, overcome the dimensionality problem, and select the most salient features | The improved technique reduced the error rate compared with the original binary dragonfly algorithm |
| [—] | A DA-based artificial neural network was utilized for predicting the primary fuel demand in India | The proposed DA-based model provided more accurate results than the existing regression models |
| [—] | Binary-BDA, multi-BDA, and ensemble-learning-based BDA were used for wavelength selection | Binary-BDA caused instability, whereas stability was boosted by multi-BDA and the ensemble-learning-based BDA; in addition, the computational complexity of the ensemble-learning-based BDA was lower than that of multi-BDA |
| [—] | Instead of gradient-based techniques, the DA was used for designing IIR filters | The DA avoided trapping in local optima, evaluated coefficients close to the actual values, and found the minimum mean-square value; its superiority over PSO, CSO, and BA was shown for this problem |
| [—] | A dragonfly-based clustering algorithm was used to address the scalability of the internet of vehicles | Compared with comprehensive-learning PSO and ant colony optimization, the proposed technique performed better at high density and comparably at medium density; at low density, however, it performed poorly while comprehensive-learning PSO performed well |
| [—] | The DA was utilized to predict the locations of randomly deployed nodes in a designated area and for localization under different noise percentages of the distance measurement (Pn) | For range-based localization with varying Pn, the DA produced fewer errors than PSO; increasing Pn increased the distances between the real and approximated nodes for both DA and PSO |
| [—] | The DA was used to extend the lifetime of an RFID network | Cluster breakage was reduced by choosing cluster heads with similar mobility but high remaining energy; this reduced energy consumption and thus improved efficiency compared with existing techniques |
| [—] | The DA with two selection probabilities was used as a new load-balancing technique (FDLA) to keep the processing of multiple tasks stable in the cloud environment | The proposed technique provided the minimum load by allocating fewer tasks |
| [—] | The DA was utilized to find the optimal sizing and location of distributed generation in radial distribution systems to reduce the power loss in the network | Compared with the DA and WOA, MFO performed better and converged earlier |
| [—] | In the court-case assignment problem, where the capability of the judicial system depends heavily on time and operational efficiency, the DA was used to find the optimal assignment | The DA showed superior results compared with the FA |
| [—] | The DA was used to find the optimum siting of capacitors in different radial distribution systems (RDSs), aiming to minimize power loss and total cost while enhancing the voltage profile | DA-based optimization provided results comparable to GWO- and MFO-based methods in terms of number of iterations and convergence time, and superior results compared with the PSO-based technique |
Figure 2. (a) DA-SVM's testing error rate with different numbers of dragonflies. (b) DA-SVM's computational time with different numbers of dragonflies.
Crying samples in the two utilized databases [47].
| Database | Samples (type of crying signal) |
|---|---|
| First database | 507 normal crying samples (N) |
| | 340 asphyxia crying samples (A) |
| | 879 deaf crying samples (D) |
| | 350 hungry crying samples (H) |
| | 192 pain crying samples (P) |
| Second database | 513 jaundice crying samples (J) |
| | 531 premature crying samples (Prem) |
| | 45 normal crying samples (N) |
Parameter values of the BDA [49].
| Parameters | Values |
|---|---|
| Maximum no. of iterations | 50 |
| No. of dragonflies | 10 |
| No. of wavelengths | 401 |
| Separation, alignment, cohesion, food, and enemy factor | Adaptive tuning |
| No. of principal components | 2 |
| No. of folds of cross validation | 5 |
Figure 3. Comparison of RMSE between DA and PSO with varying Pn.
Figure 4. Comparison of RMSE between DA and PSO with different numbers of unknown nodes.
Values of parameters for FDLA [54].
| Parameters | Values |
|---|---|
| Separation weight | 0.5 |
| Alignment weight | 0.5 |
| Cohesion weight | 0.5 |
| Food factor | 0.5 |
| Enemy factor | 0.5 |
| Population size | 10 |
Case studies [57].
| Case # | The operation mode of DG | System |
|---|---|---|
| Case 1 | DG operating at a unity power factor | IEEE 69-bus radial distribution system |
| Case 2 | DG operating at an optimal power factor | |
| Case 3 | DG operating at a unity power factor | IEEE 119-bus radial distribution system |
| Case 4 | DG operating at an optimal power factor |
Metrics used to evaluate the performance of algorithms [60].
| Metric |
|---|
| Relative error (RE) |
| Mean absolute error (MAE) |
| Root mean square error (RMSE) |
| Standard deviation (Std) |
| Efficiency |
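The exact formulas used in [60] are not reproduced in this extraction, so the sketch below implements the standard textbook definitions of the first four metrics in the table (relative error, MAE, RMSE, and standard deviation):

```python
import numpy as np

def relative_error(y_true, y_pred):
    """Element-wise relative error |pred - true| / |true| (true must be nonzero)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return np.abs(y_pred - y_true) / np.abs(y_true)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_pred, float) - np.asarray(y_true, float))))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(y_pred, float) - np.asarray(y_true, float)) ** 2)))

def std(values):
    """Population standard deviation of a sample of results."""
    return float(np.std(np.asarray(values, float)))
```

In benchmark comparisons such as those below, `mae`/`rmse` are typically computed over the best fitness values of repeated independent runs, and `std` measures how consistent an algorithm is across those runs.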
Wilcoxon rank-sum test p-values over all runs for the classical benchmark functions.
| F | DA | PSO |
|---|---|---|
| F1 | N/A | 0.045155 |
| F2 | N/A | 0.121225 |
| F3 | N/A | 0.003611 |
| F4 | N/A | 0.307489 |
| F5 | N/A | 0.10411 |
| F6 | 0.344704 | N/A |
| F7 | 0.021134 | N/A |
| F8 | 0.000183 | N/A |
| F9 | 0.364166 | N/A |
| F10 | N/A | 0.472676 |
| F11 | 0.001008 | N/A |
| F12 | 0.140465 | N/A |
| F13 | N/A | 0.79126 |
| F14 | N/A | 0.909654 |
| F15 | 0.025748 | 0.241322 |
| F16 | 0.01133 | N/A |
| F17 | 0.088973 | N/A |
| F18 | 0.273036 | 0.791337 |
| F19 | N/A | 0.472676 |
| F20 | 0.938062 | 0.938062 |
| F21 | N/A | N/A |
| F22 | 0.256157 | 0.256157 |
| F23 | 0.59754 | 0.59754 |
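P-values like those in the table above can be computed from the per-run results of any two algorithms with a Wilcoxon rank-sum test; a sketch using `scipy.stats.ranksums` follows (the per-run fitness values here are made up for illustration, not taken from the paper):

```python
from scipy.stats import ranksums

# Hypothetical best-fitness values from 5 independent runs per algorithm.
da_runs = [0.012, 0.010, 0.015, 0.011, 0.013]
pso_runs = [0.031, 0.027, 0.035, 0.029, 0.033]

# Two-sided rank-sum test: small p suggests the two result
# distributions differ significantly.
stat, p = ranksums(da_runs, pso_runs)
```

By the usual convention in such tables, "N/A" marks the better-performing algorithm on that function, since it is not compared against itself.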
Comparison of results of the classical benchmark functions between DA, PSO, DE, and FA.
| F | Meas. | DA | PSO | DE | FA |
|---|---|---|---|---|---|
| F1 | Mean | 2.85 | 4.2 | — | 1.72 |
| | Std. | 7.16 | 1.31 | — | 9.43 |
| F2 | Mean | 1.49 | 0.003154 | — | 6.01 |
| | Std. | 3.76 | 0.009811 | — | 3.29 |
| F3 | Mean | 1.29 | 0.001891 | 27.882386 | — |
| | Std. | 2.1 | 0.003311 | 12.845742 | — |
| F4 | Mean | — | 0.001748 | 0.002883 | 5.913e−03 |
| | Std. | — | 0.002515 | 0.000577 | 0.029813 |
| F5 | Mean | 7.600558 | 63.45331 | 9.50058 | — |
| | Std. | 6.786473 | 80.12726 | 3.204155 | — |
| F6 | Mean | 4.17 | 4.36 | — | 1.9 |
| | Std. | 1.32 | 1.38 | — | 1.04 |
| F7 | Mean | 0.010293 | 0.005973 | 0.005256 | — |
| | Std. | 0.004691 | 0.003583 | 0.001649 | — |
| F8 | Mean | — | −7.1 | −4181.932984 | −3566.452419 |
| | Std. | — | 1.2 | 30.0487686 | 239.113661 |
| F9 | Mean | 16.01883 | 10.44724 | — | 7.462188 |
| | Std. | 9.479113 | 7.879807 | — | 4.41686 |
| F10 | Mean | 0.23103 | 0.280137 | — | 8.47 |
| | Std. | 0.487053 | 0.601817 | — | 4.64 |
| F11 | Mean | 0.193354 | 0.083463 | — | 0.053309 |
| | Std. | 0.073495 | 0.035067 | — | 0.053615 |
| F12 | Mean | 0.031101 | 8.57 | — | 1.92 |
| | Std. | 0.098349 | 2.71 | — | 1.05 |
| F13 | Mean | 0.002197 | 0.002197 | — | 8.21 |
| | Std. | 0.004633 | 0.004633 | — | 4.5 |
| F14 | Mean | 103.742 | 150 | 0.99800 | — |
| | Std. | 91.24364 | 135.4006 | 0 | — |
| F15 | Mean | 193.0171 | 188.1951 | 0.000698 | — |
| | Std. | 80.6332 | 157.2834 | 1.546 | — |
| F16 | Mean | 458.2962 | 263.0948 | — | — |
| | Std. | 165.3724 | 187.1352 | — | — |
| F17 | Mean | 596.6629 | 466.5429 | — | 3.0 |
| | Std. | 171.0631 | 180.9493 | — | 6.05 |
| F18 | Mean | 229.9515 | 136.1759 | 2.9999999 | — |
| | Std. | 184.6095 | 160.0187 | 1.27 | — |
| F19 | Mean | 679.588 | 741.6341 | — | −3.259273 |
| | Std. | 199.4014 | 206.7296 | — | 0.059789 |
| F20 | Mean | — | −3.27047 | −3.321797 | −9.316829 |
| | Std. | — | 0.059923 | 0.001054 | 2.21393 |
| F21 | Mean | −10.1532 | — | −9.867489 | −10.147907 |
| | Std. | — | 3.11458 | 0.722834 | 1.396876 |
| F22 | Mean | −10.4029 | — | −10.381587 | −9.398946 |
| | Std. | — | 3.038572 | 0.075194 | 1.99413 |
| F23 | Mean | −10.5364 | — | −10.530836 | −10.2809 |
| | Std. | — | 2.640148 | 0.02909 | 1.39948 |
For each test function, the result of the algorithm that showed superiority over the other algorithms is shown in bold.
CEC-C06 2019 benchmark functions [61].
| Function | Name | Dimension | Range | fmin |
|---|---|---|---|---|
| CEC01 | Storn's Chebyshev polynomial fitting problem | 9 | [−8192, 8192] | 1 |
| CEC02 | Inverse Hilbert matrix problem | 16 | [−16384, 16384] | 1 |
| CEC03 | Lennard–Jones minimum energy cluster | 18 | [−4, 4] | 1 |
| CEC04 | Rastrigin's function | 10 | [−100, 100] | 1 |
| CEC05 | Griewank's function | 10 | [−100, 100] | 1 |
| CEC06 | Weierstrass function | 10 | [−100, 100] | 1 |
| CEC07 | Modified Schwefel's function | 10 | [−100, 100] | 1 |
| CEC08 | Expanded Schaffer's F6 function | 10 | [−100, 100] | 1 |
| CEC09 | Happy Cat function | 10 | [−100, 100] | 1 |
| CEC10 | Ackley function | 10 | [−100, 100] | 1 |
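Two of the listed test functions, Rastrigin (the basis of CEC04) and Ackley (the basis of CEC10), are sketched below in their classical unshifted, unrotated forms; the official CEC-C06 2019 versions apply shift and rotation transformations on top of these base definitions, which are omitted here.

```python
import numpy as np

def rastrigin(x):
    """Classical Rastrigin function; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

def ackley(x):
    """Classical Ackley function; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    n = x.size
    return float(
        -20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
        - np.exp(np.sum(np.cos(2 * np.pi * x)) / n)
        + 20 + np.e
    )
```

Both are highly multimodal, which is why they are used to probe an optimizer's exploration ability: Rastrigin has a regular grid of local minima, while Ackley has a nearly flat outer region surrounding a deep central funnel.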
IEEE CEC-C06 2019 benchmark test results.
| CEC function | Meas. | DA | PSO |
|---|---|---|---|
| CEC01 | Mean | — | 1.47127 |
| | Std. | — | 1.32362 |
| CEC02 | Mean | — | 15183.91348 |
| | Std. | — | 3729.553229 |
| CEC03 | Mean | 12.70240422 | 12.70240422 |
| | Std. | 1.50 | 9.03 |
| CEC04 | Mean | 103.3295366 | — |
| | Std. | 20.00405422 | — |
| CEC05 | Mean | 1.177303105 | — |
| | Std. | 0.057569859 | — |
| CEC06 | Mean | — | 9.305312443 |
| | Std. | — | 1.69 |
| CEC07 | Mean | 898.5188217 | — |
| | Std. | 4.023921424 | 104.2035197 |
| CEC08 | Mean | 6.210996106 | — |
| | Std. | 0.001657324 | 0.786760649 |
| CEC09 | Mean | 2.601134198 | — |
| | Std. | 0.233292964 | — |
| CEC10 | Mean | 20.0506995 | 20.28063455 |
| | Std. | 0.070920925 | 0.128530895 |
For each test function, the result of the algorithm that showed superiority over the other algorithms is shown in bold.