Literature DB >> 34739972

COVID-19 X-ray image segmentation by modified whale optimization algorithm with population reduction.

Sanjoy Chakraborty1, Apu Kumar Saha2, Sukanta Nama3, Sudhan Debnath4.   

Abstract

Coronavirus disease 2019 (COVID-19) has caused a massive disaster in every field of human life, including health, education, economics, and tourism, over the last year and a half. Rapid interpretation of COVID-19 patients' X-ray images is critical for diagnosis and, consequently, treatment of the disease. The major goal of this research is to develop a computational tool that can quickly and accurately determine the severity of an illness from COVID-19 chest X-ray images and improve the degree of diagnosis using a modified whale optimization algorithm (WOA). To improve the WOA, a random initialization of the population is integrated during the global search phase. The parameters, coefficient vector (A) and constant value (b), are changed so that the algorithm can explore in the early stages while also exploiting the search space extensively in the latter stages. The efficiency of the proposed modified whale optimization algorithm with population reduction (mWOAPR) is assessed by using it to segment six benchmark images with a multilevel thresholding approach and a Kapur's entropy-based fitness function calculated from the 2D histogram of greyscale images. Three distinct COVID-19 chest X-ray images were gathered, and the proposed algorithm (mWOAPR) was applied to segment them. For both the benchmark images and the COVID-19 chest X-ray images, comparisons of the evaluated findings with basic and modified forms of metaheuristic algorithms supported mWOAPR's improved performance.
Copyright © 2021 Elsevier Ltd. All rights reserved.

Keywords:  COVID-19 chest X-ray image; Image segmentation; Kapur's entropy; Multilevel thresholding; Whale optimization algorithm

Year:  2021        PMID: 34739972      PMCID: PMC8556692          DOI: 10.1016/j.compbiomed.2021.104984

Source DB:  PubMed          Journal:  Comput Biol Med        ISSN: 0010-4825            Impact factor:   4.589


Introduction

A new virus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was discovered in late December 2019 as the cause of a severe pneumonia infection outbreak identified as coronavirus disease 2019 (COVID-19). The disease reportedly arose in Wuhan City, Hubei Province, China, and was later labeled a pandemic by the World Health Organization (WHO) on March 11, 2020 [1,2]. Due to SARS-CoV-2's highly contagious human-to-human nature, the disease has affected 186.0849 million people across the world, with 4.0213 million deaths in 222 nations and territories, as well as disrupting international transportation, in the last year and a half (https://www.worldometers.info/coronavirus/). To control or prevent COVID-19, Li et al. [3] suggested vaccinations, monoclonal antibodies, oligonucleotide-based therapeutics, peptides, interferon therapy, and small-molecule medicines. Early identification of the disease and the degree of infection, i.e., the severity of the patients' condition, is another significant factor in combating COVID-19. The diagnosis options on the market are based on the detection of viral genes, human antibodies, and viral antigens [4]. Currently, the detection techniques for COVID-19 are real-time reverse transcription-polymerase chain reaction (RT-PCR), reverse-transcription loop-mediated isothermal amplification (RT-LAMP), the specific high-sensitivity enzymatic reporter unlocking (SHERLOCK) assay, CT scan, antigen tests, and serology tests [5]. The concentration of numerous biomarkers, including C-reactive protein, D-dimer, lymphocytes, leukocytes, and blood platelets, may also be useful in detecting infection and measuring illness severity [6]. In radiology, most of the literature has concentrated on CT manifestations of COVID-19 [7,8]. However, because CT is not widely available, requires sterilization afterward to reduce infection spread, and is more expensive than X-ray, portable chest X-ray is more appropriate, despite being less sensitive.
COVID-19 might be difficult to identify in some individuals due to hazy pulmonary opacities on portable chest radiography (CXR). Irregular, patchy, hazy, reticular, and extensive ground-glass opacities have been seen on the CXR of probable COVID-19 sufferers [9]. To reduce the death rate of COVID-19 patients, a faster quantitative evaluation of disease severity is essential. The interpretation of X-ray scans is one of the most challenging aspects of COVID-19 diagnosis. Several studies used artificial intelligence on X-ray images to detect COVID-19 early and accurately to tackle these challenges. Artificial intelligence has made significant progress in COVID-19 diagnostic imaging in recent times [10,11]. Several studies have investigated increasing the diagnostic quality of COVID-19 based on X-ray image segmentation using swarm intelligence, deep learning, deep neural networks, and neural network optimization methods [[12], [13], [14], [15], [16], [17], [18], [19]]. When a patient's RT-PCR test for COVID-19 is negative early on, the other diagnostic tool, chest imaging, plays a critical role. Early detection of COVID-19 requires a high-resolution CT scan of the patient's chest. Chest CT has better sensitivity for COVID-19 diagnosis than RT-PCR [20,21]. As a result, diagnosing COVID-19 patients from CT or X-ray images is critical, and tremendous advances in imaging utilizing Artificial Intelligence (AI) have been accomplished in recent years [22,23]. Swarm-based methods have shown significant performance in solving numerous practical issues [24]. Segmentation of medical images using swarm-based optimization methods is a popular application. Complex feature spaces, especially in medical images, are often highly challenging to handle [25]. Clinical analysis is regularly concerned with just a particular segment of a medical image, while other parts are of secondary significance [26].
Hence, more emphasis is required on the accuracy and efficiency of the method used to handle the issue [27]. An efficacious swarm-based optimization method can be highly effective in segmenting medical images [24]. Li et al. [28] proposed a dynamic-context cooperative quantum-behaved particle swarm optimization algorithm to segment medical images with enhanced searchability. Turajlić [29] applied firefly and bat algorithms to segment X-ray images with a multilevel thresholding strategy. Abdel-Basset et al. [30] developed a new algorithm named HSMA_WOA, integrating the slime mould algorithm and WOA, and segmented COVID-19 chest X-ray images applying a multilevel thresholding strategy. Zhao et al. [26] proposed an improved slime mould algorithm (DASMA) with a diffusion mechanism and an association strategy to increase solution diversity and convergence speed, respectively. They applied the method to segment CT images of chronic obstructive pulmonary disease (COPD) using a multilevel thresholding approach. Liu et al. [12] modified the ant colony optimization (ACO) algorithm using Cauchy mutation to enhance the searching ability and convergence speed of ACO; greedy Levy mutation was used to avoid local solutions. The authors segmented COVID-19 X-ray images applying the method with a Kapur's entropy-based multilevel thresholding approach. Murillo-Olmos et al. [31] segmented X-ray images of pneumonia with the whale optimization algorithm. Abualigah et al. [32] proposed a differential evolution-based arithmetic optimization algorithm (DAOA); differential evolution was used to enhance the local search, and COVID-19 CT images were segmented using a multilevel thresholding strategy. Thus, segmentation of COVID-19 chest X-ray images to separate the background and target by classifying image pixels can be very important to diagnose and examine the severity of a patient infected with COVID-19. This can help specialists to reach a suitable conclusion and give a treatment plan.
Moreover, the segmented image can be used to train machine learning algorithms and generate decisions effectively. Mirjalili and Lewis devised the whale optimization algorithm in 2016 while researching humpback whale feeding behavior. With only a few algorithm-specific parameters, WOA is a simple yet powerful method. Despite a few limitations, the effectiveness of WOA outperforms a few other well-known algorithms in terms of exploitation and avoiding local optimal solutions [33]. However, the conventional WOA may be trapped in a local solution due to low exploration capacity, and the best optimal solution may not be attained while solving complex problems [34]. Moreover, in WOA, the global and local search phases are not well balanced because exploitation gets higher preference in the second half of the search process [28]. As a result, this study offers mWOAPR, a novel variant of WOA that increases the algorithm's exploration capability while balancing global and local search features. Furthermore, the proposed technique has been successfully used to tackle the image segmentation problem; 2D Kapur's entropy, calculated from the 2D histogram of the greyscale image, is used as the fitness function to achieve an ideal threshold set. The study's main contributions are as follows. A new traversing parameter β is introduced to balance exploration and exploitation. Instead of the search-for-prey phase of WOA, random initialization of the solution is performed to increase exploration. In the encircling-prey and bubble-net attack phases, the values of the coefficient vector A and the constant b are altered; this facilitates exploration of the search space at the start of the process, and as iterations advance, a thorough local search is executed. A population reduction mechanism minimizes the algorithm's computational complexity and enhances its exploitation ability.
Six benchmark images and three COVID-19 X-ray images are segmented using different thresholds, and the evaluated results are compared with several metaheuristic algorithms. Friedman's test, a nonparametric statistical test, has been used to validate the proposed algorithm's statistical performance. Convergence graphs are also used to assess the algorithm's solution-searching capability. The remainder of the paper is structured as follows: the description of the classic WOA is presented in Section 2. In Section 3, the proposed algorithm mWOAPR is described. In Section 4, the image segmentation problem is defined. Section 5 compares and analyses the evaluated outcomes. The algorithm's computational complexity, statistical analysis of the findings, and convergence analysis are all presented in Section 6. Section 7 concludes the paper.

Whale optimization algorithm

The whale optimization algorithm (WOA) mimics the foraging behavior of humpback whales. WOA's execution procedure, like that of other population-based algorithms, begins with the generation of a set of random solutions. WOA's search technique is primarily divided into three stages: searching the prey, encircling the prey, and the spiral bubble-net attack. WOA employs these three approaches to achieve an appropriate equilibrium between the exploratory and exploitative processes. Finally, the search procedure ends when a pre-defined condition is met and the optimization results are produced.

Searching the prey phase

Whales randomly search for the target in the search space based on their current location. The algorithm uses the food-finding mechanism of whales to explore the search region. The mathematical formulation of this behavior is given by:

D = |C · X_rand(t) − X(t)|    (1)
X(t + 1) = X_rand(t) − A · D    (2)

where X represents the solution vector, X_rand is a randomly chosen solution from the current solutions, and t represents the present iteration number. D represents the distance between the random and the current solution. (·) characterizes element-by-element multiplication, and | | signifies absolute value. The parameters A and C in Eqns. (1), (2) are said to be coefficient vectors and are obtained by the following equations:

A = 2a · r − a    (3)
C = 2r    (4)

where a declines linearly from 2 to 0 with each iteration, and r is a random number between 0 and 1.
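The prey-search update above can be sketched in NumPy. This is an illustrative reconstruction of Eqns. (1)–(4), not code from the paper; the function and variable names are ours.

```python
import numpy as np

def search_prey(X, i, a, rng):
    """Global-search update of whale i (Eqns. (1)-(4)).

    X   : (n, d) array of current solutions
    i   : index of the whale being updated
    a   : control parameter, decreasing linearly from 2 to 0
    rng : numpy random Generator
    """
    r = rng.random(X.shape[1])          # r in [0, 1)
    A = 2.0 * a * r - a                 # Eqn. (3): coefficient vector A
    C = 2.0 * r                         # Eqn. (4): coefficient vector C
    j = rng.integers(X.shape[0])        # pick a random whale X_rand
    D = np.abs(C * X[j] - X[i])         # Eqn. (1): distance to X_rand
    return X[j] - A * D                 # Eqn. (2): new position
```

With a close to 2, |A| often exceeds 1, so the step away from the random whale is large, which is what gives this phase its exploratory character.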

Encircling the prey

The algorithm employs this whale hunting method for the aim of exploitation. The current best solution is anticipated to be the solution closest to the ideal value during this phase. The population's other solutions change their positions with respect to the current best solution. The mathematical expressions that formulate this behavior are given below:

D = |C · X*(t) − X(t)|    (5)
X(t + 1) = X*(t) − A · D    (6)

where X*(t) characterizes the best solution, based on fitness value, among the whales up to the present iteration.
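The encircling update differs from prey search only in that the best whale, not a random one, is the attractor. A minimal NumPy sketch of Eqns. (5)–(6) follows; names and signature are ours, not the paper's.

```python
import numpy as np

def encircle_prey(x, x_best, a, rng):
    """Exploitation update toward the current best whale (Eqns. (5)-(6)).

    x      : (d,) position of the whale being updated
    x_best : (d,) best solution found so far
    a      : control parameter, decreasing linearly from 2 to 0
    """
    r = rng.random(x.shape)
    A = 2.0 * a * r - a                 # Eqn. (3): coefficient vector A
    C = 2.0 * r                         # Eqn. (4): coefficient vector C
    D = np.abs(C * x_best - x)          # Eqn. (5): distance to the best
    return x_best - A * D               # Eqn. (6): move around the best
```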

Bubble-net attack

To approach their target, humpback whales employ a spiral-shaped route of bubbles. The bubble-net attacking technique is used for local search. The bubble-net procedure is carried out as follows:

D′ = |X*(t) − X(t)|    (7)
X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t)    (8)

where b denotes the shape of the logarithmic spiral path and is kept constant; l is a random number calculated using the following equation:

l = (a₂ − 1) · rand + 1    (9)

In Eqn. (9), a₂ decreases linearly from (−1) to (−2) with each iteration, and rand ∈ [0, 1]. The coefficient parameter A is used to make the transition between the algorithm's explorative and exploitative phases. When |A| ≥ 1, the exploratory process is chosen, and the global search proceeds through Eqn. (1) and Eqn. (2). If |A| < 1, the candidate whales update positions by Eqn. (6) or Eqn. (8) depending on a probability value p, which is constant; based on the value of p, the search process transits between the encircling-prey and bubble-net attacking strategies. The mathematical representation of the same is given below:

X(t + 1) = X*(t) − A · D,                     if p < 0.5    (10)
X(t + 1) = D′ · e^(bl) · cos(2πl) + X*(t),    if p ≥ 0.5
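The spiral move of Eqns. (7)–(8) can be sketched as follows; this is an illustrative reconstruction with our own names, and l is drawn uniformly from [−1, 1] for simplicity rather than via the a₂ schedule of Eqn. (9).

```python
import numpy as np

def bubble_net(x, x_best, b=1.0, rng=None):
    """Spiral bubble-net update (Eqns. (7)-(8)): move toward the best
    whale along a logarithmic spiral with shape parameter b."""
    if rng is None:
        rng = np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)          # random number l in [-1, 1]
    D = np.abs(x_best - x)              # Eqn. (7): distance to the best
    return D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best  # Eqn. (8)
```

Because e^(bl)·cos(2πl) is bounded and often below 1 in magnitude, the whale tends to spiral closer to the best solution with each application.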

Proposed modified WOA with population reduction (mWOAPR)

The humpback whale's hunting behavior inspired the development of the whale optimization algorithm. The whales migrate while hunting for food, selecting a random solution from the population; this phase has been termed the search-for-prey phase and drives the algorithm's global search. Local searches were conducted by encircling the target and using the whale's bubble-net attack strategy. The solutions in both of these phases were updated using the current best value. To search away from the current solution, two coefficient vectors, A and C, are employed. In basic WOA, the selection between exploration and exploitation is performed using the value of the coefficient vector A and an arbitrary number p. This arrangement steers the search process only to the exploitation phase during the second part of the search [35], decreasing diversity in the solution. In the proposed mWOAPR, a new selection parameter β is introduced, which varies between 1 and 0. Selection between the exploration and exploitation phases is achieved using the value of β. The parameters A and b used in classical WOA are also modified here. An arbitrary number is subtracted from β to get the value of A. While exploiting the search space using the bubble-net method, the value of β is used instead of the constant 1 used in WOA. β is calculated using the equation below:

β = 1 − t/T_max    (11)

In the above equation, t and T_max represent the present iteration value and the maximum number of iterations, respectively. Like other metaheuristic algorithms, mWOAPR starts by initializing a random population. If the value of β is greater than a random number and another random number is less than 0.5, the exploration phase is selected. Unlike the WOA search-for-prey phase, in the exploration phase the present solution is regenerated to increase the exploration. Otherwise, the encircling-prey phase in Eqn. (6) is used. The value of A is restricted within a limited range only to exploit the positions around the best value.
While β is less than an arbitrary value, the bubble-net attack phase is selected. The radius of the spiral path decreases gradually, and the variable b defines the shape of the spiral path, considering that the value of b is taken as β instead of 1 in WOA. After updating the solutions in an iteration, the population size for the next iteration is calculated using the formula given in Eqn. (12). In Eqn. (12), N signifies the population size, N_min is the minimum number of solutions to which the population may decrease, FE is the current number of function evaluations, and FE_max is the maximum number of function evaluations. While experimenting, we fixed the value of N_min to 15. Reduction of the population reduces complexity and increases the convergence speed and local search ability of the algorithm. The best fitness value is returned as output. The pseudo-code of the proposed algorithm is given in Fig. 1.
Fig. 1

Pseudo code of the proposed mWOAPR.


Steps involved in mWOAPR

The stepwise execution process of mWOAPR is given below:
1. Initialize the random population and other related parameters.
2. Evaluate each solution's fitness and find the present best fitness and the best solution.
3. Calculate the traversing parameter β.
4. Evaluate the updated value of A.
5. If the value of β is greater than a random value and another random value is greater than 0.5, reinitialize the current solution.
6. If the value of β is greater than a random value and another random value is less than or equal to 0.5, update the current solution using the encircling-prey strategy.
7. If the value of β is less than or equal to a random value, update the current solution using the bubble-net attack method.
8. Update each solution in the population using either step 5, step 6, or step 7.
9. Evaluate the size of the new population after reduction using Eqn. (12).
10. Repeat steps 2 to 9 as long as the termination condition is not true.
11. Return the final best fitness and the corresponding solution as output.
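The steps above can be sketched as a compact loop. This is our own minimal reconstruction, not the authors' code: it assumes a linear β schedule, a linear population reduction toward N_min, A = β − rand as described, and a fitness function to be minimized (for Kapur's entropy one would negate the objective).

```python
import numpy as np

def mwoapr(fitness, dim, bounds, n_init=50, fe_max=5000, n_min=15, seed=0):
    """Sketch of the mWOAPR loop (steps 1-11); schedules are assumed linear."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    n = n_init
    X = rng.uniform(lo, hi, (n, dim))                  # step 1: random population
    fits = np.array([fitness(x) for x in X])           # step 2: evaluate fitness
    fe = n
    best = X[fits.argmin()].copy()
    best_fit = fits.min()
    t, t_max = 0, max(1, fe_max // n_init)
    while fe < fe_max:                                 # step 10: loop until budget spent
        beta = max(0.0, 1.0 - t / t_max)               # step 3: traversing parameter
        for i in range(n):                             # step 8: update every solution
            if beta > rng.random():
                if rng.random() > 0.5:                 # step 5: re-initialize (exploration)
                    X[i] = rng.uniform(lo, hi, dim)
                else:                                  # step 6: encircling prey
                    A = beta - rng.random()            # step 4: modified coefficient A
                    C = 2.0 * rng.random(dim)
                    X[i] = best - A * np.abs(C * best - X[i])
            else:                                      # step 7: bubble-net attack
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(beta * l) \
                       * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            f = fitness(X[i])
            fe += 1
            if f < best_fit:
                best_fit, best = f, X[i].copy()
        # step 9: population reduction (linear shrink toward n_min assumed)
        n = max(n_min, int(round(n_init + (n_min - n_init) * fe / fe_max)))
        X = X[:n]
        t += 1
    return best, best_fit                              # step 11
```

On a 2-D sphere function this sketch converges quickly, since late iterations spend the whole budget spiraling around the incumbent best with a shrunken population.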

Image segmentation

Segmentation of images has been motivating researchers from various areas for years, owing to the advent of computer vision applications. In today's world, digital cameras are ubiquitous and linked to multiple devices for a variety of applications that require specific treatment for reasons such as medical diagnostics, monitoring, commercial deployments, and so on. The process of dividing a digital image into non-overlapping areas or segments and finding objects and boundaries in images is known as segmentation. The intensities of pixels within a region are homogeneous or comparable in terms of properties such as grey level, texture, color, and brightness [36]. Image segmentation is regarded as a vital component in the study of computer vision and image processing systems; it partitions the entire image, or a collection of object outlines, into a succession of pieces, isolates the image into groups of pixels, and divides the parts along these lines as precisely as possible [37]. Each pixel in a region is comparable in certain intrinsic or calculated properties, such as color, texture, or intensity. Image segmentation produces many divisions that distribute the main image or the collection of forms extracted from the image. The goal of segmentation is to pre-process an image to expedite future processing tasks by improving on the look of the original image [38]. It is critical to note that each segmentation procedure has two primary goals: decomposing the target image into sub-images to aid in a more comprehensive analysis, and modifying the representation. The segmented section of an image should be homogeneous and uniform in color, grey level, texture, and simplicity, while neighboring regions should have considerably different values. The objective of segmentation is to simplify or transform an image into a meaningful representation that can be analyzed further.
The most popular approach for segmenting digital images based on histograms is the thresholding technique. Thresholding-based methods classify or group features based on the intensity range of the pixels. It is one of the simplest but most effective methods for segmenting images and can differentiate between objects and other parts of an image by establishing image thresholds. Automatic image separation is among the most sophisticated, relevant, and fascinating image analysis and pattern detection approaches [39]. Image segmentation methods are classified into two types based on their thresholds: parametric and nonparametric [40]. Because they involve the analysis of a probability density function, parametric methods are time-consuming. On the other hand, nonparametric methods are more precise and dependable and do not involve estimating any probability function. The techniques for nonparametric strategies are established based on statistical tools that aid in analyzing histogram data; these tools include intra-class variance, entropy, error rate, and so on. When using an optimization strategy, such statistical measures may be employed as objective functions [41]. Threshold values can be computed when the measure is maximized or minimized based on its characteristics. The precision of segmentation is determined by the threshold values chosen. A histogram of the image can help with threshold selection. Bi-level and multilevel thresholding are two different forms of thresholding [42]. In bi-level thresholding, the image pixels are categorized into two groups: (i) pixels with intensities less than the threshold and (ii) pixels with intensities greater than or equal to the threshold. On the other hand, image pixels are split into many classes in multilevel thresholding, where each class covers a range of grey-level values defined by the threshold values.
Otsu's between-class variance [43] and Kapur's entropy method [44] are two widely used techniques for image segmentation via thresholding. Otsu's between-class variance is a popular method, called a global strategy, due to its simplicity and efficacy. However, because it is one-dimensional and only examines information at the grey level, it does not provide a superior segmentation result [45]. On the other hand, the notion of maximizing Kapur's entropy as a metric for object segmentation is based on the premise that an image comprises a foreground and a background area, with values contributing to the distribution of object intensity [45]. The entropies of both areas are computed independently, and the best threshold value is then determined so as to maximize their sum.

Problem formulation of multilevel thresholding

Thresholding can be bi-level or multilevel. Bi-level thresholding uses only one threshold value, and two classes are created based on this threshold value, while multilevel thresholding uses k threshold values {t₁, t₂, …, t_k} and splits the image into k + 1 classes {C₁, C₂, …, C_{k+1}}. In an image of L grey levels, bi-level thresholding can be written as:

C₁ = {g(x, y) ∈ I | 0 ≤ g(x, y) ≤ t − 1}
C₂ = {g(x, y) ∈ I | t ≤ g(x, y) ≤ L − 1}

where g(x, y) denotes the intensity of the pixels of the image I. For multilevel image thresholding, the same equations can be stretched to

C₁ = {g(x, y) ∈ I | 0 ≤ g(x, y) ≤ t₁ − 1}
C₂ = {g(x, y) ∈ I | t₁ ≤ g(x, y) ≤ t₂ − 1}
…
C_{k+1} = {g(x, y) ∈ I | t_k ≤ g(x, y) ≤ L − 1}
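The class assignment above amounts to binning each pixel's grey level by the sorted thresholds, which a single NumPy call can express; this one-liner is our illustration, not code from the paper.

```python
import numpy as np

def assign_classes(image, thresholds):
    """Label each pixel with its class index: k thresholds produce
    k + 1 classes (0 for grey levels below t1, up to k for levels
    at or above t_k)."""
    return np.digitize(image, sorted(thresholds))
```

For example, one threshold at 128 labels a greyscale image into classes 0 and 1, and passing two thresholds yields three classes, matching the multilevel formulation.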

Kapur's entropy method

Kapur's function measures the separability of the classes and calculates an entropy measurement using the probability distribution of the image's grey-level values. The optimal threshold values are obtained when the entropy measure of the segmented classes has the highest value; the process aims to find the highest entropy value, which returns the best threshold value. Kapur's entropy was initially developed for bi-level thresholding of images, and the procedure can be extended to multilevel thresholding. For bi-level thresholding with threshold t, the fitness function can be written as

f(t) = H₀ + H₁

where

H₀ = Σ_{i=0}^{t−1} (p_i / ω₀) ln(ω₀ / p_i),  ω₀ = Σ_{i=0}^{t−1} p_i
H₁ = Σ_{i=t}^{L−1} (p_i / ω₁) ln(ω₁ / p_i),  ω₁ = Σ_{i=t}^{L−1} p_i

In the above equations, H₀ and H₁ signify the entropies, and ω₀ and ω₁ represent the class probabilities of the segmented classes C₀ and C₁, respectively. p_i is the probability of grey level i, calculated as follows:

p_i = h(i) / Σ_{i=0}^{L−1} h(i)

where h(i) is the histogram value of the pixels at grey level i. Stretching the formula for multilevel thresholding into k + 1 classes, the objective function of multilevel thresholding can be written as

f(t₁, t₂, …, t_k) = H₀ + H₁ + … + H_k

where H₀, H₁, …, H_k are the entropies and ω₀, ω₁, …, ω_k represent the class probabilities of the segmented classes C₀, C₁, …, C_k, respectively.
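A direct NumPy implementation of the multilevel Kapur objective follows; it works on the 1D grey-level histogram (the paper ultimately uses a 2D-histogram variant, which this sketch does not reproduce), and the names are ours.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's entropy objective to be maximized over threshold sets.

    hist       : grey-level histogram counts, e.g. length 256
    thresholds : threshold values splitting the grey-level range
    """
    p = hist / hist.sum()                       # grey-level probabilities p_i
    edges = [0] + sorted(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability omega_j
        if w <= 0.0:
            continue                            # empty class contributes nothing
        q = p[lo:hi][p[lo:hi] > 0] / w          # within-class distribution
        total += -(q * np.log(q)).sum()         # class entropy H_j
    return total
```

For a perfectly uniform 256-bin histogram split at 128, each class is uniform over 128 levels, so the objective equals 2·ln(128).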

Image quality measurement

Multilevel image threshold segmentation performance can be measured in several ways. This study uses the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) to measure performance.

Peak signal-to-noise ratio (PSNR)

The quality of the segmented image is measured in decibels (dB) by PSNR. Mathematically, it can be written as

PSNR = 20 log₁₀ (255 / √MSE)    (15)

where MSE represents the mean square error, evaluated as follows:

MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [I(i, j) − Seg(i, j)]²    (16)

In Eqn. (16), the variables M and N are the dimensions of the images, and I and Seg represent the original and segmented images, respectively.
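Eqns. (15)–(16) translate to a few lines of NumPy; the 255 peak assumes 8-bit images, and the function name is ours.

```python
import numpy as np

def psnr(original, segmented):
    """Peak signal-to-noise ratio in dB (Eqns. (15)-(16)) for 8-bit images."""
    mse = np.mean((original.astype(float) - segmented.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")                 # identical images
    return 20.0 * np.log10(255.0 / np.sqrt(mse))
```

Identical images give infinite PSNR, and the worst case (a uniform error of 255 at every pixel) gives 0 dB, so higher values indicate a segmented image closer to the original.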

Structural similarity index measure (SSIM)

SSIM is used to gauge the image's structural integrity and is another metric used for assessing performance. Assuming that I is the unsegmented image and Seg is the segmented image, the structural similarity between them can be determined as follows:

SSIM(I, Seg) = [(2 μ_I μ_Seg + c₁)(2 σ_{I,Seg} + c₂)] / [(μ_I² + μ_Seg² + c₁)(σ_I² + σ_Seg² + c₂)]    (17)

In Eqn. (17), μ_I and μ_Seg are the average greyscales of the images I and Seg. The variances of the images I and Seg are represented by σ_I² and σ_Seg², respectively. σ_{I,Seg} is the covariance of the images I and Seg, and the constants c₁ and c₂ are used for maintaining the stability of the system.
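A single-window (whole-image) evaluation of Eqn. (17) can be sketched as below. The constants c₁ = (0.01·255)² and c₂ = (0.03·255)² are the values commonly used for 8-bit images and are an assumption here, as is computing one global window rather than the sliding-window mean used by most SSIM implementations.

```python
import numpy as np

def ssim_global(img, seg, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two greyscale images (Eqn. (17))."""
    x = img.astype(float)
    y = seg.astype(float)
    mx, my = x.mean(), y.mean()                 # mu_I, mu_Seg
    vx, vy = x.var(), y.var()                   # sigma_I^2, sigma_Seg^2
    cov = ((x - mx) * (y - my)).mean()          # sigma_{I,Seg}
    return ((2.0 * mx * my + c1) * (2.0 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1, and the score decreases as luminance, contrast, or structure diverge.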

Experimental results and analysis

The proposed method's performance is validated in this section by segmenting two sets of images using Kapur's entropy-based multilevel thresholding approach. The first set comprises benchmark images, given in Fig. 2 together with their associated histograms. The second set comprises COVID-19 X-ray images from the Kaggle data collection. The evaluated outcomes are compared to original metaheuristic algorithms and modified algorithms. The WOA is one of the basic metaheuristics used for comparison. The other fundamental algorithms are those that have lately been published, including the heap-based optimizer (HBO) [46], hunger games search (HGS) [47], and slime mould algorithm (SMA) [48]. Modified variants used for the comparison are the A-C parametric whale optimization algorithm (ACWOA) [49], adaptive whale optimization algorithm (AWOA) [50], hybrid improved whale optimization algorithm (HIWOA) [51], enhanced whale optimization algorithm integrated with the salp swarm algorithm (ESSAWOA) [52], whale optimization algorithm with modified mutualism (WOAmM) [33], modified whale optimization algorithm hybridized with DE and SOS (m-SDWOA) [53], and butterfly optimization algorithm modified with mutualism and parasitism (MPBOA) [54]. The advantages and disadvantages of the algorithms employed for comparison are given in subsection 5.1. Among these methods, HBO, HGS, and SMA are very recently published algorithms. ACWOA, AWOA, HIWOA, ESSAWOA, WOAmM, and m-SDWOA are recently published WOA variants. WOA is the component algorithm of mWOAPR. All the algorithms mentioned have proved their ability to solve numerous optimization issues. MPBOA is a recently published method that has solved the image segmentation problem with great efficacy. The parameters of all the algorithms used for assessment are set as suggested in the respective studies. The termination condition for all algorithms is 5000 function evaluations. A fixed population of size 50 is used during evaluation.
The mean, standard deviation, and best values for each image are calculated from 30 independent runs at various threshold levels, together with the best values of the image quality measurement metrics PSNR and SSIM. All the experiments were executed in MATLAB R2015a on a Windows 10 PC with an Intel Core i3 processor and 8 GB RAM.
Fig. 2

Images used in the experiment of image segmentation.


Advantage and disadvantages of the compared algorithms

Every technique has some advantages and disadvantages, and thus the algorithms considered in this study for comparison have certain advantages and disadvantages, which we mention in this subsection. WOA can be implemented quickly and requires only a few parameters to fine-tune, but the algorithm has a slow convergence rate and is easily stuck in local solutions [55]. HBO implements high exploration ability during early iterations, the emergence of exploitation ability later on, and a balance between global and local search [46]. Still, the algorithm can get stuck in local solutions [56]. The algorithm HGS was proposed with a simple structure and executed with a unique stability feature [47]. HGS employs several parameters, and at runtime it may take a longer time to search the region effectively. SMA guarantees exploration while accomplishing exploitation, which balances the algorithm's global and local search [48]. But the algorithm is often trapped in local solutions while solving continuous global optimization issues [57]. In ACWOA and AWOA, the exploration and exploitation abilities of the algorithms are increased by modifying the parameters of WOA. Despite the modifications, the performance of these algorithms when solving high-dimensional problems is not satisfactory. HIWOA has a higher exploration ability than WOA, which diminishes the chance of the algorithm being trapped in a local solution [51]. However, the introduction of a feedback mechanism in HIWOA increases the complexity of the algorithm. ESSAWOA has greater exploration and exploitation ability than WOA through the introduction of strategies like SSA and LOBL, which enlarged the computational cost of the algorithm. In WOAmM, m-SDWOA, and MPBOA, the exploration and exploitation abilities of the algorithms were balanced by amplifying the diversity of the algorithms. However, the computational complexity of these algorithms increased with the modification.

Analysis of experimental results on benchmark images

The threshold levels 3, 4, 5, and 6 are used to evaluate the test images in Fig. 2. Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 provide the mean, standard deviation (std), and the optimum values of the image quality metrics. Columns 5, 6, and 9 represent the mean, standard deviation, and best fitness value, respectively. Columns 7 and 8 of the tables contain the optimum PSNR and SSIM values. Table 1 depicts that the algorithms mWOAPR, WOAmM, m-SDWOA, and SMA evaluate similar fitness at threshold level 3. SMA has the smallest standard deviation of all the methods. The fitness values achieved by mWOAPR at threshold levels 4, 5, and 6 are superior to those obtained by the other algorithms. In Table 2, at threshold level 3, mWOAPR, AWOA, WOAmM, m-SDWOA, and SMA acquire similar optimal results; the standard deviation values obtained by mWOAPR, m-SDWOA, and SMA are equal. At threshold level 4, mWOAPR and SMA achieve the same optimal value, and mWOAPR's standard deviation is the lowest of all. The assessed optimal values of mWOAPR are higher than those of the comparable algorithms for threshold levels 5 and 6. Table 3 shows that at threshold level 3, mWOAPR, m-SDWOA, and SMA all achieve the same optimal value, with SMA's standard deviation being the lowest of all. mWOAPR locates the highest optimal outcome at threshold levels 4, 5, and 6. Table 4 shows that mWOAPR and MPBOA calculate equal, maximum optimal values at level 3; the standard deviation value calculated by MPBOA is the smallest. Compared to the employed algorithms, the fitness outcomes of mWOAPR are best at threshold levels 4, 5, and 6. Table 5 shows that WOA, AWOA, WOAmM, m-SDWOA, SMA, and mWOAPR calculate the same optimal fitness at threshold level 3. At this threshold level, the estimated standard deviation value of SMA is the lowest of all the algorithms. mWOAPR attains the maximum optimal fitness at threshold levels 4 and 6.
The evaluated optimal fitness values of mWOAPR and m-SDWOA are equal at threshold level 5 and are the maximum. Among all the algorithms used in this experiment, the proposed technique had the lowest standard deviation. Table 6 shows that WOA, AWOA, WOAmM, m-SDWOA, SMA, and mWOAPR all attain the same optimal fitness at threshold level 3. At this threshold level, SMA has the lowest estimated standard deviation value among the algorithms. mWOAPR determines the greatest optimal fitness among the compared algorithms at threshold levels 4, 5, and 6. Table 7 shows which algorithms achieved the highest mean fitness on the benchmark images used in the study with various threshold settings. Fig. 3 and Fig. 4 show segmented images from several algorithms using the airport and cameraman images at threshold levels 4 and 5. After comparing the findings of all the tables, it can be determined that at threshold level 3, most of the algorithms evaluate the same optimal fitness. At threshold level 3, SMA emerges as the algorithm with the lowest standard deviation. For the airport image at threshold levels 3 and 4, MPBOA, HBO, HGS, and SMA have higher PSNR values than mWOAPR. The efficacy of mWOAPR improves as the threshold level is raised. mWOAPR maintains the leading place at most threshold levels throughout all test images when evaluating the estimated maximum optimal fitness.
Table 1

Comparison of results using image airport.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRa39316525617.74621.39E-0513.15480.322917.7462
WOA9316525617.7466.15E-0413.15480.322917.7462
ACWOA9316525617.74070.012213.15480.322917.7462
AWOA9316525617.74460.003413.15480.322917.7462
HIWOA9316525617.73180.017313.15480.322917.7462
ESSAWOA9516525617.62910.102113.05680.31417.7417
WOAmM9316525617.74621.08E-1413.15480.322917.7462
m-SDWOA9316525617.74621.08E-1413.15480.322917.7462
MPBOA9116525617.72530.012515.27590.07317.7461
HBO9116324217.3660.212815.28090.073317.6363
HGS9316525617.74430.004615.24260.071317.7462
SMA9316525617.7462015.24260.071317.7462
mWOAPRa49015319925622.17060.003313.3840.344222.1729
WOA9015319925622.17040.00313.3840.344222.1729
ACWOA8915319925622.13750.028313.44040.347922.1717
AWOA9015319925622.16830.006613.3840.344222.1729
HIWOA9115319925622.1230.035613.33290.343822.172
ESSAWOA9515320325621.8470.276713.12350.321922.1356
WOAmM9015319925622.17010.002113.3840.344222.1729
m-SDWOA9015319925622.16920.003413.3840.344222.1729
MPBOA9015319925622.17020.009115.3190.076222.1729
HBO8415719925021.3830.339915.44710.080122.0295
HGS9015319925622.16140.020315.3190.076222.1729
SMA9015319925622.17010.00215.3190.076222.1729
mWOAPRa58212116020425626.29430.003114.41220.418126.2972
WOA8212116020425626.2930.00614.41220.418126.2972
ACWOA8212616520725626.24170.054514.34830.413326.2917
AWOA8212116020425626.29140.004814.41220.418126.2972
HIWOA8212616520725626.23910.046814.34830.417326.2917
ESSAWOA8212916620425625.63540.4714.30810.409826.2443
WOAmM8212116020425626.29370.00214.41220.418126.2972
m-SDWOA8212116020425626.29220.003214.41220.418126.2972
MPBOA8212116020425626.29390.00115.66180.090826.2972
HBO8113516119925525.31020.433715.62760.08926.1144
HGS8212116020425626.26440.037515.66180.090826.2972
SMA8212116020425626.29440.00215.66180.090826.2972
mWOAPRa6418512716520725630.07720.078820.86250.794430.1577
WOA418512716520725630.0680.067320.86250.794430.1577
ACWOA418012116720625629.89090.090821.44470.805730.0789
AWOA418512616520825630.01660.037920.91270.794430.1547
HIWOA418012216020425629.89930.087821.42340.752630.1442
ESSAWOA7512215318321125629.33520.426915.23830.45929.8886
WOAmM418212116020425630.07640.071321.3740.805630.1552
m-SDWOA418512716520725630.06770.082520.86250.794430.1577
MPBOA418512716520725630.010.068317.81190.114130.1577
HBO408214516220324728.81620.428417.43260.106729.4463
HGS418512216520825629.93460.107217.88840.114930.14
SMA418412416520725630.05550.075617.88220.114430.1566
Table 2

Comparison of results using image bridge.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRb310217925618.6516012.96080.405118.6516
WOA10217925618.65153.8572e-0412.96080.405118.6516
ACWOA10217925618.64980.002212.96080.405118.6516
AWOA10217925618.65164.1353e-0512.96080.405018.6516
HIWOA10217925618.64940.002112.96080.403918.6516
ESSAWOA10618025618.59400.098012.69030.390018.6469
WOAmM10217925618.65162.4744e-0512.96080.405118.6516
m-SDWOA10217925618.6516012.96080.405118.6516
MPBOA10317925618.63610.010813.17220.069918.6514
HBO9417225518.37280.168713.42280.076718.6140
HGS10217925618.65150.000313.19440.070618.6516
SMA10217925618.6516013.19440.070618.6516
mWOAPRb46313019525623.40153.3610e-0416.79920.624123.4017
WOA6313019525623.40140.001316.79920.624123.4017
ACWOA6313119525623.38450.017416.76610.621823.4006
AWOA6313019525623.40060.002016.79920.624123.4017
HIWOA6313019525623.38350.016416.79920.619423.4017
ESSAWOA6412719425623.18090.193316.94150.631723.3916
WOAmM6313019525623.40137.1123e-0416.79920.624123.4017
m-SDWOA6313019525623.40145.4523e-0416.79920.624123.4017
MPBOA6413019325623.35710.037614.49220.095523.3954
HBO612919525422.85250.258914.33610.093423.2965
HGS6313019525623.39460.009414.47390.095423.4017
SMA6313019525623.40150.000514.47390.095423.4017
mWOAPRb55510315019925627.75400.001119.03470.736627.7545
WOA5510315019925627.75370.001219.03470.736627.7545
ACWOA5510315119925627.70590.065919.02260.735627.7540
AWOA5510315019925627.75340.001119.03470.735627.7545
HIWOA5710615320125627.67650.059018.94850.734927.7492
ESSAWOA5411616520725627.27200.322418.17400.695827.6681
WOAmM5510315019925627.75330.001119.03470.736627.7545
m-SDWOA5510315019925627.75290.001719.03470.736627.7545
MPBOA5510715420425627.69210.041815.15720.103027.7404
HBO4810016921125326.86190.361914.88890.102327.4814
HGS5410114919925627.72590.023615.23250.103327.7522
SMA5510315019925627.75310.002415.21900.103127.7545
mWOAPRb6529213217221125631.76737.3212e-0420.42080.785731.7680
WOA529213217221125631.76708.2313e-0420.42080.785731.7680
ACWOA539413517921725631.66740.090720.17220.775431.7559
AWOA529213217221125631.76680.001320.42080.785731.7680
HIWOA499213517321125631.63970.109120.27240.781331.7617
ESSAWOA508514118721725631.01890.364419.64310.754531.5843
WOAmM529213217221125631.76670.001320.42080.785731.7680
m-SDWOA529213217221125631.76570.002620.42080.785731.7680
MPBOA529213417420925631.68450.047215.66220.106031.7455
HBO20526512116123730.66830.436015.24430.101131.4979
HGS499313517521225631.68410.062815.61500.106731.7589
SMA529213217221125631.76000.015915.68110.106231.7680
Table 3

Comparison of results using image boat.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRc310918025618.14875.81E-0414.76950.546818.1488
WOA10918025618.14820.001814.76950.546818.1488
ACWOA10918025618.14490.006714.76950.546818.1488
AWOA10918025618.14760.002614.76950.546818.1488
HIWOA10918025618.14330.007414.76950.546818.1488
ESSAWOA10518125618.05770.085214.56540.545918.1435
WOAmM10918025618.14833.51E-0514.76950.546818.1488
m-SDWOA10918025618.14872.64E-1414.76950.546818.1488
MPBOA10718025418.13960.008313.2420.054818.1486
HBO10217924717.80820.246613.1550.054918.1208
HGS10918025618.14830.002313.26740.054918.1488
SMA10918025618.1487013.26740.054918.1488
mWOAPRc46512218125622.83449.47E-0417.86990.668222.8346
WOA6512218125622.83410.001117.86990.668222.8346
ACWOA6412218125622.82010.011117.87360.668122.8344
AWOA6512218125622.83420.001417.86990.668222.8346
HIWOA6512218125622.82090.014317.86990.668222.8346
ESSAWOA6112218225622.66670.119517.82660.666822.8156
WOAmM6512218125622.83421.19E-0417.86990.668222.8346
m-SDWOA6512218125622.83411.36E-0417.86990.668222.8346
MPBOA6412218125622.81770.015914.30210.061922.8344
HBO6611417925022.24410.337314.03330.059922.777
HGS6512218125622.83040.006714.30090.061822.8346
SMA6512218125622.8340.000214.30090.061822.8346
mWOAPRc5519213018125626.95070.02520.02940.734426.9576
WOA519213018125626.94770.026220.02940.734426.9576
ACWOA529113018125426.88550.062919.98570.732126.9559
AWOA519213018125626.9270.051120.02940.734426.9576
HIWOA529113018125626.89480.058219.98570.73126.9559
ESSAWOA519713118024826.52580.274220.21020.738826.921
WOAmM519213018125626.95070.004420.02940.734426.9576
m-SDWOA519213018125626.95020.001820.02940.734426.9576
MPBOA539213018125626.92470.021515.00470.064626.9554
HBO619913219024026.09440.370414.94670.062626.7732
HGS539213018125626.85670.070415.00470.064626.9554
SMA519213018125626.95010.001515.02070.065126.9576
mWOAPRc6509012816619525630.86960.005821.05990.759530.8762
WOA509012816619525630.86830.009321.05990.759530.8762
ACWOA508912816619525630.8270.047421.03840.758730.8752
AWOA509112816619525630.86630.008521.07820.759530.8757
HIWOA498812516619525630.77770.101720.70020.76430.8658
ESSAWOA509212917220525630.17450.384520.68830.745730.8456
WOAmM509012816619525630.8690.011521.05990.759530.8762
m-SDWOA509012816619525630.86760.011621.05990.759530.8762
MPBOA529112816619625630.84170.016115.35340.067130.8696
HBO588512816519025429.84760.405815.22490.065730.6438
HGS509012516619525630.81260.044815.24610.067130.8652
SMA509012816619525630.8670.006315.36430.067630.8762
Table 4

Comparison of results using image couple.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRd39918225518.06543.61E-1514.35230.502118.0654
WOA9918225518.0655.90E-0414.35230.502118.0654
ACWOA9918225518.04960.015614.35230.502118.0654
AWOA9918225518.06514.57E-0414.35230.502118.0654
HIWOA9918025518.04050.010514.34020.502518.0644
ESSAWOA9718325517.9360.106114.31340.501318.0572
WOAmM9918225518.0650.001614.35230.502118.0654
m-SDWOA9918225518.06532.57E-0414.35230.502118.0654
MPBOA9918225518.0654013.45990.05318.0654
HBO10617725017.64440.206313.43080.055217.9184
HGS9918225518.03740.022913.45990.05318.0654
SMA9918225518.06520.000613.45990.05318.0654
mWOAPRd49315920125422.63560.016115.12450.55122.6542
WOA9315920125422.63490.01715.12450.55122.6542
ACWOA9416220125422.5760.056914.99960.54622.6369
AWOA9315920125422.63550.014815.12450.55122.6542
HIWOA9916120125422.56180.059815.02510.665922.6379
ESSAWOA9916120625422.42930.132115.02250.544522.5963
WOAmM9315920125422.63480.01515.12450.55122.6542
m-SDWOA9315920125422.62780.01315.12450.55122.6542
MPBOA6011318025522.54930.046414.67750.063622.6122
HBO7010017325321.87340.270614.29030.057322.2631
HGS9316020125422.59540.041113.7010.058122.6536
SMA9315920125422.62380.0113.71730.058422.6542
mWOAPRd56010716020125427.14850.016818.82670.706727.1603
WOA6010716020125427.13690.040218.82670.706727.1603
ACWOA6010716020125427.14670.018318.82670.706727.1603
AWOA6010816020125427.13920.018418.89170.707627.1587
HIWOA6310716220425426.95310.115618.69860.702127.129
ESSAWOA6010915620125326.55570.330919.2420.712927.0798
WOAmM6010716020125427.14420.019318.82670.706727.1603
m-SDWOA6010716020125427.14670.010918.82670.706727.1603
MPBOA5810816020225427.01050.068114.95860.065827.1348
HBO5711215519425425.90790.462215.09460.068226.8265
HGS6211116220125426.96070.168614.99250.06627.1412
SMA6010716020125427.14770.006414.95130.065127.1603
mWOAPRd6599613016620325431.02150.007921.52660.778231.0285
WOA599713116620325431.02030.008621.3870.778131.0283
ACWOA5810113116520225430.80160.137821.52360.78330.9914
AWOA599713116620325431.01420.017721.3870.7831.0283
HIWOA5810213716620325430.78050.108721.1070.77830.9881
ESSAWOA599713616820025430.21680.506420.88170.760530.9676
WOAmM589713116620325431.01030.016921.39180.778331.0242
m-SDWOA599813116620325431.01250.011821.43160.7831.0264
MPBOA579512916720125430.85190.09215.78850.06631.0099
HBO5810013416319823929.5930.373215.84580.067230.4203
HGS5910313717020425430.75460.17915.71110.067131.0086
SMA599612916620325431.00930.018415.81040.065831.0264
Table 5

Comparison of results using image cameraman.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRe312819625617.58423.76E-1313.62570.534217.5842
WOA12819625617.58424.29E-0513.62570.534217.5842
ACWOA12819625617.58270.003113.62570.534217.5842
AWOA12819625617.58424.54E-0513.62570.534217.5842
HIWOA12819625617.57230.031613.62570.534217.5842
ESSAWOA12719625617.34570.225713.71720.537417.584
WOAmM12819625617.58424.19E-1313.62570.534217.5842
m-SDWOA12819625617.58423.27E-0513.62570.534217.5842
MPBOA13319625517.53140.03913.11950.518317.5743
HBO11719225516.78450.363714.27370.56217.5034
HGS12819625617.58410.000213.62570.534217.5842
SMA12819625617.5842013.62570.534217.5842
mWOAPRe44410319625621.97710.048314.46020.624722.0073
WOA4410319625621.96690.050414.46020.624722.0073
ACWOA4410319625521.91630.112914.46020.624722.0073
AWOA4410319625621.97040.04814.46020.624722.0073
HIWOA4410319625621.94140.059114.46020.624622.0073
ESSAWOA4710219625621.67850.257214.35650.620621.9929
WOAmM4410319625621.9750.032514.46020.624722.0073
m-SDWOA4410319625622.00270.019514.46020.624722.0073
MPBOA4310219625621.94220.058814.34250.621522.0028
HBO2810019925321.22110.358814.01020.608421.822
HGS4410319625621.96230.046414.46020.624722.0073
SMA4410319625622.0070.00114.46020.624722.0073
mWOAPRe5449614619625626.58310.003920.15310.68726.5863
WOA449614619625626.56940.024320.15310.68726.5863
ACWOA409614619625526.43910.204220.13570.688326.577
AWOA449614619625626.57530.011920.15310.68726.5863
HIWOA449814719625626.4420.170820.28570.688626.5812
ESSAWOA329513519825325.87920.332519.06870.712926.4087
WOAmM449614619625626.58140.00420.15310.68726.5863
m-SDWOA449614619625626.58310.004120.15310.68726.5863
MPBOA459614419625526.49590.060520.0680.697326.5781
HBO35213915422925.46320.450917.26850.684726.3201
HGS439614519625626.54910.023520.11360.692526.582
SMA449614619625626.58220.002120.15310.68726.5863
mWOAPRe624609814619625630.52740.050620.66080.708130.56
WOA24609814619625630.52620.045920.66080.708130.56
ACWOA266710214619625630.3570.10520.94130.716530.5272
AWOA24609814619625630.51450.057720.66080.71330.56
HIWOA22609814519625530.3490.194420.59720.719430.5524
ESSAWOA22459815819925429.63130.348119.64740.650430.15
WOAmM24619814619625630.52640.05220.66180.707730.5599
m-SDWOA24609814619625630.51960.047120.66080.708130.56
MPBOA236110014219725530.43540.054820.46070.729430.5255
HBO318512920022425529.0650.498318.19320.709229.9247
HGS225910014819725630.37910.090120.82590.702530.5345
SMA24619814619625630.50620.072720.66180.707730.5599
Table 6

Comparison of results using image clock.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRf311018625617.62891.45E-1414.71910.759917.6289
WOA11018625617.62892.47E-1414.71910.759917.6289
ACWOA11018625617.62640.00414.71910.759917.6289
AWOA11018625617.62891.50E-0414.71910.759917.6289
HIWOA11018625617.62670.003414.71910.759917.6289
ESSAWOA11018525617.59370.033614.65650.760617.6283
WOAmM11018625617.62891.45E-1414.71910.759917.6289
m-SDWOA11018625617.62891.58E-0414.71910.759917.6289
MPBOA11118625617.59490.014311.21560.033817.6283
HBO99017417.42280.100611.10560.0417.5943
HGS11018625617.62510.010411.22890.03417.6289
SMA11018625617.6289011.22890.03417.6289
mWOAPRf42711018625622.31950.091715.80860.80822.3838
WOA2711018625622.310.098715.80860.80822.3838
ACWOA2710818625622.2550.095415.82210.808922.3827
AWOA2711018625622.31660.011415.80860.80822.3838
HIWOA2711218625622.21410.092215.78150.807622.3825
ESSAWOA2710818825622.08430.242915.95410.807522.3766
WOAmM2711018625622.31890.064115.80860.80822.3838
m-SDWOA2711018625622.31490.041315.80860.80822.3838
MPBOA269516222522.14350.088412.09390.043722.2875
HBO8513520025021.78550.21412.22930.03922.0638
HGS2711018625622.25380.09811.59210.040922.3838
SMA2711018625622.30370.000411.59210.040922.3838
mWOAPRf5278914219625626.91460.024718.4370.853426.9269
WOA278914219625626.9010.120718.4370.853426.9269
ACWOA5911216120225426.91240.140818.95170.704927.1236
AWOA278914119625626.90230.024918.42640.852626.9256
HIWOA278913819625626.53450.302118.37650.852926.9152
ESSAWOA277514020225626.35570.336818.73220.844526.7965
WOAmM278914219625626.89790.12118.4370.853426.9269
m-SDWOA278914219625626.90830.039618.4370.853426.9269
MPBOA278314319725626.73040.129412.49540.042626.9017
HBO267913817922225.76740.344613.21520.044226.5713
HGS279114419625626.73590.231112.46430.042626.9246
SMA278914219625626.91350.002312.46820.042626.9269
mWOAPRf6277711916020225630.99960.024720.18820.872531.018
WOA277711916020225630.97820.179320.18820.872531.018
ACWOA277811515320125630.76910.226319.91510.870330.9812
AWOA277912015820225630.91060.128220.13880.871431.0091
HIWOA278212116220225630.64990.362920.16590.872431.0045
ESSAWOA278010215220525630.13250.494219.98650.855930.682
WOAmM277711916020225630.9980.019120.18820.872531.018
m-SDWOA277711916020225630.99160.03120.18820.872531.018
MPBOA278512516220625630.81510.109313.16710.043330.9463
HBO25618913118422929.35270.574412.95820.043930.3689
HGS277811916320325630.77180.188713.0980.043231.0039
SMA277711916020225631.00770.011313.05190.04331.018
Table 7

Algorithms with maximum mean fitness in different levels of benchmark images.

Image | Level | Algorithm
a | 3 | mWOAPR, WOAmM, m-SDWOA, SMA
a | 4 | mWOAPR
a | 5 | mWOAPR
a | 6 | mWOAPR
b | 3 | mWOAPR, AWOA, WOAmM, m-SDWOA, SMA
b | 4 | mWOAPR, SMA
b | 5 | mWOAPR
b | 6 | mWOAPR
c | 3 | mWOAPR, m-SDWOA, SMA
c | 4 | mWOAPR
c | 5 | mWOAPR, WOAmM
c | 6 | mWOAPR
d | 3 | mWOAPR, MPBOA
d | 4 | mWOAPR
d | 5 | mWOAPR
d | 6 | mWOAPR
e | 3 | mWOAPR, WOA, AWOA, WOAmM, m-SDWOA, SMA
e | 4 | mWOAPR
e | 5 | mWOAPR, m-SDWOA
e | 6 | mWOAPR
f | 3 | mWOAPR, WOA, AWOA, WOAmM, m-SDWOA, SMA
f | 4 | mWOAPR
f | 5 | mWOAPR
f | 6 | mWOAPR
Fig. 3

Segmented images of image airport using Kapur's entropy at level 4.

Fig. 4

Segmented images of image cameraman using Kapur's entropy at level 5.


Analysis of experimental results on COVID-19 chest X-ray images

The test images in Fig. 5 are evaluated at threshold levels 3, 4, 5, and 6. Table 8, Table 9, Table 10 report the mean, standard deviation (std), and outcomes of the image quality metrics. Columns 5, 6, and 9 give the mean, standard deviation, and best fitness values, respectively; columns 7 and 8 give the best PSNR and SSIM values. In Table 8, mWOAPR, WOA, AWOA, WOAmM, and SMA reach equal fitness at threshold level 3; SMA has the lowest standard deviation, and the proposed mWOAPR the second lowest. For threshold levels 4, 5, and 6, the fitness values obtained by mWOAPR are the highest among the algorithms. In Table 9, the optimum values of mWOAPR, WOA, WOAmM, m-SDWOA, and SMA are identical at threshold level 3, and SMA obtains the lowest standard deviation among the comparison algorithms. At threshold level 4, mWOAPR and WOAmM achieve the same optimal value, although mWOAPR has a lower standard deviation than WOAmM. The assessed optimal values of mWOAPR are the highest of all the comparison algorithms at threshold levels 5 and 6. Table 10 shows that at threshold level 3, mWOAPR, WOA, AWOA, WOAmM, m-SDWOA, HGS, and SMA all achieve the same optimal value, with SMA's standard deviation being the lowest. At threshold level 4, WOA and m-SDWOA provide results comparable to mWOAPR, while the standard deviation evaluated for mWOAPR is smaller than WOA's. WOA and mWOAPR discover the maximum optimal outcome at threshold levels 5 and 6, but the standard deviation determined by mWOAPR is the minimum, and at threshold level 6 mWOAPR's optimal fitness is the best among all the compared algorithms. Table 11 lists the algorithms that achieved the highest mean fitness at the different threshold levels of the COVID-19 X-ray images examined in this work.
Segmented images of all the algorithms for image C1 at threshold level 4, C2 at threshold level 5, and C3 at threshold level 6 are given in Fig. 6 , Fig. 7 , and Fig. 8 , respectively.
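The PSNR values reported in column 7 of the tables follow the standard definition for 8-bit greyscale images; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def psnr(original, segmented, peak=255.0):
    """Peak signal-to-noise ratio between the original grey image and
    its segmented reconstruction; higher values mean the segmented
    image is closer to the original."""
    mse = np.mean((original.astype(float) - segmented.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM (column 8) is computed analogously from local luminance, contrast, and structure statistics; library implementations such as scikit-image's `structural_similarity` are commonly used for it.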
Fig. 5

COVID-19 X-ray images used for segmentation.

Table 8

Comparison of results using image C1.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRC1397 170 25618.28301.4424e-1414.95560.400418.2830
WOA97 170 25618.28303.6678e-0514.95560.400418.2830
ACWOA97 170 25618.28246.4541e-0414.95560.400418.2830
AWOA97 170 25618.28303.6678e-0514.95560.400418.2830
HIWOA97 170 25618.28274.7091e-0414.95560.370718.2830
ESSAWOA95 171 25318.22520.082514.02850.403918.2815
WOAmM97 170 25618.28303.6678e-0514.95560.400418.2830
m-SDWOA97 170 25618.28298.1730e-0514.95560.400418.2830
MPBOA97 170 25218.28210.000714.95560.400418.2830
HBO98 181 25318.04020.160114.51990.397818.2656
HGS97 170 25618.28270.000414.95560.400418.2830
SMA97 170 25618.28300.000014.95560.400418.2830
mWOAPRC1470 125 182 25622.82570.002917.79070.508022.8263
WOA70 125 182 25622.82522.5645e-0417.79070.508022.8263
ACWOA70 125 182 25422.81320.011117.79070.508022.8263
AWOA70 125 182 25622.82470.004017.79070.508022.8263
HIWOA70 125 182 25622.81280.013117.79070.508022.8263
ESSAWOA70 128 182 25522.72710.089117.80710.508122.8213
WOAmM70 125 182 25622.82380.005517.79070.508022.8263
m-SDWOA70 125 182 25622.82540.003017.79070.508022.8263
MPBOA70 126 182 25422.82120.007217.79290.508222.8262
HBO56 112 181 24922.48240.204317.82470.533922.7380
HGS70 125 182 25622.81810.008117.79070.508022.8263
SMA70 125 182 25622.82360.006017.79070.508022.8263
mWOAPRC1565 115 165 215 25627.18950.001818.72130.518927.1904
WOA65 115 165 215 25627.18910.002318.72130.518927.1904
ACWOA64 114 163 215 25627.15490.035518.76160.523627.1895
AWOA65 115 165 215 25627.18810.004218.72130.518927.1904
HIWOA63 114 165 215 25327.14130.059818.78580.514727.1892
ESSAWOA62 118 169 214 25626.87090.220918.70670.518527.1621
WOAmM65 115 165 215 25627.18920.001518.72130.518927.1904
m-SDWOA65 115 165 215 25627.18890.002618.72130.518927.1904
MPBOA64 114 164 215 25227.15070.032518.76340.522427.1904
HBO74 125 170 210 24526.43600.322318.27560.493226.9687
HGS65 115 165 215 25627.16670.020418.72130.518927.1904
SMA65 115 165 215 25627.18930.001718.72130.518927.1904
mWOAPRC1654 94 133 174 215 25631.20960.003920.42270.570031.2123
WOA54 94 133 174 215 25631.20740.005120.42270.570031.2123
ACWOA54 96 138 178 215 25331.13780.090520.37630.568231.2055
AWOA54 93 133 173 215 25331.20510.005820.44950.570531.2119
HIWOA52 95 135 175 215 25631.09390.080020.50330.578331.2035
ESSAWOA54 88 131 175 214 25330.50060.447020.40340.569931.1726
WOAmM54 94 133 174 215 25631.20860.004520.42270.570031.2123
m-SDWOA54 94 133 174 215 25631.20750.005520.42270.570031.2123
MPBOA53 93 134 177 215 25631.15890.051620.41450.572231.2076
HBO60 95 126 159 202 25330.00980.568120.36340.571230.9151
HGS53 93 132 172 215 25631.11180.101520.49110.575231.2105
SMA54 94 133 174 215 25531.19090.067420.42270.570031.2123
Table 9

Comparison of results using image C2.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRC2390 145 25617.22454.0978e-0516.62280.622617.2245
WOA90 145 25617.22454.1866e-0516.62280.622617.2245
ACWOA90 145 25617.22350.001716.62280.622617.2245
AWOA90 145 25617.22427.5523e-0416.62280.621317.2245
HIWOA90 145 25617.22330.001816.62280.619517.2245
ESSAWOA91 146 24917.21430.015616.62050.620117.2240
WOAmM90 145 25617.22451.1765e-1416.62280.622617.2245
m-SDWOA90 145 25617.22451.7768e-0516.62280.622617.2245
MPBOA90 145 25417.22390.000916.62280.622617.2245
HBO90 146 25017.13510.081816.65540.620617.2236
HGS90 145 25617.22440.000416.62280.622617.2245
SMA90 145 25617.22450.000016.62280.622617.2245
mWOAPRC2474 117 160 25621.41750.001819.41970.667121.4182
WOA74 117 160 25621.41616.3623e-0519.41970.667121.4182
ACWOA74 117 160 25621.41160.012419.41970.667121.4182
AWOA74 117 160 25621.41460.006819.41970.658721.4182
HIWOA74 117 160 25621.41080.016819.41970.662721.4182
ESSAWOA72 115 160 21921.27850.149119.48450.668521.4074
WOAmM74 117 160 25621.41750.003519.41970.667121.4182
m-SDWOA74 117 160 25621.41724.7195e-0519.41970.667121.4182
MPBOA73 117 160 25621.41320.003219.47170.667921.4180
HBO69 117, 164 24021.10070.165219.65350.660021.3782
HGS74 117 160 25621.40980.011719.41970.667121.4182
SMA74 117 160 25621.41710.000219.41970.667121.4182
mWOAPRC2574 117 160 211 25625.55710.055719.42910.667225.5731
WOA74 117 160 211 25625.55670.021819.42910.667225.5731
ACWOA74 118 160 211 25625.49130.081419.44650.667325.5719
AWOA74 117 160 211 25625.53520.083919.42910.659825.5731
HIWOA74 117 160 211 25625.47480.077319.42910.661725.5731
ESSAWOA65 113 156 211 25225.14230.280019.55780.682825.5338
WOAmM74 117 160 211 25625.55570.006819.42910.667225.5731
m-SDWOA74 117 159 211 25625.53650.040519.40380.669625.5728
MPBOA54 92 129 164 25625.25330.005521.61890.718425.2628
HBO55 87 136 169 24724.72630.258821.33980.692925.1413
HGS72 117 160 211 25625.40290.145919.53720.668925.5705
SMA74 117 160 211 25625.53610.104719.42910.667225.5731
mWOAPRC266 60 99 149 242 25629.39140.073819.02590.718929.5190
WOA55 93 129 165 211 25629.36770.057221.65420.715729.4173
ACWOA5 55 102 156 209 25629.36760.092619.34510.706729.5184
AWOA5 57 102 160 256 25629.33730.130919.29330.702129.5184
HIWOA7 44 85 148 243 25629.18540.204118.20310.701129.3950
ESSAWOA6 64 110 149 256 25628.87450.424719.24850.723129.5088
WOAmM5 57 112 158 244 25629.37630.095419.73130.705429.5789
m-SDWOA7 58 108 145 226 25629.35610.070318.84950.730829.4787
MPBOA49 82 113 155 211 25028.98580.120520.57490.730129.2399
HBO34 69 98 156 212 25528.34960.308319.50880.716329.0105
HGS9 50 104 160 256 25629.12730.232519.33740.701629.4210
SMA5 56 100 151 249 25629.47760.114119.15030.715629.5754
Table 10

Comparison of results using image C3.

Algorithm | Image | Level | Intensity | Mean | Std | PSNR | SSIM | Best
mWOAPRC3388 157 25618.20207.2416e-1415.06440.510918.2020
WOA88 157 25618.20203.6134e-1315.06440.510918.2020
ACWOA88 157 25618.20196.3185e-0415.06440.510918.2020
AWOA88 157 25618.20207.8476e-1415.06440.510318.2020
HIWOA88 157 25618.20198.9770e-0415.06440.504918.2020
ESSAWOA88 157 25218.19180.027715.06440.510918.2020
WOAmM88 157 25618.20203.6134e-1315.06440.510918.2020
m-SDWOA88 157 25618.20203.6134e-1315.06440.510918.2020
MPBOA88 157 25418.20180.000415.06440.510918.2020
HBO93 159 25518.07260.104814.78650.499318.1970
HGS88 157 25618.20200.000115.06440.510918.2020
SMA88 157 25618.20200.000015.06440.510918.2020
mWOAPRC3472 123 174 25622.64899.3530e-0518.45000.607822.6489
WOA72 123 174 25622.64891.2746e-0418.45000.607822.6489
ACWOA72 123 174 25622.64730.003618.45000.607822.6489
AWOA72 123 174 25622.64881.3974e-0418.45000.602622.6489
HIWOA72 123 174 25622.64865.8876e-0418.45000.602222.6489
ESSAWOA70 122 173 24322.55340.098518.54920.614422.6461
WOAmM72 123 174 25622.64881.6908e-0418.45000.607822.6489
m-SDWOA72 123 174 25622.64891.1249e-0418.45000.607822.6489
MPBOA72 123 174 25422.64700.001218.45000.607822.6489
HBO75 136 175 25022.29930.213117.77580.578422.5725
HGS72 123 174 25622.64400.011018.45000.607822.6489
SMA72 123 174 25622.64880.000218.45000.607822.6489
mWOAPRC3566 107 147 186 25626.69372.6062e-0420.33470.659726.6939
WOA66 107 147 186 25626.69372.6807e-0420.33470.659726.6939
ACWOA66 107 147 186 25326.68710.012220.33470.659726.6939
AWOA66 107 147 186 25626.69347.2364e-0420.33470.657326.6939
HIWOA66 107 147 186 25126.68210.045820.33470.652026.6939
ESSAWOA71 111 147 188 25626.39300.161220.04150.645626.6705
WOAmM66 107 147 186 25626.69364.2296e-0420.33470.659726.6939
m-SDWOA66 107 147 186 25626.69356.3076e-0420.33470.659726.6939
MPBOA66 107 147 186 24726.68860.003820.33470.659726.6939
HBO57 103 148 188 23826.22810.250920.26470.663026.6309
HGS67 108 147 186 25626.65350.062720.31480.657926.6932
SMA66 107 147 186 25626.69300.001220.33470.659726.6939
mWOAPRC3664 104 143 182 221 25630.48880.003120.54180.663930.4935
WOA67 108 147 186 242 25630.48790.005120.31480.657930.5006
ACWOA68 107 149 189 242 25630.47010.039520.17200.650530.4930
AWOA66 106 147 186 242 25630.47840.044220.31050.662130.5000
HIWOA34 69 109 149 187 25230.41060.096520.46470.659230.4868
ESSAWOA63 99 143 183 222 25030.07770.308120.41170.662430.4686
WOAmM64 104 143 181 221 25630.48790.003820.53150.664330.4932
m-SDWOA66 107 149 188 242 25630.48810.004220.27530.654630.4930
MPBOA63 105 141 178 217 25530.47010.015720.64680.664230.4853
HBO40 82 119 153 194 24829.82080.278021.54140.696530.3395
HGS63 107 144 181 217 25630.42590.054220.65110.661130.4815
SMA64 104 143 181 221 25230.45750.056020.53150.664330.4936
Table 11

Algorithms with maximum mean fitness in different levels of COVID-19 X-ray images.

Image | Level | Algorithm
C1 | 3 | mWOAPR, WOA, AWOA, WOAmM, SMA
C1 | 4 | mWOAPR
C1 | 5 | mWOAPR
C1 | 6 | mWOAPR
C2 | 3 | mWOAPR, WOA, WOAmM, m-SDWOA, SMA
C2 | 4 | mWOAPR, WOAmM
C2 | 5 | mWOAPR
C2 | 6 | mWOAPR
C3 | 3 | mWOAPR, WOA, AWOA, WOAmM, m-SDWOA, HGS, SMA
C3 | 4 | mWOAPR, WOA, m-SDWOA
C3 | 5 | mWOAPR, WOA
C3 | 6 | mWOAPR
Fig. 6

Segmented images of COVID-19 X-ray image 1 (C1) using Kapur's entropy at level 4.

Fig. 7

Segmented images of COVID-19 X-ray image 2 (C2) using Kapur's entropy at level 5.

Fig. 8

Segmented images of COVID-19 X-ray image 3 (C3) using Kapur's entropy at level 6.

Based on the preceding analysis, mWOAPR is the best method among the compared algorithms for segmenting COVID-19 chest X-ray images, and its segmentation performance improves as the threshold level increases.

Description of the lesion parts in COVID-19 X-ray images and comparison with normal chest X-ray image

Images (a), (b), and (c) in Fig. 9 show the COVID-19 X-ray images (C1, C2, C3) segmented by mWOAPR using threshold levels 4, 5, and 6, respectively. In each image, the grey portion indicated by the red arrow is the damaged area, and the black area indicated by the green arrow is the unaffected segment. Segmenting the COVID-19 X-ray images makes it simple to identify the infected area and its severity. Images (a), (b), and (c) make clear that image (c) shows the heaviest infection; even though the original X-ray images C2 and C3 are nearly identical, the segmented images reveal a greater disease effect in C3.
Fig. 9

Illustration of the lesion part and unaffected part in COVID-19 and normal X-ray image.

Image (d) in the figure shows the segmented image of a normal chest scan. The vital organs, namely the lungs and heart, lie in the upper abdominal areas colored green in image (d). The black region within the designated portion of the image confirms the patient's normalcy. The segmented images make clear that image (d) has more active parts than the other images in the figure.

Computational complexity analysis and statistical analysis

The first subsection presents the worst-case runtime of the algorithm and compares the computational complexity of mWOAPR with that of WOA. The second subsection statistically analyzes the evaluated results to verify the proposed algorithm's performance.

Analysis of computational complexity

The run time of an algorithm is directly related to its computational complexity. In this section, the computational complexity of WOA is evaluated and compared with that of mWOAPR. Let T be the maximum number of iterations used as the termination criterion for both algorithms.

Comparison of computational complexity with WOA

The primary operations determining the computational complexity of WOA are as follows. Initializing the whale population is O(N), where N is the size of the population. Fitness evaluation of the initial population is O(N). Sorting the population and determining the best solution is O(N log N). Over the iterations, updating the whale population and evaluating fitness costs O(T × N), where T is the maximum number of iterations, and sorting the population and determining the best solution costs O(T × N log N). Therefore, the total time complexity of WOA is O(T × N log N). Although WOA and mWOAPR both start with population size N, in mWOAPR the population decreases gradually with increasing iteration until it reaches 15 instead of N. It is therefore evident that the complexity of mWOAPR is lower than that of WOA.
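The population-reduction step can be expressed as a simple schedule. The paper states only that the population shrinks gradually from N down to 15; the linear schedule and the names below are assumptions made for illustration:

```python
def population_size(t, t_max, n_init, n_min=15):
    """Population size at iteration t under a linear reduction from
    n_init down to n_min over t_max iterations.  The linear shape is
    an assumption; the paper only specifies the final size of 15."""
    frac = t / t_max                            # progress through the run
    return max(n_min, round(n_init - (n_init - n_min) * frac))
```

Because fewer whales are updated and evaluated in later iterations, the total number of function evaluations (and hence runtime) drops relative to a fixed-population WOA run.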

Statistical analysis

The Friedman test is employed for statistical comparison. Friedman's test is a nonparametric test used to detect differences in treatments (methods) across multiple attempts (functions). It is used in place of ANOVA when the fundamental assumption of ANOVA is violated, i.e., the data do not come from a normal population. The test extends the paired-samples Wilcoxon signed-rank test to more than two treatments (strategies); with two treatments, the tests are identical. Table 12 depicts the result of Friedman's rank test. Column 2 of the table shows the mean rank of the algorithms used for comparison; in column 3, the final position is calculated from the evaluated mean rank. The fitness values evaluated at threshold levels 3, 4, 5, and 6 on every image by each algorithm are used to calculate the mean rank. Since image segmentation is posed here as a maximization problem, the algorithm with the highest mean rank is considered the best, and the final ranks of the other compared algorithms are determined in the same way. Fig. 10 shows the graphical representation of the mean rank evaluated by Friedman's test.
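The mean ranks in column 2 can be reproduced with a simple ranking routine. Below is a sketch for a maximization problem; unlike the full Friedman procedure, tied values are not given averaged ranks here, and the function name and data are illustrative:

```python
import numpy as np

def friedman_mean_ranks(scores):
    """Mean Friedman rank of each algorithm (column) over test cases
    (rows), for a maximisation problem: the best value in a row gets
    the highest rank.  Ties are not averaged in this sketch."""
    n = scores.shape[1]
    order = scores.argsort(axis=1)              # worst-to-best per row
    ranks = np.empty_like(order)
    rows = np.arange(scores.shape[0])[:, None]
    ranks[rows, order] = np.arange(1, n + 1)    # rank 1 = worst, n = best
    return ranks.mean(axis=0)
```

Applied to the full fitness matrix (every image and threshold level as a row, every algorithm as a column), this yields the mean ranks reported in Table 12.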
Table 12

Statistical comparison outcomes of the employed algorithms.

Algorithm    Mean rank    Final rank
mWOAPR       11.18        1
WOAmM         9.39        2
SMA           9.29        3
m-SDWOA       9.21        4
WOA           9.00        5
AWOA          8.07        6
HGS           5.04        7
ACWOA         4.99        8
MPBOA         4.94        9
HIWOA         3.89        10
ESSAWOA       2.00        11
HBO           1.00        12

P-value: 4.28E-65 < 0.01 indicates that the hypothesis is rejected at the 1% significance level, implying a significant difference in the performance of the compared algorithms.
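The mean ranks in Table 12 come from ranking the algorithms within each image/threshold combination and averaging. A minimal sketch of that ranking and of the Friedman chi-square statistic is given below; the fitness matrix is made-up illustrative data (rows = image/threshold combinations, columns = algorithms), not the paper's results, and ties are not given the usual mid-rank correction.

```python
def friedman(matrix):
    """Return (mean rank per column, Friedman chi-square statistic).

    Segmentation is a maximization problem, so within each row the
    highest fitness receives the highest rank k and the lowest rank 1.
    Ties keep their input order (no mid-rank correction in this sketch).
    """
    n = len(matrix)          # number of blocks (image/threshold pairs)
    k = len(matrix[0])       # number of algorithms
    rank_sums = [0.0] * k
    for row in matrix:
        order = sorted(range(k), key=lambda j: row[j])   # ascending fitness
        for rank, j in enumerate(order, start=1):        # worst -> 1, best -> k
            rank_sums[j] += rank
    mean_ranks = [s / n for s in rank_sums]
    # Friedman statistic: Q = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    q = 12.0 / (n * k * (k + 1)) * sum(s * s for s in rank_sums) - 3.0 * n * (k + 1)
    return mean_ranks, q

fitness = [                 # hypothetical fitness values for algorithms A, B, C
    [12.3, 11.9, 12.6],
    [18.1, 17.7, 18.4],
    [21.0, 20.2, 21.3],
    [15.4, 15.9, 15.2],
]
mean_ranks, q = friedman(fitness)
print(mean_ranks)   # [2.0, 1.5, 2.5] -> algorithm C ranks best
print(q)            # 2.0
```

The algorithm with the largest mean rank gets final rank 1, exactly as mWOAPR does in Table 12.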
Fig. 10

Graphical representation of the evaluated mean rank.


Convergence analysis

Convergence graphs are drawn mainly to verify the solution-generating speed of the algorithms. Fig. 11, Fig. 12 show the convergence graphs for the benchmark images and the COVID-19 X-ray images, respectively. A population size of 50 and 5000 function evaluations were used as the termination criterion to draw the graphs. In both figures, the graphs for threshold levels 4, 5, and 6 are shown in rows 1, 2, and 3, respectively. In every diagram, the number of function evaluations is shown on the X-axis, and the Y-axis represents the fitness value reached by each algorithm at that point; the best value generated by an algorithm after every iteration is plotted until the termination criterion is satisfied. Among the curves of the compared algorithms, the one that flattens toward its horizontal boundary first corresponds to the fastest-converging algorithm, while its height on the Y-axis shows the best optimal value attained during convergence. An algorithm whose curve both flattens early and attains the highest value on the Y-axis is considered the more efficient. Convergence curves of all images for all the algorithms employed in the study, using thresholds 4, 5, and 6, are given in Fig. 1, Fig. 2 and Fig. 3 of Appendix-I.
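The fitness that these curves track is Kapur's entropy of the thresholded image. As an illustration only, the sketch below implements the common 1-D-histogram form of that objective; the paper actually builds its fitness from a 2-D histogram of the greyscale image, so this should be read as the shape of the objective, not the paper's exact function.

```python
import math

def kapur_entropy(hist, thresholds):
    """Sum of the entropies of the grey-level classes induced by `thresholds`
    on histogram `hist` (a maximization objective: higher is better)."""
    total = sum(hist)
    probs = [h / total for h in hist]
    # class boundaries: [0, t1), [t1, t2), ..., [tm, L)
    bounds = [0] + sorted(thresholds) + [len(hist)]
    fitness = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(probs[lo:hi])                # probability mass of the class
        if w <= 0.0:
            continue                         # empty class contributes nothing
        # Shannon entropy of the normalised in-class distribution
        fitness -= sum(p / w * math.log(p / w) for p in probs[lo:hi] if p > 0)
    return fitness

# Uniform 256-bin histogram, one threshold at 128: each class has entropy ln(128)
hist = [1] * 256
print(kapur_entropy(hist, [128]))   # 2 * ln(128) ≈ 9.704
```

On a uniform histogram, adding thresholds always raises this objective, which is why the optimal fitness values in the convergence plots grow with the threshold level.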
Fig. 11

Convergence curves of WOA and mWOAPR on benchmark images.

Fig. 12

Convergence curves of WOA and mWOAPR on COVID-19 X-ray images.

Fig. 1

Convergence curves of benchmark images a-f and Covid-19 X-Ray images using threshold-4.

Fig. 2

Convergence curves of benchmark images a-f and Covid-19 X-Ray images using threshold-5.

Fig. 3

Convergence curves of benchmark images a-f and Covid-19 X-Ray images using threshold-6.


Conclusion

This research introduces a new WOA variant that improves the balance between the search processes. The search-for-prey phase of basic WOA is replaced by random initialization of solutions during the exploration phase, and the values of the coefficient vector A and the constant b are changed to aid the exploration and exploitation processes. To increase convergence speed and exploitation, a population reduction method is used, and a traversal parameter introduced during execution selects between the exploration and exploitation phases. This overall setup considerably improves the basic WOA's performance. The proposed method is used to segment benchmark images and COVID-19 X-ray images, which may aid clinicians in identifying the disease and planning treatment. The advantage of the proposed mWOAPR algorithm over the compared methods is confirmed by comparing the evaluated outcomes with those of several metaheuristic algorithms.
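The mechanisms listed above can be sketched as a single loop. This skeleton is emphatically not the authors' algorithm: it only mirrors the described structure (random re-initialization as the exploration move, a traversal parameter choosing the phase, linear population reduction to 15), while the exploitation step uses the textbook WOA encircling/spiral moves and every constant, schedule, and name is an assumption. It is written as minimization; for Kapur's entropy one would maximize by negating the objective.

```python
import math, random

def mwoapr_sketch(f, dim, bounds, N=30, T=200, floor=15, seed=1):
    """Illustrative skeleton only: exploration by re-initialization, standard
    WOA exploitation moves, greedy replacement, shrinking population."""
    random.seed(seed)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(N)]
    best = min(pop, key=f)
    for t in range(T):
        a = 2.0 * (1 - t / T)                          # textbook WOA decay of |A|
        # linear population reduction from N down to `floor` (assumed schedule)
        size = max(floor, round(N - (N - floor) * t / (T - 1)))
        pop = sorted(pop, key=f)[:size]
        new_pop = []
        for x in pop:
            if random.random() < 0.5:                  # traversal parameter (assumed 0.5)
                # exploration: random re-initialization of the solution
                cand = [random.uniform(lo, hi) for _ in range(dim)]
            elif random.random() < 0.5:
                # exploitation: encircling the best solution
                A = [a * (2 * random.random() - 1) for _ in range(dim)]
                cand = [b_i - A_i * abs(2 * random.random() * b_i - x_i)
                        for b_i, A_i, x_i in zip(best, A, x)]
            else:
                # exploitation: logarithmic spiral around the best solution
                l = random.uniform(-1, 1)
                cand = [abs(b_i - x_i) * math.exp(l) * math.cos(2 * math.pi * l) + b_i
                        for b_i, x_i in zip(best, x)]
            cand = [min(hi, max(lo, c)) for c in cand]  # clamp to the search space
            new_pop.append(cand if f(cand) < f(x) else x)  # greedy replacement
        pop = new_pop
        best = min(pop + [best], key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, val = mwoapr_sketch(sphere, dim=5, bounds=(-10, 10))
print(val)  # small positive value near the optimum at the origin
```

Even this crude version shows the intended behavior: exploration keeps the search global early on, while the shrinking, greedily-selected population concentrates evaluations near the best solution late in the run.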

Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Related articles (3 in total)

1.  An agent-based transmission model of COVID-19 for re-opening policy design.

Authors:  Alma Rodríguez; Erik Cuevas; Daniel Zaldivar; Bernardo Morales-Castañeda; Ram Sarkar; Essam H Houssein
Journal:  Comput Biol Med       Date:  2022-07-19       Impact factor: 6.698

2.  Comparative Performance Analysis of Differential Evolution Variants on Engineering Design Problems.

Authors:  Sanjoy Chakraborty; Apu Kumar Saha; Sushmita Sharma; Saroj Kumar Sahoo; Gautam Pal
Journal:  J Bionic Eng       Date:  2022-06-13       Impact factor: 2.995

3.  A Bio-Inspired Multi-Population-Based Adaptive Backtracking Search Algorithm.

Authors:  Sukanta Nama; Apu Kumar Saha
Journal:  Cognit Comput       Date:  2022-01-30       Impact factor: 4.890

