
BCOVIDOA: A Novel Binary Coronavirus Disease Optimization Algorithm for Feature Selection.

Asmaa M Khalid1, Hanaa M Hamza1, Seyedali Mirjalili2,3, Khalid M Hosny1.   

Abstract

The increased use of digital tools such as smartphones, Internet of Things devices, cameras, and microphones has led to the production of big data. Large data dimensionality, redundancy, and irrelevance are inherent challenging problems when it comes to big data. Feature selection is a necessary process to select the optimal subset of features when addressing such problems. In this paper, the authors propose a novel Binary Coronavirus Disease Optimization Algorithm (BCOVIDOA) for feature selection, where the Coronavirus Disease Optimization Algorithm (COVIDOA) is a new optimization technique that mimics the replication mechanism used by Coronavirus when hijacking human cells. The performance of the proposed algorithm is evaluated using twenty-six standard benchmark datasets from the UCI Repository. The results are compared with nine recent wrapper feature selection algorithms. The experimental results demonstrate that the proposed BCOVIDOA significantly outperforms the existing algorithms in terms of accuracy, best cost, average cost (AVG), standard deviation (STD), and size of selected features. Additionally, the Wilcoxon rank-sum test is calculated to prove the statistical significance of the results.
© 2022 Elsevier B.V. All rights reserved.


Keywords:  Best cost; Big data; Convergence; Coronavirus; Evolutionary algorithm; Feature selection; Frameshifting; Meta-heuristic; Optimization

Year:  2022        PMID: 35464666      PMCID: PMC9014647          DOI: 10.1016/j.knosys.2022.108789

Source DB:  PubMed          Journal:  Knowl Based Syst        ISSN: 0950-7051            Impact factor:   8.139


Introduction

With the rapid use of computer and Internet technologies, immense quantities of data with hundreds of features are produced. In data mining, useful information must be extracted from such big data to support decision making. Selecting only the relevant and useful features has a significant effect in many applications such as text mining [1], image processing [2], bioinformatics [3], and industrial applications [4]. The Internet of Things (IoT) is a modern and powerful technology in which physical objects embedded with sensors are connected through a network to exchange data [5]. Challenges to IoT applications include storing and processing the vast amount of data gathered by IoT sensors [6]. Another challenge is the existence of redundant, irrelevant, and noisy features. A solution to these challenges is feature selection, which selects the optimum subset of features. Feature selection is a preprocessing, mining, and machine learning problem, since it removes redundant and irrelevant variables from a dataset [7]. Feature selection aims to reduce data dimensionality, reduce training time, and increase generalization. A feature selection model consists of three main components: a classifier (e.g., Support Vector Machine (SVM), K-Nearest Neighbors (KNN)), evaluation criteria (such as classification accuracy), and a search algorithm [8]. The classifier assigns each subset to a specific class, the evaluation criteria are used to evaluate each subset, and the search algorithm selects the optimum subset of features. The main categories of feature selection methods are wrappers and filters. Wrapper methods involve classifiers and detect the interactions between variables. Filter methods evaluate feature subsets based on the data alone, regardless of the model [9]. Filter methods are much faster than wrapper methods, as they do not involve classifiers, but they may fail to find the best subset of features.
However, wrapper methods usually provide the best-performing feature subset for a predetermined classifier [10]. For filtering algorithms, Xu et al. [11] proposed an SVM-classification-based two-sided cross-domain collaborative filtering algorithm that infers intrinsic features of users and items. The major innovation of that model is that domain-independent intrinsic features of users and items can be inferred from domain-dependent rating matrices. The results showed that the model significantly outperforms state-of-the-art algorithms at various sparsity levels. Additionally, a Cross-Domain Collaborative Filtering (CDCF) algorithm is proposed in [12], in which knowledge is transferred from user- and item-side auxiliary domains to the target domain by expanding the original feature vector. The experimental results showed that this algorithm outperforms the state-of-the-art baseline algorithms. According to a specific evaluation metric, a feature selection method searches for the best feature subset among all possible subsets. Searching algorithms can be classified into exact search methods and metaheuristics [13]. Exact methods search the entire search space; for a feature set with k features, the size of the search space is proportional to 2^k, which requires too much computational time [14]. On the other hand, metaheuristic search strategies can be used to find a (near-)optimum subset of the original set by using local search or imitating a natural process. In the feature selection problem, a dataset can be represented by a two-dimensional matrix X with N rows representing instances, where i = 1, 2, …, N, and M columns representing features in each instance, where j = 1, 2, …, M; x_ij is the value of the jth feature in the ith instance. Fig. 1 shows a matrix representing a dataset. Each class has similar values, whereas two different classes have elements with different values.
Fig. 1

Dataset representation.
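As noted above, the number of candidate feature subsets grows exponentially with the number of features k, which is why exact (exhaustive) search is impractical for all but tiny datasets. A small illustrative Python sketch (not from the paper):

```python
# Illustration: the number of candidate feature subsets grows as 2^k,
# which is why exhaustive search quickly becomes infeasible.
from itertools import combinations

def num_subsets(k):
    """Total number of feature subsets (including the empty set) for k features."""
    return 2 ** k

def all_subsets(features):
    """Enumerate every subset -- exponential cost, only viable for tiny k."""
    subs = []
    for r in range(len(features) + 1):
        subs.extend(combinations(features, r))
    return subs

print(num_subsets(10))   # 1024
print(num_subsets(100))  # astronomically large -- hopeless for exact search
print(len(all_subsets(["f1", "f2", "f3"])))  # 8
```

For k = 100 features there are already more subsets than could ever be evaluated, which motivates the metaheuristic search strategies discussed next.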

Several metaheuristic algorithms have been proposed in the literature to solve feature selection problems. These algorithms include binary Cuckoo Search (BCS) [15], Binary Flower Pollination Algorithm (BFPA) [16], Binary Dragonfly Algorithm (BDA) [17], Simulated Annealing (SA) [18], Particle Swarm Optimization (PSO) [19], Genetic Algorithm (GA) [20], Differential Evolution (DE) [21], [22], Artificial Bee Colony (ABC) [23], Ant Colony Optimization (ACO) [24], Grey Wolf Optimization (GWO) [14], Whale Optimization Algorithm (WOA) [25], and Bat Algorithm (BA) [26]. In addition to these algorithms, hybrid approaches have been proposed in which two or more algorithms are combined to solve feature selection problems. In [27], three types of hybridization of GA and SA are proposed to solve some non-linear optimization cases. The algorithm is tested using five benchmark functions, and the obtained results showed that hybrid GA-SA could enhance the performance of GA and SA to provide better results. Mafarja and Mirjalili [28] proposed two hybridization modes based on WOA. The first model embeds the SA algorithm into WOA, while in the second model SA improves the best solution found after each iteration. The performance is evaluated on 18 UCI datasets. The results confirm the efficiency of the proposed approaches in enhancing classification accuracy compared to other feature selection algorithms. In addition, a binary version of hybrid GWO and PSO algorithms to solve feature selection problems is proposed in [29], employing 18 UCI benchmark datasets. The results showed that the proposed algorithm outperformed the state-of-the-art binary algorithms using performance measures such as accuracy and computational time. Additionally, recent optimization algorithms are introduced and applied to feature selection, such as the meta-heuristic quantum-inspired immune clone optimization algorithm (QICO) [30], used for optimal feature selection from gene expression data to improve cancer data classification.
Also, a clustering-based hybrid approach [31] is introduced for gene subset selection of microarray gene expression data. The experimental outcomes denote that the proposed model achieves efficient results in selecting the best gene subsets. The Coronavirus Optimization Algorithm (COVIDOA) [32] is a recent technique that mimics the Coronavirus mechanism inside the human body. COVIDOA has been tested on many benchmark and real-world problems and compared to other metaheuristic techniques. COVIDOA has been proven to have better performance than other well-known metaheuristics, in addition to its high exploration and exploitation capabilities. Additionally, the no-free-lunch theorem has logically proved that no one metaheuristic algorithm is best suited for solving all optimization problems. These observations motivated the authors to propose a binary version of COVIDOA as a wrapper feature selection method to improve the performance of feature selection and classification tasks. The main contributions of this paper can be summarized as follows: (1) a binary version of the recent COVIDOA algorithm is proposed to solve the feature selection problem; (2) the performance of the proposed algorithm is tested using 26 standard benchmark datasets; (3) a number of evaluation measures are utilized, including classification accuracy, best fitness, average fitness, standard deviation, and selection size; (4) the Wilcoxon rank-sum test is conducted, and the results prove the significance of the proposed algorithm. This paper is organized as follows: Section 2 provides a brief overview of COVIDOA. In Section 3, the proposed binary COVIDOA is introduced. The datasets and experimental results are discussed in Section 4. Finally, conclusions and future work are given in Section 5.

Coronavirus optimization algorithm

COVIDOA is a new evolutionary optimization algorithm proposed by Khalid et al. [32]. COVIDOA is inspired by the replication mechanism of Coronavirus particles when attacking the human body. Four stages are considered to simulate the replication lifecycle of Coronavirus, as follows. Virus entry and uncoating: the virus uses the spike protein on its surface as a key to enter a human cell. Once inside the cell, the virus contents (RNA) are released into the cell cytoplasm [33]. Virus replication: the virus uses the ribosomal frameshifting technique for replication [32]. Frameshifting is a process in which a specific reading frame of an RNA molecule shifts to another reading frame to produce a new protein sequence [33], [34], [35]. Frameshifting results in the creation of several viral proteins. In the proposed algorithm, a solution (virus particle) is selected for replication. The frameshifting technique produces several viral proteins that are then combined to form a new virus particle (solution). The most popular type of frameshifting is +1 frameshifting [36], in which the parent solution's values are shifted to the right by 1 and the value in the first position is set to a random value in the range [minVal, maxVal], where minVal and maxVal are the minimum and maximum values for the variables in each solution. Virus mutation: Coronavirus accumulates random mutations to escape from the human immune system [37]. In the proposed algorithm, a mutation operator is applied to the solution created in the previous step to generate a new mutated solution, where X is the solution before mutation, X′ is the mutated solution, x_i and x′_i are the ith elements in the old and new solutions, respectively, i = 1, …, D, r_i are random values in the range [minVal, maxVal], and MR is the mutation rate. New virion release: finally, the new virion is released, trying to hijack new healthy cells.
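The +1 frameshifting and mutation operators just described can be sketched in Python. This is a hedged illustration: the function and variable names are ours, and the uniform-random mutation is our reading of the paper's (garbled in this extract) equations.

```python
import random

def plus1_frameshift(parent, min_val, max_val):
    """+1 frameshifting sketch: shift the parent's genes right by one and
    put a random value in [min_val, max_val] at the first position."""
    return [random.uniform(min_val, max_val)] + parent[:-1]

def mutate(solution, mutation_rate, min_val, max_val):
    """Replace each gene with a random value with probability MR
    (one common mutation convention; an assumption here)."""
    return [random.uniform(min_val, max_val) if random.random() < mutation_rate
            else x for x in solution]

parent = [0.2, 0.5, 0.9, 0.1]
child = plus1_frameshift(parent, 0.0, 1.0)
print(child[1:])  # [0.2, 0.5, 0.9] -- the parent shifted right by one
```

Note how the shift preserves most of the parent's genetic material while the random first gene injects diversity, mirroring how frameshifting reuses the same RNA to produce a different protein.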
The replication lifecycle of Coronavirus is shown in Fig. 2. The flowchart of COVIDOA is shown in Fig. 3. The parameters of COVIDOA are described as follows:
Fig. 2

Coronavirus replication lifecycle.

Fig. 3

The flowchart of COVIDOA.

PopNo: number of solutions in the initial population.
Max_Iter: maximum number of iterations.
MinVal, MaxVal: lower and upper bounds of each variable in a COVID solution. The values of MinVal and MaxVal depend on the problem; in the case of feature selection, they are set to 0 and 1, respectively.
D: the problem dimension. As with MinVal and MaxVal, it depends on the problem; in the feature selection problem, it equals the number of features in the dataset.
MR: the mutation rate. As mentioned in [35], the natural mutation rate of Coronavirus is very low; however, the mutation rate in the proposed algorithm is set to a larger value in the range [0.1, 0.001], which helps in exploring new promising regions and avoiding getting stuck in a local minimum.
Shifting Number: the number by which the variables of each solution are shifted in the frameshifting technique. The most common type of frameshifting is +1 frameshifting, which uses a shifting number of 1.
numOfProtiens: the number of viral proteins generated from one virus particle in the replication process. Each virus particle can generate millions of viral proteins; however, we set it to 2 proteins to avoid computational complexity.
As mentioned in the following sections, parameter tuning is applied to test the impact of changing parameter values on the performance of the proposed algorithm. Pseudocode of the native COVIDOA is given in Box I.
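The stages and parameters above can be assembled into a working loop. The following is a hedged Python sketch of the native COVIDOA; since Box I is not reproduced in this extract, structural details such as the protein-combination rule and the elitist survivor selection are our assumptions, not the paper's exact pseudocode.

```python
import random

def covidoa(fitness, dim, pop_no=50, max_iter=100, min_val=0.0, max_val=1.0,
            mr=0.1, num_proteins=2):
    """Hedged sketch of the native COVIDOA loop (minimization)."""
    # initial population of random virus particles (solutions)
    pop = [[random.uniform(min_val, max_val) for _ in range(dim)]
           for _ in range(pop_no)]
    pop.sort(key=fitness)
    for _ in range(max_iter):
        new_pop = []
        for parent in pop:
            # replication: generate viral proteins via +1 frameshifting
            proteins = [[random.uniform(min_val, max_val)] + parent[:-1]
                        for _ in range(num_proteins)]
            # combine the proteins gene-by-gene into one new virion
            child = [random.choice(genes) for genes in zip(*proteins)]
            # mutation: each gene is replaced with probability MR
            child = [random.uniform(min_val, max_val) if random.random() < mr
                     else g for g in child]
            new_pop.append(child)
        # elitist survivor selection (an assumption)
        pop = sorted(pop + new_pop, key=fitness)[:pop_no]
    return pop[0]

# smoke test: minimise the sphere function over [0, 1]^5
best = covidoa(lambda x: sum(v * v for v in x), dim=5, max_iter=30)
print(sum(v * v for v in best))  # small value, close to the optimum at 0
```

The loop keeps the population size fixed while each parent spawns one mutated child per iteration; the stated defaults (PopNo = 50, Max_Iter = 100, MR = 0.1, 2 proteins) follow the paper's parameter setting.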

The proposed binary approach

In binary problems, such as feature selection, each solution is represented by a one-dimensional vector that contains only zeros and ones, where 1 indicates that a feature is selected and 0 indicates that it is ignored. The number of elements in the vector equals the number of features in the dataset. The binary representation of a COVID solution for a dataset with D features is shown in Fig. 4.
Fig. 4

The binary representation of the COVID solution.

Initial stage

In the initial stage of the COVID algorithm, a population pop of randomly generated agents is created, where each agent represents a solution to the problem. The initial population is generated as X_i = LB + r · (UB − LB), where X_i is the solution at the ith location in the population pop, r is a random value between 0 and 1, and UB and LB are the upper and lower boundaries of the problem. In the binary version, UB = 1 and LB = 0. In the proposed binary algorithm, each solution is converted into its binary representation using a binarization technique. The sigmoid function is one of the most used transformation functions belonging to the S-shaped family [28], [38]. It maps each value in the real-valued solution to a value of 0 or 1. The curve of the sigmoid function is shown in Fig. 5.
Fig. 5

The sigmoid function.
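As a hedged sketch of the binarization step: the sigmoid transfer function squashes each real-valued gene into (0, 1), and a stochastic threshold then yields a bit. The thresholding rule shown is one common convention in the S-shaped transfer-function literature; the paper's exact rule may differ.

```python
import math
import random

def sigmoid(x):
    """S-shaped transfer function mapping any real x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(solution, rng=random.random):
    """Keep feature i (bit = 1) if a uniform draw falls below sigmoid(x_i)."""
    return [1 if rng() < sigmoid(x) else 0 for x in solution]

print(sigmoid(0.0))                              # 0.5
print(binarize([10.0, -10.0], rng=lambda: 0.5))  # [1, 0]
```

Large positive genes are almost always kept and large negative genes almost always dropped, while genes near zero are selected roughly half the time, which keeps the binary search stochastic.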


The KNN classifier

The KNN classifier is used to obtain the classification accuracy of each solution for the following advantages [22], [39]: It is straightforward to implement. Only two parameters are required to implement KNN, i.e., the parameter K, which represents the number of neighbors, and the distance function (e.g., Euclidean or Manhattan). It is a memory-based approach that allows the algorithm to respond quickly to changes in the input [40]. It is efficient in finding the best subset of features and highly robust to noisy data. The KNN algorithm gives the user the flexibility to choose the distance function while building the KNN model (e.g., Euclidean distance, Hamming distance, Manhattan distance). The role of the KNN classifier is to assign each data point to the class to which most of its closest neighbors belong. Each dataset is divided into training, validation, and testing sets using cross-validation in the same way as in [41]: some folds are used for training and validation, and the remaining fold is utilized for testing. For each sample in the test dataset, the KNN algorithm determines the closest neighbors from the training dataset by computing the Euclidean distance, where D is the number of features in a given dataset. As shown in Fig. 6, the classifier assigns the unknown sample to class B (when K = 3) because 2 of its closest points are from class B. The classification accuracy of the classifier determines how accurate the class prediction is and can be obtained by dividing the number of correctly classified instances by the total number of instances in the dataset. Conversely, the classification error rate can be obtained by dividing the number of incorrectly classified instances by the total number of instances in the dataset.
Fig. 6

KNN classifier.

The classification accuracy rate must be calculated using the KNN classifier to evaluate each solution, where the best solution is the one with the highest accuracy rate.
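A minimal, dependency-free sketch of the KNN classification step described above (the helper names are ours, and K = 3 in the example; the extract does not state the paper's K):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k Euclidean-nearest
    training points."""
    dists = sorted(
        (math.dist(x, t), label) for t, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# toy data: three points of class A near the origin, three of class B near (5, 5)
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(X, y, (5.2, 5.2)))  # "B" -- all 3 nearest points are B
```

The classification accuracy of a candidate feature subset is then simply the fraction of test samples for which this vote matches the true label.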

The fitness function

Classification accuracy is not the only measure used to compare solutions; an additional objective is needed, namely the number of selected features. When two solutions have the same classification accuracy, the one with the minimum number of selected features is better. Therefore, the fitness function both maximizes the classification accuracy rate (i.e., minimizes the classification error) and minimizes the number of selected features: Fitness = α · ER + (1 − α) · |S|/|F|, where α is a value between 0 and 1, ER is the error rate of the classifier, |S| represents the number of selected features, and |F| is the total number of features.
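The weighted fitness above can be expressed compactly. Note that α = 0.99 below is a common weighting in the wrapper feature selection literature and is an assumption here, since the paper's exact value is not legible in this extract.

```python
def fitness(error_rate, n_selected, n_total, alpha=0.99):
    """Wrapper fitness (to be minimized): weighted sum of the classifier's
    error rate and the fraction of features selected.
    alpha = 0.99 is an assumed weighting, not confirmed by the extract."""
    return alpha * error_rate + (1 - alpha) * n_selected / n_total

# two solutions with equal accuracy: the one with fewer features wins
print(fitness(0.02, 5, 30) < fitness(0.02, 12, 30))  # True
```

Because α is close to 1, accuracy dominates; the feature-count term only breaks ties between solutions of (near-)equal accuracy, exactly as the text describes.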

Position update

As mentioned in [32], virus particles (solutions) use the frameshifting technique to produce multiple protein sequences, which are then combined to form a new particle (solution). In the original COVIDOA version, the +1 frameshifting technique is applied to a solution by shifting its variables to the right by 1 and assigning the first position a random value in the range [minVal, maxVal], as mentioned in Eqs. (1), (2). In the binary version, the frameshifting technique is applied analogously, where P^i refers to the ith generated protein, P is the parent solution, D is the problem dimension, and r is a random value between 0 and 1. After replication, mutation in the binary version can be applied, where X is the solution before mutation, X′ is the mutated solution, and x_j and x′_j are the jth elements in the old and new solutions.
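The binary position update can be sketched as follows. This is our reading of the (garbled in this extract) binary equations: the shift operates on bits, the new first bit is drawn at random, and mutation becomes a bit flip; the paper's exact operators may differ.

```python
import random

def binary_frameshift(parent):
    """Binary +1 frameshifting sketch: shift the parent's bits right by
    one and draw the first bit at random."""
    return [random.randint(0, 1)] + parent[:-1]

def binary_mutate(bits, mr=0.1):
    """Flip each bit with probability MR (assumed bit-flip mutation)."""
    return [1 - b if random.random() < mr else b for b in bits]

p = [1, 0, 1, 1, 0]
child = binary_frameshift(p)
print(child[1:])  # [1, 0, 1, 1] -- the parent's bits shifted right by one
```

Each 1 in the resulting vector means the corresponding feature is kept, so the shift reuses most of the parent's feature choices while the random head bit and bit flips explore neighboring subsets.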

Experimental results

This section presents the proposed algorithm results and the comparisons with the state-of-the-art algorithms. The proposed and state-of-the-art algorithms were run on a laptop with the following specifications: Intel(R) Core(TM) i7-1065G7 CPU, 8 GB RAM, Windows 10 operating system, and MATLAB R2016a development environment.

Datasets

We applied the proposed method to 26 different datasets from the UC Irvine Machine Learning Repository [42] to prove the efficiency of the proposed algorithm. Each dataset is described in terms of the number of features, number of instances, number of classes, and the area to which it belongs. These 26 datasets are selected based on the variety in dimension size (number of features) and the areas they belong to: we utilized datasets with small (9, 11, 13), medium (64, 91, 256), and large (500, 617, 10000) dimension sizes, and the datasets belong to different areas such as life, computer, physical, game, and social. A detailed description of the datasets is presented in Table 1, where N/A means that the information is not known.
Table 1

Dataset description.

Dataset | No. of attributes | No. of instances | No. of classes | Area
1 | Heart | 13 | 270 | 5 | Life
2 | Zoo | 16 | 101 | 7 | Life
3 | Breast_cancer | 9 | 699 | 2 | Life
4 | Glass_identification | 11 | 214 | 6 | Physical
5 | Australian | 15 | 690 | 2 | N/A
6 | Spambase | 57 | 4601 | 2 | Computer
7 | EEG Eye State | 15 | 14980 | 2 | Life
8 | Segment | 19 | 2310 | 7 | N/A
9 | Waveform | 21 | 5000 | 3 | Physical
10 | Auto MPG | 8 | 398 | 2 | N/A
11 | House Voting | 16 | 435 | 2 | Social
12 | Wine | 13 | 178 | 3 | Physical
13 | Vowel | 13 | 990 | 11 | N/A
14 | Dermatology | 33 | 366 | 6 | Life
15 | Cryotherapy | 7 | 90 | N/A | Life
16 | M-of-n | 44 | 267 | N/A | N/A
17 | kr-vs-kp | 36 | 3196 | 2 | Game
18 | Optical recognition | 64 | 5620 | 10 | Computer
19 | Page blocks | 10 | 5473 | 2 | Computer
20 | Semion | 256 | 1593 | 10 | Computer
21 | Pendigits | 16 | 10992 | 2 | Handwriting
22 | Movement_libras | 91 | 360 | 15 | N/A
23 | arrhythmia | 279 | 452 | 13 | Life
24 | isolet5 | 617 | 7797 | 26 | Computer
25 | Mturk | 500 | 180 | N/A | Computer
26 | pixraw10P | 10000 | 100 | 10 | Face image data
Pseudocode of the proposed binary COVIDOA is given in Box II.

Parameter setting

In all algorithms, the number of solutions in the population is 50, and the maximum number of iterations is 100. The proposed and state-of-the-art algorithms were run 20 times and the best results gained from these runs are reported. The problem dimension equals the number of features in the dataset, and the search domain is set to [0,1]. The remaining parameters of the proposed and state-of-the-art algorithms are set as shown in Table 2.
Table 2

Parameter setting.

Algorithm | Parameter | Value
GA | Selection | Roulette wheel
GA | Crossover | Probability = 0.9
GA | Mutation | Probability = 0.05
DE | Scaling factor | 0.5
DE | Crossover probability | 0.5
PSO | Topology | Fully connected
PSO | Cognitive and social constants (C1, C2) | 2, 2
WOA | a | 2 to 0
WOA | r | [0,1]
GWO | a | 2 to 0
GWO | r | [0,1]
HH | a | 2 to 0
AOA | α | 5
AOA | μ | 0.5
COVID | Shifting No. | 1
COVID | No. of proteins | 2
COVID | Mutation rate | 0.1

Parameter tuning

To test the impact of changing parameter values on the performance of the COVID algorithm, we used nine different scenarios by changing the values of the parameters MR (mutation rate) and numOfProtiens. We utilized the values 0.1, 0.01, and 0.001 for MR and 2, 4, and 6 for numOfProtiens, which produces nine scenarios, as shown in Table 3. The feature selection results (for the Zoo dataset) of each scenario are shown in Table 4. It is observed that scenario 1 yields the best results, followed by scenario 3, which means that a higher mutation rate and a lower number of proteins improve the performance of the proposed algorithm.
Table 3

Scenarios of the tuning parameters.

Scenario | MR | numOfProtiens
1 | 0.1 | 2
2 | 0.01 | 2
3 | 0.001 | 2
4 | 0.1 | 4
5 | 0.01 | 4
6 | 0.001 | 4
7 | 0.1 | 6
8 | 0.01 | 6
9 | 0.001 | 6
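The nine tuning scenarios are simply the Cartesian product of the candidate numOfProtiens and MR values (the parameter spelling follows the paper's). A small sketch that reproduces the ordering of Table 3:

```python
from itertools import product

# Candidate values from the parameter-tuning experiment.
mrs = [0.1, 0.01, 0.001]
num_proteins = [2, 4, 6]

# Outer loop over numOfProtiens, inner loop over MR, matching Table 3's order.
scenarios = [
    {"scenario": i + 1, "MR": mr, "numOfProtiens": n}
    for i, (n, mr) in enumerate(product(num_proteins, mrs))
]
print(len(scenarios))  # 9
print(scenarios[0])    # {'scenario': 1, 'MR': 0.1, 'numOfProtiens': 2}
```

Running the optimizer once per scenario and comparing the resulting fitness values is exactly the grid search the paper reports on the Zoo dataset.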
Table 4

The results of parameter tuning on Zoo dataset.

Metric | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 | Scenario 7 | Scenario 8 | Scenario 9
Accuracy | 1 | 1 | 0.96078 | 1 | 0.98039 | 0.98039 | 0.98039 | 0.96078 | 0.98039
Best fitness | 0.00375 | 0.004375 | 0.042574 | 0.005625 | 0.024412 | 0.025037 | 0.025037 | 0.043824 | 0.024412
Average fitness | 0.0037 | 0.0044 | 0.0429 | 0.0056 | 0.0247 | 0.0250 | 0.0328 | 0.0474 | 0.0279
STD | 5.1021e−18 | 6.2304e−18 | 4.9024e−04 | 5.2304e−18 | 3.0894e−04 | 1.7435e−17 | 0.0095 | 0.0086 | 0.0067
Selection size | 6 | 7 | 6 | 9 | 8 | 9 | 9 | 8 | 8

Evaluation measures

The results of the proposed algorithm are compared to state-of-the-art feature selection algorithms, namely GA [43], DE [44], PSO [45], WOA [25], WOASA [28], GWOPSO [46], HH [47], GWO [48], and AOA [49]. The parameters of these algorithms are selected as suggested by their authors. The comparison is made according to the following measures. Classification accuracy: it measures how accurate the classifier is in selecting the optimum subset of features; the maximum classification accuracy is MaxAccuracy = max_{n=1,…,M} Acc_n, where Acc_n is the accuracy at run n and M is the number of runs. Best cost: the best cost at run n is BestCost_n = min_{i=1,…,Max_Iter} Cost_i, where Cost_i is the cost obtained at iteration i; the best cost over the M runs is min_{n=1,…,M} BestCost_n. Average cost: the average cost at run n is AvgCost_n = (1/Max_Iter) Σ_{i=1}^{Max_Iter} Cost_i; the minimum average cost over M runs is min_{n=1,…,M} AvgCost_n. Standard deviation (STD): it shows how far the cost values are from the average cost; STD at run n is STD_n = sqrt((1/Max_Iter) Σ_{i=1}^{Max_Iter} (Cost_i − AvgCost_n)²), and the minimum STD over M runs is min_{n=1,…,M} STD_n. Selection size: Size_n refers to the number of selected features in the optimum solution obtained at run n; the minimum number of features over the M runs is min_{n=1,…,M} Size_n. Wilcoxon rank-sum test: the null hypothesis [50] is a type of hypothesis widely used in statistics to prove the statistical significance of results; it assumes no significant difference between the two methods' average values. The test results of the 15 datasets are compared using the Wilcoxon rank-sum test at the 5% significance level [51]; a small p-value (typically < 0.05) indicates strong evidence against the null hypothesis.
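The per-run aggregation described above can be sketched as follows. This is an illustrative helper with synthetic numbers (the real costs come from the optimizer runs); the formulas follow the prose definitions of the measures.

```python
import statistics

def summarize_runs(costs_per_run, accuracies, selection_sizes):
    """Aggregate per-run measures: best/average cost and STD are computed
    within each run, then the best value over the M runs is reported."""
    best_costs = [min(run) for run in costs_per_run]
    avg_costs = [statistics.mean(run) for run in costs_per_run]
    stds = [statistics.pstdev(run) for run in costs_per_run]
    return {
        "accuracy": max(accuracies),       # maximum accuracy over M runs
        "best_cost": min(best_costs),      # minimum best cost over M runs
        "avg_cost": min(avg_costs),        # minimum average cost over M runs
        "std": min(stds),                  # minimum STD over M runs
        "selection_size": min(selection_sizes),
    }

# two synthetic runs, each recording the cost at three iterations
runs = [[0.30, 0.12, 0.10], [0.25, 0.15, 0.09]]
out = summarize_runs(runs, accuracies=[0.97, 0.99], selection_sizes=[7, 5])
print(out["accuracy"], out["best_cost"], out["selection_size"])  # 0.99 0.09 5
```

Tables 5 to 10 report exactly these aggregates (over M = 20 runs in the paper's setting) for each dataset and algorithm.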

Results

This section presents the numerical results of the proposed binary COVIDOA and the comparisons with the state-of-the-art algorithms. The results of the proposed binary COVIDOA for feature selection are presented in Table 5. The proposed binary COVIDOA achieved the highest accuracy rate (100%) for the Zoo, Glass identification, and M-of-n datasets. The minimum accuracy achieved is 65% for the mturk dataset. Moreover, the binary COVIDOA achieved the minimum best fitness (0.001) and average fitness (0.0032) for the Zoo dataset and the minimum STD value (1.3948e−17) for the Cryotherapy dataset. In addition, the proposed algorithm achieved the minimum selection size of 1 out of 15 features for the Australian dataset. The average accuracy, best fitness, average fitness, standard deviation, and selection size over all 26 datasets using the binary COVIDOA are 92.5%, 0.0898, 0.0920, 0.0019, and 147.15, respectively. The obtained results reveal the efficiency of the proposed algorithm, which indicates its strong exploration and exploitation capabilities.
Table 5

Results obtained from the Proposed Binary COVIDOA.

Dataset | Accuracy | Best Cost | Average Cost | Standard Deviation (STD) | Selection Size
1 | Zoo | 1 | 0.00187 | 0.00332 | 1.0548e−05 | 3
2 | Heart | 0.87407 | 0.12774 | 0.1394 | 0.0083 | 4
3 | Breast_cancer | 0.98 | 0.026467 | 0.0272 | 1.8282e−04 | 5
4 | Glass_identification | 1 | 0.002 | 0.0021 | 1.0000e−04 | 2
5 | Australian | 0.87826 | 0.12124 | 0.12342 | 0.0032 | 1
6 | spambase | 0.92047 | 0.083823 | 0.0867 | 0.0031 | 29
7 | EEG Eye State | 0.96662 | 0.04233 | 0.0439 | 0.0024 | 13
8 | Segment | 0.97576 | 0.028737 | 0.0306 | 9.1212e−04 | 9
9 | Waveform | 0.8008 | 0.20435 | 0.2051 | 0.0028 | 15
10 | Auto MPG | 0.88889 | 0.11286 | 0.1129 | 1.9739e−17 | 2
11 | House voting | 0.8945 | 0.10645 | 0.1064 | 1.1158e−16 | 3
12 | Wine | 0.98876 | 0.01529 | 0.0153 | 2.6152e−17 | 4
13 | Vowel | 0.97571 | 0.031549 | 0.0318 | 7.2473e−04 | 8
14 | Dermatology | 0.99454 | 0.011586 | 0.0182 | 0.0024 | 13
15 | Cryotherapy | 0.97778 | 0.025333 | 0.0253 | 1.3948e−17 | 2
16 | M-of-n | 1 | 0.004615 | 0.0051 | 0.0023 | 20
17 | kr-vs-kp | 0.83917 | 0.16293 | 0.1629 | 2.7895e−16 | 13
18 | Optical recognition | 0.99444 | 0.013006 | 0.0136 | 9.7768e−04 | 31
19 | Page blocks | 0.96346 | 0.040171 | 0.0403 | 3.2698e−04 | 4
20 | Semion | 0.98369 | 0.020903 | 0.0220 | 0.0024 | 125
21 | Pendigits | 0.99314 | 0.014292 | 0.0144 | 2.3527e−04 | 11
22 | Movement_libras | 0.87222 | 0.13094 | 0.1314 | 0.0015 | 36
23 | arrhythmia | 0.66814 | 0.33151 | 0.3401 | 0.0016 | 73
24 | isolet5 | 0.84743 | 0.156873 | 0.1651 | 0.0047 | 250
25 | pixraw10P | 0.8482 | 0.16326 | 0.1633 | 3.6043e−06 | 2861
26 | mturk | 0.65773 | 0.35454 | 0.3623 | 0.0142 | 289
Average | 0.9250 | 0.0898 | 0.0920 | 0.0019 | 147.15
Fig. 7, Fig. 8, Fig. 9, and Fig. 10 show the convergence curves of the proposed algorithm for feature selection on various datasets. The curves represent the relationship between the iterations (from 1 to 100) and the corresponding fitness values for the proposed binary COVIDOA and the state-of-the-art algorithms (GA, DE, PSO, WOA, WOASA, GWOPSO, HH, GWO, and AOA). It is evident from the figures that the proposed binary COVIDOA outperforms the state-of-the-art algorithms over all datasets in terms of fitness values. In addition, the figures indicate the rapid convergence of the proposed algorithm, as it reaches the global optimum within the first few iterations.
Fig. 7

Comparison of convergence curves of the binary COVIDOA and the state-of-the-art algorithms for (a) Auto_mpg, (b) Page_blocks, and (c) House_voting datasets.

Fig. 8

Comparison of convergence curves of the binary COVIDOA and the state-of-the-art algorithms for (a) Breast_cancer, (b) glass_identification, and (c) movement_libras datasets.

Fig. 9

Comparison of convergence curves of the binary COVIDOA and the state-of-the-art algorithms for (a) Pixraw10p, (b) Heart, and (c) Zoo datasets.

Fig. 10

Comparison of convergence curves of the binary COVIDOA and the state-of-the-art algorithms for (a) Wine, (b) Waveform, and (c) Mturk datasets.

The numerical results of the proposed algorithm against the state-of-the-art algorithms are shown in Table 6, Table 7, Table 8, Table 9, Table 10. Table 6 shows the comparison in terms of classification accuracy. This table shows that the proposed algorithm reaches the maximum accuracy in 22 out of 26 datasets; however, GA reaches the maximum accuracy in only 5 out of 26 datasets.
Table 6

The results of classification accuracy of the proposed and state-of-the-art algorithms.

Dataset | BGA [43] | BPSO [45] | BWOA [23] | BWOASA | BDE [44] | BGWOPSO [46] | BHH [47] | BGWO [48] | BAOA [49] | BCOVIDOA
1 | Heart | 0.8296 | 0.859259 | 0.822222 | 0.844444 | 0.814815 | 0.844444 | 0.866667 | 0.807407 | 0.80000 | 0.87407
2 | Zoo | 0.9608 | 0.960784 | 0.960784 | 0.980392 | 0.960784 | 1 | 1 | 1 | 1 | 1
3 | Breast_cancer | 0.9743 | 0.968571 | 0.968571 | 0.977143 | 0.968571 | 0.977143 | 0.968571 | 0.971429 | 0.971429 | 0.98
4 | Glass_identification | 0.9907 | 1 | 0.990654 | 1 | 0.990654 | 1 | 1 | 1 | 0.990654 | 1
5 | Australian | 0.8348 | 0.872464 | 0.857971 | 0.849275 | 0.811594 | 0.843478 | 0.843478 | 0.834783 | 0.872464 | 0.87826
6 | spambase | 0.9387 | 0.922642 | 0.917427 | 0.928292 | 0.924815 | 0.932203 | 0.916993 | 0.923077 | 0.912647 | 0.92047
7 | EEG Eye State | 0.96889 | 0.966889 | 0.960080 | 0.966088 | 0.966622 | 0.962350 | 0.965154 | 0.965688 | 0.966355 | 0.966622
8 | Segment | 0.9680 | 0.967965 | 0.959307 | 0.965368 | 0.961905 | 0.969697 | 0.961039 | 0.968831 | 0.961905 | 0.97576
9 | Waveform | 0.7916 | 0.793600 | 0.791200 | 0.794400 | 0.794000 | 0.799600 | 0.792800 | 0.794400 | 0.794000 | 0.8008
10 | Auto MPG | 0.8333 | 0.848485 | 0.823232 | 0.792929 | 0.843434 | 0.828283 | 0.828283 | 0.818182 | 0.868687 | 0.88889
11 | House Voting | 0.8716 | 0.880734 | 0.876147 | 0.857798 | 0.889908 | 0.876147 | 0.871560 | 0.844037 | 0.885321 | 0.8945
12 | Wine | 0.9326 | 0.943820 | 0.955056 | 0.966292 | 0.955056 | 0.955056 | 0.943820 | 0.955056 | 0.932584 | 0.98876
13 | Vowel | 0.9656 | 0.957490 | 0.943320 | 0.969636 | 0.967611 | 0.961538 | 0.951417 | 0.969636 | 0.973684 | 0.97571
14 | Dermatology | 0.9836 | 0.989071 | 0.983607 | 0.989071 | 0.989071 | 0.983607 | 0.983607 | 0.983607 | 0.989071 | 0.99454
15 | Cryotherapy | 0.8889 | 0.977778 | 0.955556 | 0.955556 | 0.977778 | 0.977778 | 0.933333 | 0.911111 | 0.955556 | 0.97778
16 | M-of-n | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
17 | kr-vs-kp | 0.8242 | 0.831665 | 0.823529 | 0.814768 | 0.825407 | 0.831039 | 0.811640 | 0.808511 | 0.804130 | 0.83917
18 | Optical recognition | 0.9933 | 0.992214 | 0.991101 | 0.994438 | 0.992214 | 0.989989 | 0.983315 | 0.993326 | 0.982202 | 0.99444
19 | Page blocks | 0.9547 | 0.959810 | 0.960175 | 0.957252 | 0.952137 | 0.952503 | 0.957252 | 0.954695 | 0.960906 | 0.96346
20 | Semion | 0.9925 | 0.984944 | 0.984944 | 0.982434 | 0.986198 | 0.992472 | 0.976161 | 0.984944 | 0.985539 | 0.98369
21 | Pendigits | 0.9943 | 0.994282 | 0.990280 | 0.992567 | 0.994282 | 0.993139 | 0.99313 | 0.991424 | 0.983689 | 0.99314
22 | Movement_libras | 0.8667 | 0.855556 | 0.811111 | 0.811111 | 0.816667 | 0.850 | 0.816667 | 0.800000 | 0.833333 | 0.87222
23 | arrhythmia | 0.67234 | 0.663717 | 0.703540 | 0.690265 | 0.616071 | 0.672566 | 0.615044 | 0.6239 | 0.609091 | 0.66814
24 | isolet5 | 0.8769 | 0.807692 | 0.833333 | 0.84359 | 0.826923 | 0.837692 | 0.816667 | 0.819231 | 0.805128 | 0.84743
25 | pixraw10P | 0.7800 | 0.720000 | 0.800000 | 0.822222 | 0.780000 | 0.760000 | 0.780000 | 0.80000 | 0.65306 | 0.8482
26 | mturk | 0.6517 | 0.622222 | 0.611111 | 0.611111 | 0.617978 | 0.633333 | 0.588889 | 0.633333 | 0.556818 | 0.65773
Average | 0.8976 | 0.8977 | 0.8951 | 0.8983 | 0.8932 | 0.9009 | 0.8909 | 0.8906 | 0.8870 | 0.9250
Table 7

The results of best fitness of the proposed and state-of-the-art algorithms.

# | Dataset | BGA [43] | BPSO [45] | BWOA [23] | BWOASA [28] | BDE [44] | BGWOPSO [46] | BHH [47] | BGWO [48] | BAOA [49] | BCOVIDOA
1 | Heart | 0.17328 | 0.143179 | 0.180615 | 0.158615 | 0.189487 | 0.159385 | 0.136615 | 0.195282 | 0.201077 | 0.12774
2 | Zoo | 0.041324 | 0.042574 | 0.042574 | 0.024412 | 0.042574 | 0.003125 | 0.005000 | 0.004375 | 0.005000 | 0.00187
3 | Breast_cancer | 0.0287 | 0.036670 | 0.040003 | 0.028184 | 0.038892 | 0.030406 | 0.035559 | 0.033841 | 0.033841 | 0.026467
4 | Glass_identification | 0.012252 | 0.003 | 0.011252 | 0.003000 | 0.011252 | 0.003 | 0.004 | 0.012252 | 0.011252 | 0.002
5 | Australian | 0.16499 | 0.128404 | 0.141323 | 0.151360 | 0.192950 | 0.158528 | 0.158528 | 0.168565 | 0.126975 | 0.12124
6 | spambase | 0.066805 | 0.082023 | 0.088414 | 0.078184 | 0.081626 | 0.072206 | 0.088669 | 0.083873 | 0.091918 | 0.083823
7 | EEG Eye State | 0.039951 | 0.042065 | 0.048806 | 0.042858 | 0.042330 | 0.046559 | 0.043784 | 0.043255 | 0.042594 | 0.0439
8 | Segment | 0.034872 | 0.034346 | 0.042917 | 0.037444 | 0.041925 | 0.033158 | 0.044361 | 0.034541 | 0.039820 | 0.028737
9 | Waveform | 0.21251 | 0.211955 | 0.216236 | 0.211163 | 0.212035 | 0.207915 | 0.211795 | 0.212115 | 0.212035 | 0.20435
10 | Auto MPG | 0.16929 | 0.151429 | 0.180714 | 0.207857 | 0.157857 | 0.172857 | 0.172857 | 0.182857 | 0.134286 | 0.11286
11 | House Voting | 0.12782 | 0.123407 | 0.126615 | 0.142780 | 0.114991 | 0.125948 | 0.129823 | 0.159737 | 0.116865 | 0.10645
12 | Wine | 0.070908 | 0.058951 | 0.048661 | 0.037537 | 0.047828 | 0.048661 | 0.060618 | 0.050328 | 0.068408 | 0.01529
13 | Vowel | 0.040735 | 0.048752 | 0.063613 | 0.036727 | 0.040398 | 0.045577 | 0.054764 | 0.036727 | 0.033553 | 0.031549
14 | Dermatology | 0.020347 | 0.015526 | 0.022700 | 0.015526 | 0.016996 | 0.020053 | 0.022406 | 0.021524 | 0.014643 | 0.011586
15 | Cryotherapy | 0.11333 | 0.027000 | 0.049000 | 0.049000 | 0.027000 | 0.027000 | 0.071000 | 0.091333 | 0.049000 | 0.025333
16 | M-of-n | 0.004615 | 0.004615 | 0.004615 | 0.004615 | 0.004615 | 0.004615 | 0.004615 | 0.004615 | 0.004615 | 0.004615
17 | kr-vs-kp | 0.17751 | 0.172938 | 0.179563 | 0.189094 | 0.177704 | 0.171557 | 0.193048 | 0.196146 | 0.198197 | 0.16293
18 | Optical recognition | 0.010826 | 0.013177 | 0.015372 | 0.011444 | 0.014584 | 0.014755 | 0.023550 | 0.015201 | 0.018974 | 0.013006
19 | Page blocks | 0.049852 | 0.044788 | 0.043426 | 0.046320 | 0.050384 | 0.051022 | 0.048320 | 0.048852 | 0.044703 | 0.040171
20 | Semeion | 0.011642 | 0.019963 | 0.022975 | 0.020113 | 0.021135 | 0.013604 | 0.029526 | 0.022340 | 0.023378 | 0.020903
21 | Pendigits | 0.013785 | 0.013785 | 0.014858 | 0.016498 | 0.013785 | 0.014292 | 0.014292 | 0.015991 | 0.025771 | 0.014292
22 | Movement_libras | 0.13533 | 0.147444 | 0.191444 | 0.191667 | 0.187278 | 0.151722 | 0.185611 | 0.204111 | 0.170000 | 0.13094
23 | arrhythmia | 0.28637 | 0.337938 | 0.298263 | 0.311225 | 0.388476 | 0.327528 | 0.386698 | 0.379299 | 0.395315 | 0.33151
24 | isolet5 | 0.12621 | 0.195133 | 0.170640 | 0.16005 | 0.180260 | 0.147530 | 0.1899 | 0.186919 | 0.197429 | 0.156873
25 | pixraw10P | 0.22169 | 0.281957 | 0.202528 | 0.180455 | 0.222666 | 0.243217 | 0.220459 | 0.202906 | 0.34823 | 0.16326
26 | mturk | 0.3492 | 0.378970 | 0.390251 | 0.390351 | 0.386258 | 0.370114 | 0.412251 | 0.369754 | 0.448129 | 0.35454

Average | 0.1040 | 0.1061 | 0.1091 | 0.1056 | 0.1117 | 0.1024 | 0.1133 | 0.1144 | 0.1175 | 0.0898
Table 8

The results of the average fitness of the proposed and state-of-the-art algorithms.

# | Dataset | BGA [43] | BPSO [45] | BWOA [23] | BWOASA [28] | BDE [44] | BGWOPSO [46] | BHH [47] | BGWO [48] | BAOA [49] | BCOVIDOA
1 | Heart | 0.2015 | 0.1702 | 0.2043 | 0.1851 | 0.1522 | 0.1611 | 0.1432 | 0.1955 | 0.2208 | 0.1394
2 | Zoo | 0.0527 | 0.0826 | 0.0452 | 0.0818 | 0.0645 | 0.0047 | 0.0059 | 0.0044 | 0.0100 | 0.00332
3 | Breast_cancer | 0.0311 | 0.0328 | 0.0317 | 0.0385 | 0.0321 | 0.0350 | 0.0357 | 0.0338 | 0.0360 | 0.0272
4 | Glass_identification | 0.0123 | 0.0032 | 0.0115 | 0.0031 | 0.0116 | 0.0245 | 0.0041 | 0.0123 | 0.0142 | 0.0021
5 | Australian | 0.1655 | 0.1305 | 0.1615 | 0.1621 | 0.1931 | 0.1599 | 0.1601 | 0.1687 | 0.2172 | 0.12342
6 | spambase | 0.0713 | 0.0836 | 0.0989 | 0.0838 | 0.0850 | 0.0750 | 0.0901 | 0.0842 | 0.1053 | 0.0867
7 | EEG Eye State | 0.0401 | 0.0424 | 0.0498 | 0.0454 | 0.0424 | 0.0467 | 0.0442 | 0.0433 | 0.0426 | 0.0439
8 | Segment | 0.0351 | 0.0345 | 0.0475 | 0.0413 | 0.0429 | 0.0334 | 0.0448 | 0.0346 | 0.0444 | 0.0306
9 | Waveform | 0.2142 | 0.2137 | 0.2198 | 0.2141 | 0.2127 | 0.2084 | 0.2140 | 0.2123 | 0.2284 | 0.2051
10 | Auto MPG | 0.1693 | 0.1514 | 0.1813 | 0.2126 | 0.1581 | 0.1729 | 0.1732 | 0.182857 | 0.1343 | 0.1129
11 | House Voting | 0.1288 | 0.1242 | 0.1293 | 0.1479 | 0.114991 | 0.1261 | 0.1324 | 0.1600 | 0.1235 | 0.1064
12 | Wine | 0.0710 | 0.0607 | 0.0602 | 0.0418 | 0.0503 | 0.0499 | 0.0618 | 0.0504 | 0.0965 | 0.0153
13 | Vowel | 0.0411 | 0.0494 | 0.0657 | 0.0385 | 0.0407 | 0.0477 | 0.0558 | 0.0370 | 0.0336 | 0.0318
14 | Dermatology | 0.0228 | 0.0179 | 0.0294 | 0.0254 | 0.0222 | 0.0205 | 0.0245 | 0.0215 | 0.0215 | 0.0182
15 | Cryotherapy | 0.1133 | 0.0270 | 0.0497 | 0.0499 | 0.0270 | 0.0276 | 0.0710 | 0.0913 | 0.0490 | 0.0253
16 | M-of-n | 0.0055 | 0.0046 | 0.0279 | 0.0204 | 0.0094 | 0.0069 | 0.0129 | 0.0049 | 0.1233 | 0.0051
17 | kr-vs-kp | 0.1801 | 0.1800 | 0.1809 | 0.1983 | 0.1861 | 0.1740 | 0.1950 | 0.1962 | 0.2137 | 0.1629
18 | Optical recognition | 0.0122 | 0.0139 | 0.0191 | 0.0147 | 0.0158 | 0.0183 | 0.0251 | 0.0161 | 0.0190 | 0.0136
19 | Page blocks | 0.0499 | 0.0448 | 0.0440 | 0.0468 | 0.0504 | 0.0511 | 0.0483 | 0.0489 | 0.0447 | 0.0403
20 | Semeion | 0.0129 | 0.0213 | 0.0260 | 0.014858 | 0.0239 | 0.0181 | 0.0302 | 0.0226 | 0.0234 | 0.0220
21 | Pendigits | 0.0138 | 0.0139 | 0.0154 | 0.0169 | 0.0140 | 0.0144 | 0.0149 | 0.0160 | 0.0259 | 0.0144
22 | Movement_libras | 0.1421 | 0.1499 | 0.2091 | 0.2036 | 0.2012 | 0.1539 | 0.1887 | 0.2049 | 0.1796 | 0.1314
23 | arrhythmia | 0.3106 | 0.3420 | 0.3233 | 0.3428 | 0.4058 | 0.3437 | 0.3925 | 0.3799 | 0.3959 | 0.3401
24 | isolet5 | 0.1450 | 0.1991 | 0.1952 | 0.1779 | 0.1899 | 0.1605 | 0.1882 | 0.1906 | 0.2049 | 0.1651
25 | pixraw10P | 0.2345 | 0.2836 | 0.2048 | 0.1807 | 0.2227 | 0.2446 | 0.2206 | 0.2029 | 0.3482 | 0.1633
26 | mturk | 0.4044 | 0.3841 | 0.4248 | 0.4439 | 0.4142 | 0.3810 | 0.4313 | 0.3901 | 0.4587 | 0.3623

Average | 0.1108 | 0.1100 | 0.1175 | 0.1166 | 0.1147 | 0.1061 | 0.1157 | 0.1155 | 0.1313 | 0.0920
Table 9

The results of the proposed and state-of-the-art algorithms’ standard deviation (STD).

# | Dataset | BGA [43] | BPSO [45] | BWOA [23] | BWOASA [28] | BDE [44] | BGWOPSO [46] | BHH [47] | BGWO [48] | BAOA [49] | BCOVIDOA
1 | Heart | 0.0085 | 0.0101 | 0.0133 | 0.0117 | 0.0094 | 0.0158 | 0.0168 | 0.0023 | 0.0056 | 0.0083
2 | Zoo | 5.4847e−04 | 2.4328e−04 | 0.0035 | 0.0106 | 1.3690e−04 | 0.0039 | 0.0018 | 3.4007e−04 | 5.0000e−05 | 1.0548e−05
3 | Breast_cancer | 6.3965e−04 | 4.0826e−04 | 0.0025 | 0.0014 | 0.0015 | 1.9050e−04 | 0.0011 | 2.4166e−04 | 2.2222e−04 | 1.8282e−04
4 | Glass_identification | 1.7145e−04 | 4.0936e−04 | 4.3519e−04 | 3.4289e−04 | 8.6199e−04 | 2.0000e−04 | 1.9695e−04 | 0.0020 | 3.0000e−04 | 1.0000e−04
5 | Australian | 0.0046 | 0.0063 | 0.0323 | 0.0285 | 5.5268e−04 | 0.0108 | 0.0069 | 0.0014 | 0.0091 | 0.0045
6 | spambase | 0.0086 | 0.0028 | 0.0151 | 0.0071 | 0.0071 | 0.0049 | 0.0025 | 0.0015 | 0.0014 | 0.0031
7 | EEG Eye State | 0.0011 | 0.0015 | 0.0060 | 0.0118 | 9.9593e−04 | 9.7810e−04 | 0.0021 | 0.0025 | 0.0022 | 0.0018
8 | Segment | 0.0011 | 9.0169e−04 | 0.0063 | 0.0083 | 0.0019 | 0.0011 | 7.7223e−04 | 2.3733e−04 | 4.6306e−05 | 9.1212e−04
9 | Waveform | 0.0045 | 0.0049 | 0.0050 | 0.0067 | 0.0029 | 0.0033 | 0.0036 | 0.0045 | 0.0033 | 0.0028
10 | Auto MPG | 1.4286e−04 | 1.1158e−16 | 0.0055 | 0.0173 | 7.8019e−04 | 1.4286e−04 | 0.0030 | 2.7895e−17 | 2.5106e−16 | 1.9739e−17
11 | House Voting | 0.0038 | 0.0025 | 0.0039 | 0.0020 | 0.0012 | 6.7722e−04 | 0.0021 | 0.0014 | 0.0016 | 1.1158e−16
12 | Wine | 3.6507e−04 | 0.0049 | 0.0315 | 0.0221 | 0.0064 | 0.0070 | 0.0071 | 0.0041 | 0.0033 | 2.6152e−17
13 | Vowel | 0.0015 | 0.0030 | 0.0034 | 0.0067 | 0.0011 | 0.0041 | 0.0012 | 0.0012 | 2.5000e−04 | 7.2473e−04
14 | Dermatology | 0.0064 | 0.0048 | 0.0194 | 0.0151 | 0.0086 | 0.0012 | 0.0037 | 0.0011 | 6.8804e−04 | 0.0024
15 | Cryotherapy | 6.9739e−17 | 6.2765e−17 | 0.0043 | 0.0055 | 6.2765e−17 | 0.0063 | 6.9739e−17 | 1.3948e−16 | 6.9739e−17 | 1.3948e−17
16 | M-of-n | 0.0044 | 0.0139 | 0.0448 | 0.0499 | 0.0184 | 0.0179 | 0.0266 | 0.0044 | 0.0120 | 0.0023
17 | kr-vs-kp | 0.0059 | 0.0100 | 0.0041 | 0.0096 | 0.0114 | 0.0057 | 0.0026 | 5.5751e−04 | 0.0016 | 2.7895e−16
18 | Optical recognition | 0.0027 | 0.0018 | 0.0037 | 0.0043 | 0.0018 | 0.0033 | 0.0019 | 0.0020 | 0.0062 | 9.7768e−04
19 | Page blocks | 2.0136e−04 | 6.4991e−05 | 0.0012 | 0.0013 | 9.3483e−05 | 3.5320e−04 | 6.3829e−05 | 2.8766e−04 | 2.3956e−04 | 3.2698e−04
20 | Semeion | 0.0028 | 0.0015 | 0.0025 | 0.0024 | 0.0023 | 0.0034 | 0.0012 | 0.0011 | 0.0060 | 8.7931e−05
21 | Pendigits | 2.6000e−04 | 5.2196e−04 | 0.0020 | 6.1602e−04 | 5.5680e−04 | 6.3477e−04 | 5.9333e−04 | 5.6604e−05 | 2.4575e−04 | 2.3527e−04
22 | Movement_libras | 0.0160 | 0.0086 | 0.0122 | 0.0153 | 0.0109 | 0.0052 | 0.0036 | 0.0036 | 0.0024 | 0.0015
23 | arrhythmia | 0.0339 | 0.0082 | 0.0218 | 0.0237 | 0.0159 | 0.0210 | 0.0076 | 0.0034 | 0.0045 | 0.0016
24 | isolet5 | 0.0214 | 0.0082 | 0.0100 | 0.0120 | 0.0086 | 0.0145 | 0.0037 | 0.0053 | 0.0027 | 0.0047
25 | pixraw10P | 5.2430e−05 | 0.0054 | 0.0060 | 1.1680e−04 | 4.1843e−16 | 0.0049 | 3.5025e−04 | 6.6087e−06 | 2.7035e−04 | 3.6043e−06
26 | mturk | 0.0493 | 0.0121 | 0.0158 | 0.0376 | 0.0186 | 0.0264 | 0.0160 | 0.0285 | 0.0270 | 0.0142

Average | 0.0068 | 0.0043 | 0.0106 | 0.0119 | 0.0050 | 0.0063 | 0.0045 | 0.0027 | 0.0026 | 0.0019
Table 10

The results of selection size of the proposed and state-of-the-art algorithms.

# | Dataset | BGA [43] | BPSO [45] | BWOA [23] | BWOASA [28] | BDE [44] | BGWOPSO [46] | BHH [47] | BGWO [48] | BAOA [49] | BCOVIDOA
1 | Heart | 5 | 4 | 5 | 5 | 6 | 6 | 5 | 5 | 4 | 3
2 | Zoo | 5 | 6 | 7 | 10 | 7 | 6 | 8 | 7 | 6 | 4
3 | Breast_cancer | 3 | 5 | 8 | 5 | 7 | 7 | 4 | 5 | 5 | 5
4 | Glass_identification | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 3 | 2 | 2
5 | Australian | 3 | 3 | 3 | 3 | 9 | 5 | 5 | 7 | 1 | 1
6 | spambase | 35 | 31 | 38 | 40 | 40 | 30 | 30 | 44 | 31 | 29
7 | EEG Eye State | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13
8 | Segment | 13 | 5 | 5 | 5 | 8 | 13 | 11 | 7 | 4 | 9
9 | Waveform | 13 | 15 | 17 | 16 | 16 | 16 | 14 | 18 | 16 | 15
10 | Auto MPG | 3 | 2 | 4 | 2 | 2 | 2 | 2 | 2 | 3 | 2
11 | House Voting | 3 | 8 | 6 | 3 | 9 | 5 | 4 | 8 | 5 | 3
12 | Wine | 5 | 4 | 5 | 5 | 4 | 5 | 6 | 7 | 2 | 4
13 | Vowel | 8 | 8 | 8 | 8 | 10 | 9 | 8 | 8 | 8 | 8
14 | Dermatology | 14 | 16 | 22 | 16 | 22 | 14 | 21 | 18 | 13 | 13
15 | Cryotherapy | 2 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 3 | 2
16 | M-of-n | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20
17 | kr-vs-kp | 13 | 22 | 17 | 17 | 17 | 15 | 23 | 23 | 15 | 13
18 | Optical recognition | 27 | 35 | 42 | 38 | 44 | 31 | 33 | 54 | 35 | 31
19 | Page blocks | 5 | 5 | 4 | 4 | 3 | 5 | 6 | 4 | 6 | 4
20 | Semeion | 125 | 133 | 136 | 146 | 197 | 125 | 156 | 195 | 258 | 125
21 | Pendigits | 13 | 13 | 11 | 12 | 13 | 12 | 12 | 12 | 12 | 11
22 | Movement_libras | 29 | 39 | 39 | 42 | 52 | 38 | 38 | 38 | 42 | 36
23 | arrhythmia | 98 | 140 | 73 | 73 | 234 | 128 | 156 | 194 | 232 | 73
24 | isolet5 | 218 | 269 | 293 | 348 | 550 | 410 | 322 | 491 | 278 | 250
25 | pixraw10P | 3890 | 4757 | 4528 | 4455 | 4866 | 5616 | 3659 | 4906 | 4758 | 2861
26 | mturk | 290 | 248 | 262 | 267 | 402 | 355 | 362 | 337 | 468 | 289

Average | 186.7 | 223.3 | 214.2 | 213.8 | 252.2 | 265 | 189.30 | 247.23 | 240 | 147.15
The proposed algorithm achieves the highest total average accuracy over all datasets. The bar chart in Fig. 11 compares the algorithms in terms of total average classification accuracy: the proposed algorithm ranks first with a total average accuracy of 92.5%, followed by the BGWOPSO algorithm with about 90%.
Fig. 11

The total average accuracy over all datasets.

The best, average, and standard deviation of the fitness values of each algorithm are presented in Table 7, Table 8, and Table 9, with the best results in bold. As the results show, the binary COVIDOA outperforms the other algorithms on 19 out of 26 datasets in terms of best fitness. Each algorithm's average best fitness is recorded in Table 7 and shown in the bar chart in Fig. 12.
Fig. 12

The total average best fitness over all datasets.

The proposed binary COVIDOA algorithm achieves the minimum total average best fitness (0.0898) over all datasets, followed by the BGWOPSO algorithm with 0.1024. The proposed algorithm also achieves the minimum average fitness in 17 out of 26 datasets, followed by BGA, which achieves the minimum average fitness in 7 out of 26 datasets, as seen in Table 8. A comparison between the algorithms in terms of the total average of the average fitness values is shown in Fig. 13: the proposed algorithm ranks first with a value of 0.0920, followed by the BGWOPSO algorithm with 0.1061.
Fig. 13

The total average mean fitness over all datasets.

The standard deviation is an essential metric for evaluating the proposed algorithm. Lower standard deviation values indicate that the fitness values are clustered closely around the mean, demonstrating the proposed algorithm's stability. As shown in Table 9, the proposed algorithm ranks first: it achieves the minimum STD values in 17 out of 26 datasets and has the minimum average STD value (0.0019) among its peers (see Fig. 14).
Fig. 14

The total average STD over all datasets.
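The best, average, and STD statistics in Tables 7–9 summarize the final fitness values from repeated independent runs. A minimal sketch of how such run statistics could be computed (the run count and the choice of population vs. sample standard deviation are assumptions, not stated in this excerpt):

```python
import statistics

def run_stats(fitness_per_run):
    """Summarize repeated metaheuristic runs: best, mean, and standard
    deviation of the final fitness values (lower STD = more stable)."""
    return {
        "best": min(fitness_per_run),
        "avg": statistics.fmean(fitness_per_run),
        "std": statistics.pstdev(fitness_per_run),  # population STD
    }
```

For example, `run_stats([0.5, 0.5, 0.5])` yields an STD of 0.0, the limiting case of a perfectly stable algorithm.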

Another objective, alongside maintaining the highest classification accuracy, is to select the minimum number of features. Table 10 reports the number of selected features for all datasets. The proposed algorithm achieves the minimum selection size in 17 out of 26 datasets and the minimum average selection size (147.15) over all datasets, demonstrating strong dimensionality-reduction capability. As shown in Fig. 15, the proposed algorithm is superior to its peers in selection size.
Fig. 15

The total average selection size over all datasets.
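The fitness reported in Tables 7–9 balances the two objectives above: classification error and the fraction of features kept. A common wrapper formulation is fitness = α·error + (1 − α)·|S|/N with a KNN classifier; the weight α = 0.99 and the tiny 1-NN below are illustrative assumptions, not the paper's exact setup:

```python
import math

def knn_error(train, labels, test, test_labels, mask, k=1):
    """Hold-out error of a KNN classifier restricted to the selected features."""
    sel = [i for i, bit in enumerate(mask) if bit]

    def dist(a, b):  # Euclidean distance over selected features only
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in sel))

    errors = 0
    for x, y in zip(test, test_labels):
        nearest = sorted(range(len(train)), key=lambda j: dist(train[j], x))[:k]
        votes = [labels[j] for j in nearest]
        pred = max(set(votes), key=votes.count)  # majority vote
        errors += pred != y
    return errors / len(test)

def fitness(mask, train, labels, test, test_labels, alpha=0.99):
    """Wrapper fitness: weighted classification error plus feature-ratio penalty."""
    if not any(mask):  # an empty feature subset is invalid
        return float("inf")
    err = knn_error(train, labels, test, test_labels, mask)
    return alpha * err + (1 - alpha) * sum(mask) / len(mask)
```

With this weighting, accuracy dominates, and the (1 − α) term breaks ties in favor of smaller subsets, which is exactly the trade-off the selection-size results measure.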

The Wilcoxon rank-sum test is a nonparametric statistical test that compares two independent groups. It ranks the pooled observations and analyzes the rank sums to establish whether the two groups differ significantly. The results on the 26 benchmark datasets are compared using the Wilcoxon rank-sum test at the 5% significance level. Table 11 lists the p-values comparing the binary COVIDOA with the nine well-known metaheuristic algorithms on the 26 benchmark datasets. All p-values are below the 5% significance level for all comparative algorithms, which is strong evidence against the null hypothesis. Therefore, we conclude that the binary COVIDOA is significantly better than all comparative algorithms.
Table 11

The results of Wilcoxon rank sum test.

# | Dataset | COVIDOA vs GA | COVIDOA vs DE | COVIDOA vs PSO | COVIDOA vs WOA | COVIDOA vs WOASA | COVIDOA vs GWOPSO | COVIDOA vs GWO | COVIDOA vs HH | COVIDOA vs AOA
1 | Heart | 5.6032e−39 | 1.7658e−37 | 5.6003e−39 | 2.1981e−36 | 2.7477e−36 | 1.8002e−37 | 3.8011e−4 | 1.3865e−35 | 4.342e−33
2 | Zoo | 1.7534e−36 | 1.6259e−14 | 6.9608e−20 | 1.7181e−33 | 9.1823e−29 | 9.9344e−19 | 1.4233e−39 | 2.0497e−40 | 2.4533e−35
3 | Breast_cancer | 6.3499e−43 | 1.2756e−28 | 1.0760e−39 | 4.0981e−43 | 6.8100e−42 | 9.7131e−43 | 4.6912e−43 | 7.4605e−43 | 6.5322e−40
4 | Glass_identification | 2.3159e−44 | 2.6903e−42 | 3.5850e−44 | 5.4813e−44 | 3.3051e−26 | 1.5851e−40 | 2.5793e−34 | 3.5850e−44 | 1.6437e−30
5 | Australian | 2.1302e−43 | 8.5032e−35 | 4.1592e−23 | 3.9384e−17 | 1.1815e−34 | 4.2307e−30 | 1.9596e−18 | 9.7510e−15 | 7.3434e−22
6 | Spambase | 4.3465e−31 | 1.9975e−24 | 1.1097e−35 | 1.9636e−06 | 2.4298e−32 | 1.0498e−33 | 9.0754e−38 | 4.4347e−25 | 1.4765e−32
7 | EEG Eye State | 1.3910e−37 | 1.3954e−05 | 2.6110e−34 | 2.7204e−34 | 8.8026e−08 | 1.6882e−30 | 1.9302e−05 | 3.3901e−06 | 4.6433e−18
8 | Segment | 7.6710e−43 | 1.3937e−40 | 7.6484e−43 | 6.1152e−38 | 3.0861e−39 | 9.6344e−36 | 9.4052e−38 | 8.6619e−41 | 9.6023e−40
9 | Waveform | 7.6385e−31 | 4.7017e−35 | 5.5021e−31 | 3.8192e−15 | 4.5960e−31 | 2.1784e−32 | 2.9884e−29 | 1.9970e−40 | 2.4340e−25
10 | Auto MPG | 3.5216e−45 | 9.2924e−45 | 5.7625e−45 | 2.6851e−37 | 1.8127e−43 | 3.5216e−45 | 5.7625e−45 | 3.5216e−45 | 2.7850e−45
11 | House Voting | 3.4467e−40 | 8.6598e−41 | 3.9014e−40 | 7.8992e−40 | 2.4713e−39 | 5.9000e−44 | 4.8986e−37 | 3.5147e−41 | 6.3344e−39
12 | Wine | 4.8033e−42 | 2.6353e−43 | 9.2924e−45 | 9.8266e−41 | 2.0061e−42 | 1.4777e−44 | 3.6008e−40 | 1.6868e−40 | 5.3830e−40
13 | Vowel | 2.9574e−42 | 2.9557e−32 | 7.6711e−39 | 5.6250e−25 | 1.3419e−41 | 2.5666e−37 | 5.0621e−38 | 1.0285e−41 | 1.2673e−45
14 | Dermatology | 2.4166e−09 | 4.4232e−18 | 1.8017e−07 | 5.4397e−11 | 1.2786e−14 | 1.2684e−30 | 1.1505e−35 | 1.0661e−42 | 2.5742e−14
15 | Cryotherapy | 3.5216e−45 | 3.5216e−45 | 3.5216e−45 | 2.9642e−43 | 5.7625e−45 | 3.5216e−45 | 3.5216e−45 | 3.5216e−45 | 4.6472e−45
16 | M-of-n | 3.1186e−11 | 1.4396e−06 | 1.9623e−10 | 3.5325e−05 | 9.0593e−07 | 2.7821e−10 | 1.0333e−04 | 1.9571e−15 | 1.6467e−12
17 | kr-vs-kp | 1.1945e−40 | 1.2793e−39 | 8.3757e−40 | 4.6916e−39 | 4.6202e−40 | 4.1952e−39 | 4.5437e−39 | 9.9695e−40 | 1.3575e−40
18 | Optical recognition | 7.9569e−19 | 1.1762e−07 | 7.0097e−33 | 3.4097e−08 | 6.4914e−29 | 2.4700e−28 | 5.9524e−29 | 1.3792e−13 | 2.7345e−20
19 | Page blocks | 4.8485e−43 | 1.6000e−42 | 3.0580e−39 | 5.5883e−41 | 1.2629e−41 | 5.7625e−45 | 1.3127e−43 | 2.4083e−41 | 3.8567e−35
20 | Semeion | 1.4273e−37 | 6.0392e−37 | 8.5955e−27 | 1.2883e−36 | 1.7441e−36 | 5.2803e−42 | 1.9126e−39 | 6.9880e−33 | 6.7486e−32
21 | Pendigits | 1.5393e−40 | 1.1753e−38 | 4.5141e−38 | 7.2579e−38 | 1.2680e−25 | 6.8678e−40 | 6.6390e−24 | 7.9184e−31 | 9.3431e−33
22 | Movement_libras | 6.2654e−27 | 4.5994e−35 | 6.9745e−39 | 1.7639e−35 | 1.1697e−35 | 2.2858e−34 | 1.1526e−34 | 4.3247e−39 | 4.6727e−35
23 | Arrhythmia | 4.5345e−32 | 5.3278e−31 | 8.3454e−28 | 4.6544e−25 | 1.3249e−38 | 5.3453e−40 | 6.3437e−35 | 5.3453e−31 | 1.6346e−32
24 | isolet5 | 1.2343e−10 | 1.6489e−08 | 5.3475e−14 | 4.6479e−11 | 5.6324e−15 | 6.2140e−18 | 3.5359e−10 | 1.2536e−20 | 2.5362e−22
25 | pixraw10P | 1.3534e−28 | 4.6273e−24 | 5.3455e−12 | 6.3081e−32 | 1.3455e−35 | 5.2343e−34 | 2.1536e−33 | 3.5389e−25 | 2.3125e−28
26 | mturk | 3.4537e−05 | 4.6471e−08 | 6.4340e−13 | 3.5467e−12 | 6.2138e−17 | 4.7543e−10 | 5.3572e−25 | 1.5366e−18 | 4.5920e−12
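The rank-sum comparison above is usually run with a library routine such as SciPy's `scipy.stats.ranksums`; as an illustration of what that test computes, here is a stdlib-only sketch of the two-sided p-value via the normal approximation (no tie-correction term, an assumed simplification):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Adequate for samples of ~20+ runs per algorithm."""
    pooled = sorted((v, src) for src in (0, 1) for v in (x, y)[src])
    vals = [v for v, _ in pooled]
    ranks, i = {}, 0
    while i < len(vals):                 # assign average ranks to ties
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        avg = (i + 1 + j) / 2            # mean of ranks i+1 .. j
        for idx in range(i, j):
            ranks[idx] = avg
        i = j
    w = sum(ranks[idx] for idx, (_, src) in enumerate(pooled) if src == 0)
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2          # mean of W under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # 2 * (1 - Phi(|z|))
```

Two identical samples give p = 1.0 (no evidence of a difference), while two clearly separated fitness samples give a p-value far below the 5% threshold used in Table 11.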
The frameshifting technique applied to the population in the replication stage of COVIDOA enhances population diversity and helps the search converge to the global optimum. The binary COVIDOA reaches the minimum selection size while maintaining the highest classification accuracy, thereby achieving both objectives of the feature selection problem. The convergence curves and standard deviation values also confirm its fast convergence toward the global optimum.
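This excerpt does not show how COVIDOA's continuous solution vectors are mapped to 0/1 feature masks; a common choice in binary metaheuristics (e.g., BPSO, BGWO) is an S-shaped sigmoid transfer function, sketched here purely as an assumed illustration, not BCOVIDOA's documented mapping:

```python
import math
import random

def binarize(position, rng=random):
    """Map a continuous solution vector to a 0/1 feature mask using an
    S-shaped (sigmoid) transfer function: each dimension is squashed into
    (0, 1) and treated as the probability of selecting that feature."""
    mask = []
    for x in position:
        prob = 1.0 / (1.0 + math.exp(-x))        # sigmoid of the component
        mask.append(1 if rng.random() < prob else 0)
    return mask
```

Large positive components are selected almost surely, large negative ones almost never, and components near zero are selected with probability close to 0.5, which is what keeps the diversity the frameshifting operators introduce.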

Conclusions and future work

Feature selection eliminates redundant and irrelevant data, which improves learning accuracy, reduces computational time, and makes learning models easier to understand. This paper proposed a binary version of the Coronavirus Disease Optimization Algorithm, embedded in a wrapper method, to solve feature selection problems. The proposed algorithm uses the KNN classifier because of its simplicity: it has only two parameters, K and the distance function. Several evaluation metrics are used to assess performance, including classification accuracy, best fitness, average fitness, standard deviation, and selection size. The proposed algorithm is tested on 26 benchmark datasets, and the results are compared with nine well-known metaheuristics. Additionally, the Wilcoxon rank-sum test is applied to establish the statistical significance of the results. The statistical results reveal that the proposed algorithm outperforms the state-of-the-art algorithms, and the convergence curves show a high convergence speed, reaching the global optimum rapidly. Future work may apply the binary COVIDOA with different classifiers such as support vector machines (SVM). Applying the proposed algorithm to real-world problems such as medical diagnosis, image processing, and industrial applications would also be interesting, as would hybridizing COVIDOA with another metaheuristic such as SA or PSO.

CRediT authorship contribution statement

Asmaa M. Khalid: Conceptualization, Methodology, Validation, Software, Writing – original draft. Hanaa M. Hamza: Validation, Software, Supervision. Seyedali Mirjalili: Conceptualization, Methodology, Writing – review & editing. Khalid M. Hosny: Conceptualization, Methodology, Writing – review & editing, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.