Literature DB >> 28753613

Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data.

Jinyan Li, Lian-Sheng Liu, Simon Fong, Raymond K Wong, Sabah Mohammed, Jinan Fiaidhi, Yunsick Sung, Kelvin K L Wong.

Abstract

Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets, in which the positive samples constitute only a small minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to strengthen the effect of the synthetic minority over-sampling technique (SMOTE) when pre-processing the datasets. One approach processes the full dataset as a whole; the other splits up the dataset and adaptively processes it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former approach do not scale to larger datasets. The latter methods, which we call Adaptive Swarm Balancing Algorithms, deliver significant gains in efficiency and effectiveness on large datasets where the first approach fails, and are more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods yield more credible classifier performance and shorten the run time compared to the brute-force method.


Year:  2017        PMID: 28753613      PMCID: PMC5533448          DOI: 10.1371/journal.pone.0180830

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Big Data in medical fields, driven by hospital informatization, advances in treatment and the extensive use of high-throughput equipment, has grown geometrically and attracted wide attention. It has therefore become desirable to improve the efficiency, accuracy and quality of medical data processing [1]. The sources of health data include clinical treatment, pharmaceutical companies, medical research, medical-assistance applications and more. Existing datasets provide important medical and health information for research topics such as understanding human genetic and disease systems [2] [3], medical and biological imaging [4], and classification and prediction in medical engineering [5]. Specifically, we investigate disease diagnosis in the context of data mining and classification. Disease diagnosis can be divided into two stages: we first obtain diagnostic rules from clinical data with known labels, and then apply the rules to diagnose new patients. However, the high complexity, heterogeneous sources and uncertain reliability of medical data pose challenges for classification. For example, it is well known that, compared with normal and healthy persons, patients make up only a small part of the total population, and the more serious diseases, such as cancer and AIDS, have even fewer cases. This yields an imbalanced dataset when we train classifiers on such data, which causes over-fitting to the majority classes and biases the results. For instance, in the binary classification of a cancer dataset, the negative samples (healthy) dominate, and the trained model is likely to have little discriminative ability on the positive samples (patients). In practice, however, it is an unacceptable mistake to identify cancer patients as healthy people.
In our experiments on the imbalanced-dataset classification problem, we combine SMOTE and meta-heuristic algorithms to create two methods, which respectively process the data as a whole and partition it into segments. The first method is simple parameter optimization of SMOTE by the meta-heuristic algorithms, namely the Swarm Balancing Algorithms; our experiments show that it is effective for a static and relatively small imbalanced dataset, but performs poorly on big and highly imbalanced datasets. In the second method, the big data are therefore divided into several data segments (we use the term windows for these segments) suitable for processing. To perform SMOTE and classification, the parameters for the data in each window are established on the basis of the previous window, and the algorithm finally collects the performance of each window and takes the average. We call this method the Adaptive Swarm Balancing Algorithms. We observed in our experiments that this latter method is faster and more efficient.

Related work

In recent years, more and more researchers from different fields have begun to focus on imbalanced dataset research. This research can be considered as having two different levels, the first concerns methods of data modification and optimization, and the second relates to improvement of the algorithms.

Data level methods

Random under-sampling [6] is a simple sampling technique in which part of the majority-class data is randomly removed to reduce the imbalance ratio, i.e., the ratio between the minority and majority classes. With this method, however, it is easy to discard useful information in the majority class. In contrast, random over-sampling [7] increases the number of minority-class samples to mitigate the imbalance of the dataset; its disadvantage is a tendency to over-fit, since it merely duplicates existing minority samples [8]. Building on over-sampling, the synthetic minority over-sampling technique (SMOTE) [9] is a commonly used algorithm that often obtains excellent results in imbalanced-dataset classification. The principle of the algorithm is to analyze the feature space of the minority-class samples, synthesize new minority-class data and combine them with the original dataset to reduce the imbalance ratio. Assume the over-sampling rate is S and the number of minority samples is M, with each minority sample denoted x_i (i = 1, 2, 3, …, M). For every x_i, the algorithm finds its K nearest minority-class neighbors, randomly selects a neighbor x̂ among them, and synthesizes a new sample:

x_new = x_i + rand[0, 1] × (x̂ − x_i)    (1)

Eq (1) is applied until S times new samples are synthesized, where rand[0, 1] produces a random number between 0 and 1. The two key parameters of this algorithm, S and K, influence the data synthesis and the classification performance. In our experiments, we use meta-heuristic algorithms to find the best and most suitable parameters for the SMOTE algorithm.
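The synthesis step of Eq (1) can be sketched in Python (a minimal illustration only; the function name `smote` and the brute-force neighbor search are ours, and the paper's experiments were run in MATLAB):

```python
import random

def smote(minority, S, K, seed=0):
    """Minimal SMOTE sketch: for each minority sample, synthesize S/100
    new points on the line segments to randomly chosen K-nearest neighbors."""
    rng = random.Random(seed)
    synthetic = []
    n_per_sample = int(S / 100)  # over-sampling rate S is given in percent
    for i, x in enumerate(minority):
        # K nearest neighbors of x among the other minority samples
        # (plain Euclidean distance; fine for a sketch, too slow for big data)
        others = [m for j, m in enumerate(minority) if j != i]
        others.sort(key=lambda m: sum((a - b) ** 2 for a, b in zip(x, m)))
        neighbours = others[:K]
        for _ in range(n_per_sample):
            x_hat = rng.choice(neighbours)   # random neighbor x̂
            gap = rng.random()               # rand[0, 1] in Eq (1)
            # Eq (1): x_new = x_i + rand[0, 1] * (x̂ - x_i)
            synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, x_hat)))
    return synthetic

# Example: 4 minority points, S = 200% -> 2 synthetic points per original
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, S=200, K=2)
print(len(new_points))   # -> 8
```

Because each synthetic point lies between an original sample and one of its minority-class neighbors, SMOTE enlarges the minority region rather than duplicating points, which is why it over-fits less than random over-sampling.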

Algorithm level methods

The research emphasis in imbalanced-dataset classification is the minority-class data: it is more meaningful for the algorithm to correctly identify the minority class than the majority-class samples. In other words, the cost is higher if the classification algorithm misclassifies minority-class data. The cost-sensitive learning approach [8] assigns different error prices to different classes: if a classifier misclassifies a minority-class sample, it is "punished" in a manner that forces the classifier to increase its recognition rate on minority-class samples. Meanwhile, on the basis of kernel processing methods, researchers have modified support vector machine classification in the field of machine learning, which also mitigates the imbalanced-dataset classification problem [10]. The idea of ensemble learning methods is to use an algorithm to obtain a series of child classifiers from the training set and then improve classification accuracy by integrating these child classifiers. SMOTEBoost [11], which combines the SMOTE and Boosting methods, is quite effective among the ensemble learning methods. In recent years, swarm intelligence algorithms have been widely used in different fields to refine rough original datasets, especially for feature selection [12, 13]. We use two different meta-heuristic algorithms, particle swarm optimization (PSO) [14] and the bat algorithm (BA) [15], to compare their optimization effects. We choose the neural network algorithm, a representative and popular intelligent classification algorithm, to verify the classification performance in each iteration.

PSO [14] is a widely used meta-heuristic algorithm that imitates the foraging behavior of bird flocks. Assume a population X = (X_1, X_2, …, X_n) of n particles in a D-dimensional search space; the ith particle is expressed as a D-dimensional vector X_i = (x_i1, x_i2, …, x_iD), and its position in the search space represents a potential solution. Through the objective function, the program calculates the fitness corresponding to the position X_i of each particle, where the velocity of the ith particle is V_i = (V_i1, V_i2, …, V_iD), the personal best of each agent is P_i = (P_i1, P_i2, …, P_iD) and the global best of the population is P_g = (P_g1, P_g2, …, P_gD). During the iterations, the personal bests and the global best are used to update each particle's position and velocity [16]. Eqs 2 and 3 show the mathematical process:

V_id(k+1) = ω V_id(k) + c1 r1 (P_id − X_id(k)) + c2 r2 (P_gd − X_id(k))    (2)

X_id(k+1) = X_id(k) + V_id(k+1)    (3)

In Eq 2, ω is the inertia weight; d = 1, 2, …, D; i = 1, 2, …, n; k is the current iteration; c1 and c2 are non-negative constants acting as velocity factors; r1 and r2 are random values between 0 and 1; and V is the particle velocity [17]. The following pseudo code describes the process of PSO.

Pseudo code of PSO:
For each particle
    Initialize the particle and its parameters
End
While the maximum number of iterations or the termination criterion is not reached
    For each particle
        Calculate and update the particle's velocity and position as in Eqs (2) and (3)
    End
    For each particle
        Calculate the fitness function
        If the fitness value is better than the best fitness value in history (pBest)
            Replace the old pBest with the current fitness value
        End
    End
    Select as gBest the particle whose fitness value is the best in the population
End

The other algorithm, BA [15], is a newer meta-heuristic that has already shown good results in research. It learns from the theory of echolocation in bats. The algorithm likewise assumes a bat population in a D-dimensional search space, and the following equations show how the frequency f_i, velocity v_i and position x_i of each bat are updated in the tth iteration:

f_i = f_min + (f_max − f_min) β    (4)

v_i(t) = v_i(t−1) + (x_i(t−1) − x*) f_i    (5)

x_i(t) = x_i(t−1) + v_i(t)    (6)

In these three equations, β is a random vector drawn from [0, 1] and x* is the current global best solution among the bats.

Pseudo code of BA:
For each bat
    Initialize the bat
    Define the pulse frequency of the bat
    Initialize the pulse rate and the loudness
End
While the maximum number of iterations or the termination criterion is not reached
    For each bat
        Generate new solutions by adjusting the frequency and updating the velocity and position as in Eqs (4)–(6)
    End
    For each bat
        If rand > pulse rate
            Select a solution among the best solutions and generate a local solution around it
        End
        Generate a new solution by flying randomly
        If rand < loudness and the current fitness value is better than the best value
            Accept the new solution, increase the pulse rate and reduce the loudness
        End
    End
    Rank the bats and select the best value in the population
End

In recent years, many variants of PSO and BA have been built on the original versions to improve search accuracy and efficiency, such as SEPSO [18], APSO [19] and FDR-PSO [20], as well as the self-adaptive bat algorithm [21], the hybrid bat algorithm [22] and the chaotic bat algorithm [23]. In future work we will adopt different versions of these meta-heuristic algorithms to expand our experiments.
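The two update rules can be sketched in Python for the one-dimensional case (a minimal illustration of Eqs 2 and 3 and of the bat updates; the function names are ours, and the paper's experiments were run in MATLAB):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One PSO iteration in 1-D (Eqs 2 and 3):
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    for i in range(len(positions)):
        r1, r2 = rng.random(), rng.random()
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (pbest[i] - positions[i])
                         + c2 * r2 * (gbest - positions[i]))
        positions[i] += velocities[i]
    return positions, velocities

def bat_step(positions, velocities, x_star, f_min=0.0, f_max=2.0, rng=random):
    """One BA iteration in 1-D: f = f_min + (f_max - f_min)*beta;
    v <- v + (x - x*)*f;  x <- x + v.  f_min/f_max play the role of Qmin/Qmax."""
    for i in range(len(positions)):
        beta = rng.random()                  # random value in [0, 1]
        f = f_min + (f_max - f_min) * beta   # pulse frequency
        velocities[i] += (positions[i] - x_star) * f
        positions[i] += velocities[i]
    return positions, velocities
```

Each swarm member is pulled toward its personal best and the global best (PSO) or moves relative to the current best solution x* (BA); a full optimizer wraps these steps in the fitness-evaluation loop shown in the pseudo code above.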

Experiments and datasets

Differences in the sources and formats of datasets cause complexity. In this paper, the health and medical datasets are divided into two kinds according to their size and are processed respectively by the two methods, the Swarm Balancing Algorithms and the Adaptive Swarm Balancing Algorithms; two experiments are therefore performed. The experimental results below show that the first method is more suitable for relatively small datasets but breaks down when the processed dataset is relatively big. As mentioned above, big data and the imbalanced-classification problem are common in the health care field [24]; the latter method was therefore proposed to handle big and highly imbalanced datasets. For the optimizer, Table 1 lists the operating environment and the parameters of the two swarm algorithms. Since performance is sensitive to these parameters, they were carefully selected through several tests. In PSO, the two learning factors c1 and c2 are widely set to values equal to or smaller than 2. BA has more parameters: the loudness and pulse rate influence how the bats' positions search the neighborhood of the objectives, and here we set these two parameters to 0.5 and 1, respectively. The other two BA factors, Qmin and Qmax, are commonly assigned the values 0 and 1. Furthermore, the population sizes and numbers of iterations of PSO and BA were the same.
Table 1

The environment and parameters of PSO and BA.

PSO                          BA
Parameter     Value          Parameter        Value
Population    20             Population       20
Iterations    1000           Iterations       1000
c1            1.5            A (loudness)     0.5
c2            1.5            R (pulse rate)   1
                             Qmin             0
                             Qmax             2
All the software was coded in MATLAB version 2014a, and the computing environment for all experiments was a PC workstation (CPU: E5-2670 V2 @ 2.50 GHz, RAM: 128 GB). Both of the following experiments used 10-fold cross-validation for testing: a dataset is split into 10 non-overlapping pieces, and in turn each piece is used as the testing dataset while the remaining nine parts form the training dataset. The algorithms take the average over the ten runs to verify the performance. Note that in the second experiment the testing dataset (one-tenth of the original dataset) is bigger than the whole datasets of the first experiment.
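The 10-fold protocol described above can be sketched as follows (an illustrative Python fragment, not the MATLAB code used in the paper):

```python
def ten_fold_splits(n_samples, k=10):
    """Split sample indices 0..n-1 into k non-overlapping folds; each fold
    serves once as the testing set while the other k-1 folds form the
    training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for test_fold in folds:
        test = set(test_fold)
        train = [i for i in range(n_samples) if i not in test]
        yield train, test_fold

# Performance is the average over the ten test folds; here we only check
# that every sample is tested exactly once.
fold_sizes = [len(test) for _, test in ten_fold_splits(100)]
print(sum(fold_sizes))   # -> 100
```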

Experiment 1: Swarm Balancing Algorithms with a moderately imbalanced dataset

We selected five datasets from the UCI repository [25], with imbalance ratios between majority and minority class ranging from 2.05:1 to 70.3:1. The Surgery dataset, in S1 Data, contains five years of data on lung surgery. Several datasets related to bioassay data are imbalanced; we selected four of them, stored respectively in S2 Data to S5 Data, for testing the basic method in the first experiment. The main problem in classifying an imbalanced dataset is that the algorithm ignores the minority-class data and tends to assign the trained classifier to the majority class, with very good accuracy. The Kappa statistic, however, is an index that helps judge the confidence level of classification results. It is very important for imbalanced-class problems because, even though the accuracy may be high, the Kappa [26, 27] of the classification results can be close to zero or sometimes even negative. The Kappa index ranges from −1 to 1. As mentioned in the introduction, in disease diagnosis classifying a patient as normal is completely unacceptable, and the consequences can be tragic. As a monitor of the credibility of the classification results, a higher Kappa value indicates that the accuracy is more credible. The Kappa index is commonly divided into three levels to evaluate the credibility of classification accuracy [28, 29]: at the top level, Kappa ≥0.75, the classification accuracy is highly credible; from 0.4 to below 0.75, it has general credibility; below 0.4, it has low or no credibility. Our aim in this experiment is to ensure relatively high accuracy while maintaining the largest possible Kappa value.
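The degenerate behaviour described above, high accuracy with a near-zero Kappa, is easy to demonstrate numerically. The sketch below (our own helper, using the standard Cohen's Kappa formula) evaluates a classifier that labels everything negative on a 5%-positive dataset: accuracy is 95%, but Kappa is 0.

```python
def kappa(tp, fn, fp, tn):
    """Cohen's Kappa from a binary confusion matrix: (p_o - p_e) / (1 - p_e)."""
    n = tp + fn + fp + tn
    p_o = (tp + tn) / n                       # observed accuracy
    # agreement expected by chance, from the row and column marginals
    p_e = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Classifier that predicts everything negative on 5 positives / 95 negatives:
# accuracy = 95%, yet the result carries no information beyond chance.
print(round(kappa(tp=0, fn=5, fp=0, tn=95), 3))   # -> 0.0
```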
In the experimental process, we used PSO and BA to globally search for the two best SMOTE parameters, K and S, and used the neural network classification algorithm to evaluate the two objectives according to the fitness at every iteration of the meta-heuristic algorithms. That is, in the experimental process (and likewise in experiment 2), accuracy was not the only objective; we also needed to consider the Kappa index, as both are objectives of the optimization. Let TP denote true positives, TN true negatives, FP false positives and FN false negatives, and let P and N stand for the numbers of positive and negative samples:

Accuracy = (TP + TN) / (P + N)    (7)

Kappa = (p_o − p_e) / (1 − p_e)    (8)

where p_o is the observed accuracy of Eq (7) and p_e = [(TP + FN)(TP + FP) + (TN + FP)(TN + FN)] / (P + N)^2 is the agreement expected by chance. From these equations we can see that Kappa and Accuracy are inevitably linked; thus both Eqs 7 and 8 are our objective functions. To find a balance between the two, we set a condition for both: since we knew the credible range of Kappa, we required the Kappa value to stay in the top two intervals, i.e., ≥0.4 (this value can be changed, just like any threshold or parameter value). The swarms regarded Kappa and Accuracy as fitness functions to gradually find the optimal position [30]. Fig 1 illustrates the principle and flow chart of the Swarm Balancing Algorithms [30]. We initialized two control conditions in each generation of the meta-heuristic algorithms to maintain the authenticity of the accuracy: first, the Kappa value must fall in the first or second level of the Kappa scale (Kappa ≥0.4); second, once the first condition is satisfied, the particles or bats must find the largest possible accuracy in the search space by controlling the two parameters.
Fig 1

Flow chart of the Swarm Balancing Algorithms.

Pseudo code of the Swarm Balancing Algorithm [30]:

Specify a meta-heuristic algorithm S (PSO/BA) and a classifier C (neural network)
Initialize the population of S, p_a (a = 1, 2, …, P), and the other related parameters
Initialize the floor (limit) value of Kappa, T
Define the scope of K and S
    // K ∈ [K_min, K_max], S ∈ [S_min, S_max]; K is the number of selected neighbors and
    // S is the increased proportion of minority-class data
Load the dataset
While (i < maximum number of iterations)
    If (i = 1) then
        K = Rnd(K_min, K_max)
        S = Rnd(S_min, S_max)
        // use them as the initial parameters of SMOTE to generate a new dataset and use C
        // to obtain CurrentKappa and CurrentAccuracy
    Else
        Generate a pair of K and S based on the last position (solution)
        // run SMOTE and C to obtain CurrentKappa and CurrentAccuracy
    End
    If (min(BestKappa, CurrentKappa) > 0.4 and CurrentAccuracy > BestAccuracy) then
        BestKappa = CurrentKappa
        BestAccuracy = CurrentAccuracy
    Else if (CurrentKappa > 0.4 and BestKappa < 0.4) then
        BestKappa = CurrentKappa
        BestAccuracy = CurrentAccuracy
    End
    i = i + 1
End

In general, the Kappa value increases as accuracy rises. The interval of S runs from 10% up to the number of majority-class samples divided by the number of minority-class samples, and the scope of K runs from 2 to the number of minority-class samples. To assess the effect of our method, we used normal SMOTE for comparison: we synthesized minority-class samples until the number of minority-class instances equaled that of the majority class, giving a completely balanced dataset, with the default value of K, which is 5. Furthermore, a contrast test was performed with the traditional class-balancing algorithm on the same imbalanced datasets, again using the neural network as the verification classifier. The principle of that algorithm is to change an imbalanced dataset into a completely balanced one by splitting the majority class into minority classes.
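The search loop above can be sketched in Python. This is a minimal illustration of the acceptance rule only: the swarm update is replaced by plain random sampling of (K, S), and `evaluate` is a hypothetical callback standing in for SMOTE plus the neural-network classifier used in the paper.

```python
import random

KAPPA_FLOOR = 0.4   # credibility threshold used in the paper

def accept(best, current):
    """Acceptance rule of the Swarm Balancing Algorithm: keep a (K, S) candidate
    if it improves accuracy while both Kappas stay credible, or if it is the
    first candidate to cross the Kappa floor."""
    best_kappa, best_acc = best
    cur_kappa, cur_acc = current
    if min(best_kappa, cur_kappa) > KAPPA_FLOOR and cur_acc > best_acc:
        return True
    if cur_kappa > KAPPA_FLOOR and best_kappa < KAPPA_FLOOR:
        return True
    return False

def swarm_balance(evaluate, k_range, s_range, iterations=50, seed=0):
    """Sketch of the outer loop. evaluate(K, S) is a hypothetical callback that
    runs SMOTE(K, S) plus the classifier and returns (kappa, accuracy); here
    the swarm search is replaced by random sampling for brevity."""
    rng = random.Random(seed)
    best = (-1.0, 0.0)          # (BestKappa, BestAccuracy)
    best_params = None
    for _ in range(iterations):
        K = rng.randint(*k_range)
        S = rng.randint(*s_range)
        current = evaluate(K, S)
        if accept(best, current):
            best = current
            best_params = (K, S)
    return best_params, best
```

A real swarm would propose the next (K, S) from particle or bat positions rather than uniformly at random, but the two-condition acceptance logic is the same.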

Experiment 2: Adaptive Swarm Balancing Algorithms with highly imbalanced dataset

The AID362 dataset from the first experiment (in S5 Data) and five other highly imbalanced datasets (in S6 Data to S10 Data), all selected from the Bioassay collection in the UCI repository [25], were used in this experiment. Regardless of the number of features or the imbalance ratio, these datasets are much larger than those used in experiment 1: compared with the previous experiment, they have grown not only in overall size but also in the scale of the minority class. We therefore treat them as big data in our experiments; the largest dataset has 47,831 instances. Table 2 lists the characteristics of these highly imbalanced datasets, which are high in volume and dimensionality. The approach of the Adaptive Swarm Balancing Algorithms is to process the full dataset window by window, i.e., to break the big data into several parts that imitate a data flow, to improve imbalanced-dataset classification. In our experiment, considering data size and volume and to guarantee the integrity of the original dataset, we used three windows for each dataset (the concept introduced in the Introduction). Table 2 also shows the length of each window, i.e., how many instances of the dataset fall in each window.
Table 2

Characteristics of the highly imbalanced datasets used in experiment 2.

Bioassay No.  Negative  Window lengths (negative class)   Positive  Window lengths (positive class)  Total instances  Imbalance ratio (Maj/Min)
362           3375      562(+1), 1124(+1), 1686(+1)       48        8, 16, 24                        3423             70.3125
1608          772       128(+1), 256(+1), 384(+2)         55        9, 18, 27(+1)                    827              14.03636364
746           47538     7923, 15846, 23769                293       48(+1), 96(+2), 144(+2)          47831            162.2457338
687           26378     4396, 8792(+1), 13188(+1)         76        12(+1), 24(+1), 36(+2)           26454            347.0789474
456           7964      1327, 2654(+1), 3981(+1)          22        3(+1), 6(+1), 9(+2)              7986             362
373           47781     7963(+1), 15926(+1), 23889(+1)    50        8, 16(+1), 24(+1)                47831            955.62
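The window lengths in Table 2 follow from splitting each class so that Window x is x times as long as Window 1. A small Python sketch (our own helper; leftover instances are handed out one by one from the last window backwards, which reproduces the "(+1)" entries in the table):

```python
def window_lengths(n_instances, n_windows=3):
    """Window x is x times as long as Window 1, so base lengths are
    proportional to 1, 2, ..., n; leftover instances are distributed one
    per window starting from the last window (the "(+1)" entries)."""
    unit = n_instances // (n_windows * (n_windows + 1) // 2)
    lengths = [unit * (x + 1) for x in range(n_windows)]
    remainder = n_instances - sum(lengths)
    for i in range(remainder):
        lengths[n_windows - 1 - (i % n_windows)] += 1
    return lengths

# The 3375 negative samples of bioassay 362 split as 562(+1), 1124(+1), 1686(+1)
print(window_lengths(3375))   # -> [563, 1125, 1687]
```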
The working flow of the per-window Swarm Balancing Algorithm is presented in Fig 2, which clearly shows its central role in the experiment: each data window is processed with this method. From the figure we can see that Window X is X times longer than Window 1, meaning that as the data flow in, the windows become longer and longer. In Window 1, the initial parameters fed into the Swarm Balancing Algorithm are S = 100% and K = 2, and the algorithm processes the child dataset in Window 1 and generates suitable values S = A1% and K = B2, together with the current Accuracy and Kappa values.
Fig 2

Principle of the Adaptive Swarm Balancing Algorithms.

Then A1% and B2 are used as the initial parameters for Window 2, and the process is repeated, generating the current values of K, S, Accuracy and Kappa. This continues until the last window, Window X, for which the algorithm only needs the parameters generated from Window (X−1) as its settings to perform the classification. The algorithm ultimately reports the average of the windows' Accuracy and Kappa values as the final result. Note that the processing direction and the data-segment direction are opposite. The pseudo code of the Adaptive Swarm Balancing Algorithm, the novel method proposed in this paper, follows.

Pseudo code of the Adaptive Swarm Balancing Algorithm:

Specify a meta-heuristic algorithm S (PSO/BA/…) and a classifier C (neural network/decision tree/…)
Initialize the population of S, p_a (a = 1, 2, …, P), and the other related parameters
Initialize the floor (limit) value of Kappa, T
Initialize the number of windows N (x = 1, 2, …, N)
Define the scope of K and S
    // K ∈ [K_min, K_max], S ∈ [S_min, S_max]; K is the number of selected neighbors and
    // S is the increased proportion of minority-class data
Load the dataset and divide it into N windows
For x = 1 : N
    If (x = 1) then
        K = Rnd(K_min, K_max)   // K from 2
        S = Rnd(S_min, S_max)   // S from 100%
    Else
        K = K_{x−1}   // K from the final result of Window x−1
        S = S_{x−1}   // S from the final result of Window x−1
    End
    If (x ≠ N) then
        // input the current child dataset and the current K and S into the Swarm Balancing Algorithm
        While (i < maximum number of iterations)
            If (i = 1) then
                Use K and S as the initial parameters of SMOTE to generate a new dataset,
                and use C to obtain CurrentKappa and CurrentAccuracy
            Else
                Generate a pair of K and S based on the last position (solution),
                then run SMOTE and C to obtain CurrentKappa and CurrentAccuracy
            End
            If (min(BestKappa, CurrentKappa) > 0.4 and CurrentAccuracy > BestAccuracy) then
                BestKappa = CurrentKappa
                BestAccuracy = CurrentAccuracy
            Else if (CurrentKappa > 0.4 and BestKappa < 0.4) then
                BestKappa = CurrentKappa
                BestAccuracy = CurrentAccuracy
            End
            i = i + 1
        End
        Return K, S, BestAccuracy, BestKappa
    Else
        Input K and S into SMOTE to generate a new dataset and use C to obtain BestAccuracy and BestKappa
    End
End
FinalAccuracy = average(BestAccuracy_1, BestAccuracy_2, …, BestAccuracy_N)
FinalKappa = average(BestKappa_1, BestKappa_2, …, BestKappa_N)

In experiment 2 we also used SMOTE to synthesize minority-class samples until the dataset was completely balanced, with the default value K = 5, as the contrast test. As mentioned above, 10-fold cross-validation was used to verify the experimental results.
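The window-chaining logic can be sketched as follows (a minimal Python illustration; `search_window` and `classify_only` are hypothetical callbacks standing in for the swarm search and for the SMOTE-plus-classifier step, which the paper implements in MATLAB):

```python
def adaptive_swarm_balance(windows, search_window, classify_only):
    """Sketch of the Adaptive Swarm Balancing Algorithm.

    search_window(data, K, S) runs the swarm search seeded with (K, S) on one
    window and returns (K', S', accuracy, kappa); classify_only(data, K, S)
    just applies SMOTE with the given parameters and classifies (used for the
    last window, which inherits its parameters from the previous one)."""
    K, S = 2, 100                       # initial K and S (in %) for Window 1
    accs, kappas = [], []
    for x, data in enumerate(windows):
        if x < len(windows) - 1:        # Windows 1 .. N-1: optimize K and S
            K, S, acc, kap = search_window(data, K, S)
        else:                           # Window N: reuse parameters from Window N-1
            acc, kap = classify_only(data, K, S)
        accs.append(acc)
        kappas.append(kap)
    # final result: the average Accuracy and Kappa over all windows
    return sum(accs) / len(accs), sum(kappas) / len(kappas)
```

Because the last window skips the search entirely, the cost of the optimization is paid only on the smaller leading windows, which is where the reported run-time savings come from.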

Results and discussion

Results of experiment 1

Our experiment collected performance results in terms of Accuracy, Kappa (Kappa statistic), Precision, Recall, F-measure, ROC area and the imbalance ratio between minority and majority class. These results are presented in Tables 3 to 7 for the different classification algorithms and data-balancing methods.
Table 3

Results of surgery dataset in experiment 1.

Data name: Surgery Data

Algorithm                         Positive  Negative  Accuracy  Kappa  Imb. ratio (Min/Maj)  Precision  Recall  F-Measure  ROC Area
PN                                70        400       83.19%    0.00   0.18                  0.75       0.83    0.78       0.64
Class balancer-PN                 235       235       58.89%    0.18   1.00                  0.59       0.59    0.59       0.61
SMOTE (complete balance, K = 5)   399       400       72.09%    0.442  1.00                  0.734      0.721   0.717      0.774
PSO-Balancing Algorithm-PN        408       400       82.55%    0.65   1.02                  0.83       0.83    0.83       0.85
BA-Balancing Algorithm-PN         213       400       78.14%    0.52   0.53                  0.78       0.78    0.78       0.78
Table 7

Results of bioassay 362 dataset in experiment 1.

Data name: AID362

Algorithm                         Positive  Negative  Accuracy  Kappa   Imb. ratio (Min/Maj)  Precision  Recall  F-Measure  ROC Area
PN                                48        3375      98.60%    0.00    0.01                  0.97       0.99    0.98       0.58
Class balancer-PN                 1711.5    1711.5    57.92%    0.16    1.00                  0.63       0.58    0.53       0.58
SMOTE (complete balance, K = 5)   3374      3375      63.14%    0.2628  1.00                  0.786      0.631   0.574      0.641
PSO-Balancing Algorithm-PN        3393      3375      63.18%    0.26    1.01                  0.79       0.63    0.57       0.64
BA-Balancing Algorithm-PN         3344      3375      62.91%    0.26    0.99                  0.79       0.63    0.57       0.64
Table 3 shows the results for the Surgery dataset. The imbalance ratio (min/maj) of the original dataset is low, and the two key performance measures sit at two extremes: the accuracy is fairly high, but the Kappa value is zero, which means the classifier results are not credible. The performance of the Swarm Balancing Algorithm shows that our method pulls the classification results into a reliable range, although the accuracy suffers slightly. With the imbalance-ratio index we can observe changes in the degree of imbalance of a dataset, which shows whether our methods need to bring the dataset into a completely balanced state. The performance of SMOTE with completely balanced data is also worse than that of the Swarm Balancing Algorithm. The other four datasets are subsets of a large and diversified bioassay collection. For Bioassay dataset AID439, Table 4 clearly shows that our method is better than the traditional class-balancer method, which is used as a comparison benchmark; our approach simultaneously improves both Accuracy and Kappa. We find that the PSO-Balancing approach synthesizes fewer minority-class samples yet obtains better performance than the traditional SMOTE approach. The results for Bioassay dataset AID721 are shown in Table 5. It is hard to attain a Kappa value >0.4 with the two meta-heuristic algorithms, and although PSO and BA almost reach a completely balanced dataset, their performance is still better than that of SMOTE. The results indicate that, for this and the previous datasets, PSO is usually slightly better than BA at finding the two parameters that achieve higher performance, but it also needs to synthesize more minority samples. The results in Table 6 likewise demonstrate the superior ability of the Swarm Balancing Algorithms in classifying an imbalanced dataset.
The last Bioassay dataset, AID362, is the most imbalanced of the five. The original classification yields high accuracy with a low Kappa value, so the result is doubtful, and the results are still not good after the original dataset is processed by the class-balancer method. The performance of the neural network algorithm also remains poor, as it does not reach a Kappa value high enough for the credible level of ≥0.4. Meanwhile, traditional SMOTE shows almost the same performance as the Swarm Balancing Algorithms.
Table 4

Results of bioassay 439 dataset in experiment 1.

Data name: AID439

Algorithm                         Positive  Negative  Accuracy  Kappa  Imb. ratio (Min/Maj)  Precision  Recall  F-Measure  ROC Area
PN                                11        45        73.21%    0.12   0.24                  0.72       0.73    0.73       0.69
Class balancer-PN                 28        28        49.29%    -0.01  1.00                  0.49       0.49    0.48       0.46
SMOTE (complete balance, K = 5)   44        45        79.78%    0.60   0.98                  0.842      0.798   0.791      0.774
PSO-Balancing Algorithm-PN        34        45        82.28%    0.65   0.76                  0.86       0.82    0.82       0.87
BA-Balancing Algorithm-PN         40        45        78.82%    0.58   0.89                  0.81       0.79    0.79       0.80
Table 5

Results of bioassay 721 dataset in experiment 1.

Data name: AID721

Algorithm                         Positive  Negative  Accuracy  Kappa  Imb. ratio (Min/Maj)  Precision  Recall  F-Measure  ROC Area
PN                                17        59        78.95%    0.09   0.29                  0.83       0.79    0.71       0.41
Class balancer-PN                 38        38        40.88%    -0.18  1.00                  0.62       0.68    0.65       0.49
SMOTE (complete balance, K = 5)   58        59        65.81%    0.32   0.98                  0.775      0.658   0.619      0.682
PSO-Balancing Algorithm-PN        63        59        70.49%    0.40   1.07                  0.40       0.41    0.40       0.39
BA-Balancing Algorithm-PN         63        59        69.67%    0.38   1.07                  0.44       0.46    0.41       0.46
Table 6

Results of bioassay 1284 dataset in experiment 1.

Data name: AID1284

Algorithm                         Positive  Negative  Accuracy  Kappa  Imb. ratio (Min/Maj)  Precision  Recall  F-Measure  ROC Area
PN                                46        244       84.14%    0.00   0.19                  0.71       0.84    0.77       0.52
Class balancer-PN                 145       145       50.62%    0.01   1.00                  0.51       0.51    0.48       0.50
SMOTE (complete balance, K = 5)   243       244       64.07%    0.28   1.00                  0.76       0.641   0.594      0.691
PSO-Balancing Algorithm-PN        202       244       70.32%    0.38   0.83                  0.49       0.70    0.58       0.64
BA-Balancing Algorithm-PN         254       244       67.07%    0.33   1.04                  0.78       0.67    0.63       0.68
The bar diagrams in Fig 3 clearly illustrate the contrast among the average values of the different methods used in experiment 1 and presented in Tables 3 to 6. Although the class-balancing method can bring the dataset into full balance, its performance is still very poor.
Fig 3

Average performance results of each method in experiment 1.

The performance of the neural network algorithm also remains poor, as it does not reach a Kappa value high enough for the credible level of ≥0.4. For the complete datasets artificially generated by traditional SMOTE, the performance is much better than with the original data and the class-balancing method, but a gap to the two Swarm Balancing Algorithms remains in the two important indexes. Although both PSO and BA can find parameters that bring the Kappa value into the credible range, the PSO-Balancing algorithm is better than the BA-Balancing algorithm in both Accuracy and Kappa. In terms of the quantity of synthesis required, the BA-Balancing algorithm synthesizes fewer minority-class samples than the PSO-Balancing algorithm does. The results of this experiment show that the Swarm Balancing Algorithms meet the expected goal of attaining the highest possible accuracy with a Kappa value in a credible and reasonable range. However, as the results for the AID362 dataset show, when this method meets a large and highly imbalanced dataset its performance is not as good as on the small datasets, and a Kappa value of ≥0.4 cannot be reached. Hence, to ensure that our basic concepts remain effective on highly imbalanced and larger datasets, we realized that the algorithm needed to be improved.

Results of experiment 2

All of the results from experiment 2 are listed in Tables 8–14. They include Accuracy, Kappa (Kappa statistic), TPR (true positive rate), FPR (false positive rate), Precision, Recall, F-measure, ROC area, the imbalance ratio between the minority and majority classes, and searching time.
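For reference, these indexes can be computed from a 2×2 confusion matrix as sketched below; this is a plain per-class calculation (the confusion-matrix counts are made up for illustration) and may differ from Weka's weighted-average reporting.

```python
def binary_metrics(tp, fp, fn, tn):
    """Indexes reported in the tables, computed from a 2x2 confusion matrix
    for the positive (minority) class."""
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    # Cohen's Kappa: agreement beyond chance; the paper treats >= 0.4 as credible
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - p_chance) / (1 - p_chance)
    tpr = recall = tp / (tp + fn)          # true positive rate
    fpr = fp / (fp + tn)                   # false positive rate
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"accuracy": acc, "kappa": kappa, "tpr": tpr, "fpr": fpr,
            "precision": precision, "recall": recall, "f_measure": f_measure}

m = binary_metrics(tp=40, fp=10, fn=20, tn=130)
print(round(m["kappa"], 3))  # 0.625: above the 0.4 credibility bar
```

Note how Kappa discounts chance agreement: a classifier that labels everything as the majority class can score high Accuracy on an imbalanced set yet a Kappa of 0, which is exactly the pattern visible in the "Original data" rows of the tables.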
Table 8

Average values of Accuracy, Kappa and imbalance ratio (min/maj) for the two methods in experiment 2.

Neural Network | Acc. | Kap. | Imb. ratio | Neural Network | Acc. | Kap. | Imb. ratio
Original data | 98.45% | 0.000 | 0.004 | Original data | 98.45% | 0.000 | 0.004
SMOTE (complete balance, K = 5) | 63.07% | 0.261 | 1.00 | SMOTE (complete balance, K = 5) | 63.07% | 0.261 | 1.00
PSO-Balancing Algorithm | 61.73% | 0.239 | 0.790 | BA-Balancing Algorithm | 62.56% | 0.252 | 0.877
APBA-Window 1 | 98.34% | -0.002 | 0.004 | ABBA-Window 1 | 98.34% | -0.002 | 0.004
APBA-Processed State 1 | 78.41% | 0.565 | 0.861 | ABBA-Processed State 1 | 79.20% | 0.590 | 0.816
APBA-Window 2 | 98.41% | -0.001 | 0.004 | ABBA-Window 2 | 98.41% | -0.001 | 0.004
APBA-Initial State = APBA-PS1 | 72.87% | 0.408 | 0.862 | ABBA-Initial State = ABBA-PS1 | 71.41% | 0.423 | 0.822
APBA-Processed State 2 | 74.13% | 0.481 | 0.974 | ABBA-Processed State 2 | 75.58% | 0.505 | 0.979
APBA-Window 3 | 98.43% | 0.000 | 0.004 | ABBA-Window 3 | 98.43% | 0.000 | 0.004
APBA-Processed State 1 | 68.69% | 0.292 | 0.827 | ABBA-Processed State 1 | 67.22% | 0.315 | 0.783
APBA-Processed State 2 | 69.70% | 0.386 | 0.972 | ABBA-Processed State 2 | 70.71% | 0.397 | 0.977
APBA-Final average results | 74.08% | 0.477 | 0.936 | ABBA-Final average results | 75.17% | 0.497 | 0.924
Table 14

Results of Bioassay 746 dataset in experiment 2.

Data Name: AID 746
Neural Network | Percentage | Nearest Neighbors | Accuracy | Kappa | Psize | Nsize | Searching Time (s) | TPR | FPR | Precision | Recall | F-Measure | ROC Area
Original data | — | — | 99.39% | 0.000 | 293.000 | 47831.000 | — | 0.994 | 0.994 | 0.988 | 0.994 | 0.991 | 0.543
SMOTE (complete balance, K = 5) | 16124 | 5 | 57.40% | 0.148 | 47831.000 | 47831.000 | — | 0.574 | — | 0.763 | 0.574 | 0.481 | 0.577
PSO-Balancing Algorithm | 13743 | 129 | 56.98% | 0.132 | 40559.000 | 47831.000 | 15176.904 | 0.570 | 0.438 | 0.569 | 0.570 | 0.569 | 0.607
APBA-Window 1 | — | — | 99.37% | 0.000 | 49.000 | 7923.000 | — | 0.994 | 0.994 | 0.987 | 0.994 | 0.991 | 0.585
APBA-Processed State 1 | 15393 | 31 | 64.79% | 0.306 | 7591.000 | 7923.000 | 873.725 | 0.648 | 0.337 | 0.795 | 0.648 | 0.602 | 0.706
APBA-Window 2 | — | — | 99.39% | 0.000 | 98.000 | 15846.000 | — | 0.994 | 0.994 | 0.988 | 0.994 | 0.991 | 0.589
APBA-Initial State = APBA-PS1 | — | — | 60.25% | 0.218 | 15183.000 | 15846.000 | — | 0.602 | 0.381 | 0.780 | 0.602 | 0.533 | 0.630
APBA-Processed State 2 | 16120 | 41 | 61.24% | 0.224 | 15895.000 | 15846.000 | 2813.100 | 0.612 | 0.389 | 0.781 | 0.612 | 0.544 | 0.622
APBA-Window 3 | — | — | 99.39% | 0.000 | 147.000 | 23768.000 | — | 0.994 | 0.994 | 0.988 | 0.994 | 0.991 | 0.560
APBA-Processed State 1 | 15393 | 31 | 58.89% | 0.178 | 22774.000 | 23768.000 | — | 0.589 | 0.411 | 0.589 | 0.589 | 0.589 | 0.632
APBA-Processed State 2 | 16120 | 41 | 60.38% | 0.208 | 23843.000 | 23768.000 | — | 0.604 | 0.396 | 0.604 | 0.604 | 0.604 | 0.638
APBA-Final average results | — | — | 62.14% | 0.246 | — | — | 3686.825 | 0.621 | 0.374 | 0.727 | 0.621 | 0.583 | 0.655
BA-Balancing Algorithm | 15863 | 193 | 57.84% | 0.163 | 46771.000 | 47831.000 | 25927.423 | 0.578 | 0.415 | 0.754 | 0.578 | 0.493 | 0.603
ABBA-Window 1 | — | — | 99.37% | 0.000 | 50.000 | 7923.000 | — | 0.994 | 0.994 | 0.987 | 0.994 | 0.991 | 0.585
ABBA-Processed State 1 | 12927 | 45 | 62.77% | 0.238 | 6383.000 | 7923.000 | 2393.697 | 0.628 | 0.393 | 0.625 | 0.628 | 0.624 | 0.691
ABBA-Window 2 | — | — | 99.39% | 0.000 | 98.000 | 15846.000 | — | 0.994 | 0.994 | 0.988 | 0.994 | 0.991 | 0.589
ABBA-Initial State = ABBA-PS1 | — | — | 57.48% | 0.138 | 12766.000 | 15846.000 | — | 0.575 | 0.437 | 0.574 | 0.575 | 0.574 | 0.622
ABBA-Processed State 2 | 15895 | 55 | 60.94% | 0.222 | 15675.000 | 15846.000 | 5962.276 | 0.609 | 0.386 | 0.781 | 0.609 | 0.541 | 0.617
ABBA-Window 3 | — | — | 99.39% | 0.000 | 147.000 | 23768.000 | — | 0.994 | 0.994 | 0.988 | 0.994 | 0.991 | 0.560
ABBA-Processed State 1 | 12927 | 45 | 60.86% | 0.212 | 19149.000 | 23768.000 | — | 0.609 | 0.396 | 0.611 | 0.609 | 0.609 | 0.644
ABBA-Processed State 2 | 15895 | 55 | 61.29% | 0.226 | 23512.000 | 23768.000 | — | 0.613 | 0.387 | 0.613 | 0.613 | 0.613 | 0.643
ABBA-Final average results | — | — | 61.67% | 0.229 | — | — | 8355.974 | 0.617 | 0.389 | 0.673 | 0.617 | 0.593 | 0.650

the grey part means there is no searching time in this step.

APBA means Adaptive PSO Balancing Algorithm; ABBA means Adaptive BA Balancing Algorithm.

Similar to the results for the AID362 dataset in experiment 1, the Swarm Balancing Algorithms perform poorly on all five of the big and highly imbalanced datasets; although the Kappa values are not equal to 0, they still lie within the areas of low or no credibility. However, both Accuracy and Kappa show great improvement when the Adaptive Swarm Balancing Algorithms are used. The results of the changes in the data with the Adaptive Swarm Balancing Algorithms shown in Tables 9 to 14 range from the best to the worst.
Table 9

Results of Bioassay 456 dataset in experiment 2.

Data Name: AID 456
Neural Network | Percentage | Nearest Neighbors | Accuracy | Kappa | Psize | Nsize | Searching Time (s) | TPR | FPR | Precision | Recall | F-Measure | ROC Area
Original data | — | — | 99.72% | 0.000 | 22.000 | 7964.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.512
SMOTE (complete balance, K = 5) | 36100 | 5 | 69.64% | 0.393 | 7964.000 | 7964.000 | — | 0.696 | 0.304 | 0.811 | 0.696 | 0.666 | 0.710
PSO-Balancing Algorithm | 26247 | 21 | 64.88% | 0.353 | 5796.000 | 7964.000 | 406.268 | 0.649 | 0.256 | 0.808 | 0.649 | 0.624 | 0.706
APBA-Window 1 | — | — | 99.70% | 0.000 | 4.000 | 1327.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.995 | 0.415
APBA-Processed State 1 | 27046 | 3 | 97.64% | 0.953 | 1085.000 | 1327.000 | 121.900 | 0.976 | 0.019 | 0.978 | 0.976 | 0.976 | 0.986
APBA-Window 2 | — | — | 99.74% | 0.000 | 7.000 | 2655.000 | — | 0.997 | 0.997 | 0.995 | 0.997 | 0.996 | 0.634
APBA-Initial State = APBA-PS1 | 27046 | 3 | 92.27% | 0.845 | 1900.000 | 2655.000 | — | 0.923 | 0.055 | 0.935 | 0.923 | 0.923 | 0.960
APBA-Processed State 2 | 34165 | 4 | 93.03% | 0.861 | 2398.000 | 2655.000 | 119.471 | 0.930 | 0.063 | 0.939 | 0.930 | 0.930 | 0.947
APBA-Window 3 | — | — | 99.72% | 0.000 | 11.000 | 3982.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.463
APBA-Processed State 1 | 27046 | 3 | 77.90% | 0.576 | 2986.000 | 3982.000 | — | 0.779 | 0.166 | 0.854 | 0.779 | 0.775 | 0.815
APBA-Processed State 2 | 34165 | 4 | 79.96% | 0.603 | 3769.000 | 3982.000 | — | 0.800 | 0.190 | 0.858 | 0.800 | 0.792 | 0.831
APBA-Final average results | — | — | 90.21% | 0.806 | — | — | 241.370 | 0.902 | 0.091 | 0.925 | 0.902 | 0.899 | 0.921
BA-Balancing Algorithm | 28060 | 17 | 66.02% | 0.362 | 6195.000 | 7964.000 | 504.073 | 0.660 | 0.268 | 0.791 | 0.660 | 0.637 | 0.712
ABBA-Window 1 | — | — | 99.70% | 0.000 | 4.000 | 1327.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.995 | 0.415
ABBA-Processed State 1 | 29511 | 10 | 97.65% | 0.953 | 1184.000 | 1327.000 | 274.540 | 0.977 | 0.021 | 0.978 | 0.977 | 0.977 | 0.986
ABBA-Window 2 | — | — | 99.74% | 0.000 | 7.000 | 2655.000 | — | 0.997 | 0.997 | 0.995 | 0.997 | 0.996 | 0.634
ABBA-Initial State = ABBA-PS1 | 29511 | 10 | 92.66% | 0.854 | 2072.000 | 2655.000 | — | 0.927 | 0.057 | 0.937 | 0.927 | 0.927 | 0.946
ABBA-Processed State 2 | 33294 | 10 | 93.03% | 0.861 | 2337.000 | 2655.000 | 123.385 | 0.930 | 0.061 | 0.939 | 0.930 | 0.930 | 0.943
ABBA-Window 3 | — | — | 99.72% | 0.000 | 11.000 | 3982.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.463
ABBA-Processed State 1 | 29511 | 10 | 78.35% | 0.581 | 3257.000 | 3982.000 | — | 0.784 | 0.177 | 0.854 | 0.784 | 0.778 | 0.836
ABBA-Processed State 2 | 33294 | 10 | 79.53% | 0.597 | 3673.000 | 3982.000 | — | 0.795 | 0.189 | 0.857 | 0.795 | 0.788 | 0.837
ABBA-Final average results | — | — | 90.07% | 0.804 | — | — | 397.925 | 0.901 | 0.090 | 0.925 | 0.901 | 0.898 | 0.922

the grey part means there is no searching time in this step.

APBA means Adaptive PSO Balancing Algorithm; ABBA means Adaptive BA Balancing Algorithm.

Table 8 shows the average Accuracy, Kappa and imbalance ratio (min/maj) from Tables 9 to 14. The data highlighted in bold are the classification results of the original dataset, the Swarm Balancing Algorithms and the Adaptive Swarm Balancing Algorithms; the other parts respectively reflect the results of Window 1, Window 2 and Window 3. Because the windows grow longer from Window 1 to Window 3, the results of the Adaptive Swarm Balancing Algorithms gradually worsen; however, the final results are, on the whole, much better than those of the Swarm Balancing Algorithms. When the algorithms process a big dataset, traditional SMOTE is slightly better than the Swarm Balancing Algorithms, but the Adaptive BA-Balancing algorithm is better than both traditional SMOTE and the Adaptive PSO-Balancing algorithm on all three performance parameters, because it uses less synthetic minority class data and achieves higher accuracy with a higher Kappa value than the latter two. At the same time, it is easy to see that the problem with the AID362 dataset in experiment 1 has been solved in experiment 2, which shows that the Adaptive Swarm Balancing Algorithms are highly effective in processing a large imbalanced dataset. It must be mentioned that the performance on the 746 dataset, which contains the most data of all, is not good due to its large size; however, we believe that if we use more windows to process this dataset, the results can be improved. Fig 4 separates the two key performance parameters, Accuracy and Kappa, from Table 8 and depicts them graphically in a bar diagram. Both approaches increase the Kappa value, but the increase with the Adaptive Swarm Balancing Algorithms is twice that of the Swarm Balancing Algorithms. Moreover, the Kappa value with the latter method still indicates non-credibility, whereas that with the former method indicates credibility and, thereby, higher accuracy.
Furthermore, in terms of average values, the performance parameters with traditional SMOTE are much worse than those with the two new Swarm Balancing Algorithms, indicating that optimizing the parameters matters more than fully rebalancing the dataset: a completely balanced dataset does not necessarily yield a better result.
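The windowed "data feed" procedure the adaptive algorithms follow can be sketched in a few lines. `adaptive_balance` and `swarm_balance` are hypothetical names, and the swarm-tuned SMOTE step is abstracted behind a callback rather than implemented.

```python
def adaptive_balance(stream, n_windows, swarm_balance):
    """Split the data feed into windows; carry the minority pool forward
    and rebalance each segment in turn (the windowed strategy above)."""
    size = len(stream) // n_windows
    minority_pool, results = [], []
    for w in range(n_windows):
        window = stream[w * size:(w + 1) * size]
        minority_pool += [x for x in window if x["label"] == 1]
        majority = [x for x in window if x["label"] == 0]
        # swarm_balance stands in for the PSO/BA-tuned SMOTE step;
        # tuning per segment keeps each parameter search small
        results.append(swarm_balance(minority_pool, majority))
    return results

# Toy feed: 1 minority sample per 10 records, processed in 3 windows
feed = [{"label": 1 if i % 10 == 0 else 0} for i in range(30)]
counts = adaptive_balance(feed, 3, lambda mino, majo: (len(mino), len(majo)))
print(counts)  # minority pool grows window by window: [(1, 9), (2, 9), (3, 9)]
```

Accumulating the minority pool across windows mirrors the tables' "Initial State = PS1" rows, where each new window starts from the previously processed minority set.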
Fig 4

Average Accuracy and Kappa of different methods in experiment 2.

The Swarm Balancing Algorithms can save time, compared with the brute-force method on the same dataset, when finding the best global parameters. For example, consider an imbalanced dataset in which the imbalance ratio between the majority and minority classes is 10 and the minority class has 20 samples. S can then take 9801 different values from 100 to 9900, and K ranges from 2 to 20. Therefore, the brute-force method must try a total of 9801 × 19 = 186,219 combinations, requiring many repetitions to find the most suitable values of S and K. With a large and highly imbalanced dataset, the brute-force method would need to try many more combinations. It is clear that the Adaptive Swarm Balancing Algorithms can save even more time: as Fig 5 shows, they take only one-third to one-fourth of the time required by the Swarm Balancing Algorithms. Meanwhile, it is also easy to see that PSO is faster than BA in the experiment. Therefore, with real-world data, the adaptive approach performs better, and it is more practical because it can flexibly process a large dataset in real time. Furthermore, applying the brute-force method to dataset 1608, the smallest of the six datasets, required 10,279,963.42 sec. In comparison, the PSO Swarm Balancing, Adaptive PSO-Balancing, BA Swarm Balancing and Adaptive BA-Balancing algorithms required only 157.22 sec., 93.58 sec., 172.09 sec. and 123.97 sec., respectively.
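The combination count in the example above can be checked directly; the swarm-side population and iteration figures below are illustrative assumptions, not the paper's exact settings.

```python
# Worked example from the text: minority size 20, imbalance ratio 10
S_values = range(100, 9901)   # SMOTE oversampling percentage, step 1
K_values = range(2, 21)       # nearest-neighbour count 2..20
brute_force = len(S_values) * len(K_values)
print(brute_force)            # 9801 * 19 = 186219 (S, K) combinations

# A swarm search only evaluates population * iterations candidates;
# the numbers below are illustrative
population, iterations = 20, 50
print(population * iterations)  # 1000 fitness evaluations
```

Since every candidate evaluation requires running SMOTE plus training a classifier, cutting 186,219 evaluations down to on the order of a thousand is where the reported speed-up comes from.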
Fig 5

Average time of our four methods in experiment 2.

Conclusion

Our methods clearly show their effectiveness in processing the imbalanced dataset classification problem at different dataset sizes. The meta-heuristic algorithms can select the parameters of SMOTE, without prior knowledge of the dataset, to obtain relatively high accuracy with a Kappa value that falls within the credible range. To handle the different dataset sizes, we used two methods to respectively improve the processing of normal-size and large imbalanced datasets. The experiments indicate that the Swarm Balancing Algorithms are more suitable for small datasets, and that if we treat a big dataset as a data feed, the Adaptive Swarm Balancing Algorithms solve its imbalance problem more quickly and effectively. On the small and normal-size datasets, from whichever aspect is assessed with the neural network classification algorithm, PSO was better than BA. With large datasets, however, except for searching time, where PSO is still faster than BA, the other important performance parameters are better with BA than with PSO. The Adaptive Swarm Balancing Algorithms operate as a process of constant iteration and learning, which suits the actual problems found in health and medical datasets: because the number of diagnosed cases increases daily, the gradual accumulation of cases grows the dataset into a large one that needs to be processed as a data feed. Therefore, the Adaptive Swarm Balancing Algorithms can effectively solve the imbalanced data classification problem in the large datasets typically found in the health and medical field. These methods will help the classifier to accurately classify and identify patient data.

Thoracic surgery in experiment 1.

(CSV) Click here for additional data file.

Bioassay AID439 dataset in experiment 1.

(CSV) Click here for additional data file.

Bioassay AID721 dataset in experiment 1.

(CSV) Click here for additional data file.

Bioassay AID1284 dataset in experiment 1.

(CSV) Click here for additional data file.

Bioassay AID362 dataset in experiments 1 and 2.

(CSV) Click here for additional data file.

Bioassay AID1608 dataset in experiment 2.

(CSV) Click here for additional data file.

Bioassay AID746 dataset in experiment 2.

(CSV) Click here for additional data file.

Bioassay AID687 dataset in experiment 2.

(CSV) Click here for additional data file.

Bioassay AID456 dataset in experiment 2.

(CSV) Click here for additional data file.

Bioassay AID373 dataset in experiment 2.

(CSV) Click here for additional data file.
Table 10

Results of Bioassay 362 dataset in experiment 2.

Data Name: AID 362
Neural Network | Percentage | Nearest Neighbors | Accuracy | Kappa | Psize | Nsize | Searching Time (s) | TPR | FPR | Precision | Recall | F-Measure | ROC Area
Original data | — | — | 98.60% | 0.000 | 48.000 | 3375.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.580
SMOTE (complete balance, K = 5) | 69.3125 | 5 | 63.14% | 0.263 | 3374.000 | 3375.000 | — | 0.631 | 0.369 | 0.786 | 0.631 | 0.574 | 0.641
PSO-Balancing Algorithm | 6969 | 36 | 63.18% | 0.262 | 3393.000 | 3375.000 | 136.673 | 0.632 | 0.370 | 0.787 | 0.632 | 0.574 | 0.643
APBA-Window 1 | — | — | 98.60% | 0.000 | 8.000 | 563.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.398
APBA-Processed State 1 | 4710 | 2 | 85.53% | 0.716 | 384.000 | 563.000 | 64.781 | 0.855 | 0.099 | 0.893 | 0.855 | 0.856 | 0.894
APBA-Window 2 | — | — | 98.60% | 0.000 | 16.000 | 1125.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.555
APBA-Initial State = APBA-PS1 | — | — | 75.40% | 0.528 | 769.000 | 1125.000 | — | 0.754 | 0.184 | 0.819 | 0.754 | 0.753 | 0.862
APBA-Processed State 2 | 9700.6 | 10 | 82.92% | 0.627 | 1568.000 | 1125.000 | 26.575 | 0.829 | 0.238 | 0.867 | 0.829 | 0.818 | 0.853
APBA-Window 3 | — | — | 98.60% | 0.000 | 24.000 | 1687.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.484
APBA-Processed State 1 | 4710 | 2 | 61.99% | 0.237 | 1154.000 | 1687.000 | — | 0.620 | 0.374 | 0.636 | 0.620 | 0.623 | 0.694
APBA-Processed State 2 | 9700.6 | 10 | 73.30% | 0.396 | 2354.000 | 1687.000 | — | 0.733 | 0.373 | 0.817 | 0.733 | 0.695 | 0.727
APBA-Final average results | — | — | 80.58% | 0.580 | — | — | 91.356 | 0.806 | 0.237 | 0.859 | 0.806 | 0.790 | 0.825
BA-Balancing Algorithm | 6867 | 28 | 62.91% | 0.261 | 3344.000 | 3375.000 | 149.354 | 0.629 | 0.367 | 0.787 | 0.629 | 0.571 | 0.638
ABBA-Window 1 | — | — | 98.60% | 0.000 | 8.000 | 563.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.398
ABBA-Processed State 1 | 4710 | 2 | 85.53% | 0.716 | 384.000 | 563.000 | 75.465 | 0.855 | 0.099 | 0.893 | 0.855 | 0.856 | 0.894
ABBA-Window 2 | — | — | 98.60% | 0.000 | 16.000 | 1125.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.555
ABBA-Initial State = ABBA-PS1 | — | — | 75.40% | 0.528 | 769.000 | 1125.000 | — | 0.754 | 0.184 | 0.819 | 0.754 | 0.753 | 0.862
ABBA-Processed State 2 | 9893 | 10 | 85.57% | 0.686 | 1599.000 | 1125.000 | 48.299 | 0.856 | 0.205 | 0.884 | 0.856 | 0.848 | 0.876
ABBA-Window 3 | — | — | 98.60% | 0.000 | 24.000 | 1687.000 | — | 0.986 | 0.986 | 0.972 | 0.986 | 0.979 | 0.484
ABBA-Processed State 1 | 4710 | 2 | 61.99% | 0.237 | 1154.000 | 1687.000 | — | 0.620 | 0.374 | 0.636 | 0.620 | 0.623 | 0.694
ABBA-Processed State 2 | 9893 | 10 | 72.91% | 0.381 | 2400.000 | 1687.000 | — | 0.729 | 0.385 | 0.815 | 0.729 | 0.688 | 0.696
ABBA-Final average results | — | — | 81.34% | 0.594 | — | — | 123.765 | 0.813 | 0.230 | 0.864 | 0.813 | 0.797 | 0.822

the grey part means there is no searching time in this step.

APBA means Adaptive PSO Balancing Algorithm; ABBA means Adaptive BA Balancing Algorithm.

Table 11

Results of Bioassay 1608 dataset in experiment 2.

Data Name: AID 1608
Neural Network | Percentage | Nearest Neighbors | Accuracy | Kappa | Psize | Nsize | Searching Time (s) | TPR | FPR | Precision | Recall | F-Measure | ROC Area
Original data | — | — | 93.35% | 0.000 | 55.000 | 772.000 | — | 0.933 | 0.933 | 0.871 | 0.933 | 0.901 | 0.410
SMOTE (complete balance, K = 5) | 1304 | 5 | 59.33% | 0.187 | 55.000 | 772.000 | — | 0.593 | 0.407 | 0.748 | 0.593 | 0.518 | 0.601
PSO-Balancing Algorithm | 1162 | 50 | 56.75% | 0.165 | 694.000 | 772.000 | 157.216 | 0.568 | 0.397 | 0.659 | 0.568 | 0.515 | 0.616
APBA-Window 1 | — | — | 92.75% | -0.013 | 9.000 | 129.000 | — | 0.928 | 0.935 | 0.873 | 0.928 | 0.900 | 0.462
APBA-Processed State 1 | 386 | 3 | 75.00% | 0.419 | 43.000 | 129.000 | 64.004 | 0.750 | 0.269 | 0.793 | 0.750 | 0.763 | 0.866
APBA-Window 2 | — | — | 93.09% | -0.007 | 18.000 | 257.000 | — | 0.931 | 0.935 | 0.873 | 0.931 | 0.901 | 0.372
APBA-Initial State = APBA-PS1 | — | — | 75.29% | 0.165 | 87.000 | 257.000 | — | 0.753 | 0.623 | 0.713 | 0.753 | 0.705 | 0.753
APBA-Processed State 2 | 1062 | 19 | 71.03% | 0.448 | 209.000 | 257.000 | 29.576 | 0.710 | 0.236 | 0.824 | 0.710 | 0.694 | 0.754
APBA-Window 3 | — | — | 93.24% | 0.000 | 28.000 | 386.000 | — | 0.932 | 0.932 | 0.869 | 0.932 | 0.900 | 0.562
APBA-Processed State 1 | 386 | 3 | 73.95% | 0.000 | 136.000 | 386.000 | — | 0.739 | 0.739 | 0.547 | 0.739 | 0.629 | 0.651
APBA-Processed State 2 | 1062 | 19 | 65.22% | 0.338 | 327.000 | 386.000 | — | 0.652 | 0.295 | 0.798 | 0.652 | 0.618 | 0.678
APBA-Final average results | — | — | 70.42% | 0.401 | — | — | 93.580 | 0.704 | 0.267 | 0.805 | 0.704 | 0.692 | 0.766
BA-Balancing Algorithm | 1398 | 29 | 61.00% | 0.200 | 823.000 | 772.000 | 172.092 | 0.610 | 0.415 | 0.763 | 0.610 | 0.535 | 0.613
ABBA-Window 1 | — | — | 92.75% | -0.013 | 9.000 | 129.000 | — | 0.928 | 0.935 | 0.873 | 0.928 | 0.900 | 0.462
ABBA-Processed State 1 | 685 | 2 | 81.41% | 0.636 | 70.000 | 129.000 | 76.199 | 0.814 | 0.101 | 0.878 | 0.814 | 0.818 | 0.866
ABBA-Window 2 | — | — | 93.09% | -0.007 | 18.000 | 257.000 | — | 0.931 | 0.935 | 0.873 | 0.931 | 0.901 | 0.372
ABBA-Initial State = ABBA-PS1 | — | — | 71.36% | 0.305 | 141.000 | 257.000 | — | 0.714 | 0.439 | 0.705 | 0.714 | 0.689 | 0.770
ABBA-Processed State 2 | 1595 | 30 | 75.98% | 0.496 | 305.000 | 257.000 | 47.766 | 0.760 | 0.284 | 0.830 | 0.760 | 0.739 | 0.772
ABBA-Window 3 | — | — | 93.24% | 0.000 | 28.000 | 386.000 | — | 0.932 | 0.932 | 0.869 | 0.932 | 0.900 | 0.562
ABBA-Processed State 1 | 685 | 2 | 63.31% | 0.072 | 219.000 | 386.000 | — | 0.633 | 0.572 | 0.591 | 0.633 | 0.571 | 0.661
ABBA-Processed State 2 | 1595 | 30 | 70.96% | 0.375 | 475.000 | 386.000 | — | 0.651 | 0.421 | 0.773 | 0.651 | 0.587 | 0.620
ABBA-Final average results | — | — | 76.12% | 0.502 | — | — | 123.965 | 0.742 | 0.269 | 0.827 | 0.742 | 0.715 | 0.753

the grey part means there is no searching time in this step.

APBA means Adaptive PSO Balancing Algorithm; ABBA means Adaptive BA Balancing Algorithm.

Table 12

Results of Bioassay 373 dataset in experiment 2.

Data Name: AID 373
Neural Network | Percentage | Nearest Neighbors | Accuracy | Kappa | Psize | Nsize | Searching Time (s) | TPR | FPR | Precision | Recall | F-Measure | ROC Area
Original data | — | — | 99.90% | 0.000 | 50.000 | 47831.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.658
SMOTE (complete balance, K = 5) | 95462 | 5 | 68.54% | 0.371 | 47831.000 | 47831.000 | — | 0.685 | 0.315 | 0.807 | 0.685 | 0.651 | 0.719
PSO-Balancing Algorithm | 61214 | 43 | 65.27% | 0.251 | 30657.000 | 47831.000 | 538.195 | 0.653 | 0.409 | 0.645 | 0.653 | 0.647 | 0.720
APBA-Window 1 | — | — | 99.90% | 0.000 | 8.000 | 7964.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.592
APBA-Processed State 1 | 81124 | 7 | 74.48% | 0.510 | 6497.000 | 7964.000 | 106.964 | 0.745 | 0.208 | 0.837 | 0.745 | 0.735 | 0.831
APBA-Window 2 | — | — | 99.89% | 0.000 | 17.000 | 15927.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.455
APBA-Initial State = APBA-PS1 | — | — | 67.12% | 0.381 | 13808.000 | 15927.000 | — | 0.671 | 0.258 | 0.784 | 0.671 | 0.655 | 0.462
APBA-Processed State 2 | 91409 | 14 | 69.89% | 0.402 | 15556.000 | 15927.000 | 129.514 | 0.699 | 0.295 | 0.776 | 0.699 | 0.678 | 0.732
APBA-Window 3 | — | — | 99.90% | 0.000 | 25.000 | 23890.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.639
APBA-Processed State 1 | 81124 | 7 | 77.98% | 0.572 | 17793.000 | 23890.000 | — | 0.780 | 0.187 | 0.851 | 0.780 | 0.773 | 0.850
APBA-Processed State 2 | 91409 | 14 | 79.19% | 0.587 | 22877.000 | 23890.000 | — | 0.792 | 0.199 | 0.854 | 0.792 | 0.784 | 0.836
APBA-Final average results | — | — | 74.52% | 0.500 | — | — | 236.478 | 0.745 | 0.234 | 0.822 | 0.745 | 0.732 | 0.800
BA-Balancing Algorithm | 71076 | 44 | 64.29% | 0.256 | 35588.000 | 47831.000 | 541.370 | 0.643 | 0.392 | 0.638 | 0.643 | 0.638 | 0.718
ABBA-Window 1 | — | — | 99.90% | 0.000 | 8.000 | 7964.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.592
ABBA-Processed State 1 | 81124 | 7 | 74.48% | 0.510 | 6497.000 | 7964.000 | 106.964 | 0.745 | 0.208 | 0.837 | 0.745 | 0.735 | 0.831
ABBA-Window 2 | — | — | 99.89% | 0.000 | 17.000 | 15927.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.455
ABBA-Initial State = ABBA-PS1 | — | — | 67.12% | 0.381 | 13808.000 | 15927.000 | — | 0.671 | 0.258 | 0.784 | 0.671 | 0.655 | 0.462
ABBA-Processed State 2 | 89403 | 10 | 70.12% | 0.401 | 15215.000 | 15927.000 | 167.194 | 0.701 | 0.301 | 0.702 | 0.701 | 0.700 | 0.772
ABBA-Window 3 | — | — | 99.90% | 0.000 | 25.000 | 23890.000 | — | 0.999 | 0.999 | 0.998 | 0.999 | 0.998 | 0.639
ABBA-Processed State 1 | 81124 | 7 | 77.98% | 0.572 | 17793.000 | 23890.000 | — | 0.780 | 0.187 | 0.851 | 0.780 | 0.773 | 0.850
ABBA-Processed State 2 | 89403 | 10 | 79.05% | 0.586 | 22375.000 | 23890.000 | — | 0.791 | 0.196 | 0.854 | 0.791 | 0.783 | 0.844
ABBA-Final average results | — | — | 74.55% | 0.499 | — | — | 274.159 | 0.746 | 0.235 | 0.798 | 0.746 | 0.739 | 0.816

the grey part means there is no searching time in this step.

APBA means Adaptive PSO Balancing Algorithm; ABBA means Adaptive BA Balancing Algorithm.

Table 13

Results of Bioassay 687 dataset in experiment 2.

Data Name: AID 687
Neural Network | Percentage | Nearest Neighbors | Accuracy | Kappa | Psize | Nsize | Searching Time (s) | TPR | FPR | Precision | Recall | F-Measure | ROC Area
Original data | — | — | 99.71% | 0.000 | 76.000 | 26378.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.577
SMOTE (complete balance, K = 5) | 34608 | 5 | 60.37% | 0.207 | 26378.000 | 26378.000 | — | 0.604 | 0.396 | 0.779 | 0.604 | 0.530 | 0.641
PSO-Balancing Algorithm | 32368 | 75 | 63.30% | 0.268 | 24675.000 | 26378.000 | 6813.241 | 0.633 | 0.364 | 0.636 | 0.633 | 0.633 | 0.673
APBA-Window 1 | — | — | 99.71% | 0.000 | 13.000 | 4396.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.533
APBA-Processed State 1 | 27568 | 11 | 73.02% | 0.483 | 3596.000 | 4396.000 | 146.408 | 0.730 | 0.221 | 0.831 | 0.730 | 0.718 | 0.799
APBA-Window 2 | — | — | 99.72% | 0.000 | 25.000 | 8793.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.535
APBA-Initial State = APBA-PS1 | — | — | 66.91% | 0.312 | 6719.000 | 8793.000 | — | 0.669 | 0.365 | 0.668 | 0.669 | 0.661 | 0.743
APBA-Processed State 2 | 31180 | 12 | 66.69% | 0.324 | 7820.000 | 8793.000 | 926.972 | 0.667 | 0.347 | 0.670 | 0.667 | 0.661 | 0.747
APBA-Window 3 | — | — | 99.71% | 0.000 | 38.000 | 13189.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.513
APBA-Processed State 1 | 27568 | 11 | 61.44% | 0.186 | 10513.000 | 13189.000 | — | 0.614 | 0.436 | 0.613 | 0.614 | 0.593 | 0.659
APBA-Processed State 2 | 31180 | 12 | 60.14% | 0.185 | 11886.000 | 13189.000 | — | 0.601 | 0.420 | 0.609 | 0.601 | 0.583 | 0.662
APBA-Final average results | — | — | 66.62% | 0.331 | — | — | 1073.380 | 0.666 | 0.329 | 0.703 | 0.666 | 0.654 | 0.736
BA-Balancing Algorithm | 32368 | 75 | 63.30% | 0.268 | 24675.000 | 26378.000 | 5139.300 | 0.633 | 0.364 | 0.636 | 0.633 | 0.633 | 0.673
ABBA-Window 1 | — | — | 99.71% | 0.000 | 13.000 | 4396.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.533
ABBA-Processed State 1 | 28295 | 7 | 73.35% | 0.487 | 3691.350 | 4396.000 | 262.371 | 0.734 | 0.224 | 0.832 | 0.734 | 0.720 | 0.790
ABBA-Window 2 | — | — | 99.72% | 0.000 | 25.000 | 8793.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.535
ABBA-Initial State = ABBA-PS1 | — | — | 64.42% | 0.332 | 7098.000 | 8793.000 | — | 0.644 | 0.287 | 0.802 | 0.644 | 0.611 | 0.702
ABBA-Processed State 2 | 34008 | 8 | 67.85% | 0.363 | 8526.000 | 8793.000 | 1000.736 | 0.678 | 0.312 | 0.806 | 0.678 | 0.644 | 0.732
ABBA-Window 3 | — | — | 99.71% | 0.000 | 38.000 | 13189.000 | — | 0.997 | 0.997 | 0.994 | 0.997 | 0.996 | 0.513
ABBA-Processed State 1 | 28295 | 7 | 60.82% | 0.213 | 10790.000 | 13189.000 | — | 0.608 | 0.393 | 0.611 | 0.608 | 0.609 | 0.657
ABBA-Processed State 2 | 34008 | 8 | 60.51% | 0.216 | 12961.000 | 13189.000 | — | 0.605 | 0.388 | 0.780 | 0.605 | 0.534 | 0.669
ABBA-Final average results | — | — | 67.24% | 0.355 | — | — | 1263.107 | 0.672 | 0.308 | 0.806 | 0.672 | 0.633 | 0.730

the grey part means there is no searching time in this step.

APBA means Adaptive PSO Balancing Algorithm; ABBA means Adaptive BA Balancing Algorithm.

References

1.  Medical imaging data reconciliation, part 3: reconciliation of historical and current radiology report data.

Authors:  Bruce I Reiner
Journal:  J Am Coll Radiol       Date:  2011-11       Impact factor: 5.532

2.  Understanding interobserver agreement: the kappa statistic.

Authors:  Anthony J Viera; Joanne M Garrett
Journal:  Fam Med       Date:  2005-05       Impact factor: 1.756

3.  High-b value diffusion-weighted MRI for detecting pancreatic adenocarcinoma: preliminary results.

Authors:  Tomoaki Ichikawa; Sukru Mehmet Erturk; Utarou Motosugi; Hironobu Sou; Hiroshi Iino; Tsutomu Araki; Hideki Fujii
Journal:  AJR Am J Roentgenol       Date:  2007-02       Impact factor: 3.959

4.  imDC: an ensemble learning method for imbalanced classification with miRNA data.

Authors:  C Y Wang; L L Hu; M Z Guo; X Y Liu; Q Zou
Journal:  Genet Mol Res       Date:  2015-01-15

5.  Potentiality of big data in the medical sector: focus on how to reshape the healthcare system.

Authors:  Kyoungyoung Jee; Gang-Hoon Kim
Journal:  Healthc Inform Res       Date:  2013-06-30

6.  A novel hybrid self-adaptive bat algorithm.

Authors:  Iztok Fister; Simon Fong; Janez Brest; Iztok Fister
Journal:  ScientificWorldJournal       Date:  2014-04-09

7.  Adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique algorithm for tackling binary imbalanced datasets in biomedical data classification.

Authors:  Jinyan Li; Simon Fong; Yunsick Sung; Kyungeun Cho; Raymond Wong; Kelvin K L Wong
Journal:  BioData Min       Date:  2016-12-01       Impact factor: 2.522

8.  nDNA-Prot: identification of DNA-binding proteins based on unbalanced classification.

Authors:  Li Song; Dapeng Li; Xiangxiang Zeng; Yunfeng Wu; Li Guo; Quan Zou
Journal:  BMC Bioinformatics       Date:  2014-09-08       Impact factor: 3.169

