Syed Fakhar Bilal, Abdulwahab Ali Almazroi, Saba Bashir, Farhan Hassan Khan, Abdulaleem Ali Almazroi.
Abstract
Mobile communication has become a dominant medium of communication over the past two decades. New technologies and competitors are emerging rapidly, and churn prediction has become a major concern for telecom companies. A customer churn prediction model can accurately identify potential churners so that a retention offer can be made to them. The proposed churn prediction model is a hybrid that combines clustering and classification algorithms using an ensemble. First, different clustering algorithms (i.e., K-means, K-medoids, X-means and random clustering) were evaluated individually on two churn prediction datasets. Hybrid models were then built by combining the clusters with seven different classification algorithms individually, and these combinations were further evaluated using ensembles. The proposed approach was evaluated on two benchmark telecom datasets obtained from the GitHub and Bigml platforms. The analysis of the results indicated that the proposed model attained the highest prediction accuracy of 94.7% on the GitHub dataset and 92.43% on the Bigml dataset. A comparison with state-of-the-art techniques was also performed; the proposed model performed significantly better than existing churn prediction models.
Keywords: Churn prediction; Classification; Clustering; Decision support system; Hybrid model
Year: 2022 PMID: 35494841 PMCID: PMC9044233 DOI: 10.7717/peerj-cs.854
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
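The abstract describes a two-stage hybrid: customers are first clustered (K-means, K-medoids, X-means or random clustering) and the cluster assignments are then combined with a classifier, with ensembles layered on top. The following is a minimal sketch of that cluster-then-classify idea in Python with scikit-learn; the synthetic data, the use of `KMeans` as a stand-in for K-medoids (a `KMedoids` estimator exists in the separate `scikit-learn-extra` package), and the gradient-boosted-tree classifier are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a cluster-then-classify churn pipeline (not the authors' code).
import numpy as np
from sklearn.cluster import KMeans                      # K-medoids would be analogous (scikit-learn-extra)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                         # placeholder customer attributes
y = (rng.random(1000) < 0.15).astype(int)               # ~15% churners, a telecom-like imbalance (assumed)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Step 1: unsupervised clustering; the cluster id becomes an extra feature.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_train)
X_train_aug = np.column_stack([X_train, kmeans.predict(X_train)])
X_test_aug = np.column_stack([X_test, kmeans.predict(X_test)])

# Step 2: supervised classification on the cluster-augmented features.
clf = GradientBoostingClassifier(random_state=0).fit(X_train_aug, y_train)
pred = clf.predict(X_test_aug)
print("accuracy:", accuracy_score(y_test, pred), "F1:", f1_score(y_test, pred))
```

The ensemble tables later in this record apply voting, bagging, stacking and AdaBoost combinations on top of base classifiers trained on such cluster-augmented features.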
State of the art techniques for customer churn prediction.
| Year | Author | Techniques | Accuracy (%) | Precision (%) | Recall (%) | F-Measure (%) |
|---|---|---|---|---|---|---|
| 2021 | Maldonado, S., Domínguez, G., Olaya, D., & Verbeke, W. | Logit | 56.43 | – | – | – |
| | | K Nearest Neighbor | 64.07 | – | – | – |
| | | CART | 57.05 | – | – | – |
| | | Random Forest | 66.24 | – | – | – |
| 2020 | Adhikary, D. D., & Gupta, D. | Voting | 71.3 | NA | 71.4 | NA |
| | | Bagging | 71.9 | 69.3 | 71.9 | 62.1 |
| | | AdaBoost | 71.3 | NA | 71.4 | NA |
| | | Stacking | 71.3 | NA | 71.4 | NA |
| 2020 | Vural, U., Okay, M. E., & Yildiz, E. M. | Artificial Neural Network | 89 | – | – | – |
| 2019 | Saghir, M., Bibi, Z., Bashir, S., & Khan, F. H. | Bagging | 80.8 | 81.88 | 75.28 | 78.44 |
| | | AdaBoost | 73.9 | 70.46 | 73.74 | 72.06 |
| 2019 | Singh, B. E. R., & Sivasankar, E. | Bagging | 79.13 | NA | NA | NA |
| | | Boosting | 82.03 | NA | NA | NA |
| 2019 | Pamina, J., Raja, B., SathyaBama | XG | 79.8 | – | – | 58.2 |
| | | K Nearest Neighbor | 75.4 | – | – | 49 |
| | | Random Forest | 77 | – | – | 50.6 |
| 2019 | Halibas, A. S., Matthew | Gradient Boosted Tree | 79.1 | 73.1 | 79.6 | 76.2 |
| 2019 | Amin, A., Shah, B., Abbas | Genetic Algorithm + Naïve Bayes | 89.1 | 95.65 | 16.92 | 28.76 |
| 2019 | Saghir, M., Bibi, Z., Bashir | Ensemble Classifier with Bagging and Neural Network | 81 | 81.56 | 73.74 | 72.06 |
| 2019 | Ullah, I., Raza, B., Malik, A. K. | Random Forest | 88 | 89.1 | 89.6 | 87.6 |
| | | Random Tree | 0.85 | – | – | 21.5 |
| 2019 | Bharat, A. | Logistic Regression | 70 | – | – | – |
| 2019 | Gajowniczek, K., Orłowski, A. | Entropy Cost Function Neural Network | 60 | 74 | 77 | N/A |
| 2018 | J. Vijaya and E. Sivasankar | K-means | 87.61 | 93.68 | 12.23 | – |
| | | K-medoids | 90.91 | 98 | 28.4 | – |
| | | Naïve Bayes | 25.5 | 100 | – | – |
| | | K Nearest Neighbor | 91.39 | 99 | 1 | – |
| 2018 | Höppner, S., Stripling, E. | EMPC with Decision Tree | 89 | 94.81 | – | 60.7 |
| 2018 | Ali, M., Rehman, A. | Support Vector Machine | 90 | 98.2 | N/A | 98.1 |
| | | Bagging Stacking | 85.5 | 73.1 | 78.8 | – |
| | | Naïve Bayes | 92.9 | 92.7 | 92.7 | – |
| 2018 | Amin, A., Shah, B., Khattak | Naïve Bayes | 86 | N/A | N/A | 16.7 |
| | | K Nearest Neighbor | 85 | – | – | 16.6 |
| | | Gradient Boosted Tree | 72 | – | – | 17.3 |
| | | SRI | 16.7 | – | – | 87 |
| | | DP | 80 | – | – | 16.0 |
| 2018 | Amin, A., Shah, B. | JIT | 59 | – | – | – |
| 2018 | Runsha Dong, Fei Su | Support Vector Machine | 70.6 | – | – | – |
| 2018 | Vo, N. N., Liu, S., Brownlo | XGBoost Algorithm | 81.08 | – | – | – |
| 2018 | Zhang, X., Zhang, Z., Liang | Decision Tree | 70 | – | – | – |
| 2018 | Zhu, B., Xie, G., Yuan, Y. | CART | 67.97 | – | – | 100 |
| 2018 | De Caigny, A., Coussement, K. | Logistic Regression | 88.12 | – | – | – |
| | | Decision Tree | 88 | – | – | – |
| 2017 | Amin, A., Al-Obeidat | Naïve Bayes | 57 | 54.14 | 61.30 | 57.50 |
| 2017 | Amin, A., Al-Obeidat, F., Shah | JIT | 55.3 | 57.62 | 40.05 | 47.26 |
| 2017 | Stripling, E., vanden Broucke, S. | Proflogit | 70 | – | – | – |
| 2017 | Zhu, B., Baesens, B., Backiel, A. | Sampling Method | 86.06 | – | – | – |
| 2017 | Mahajan, R., & Som, S. | K-Local Maximum Features Extraction Method | 80 | 50 | 22 | 94 |
| 2017 | Mishra, A., & Reddy | Bagging | 90.83 | – | 92.02 | – |
| | | Boosting | 90.32 | 97.91 | – | – |
| | | Random Forest | 91.67 | 83.11 | 98.89 | – |
| | | Decision Tree | 90.96 | – | – | – |
| 2016 | Tiwari, A., Sam, R., & Shaikh | Naïve Bayes | 70 | – | – | – |
| 2016 | Petkovski, A. J., Stojkoska | Naïve Bayes | 85.24 | – | 82 | – |
| | | C4.5 | 91.57 | 84 | – | – |
| | | K Nearest Neighbor | 90.59 | 85 | – | – |
| 2016 | Ahmed, A. A., & Maheswari | Firefly Algorithm | 86.38 | 90 | 80 | 93 |
| 2016 | Yu, R., An, X., Jin, B., Shi, J. | Particle Classification Optimization Based BP | 69.64 | 87.84 | 51.43 | 48.57 |
Figure 1: Proposed customer churn prediction model.
Clustering evaluation on churn prediction datasets.
| Technique | Accuracy | Recall | Precision | F-measure | RMSE | MSE | MAE |
|---|---|---|---|---|---|---|---|
| GitHub dataset | | | | | | | |
| X-means | 50.58 | 52.05 | 14.72 | 22.94 | 0.78 | 0.6084 | 0.15 |
| K-means | 50.58 | 52.05 | 14.72 | 22.94 | 0.78 | 0.6084 | 0.15 |
| K-med | 65.44 | 29.13 | 14.37 | 19.25 | 0.69 | 0.4761 | 0.11 |
| Random | 50.96 | 48.93 | 14.19 | 22.01 | 0.75 | 0.5625 | 0.14 |
| Bigml dataset | | | | | | | |
| X-means | 50.04 | 48.86 | 14.26 | 22.08 | 0.63 | 0.3969 | 0.0992 |
| K-means | 50.04 | 48.86 | 14.26 | 22.08 | 0.63 | 0.3969 | 0.0992 |
| K-med | 55.56 | 41.82 | 14.40 | 21.43 | 0.54 | 0.2916 | 0.0729 |
| Random | 50.94 | 49.06 | 14.57 | 22.47 | 0.61 | 0.3721 | 0.093 |
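For the clustering-only rows above, cluster assignments have to be mapped to churn/non-churn labels before accuracy, recall, precision, F-measure and the error metrics can be reported. The record does not state how the authors performed this mapping, so the sketch below uses a common majority-vote labelling purely to illustrate how the columns relate; note that the RMSE column is the square root of the MSE column (e.g., 0.78² = 0.6084).

```python
# Illustrative metric computation for cluster-derived churn labels (majority-vote mapping is an assumption).
import numpy as np
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, mean_squared_error, mean_absolute_error)

def evaluate_clusters(cluster_ids, y_true):
    # Map each cluster to the majority churn label of its members.
    y_pred = np.empty_like(y_true)
    for c in np.unique(cluster_ids):
        members = cluster_ids == c
        y_pred[members] = int(round(y_true[members].mean()))
    mse = mean_squared_error(y_true, y_pred)
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred, zero_division=0),
        "Precision": precision_score(y_true, y_pred, zero_division=0),
        "F-measure": f1_score(y_true, y_pred, zero_division=0),
        "MSE": mse,
        "RMSE": mse ** 0.5,   # RMSE is reported alongside MSE in the table above
        "MAE": mean_absolute_error(y_true, y_pred),
    }

# Usage (with labels from the earlier sketch): evaluate_clusters(kmeans.predict(X_test), y_test)
```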
Clustering with single classifier on churn prediction datasets.
| | GitHub dataset | | | | Bigml dataset | | | |
|---|---|---|---|---|---|---|---|---|
| Technique | Accuracy | Recall | Precision | F-measure | Accuracy | Recall | Precision | F-measure |
| K med+GBT | 94 | 62.23 | 93.02 | 74.57 | 92.25 | 51.96 | 90.61 | 66.05 |
| K med+DT | 86.4 | 4.10 | 93.54 | 7.85 | 86.76 | 8.90 | 97.72 | 16.31 |
| K med+RF | 87.6 | 14.14 | 88.49 | 24.39 | 91.53 | 65.42 | 73.31 | 69.14 |
| K med+KNN | 86.02 | 12.58 | 52.35 | 20.29 | 84.39 | 2.61 | 20.63 | 4.76 |
| K med+DL | 92.5 | 68.45 | 76.10 | 72.07 | 91.53 | 65.42 | 73.31 | 69.14 |
| K med+NB | 87.02 | 52.61 | 54.22 | 53.40 | 83.28 | 43.47 | 42.51 | 42.98 |
| K med+NB(K) | 83.56 | 42.43 | 41.95 | 42.19 | 91.14 | 52.79 | 79.19 | 63.35 |
| Average | 88.15 | 36.65 | 71.38 | 42.11 | 88.70 | 41.52 | 68.18 | 47.39 |
Clustering with voting ensemble for churn prediction datasets.
| | GitHub dataset | | | | Bigml dataset | | | |
|---|---|---|---|---|---|---|---|---|
| Technique | Accuracy | Recall | Precision | F-measure | Accuracy | Recall | Precision | F-measure |
| K-med+GBT+DT+RF | 89.06 | 24.04 | 94.44 | 38.33 | 87.51 | 14.28 | 97.18 | 24.90 |
| K-med+GBT+DT+KNN | 89.62 | 28.14 | 94.76 | 43.40 | 86.04 | 4.14 | 90.90 | 7.92 |
| K-med+GBT+DT+DL | 94.06 | 61.52 | 94.56 | 74.55 | 92.40 | 51.34 | 93.23 | 66.22 |
| K-med+GBT+DT+NB | 92.58 | 53.18 | 90.38 | 66.96 | 91.92 | 45.34 | 97.76 | 61.95 |
| K-med+GBT+DT+NB(K) | 91.3 | 44.97 | 87.36 | 59.38 | 89.46 | 32.09 | 87.07 | 46.89 |
| K-med+DT+RF+KNN | 88.14 | 16.54 | 97.5 | 28.29 | 86.01 | 3.72 | 94.73 | 7.17 |
| K-med+DT+RF+DL | 88.94 | 23.62 | 92.77 | 37.65 | 87.78 | 15.94 | 98.71 | 27.45 |
| K-med+DT+RF+NB | 88.48 | 20.79 | 90.18 | 33.79 | 87.57 | 14.90 | 96 | 25.80 |
| K-med+DT+RF+NB(K) | 88.44 | 19.66 | 93.28 | 32.47 | 87.30 | 13.45 | 92.85 | 23.50 |
| K-med+RF+KNN+DL | 88.42 | 19.80 | 92.10 | 32.59 | 87.37 | 14.83 | 96.05 | 25.70 |
| K-med+RF+KNN+NB | 88.24 | 18.59 | 91.60 | 30.82 | 87.42 | 14.69 | 91.02 | 25.31 |
| K-med+RF+KNN+NB(K) | 88.14 | 20.36 | 82.75 | 32.69 | 87.18 | 12.62 | 92.42 | 22.22 |
| K-med+NB+NB(K)+KNN | 88.42 | 34.79 | 67.58 | 45.93 | 88.44 | 32.71 | 72.47 | 45.07 |
| K-med+NB+NB(K)+DL | 90.36 | 54.87 | 70.41 | 61.68 | 89.85 | 55.69 | 68.44 | 61.41 |
| Average | 89.58 | 31.49 | 88.55 | 44.18 | 88.31 | 23.27 | 90.63 | 33.68 |
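Each row above pairs the K-medoids clusters with a majority-vote (voting) ensemble of three classifiers. A minimal scikit-learn sketch of such a hard-voting combination is shown below; the particular base learners (gradient-boosted trees, a decision tree, and an MLP standing in for the paper's "DL" learner) and their settings are assumptions for illustration only.

```python
# Hedged sketch of a hard-voting ensemble over three base classifiers (cf. the K-med+GBT+DT+DL rows).
from sklearn.ensemble import VotingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier   # stand-in for the paper's deep learning (DL) classifier

voting = VotingClassifier(
    estimators=[
        ("gbt", GradientBoostingClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("dl", MLPClassifier(max_iter=500, random_state=0)),
    ],
    voting="hard",                                  # majority vote over the predicted labels
)
# voting.fit(X_train_aug, y_train); voting.predict(X_test_aug)  # cluster-augmented features as sketched earlier
```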
Clustering with stacking ensemble on churn prediction datasets.
| | GitHub dataset | | | | Bigml dataset | | | |
|---|---|---|---|---|---|---|---|---|
| Technique | Accuracy | Recall | Precision | F-measure | Accuracy | Recall | Precision | F-measure |
| K-med+GBT+DT+RF | 87.7 | 13.86 | 94.23 | 24.16 | 88.83 | 25.46 | 91.11 | 39.80 |
| K-med+GBT+DT+KNN | 87.74 | 13.57 | 97.95 | 23.85 | 87.69 | 19.04 | 82.88 | 30.97 |
| K-med+GBT+DT+DL | 94.7 | 75.10 | 87.04 | 80.63 | 92.43 | 66.45 | 78.10 | 71.81 |
| K-med+GBT+DT+NB | 94.48 | 72.70 | 86.09 | 78.83 | 90.03 | 51.13 | 72.01 | 59.80 |
| K-med+GBT+DT+NB(K) | 93 | 59.97 | 86.35 | 70.78 | 88.23 | 39.95 | 65.42 | 49.61 |
| K-med+DT+RF+KNN | 87.02 | 8.48 | 96.77 | 15.60 | 87.57 | 15.52 | 92.59 | 26.59 |
| K-med+DT+RF+DL | 87.72 | 13.57 | 96.96 | 23.82 | 89.07 | 27.32 | 91.03 | 42.03 |
| K-med+DT+RF+NB | 87.64 | 13.71 | 92.38 | 23.89 | 88.32 | 24.63 | 82.63 | 37.95 |
| K-med+DT+RF+NB(K) | 87.52 | 13.29 | 89.52 | 23.15 | 88.17 | 19.95 | 91.42 | 32.76 |
| K-med+RF+KNN+DL | 88.22 | 19.09 | 88.81 | 31.43 | 87.75 | 16.56 | 94.11 | 28.16 |
| K-med+RF+KNN+NB | 88.08 | 17.96 | 88.81 | 29.88 | 87.54 | 15.52 | 91.46 | 26.54 |
| K-med+RF+KNN+NB(K) | 87.7 | 17.68 | 79.11 | 28.90 | 87.30 | 14.28 | 88.46 | 24.59 |
| K-med+NB+NB(K)+KNN | 88.14 | 36.06 | 64.39 | 46.23 | 87.24 | 36.02 | 60 | 45.01 |
| K-med+NB+NB(K)+DL | 90.12 | 58.41 | 67.37 | 62.57 | 89.31 | 55.90 | 65.37 | 60.26 |
| Average | 89.29 | 30.82 | 87.66 | 40.24 | 88.54 | 30.54 | 81.89 | 41.13 |
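Stacking differs from voting in that a meta-learner is trained on the base classifiers' out-of-fold predictions instead of simply counting votes. A hedged sketch follows; the logistic-regression meta-learner and five-fold scheme are assumptions, since the record does not state which meta-learner was used.

```python
# Hedged sketch of a stacking ensemble; the logistic-regression meta-learner is an assumption.
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

stacking = StackingClassifier(
    estimators=[
        ("gbt", GradientBoostingClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("dl", MLPClassifier(max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,                                  # out-of-fold base predictions feed the meta-learner
)
```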
Average accuracy comparison of different techniques.
| Technique | Accuracy | Recall | Precision | F-Measure |
|---|---|---|---|---|
| GitHub dataset | | | | |
| Clustering | 54.39 | 45.54 | 14.50 | 22 |
| Classification (Single Classifier) | 88.14 | 36.75 | 71.24 | 48.49 |
| Clustering with Single Classifier | 88.15 | 36.65 | 71.38 | 42.11 |
| Clustering with Voting Ensemble Classifier | 89.58 | 31.49 | 88.55 | 44.18 |
| Clustering with Bagging Ensemble Classifier | 89.61 | 31.07 | 89.82 | 43.78 |
| Clustering with Stacking Ensemble Classifier | 89.29 | 30.82 | 87.66 | 40.24 |
| Clustering with AdaBoost Ensemble Classifier | 90.66 | 55.05 | 71.70 | 59.49 |
| Bigml dataset | | | | |
| Clustering | 51.65 | 47.15 | 14.38 | 22.03 |
| Classification (Single Classifier) | 87.30 | 33.56 | 65.84 | 44.46 |
| Clustering with Single Classifier | 88.70 | 41.52 | 68.18 | 47.39 |
| Clustering with Voting Ensemble Classifier | 88.31 | 23.27 | 90.63 | 33.68 |
| Clustering with Bagging Ensemble Classifier | 88.37 | 23.58 | 90.78 | 34.11 |
| Clustering with Stacking Ensemble Classifier | 88.54 | 30.54 | 81.89 | 41.13 |
| Clustering with AdaBoost Ensemble Classifier | 89.87 | 52.05 | 74.42 | 56.97 |
Comparison with state of the art techniques with Bigml dataset.
| Models | Technique | Accuracy | Precision | Recall | F-Measure | Standard Dev |
|---|---|---|---|---|---|---|
| Existing models with Bigml dataset | JIT | 77.27 | 96.02 | 57.25 | 71.42 | NA |
| | UDT | 84.0 | 52.38 | 64.71 | 57.89 | NA |
| | Multilayer Perceptron | 89.29 | 86.8 | 89.5 | 88.8 | NA |
| | Random Forest | 89.59 | 89.1 | 89.6 | 87.6 | NA |
| | Bagging + Deep Learning | 91.51 | 90.67 | 72.94 | 80.84 | NA |
| | EWD | 88 | 86.01 | 78 | 79.01 | NA |
| | NB+LR | 84.51 | 58.18 | 10.92 | 18.39 | NA |
| | Bagging | 88.3 | 86.8 | 88.3 | 86.4 | NA |
| | AdaBoost | 86.8 | 84.6 | 86.8 | 84.8 | NA |
| Deep learning models | Long Short Term Memory (LSTM) | 85.1 | 77.5 | 76.5 | 76.9 | NA |
| | Gated Recurrent Unit (GRU) | 88.6 | 83.9 | 81.1 | 82.4 | NA |
| | Convolutional Neural Network (CNN) | 87.9 | 85.6 | 82.7 | 84.12 | NA |
| Proposed models with Bigml dataset | K-med+GBT+DT+DL+Voting | 92.40 | 93.23 | 51.34 | 66.22 | 0.14 |
| | K-med+GBT+DT+DL+Bagging | 92.41 | 93.25 | 51.55 | 66.4 | 0.12 |
| | K-med+GBT+DT+DL+Stacking | 92.40 | 78.04 | 66.25 | 71.66 | 0.15 |
| | K-med+GBT+DT+DL+Adaboost | 92.43 | 78.10 | 66.45 | 71.81 | 0.10 |
Comparison with state of the art techniques for GitHub dataset.
| Models | Technique | Accuracy | Precision | Recall | F-Measure | Standard Dev |
|---|---|---|---|---|---|---|
| Existing models with GitHub dataset | FLIC/FDT | 81.5 | – | – | – | NA |
| | K-Means+DT | 84.26 | 95.68 | – | 90.00 | NA |
| | Bagging + DL | 50 | 25 | 50 | 33.33 | NA |
| | AdaBoost + MLP | 66.3 | 66.64 | 66.28 | 66.46 | NA |
| | Majority Voting DL+NN+ML | 66.69 | 67.52 | 66.69 | 67.1 | NA |
| | Bagging + MLP | 67.57 | 71.54 | 67.57 | 69.5 | NA |
| | Bagging | 80.8 | 81.88 | 75.28 | 78.44 | NA |
| | AdaBoost | 73.9 | 70.46 | 73.74 | 72.06 | NA |
| Deep learning models | Long Short Term Memory (LSTM) | 90.04 | 84.8 | 79.7 | 82.1 | NA |
| | Gated Recurrent Unit (GRU) | 90.1 | 85.9 | 83.4 | 84.6 | NA |
| | Convolutional Neural Network (CNN) | 89.8 | 83.1 | 82.2 | 82.6 | NA |
| Proposed models with GitHub dataset | K-med+GBT+DT+DL+Voting | 94.06 | 94.56 | 61.52 | 74.55 | 0.14 |
| | K-med+GBT+DT+DL+Bagging | 94.12 | 95.78 | 61.10 | 74.61 | 0.13 |
| | K-med+GBT+DT+DL+Stacking | 94.65 | 87.33 | 73.12 | 79.59 | 0.11 |
| | K-med+GBT+DT+DL+Adaboost | 94.7 | 87.04 | 75.10 | 80.63 | 0.12 |
Clustering with bagging ensemble on churn prediction datasets.
| | GitHub dataset | | | | Bigml dataset | | | |
|---|---|---|---|---|---|---|---|---|
| Technique | Accuracy | Recall | Precision | F-measure | Accuracy | Recall | Precision | F-measure |
| K-med+GBT+DT+RF | 89.12 | 24.46 | 94.53 | 38.87 | 87.54 | 14.49 | 97.22 | 25.22 |
| K-med+GBT+DT+kNN | 89.7 | 28.71 | 94.85 | 44.08 | 86.10 | 4.55 | 91.66 | 8.67 |
| K-med+GBT+DT+DL | 94.12 | 61.10 | 95.78 | 74.61 | 92.41 | 51.55 | 93.25 | 66.4 |
| K-med+GBT+DT+NB | 92.64 | 53.60 | 90.45 | 67.31 | 91.98 | 45.75 | 97.78 | 62.34 |
| K-med+GBT+DT+NB(K) | 91.4 | 45.68 | 87.53 | 60.03 | 89.52 | 32.50 | 87.22 | 47.36 |
| K-med+DT+RF+kNN | 88.1 | 16.26 | 97.45 | 27.87 | 86.04 | 3.93 | 95 | 7.55 |
| K-med+DT+RF+DL | 88.98 | 23.90 | 92.85 | 38.02 | 87.81 | 16.14 | 98.73 | 27.75 |
| K-med+DT+RF+NB | 88.5 | 20.79 | 90.74 | 33.83 | 87.66 | 15.52 | 96.15 | 26.73 |
| K-med+DT+RF+NB(K) | 88.42 | 19.23 | 94.44 | 31.96 | 87.33 | 13.66 | 92.95 | 23.82 |
| K-med+RF+kNN+DL | 88.32 | 18.52 | 94.24 | 30.96 | 87.63 | 15.32 | 96.10 | 26.42 |
| K-med+RF+kNN+NB | 88.04 | 16.83 | 92.24 | 28.46 | 87.45 | 14.90 | 91.13 | 25.62 |
| K-med+RF+kNN+NB(K) | 88 | 17.11 | 89.62 | 28.74 | 87.24 | 13.04 | 92.64 | 22.86 |
| K-med+NB+NB(K)+kNN | 88.66 | 33.94 | 70.58 | 45.84 | 88.47 | 32.91 | 72.60 | 45.29 |
| K-med+NB+NB(K)+DL | 90.62 | 54.87 | 72.11 | 62.32 | 89.88 | 55.90 | 68.52 | 61.57 |
| Average | 89.61 | 31.07 | 89.82 | 43.78 | 88.37 | 23.58 | 90.78 | 34.11 |
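Bagging trains each base learner on a bootstrap resample of the training data and aggregates their predictions, which mainly reduces variance. A short sketch is given below; the decision-tree base learner and the number of estimators are illustrative assumptions rather than settings reported in this record.

```python
# Hedged sketch of a bagging ensemble; 10 bootstrapped decision trees are an illustrative choice.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bagging = BaggingClassifier(
    DecisionTreeClassifier(random_state=0),  # base learner, passed positionally for compatibility across versions
    n_estimators=10,
    random_state=0,
)
# bagging.fit(X_train_aug, y_train) trains on the cluster-augmented features as in the earlier sketch.
```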
Clustering with AdaBoost ensemble on churn prediction datasets.
| | GitHub dataset | | | | Bigml dataset | | | |
|---|---|---|---|---|---|---|---|---|
| Technique | Accuracy | Recall | Precision | F-measure | Accuracy | Recall | Precision | F-measure |
| K-med+GBT+DT+RF | 94.34 | 76.09 | 82.51 | 79.17 | 92.39 | 67.49 | 77.80 | 72.28 |
| K-med+GBT+DT+KNN | 94.56 | 73.26 | 86.18 | 79.20 | 92.31 | 66.04 | 77.61 | 71.36 |
| K-med+GBT+DT+DL | 94.65 | 73.12 | 87.33 | 79.59 | 92.40 | 66.25 | 78.04 | 71.66 |
| K-med+GBT+DT+NB | 94.5 | 74.82 | 84.50 | 79.36 | 89.97 | 72.04 | 63.61 | 67.57 |
| K-med+GBT+DT+NB(K) | 94.5 | 76.23 | 83.43 | 79.67 | 91.47 | 66.66 | 72.35 | 69.39 |
| K-med+DT+RF+KNN | 86.02 | 12.58 | 52.35 | 20.29 | 87.12 | 13.66 | 84.61 | 23.52 |
| K-med+DT+RF+DL | 92.38 | 69.02 | 75.07 | 71.92 | 91.32 | 68.73 | 70.63 | 69.67 |
| K-med+DT+RF+NB | 86.86 | 49.50 | 53.84 | 51.58 | 91.50 | 44.72 | 93.10 | 60.41 |
| K-med+DT+RF+NB(K) | 87.72 | 14.56 | 91.15 | 25.12 | 87.57 | 16.14 | 89.65 | 27.36 |
| K-med+RF+KNN+DL | 92.2 | 67.04 | 75.11 | 70.85 | 91.05 | 64.18 | 71.26 | 67.53 |
| K-med+RF+KNN+NB | 86.02 | 12.58 | 52.35 | 20.29 | 86.28 | 50.51 | 52.81 | 51.64 |
| K-med+RF+KNN+NB(K) | 86.36 | 49.08 | 51.86 | 50.43 | 87.21 | 14.07 | 86.07 | 24.19 |
| K-med+NB+NB(K)+KNN | 85.94 | 51.06 | 50.27 | 50.66 | 90.63 | 62.52 | 69.74 | 65.93 |
| K-med+NB+NB(K)+DL | 92.96 | 69.73 | 78.12 | 73.69 | 86.82 | 55.48 | 54.47 | 54.97 |
| Average | 90.66 | 55.05 | 71.70 | 59.49 | 89.87 | 52.05 | 74.42 | 56.97 |
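AdaBoost fits its base learners sequentially, re-weighting the training samples so that later learners concentrate on the customers earlier ones misclassified; this is consistent with the markedly higher average recall reported for the AdaBoost configuration in the summary table above. A minimal sketch follows; the decision-stump base learner and 50 boosting rounds are scikit-learn defaults, not settings reported in this record.

```python
# Hedged sketch of an AdaBoost ensemble; the defaults (decision stumps, 50 rounds) are assumptions.
from sklearn.ensemble import AdaBoostClassifier

adaboost = AdaBoostClassifier(n_estimators=50, random_state=0)
# adaboost.fit(X_train_aug, y_train) would train on the cluster-augmented features as in the earlier sketches.
```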