Małgorzata Przybyła-Kasperek, Kwabena Frimpong Marfo.
Abstract
The article concerns the problem of classification based on independent data sets (local decision tables). The aim of the paper is to propose a classification model for dispersed data using a modified k-nearest neighbors algorithm and a neural network. A neural network, more specifically a multilayer perceptron, is used to combine the prediction results obtained from the local tables. The prediction results are represented at the measurement level and are generated using a modified k-nearest neighbors algorithm. The task of the neural network is to combine these results and provide a common prediction. Various structures of the neural network (different numbers of neurons in the hidden layer) are studied, and the results are compared with those generated by other fusion methods: majority voting, the Borda count method, the sum rule, a method based on decision templates and a method based on the theory of evidence. Based on the obtained results, it was found that the neural network always generates unambiguous decisions, which is a great advantage, as most of the other fusion methods generate ties. Moreover, when only unambiguous results are considered, the neural network gives much better results than the other fusion methods. If ambiguity is allowed, some fusion methods are slightly better, but only because they may generate several decisions for a single test object.
Keywords: dispersed data; fusion method; independent data sources; k-nearest neighbors algorithm; neural network
Year: 2021 PMID: 34945874 PMCID: PMC8700412 DOI: 10.3390/e23121568
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
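The pipeline described in the abstract (one k-NN per local table producing a measurement-level class-support vector, fused by an MLP) can be sketched as follows. This is a minimal illustration, not the authors' implementation: standard scikit-learn k-NN stands in for the paper's modified k-NN, the data is synthetic, and all names (`support_vectors`, `fusion`, the attribute split) are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_classes, n_local = 4, 3

# Synthetic stand-in: 200 objects, 18 conditional attributes, 4 decision classes.
X = rng.normal(size=(200, 18))
y = rng.integers(0, n_classes, size=200)
attr_splits = np.array_split(np.arange(18), n_local)  # three "local tables"

X_train, y_train, X_test = X[:150], y[:150], X[150:]

# Step 1: one k-NN per local table; its predict_proba output plays the role of
# the measurement-level class-support vector mentioned in the abstract.
local_models = [KNeighborsClassifier(n_neighbors=5).fit(X_train[:, idx], y_train)
                for idx in attr_splits]

def support_vectors(rows):
    """Concatenate the class-support vectors from all local classifiers."""
    return np.hstack([m.predict_proba(rows[:, idx])
                      for m, idx in zip(local_models, attr_splits)])

# Step 2: an MLP fuses the supports; its input size is n_local * n_classes and
# the hidden layer is a multiple of that input size, as studied in the paper.
fusion = MLPClassifier(hidden_layer_sizes=(4 * n_local * n_classes,),
                       max_iter=2000, random_state=0)
fusion.fit(support_vectors(X_train), y_train)
pred = fusion.predict(support_vectors(X_test))
```

Because the MLP outputs a single argmax class per object, this fusion step is unambiguous by construction, which matches the abstract's observation that the neural network never produces ties.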
Figure 1. Stages in the dispersed classification model.
Data set characteristics.
| Data Set | # Training Objects | # Test Objects | # Conditional Attributes | # Decision Classes |
|---|---|---|---|---|
| Vehicle Silhouettes | 592 | 254 | 18 | 4 |
| Lymphography | 104 | 44 | 18 | 4 |
| Soybean | 307 | 376 | 35 | 19 |
| Artificial Data | 299 | 100 | 30 | 7 |
Results of the classification error e and the average number of generated decisions for the dispersed system with a neural network; each cell gives e/number of decisions. Since the network always returns a single decision, the classification ambiguity error equals e. The designation #Input is used for the number of neurons in the input layer.
| Data Set | No. of Local Tables | 1 × #Input | 3 × #Input | 4 × #Input | 4.25 × #Input | 4.5 × #Input | 4.75 × #Input | 5 × #Input |
|---|---|---|---|---|---|---|---|---|
| Lymphography | 3 | 0.286/1 | 0.281/1 | 0.259/1 | 0.251/1 | 0.288/1 | 0.273/1 | |
| | 5 | 0.251/1 | 0.258/1 | 0.264/1 | 0.253/1 | 0.258/1 | 0.259/1 | |
| | 7 | 0.326/1 | 0.328/1 | 0.316/1 | 0.314/1 | 0.321/1 | 0.339/1 | |
| | 9 | 0.278/1 | 0.256/1 | 0.256/1 | 0.271/1 | | | |
| | 11 | 0.278/1 | 0.276/1 | 0.269/1 | 0.276/1 | 0.271/1 | 0.294/1 | |
| Vehicle | 3 | 0.365/1 | 0.296/1 | 0.279/1 | 0.283/1 | 0.284/1 | 0.096/1 | 0.089/1 |
| | 5 | 0.331/1 | 0.284/1 | 0.294/1 | 0.296/1 | 0.291/1 | | |
| | 7 | 0.326/1 | 0.299/1 | 0.291/1 | 0.304/1 | 0.284/1 | 0.298/1 | |
| | 9 | 0.298/1 | 0.284/1 | 0.293/1 | 0.307/1 | 0.314/1 | 0.295/1 | |
| | 11 | 0.327/1 | 0.303/1 | 0.293/1 | 0.284/1 | 0.315/1 | 0.302/1 | |
| Soybean | 3 | 0.093/1 | 0.092/1 | 0.091/1 | 0.085/1 | 0.096/1 | 0.089/1 | |
| | 5 | 0.088/1 | 0.094/1 | 0.091/1 | 0.099/1 | 0.093/1 | 0.093/1 | |
| | 7 | 0.099/1 | 0.093/1 | 0.090/1 | 0.089/1 | 0.090/1 | 0.093/1 | |
| | 9 | 0.078/1 | 0.081/1 | 0.081/1 | 0.075/1 | 0.079/1 | 0.084/1 | |
| | 11 | 0.068/1 | 0.069/1 | 0.068/1 | 0.071/1 | 0.074/1 | 0.077/1 | |
| Artificial Data | 3 | 0.393/1 | 0.190/1 | 0.113/1 | 0.103/1 | 0.106/1 | 0.106/1 | |
| | 5 | 0.193/1 | 0.056/1 | 0.060/1 | 0.026/1 | 0.033/1 | 0.233/1 | |
| | 7 | 0.166/1 | 0.030/1 | 0.033/1 | 0.016/1 | 0.233/1 | 0.233/1 | |
| | 9 | 0.143/1 | 0.036/1 | 0.036/1 | 0.033/1 | 0.036/1 | 0.033/1 | |
| | 11 | 0.130/1 | 0.050/1 | 0.043/1 | 0.050/1 | 0.043/1 | 0.040/1 | |
| average | | 0.221 | 0.183 | 0.172 | 0.171 | 0.173 | 0.187 | 0.197 |
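The quality measures reported in these tables can be sketched as follows, assuming the usual definitions for set-valued (possibly tied) predictions, since the record itself does not spell them out: an object counts as correct for the error e when the true class is among the generated decisions, and as correct for the ambiguity error only when exactly the true class is generated. The function name `evaluate` is illustrative, not from the paper.

```python
def evaluate(decision_sets, true_labels):
    """Return (classification error e, ambiguity error, avg. decisions)."""
    n = len(true_labels)
    # e: wrong only if the true class is absent from the generated set.
    e = sum(t not in s for s, t in zip(decision_sets, true_labels)) / n
    # Ambiguity error: wrong unless exactly the single true class is generated.
    e_amb = sum(s != {t} for s, t in zip(decision_sets, true_labels)) / n
    avg_decisions = sum(len(s) for s in decision_sets) / n
    return e, e_amb, avg_decisions

# A tie between two classes hurts the ambiguity error but not e:
scores = evaluate([{0}, {1, 2}, {3}], [0, 1, 3])  # e=0.0, e_amb=1/3, avg=4/3
```

This is why the unambiguous neural network has identical e and ambiguity error, while tie-prone fusion methods show a low e together with a higher ambiguity error and an average number of decisions above 1.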
Figure 2. Box-plot chart (median, first quartile Q1, third quartile Q3) of the classification error e for the neural network with different numbers of neurons in the hidden layer.
Results of the classification error e, the classification ambiguity error and the average number of generated decisions (given in each cell as e/ambiguity error/number of decisions) for the fusion methods: the sum rule, the Borda count, majority voting, the method based on decision templates and the method based on the theory of evidence.
| Data Set | No. of Local Tables | Sum Rule | Borda Count | Majority Voting | Decision Templates | Theory of Evidence |
|---|---|---|---|---|---|---|
| Lymphography | 3 | 0.250/0.250/1 | 0.159/0.386/1.227 | 0.136/0.409/1.273 | 0.159/0.159/1 | 0.273/0.273/1 |
| | 5 | 0.273/0.318/1.045 | 0.205/0.318/1.114 | 0.205/0.318/1.114 | 0.318/0.318/1 | 0.318/0.318/1 |
| | 7 | 0.273/0.341/1.068 | 0.205/0.432/1.227 | 0.205/0.409/1.250 | 0.364/0.364/1 | 0.364/0.364/1 |
| | 9 | 0.205/0.364/1.159 | 0.182/0.409/1.227 | 0.182/0.409/1.227 | 0.409/0.409/1 | 0.364/0.364/1 |
| | 11 | 0.273/0.545/1.273 | 0.205/0.568/1.364 | 0.205/0.568/1.364 | 0.205/0.205/1 | 0.273/0.273/1 |
| Vehicle | 3 | 0.260/0.260/1 | 0.256/0.276/1.035 | 0.232/0.307/1.165 | 0.315/0.315/1 | 0.366/0.366/1 |
| | 5 | 0.299/0.299/1 | 0.280/0.319/1.067 | 0.264/0.362/1.150 | 0.417/0.417/1 | 0.472/0.472/1 |
| | 7 | 0.276/0.276/1 | 0.291/0.331/1.055 | 0.283/0.331/1.079 | 0.394/0.394/1 | 0.398/0.398/1 |
| | 9 | 0.354/0.354/1 | 0.339/0.390/1.063 | 0.299/0.402/1.146 | 0.402/0.402/1 | 0.472/0.472/1 |
| | 11 | 0.315/0.315/1 | 0.358/0.417/1.067 | 0.311/0.429/1.161 | 0.535/0.535/1 | 0.567/0.567/1 |
| Soybean | 3 | 0.117/0.170/1.085 | 0.120/0.189/1.106 | 0.082/0.215/1.247 | 0.314/0.314/1 | 0.303/0.303/1 |
| | 5 | 0.101/0.152/1.077 | 0.117/0.191/1.106 | 0.082/0.184/1.199 | 0.343/0.343/1 | 0.327/0.327/1 |
| | 7 | 0.088/0.202/1.149 | 0.109/0.253/1.189 | 0.072/0.234/1.295 | 0.327/0.327/1 | 0.237/0.237/1 |
| | 9 | 0.072/0.160/1.106 | 0.104/0.191/1.101 | 0.061/0.168/1.146 | 0.242/0.242/1 | 0.221/0.221/1 |
| | 11 | 0.088/0.213/1.157 | 0.088/0.229/1.184 | 0.090/0.226/1.181 | 0.215/0.215/1 | 0.160/0.160/1 |
| Artificial Data | 3 | 0.050/0.050/1 | 0.060/0.060/1.020 | 0.030/0.070/1.080 | 0.060/0.060/1 | 0.060/0.060/1 |
| | 5 | 0.060/0.060/1 | 0.050/0.060/1.030 | 0.060/0.070/1.040 | 0.060/0.060/1 | 0.060/0.060/1 |
| | 7 | 0.060/0.060/1 | 0.060/0.060/1 | 0.040/0.090/1.080 | 0.080/0.080/1 | 0.070/0.070/1 |
| | 9 | 0.070/0.070/1 | 0.070/0.080/1.010 | 0.060/0.100/1.050 | 0.100/0.100/1 | 0.090/0.090/1 |
| | 11 | 0.080/0.080/1 | 0.060/0.090/1.030 | 0.140/0.180/1.100 | 0.130/0.130/1 | 0.110/0.110/1 |
Classification ambiguity error for the neural network and the other fusion methods.
| Data Set | No. of Local Tables | Neural Network | Sum Rule | Borda Count | Majority Voting | Decision Templates | Theory of Evidence |
|---|---|---|---|---|---|---|---|
| Lymphography | 3 | 0.246 | 0.250 | 0.386 | 0.409 | 0.159 | 0.273 |
| | 5 | | 0.318 | 0.318 | 0.318 | 0.318 | 0.318 |
| | 7 | | 0.341 | 0.432 | 0.409 | 0.364 | 0.364 |
| | 9 | | 0.364 | 0.409 | 0.409 | 0.409 | 0.364 |
| | 11 | 0.244 | 0.545 | 0.568 | 0.568 | 0.205 | 0.273 |
| Vehicle | 3 | 0.274 | 0.260 | 0.276 | 0.307 | 0.315 | 0.366 |
| | 5 | | 0.299 | 0.319 | 0.362 | 0.417 | 0.472 |
| | 7 | 0.281 | 0.276 | 0.331 | 0.331 | 0.394 | 0.398 |
| | 9 | | 0.354 | 0.390 | 0.402 | 0.402 | 0.472 |
| | 11 | | 0.315 | 0.417 | 0.429 | 0.535 | 0.567 |
| Soybean | 3 | | 0.170 | 0.189 | 0.215 | 0.314 | 0.303 |
| | 5 | | 0.152 | 0.191 | 0.184 | 0.343 | 0.327 |
| | 7 | | 0.202 | 0.253 | 0.234 | 0.327 | 0.237 |
| | 9 | | 0.160 | 0.191 | 0.168 | 0.242 | 0.221 |
| | 11 | | 0.213 | 0.229 | 0.226 | 0.215 | 0.160 |
| Artificial Data | 3 | 0.090 | 0.050 | 0.060 | 0.070 | 0.060 | 0.060 |
| | 5 | | 0.060 | 0.060 | 0.070 | 0.060 | 0.060 |
| | 7 | | 0.060 | 0.060 | 0.090 | 0.080 | 0.070 |
| | 9 | | 0.070 | 0.080 | 0.100 | 0.100 | 0.090 |
| | 11 | | 0.080 | 0.090 | 0.180 | 0.130 | 0.110 |
| average | | 0.160 | 0.227 | 0.262 | 0.274 | 0.269 | 0.275 |
Figure 3. Box-plot chart (median, first quartile Q1, third quartile Q3) of the classification ambiguity error for the neural network and other fusion methods.