| Literature DB >> 24351632 |
Jing Zhu, Yupin Luo, Jianjun Zhou.
Abstract
In target classification based on belief function theory, sensor reliability evaluation involves two basic issues: a reasonable dissimilarity measure among evidences, and adaptive combination of static and dynamic discounting. A solution to both issues is proposed here. First, an improved dissimilarity measure based on a dualistic exponential function is designed. The static reliability is assessed from a training set using the local decision of each sensor and the dissimilarity measure among evidences. The dynamic reliability factors are obtained from each test target using the dissimilarity measure between the output of each sensor and the consensus. Second, an adaptive method for combining static and dynamic discounting is introduced. A Parzen window is adopted to estimate how well a sensor's current performance matches its static performance. Through fuzzy theory, the fusion system can self-learn and self-adapt as sensor performance changes. Experiments on real databases demonstrate that the proposed scheme outperforms other methods in target classification under different target conditions.
Year: 2013 PMID: 24351632 PMCID: PMC3892852 DOI: 10.3390/s131217193
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Analysis of different dissimilarity measure methods.

| Type | Method | Advantages | Disadvantages |
|---|---|---|---|
| BBM | Shafer [ ] | Can measure the dissimilarity of more than three pieces of evidence; high implementation efficiency. | Its results are often counterintuitive, e.g., the one-vote-veto problem. |
| BBM | Jia [ ] | Includes both direct dissimilarity and potential conflict. | Under different evidence conditions, the dissimilarity measured between evidences is relatively large. |
| Distance | Wang [ ] | Its form is intuitive and simple, with high execution efficiency. | The measure is coarse: it does not consider the compatible parts of focal elements. |
| Distance | Jousselme [ ] | Describes the dissimilarity between evidences and is supported by the distance axioms. | Computation is heavy when the frame of discernment has many elements, and the results are sometimes unreasonable. |
| Complex | Liu [ ] | Includes both BBM-type and distance-type dissimilarity measures. | The dualistic measure makes threshold determination complex, with no uniform criterion. |
| Complex | Guo [ ] | Overcomes the operational complexity of the dualistic dissimilarity measure. | The dissimilarity results are sometimes illogical. |
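As a point of reference for the table above, here is a minimal Python sketch (not from the paper) of two of the standard measures it compares: Jousselme's evidence distance and the classical Dempster–Shafer conflict coefficient k. The paper's own dualistic-exponential measure is not reproduced in this record. Mass functions are represented as dicts mapping `frozenset` focal elements to masses.

```python
def jaccard(a, b):
    """Jaccard index |A∩B| / |A∪B| between two focal elements (frozensets)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def jousselme_distance(m1, m2):
    """Jousselme distance d = sqrt(0.5 * (m1-m2)^T D (m1-m2)),
    where D(A, B) is the Jaccard index of the focal elements."""
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    diff = [m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals]
    acc = 0.0
    for i, A in enumerate(focals):
        for j, B in enumerate(focals):
            acc += diff[i] * jaccard(A, B) * diff[j]
    return (0.5 * acc) ** 0.5

def shafer_conflict(m1, m2):
    """Shafer's conflict coefficient k = sum of m1(A)*m2(B) over A∩B = ∅."""
    return sum(v1 * v2
               for A, v1 in m1.items()
               for B, v2 in m2.items()
               if not (A & B))
```

For two strongly disagreeing singleton-focused masses, e.g. m1({a}) = 0.9, m1({b}) = 0.1 versus the reverse, both measures report high dissimilarity (k = 0.82, d = 0.8).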
Different consistency measure functions.
Contrast results of different dissimilarity measurement methods.

|   |   |   |   |   |
|---|---|---|---|---|
| <0, 0.6067> | 0.7430 | 0.1654 | 0.6429 | 0.5293 |
| <0, 0.0233> | 0.0283 | 0.0114 | 0.0460 | 0.0279 |
Contrast of different dissimilarity measurement methods.

|   |   |   |   |   |   |
|---|---|---|---|---|---|
| The first pair | <0.9075, 0.85> | 0.85 | 0.8296 | 0.93 | 0.878 |
| The second pair | <0.0975, 0.55> | 0.6946 | 0.2059 | 0.6675 | 0.5306 |
| The third pair | <0, 0.7> | 0.8062 | 0.1738 | 0.8 | 0.6543 |
Comparisons of different conflict measurement methods.

|   |   |   |   |   |   |
|---|---|---|---|---|---|
| A = {1} | <0.05, 0.605> | 0.78581 | 0.18801 | 0.825 | 0.63 |
| A = {1,2} | <0.05, 0.42667> | 0.68666 | 0.16353 | 0.825 | 0.4458 |
| A = {1,2,3} | <0.05, 0.24833> | 0.57053 | 0.12233 | 0.825 | 0.285 |
| A = {1,…,4} | <0.05, 0.195> | 0.42367 | 0.10597 | 0.825 | 0.2032 |
| A = {1,…,5} | <0.05, 0.125> | 0.13229 | 0.081178 | 0.825 | 0.1237 |
| A = {1,…,6} | <0.05, 0.25833> | 0.38837 | 0.12517 | 0.85167 | 0.2266 |
| A = {1,…,7} | <0.05, 0.35357> | 0.50292 | 0.14895 | 0.87071 | 0.304 |
| A = {1,…,8} | <0.05, 0.425> | 0.57053 | 0.16323 | 0.885 | 0.3648 |
| A = {1,…,9} | <0.05, 0.48056> | 0.61874 | 0.17247 | 0.89611 | 0.4141 |
| A = {1,…,10} | <0.05, 0.525> | 0.65536 | 0.17879 | 0.905 | 0.455 |
| A = {1,…,11} | <0.05, 0.56136> | 0.6844 | 0.18331 | 0.91227 | 0.4896 |
| A = {1,…,12} | <0.05, 0.59167> | 0.70817 | 0.18665 | 0.91833 | 0.5192 |
| A = {1,…,13} | <0.05, 0.61731> | 0.72809 | 0.1892 | 0.92346 | 0.545 |
| A = {1,…,14} | <0.05, 0.63929> | 0.74513 | 0.19118 | 0.92786 | 0.5677 |
| A = {1,…,15} | <0.05, 0.65833> | 0.75993 | 0.19276 | 0.93167 | 0.5877 |
| A = {1,…,16} | <0.05, 0.675> | 0.77298 | 0.19403 | 0.935 | 0.6056 |
| A = {1,…,17} | <0.05, 0.68971> | 0.78461 | 0.19508 | 0.93794 | 0.6216 |
| A = {1,…,18} | <0.05, 0.70278> | 0.79509 | 0.19595 | 0.94056 | 0.6361 |
| A = {1,…,19} | <0.05, 0.71447> | 0.80461 | 0.19668 | 0.94289 | 0.6493 |
| A = {1,…,20} | <0.05, 0.725> | 0.81333 | 0.1973 | 0.945 | 0.6613 |
Figure 1.Comparison of different methods when subset A changes.
Figure 2.Implementation framework of Guo's combining method.
Figure 3.Implementation framework of our combining method.
Figure 4.The estimation of overall probability density function p̂.
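Figure 4 concerns the Parzen-window estimate p̂ used to judge how well a sensor's current performance matches its static performance. Below is a minimal 1-D Gaussian-kernel sketch of Parzen-window density estimation; the kernel and bandwidth h here are illustrative assumptions, since the paper's actual choices are not given in this record.

```python
import math

def parzen_pdf(samples, h):
    """Return a Parzen-window estimate p̂ of a 1-D density, using a
    Gaussian kernel of bandwidth h:
        p̂(x) = (1 / (n*h)) * sum_i K((x - x_i) / h).
    """
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def p_hat(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                          for xi in samples)
    return p_hat
```

The returned callable can then score how typical a newly observed performance value is relative to the stored (static) sample: high p̂ means the current behavior matches the training-time behavior, low p̂ means it has drifted.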
Figure 5.Relational graph of fuzzy variables and static matching degree.
Description of the datasets [35] used in the experiments.

| Dataset | Classes | Attributes |   |   |   |
|---|---|---|---|---|---|
| Yeast | 10 | 8 | 495 | 495 | 494 |
| Glass | 6 | 9 | 72 | 71 | 71 |
| Segment | 7 | 19 | 770 | 770 | 770 |
| Waveform | 3 | 21 | 1,667 | 1,667 | 1,666 |
| Pendigits | 10 | 16 | 3,664 | 3,664 | 3,664 |
Description of generating classifiers.

| Dataset | Attributes | Classifier 1 | Classifier 2 | Classifier 3 |   |
|---|---|---|---|---|---|
| Yeast | 8 | 1→2 | 3→5 | 6→8 | 15 |
| Glass | 9 | 1→3 | 4→6 | 7→9 | 12 |
| Segment | 19 | 1→10 | 11→13 | 14→19 | 2 |
| Waveform | 21 | 1→8 | 9→13 | 14→21 | 7 |
| Pendigits | 16 | 1→5 | 6→10 | 11→16 | 3 |
Correct classification rates of classifiers and two methods.

| Method | Yeast | Glass | Segment | Waveform | Pendigits |
|---|---|---|---|---|---|
| Classifier 1 | 0.4615 | 0.6197 | 0.8558 | 0.6351 | 0.7268 |
| Classifier 2 | 0.3664 | 0.6197 | 0.8636 | 0.7293 | 0.8401 |
| Classifier 3 | 0.3968 | 0.6056 | 0.8896 | 0.6447 | 0.8352 |
| Majority Vote | 0.3725 | 0.7042 | 0.9104 | 0.7401 | 0.8799 |
| Dempster (No Discounting) | 0.4008 | 0.7183 | 0.9221 | 0.7923 | 0.9427 |
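The "Dempster (No Discounting)" baseline fuses the classifiers' outputs directly with Dempster's rule of combination. A minimal sketch of the rule for two mass functions follows (apply it pairwise and iteratively for three or more sources); mass functions are dicts mapping `frozenset` focal elements to masses.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame. Conflicting mass (pairs with A∩B = ∅) is discarded and
    the remainder renormalized by 1 - k."""
    combined = {}
    k = 0.0  # total conflicting mass
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                k += v1 * v2
    if k >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {A: v / (1.0 - k) for A, v in combined.items()}
```

For example, combining m1({a}) = 0.6, m1(Θ) = 0.4 with m2({b}) = 0.5, m2(Θ) = 0.5 over Θ = {a, b} yields k = 0.3 and the normalized masses m({a}) = 3/7, m({b}) = 2/7, m(Θ) = 2/7.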
Correct classification rates of five methods using static discounting factor.

| Method | Yeast | Glass | Segment | Waveform | Pendigits |
|---|---|---|---|---|---|
| Elouedi [ ] | 0.5304 | 0.7183 | 0.9351 | 0.7791 | 0.9539 |
| Elouedi ( ) | 0.4615 | 0.7183 | 0.9286 | 0.7809 | 0.9419 |
| Yang [ ] | 0.5385 | 0.7183 | 0.9390 | 0.7815 | 0.9525 |
| Guo [ ] | 0.4595 | 0.7183 | 0.9260 | 0.7809 | 0.9421 |
| Our static method |   |   |   | 0.7809 | 0.9531 |
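All of the static and dynamic discounting methods compared here build on Shafer's classical discounting operation, which scales a source's masses by a reliability factor α and transfers the distrusted share of belief to the whole frame Θ. A minimal sketch (the frame must be supplied explicitly; how α itself is estimated is the methods' point of difference and is not reproduced here):

```python
def discount(m, alpha, frame):
    """Shafer discounting with reliability factor alpha in [0, 1]:
        m^α(A) = α·m(A)            for A ≠ Θ,
        m^α(Θ) = 1 - α + α·m(Θ),
    so a fully reliable source (α = 1) is unchanged and a fully
    unreliable one (α = 0) becomes total ignorance."""
    theta = frozenset(frame)
    out = {A: alpha * v for A, v in m.items() if A != theta}
    out[theta] = 1.0 - alpha + alpha * m.get(theta, 0.0)
    return out
```

For instance, discounting m({a}) = 0.8, m(Θ) = 0.2 over Θ = {a, b} with α = 0.9 gives m({a}) = 0.72 and m(Θ) = 0.28.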
Correct classification rates of three methods using dynamic discounting factor.

| Method | Yeast | Glass | Segment | Waveform | Pendigits |
|---|---|---|---|---|---|
| Guo [ ] | 0.4352 | 0.7465 | 0.9338 | 0.7665 | 0.9301 |
| Xu [ ] | 0.4352 | 0.7465 | 0.9338 | 0.7725 | 0.9432 |
| Our dynamic method | 0.9312 |
Figure 6.Instant results of different methods in the whole test process on datasets glass (a) and pendigits (b).
Figure 7.Results of different methods on datasets yeast (a), glass (b), waveform (c), and pendigits (d) with fixed Gaussian noise added.
Figure 8.Instant results of different methods in the whole test process under different conditions: (a) a fixed Gaussian noise added to classifier 1; (b) Gaussian noise added to classifier 1, increasing throughout the testing process; (c) Gaussian noise added to classifier 1, increasing up to the intermediate stage; (d) a fixed uniformly distributed random noise added to classifier 1; (e) partial target labels of classifier 1 changed; (f) a fixed Gaussian noise added to all classifiers.