| Literature DB >> 23867742 |
Abstract
Sensor drift is currently the most challenging problem in gas sensing. We propose a novel ensemble method with dynamic weights based on fitting (DWF) to solve the gas discrimination problem, regardless of the gas concentration, with high accuracy over extended periods of time. The DWF method uses a dynamically weighted combination of support vector machine (SVM) classifiers trained on datasets collected at different time periods. When testing on future datasets, the classifier weights are predicted by fitting functions, which are obtained by fitting the optimal weights observed during training. We compare the performance of the DWF method with that of competing methods in an experiment based on a public dataset compiled over a period of three years. The experimental results demonstrate that the DWF method outperforms the other methods considered. Furthermore, the DWF method can be further optimized by applying a fitting function that more closely matches the variation of the optimal weights over time.
Year: 2013 PMID: 23867742 PMCID: PMC3758642 DOI: 10.3390/s130709160
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
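Read alongside the abstract above: the pipeline it describes, one SVM per time-stamped training batch combined by a time-varying weighted vote, can be summarized in a short sketch. This is a minimal illustration assuming scikit-learn's `SVC`; the helper names (`train_batch_classifiers`, `dwf_predict`) are ours, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def train_batch_classifiers(batches):
    """Train one SVM per time-stamped training batch; `batches` is a list of (X, y)."""
    return [SVC(kernel="rbf", probability=True).fit(X, y) for X, y in batches]

def dwf_predict(classifiers, weights, X):
    """Combine the classifiers by a soft, weighted vote with the given weights."""
    # Assumes every classifier saw the same label set, so the probability
    # columns line up across classifiers.
    proba = sum(w * clf.predict_proba(X) for w, clf in zip(weights, classifiers))
    return classifiers[0].classes_[np.argmax(proba, axis=1)]
```

At test time, `weights` would come from evaluating the fitted weight functions at the test batch's time step, as the tables below illustrate.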
Figure 1. Performance of ensembles with classifier weights estimated by different methods.
Figure 2. Performance of the classifiers under different settings.
Original optimal weights of each classifier used for the training datasets at different time steps.

| Classifier | t = 1 | t = 2 | t = 3 | t = 4 | t = 5 |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.50 | 0.05 | 0.05 | 0.15 | 0.35 |
| 2 | 0.10 | 0.55 | 0 | 0.05 | 0 |
| 3 | 0.25 | 0.15 | 0.90 | 0.20 | 0 |
| 4 | 0.15 | 0.10 | 0.05 | 0.50 | 0 |
| 5 | 0 | 0.15 | 0 | 0.10 | 0.65 |
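The weights above are all multiples of 0.05, and within each time step (column) they sum to 1, which is consistent with a convex combination searched over a discretized weight simplex. The record does not say how the optimal weights were obtained; the following is a hypothetical sketch of such a search (the helper names and the 0.05 grid are assumptions, not the authors' procedure).

```python
from itertools import product
import numpy as np

def simplex_grid(n, step=0.05):
    """Yield every length-n weight vector with entries in {0, step, 2*step, ...} summing to 1."""
    k = round(1 / step)
    for parts in product(range(k + 1), repeat=n - 1):
        s = sum(parts)
        if s <= k:
            yield np.array(parts + (k - s,)) * step

def optimal_weights(classifiers, X_val, y_val):
    """Pick the simplex point whose weighted soft vote is most accurate on (X_val, y_val)."""
    probas = [clf.predict_proba(X_val) for clf in classifiers]
    best_w, best_acc = None, -1.0
    for w in simplex_grid(len(classifiers)):
        pred = classifiers[0].classes_[np.argmax(sum(wi * p for wi, p in zip(w, probas)), axis=1)]
        acc = np.mean(pred == y_val)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w
```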
Normalized optimal weights of each classifier used for the training datasets at different time steps.

| Classifier | t = 1 | t = 2 | t = 3 | t = 4 | t = 5 |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.5001 | 0.1500 | 0.1500 | 0.1500 | 0.1500 |
| 2 | 0.1000 | 1.6505 | 0 | 0.0500 | 0 |
| 3 | 0.2500 | 0.4501 | 2.6993 | 0.1999 | 0 |
| 4 | 0.1500 | 0.3001 | 0.1500 | 0.4999 | 0 |
| 5 | 0 | 0.4501 | 0 | 0.1000 | 0.2785 |
Parameters of the five fitting functions.

| Fitting function | Parameter 1 | Parameter 2 | Parameter 3 | Parameter 4 |
| --- | --- | --- | --- | --- |
| 1 | 5.6821 | 3.0462 | −8.0483 | 0.1500 |
| 2 | −0.9910 | −0.4797 | 6.7208 | 0.8296 |
| 3 | −12.3862 | −4.4880 | 68.5792 | 1.0641 |
| 4 | 21.1944 | −0.01274 | 0.1071 | −20.9216 |
| 5 | −21.5056 | 1.3284 | 3.3586 | 0.2065 |
Predicted weights of each classifier.

| Classifier | t = 6 | t = 7 | t = 8 | t = 9 | t = 10 |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.0707 | 0.0687 | 0.0697 | 0.0774 | 0.1173 |
| 2 | 0.2839 | 0.3481 | 0.3697 | 0.4260 | 0.6489 |
| 3 | 0.5015 | 0.4872 | 0.4944 | 0.5489 | 0.8324 |
| 4 | 0.0466 | 0.0014 | −0.0298 | −0.1588 | −0.7601 |
| 5 | 0.0973 | 0.0946 | 0.0960 | 0.1065 | 0.1616 |
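The prediction step the abstract describes, fitting each classifier's optimal-weight curve over time and extrapolating it to future time steps, can be illustrated with the data from the tables above. The record does not give the functional form of the five fitting functions; the cubic polynomial below (which also has four parameters per classifier, like the parameter table above) is purely a stand-in.

```python
import numpy as np

train_steps = np.array([1, 2, 3, 4, 5])    # time steps with known optimal weights
test_steps = np.array([6, 7, 8, 9, 10])    # future time steps to predict

# optimal_w[i] = optimal weight of classifier i at each training time step
# (values taken from the first table above)
optimal_w = np.array([
    [0.50, 0.05, 0.05, 0.15, 0.35],
    [0.10, 0.55, 0.00, 0.05, 0.00],
    [0.25, 0.15, 0.90, 0.20, 0.00],
    [0.15, 0.10, 0.05, 0.50, 0.00],
    [0.00, 0.15, 0.00, 0.10, 0.65],
])

# One least-squares cubic fit per classifier (4 coefficients each),
# then evaluate each fitted curve at the future time steps.
coeffs = [np.polyfit(train_steps, w_i, deg=3) for w_i in optimal_w]
predicted_w = np.array([np.polyval(c, test_steps) for c in coeffs])

# Note: an unconstrained fit can extrapolate to negative weights (cf. the
# −0.7601 entry in the predicted-weights table), so a practical
# implementation might clip or renormalize the predictions.
```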
Figure 3. Performance of the classifiers under the proposed method and for other settings. (a) Training batches: S1–5 (collected from the first to the 16th month); testing batches: S6–10 (collected from the 17th to the 36th month); (b) Training batches: S1–6 (collected from the first to the 20th month); testing batches: S7–10 (collected from the 21st to the 36th month); (c) Training batches: S1–7 (collected from the first to the 21st month); testing batches: S8–10 (collected from the 22nd to the 36th month); (d) Training batches: S1–8 (collected from the first to the 23rd month); testing batches: S9–10 (collected from the 24th to the 36th month); (e) Legend.
Figure 4. Performance of each ensemble using different gaps of the training dataset.