| Literature DB >> 24977234 |
Bin Li1, Xuewen Rong2, Yibin Li2.
Abstract
Robot execution failures prediction (classification) in the robot tasks is a difficult learning problem due to partially corrupted or incomplete measurements of data and unsuitable prediction techniques for this prediction problem with little learning samples. Therefore, how to predict the robot execution failures problem with little (incomplete) or erroneous data deserves more attention in the robot field. For improving the prediction accuracy of robot execution failures, this paper proposes a novel KELM learning algorithm using the particle swarm optimization approach to optimize the parameters of kernel functions of neural networks, which is called the AKELM learning algorithm. The simulation results with the robot execution failures datasets show that, by optimizing the kernel parameters, the proposed algorithm has good generalization performance and outperforms KELM and the other approaches in terms of classification accuracy. Other benchmark problems simulation results also show the efficiency and effectiveness of the proposed algorithm.Entities:
Mesh:
Year: 2014 PMID: 24977234 PMCID: PMC3996967 DOI: 10.1155/2014/906546
Source DB: PubMed Journal: ScientificWorldJournal ISSN: 1537-744X
Feature information and class distribution of the robot execution failures.
| Datasets | Instances | Classes |
|---|---|---|
| LP1 | 88 | 4 (1 = 24%; 2 = 19%; 3 = 18%; 4 = 39%) |
| LP2 | 47 | 5 (1 = 43%; 2 = 13%; 3 = 15%; 4 = 11%; 5 = 19%) |
| LP3 | 47 | 4 (1 = 43%; 2 = 19%; 3 = 32%; 4 = 6%) |
| LP4 | 117 | 3 (1 = 21%; 2 = 62%; 3 = 18%) |
| LP5 | 164 | 5 (1 = 27%; 2 = 16%; 3 = 13%; 4 = 29%; 5 = 16%) |
Figure 1Relationship between the classification accuracy and the number of some parameters of kernel function on LP1 dataset.
Classification accuracy of robot execution failures based on KELM and AKELM algorithms.
| Kernel data | Gaussian | Tangent | Wavelet | |||
|---|---|---|---|---|---|---|
| KELM | AKELM | KELM | AKELM | KELM | AKELM | |
| LP1 |
|
| 62.50% |
|
|
|
| LP2 | 57.14% | 57.14% | 57.14% |
| 57.14% |
|
| LP3 | 57.14% |
| 57.14% |
| 57.14% |
|
| LP4 | 75% |
| 75% |
| 83.33% |
|
| LP5 | 57.14% |
| 0% |
| 50% |
|
Specification of benchmarks of regression and classification problems.
| Datasets | Names | Attributes | Classes | Training data | Testing data |
|---|---|---|---|---|---|
| Regression | Box and Jenkins gas furnace data | 10 | 1 | 200 | 90 |
| Auto-Mpg | 7 | 1 | 320 | 78 | |
|
| |||||
| Classification | Wine | 13 | 3 | 150 | 28 |
| Diabetes | 8 | 2 | 576 | 192 | |
Comparison of performance by AKELM and KELM learning algorithms for the regression problems.
| Algorithms with different kernel functions | Box and Jenkins gas furnace data | Auto-Mpg | ||||
|---|---|---|---|---|---|---|
| Training error | Testing error | Training time (seconds) | Training error | Testing | Training time (seconds) | |
| KELM (parameters = 1, Gaussian) | 0.0120 | 0.0188 | 0.0394 | 0.0529 | 0.0599 | 0.1213 |
| KELM (parameters = 1, tangent) | 0.0627 | 0.0655 | 0.0116 | 0.6680 | 0.7756 | 0.0346 |
| KELM (parameters = 1, wavelet) | 0.0121 | 0.0206 | 0.0177 | 0.0509 |
| 0.0415 |
| KELM (parameters = 10, Gaussian) | 0.0183 | 0.0213 | 0.0149 | 0.0685 | 0.0732 | 0.0286 |
| KELM (parameters = 10, tangent) | 0.2245 | 0.1986 | 0.0044 | 0.2071 | 0.2085 | 0.0261 |
| KELM (parameters = 10, wavelet) | 0.0306 | 0.0382 | 0.0101 | 0.0662 | 0.0712 | 0.0360 |
| AKELM (Gaussian) | 0.0133 |
| 26.1250 | 0.0503 |
| 74.7656 |
| AKELM (tangent) | 0.0223 |
| 25.2500 | 0.0735 |
| 73.8906 |
| AKELM (wavelet) | 0.0133 |
| 28.3906 | 0.0502 |
| 84.9688 |
Comparison of performance by AKELM and KELM learning algorithms for the classification problems.
| Algorithms with different kernel functions | Wine | Diabetes | ||||
|---|---|---|---|---|---|---|
| Training accuracy | Testing accuracy | Training time | Training accuracy | Testing accuracy | Training time | |
| KELM (parameters = 1, Gaussian) | 100% |
| 0.0277 | 84.38% | 77.08% | 0.1394 |
| KELM (parameters = 1, tangent) | 51.33% | 50% | 0.0067 | 73.78% | 73.44% | 0.1326 |
| KELM (parameters = 1, wavelet) | 100% |
| 0.0070 | 86.81% | 76.56% | 0.1347 |
| KELM (parameters = 10, Gaussian) | 100% |
| 0.0083 | 78.99% | 79.17% | 0.0919 |
| KELM (parameters = 10, tangent) | 39.33% | 42.86% | 0.0023 | 65.80% | 65.63% | 0.0904 |
| KELM (parameters = 10, wavelet) | 100% | 96.43% | 0.0061 | 80.03% | 77.08% | 0.1361 |
| AKELM (Gaussian) | 100% |
| 17.8594 | 90.45% |
| 260.7031 |
| AKELM (tangent) | 97.33% |
| 13.9375 | 73.26% |
| 313.8750 |
| AKELM (wavelet) | 100% |
| 16 | 89.06% |
| 335.5469 |