Daniel Nogueira, Paulo Abreu, Maria Teresa Restivo.
Abstract
BACKGROUND: This work presents a comparison and selection of different machine learning classification techniques applied to the identification of objects, using data collected by an instrumented glove during a grasp process. The selected classification techniques can be applied to e-rehabilitation and e-training exercises for different pathologies, for example in aphasic patients.
Keywords: Classification; Instrumented glove; Machine learning; Objects identification
Year: 2019 PMID: 30845959 PMCID: PMC6407276 DOI: 10.1186/s12938-019-0639-0
Source DB: PubMed Journal: Biomed Eng Online ISSN: 1475-925X Impact factor: 2.819
Examples of studies based on the use of technology applied to aphasia
| Authors | Work |
|---|---|
| University of London and Stroke Association | Development of a multi-user online virtual world for practicing speech and communication |
| Macoir et al. | Review of technology-based aphasia treatments, highlighting the critical determinants of treatment success |
| Marshall et al. | Feasibility study of e-rehabilitation systems used in the treatment of patients with aphasia |
| Roper et al. | Analysis of the benefits and limitations of using a system based on gesture therapy for people with severe aphasia |
| Lanyi et al. | Software package providing an interactive virtual world to assist speech therapy and orientation skills in aphasic patients |
Fig. 1 System architecture
Fig. 2 Diagram of the communication between the e-rehabilitation software application and the database
Fig. 3 Set of objects
Division of objects into shape groups
| Shape groups | Objects |
|---|---|
| Spherical | Ball and small ball |
| Cylindrical | Bottle and cup |
| Rectangular | Box and phone |
| Others | Mouse and tool |
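For later use in a two-stage classifier, the shape-group division above can be expressed as a plain lookup table. The sketch below is only illustrative; the object label strings and the integer group codes are assumptions, not the labels used in the authors' dataset.

```python
# Hypothetical mapping from object labels to the shape groups of the table above.
# The label strings and the integer group codes are assumptions for illustration.
SHAPE_GROUP = {
    "ball": "spherical", "small_ball": "spherical",
    "bottle": "cylindrical", "cup": "cylindrical",
    "box": "rectangular", "phone": "rectangular",
    "mouse": "others", "tool": "others",
}
GROUP_CODE = {"spherical": 0, "cylindrical": 1, "rectangular": 2, "others": 3}
```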
Training and test time of model M0, and accuracy of the first classifier structure, scenario 1 (universal use)
| Classification technique | Training time (ms) | Test time (ms) | Accuracy (%) |
|---|---|---|---|
| Bagging | 411.62 | 15.68 | 91.9 |
| Decision tree (entropy) | 7.2 | 0.40 | 55.7 |
| Decision tree (Gini) | 5.8 | 0.30 | 86.6 |
| kNN | 1.36 | 4.75 | 91.5 |
| Linear discriminant analysis | 10.43 | 0.24 | 66.8 |
| SVM (linear SVC) | 144.81 | 0.53 | 66.1 |
| Logistic regression | 41.49 | 0.42 | 67.8 |
| Logistic regression CV | 915.77 | 0.48 | 67.8 |
| MLP | 3113.93 | 0.72 | 83.5 |
| Naive Bayes (Bernoulli) | 1.57 | 0.35 | 52.2 |
| Naive Bayes (Gaussian) | 1.62 | 0.86 | 74.0 |
| NearestCentroid | 1.19 | 0.49 | 67.9 |
| Quadratic discriminant analysis | 3.09 | 0.76 | 87.0 |
| Radius neighbors | 1.17 | 146.89 | 47.5 |
| *Random forest* |  | *14.38* | *93.2* |
| Ridge | 2.9 | 0.22 | 64.2 |
| Ridge CV | 3.05 | 0.33 | 64.4 |
| *Label propagation* |  | *16.38* | *93.2* |
| *Label spreading* |  | *16.90* | *93.2* |
Classification techniques with the best results are shown in italics
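The table above reports, for each scikit-learn classifier, the time to train model M0, the time to classify the test set, and the resulting accuracy. The sketch below shows one way such a comparison can be timed and scored; the `load_glove_data` helper, the assumed 14 sensor readings per grasp, and the default hyperparameters are placeholders rather than the authors' setup, so the measured values will differ from the table.

```python
# Hedged sketch: benchmarking several scikit-learn classifiers the way the
# M0 table does (training time, test time, accuracy) on placeholder glove data.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

def load_glove_data():
    """Placeholder: random data standing in for instrumented-glove grasp samples."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 14))    # assumed 14 sensor readings per grasp
    y = rng.integers(0, 8, size=800)  # 8 objects, as in the object set of Fig. 3
    return X, y

X, y = load_glove_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "kNN": KNeighborsClassifier(),
    "Decision tree (Gini)": DecisionTreeClassifier(criterion="gini"),
    "Decision tree (entropy)": DecisionTreeClassifier(criterion="entropy"),
    "Bagging": BaggingClassifier(),
    "Random forest": RandomForestClassifier(),
    "NearestCentroid": NearestCentroid(),
    "Label propagation": LabelPropagation(),
    "Label spreading": LabelSpreading(),
}

for name, clf in classifiers.items():
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)
    train_ms = (time.perf_counter() - t0) * 1000
    t0 = time.perf_counter()
    acc = clf.score(X_test, y_test)
    test_ms = (time.perf_counter() - t0) * 1000
    print(f"{name:25s} train {train_ms:8.2f} ms  test {test_ms:7.2f} ms  acc {acc:.3f}")
```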
Training time of models M1 and M2, and test time and accuracy of the second classifier structure, scenario 1 (universal use)
| Classification technique | M1 training time (ms) | M2 training time (ms) | Test time (ms) | Accuracy (%) |
|---|---|---|---|---|
| Bagging | 392.18 | 276.36 | 39.74 | 95.9 |
| Decision tree (entropy) | 5.16 | 6.82 | 10.96 | 82.8 |
| Decision tree (Gini) | 5.04 | 5.14 | 10.49 | 92.0 |
| kNN | 0.84 | 0.78 | 19.72 | 94.7 |
| Linear discriminant analysis | 5.62 | 6.96 | 11.35 | 67.1 |
| SVM (linear SVC) | 146.66 | 202.21 | 12.47 | 69.1 |
| Logistic regression | 16.21 | 59.15 | 11.58 | 73.6 |
| Logistic regression CV | 314.46 | 1320.23 | 11.81 | 74.0 |
| MLP | 2478.57 | 2886.71 | 12.07 | 90.5 |
| Naive Bayes (Bernoulli) | 1.46 | 1.04 | 11.73 | 56.0 |
| Naive Bayes (Gaussian) | 1.5 | 1.04 | 12.09 | 85.3 |
| NearestCentroid | 1.02 | 0.6 | 11.49 | 64.3 |
| Quadratic discriminant analysis | 1.52 | 1.4 | 11.87 | 82.5 |
| Radius neighbors | 1.14 | 0.82 | 247.42 | 54.7 |
| *Random forest* |  |  | *36.58* | *96.6* |
| Ridge | 1.79 | 1.28 | 11.76 | 60.9 |
| Ridge CV | 2.64 | 2.38 | 11.42 | 61.0 |
| *Label propagation* |  |  | *46.22* | *96.7* |
| *Label spreading* |  |  | *45.64* | *96.3* |
Classification techniques with the best results are shown in italics
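Fig. 5 and the table above correspond to the second classifier structure, in which model M1 and model M2 are combined. The sketch below shows one plausible chaining scheme: M1 predicts the shape group and M2 predicts the object from the glove features plus the predicted group. This particular way of combining the two models, the function names, and the choice of random forests are assumptions made for illustration, not the authors' exact design.

```python
# Hedged sketch of a two-stage (M1 + M2) classifier structure; the chaining
# scheme used here is an assumption, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_two_stage(X, y_group, y_object):
    """M1 learns the shape group; M2 learns the object from features + predicted group.
    y_group is assumed to hold integer-coded shape groups (e.g. via GROUP_CODE above)."""
    m1 = RandomForestClassifier(random_state=0).fit(X, y_group)
    group_feature = m1.predict(X).reshape(-1, 1)
    m2 = RandomForestClassifier(random_state=0).fit(np.hstack([X, group_feature]), y_object)
    return m1, m2

def predict_two_stage(m1, m2, X):
    group_feature = m1.predict(X).reshape(-1, 1)
    return m2.predict(np.hstack([X, group_feature]))
```

With integer-coded shape groups derived from the object labels, the two models can be trained and evaluated exactly like the single-model structure above.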
Fig. 4 First structure for object classification
Fig. 5 Second classifier structure for object classification using the M1 and M2 models
Testing times and accuracies of both classifier structures in both scenarios (S1: universal use; S2: personalised use)
| Classification technique | M0, S1: test time (ms) | M0, S1: acc. (%) | M0, S2: test time (ms) | M0, S2: acc. (%) | M1 + M2, S1: test time (ms) | M1 + M2, S1: acc. (%) | M1 + M2, S2: test time (ms) | M1 + M2, S2: acc. (%) |
|---|---|---|---|---|---|---|---|---|
| Label propagation | 16.38 | 93.2 | 2.14 | 94.2 | 46.22 | 96.7 | 4.13 | 99.0 |
| Label spreading | 16.90 | 93.2 | 2.28 | 94.0 | 45.64 | 96.3 | 4.33 | 98.0 |
| Random forest | 14.38 | 93.2 | 8.26 | 95.0 | 36.58 | 96.6 | 4.45 | 99.0 |
Fig. 6 Normalized confusion matrices: first classifier structure (M0) using the first scenario (universal use)
Fig. 7 Normalized confusion matrices: first classifier structure (M0) using the second scenario (personalised use)
Fig. 8 Normalized confusion matrices: second classifier structure (M1 + M2) using the first scenario (universal use)
Fig. 9 Normalized confusion matrices: second classifier structure (M1 + M2) using the second scenario (personalised use)
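Figs. 6-9 show row-normalized confusion matrices for both classifier structures and both scenarios. A minimal sketch of computing such a matrix with scikit-learn is given below; the label arrays are placeholders standing in for the glove test set and a classifier's predictions.

```python
# Hedged sketch: row-normalized confusion matrix of the kind shown in Figs. 6-9.
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder labels standing in for the test-set objects and the predictions.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 3])

cm = confusion_matrix(y_true, y_pred, normalize="true")  # each row sums to 1
print(cm.round(2))
```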