Min-Cheol Kwon, Geonuk Park, Sunwoong Choi.
Abstract
In recent years, with the increasing use of smartwatches among wearable devices, various applications have been developed for them. However, the realization of a user interface is limited by the size and volume of the smartwatch. This study proposes a method to classify the user's gestures, without any additional input device, to improve the user interface. The smartwatch is equipped with an accelerometer, which collects the data; the gesture pattern is then learned and classified using a machine learning algorithm. By incorporating a convolution neural network (CNN) model, the proposed pattern recognition system is more accurate than the existing model. The performance analysis results show that the proposed pattern recognition system can classify 10 gesture patterns with an accuracy of 97.3%.
Keywords: Internet of things; convolution neural network; gesture pattern recognition; machine learning; smartwatch; wearable device
Year: 2018 PMID: 30205509 PMCID: PMC6164391 DOI: 10.3390/s18092997
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Overall system architecture.
Figure 2. Label selection screen of the smartwatch application.
Figure 3. Gesture patterns.
Figure 4. How to collect the gesture pattern dataset.
Figure 5. The graph of gesture 1 and 10 traces.
Figure 6. The overall architecture of the convolution neural network (CNN) used in the proposed algorithm.
The detailed structure of the CNN used in the proposed algorithm.
| Layers | Patch Size/Stride | Number of Parameters | Output Size |
|---|---|---|---|
| Input | - | - | 1 × 100 × 3 |
| Conv1 | 1 × 3 × 3 × 9 + 9/1 | 90 | 1 × 100 × 9 |
| Pool1 | 1 × 2/2 | - | 1 × 50 × 9 |
| Conv2 | 1 × 3 × 9 × 18 + 18/1 | 504 | 1 × 50 × 18 |
| Pool2 | 1 × 2/2 | - | 1 × 25 × 18 |
| Conv3 | 1 × 3 × 18 × 36 + 36/1 | 1980 | 1 × 25 × 36 |
| Pool3 | 1 × 2/2 | - | 1 × 13 × 36 |
| Conv4 | 1 × 3 × 36 × 72 + 72/1 | 7848 | 1 × 13 × 72 |
| Pool4 | 1 × 2/2 | - | 1 × 7 × 72 |
| Conv5 | 1 × 3 × 72 × 144 + 144/1 | 31,248 | 1 × 7 × 144 |
| Pool5 | 1 × 2/2 | - | 1 × 4 × 144 |
| Conv6 | 1 × 3 × 144 × 288 + 288/1 | 124,704 | 1 × 4 × 288 |
| Pool6 | 1 × 2/2 | - | 1 × 2 × 288 |
| FC1 | 576 × 576 + 576 | 332,352 | 576 |
| Dropout | - | - | - |
| FC2 | 576 × 144 + 144 | 83,088 | 144 |
| Dropout | - | - | - |
| FC3 | 144 × 10 + 10 | 1450 | 10 |
| Softmax | - | - | 10 |
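The parameter counts in the table follow directly from the patch sizes: a 1 × 3 convolution with C_in input and C_out output channels contributes 3 · C_in · C_out weights plus C_out biases, and a fully connected layer contributes N_in · N_out weights plus N_out biases. A minimal sketch in plain Python (not the authors' code) reproducing the table's counts:

```python
def conv1d_params(k, c_in, c_out):
    """Weights (k * c_in * c_out) plus one bias per output channel."""
    return k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    """Weights (n_in * n_out) plus one bias per output unit."""
    return n_in * n_out + n_out

layers = [
    ("Conv1", conv1d_params(3, 3, 9)),      # 90
    ("Conv2", conv1d_params(3, 9, 18)),     # 504
    ("Conv3", conv1d_params(3, 18, 36)),    # 1980
    ("Conv4", conv1d_params(3, 36, 72)),    # 7848
    ("Conv5", conv1d_params(3, 72, 144)),   # 31,248
    ("Conv6", conv1d_params(3, 144, 288)),  # 124,704
    ("FC1", fc_params(2 * 288, 576)),       # 332,352 (Pool6 output 1 x 2 x 288 flattens to 576)
    ("FC2", fc_params(576, 144)),           # 83,088
    ("FC3", fc_params(144, 10)),            # 1450
]
total = sum(p for _, p in layers)
print(total)  # 583264 trainable parameters in all
```

The pooling, dropout, and softmax layers add no trainable parameters, so the totals above account for the entire network.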
The number of datasets for each gesture pattern type.
| Gesture Pattern | Number of Datasets |
|---|---|
| P1 | 500 |
| P2 | 500 |
| P3 | 500 |
| P4 | 500 |
| P5 | 500 |
| P6 | 500 |
| P7 | 500 |
| P8 | 500 |
| P9 | 500 |
| P10 | 500 |
| Total | 5000 |
Confusion matrix of two-class classification.
| Total Population | Condition Positive | Condition Negative |
|---|---|---|
| Predicted condition positive | True Positive (TP) | False Positive (FP) |
| Predicted condition negative | False Negative (FN) | True Negative (TN) |
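The scores reported below follow the standard definitions over these four cells. A small Python sketch with illustrative counts (not the paper's data):

```python
def scores(tp, fp, fn, tn):
    """Standard two-class metrics from the confusion-matrix cells."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative example: 97 true positives, 3 false positives,
# 3 false negatives, 97 true negatives.
acc, prec, rec, f1 = scores(tp=97, fp=3, fn=3, tn=97)
print(acc, prec, rec, f1)  # 0.97 0.97 0.97 0.97
```

For the 10-class task, each metric is presumably computed per class (treating that class as positive) and then averaged.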
The score of the proposed classification model.
| Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|
| 97.3% | 97.36% | 97.32% | 97.32% |
The confusion matrix of the proposed classification model.
| Actual \ Predicted | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 |
|---|---|---|---|---|---|---|---|---|---|---|
| P1 | 99.1% | 0.9% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| P2 | 0.0% | 96.3% | 2.8% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.9% | 0.0% |
| P3 | 0.0% | 4.9% | 92.2% | 0.0% | 0.0% | 1.0% | 0.0% | 0.0% | 1.9% | 0.0% |
| P4 | 0.0% | 0.0% | 1.0% | 98.0% | 0.0% | 0.0% | 0.0% | 0.0% | 1.0% | 0.0% |
| P5 | 0.0% | 0.0% | 0.0% | 0.0% | 96.7% | 1.1% | 0.0% | 2.2% | 0.0% | 0.0% |
| P6 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 99.0% | 0.0% | 0.0% | 1.0% | 0.0% |
| P7 | 0.0% | 0.0% | 0.0% | 1.0% | 0.0% | 0.0% | 97.1% | 0.0% | 1.0% | 1.0% |
| P8 | 1.0% | 0.0% | 0.0% | 0.0% | 1.0% | 1.0% | 0.0% | 96.9% | 0.0% | 0.0% |
| P9 | 0.0% | 1.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 99.0% | 0.0% |
| P10 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 1.1% | 98.9% |
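The diagonal of this matrix gives the per-class recall, and averaging it reproduces the 97.32% recall reported above. A quick check in Python:

```python
# Diagonal entries (per-class recall, %) from the confusion matrix above.
diagonal = [99.1, 96.3, 92.2, 98.0, 96.7, 99.0, 97.1, 96.9, 99.0, 98.9]
macro_recall = sum(diagonal) / len(diagonal)
print(round(macro_recall, 2))  # 97.32
```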
Figure 7. The results of the various algorithms.