Ali Raza Asif, Asim Waris, Syed Omer Gilani, Mohsin Jamil, Hassan Ashraf, Muhammad Shafique, Imran Khan Niazi.
Abstract
Electromyography (EMG) is a measure of the electrical activity generated by the contraction of muscles. Non-invasive surface EMG (sEMG)-based pattern recognition methods have shown potential for upper limb prosthesis control, but they are still insufficient for natural control. Recent advancements in deep learning have shown tremendous progress in biosignal processing. Multiple architectures have been proposed that yield high accuracies (>95%) in offline analysis, yet the delay caused by optimizing the system remains a challenge for real-time application. This creates a need for an optimized deep learning architecture based on fine-tuned hyper-parameters. Although convergence under any given configuration is uncertain, it is important to verify that the performance gain is significant enough to justify the extra computation. In this study, a convolutional neural network (CNN) was implemented to decode hand gestures from sEMG data recorded from 18 subjects, in order to investigate the effect of hyper-parameters on each hand gesture. Results showed that a learning rate of either 0.0001 or 0.001 with 80-100 epochs significantly outperformed (p < 0.05) the other configurations. In addition, regardless of network configuration, some motions (close hand, flex hand, extend hand and fine grip) performed better (83.7% ± 13.5%, 71.2% ± 20.2%, 82.6% ± 13.9% and 74.6% ± 15%, respectively) throughout the course of the study. A robust and stable myoelectric control scheme can therefore be designed on the basis of the best-performing hand motions. With improved recognition and uniform gain in performance, the deep learning-based approach has the potential to be a more robust alternative to traditional machine learning algorithms.
Keywords: classification; deep learning; electromyography; machine learning; myoelectric control; prostheses
Year: 2020 PMID: 32183473 PMCID: PMC7146563 DOI: 10.3390/s20061642
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
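The abstract above describes a sweep over five learning rates crossed with five epoch budgets. A minimal sketch of such a sweep is shown below; only the grid values come from the paper, while the Adam optimizer, the `make_model` factory, and the data loaders are placeholder assumptions, not details reported by the authors.

```python
# Hypothetical sketch of the hyper-parameter sweep described in the abstract.
# Only LEARNING_RATES and EPOCH_BUDGETS are taken from the paper; the model
# factory, optimizer choice and loaders are illustrative assumptions.
from itertools import product

import torch
from torch import nn, optim

LEARNING_RATES = [0.00001, 0.0001, 0.001, 0.01, 0.1]  # as in the tables below
EPOCH_BUDGETS = [20, 40, 60, 80, 100]


def accuracy(model: nn.Module, loader) -> float:
    """Fraction of correctly classified gesture windows on a held-out set."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total


def sweep(make_model, train_loader, test_loader):
    """Train one fresh model per (learning rate, epochs) cell and score it."""
    results = {}
    loss_fn = nn.CrossEntropyLoss()
    for lr, n_epochs in product(LEARNING_RATES, EPOCH_BUDGETS):
        model = make_model()  # re-initialize so each grid cell starts fresh
        optimizer = optim.Adam(model.parameters(), lr=lr)
        model.train()
        for _ in range(n_epochs):
            for x, y in train_loader:
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()
                optimizer.step()
        results[(lr, n_epochs)] = accuracy(model, test_loader)
    return results
```

Retraining from scratch for every epoch budget is a simplification; checkpointing one run at 20, 40, 60, 80 and 100 epochs would give the same grid at lower cost.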
Figure 1. Hand gestures performed by each subject in this study; the neutral or rest position is shown in (A). The gestures are: (B) hand open, (C) hand close, (D) pronation (forearm), (E) supination (forearm), (F) extension (wrist), (G) flexion (wrist), (H) side grip, (I) fine grip, (J) pointer, and (K) agree.
Figure 2. Architecture of the convolutional neural network.
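The layer-by-layer configuration of Figure 2 is not preserved in this record. Purely as a hypothetical illustration of the kind of network the caption names, the sketch below builds a compact 1-D CNN for windowed multi-channel sEMG; the 8-channel input, 150-sample window, kernel sizes and 10 gesture classes are all assumptions, not the authors' settings.

```python
# Hypothetical 1-D CNN for windowed sEMG gesture classification.
# All sizes below are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class SEMGConvNet(nn.Module):
    def __init__(self, n_channels: int = 8, window_len: int = 150, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolutions over the sEMG window, one input channel per electrode
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            # two pooling layers halve the time axis twice, hence window_len // 4
            nn.Linear(64 * (window_len // 4), 128),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),  # one logit per gesture class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, window_len)
        return self.classifier(self.features(x))
```

A batch of shape (batch, 8, 150) yields one logit per gesture; in the sweep sketched earlier, a fresh instance of such a model would be trained for every learning-rate/epoch cell.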
Mean classification error for each learning rate vs. epochs.
| Learning Rate | 20 Epochs | 40 Epochs | 60 Epochs | 80 Epochs | 100 Epochs |
|---|---|---|---|---|---|
| 0.00001 | 50.9% | 40.1% | 33.4% | 29.5% | 28.8% |
| 0.0001 | 20.7% | 14.1% | 10.4% | 10.2% | 10% |
| 0.001 | 14.1% | 11.3% | 9.8% | 8.3% | 8% |
| 0.01 | 31.2% | 25% | 23.4% | 20.7% | 23% |
| 0.1 | 68.8% | 66.1% | 66% | 63.4% | 58.8% |
Subject-wise average classification accuracy (%) for each learning rate.
| Subjects | LR = 0.00001 | LR = 0.0001 | LR = 0.001 | LR = 0.01 | LR = 0.1 |
|---|---|---|---|---|---|
| Subject 1 | 52.9 | 63 | 87.3 | 86.1 | 13.3 |
| Subject 2 | 57.6 | 89.2 | 88.4 | 67.1 | 31.9 |
| Subject 3 | 68.8 | 93.5 | 92.3 | 79.3 | 24.3 |
| Subject 4 | 70.5 | 90 | 92.6 | 82.3 | 30.6 |
| Subject 5 | 74.6 | 92.6 | 93.5 | 86.8 | 45.5 |
| Subject 6 | 68.4 | 90 | 94.2 | 75.8 | 40 |
| Subject 7 | 57.8 | 89.6 | 88.6 | 67.2 | 31.6 |
| Subject 8 | 63.1 | 85.4 | 89.4 | 77.7 | 39 |
| Subject 9 | 56 | 88.3 | 90.5 | 73.6 | 24.4 |
| Subject 10 | 61 | 85.4 | 88.2 | 67.7 | 31.5 |
| Subject 11 | 64 | 83.3 | 87.3 | 77.3 | 42 |
| Subject 12 | 66.4 | 89.4 | 90 | 76.6 | 42.5 |
| Subject 13 | 61.7 | 83.3 | 88.4 | 81.3 | 42.3 |
| Subject 14 | 59 | 85.8 | 87.4 | 53.8 | 31.6 |
| Subject 15 | 70.5 | 90.4 | 87.5 | 69.8 | 32.6 |
| Subject 16 | 66.8 | 84.4 | 86.7 | 75.1 | 44.5 |
| Subject 17 | 63.7 | 87.9 | 87.8 | 73.4 | 45.9 |
| Subject 18 | 74.6 | 92.9 | 93.1 | 86.7 | 45.4 |
Figure 3. Mean classification error averaged across all subjects for each learning rate over different training iterations (lower is better).
Average classification accuracy for each learning rate vs. epochs.
| Learning Rate | 20 Epochs | 40 Epochs | 60 Epochs | 80 Epochs | 100 Epochs |
|---|---|---|---|---|---|
| 0.00001 | 49.1% | 59.9% | 66.6% | 70.5% | 71.2% |
| 0.0001 | 79.3% | 85.9% | 89.6% | 89.8% | 90% |
| 0.001 | 85.9% | 88.7% | 90.2% | 91.7% | 92% |
| 0.01 | 68.8% | 75% | 76.6% | 79.3% | 77% |
| 0.1 | 31.2% | 33.9% | 34% | 36.6% | 41.2% |
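This accuracy table is the exact complement of the error table above (accuracy = 100% − error), as a few spot checks with values copied from the two tables confirm:

```python
# Spot-check that the accuracy table is 100% minus the error table,
# keyed by (learning rate, epochs) pairs copied from both tables.
error = {(0.00001, 20): 50.9, (0.001, 100): 8.0, (0.1, 100): 58.8}
accuracy = {(0.00001, 20): 49.1, (0.001, 100): 92.0, (0.1, 100): 41.2}
for key, err in error.items():
    assert abs(accuracy[key] - (100.0 - err)) < 1e-9
```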
Figure 4. Performance comparison of the network at different learning rates for the classification of individual gestures. Values closer to the circumference indicate better performance; values closer to the origin indicate poorer performance.
Mean classification error (learning rate).
| Learning Rate | Mean Classification Error |
|---|---|
| 0.00001 | |
| 0.0001 | |
| 0.001 | |
| 0.01 | |
| 0.1 | |