Xiaoguang Liu, Jiawei Wang, Tingwen Han, Cunguang Lou, Tie Liang, Hongrui Wang, Xiuling Liu.
Abstract
The intelligent prosthetic hand is an important branch of intelligent robotics. It can remotely replace humans in completing various complex tasks and can also assist humans in rehabilitation training. In human-computer interaction, a prosthetic hand can be accurately controlled by surface electromyography (sEMG). This paper proposes a new multichannel fusion scheme (MSFS) that extends the virtual channels of sEMG and improves the accuracy of gesture recognition. In addition, the temporal convolutional network (TCN) from deep learning is improved to enhance network performance. Finally, sEMG is collected with the Myo armband and a prosthetic hand is controlled in real time to validate the new method. The experimental results show that the proposed method improves the accuracy of intelligent prosthetic hand control, reaching an accuracy of 93.69%.
Year: 2022 PMID: 35607430 PMCID: PMC9124145 DOI: 10.1155/2022/6488599
Source DB: PubMed Journal: Appl Bionics Biomech ISSN: 1176-2322 Impact factor: 1.664
Figure 1. The scheme of the entire process.
Subject information in the Myo_data dataset.
| Number of subjects | Male to female ratio | Average height (cm) | Average weight (kg) | Average body mass index (kg/m2) |
|---|---|---|---|---|
| 10 | 1 : 1 | 170.6 ± 9.49 | 62.19 ± 3.12 | 21.37 ± 3.28 |
Figure 2. Data acquisition status of the gestures.
Specific information for 4 subjects.
| Subject | Gender | Age | Height (cm) | Weight (kg) | Average body mass index (kg/m2) |
|---|---|---|---|---|---|
| A | Male | 24 | 175 | 74 | 24.16 |
| B | Female | 23 | 165 | 55 | 20.20 |
| C | Male | 26 | 180 | 81 | 25.00 |
| D | Female | 24 | 155 | 50 | 20.81 |
Figure 3. The operation of the MSFS method.
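The paper does not reproduce the exact MSFS fusion rule in this record, but the doubling of the input dimension (80 → 160 for Myo_data, 160 → 320 for DB5 in the dimensionality-reduction table) is consistent with generating one virtual channel per adjacent pair of physical channels. A minimal sketch, assuming virtual channels are formed by sample-wise averaging of adjacent physical channels; `fuse_adjacent` is a hypothetical helper, not the paper's implementation:

```python
def fuse_adjacent(channels):
    """Append one virtual channel per adjacent pair of physical
    channels, doubling the channel count (8 -> 16 for the Myo armband).
    Assumption: the armband electrodes sit circularly around the
    forearm, so the last channel is paired with the first, and
    fusion is sample-wise averaging (an illustrative choice)."""
    n = len(channels)
    virtual = []
    for i in range(n):
        a, b = channels[i], channels[(i + 1) % n]
        virtual.append([(x + y) / 2.0 for x, y in zip(a, b)])
    return channels + virtual

# 8 toy physical channels with 5 samples each
phys = [[float(c * 10 + t) for t in range(5)] for c in range(8)]
ext = fuse_adjacent(phys)
print(len(ext))  # 16 channels after fusion
```

Averaging neighbours is plausible here because the Pearson table below shows adjacent rows/columns of electrodes are positively correlated, so fused neighbours carry coherent muscle activity rather than noise.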
Pearson correlation coefficient.
| Pearson correlation coefficient (average value) | Adjacent rows | Adjacent columns |
|---|---|---|
| | 0.47 | 0.42 |
| | 0.29 | 0.39 |
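The coefficients above follow the standard Pearson definition. A self-contained sketch on toy signals (not the paper's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals:
    covariance divided by the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two toy "adjacent channel" signals that move together
row_a = [1.0, 2.0, 3.0, 4.0]
row_b = [2.0, 4.0, 6.0, 8.0]
print(round(pearson(row_a, row_b), 2))  # 1.0 for perfectly correlated signals
```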
Figure 4. Comparison of sEMG signals before and after denoising.
Figure 5. The operation of the sliding window.
Figure 6. Visual structure diagram of the expanded convolutional layer.
Figure 7. The TCNS architecture.
Figure 8. The TCND architecture.
Figure 9. Cumulative variance contribution rate of PCA dimension reduction.
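Sliding-window segmentation, as in Figure 5, splits the continuous sEMG stream into fixed-length, overlapping analysis windows. A minimal sketch; the window length and step below are illustrative, not the paper's parameters:

```python
def sliding_windows(signal, length, step):
    """Split a 1-D sEMG signal into overlapping analysis windows.
    Typical real-time sEMG setups use windows of a few hundred
    milliseconds with a smaller step so consecutive windows overlap;
    the exact values here are assumptions for illustration."""
    return [signal[i:i + length]
            for i in range(0, len(signal) - length + 1, step)]

sig = list(range(10))
wins = sliding_windows(sig, length=4, step=2)
print(len(wins))  # 4 windows: [0..3], [2..5], [4..7], [6..9]
```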
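The expanded (dilated) convolutional layer of Figure 6 is the building block of a TCN: each output sample depends only on present and past inputs, spaced by the dilation factor, so stacking layers with dilations 1, 2, 4, … grows the receptive field exponentially with depth. A pure-Python sketch of one causal dilated layer (not the paper's network code):

```python
def causal_dilated_conv(x, weights, dilation):
    """Causal dilated 1-D convolution: output at time t depends only
    on x[t], x[t - d], x[t - 2d], ..., so no future samples leak in.
    Missing history is treated as implicit left zero-padding."""
    k = len(weights)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j in range(k):
            idx = t - j * dilation
            if idx >= 0:  # implicit zero-padding on the left
                acc += weights[j] * x[idx]
        out.append(acc)
    return out

x = [1.0] * 8
y = causal_dilated_conv(x, weights=[1.0, 1.0], dilation=2)
print(y)  # the first two outputs see one tap, the rest see both
```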
Dimensionality reduction results.
| Database | Input dimension | The number of principal components |
|---|---|---|
| Myo_data | 80 | 10 |
| Myo_data+MSFS | 160 | 20 |
| DB5 | 160 | 20 |
| DB5+MSFS | 320 | 40 |
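Figure 9 and the table above show the number of principal components retained after PCA, chosen from the cumulative variance contribution rate. A sketch of that selection rule given an eigenvalue spectrum; the eigenvalues and the 90% threshold below are toy assumptions, not the paper's values:

```python
def n_components(eigenvalues, threshold):
    """Smallest number of principal components whose cumulative
    variance contribution rate reaches the threshold.
    Eigenvalues are assumed sorted in descending order."""
    total = sum(eigenvalues)
    cum = 0.0
    for k, ev in enumerate(eigenvalues, start=1):
        cum += ev / total
        if cum >= threshold:
            return k
    return len(eigenvalues)

evs = [5.0, 3.0, 1.0, 0.5, 0.5]  # toy spectrum, total variance 10
print(n_components(evs, threshold=0.90))  # 3 components reach 90%
```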
The accuracy, recall, and precision of 10-gesture recognition (%).
| Algorithm | MSFS (Y/N) | Dimension | Accuracy | Recall | Precision | F1 score |
|---|---|---|---|---|---|---|
| KNN | N | 10 | 87.59 | 87.60 | 87.70 | 87.65 |
| KNN | Y | 20 | 88.63 | 88.64 | 88.70 | 88.67 |
| LDA | N | 10 | 87.67 | 87.68 | 87.85 | 87.76 |
| LDA | Y | 20 | 88.15 | 88.19 | 88.32 | 88.25 |
| SVM | N | 10 | 84.74 | 84.70 | 84.89 | 84.79 |
| SVM | Y | 20 | 85.27 | 85.24 | 85.42 | 85.33 |
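The metrics reported above follow the standard multi-class definitions. A macro-averaged sketch computing accuracy, recall, precision, and F1 from a confusion matrix (toy 2-class data, not the paper's results; the macro-F1 convention used here is one common choice):

```python
def macro_metrics(cm):
    """Accuracy, macro recall, macro precision, and F1 from a square
    confusion matrix cm[true][pred]. F1 is taken as the harmonic mean
    of the macro precision and macro recall (one common convention)."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    recalls, precisions = [], []
    for i in range(n):
        tp = cm[i][i]
        row = sum(cm[i])                       # all samples of true class i
        col = sum(cm[j][i] for j in range(n))  # all samples predicted as i
        recalls.append(tp / row if row else 0.0)
        precisions.append(tp / col if col else 0.0)
    r = sum(recalls) / n
    p = sum(precisions) / n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return correct / total, r, p, f1

cm = [[9, 1], [2, 8]]  # toy 2-class confusion matrix
acc, rec, prec, f1 = macro_metrics(cm)
print(round(acc, 3))  # 0.85
```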
The average accuracy of 10-gesture recognition (%).
| Algorithm | MSFS (Y/N) | Accuracy | Recall | Precision | F1 score |
|---|---|---|---|---|---|
| TCNS | N | 91.52 | 91.54 | 91.60 | 91.57 |
| TCNS | Y | 92.34 | 92.36 | 92.42 | 92.39 |
| TCND | N | 92.41 | 92.44 | 92.61 | 92.52 |
| TCND | Y | 93.69 | 93.71 | 93.80 | 93.75 |
Figure 10. Accuracy curves of the network training set for the two proposed TCN models.
Figure 11. Loss curves of the network training set for the two proposed TCN models.
Figure 12. Accuracy curve of the network training set.
Figure 13. The hardware structure of the prosthetic hand.
Figure 14. The implementation process of the system.
Figure 15. The status of intelligent prosthetic hand control.
Figure 16. Number of successes and failures in online gesture recognition.
Average accuracy (%) of 10-gesture recognition by applying the MSFS-TCND algorithm.
| Algorithm | Accuracy | Recall | Precision | F1 score |
|---|---|---|---|---|
| MSFS-TCND | 90.03 | 90.25 | 90.11 | 89.97 |