Mingqiang Li, Ziwen Liu, Siqi Tang, Jianjun Ge, Feng Zhang.
Abstract
Feature extraction is a key task in the processing of surface electromyography (SEMG) signals. Most current approaches extract features with deep learning methods and show strong performance. However, supervised learning is limited by the high cost of label collection, so unsupervised methods are attracting increasing attention. In this study, to better capture the different attribute information in the signal data, we propose an information-based method, named the Layer-wise Feature Extraction Algorithm (LFEA), that learns a disentangled feature representation of SEMG signals in an unsupervised manner. Furthermore, because attributes differ in their level of abstraction, we specifically design a layer-wise network structure. On the TC score and the MIG metric, our method shows the best disentanglement performance, scoring 6.2 lower and 0.11 higher than the second-best method, respectively. LFEA also leads the other models by at least 5.8% in motion-classification accuracy. All experiments demonstrate the effectiveness of LFEA.
Keywords: disentangled representation; feature extraction; information bottleneck; information theory; surface electromyography; unsupervised learning
Year: 2022 PMID: 36051640 PMCID: PMC9427327 DOI: 10.3389/fnins.2022.975131
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
FIGURE 1 The diagram of the Layer-wise Feature Extraction Algorithm (LFEA). LFEA contains three core modules: the Information Compression Module (ICM), the Information Expression Module (IEM), and the Information Separation Module (ISM), which ensure compression, expression, and disentanglement of the representation, respectively.
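To make the layer-wise structure in Figure 1 concrete, below is a minimal PyTorch sketch of an encoder that emits one latent code per layer, matching the 4 layers and latent size 5 from the parameter table later in this record. The input dimensionality, hidden width, and Gaussian reparameterization are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LayerwiseEncoder(nn.Module):
    """Stack of encoder layers; each layer emits its own latent code z_i."""
    def __init__(self, in_dim=400, hidden=128, z_dim=5, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.heads = nn.ModuleList()
        d = in_dim
        for _ in range(n_layers):
            self.blocks.append(nn.Sequential(nn.Linear(d, hidden), nn.ReLU()))
            # Each head parameterizes a Gaussian latent (mean, log-variance).
            self.heads.append(nn.Linear(hidden, 2 * z_dim))
            d = hidden

    def forward(self, x):
        zs, h = [], x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            mu, logvar = head(h).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            zs.append(z)
        return zs  # one latent per abstraction level

enc = LayerwiseEncoder()
zs = enc(torch.randn(8, 400))        # batch of 8 flattened SEMG windows (assumed shape)
print([tuple(z.shape) for z in zs])  # [(8, 5), (8, 5), (8, 5), (8, 5)]
```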
FIGURE 2 The discriminator D(·). To compute and optimize the loss, we need an additional discriminator, as shown in Eq. (13).
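This record does not reproduce Eq. (13), so the following is a hedged sketch assuming a FactorVAE-style density-ratio discriminator (Kim and Mnih, 2018): it is trained to tell joint latent samples from dimension-wise permuted ones, and its logit then serves as a total-correlation penalty for the encoder.

```python
# Hypothetical discriminator; the paper's exact Eq. (13) may differ.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, z_dim=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),  # logit: joint q(z) vs. product of marginals
        )

    def forward(self, z):
        return self.net(z)

def permute_dims(z):
    """Shuffle each latent dimension independently across the batch,
    turning samples of q(z) into samples of the product of marginals."""
    return torch.stack(
        [zi[torch.randperm(z.size(0))] for zi in z.unbind(dim=1)], dim=1)

# Once D is trained with binary cross-entropy on the two sample sets,
# its mean logit on real codes approximates the total correlation.
D = Discriminator()
z = torch.randn(64, 5)
tc_penalty = D(z).mean()
```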
FIGURE 3 Movements in NinaPro DB2. (A) Isometric, isotonic hand configurations. (B) Basic movements of the wrist. (C) Grasps and functional movements. (D) Single- and multiple-finger force measurement patterns. (E) Rest position. Available from: http://ninapro.hevs.ch/node/123.
Subject attribute information of NinaPro DB2 dataset.
| Subject | Hand | Laterality | Gender | Age | Height (cm) | Weight (kg) |
| 1 | Intact | Right Handed | Male | 29 | 187 | 75 |
| 2 | Intact | Right Handed | Male | 29 | 183 | 75 |
| 3 | Intact | Right Handed | Male | 31 | 174 | 69 |
| 4 | Intact | Left Handed | Female | 30 | 154 | 50 |
| 5 | Intact | Right Handed | Male | 25 | 175 | 70 |
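For readers reproducing the setup, here is a hedged sketch of loading one NinaPro DB2 recording and cutting the 200 ms windows used in the experiments below. The field names ('emg', 'restimulus', 'rerepetition') and the 2 kHz sampling rate follow the public DB2 documentation; the file path and the 50% overlap are placeholders/assumptions.

```python
import numpy as np
from scipy.io import loadmat

mat = loadmat("S1_E1_A1.mat")          # subject 1, first exercise (placeholder path)
emg = mat["emg"]                        # (n_samples, 12) sEMG channels
labels = mat["restimulus"].ravel()      # refined movement label per sample
reps = mat["rerepetition"].ravel()      # repetition index per sample

fs = 2000                               # DB2 is sampled at 2 kHz
win = int(0.200 * fs)                   # 200 ms -> 400 samples
step = win // 2                         # 50% overlap (an assumption)

windows, window_labels = [], []
for start in range(0, len(emg) - win + 1, step):
    seg = labels[start:start + win]
    if seg.min() == seg.max():          # keep windows spanning a single label
        windows.append(emg[start:start + win])
        window_labels.append(seg[0])
X = np.stack(windows)                   # (n_windows, 400, 12)
y = np.array(window_labels)
```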
FIGURE 4 Sample data image.
FIGURE 5 Signals of the 12 basic finger movements in Exercise A.
Detailed parameters for LFEA.
| Parameter | Value |
| Number of layers | 4 |
| Size of | 5 |
| λ | 0.1 |
| β | 0.2 |
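A hedged reading of these weights, using the module names from Figure 1: λ and β plausibly weight the compression (ICM) and separation (ISM) terms against the expression (IEM) term. The exact objective is not reproduced in this record; the following form is a sketch under that assumption.

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{IEM}} + \lambda\,\mathcal{L}_{\text{ICM}} + \beta\,\mathcal{L}_{\text{ISM}}, \qquad \lambda = 0.1,\ \beta = 0.2$$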
Results of TC score and MIG metric.
| Method | TC score | MIG |
| LFEA (Ours) | **12.3** | **0.72** |
| VAE | 23.6 | 0.54 |
| β-VAE | 25.8 | 0.61 |
| PCA | 18.5 | 0.49 |
We compare our method with the classic methods VAE, β-VAE, and PCA. Our LFEA method clearly outperforms the others. Bold indicates the best results.
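For reference, here is a hedged sketch of how the MIG column could be computed, following the standard definition (Chen et al., 2018): for each ground-truth factor, take the gap in mutual information between the two most informative latent dimensions, normalized by the factor's entropy. The 20-bin discretization and the factor labels are assumptions.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig(z, factors, bins=20):
    """z: (n, d) latent codes; factors: (n, k) discrete factor labels."""
    # Discretize each latent dimension into histogram bins.
    zq = np.stack([np.digitize(z[:, j], np.histogram(z[:, j], bins)[1][:-1])
                   for j in range(z.shape[1])], axis=1)
    gaps = []
    for k in range(factors.shape[1]):
        mi = np.array([mutual_info_score(factors[:, k], zq[:, j])
                       for j in range(z.shape[1])])
        top2 = np.sort(mi)[-2:]
        h = mutual_info_score(factors[:, k], factors[:, k])  # entropy H(v_k)
        gaps.append((top2[1] - top2[0]) / max(h, 1e-12))
    return float(np.mean(gaps))
```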
FIGURE 6 Feature distributions in layers 1–4, shown in panels (A–D), respectively.
Classification results on NinaPro DB2 dataset.
| Methods | Windowing | Train/Test | Accuracy |
| LFEA + SVM (Ours) | 200 ms | 2/1 | **75.2 ± 2.3%** |
| CNN | 200 ms | 2/1 | 65.7 ± 5.9% |
| LSTM + MLP | 200 ms | 1/1 | |
| Random forest | 200 ms | 2/1 | 75.0 ± 5.1% |
| KNN | 200 ms | 2/1 | 61.1 ± 3.4% |
| SVM | 200 ms | 2/1 | 67.2 ± 5.2% |
Bold indicates the best result.
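As a hedged sketch of the protocol in this table: features from the frozen extractor feed a standard SVM, with windows split by repetition in the stated 2/1 train/test ratio. The particular held-out repetitions, the feature scaling, and the RBF kernel below are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(features, y, reps):
    """features: (n, d) extractor outputs; y: labels; reps: repetition index."""
    test_mask = np.isin(reps, [2, 5])   # assumed held-out repetitions (2:1 split)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(features[~test_mask], y[~test_mask])
    return clf.score(features[test_mask], y[test_mask])
```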
FIGURE 7 Feature discrimination results for DB1.
FIGURE 8 Feature discrimination results for DB2.
Feature combinations.
| Combination | Features |
| C1 | (…) |
| C2 | (…) |
| C3 | (…) |
| C4 | (…) |
| C5 | (…) |
Classification results with different feature combinations for Exercise A.
| Feature Combinations | Accuracy | Discrimination (C1 − Accuracy) |
| C1 | 0.79 | 0 |
| C2 | 0.72 | 0.07 |
| C3 | 0.74 | **0.05** |
| C4 | 0.53 | **0.26** |
| C5 | 0.61 | 0.18 |
Bold marks the lowest and highest discrimination values.
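The discrimination column is simply the drop in accuracy relative to the full combination C1, as a short check for Exercise A shows:

```python
# Discrimination = accuracy(C1) - accuracy(Ci), using the table above.
acc = {"C1": 0.79, "C2": 0.72, "C3": 0.74, "C4": 0.53, "C5": 0.61}
discrimination = {c: round(acc["C1"] - a, 2) for c, a in acc.items()}
print(discrimination)  # {'C1': 0.0, 'C2': 0.07, 'C3': 0.05, 'C4': 0.26, 'C5': 0.18}
```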
Classification results with different feature combinations for Exercise C.
| Feature Combinations | Accuracy | Discrimination (C1 − Accuracy) |
| C1 | 0.82 | 0 |
| C2 | 0.63 | **0.19** |
| C3 | 0.64 | 0.18 |
| C4 | 0.74 | **0.08** |
| C5 | 0.71 | 0.11 |
Bold marks the lowest and highest discrimination values.
Classification results with different feature combinations for Exercise B.
| Feature Combinations | Accuracy | Discrimination (C1 − Accuracy) |
| C1 | 0.80 | 0 |
| C2 | 0.53 | **0.27** |
| C3 | 0.59 | 0.21 |
| C4 | 0.69 | 0.11 |
| C5 | 0.73 | **0.07** |
Bold marks the lowest and highest discrimination values.