| Literature DB >> 30823508 |
Baojun Chen, Francesco Lanotte, Lorenzo Grazi, Nicola Vitiello, Simona Crea.
Abstract
The number of exoskeletons providing load-lifting assistance has significantly increased over the last decade. In this field, to take full advantage of active exoskeletons and provide appropriate assistance to users, it is essential to develop control systems that are able to reliably recognize and classify the users' movement when performing various lifting tasks. To this end, the movement-decoding algorithm should work robustly with different users and recognize different lifting techniques. Currently, there are no studies presenting methods to classify different lifting techniques in real time for applications with lumbar exoskeletons. We designed a real-time two-step algorithm for a portable hip exoskeleton that can detect the onset of the lifting movement and classify the technique used to accomplish the lift, using only the exoskeleton-embedded sensors. To evaluate the performance of the proposed algorithm, 15 healthy male subjects participated in two experimental sessions in which they were asked to perform lifting tasks using four different techniques (namely, squat lifting, stoop lifting, left-asymmetric lifting, and right-asymmetric lifting) while wearing an active hip exoskeleton. Five classes (the four lifting techniques plus the class "no lift") were defined for the classification model, which is based on a set of rules (first step) and a pattern recognition algorithm (second step). Leave-one-subject-out cross-validation showed a recognition accuracy of 99.34 ± 0.85%, and the onset of the lift movement was detected within the first 121 to 166 ms of movement.
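The two-step structure described in the abstract (rule-based detection of the lift onset, followed by a pattern-recognition classifier that labels the lifting technique) can be summarized with a minimal sketch. Only the rules-then-classifier structure and the five class labels follow the abstract; the velocity threshold, the choice of LDA as the classifier, and the feature layout are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

LIFT_CLASSES = ["no_lift", "squat", "stoop", "left_asymmetric", "right_asymmetric"]

class TwoStepLiftRecognizer:
    """Step 1: rule-based lift-onset detection from exoskeleton-embedded sensors.
    Step 2: pattern-recognition classifier applied once an onset is detected."""

    def __init__(self, onset_velocity_threshold_deg_s=30.0):
        # Hypothetical threshold on hip joint velocity; the paper's rules differ.
        self.onset_threshold = onset_velocity_threshold_deg_s
        self.classifier = LinearDiscriminantAnalysis()  # placeholder model choice

    def detect_onset(self, hip_velocity_left, hip_velocity_right):
        """Rule-based step: flag a candidate lift when both hips move fast enough."""
        return (abs(hip_velocity_left) > self.onset_threshold and
                abs(hip_velocity_right) > self.onset_threshold)

    def fit(self, feature_matrix, labels):
        """Train the second-step classifier on window features labeled by technique."""
        self.classifier.fit(feature_matrix, labels)

    def classify(self, window_features):
        """Second step: classify the technique from features of the analysis window."""
        return self.classifier.predict(np.asarray(window_features).reshape(1, -1))[0]
```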
Keywords: exoskeleton control; hip exoskeleton; lifting detection; pattern recognition
Mesh:
Year: 2019 PMID: 30823508 PMCID: PMC6412280 DOI: 10.3390/s19040963
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. (A) A subject wearing the APO with an IMU board on the backpack. (B) Experimental setup of this study.
Figure 2. Block diagram of the recognition algorithm. The white region and the light blue region denote the first and the second steps of the algorithm, respectively. Rules 1–4 detect different phase transitions. Rule 1 refers to Equations (1)–(3); Rule 2 refers to Equations (4)–(6); Rule 3 refers to Equation (7); and Rule 4 refers to Equations (8) and (9).
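The block diagram implies a small state machine whose output moves among the three phases shown in Figure 3 (Other, Pre-extension, Extension), with Rules 1–4 triggering the transitions. A minimal sketch of such a dispatcher follows; the mapping of each rule to a specific transition is assumed for illustration, and the rule bodies stand in for Equations (1)–(9), which are not reproduced in this record.

```python
from enum import Enum

class Phase(Enum):
    OTHER = 0
    PRE_EXTENSION = 1
    EXTENSION = 2

def update_phase(phase, sensors, rules):
    """Advance the phase according to whichever rule fires in the current phase.

    `rules` maps rule names to boolean functions of the current sensor sample;
    their contents are placeholders for Equations (1)-(9).
    """
    if phase is Phase.OTHER and rules["rule1"](sensors):
        return Phase.PRE_EXTENSION   # assumed: Rule 1 triggers Other -> Pre-extension
    if phase is Phase.PRE_EXTENSION and rules["rule2"](sensors):
        return Phase.EXTENSION       # assumed: Rule 2 triggers Pre-extension -> Extension
    if phase is Phase.PRE_EXTENSION and rules["rule3"](sensors):
        return Phase.OTHER           # assumed: Rule 3 aborts back to Other
    if phase is Phase.EXTENSION and rules["rule4"](sensors):
        return Phase.OTHER           # assumed: Rule 4 ends the lift, back to Other
    return phase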
List of candidate features (Feature; Definition and Description).
Note: The candidate features are window statistics (e.g., standard deviations) of the sensor signals, i.e., the hip joint angles and the roll and yaw values estimated with the raw IMU signals. Their definitions refer to the initial moments of the Pre-extension and Extension phases and to mean values computed over the 500-ms analysis window preceding those moments.
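Although the individual feature symbols are not preserved in this record, the note above indicates that the candidate features are statistics of the hip encoder angles and the IMU-derived roll and yaw angles over the 500-ms analysis window preceding a phase onset. Below is a hedged sketch of such a feature extraction step; the specific statistics (mean, standard deviation, range) and the sampling rate are assumptions, not the paper's exact candidate-feature list.

```python
import numpy as np

def window_features(hip_left, hip_right, roll, yaw, fs_hz=100, window_s=0.5):
    """Compute simple statistics over the analysis window ending at a phase onset.

    Each input is a 1-D array of samples ending at the onset instant; only the
    last `window_s` seconds are used. The chosen statistics are illustrative.
    """
    n = int(fs_hz * window_s)
    feats = []
    for signal in (hip_left, hip_right, roll, yaw):
        w = np.asarray(signal)[-n:]
        feats.extend([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)
```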
Figure 3. (A–D) Sensor signals of squat, stoop, left-asymmetric, and right-asymmetric lifting over the lifting cycle for a representative subject. The figures were plotted using the data for lifting the load and placing it on the table, collected in Session 1 of the experiment. Solid curves and shaded regions denote the mean values and standard deviations of the corresponding angles over multiple cycles, respectively. The three classification outputs are denoted by backgrounds of different colors: Other (blue), Pre-extension (yellow), and Extension (green). Left and right hip angles are measured by the exoskeleton's hip joint encoders, and roll and yaw angles are estimated using raw signals collected by the IMU on the exoskeleton's backpack.
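The caption notes that roll and yaw are estimated from the raw signals of the backpack IMU. One common way to do this (not necessarily the method used in the paper) is a complementary filter: roll from a blend of gyroscope integration and the accelerometer gravity direction, and yaw from gyroscope integration alone. The filter gain, sampling rate, and axis conventions below are assumptions.

```python
import numpy as np

def estimate_roll_yaw(acc, gyro, fs_hz=100, alpha=0.98):
    """Complementary-filter estimate of roll and yaw (degrees) from raw IMU data.

    acc, gyro: arrays of shape (N, 3) with accelerometer (m/s^2) and gyroscope
    (rad/s) samples in the sensor frame. Generic sketch only.
    """
    dt = 1.0 / fs_hz
    roll = np.zeros(len(acc))
    yaw = np.zeros(len(acc))
    for k in range(1, len(acc)):
        # Accelerometer-only roll from the gravity direction.
        roll_acc = np.arctan2(acc[k, 1], acc[k, 2])
        # Blend gyro integration (short-term) with the accelerometer (long-term).
        roll[k] = alpha * (roll[k - 1] + gyro[k, 0] * dt) + (1 - alpha) * roll_acc
        # Yaw cannot be corrected by gravity, so it is pure gyro integration here.
        yaw[k] = yaw[k - 1] + gyro[k, 2] * dt
    return np.degrees(roll), np.degrees(yaw)
```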
Confusion matrix (Mean ± STD) for subject-dependent recognition (%). Rows give the actual mode; columns give the estimated mode.

| Actual Mode | No-Lift | Squat | Stoop | Left-Asymmetric | Right-Asymmetric |
|---|---|---|---|---|---|
| No-Lift | 100.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| Squat | 0.67 ± 1.38 | 98.40 ± 2.25 | 0.22 ± 0.86 | 0.00 ± 0.00 | 0.71 ± 2.02 |
| Stoop | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| Left-Asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 | 0.00 ± 0.00 |
| Right-Asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 |
Confusion matrix (Mean ± STD) for subject-independent recognition (%). Rows give the actual mode; columns give the estimated mode.

| Actual Mode | No Lift | Squat | Stoop | Left-Asymmetric | Right-Asymmetric |
|---|---|---|---|---|---|
| No Lift | 98.67 ± 3.99 | 1.33 ± 3.99 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 |
| Squat | 0.24 ± 0.92 | 98.81 ± 2.20 | 0.48 ± 1.26 | 0.00 ± 0.00 | 0.48 ± 1.26 |
| Stoop | 0.00 ± 0.00 | 0.00 ± 0.00 | 99.29 ± 2.77 | 0.71 ± 2.77 | 0.00 ± 0.00 |
| Left-Asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 100.00 ± 0.00 | 0.00 ± 0.00 |
| Right-Asymmetric | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.24 ± 0.92 | 0.00 ± 0.00 | 99.76 ± 0.92 |
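The subject-independent results above correspond to the leave-one-subject-out cross-validation reported in the abstract: the model is trained on 14 subjects and tested on the held-out one, and the per-subject confusion matrices are then averaged to give the Mean ± STD entries. A hedged scikit-learn sketch is shown below; the classifier choice and the feature/label arrays are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut

def loso_confusion(X, y, subject_ids, n_classes=5):
    """Leave-one-subject-out evaluation returning mean and STD confusion matrices (%)."""
    matrices = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        clf = LinearDiscriminantAnalysis()  # placeholder classifier choice
        clf.fit(X[train_idx], y[train_idx])
        cm = confusion_matrix(y[test_idx], clf.predict(X[test_idx]),
                              labels=np.arange(n_classes))
        # Row-normalize so each row sums to 100% of that subject's true-class samples.
        cm = 100.0 * cm / cm.sum(axis=1, keepdims=True)
        matrices.append(cm)
    matrices = np.stack(matrices)
    return matrices.mean(axis=0), matrices.std(axis=0)
```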
Figure 4. Lifting detection delay of the algorithm. (A,B) show the absolute and normalized detection delay for the four different lifting techniques, respectively. Error bars denote STDs of the delay across the 15 subjects.
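Figure 4 reports the detection delay both in absolute terms and normalized by the duration of the lifting movement. A minimal sketch of how such delays could be computed from onset timestamps is given below; the timestamp sources (reference onsets and per-lift durations) are assumptions for illustration.

```python
import numpy as np

def detection_delays(true_onsets_s, detected_onsets_s, lift_durations_s):
    """Absolute (ms) and normalized (% of lift duration) detection delays per lift.

    true_onsets_s: movement onsets from a reference labeling; detected_onsets_s:
    onsets flagged by the algorithm; lift_durations_s: duration of each lift.
    All arrays must be aligned per lift.
    """
    delay_s = np.asarray(detected_onsets_s) - np.asarray(true_onsets_s)
    absolute_ms = 1000.0 * delay_s
    normalized_pct = 100.0 * delay_s / np.asarray(lift_durations_s)
    return absolute_ms, normalized_pct
```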