Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons
Clemente Lauretti1, Francesca Cordella1, Anna Lisa Ciancio1, Emilio Trigili2, Jose Maria Catalan3, Francisco Javier Badesa4, Simona Crea2, Silvio Marcello Pagliara5, Silvia Sterzi6, Nicola Vitiello2,7, Nicolas Garcia Aracil3, Loredana Zollo1.
Abstract
The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms based on the inverse Jacobian. This approach exploits the available Degrees of Freedom (DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, when used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied throughout the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows successfully assisting patients during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied throughout the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Movement Primitives and machine learning techniques to construct task- and patient-specific joint trajectories from the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton, and four patients with Limb Girdle Muscular Dystrophy. Validation aimed to (i) compare the performance of the proposed motion planning system with traditional methods, and (ii) assess the generalization capabilities of the proposed method with respect to environment variability. Three ADLs were chosen to validate the system: drinking, pouring, and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to environment variability. Moreover, an anthropomorphic configuration of the exoskeleton was always ensured.
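The abstract's core mechanism, Dynamic Movement Primitives, encodes a demonstrated joint trajectory as a stable point attractor plus a learnt forcing term, so the motion can be replayed toward a new goal. The following is a minimal single-joint sketch under standard DMP formulations; the gains, kernel count, and class name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class DMP1D:
    """Minimal discrete Dynamic Movement Primitive for one joint.

    Illustrative sketch only: alpha/beta gains, kernel count, and the
    kernel-width heuristic are assumptions, not the paper's parameters.
    """

    def __init__(self, n_kernels=30, alpha=25.0, alpha_x=3.0):
        self.alpha, self.beta, self.alpha_x = alpha, alpha / 4.0, alpha_x
        # Gaussian kernel centres spaced along the phase x, which decays 1 -> 0
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_kernels))
        self.h = n_kernels ** 1.5 / self.c   # narrower kernels as x shrinks
        self.w = np.zeros(n_kernels)

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y, dt):
        """Learn forcing-term weights from one demonstrated trajectory y."""
        self.y0, self.g = y[0], y[-1]
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        t = np.arange(len(y)) * dt
        self.T = t[-1]
        x = np.exp(-self.alpha_x * t / self.T)          # canonical system
        # Forcing needed so the attractor dynamics reproduce the demo
        f_target = ydd - self.alpha * (self.beta * (self.g - y) - yd)
        scale = x * (self.g - self.y0)
        psi = np.array([self._psi(xi) for xi in x])     # (T, K)
        # Locally weighted regression, one weight per kernel
        num = (psi * (scale * f_target)[:, None]).sum(axis=0)
        den = (psi * (scale ** 2)[:, None]).sum(axis=0)
        self.w = num / (den + 1e-10)

    def rollout(self, g=None, dt=0.001):
        """Integrate the DMP; a new goal g generalizes the learnt motion."""
        g = self.g if g is None else g
        y, yd, x, traj = self.y0, 0.0, 1.0, []
        for _ in np.arange(0, self.T, dt):
            psi = self._psi(x)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            ydd = self.alpha * (self.beta * (g - y) - yd) + f
            yd += ydd * dt
            y += yd * dt
            x += -self.alpha_x * x / self.T * dt        # phase decay
            traj.append(y)
        return np.array(traj)
```

Because the forcing term vanishes as the phase decays, the rollout always converges near the (possibly new) goal, which is what lets a single demonstration generalize across object positions.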
Keywords: assistive robotics; dynamic movement primitives; learning by demonstration; machine learning; motion planning
Year: 2018 PMID: 29527161 PMCID: PMC5829101 DOI: 10.3389/fnbot.2018.00005
Source DB: PubMed Journal: Front Neurorobot ISSN: 1662-5218 Impact factor: 2.650
Figure 1. NESM upper-limb exoskeleton with the wrist-hand exoskeleton.
Figure 2. Block scheme of the proposed motion planning system.
Figure 3. c and σ functions for the optimal allocation of the Gaussian Kernels. X* and T* are the state value and time instant corresponding to the critical point (Lauretti et al., 2017a).
Figure 4. Structure of the adopted neural network.
Figure 5. Block scheme of the recursive method used to adjust the NN outputs for different subject anthropometries.
Figure 6. NESM reference frames positioned according to the Denavit–Hartenberg (D–H) convention.
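Figure 6 assigns the exoskeleton's reference frames under the Denavit–Hartenberg convention, in which each joint-to-joint transform is built from four parameters (θ, d, a, α). As a generic illustration of that convention (the NESM's actual D–H table is in the paper and not reproduced here), the per-joint transform and its chaining into a forward-kinematics pose can be written as:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive frames under the
    standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms (theta, d, a, alpha) to obtain the
    end-effector pose in the base frame."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T
```

For example, two rows with a = 1 and all other parameters zero model a planar two-link arm stretched along the base x-axis, placing the end-effector at (2, 0, 0).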
Figure 7. Block scheme of the platform.
Tasks description.
| Task 1: Drinking | |
|---|---|
| subtask 1-1 | reach the glass |
| subtask 1-2 | reach the mouth |
| subtask 1-3 | reach the table for releasing the glass |
| subtask 1-4 | go back to the rest position |
| Task 2: Pouring | |
| subtask 2-1 | reach the bottle |
| subtask 2-2 | pour the water into the glass |
| subtask 2-3 | reach the table for releasing the bottle |
| subtask 2-4 | go back to the rest position |
| Task 3: Lifting a light sphere | |
| subtask 3-1 | reach the sphere |
| subtask 3-2 | move the sphere to another position on the table |
| subtask 3-3 | go back to the rest position |
Figure 8. A representative subject performing the task (the subject signed an informed consent document to authorize publication of this picture).
Figure 9. The workspace reached during the assistive tasks is delimited by the black line. Object positions during training are indicated by black dots [the glass in the drinking task in (A), the bottle in the pouring task in (B), and the initial position of the sphere in the SHAP task in (C)]. Conversely, the glass positions during the pouring task and the sphere final positions in the SHAP task are indicated by red dots in (B, C), respectively.
Figure 10. (A) Graphical representation of the end-effector and the base reference frame; (B) the α angle for tasks 1-1 and 2-1; (C) the α angle for task 3; (D) the base reference frame and the bottle, end-effector, and glass reference frames; (E) the β angle for task 2-2.
Figure 11. Experimental results obtained for CA. The red lines denote the range within which the task is considered successfully accomplished.
Experimental results obtained for GCA.
| Task | Metric | GCA–sim | GCA–real |
|---|---|---|---|
| Task 1 | Position Err [mm] | 2.7 ± 0.4 | 3.9 ± 0.5 |
| | Orientation Err [rad] | 0.14 ± 0.01 | 0.164 ± 0.008 |
| | PhJL | 0.51 ± 0.03 | 0.64 ± 0.04 |
| Task 2-1 | Position Err1 [mm] | 3.2 ± 0.9 | 4.0 ± 3.1 |
| | Orientation Err1 [rad] | 0.10 ± 0.07 | 0.116 ± 0.05 |
| Task 2-2 | Position Err2 [mm] | 19 ± 6 | 21 ± 5 |
| | Orientation Err2 [rad] | 0.57 ± 0.07 | 0.5 ± 0.1 |
| Task 2 | PhJL | 0.56 ± 0.04 | 0.64 ± 0.05 |
| Task 3 | Position Err [mm] | 7.3 ± 1.2 | 9.5 ± 1.9 |
| | Orientation Err [rad] | 0.157 ± 0.02 | 0.14 ± 0.02 |
| | PhJL | 0.6 ± 0.4 | 0.5 ± 0.36 |
| Success rate [%] | | 100 | 100 |