Manuel Palermo1,2, João M Lopes1,2, João André1,2, Ana C Matias3, João Cerqueira3,4, Cristina P Santos5,6,7.
Abstract
Monitoring gait and posture while using assistive robotic devices is relevant to attain effective assistance and to assess the user's progression over time. This work presents a multi-camera, multimodal, and detailed dataset involving 14 healthy participants walking with a wheeled robotic walker equipped with a pair of affordable cameras. Depth data were acquired at 30 fps and synchronized with inertial data from Xsens MTw Awinda sensors and kinematic data from the segments of the Xsens biomechanical model, acquired at 60 Hz. Participants walked with the robotic walker at 3 different gait speeds, across 3 different walking scenarios/paths, at 3 different locations. In total, this dataset provides approximately 92 minutes of recording time, which corresponds to nearly 166,000 samples of synchronized data. This dataset may contribute to scientific research by allowing the development and evaluation of: (i) vision-based pose estimation algorithms, exploring classic or deep learning approaches; (ii) human detection and tracking algorithms; (iii) movement forecasting; and (iv) biomechanical analysis of gait/posture when using a rehabilitation device.
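Pairing the 30 fps depth frames with the 60 Hz inertial/kinematic streams can be done by nearest-neighbor matching on timestamps. The sketch below illustrates this idea under the assumption that per-sample timestamps are available; the dataset's actual synchronization procedure may differ, and `align_to_depth` is a hypothetical helper, not part of the dataset's tooling.

```python
import numpy as np

def align_to_depth(depth_ts, imu_ts, imu_samples):
    """For each depth-frame timestamp, pick the nearest higher-rate sample.
    depth_ts, imu_ts: sorted 1-D arrays of timestamps in seconds.
    imu_samples: array whose first axis matches imu_ts."""
    idx = np.searchsorted(imu_ts, depth_ts)
    idx = np.clip(idx, 1, len(imu_ts) - 1)
    left = imu_ts[idx - 1]
    right = imu_ts[idx]
    # Step back one index where the left neighbor is strictly closer.
    idx -= depth_ts - left < right - depth_ts
    return imu_samples[idx]

# Toy example: 30 fps camera and 60 Hz IMU over one second.
depth_ts = np.arange(30) / 30.0
imu_ts = np.arange(60) / 60.0
imu = np.sin(imu_ts)  # stand-in for a real sensor channel
aligned = align_to_depth(depth_ts, imu_ts, imu)
```

Here every depth timestamp coincides with every second IMU sample, so the aligned signal matches the camera-rate signal exactly.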
Mesh:
Year: 2022 PMID: 36202855 PMCID: PMC9537285 DOI: 10.1038/s41597-022-01722-7
Source DB: PubMed Journal: Sci Data ISSN: 2052-4463 Impact factor: 8.501
Participants’ main anthropometric data.
| Participant_ID | Gender (M/F) | Age (years) | Body mass (kg) | Body height (cm) |
|---|---|---|---|---|
| 00 | M | 23 | 63 | 180 |
| 01 | F | 24 | 51 | 151 |
| 02 | M | 22 | 89 | 185 |
| 03 | F | 28 | 70 | 159 |
| 04 | F | 27 | 53 | 157 |
| 05 | M | 30 | 68 | 172 |
| 06 | M | 28 | 75 | 185 |
| 07 | M | 24 | 86 | 170 |
| 08 | M | 24 | 73 | 170 |
| 09 | M | 26 | 84 | 175 |
| 10 | M | 23 | 64 | 172 |
| 11 | M | 26 | 64 | 175 |
| 12 | M | 26 | 74 | 182 |
| 13 | F | 24 | 63 | 171 |
| Average (±std) | 4 females, 10 males | 25.4 (±2.31) | 69.7 (±11.4) | 172 (±10.2) |
More anthropometric information can be found in the dataset’s metadata file.
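The summary row of the table can be reproduced from the individual entries; for the age column, the reported ±2.31 corresponds to the sample standard deviation (ddof=1):

```python
import numpy as np

# Ages of participants 00-13, copied from the table above.
ages = np.array([23, 24, 22, 28, 27, 30, 28, 24, 24, 26, 23, 26, 26, 24])

mean_age = ages.mean()        # 25.357... -> reported as 25.4
std_age = ages.std(ddof=1)    # sample standard deviation -> 2.31

print(round(mean_age, 1), round(std_age, 2))  # → 25.4 2.31
```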
Fig. 1 Inertial sensor placement: (a) participant’s anterior view, and (b) participant’s posterior view. Note that each sensor was placed on the inner side of its strap to better secure it. Additionally, the sternum, head, and foot sensors were placed inside the frontal pocket of the Xsens suit, in the pocket of the headband, and inside each shoe, respectively.
Fig. 2 Robotic walker used to collect this dataset, showing (a) the frontal view and (b) the rear view. Both the upper and lower cameras are highlighted in (a) with a white box.
Fig. 3 Transformations to spatially align the Xsens skeleton data from the world reference frame (left) to the reference frame of the walker’s posture camera (right). Transformations are shown in 3D space along with the 3D point cloud for reference. Once the skeleton is aligned in 3D space, it can be projected into the cameras’ image planes.
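The alignment described in the caption amounts to a rigid transform followed by a pinhole projection. A minimal sketch follows; the 4x4 extrinsic `T_world_to_cam` and intrinsic matrix `K` below are illustrative placeholders, not values from the dataset's calibration.

```python
import numpy as np

def project_skeleton(joints_world, T_world_to_cam, K):
    """Map Nx3 world-frame joints into Nx2 pixel coordinates."""
    n = joints_world.shape[0]
    homo = np.hstack([joints_world, np.ones((n, 1))])  # Nx4 homogeneous
    cam = (T_world_to_cam @ homo.T).T[:, :3]           # camera-frame 3D points
    uv = (K @ cam.T).T                                 # pinhole projection
    return uv[:, :2] / uv[:, 2:3]                      # divide by depth

# Toy check: a point on the optical axis projects to the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity extrinsic for the toy example
pix = project_skeleton(np.array([[0.0, 0.0, 2.0]]), T, K)
# pix → [[320., 240.]]
```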
Fig. 4Hierarchical folder structure of the database.
List of discarded trials.
| Trial | Observation |
|---|---|
| participant05_left_0.3_corner1 | Loose Xsens sensor |
| participant05_right_0.5_corner2 | Incorrect Xsens data |
| participant05_straight_0.5_corridor1 | Incorrect c3d data |
| participant05_straight_0.5_corridor2 | Incorrect c3d data |
| participant05_straight_0.5_corridor3 | Incorrect c3d data |
| participant05_straight_0.7_corridor1 | Incorrect c3d data |
| participant05_straight_0.7_corridor2 | Incorrect c3d data |
| participant05_straight_0.7_corridor3 | Incorrect c3d data |
| participant08_left_0.3_corner3 | Loose Xsens sensor |
| participant08_left_0.5_corner3 | Loose Xsens sensor |
| participant08_straight_0.3_corridor1 | Invalid depth data |
| participant09_right_0.3_corner3 | Loose Xsens sensor |
| participant09_straight_0.3_corridor3 | Loose Xsens sensor |
| participant09_straight_0.5_corridor3 | Loose Xsens sensor |
| participant09_straight_0.7_corridor3 | Loose Xsens sensor |
Fig. 5 Samples of processed data from a forward scenario (upper row) and a cornering scenario (lower row): (a) outside view of the acquisition setup taken from a smartphone; (b) and (d) depth camera frame overlaid with the aligned 2D skeleton; and (c) and (e) 3D point cloud overlaid with the aligned 3D skeleton. Data from the upper and lower cameras are related through the extrinsic transformation and displayed together.
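Relating the two cameras through their extrinsic transformation, as in the caption, can be sketched as below. The 4x4 matrix `T_lower_to_upper` is a hypothetical placeholder; the dataset ships its own calibration.

```python
import numpy as np

def fuse_point_clouds(cloud_upper, cloud_lower, T_lower_to_upper):
    """Express the lower camera's Nx3 cloud in the upper camera frame,
    then stack both clouds for joint display."""
    n = cloud_lower.shape[0]
    homo = np.hstack([cloud_lower, np.ones((n, 1))])  # Nx4 homogeneous
    moved = (T_lower_to_upper @ homo.T).T[:, :3]      # rigid transform
    return np.vstack([cloud_upper, moved])

# Toy example: extrinsic that is a pure 10 cm translation along x.
T = np.eye(4)
T[0, 3] = 0.1
fused = fuse_point_clouds(np.array([[0.0, 0.0, 1.0]]),
                          np.array([[0.0, 0.0, 1.0]]), T)
# fused → [[0. , 0. , 1. ], [0.1, 0. , 1. ]]
```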
| Field | Value |
|---|---|
| Measurement(s) | depth images • kinematic data |
| Technology Type(s) | RGB-D camera • Inertial Motion Capture System |
| Sample Characteristic - Organism | Homo sapiens |