| Literature DB >> 36080782 |
Yoonjeong Choi, Yoosung Bae, Baekdong Cha, Jeha Ryu.
Abstract
The timed up-and-go (TUG) test is an efficient way to evaluate an individual's basic functional mobility, such as standing up, walking, turning around, and sitting back down. The total completion time of the TUG test is a metric indicating an individual's overall mobility. Moreover, the fine-grained times consumed by the individual subtasks of the TUG test may provide important clinical information, such as the elapsed time and speed of each subtask, which may not only assist professionals in clinical interventions but also help distinguish the functional recovery of patients. To perform more accurate, efficient, robust, and objective tests, this paper proposes a novel deep learning-based subtask segmentation of the TUG test using a dilated temporal convolutional network with a single RGB-D camera. Evaluation with three different subject groups (healthy young, healthy adult, stroke patients) showed that the proposed method generalizes better and achieves significantly higher and more robust accuracy (healthy young = 95.458%, healthy adult = 94.525%, stroke = 93.578%) than existing rule-based and artificial neural network-based subtask segmentation methods. Additionally, the results indicated that the input from the pelvis alone achieved the best accuracy among all single inputs and input combinations, which allows real-time inference (approximately 15 Hz) on edge devices such as smartphones.
Keywords: TUG subtask segmentation; deep learning; temporal convolutional network; timed up-and-go test
Year: 2022 PMID: 36080782 PMCID: PMC9459743 DOI: 10.3390/s22176323
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. System configuration for the study. The participants performed a 3 m TUG test. A cone was placed 3 m straight ahead of a standard chair, and an Azure Kinect was installed perpendicular to the walking direction at a height of 1.5 m.
Figure 2. Overall flowchart of the proposed method.
Labeling guidelines for the six TUG events.
| TUG Events | Label | Criteria |
|---|---|---|
| StartMove | 0 | When the body is tilted 45 degrees to get up from the chair |
| StartWalk | 1 | After getting up from the chair, when the first step leaves the ground |
| StartTurn | 2 | When the subject rotates the body to turn at the TUG marker |
| EndTurn | 3 | After turning at the TUG marker, when the body faces the chair again |
| StartSit | 4 | When the body stands against the chair after turning to sit down |
| EndSit | 5 | When the body is tilted 45 degrees to lean back on the chair |
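Since these six events bound the TUG subtasks, the per-subtask elapsed times mentioned in the abstract follow directly from the detected event frames. Below is a minimal sketch of that conversion, not the authors' code; the function name, the example frame indices, and the 30 fps rate are illustrative assumptions.

```python
# Sketch: derive per-subtask durations from detected TUG event frames.
from typing import Dict, List

EVENTS = ["StartMove", "StartWalk", "StartTurn", "EndTurn", "StartSit", "EndSit"]

def subtask_durations(event_frames: List[int], fps: float = 30.0) -> Dict[str, float]:
    """event_frames: frame index of each event, ordered as in EVENTS."""
    durations = {}
    for i in range(len(event_frames) - 1):
        # Each subtask spans the interval between two consecutive events.
        name = f"{EVENTS[i]} -> {EVENTS[i + 1]}"
        durations[name] = (event_frames[i + 1] - event_frames[i]) / fps
    durations["Total"] = (event_frames[-1] - event_frames[0]) / fps
    return durations

# Example with made-up event frames from a 30 fps recording.
print(subtask_durations([10, 55, 160, 205, 330, 380]))
```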
Figure 3. Preprocessing of a pelvis trajectory by low-pass filtering and normalization.
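The preprocessing in Figure 3 can be sketched as follows, assuming a zero-phase Butterworth low-pass filter and per-axis z-score normalization; the filter order and cutoff here are illustrative guesses, not values reported in the paper.

```python
# Sketch: low-pass filter + normalization for a pelvis trajectory (Figure 3).
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(traj: np.ndarray, fps: float = 30.0, cutoff_hz: float = 3.0) -> np.ndarray:
    """traj: (T, 3) pelvis positions; returns the filtered, normalized (T, 3) array."""
    b, a = butter(N=4, Wn=cutoff_hz / (fps / 2.0), btype="low")
    smoothed = filtfilt(b, a, traj, axis=0)      # zero-phase low-pass filter
    mean = smoothed.mean(axis=0, keepdims=True)
    std = smoothed.std(axis=0, keepdims=True) + 1e-8
    return (smoothed - mean) / std               # z-score per axis

# Example on a synthetic noisy trajectory.
t = np.linspace(0, 10, 300)
traj = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1) + 0.05 * np.random.randn(300, 3)
print(preprocess(traj).shape)  # (300, 3)
```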
Figure 4. Architecture of the dilated temporal convolutional network (TCN).
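A minimal dilated TCN in the spirit of Figure 4, sketched in PyTorch: stacked 1-D convolutions with exponentially increasing dilation and residual connections, producing per-frame logits over the six labels. The channel width and block count are illustrative assumptions, not the paper's exact configuration (the ablation table below varies the block count).

```python
# Sketch: dilated TCN for frame-level TUG subtask classification.
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int, dilation: int):
        super().__init__()
        pad = (k - 1) * dilation // 2  # "same" padding for odd kernel size
        self.conv = nn.Sequential(
            nn.Conv1d(c_in, c_out, k, padding=pad, dilation=dilation),
            nn.ReLU(),
            nn.Conv1d(c_out, c_out, k, padding=pad, dilation=dilation),
            nn.ReLU(),
        )
        self.skip = nn.Conv1d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return self.conv(x) + self.skip(x)  # residual connection

class DilatedTCN(nn.Module):
    def __init__(self, in_ch: int = 3, n_classes: int = 6, hidden: int = 64,
                 k: int = 3, n_blocks: int = 3):
        super().__init__()
        # Dilation doubles per block: 1, 2, 4, ...
        blocks = [TemporalBlock(in_ch if i == 0 else hidden, hidden, k, 2 ** i)
                  for i in range(n_blocks)]
        self.body = nn.Sequential(*blocks)
        self.head = nn.Conv1d(hidden, n_classes, 1)  # per-frame class logits

    def forward(self, x):  # x: (batch, channels, frames)
        return self.head(self.body(x))

# Example: 3-channel pelvis input over a window of 8 frames.
logits = DilatedTCN()(torch.randn(1, 3, 8))
print(logits.shape)  # torch.Size([1, 6, 8])
```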
Figure 5. Sample postprocessing for correcting frame-level misclassifications.
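One common way to correct isolated frame-level misclassifications of the kind Figure 5 illustrates is a sliding majority-vote filter over the predicted label sequence; the sketch below assumes that approach, which may differ from the paper's exact correction rule.

```python
# Sketch: majority-vote smoothing of frame-level subtask labels (Figure 5).
import numpy as np

def smooth_labels(labels: np.ndarray, win: int = 5) -> np.ndarray:
    """labels: (T,) integer frame labels; returns majority-filtered labels."""
    half = win // 2
    padded = np.pad(labels, half, mode="edge")
    out = np.empty_like(labels)
    for i in range(len(labels)):
        # Replace each frame's label with the most frequent label in its window.
        out[i] = np.bincount(padded[i:i + win]).argmax()
    return out

# Example: the isolated misclassified frame (label 2) is corrected to 1.
print(smooth_labels(np.array([1, 1, 1, 2, 1, 1, 1, 1])))  # -> all 1s
```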
Types of sensors and associated inputs.
| Article | Purpose (Method) | Population | System (Sensor) | Inputs | Reason for Input Location |
|---|---|---|---|---|---|
| Hsieh et al. [ ] | subtask segmentation | 5 healthy people | 3 wearable sensors | waist, R/L thigh | To acquire body acceleration and angular velocity while focusing on detecting changes of subtasks (trunk bending, rotating, etc.) |
| Nguyen et al. [ ] | subtask segmentation (rule-based) | 16 healthy people | motion capture suit (17 IMUs) | each body segment | To capture full-body 3D movement |
| Nguyen et al. [ ] | subtask segmentation (rule-based) | 12 older adults diagnosed with early PD | motion capture suit (17 IMUs) | each body segment | To capture full-body 3D movement |
| Lohmann et al. [ ] | subtask segmentation (rule-based) | 5 older adults with age-related medical conditions | 2 Kinects | shoulder center (vel., acc.) | To detect TUG events while focusing on detecting changes of subtasks |
| Kampel et al. [ ] | subtask segmentation (rule-based) | 11 older adults | 1 Kinect v2 | spine shoulder | To detect TUG events while focusing on detecting changes of subtasks |
| Kampel et al. [ ] | subtask segmentation (rule-based) | 11 older adults | 1 Kinect v2 | center of mass | To acquire movement history data; the COM is calculated using a silhouette extraction method |
| Salarian et al. [ ] | subtask segmentation (rule-based) | 12 older adults | 7 IMUs | forearms, shanks, thighs, trunk | To detect and analyze each subtask |
| Hsieh et al. [ ] | subtask segmentation (ANN-based) | 26 patients with severe knee osteoarthritis | 6 wearable sensors | chest, lower back, R/L thigh, R/L shank | To acquire body movement from various parts |
| Li et al. [ ] | subtask segmentation (ANN-based) | 24 PD patients | 1 RGB camera | neck, R/L shoulder, R/L hip, R/L knee, R/L ankle | In total, 9 body keypoints were used to represent the human pose |
| Savoie et al. [ ] | subtask segmentation (ANN-based) | 30 healthy young | 1 Kinect v2 | center of shoulders | To detect TUG events while focusing on detecting changes of the subtasks |
| Ortega-Bastidas et al. [ ] | fall risk prediction | 25 healthy young | IMU sensor, RGB video | back | To detect all gait, biomechanical elements of the pelvis, and other spatial and temporal kinematic factors |
| Jian et al. [ ] | fall risk prediction | 40 subjects | 1 RGB camera | full joints | To compute gait characteristics such as gait speed, step length, etc. |
| Wang et al. [ ] | abnormal gait classification | 404 subjects | 1 RGB camera | vertical location sequence of R/L shoulders | Considering the visibility and stability of joint detection |
Figure 6. Skeleton joints tracked by the Azure Kinect and the inputs used for comparison (red box).
Comparison of TUG subtask segmentation accuracy for joints closest to the COG.
| Joint | Input No. | Healthy Young Acc. [%] | Older Adults Acc. [%] | Stroke Patients Acc. [%] |
|---|---|---|---|---|
| pelvis | input 1 | 95.46 | 94.53 | 93.58 |
| spine chest | input 2 | 94.29 | 94.25 | 92.83 |
| head | input 3 | 94.1 | 93.86 | 90.86 |
| hand (left/right) | input 4 | 92.24 | 91.32 | 79.46 |
| ankle (left/right) | input 5 | 89.89 | 86.53 | 80.58 |
| pelvis, head | input 6 | 94.4 | 94.22 | 91.59 |
| pelvis, spine chest | input 7 | 94.46 | 94.045 | 92.29 |
| pelvis, ankle | input 8 | 93.42 | 93.81 | 87.56 |
| pelvis, hand | input 9 | 93.46 | 93.68 | 87.07 |
| pelvis, head, spine chest | input 10 | 93.78 | 93.44 | 91.72 |
| pelvis, head, ankle | input 11 | 93.59 | 93.93 | 90.89 |
| pelvis, hand, ankle | input 12 | 93.15 | 93.84 | 91.944 |
| pelvis, head, hand | input 13 | 93.3 | 93.63 | 91.31 |
| head, hand, ankle | input 14 | 93.25 | 93.39 | 91.72 |
| pelvis, head, spine chest, … | input 15 | 93.62 | 93.95 | 92.4 |
Optimal values of kernel and window sizes.
| | Kernel Size (Window Size = 8) | | | Window Size (Kernel Size = 3) | | | |
|---|---|---|---|---|---|---|---|
| | 3 | 5 | 7 | 4 | 8 | 16 | 32 |
| Accuracy [%] | 94.53 | 93.21 | 92.73 | 92.8 | 94.53 | 92.11 | 87.26 |
| No. of parameters | 41,879 | 112,537 | 218,521 | 40,921 | 41,879 | 43,801 | 47,641 |
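For a dilated TCN, kernel size and depth trade off receptive field against parameter count, which is what the table above probes. Below is a small helper relating the two, assuming dilations that double per block and two convolutions per block, as in the earlier sketch; this is a generic formula, not one stated in the paper.

```python
# Sketch: receptive field (in frames) of a dilated TCN.
def receptive_field(kernel: int, n_blocks: int, convs_per_block: int = 2) -> int:
    rf = 1
    for i in range(n_blocks):
        # Each conv at dilation 2**i widens the field by (kernel - 1) * 2**i frames.
        rf += convs_per_block * (kernel - 1) * (2 ** i)
    return rf

for k in (3, 5, 7):
    print(k, receptive_field(k, n_blocks=3))  # 29, 57, 85 frames
```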
Figure 7. Loss and accuracy plots for the model with pelvis input.
Accuracy from the ablation study of the dilated TCN.
| Number of Temporal Blocks | Acc [%] |
|---|---|
| 1.1 | 92.3 |
| 1.2 | 93.53 |
| 1.3 | 92.54 |
| 2.1 | 93.94 |
| 2.2 | 90.42 |
| 2.3 | 92.03 |
| 3.1 | 92.34 |
| 3.2 | 94.53 |
| 3.3 | 93.44 |
| 4.1 | 92.75 |
| 4.2 | 92.31 |
| 4.3 | 91.86 |
Figure 8. TUG events in Skeleton TUG and in this study. Based on the dataset obtained in this study, the plots show how the events detected by the proposed method and by Skeleton TUG correspond. Each truncated plot shows the detection result for a Skeleton TUG event.
Figure 9. MAE and STD in seconds between Skeleton TUG (gray bars) and the proposed method (blue bars) for each TUG event. Error bars are ± the STD of the values.
Comparison of TUG subtask segmentation for older adults (MAE and STD in seconds; precision, recall, and F1 score per TUG phase).
| Method | Metric | Total | Sit-to-Stand | Walk | Turn #1 | Walk | Turn #2 | Stand-to-Sit |
|---|---|---|---|---|---|---|---|---|
| Skeleton TUG [ ] | MAE | 0.227 | 1.024 | 0.903 | 1.061 | 1.224 | 2.182 | 1.570 |
| | Prec. | 0.997 | 0.647 | 0.961 | 0.793 | 0.831 | 0.832 | 0.593 |
| | Recall | 0.990 | 0.928 | 0.906 | 0.871 | 0.983 | 0.759 | 0.952 |
| | F1 score | 0.994 | 0.753 | 0.933 | 0.830 | 0.900 | 0.793 | 0.731 |
| Proposed | MAE | 0.221 | 0.138 | 0.134 | 0.182 | 0.196 | - | 0.181 |
| | STD | 0.237 | 0.228 | 0.109 | 0.136 | 0.148 | - | 0.145 |
| | Prec. | 0.986 | 0.955 | 0.947 | 0.967 | 0.913 | - | 0.884 |
| | Recall | 0.990 | 0.973 | 0.966 | 0.96 | 0.932 | - | 0.818 |
| | F1 score | 0.988 | 0.964 | 0.957 | 0.963 | 0.923 | - | 0.849 |
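The MAE and STD entries above can be read as statistics of the absolute timing error between predicted and ground-truth event times across trials. A minimal sketch with made-up numbers, not the authors' evaluation code:

```python
# Sketch: MAE and STD of event-timing errors across trials.
import numpy as np

def mae_std(pred_s: np.ndarray, true_s: np.ndarray) -> tuple[float, float]:
    """pred_s, true_s: (N,) event times in seconds over N trials."""
    err = np.abs(pred_s - true_s)
    return float(err.mean()), float(err.std())

# Example with hypothetical StartWalk times over four trials.
pred = np.array([1.20, 1.05, 0.98, 1.31])
true = np.array([1.00, 1.10, 1.02, 1.15])
print(mae_std(pred, true))
```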
Figure 10. Comparison of MAE and STD for total TUG time and subtask segmentation.
Figure 11. Comparison results with the ANN-based method [22].