Jirapat Likitlersuang1,2, Elizabeth R Sumitro1,2, Tianshi Cao2, Ryan J Visée1,2, Sukhvinder Kalsi-Ryan2,3, José Zariffa4,5.
Abstract
BACKGROUND: Current upper extremity outcome measures for persons with cervical spinal cord injury (cSCI) lack the ability to directly collect quantitative information in home and community environments. A wearable first-person (egocentric) camera system is presented that aims to monitor functional hand use outside of clinical settings.
Keywords: Community-based rehabilitation; Egocentric vision; Outcome measure; Rehabilitation engineering; Tetraplegia; Upper extremity
Year: 2019 PMID: 31277682 PMCID: PMC6612110 DOI: 10.1186/s12984-019-0557-1
Source DB: PubMed Journal: J Neuroeng Rehabil ISSN: 1743-0003 Impact factor: 4.262
Fig. 1 Algorithmic framework for the proposed hand use system. A simplified flowchart of the algorithmic framework showing the sequential processing steps, along with the input and output format for each step
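The sequential framework in Fig. 1 can be sketched as a chain of per-frame steps: hand detection, hand segmentation, and interaction detection, yielding a binary interaction sequence over the video. The function names, frame representation, and placeholder logic below are illustrative assumptions, not the authors' implementation.

```python
def detect_hand(frame):
    """Hand detection step (stub). A frame is a dict here; 'hand_box'
    stands in for the bounding box an R-CNN detector would return."""
    return frame.get("hand_box")

def segment_hand(frame, box):
    """Hand segmentation step (stub). In the paper this combines skin
    colour and edge information; here we just pass the box through."""
    return box if box else None

def detect_interaction(frame, mask):
    """Interaction detection step (stub). The paper compares features in
    the hand, neighbourhood, and background regions; here a simulated
    'object_nearby' flag plays that role."""
    return 1 if (mask and frame.get("object_nearby")) else 0

def run_pipeline(frames):
    """Chain the three steps over a video, producing a binary
    hand-object interaction sequence (one value per frame)."""
    out = []
    for frame in frames:
        box = detect_hand(frame)
        mask = segment_hand(frame, box)
        out.append(detect_interaction(frame, mask))
    return out
```

For example, a two-frame video in which only the first frame contains a detected hand touching an object would yield `[1, 0]`.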
Fig. 2 Example frames describing the methodology in each of the processing steps. (a) Hand detection step, where the left image is the output bounding box of the hand from the R-CNN, the centre image is the Haar-like feature rotating around the bounding box centroid, and the right image is the final detection output. (b) Hand segmentation step, where the left image is the hand contour identification generated by combining skin colour information (in black and white) with edge detection of hand contours (in purple), and the right image shows the re-centering and selection of the final hand contour. (c) Regions involved in the interaction detection step, where the left image is the hand region, the centre image is the boxed neighbourhood of the hand, and the right image is the background region
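The skin-colour component of the segmentation step (Fig. 2b) can be illustrated with a classic per-pixel RGB skin heuristic. The specific thresholds below are a common rule of thumb from the skin-detection literature, not the colour model used in the paper.

```python
def is_skin(r, g, b):
    """Classic RGB skin heuristic (illustrative thresholds; the paper's
    exact colour model is not reproduced here)."""
    return (r > 95 and g > 40 and b > 20 and
            r > g and r > b and abs(r - g) > 15)

def skin_mask(image):
    """Binary skin mask over a 2-D grid of (r, g, b) pixel tuples.
    The paper combines a mask like this with edge detection to
    identify the final hand contour."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```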
Participant Demographics and Injury Characteristics
| Participant | Age (Years) | Sex | Level of Injury | AIS grade | Traumatic (T)/ Non-traumatic (NT) | Time since injury (Years) | Upper Extremity Motor Score (UEMS) |
|---|---|---|---|---|---|---|---|
| 1 | 63 | Male | C5-C6 | Aa | T | 8 | 15 |
| 2 | 58 | Male | C3-C5 | D | T | 1 | 24 |
| 3 | 59 | Male | C2-C6 | D | T | 1 | 20 |
| 4 | 55 | Male | C7-T1 | C/Da | T | 4 | 18 |
| 5 | 56 | Male | C2-C7 | D | T | 2 | 19 |
| 6 | 56 | Male | C5-C6 | D | T | 2 | 16 |
| 7 | 20 | Male | C5 | B | T | 4 | 9 |
| 8 | 58 | Male | C5 | C/Da | T | 32 | 13 |
| 9 | 44 | Female | C6-C7 | A | T | 20 | 20 |
| 10 | 51 | Male | C4-C6 | D | T | 1 | 22 |
| 11 | 34 | Male | C5-C6 | C | T | 5 | 21 |
| 12 | 40 | Female | C2-T1 | D | NT | 2 | 20 |
| 13 | 70 | Male | C4-C6 | C | T | 1 | 24 |
| 14 | 42 | Male | C4-C6 | B | T | 0.4 | 16 |
| 15 | 56 | Male | C1-C6 | D | NT | 0.3 | 23 |
| 16 | 44 | Male | C4-C5 | Ba | T | 21 | 21 |
| 17 | 41 | Male | C6-C7 | Aa | T | 20 | 14 |
aThese AIS grades are based on self-report
F1-score and accuracy for the left (L) and right (R) hand of each participant, as well as the average for each feature
| Participant | F1-score (L) | F1-score (R) | Accuracy (L) | Accuracy (R) |
|---|---|---|---|---|
| 1 | 0.54 | 0.53 | 0.42 | 0.42 |
| 2 | 0.6 | 0.75 | 0.73 | 0.79 |
| 3 | 0.86 | 0.73 | 0.79 | 0.63 |
| 4 | 0.85 | 0.58 | 0.79 | 0.59 |
| 5 | 0.72 | 0.67 | 0.75 | 0.7 |
| 6 | 0.55 | 1 | 0.48 | 0.99 |
| 7 | 0.84 | 0.8 | 0.82 | 0.69 |
| 8 | 0.78 | 0.63 | 0.67 | 0.51 |
| 9 | 0.93 | 0.91 | 0.88 | 0.88 |
| Mean ± S.D. | 0.74 ± 0.15 | 0.73 ± 0.15 | 0.70 ± 0.16 | 0.68 ± 0.18 |
| Feature | | | | |
| Optical Flow | 0.73 ± 0.14 | 0.70 ± 0.13 | 0.68 ± 0.16 | 0.66 ± 0.15 |
| HOG | 0.72 ± 0.12 | 0.72 ± 0.14 | 0.69 ± 0.12 | 0.68 ± 0.15 |
| Colour Histogram | 0.70 ± 0.12 | 0.66 ± 0.17 | 0.68 ± 0.10 | 0.66 ± 0.16 |
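The per-participant F1-scores and accuracies above come from comparing a binary per-frame interaction prediction against the human-labelled ground truth. A minimal sketch of those two metrics, assuming 1 denotes "interaction" as the positive class:

```python
def f1_and_accuracy(pred, true):
    """F1-score and accuracy for binary per-frame interaction labels
    (1 = hand-object interaction, the positive class)."""
    tp = sum(1 for p, t in zip(pred, true) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, true) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, true) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, true) if p == 0 and t == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(true)
    return f1, accuracy
```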
Fig. 3 Hand use metrics. Scatter plots comparing the interaction metrics predicted from the algorithm (y-axis) with the actual value from the human observer (x-axis), for each of the three proposed metrics in both hands (left and right hand). (a) Proportion of interaction over total recording time, (b) average duration of interactions (seconds), and (c) number of interactions per hour. The result of a Pearson correlation is shown for (a) and (c) because the data were normally distributed, while (b) was calculated with a Spearman correlation
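All three metrics in Fig. 3 can be derived from the binary per-frame interaction sequence, given the camera's frame rate. The sketch below treats each contiguous run of 1s as one interaction; this decomposition is an assumption consistent with the figure, not code from the paper.

```python
def interaction_metrics(seq, fps):
    """Compute the three hand-use metrics from a binary per-frame
    interaction sequence (1 = hand-object interaction)."""
    total_frames = len(seq)
    # (a) proportion of interaction over total recording time
    proportion = sum(seq) / total_frames
    # treat each contiguous run of 1s as one interaction bout
    bouts, run = [], 0
    for v in seq + [0]:          # sentinel 0 flushes a trailing run
        if v:
            run += 1
        elif run:
            bouts.append(run)
            run = 0
    # (b) average duration of interactions, in seconds
    mean_duration = (sum(bouts) / len(bouts) / fps) if bouts else 0.0
    # (c) number of interactions per hour of recording
    per_hour = len(bouts) / (total_frames / fps / 3600)
    return proportion, mean_duration, per_hour
```

For instance, at 1 frame per second, the sequence `[1, 1, 0, 1, 0, 0]` contains two bouts (durations 2 s and 1 s) over 6 s of recording.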
Fig. 4 Example binary hand-object interaction graphs of 3 participants. The graphs compare the predicted interactions from the algorithm output to the actual interactions from the manually labeled data, after applying the moving average filter. Example frames of the activities in different segments of videos are shown underneath. (a) Participant # 2. (b) Participant # 5. (c) Participant # 9. Note that in some cases the videos were briefly paused in between the activities shown
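The moving average filter applied before plotting Fig. 4 smooths the binary predictions and suppresses isolated spurious detections. A minimal sketch, assuming an odd window length and a 0.5 re-binarization threshold (the paper's exact window and threshold are not reproduced here):

```python
def smooth_binary(seq, window, threshold=0.5):
    """Moving-average filter over a binary interaction sequence,
    re-thresholded back to binary. Window edges are truncated so the
    output has the same length as the input."""
    half = window // 2
    out = []
    for i in range(len(seq)):
        lo, hi = max(0, i - half), min(len(seq), i + half + 1)
        out.append(1 if sum(seq[lo:hi]) / (hi - lo) > threshold else 0)
    return out
```

With a window of 3, a lone spurious detection surrounded by zeros is removed, while sustained runs of interaction frames are preserved.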