Peter Beshara 1,2,3, David B. Anderson 4, Matthew Pelletier 2,3, William R. Walsh 2,3.
Abstract
Advancements in motion-sensing technology can potentially allow clinicians to make more accurate range-of-motion (ROM) measurements and informed decisions regarding patient management. The aim of this study was to systematically review and appraise the literature on the reliability of the Kinect, inertial sensors, smartphone applications and digital inclinometers/goniometers to measure shoulder ROM. Eleven databases were screened (MEDLINE, EMBASE, EMCARE, CINAHL, SPORTDiscus, Compendex, IEEE Xplore, Web of Science, ProQuest Science and Technology, Scopus, and PubMed). The methodological quality of the studies was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Reliability assessment used intra-class correlation coefficients (ICCs) and the criteria from Swinkels et al. (2005). Thirty-two studies were included. A total of 24 studies scored "adequate" and 2 scored "very good" for the reliability standards. Only one study scored "very good" and just over half of the studies (18/32) scored "adequate" for the measurement error standards. Good intra-rater reliability (ICC > 0.85) and inter-rater reliability (ICC > 0.80) was demonstrated with the Kinect, smartphone applications and digital inclinometers. Overall, the Kinect and ambulatory sensor-based human motion tracking devices demonstrate moderate to good levels of intra- and inter-rater reliability to measure shoulder ROM. Future reliability studies should focus on improving study design with larger sample sizes and recommended time intervals between repeated measurements.
Keywords: Kinect; inertial sensor; meta-analysis; range of motion; reliability
Year: 2021 PMID: 34960280 PMCID: PMC8705315 DOI: 10.3390/s21248186
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Flow chart of the systematic review process.
Characteristics of studies included in this review.
| Author | Sample Size (n) | Age (yr) Mean (SD) | Males (%) | Inclusion Criteria | Raters (n) | Movement Assessed | Position | Device | Sessions (n) | Time Interval |
|---|---|---|---|---|---|---|---|---|---|---|
| Awan et al., 2002 | 56 | Not reported | 57.1 | No history of neurologic disease, arthritis, connective tissue disorder, or shoulder/neck injury or surgery | 2 | Passive | Supine | Digital inclinometer | 2 | 90–120 min |
| Beshara et al., 2016 | 9 | 36.6 (±13.3) | 33.3 | No history of neurologic disease, arthritis, connective tissue disorder, or shoulder/neck injury or surgery | 1 | Active | Standing | Microsoft Kinect (V.2) and inertial sensors | 2 | 7 days |
| Bonnechère et al., 2014 | 48 | 26 (±8) | 62.5 | Healthy adults | 1 | Active | Standing | Microsoft Kinect (V.1.5) | 2 | 7 days |
| Cai et al., 2019 | 10 | 24.6 (±2.8) | 100 | No upper limb injuries or medication use that would have influenced upper limb function | 1 | Active | Standing | Microsoft Kinect (V.2) | 2 | 7 days |
| Chan et al., 2010 | 1 | Not reported | 100 | Healthy, no pathology | 2 | Active | Standing, Supine | iPod touch | 2 | Same day |
| Chen et al., 2020 | 10 | Not reported | Not reported | Healthy, aged 20–70 yrs, no discomfort or limited ROM of the shoulder in the last year | 2 | Active | Standing | Inertial sensor (BoostFix) | 1 | Same day |
| Cools et al., 2014 | 30 | 22.1 (1.4) | 50 | No history of shoulder or neck pain or current participation in overhead sports at a competition level | 2 | Passive | Sitting, Supine | Digital inclinometer | 2 | 10 s |
| Correll et al., 2018 | 42 | 32.3 (2.1) | 71.4 | Healthy, 18–75 yrs old, able to easily move between standing and supine positions, able to actively move at least one shoulder into 90° of glenohumeral abduction | 2 | Active | Supine | Digital inclinometer (HALO) | 2 | Same day |
| Çubukçu et al., 2020 | 40 | 22.1 (±3.1) | 55 | Healthy volunteers | 1 | Active | Standing | Microsoft Kinect (V.2) | 3 | 3 days |
| Cuesta-Vargas et al., 2016 | 37 | 56.1 (healthy), 52.8 (pathologic) | 40.5 | Healthy: no shoulder pain, negative Neer/Hawkins tests. Pathologic: 18–75 yrs old, BMI 18–42 | 2 | Active | Standing | Inertial sensors (InertiaCube 3), sampling frequency 1000 Hz; smartphone (Nexus 4), 1280 × 768 px resolution | 3 | 2 days |
| Da Cunha Neto et al., 2018 | 10 | Not reported | Not reported | Healthy | 2 | Active | Standing | Microsoft Kinect (V.2) | 2 | Same day |
| De Baets et al., 2020 | 10 | 54 (±6) | 57.1 | Diagnosis of adhesive capsulitis in the past 6 months based on criteria described by the American Physical Therapy Association | 2 | Active | Standing, Seated | Inertial sensor (MCN Awinda motion capture system), sampling frequency 60 Hz | 2 | 2–5 days |
| de Winter et al., 2004 | 155 | 47 | 35.5 | Shoulder pain, 18–75 yrs, ability to co-operate (no dementia), sufficient knowledge of the Dutch language | 2 | Passive | Seated, Supine | Digital inclinometer (Cybex EDI 320) | 1 | 1 h |
| Dougherty et al., 2015 | 90 | 23.5 (8.9) | 40 | 18 yrs +, pain-free shoulder movement, no history of shoulder pain in the preceding 12 months | 1 | Passive | Seated, Supine | Digital inclinometer | 2 | 7 days |
| Hawi et al., 2014 | 7 | Not reported | Not reported | Age 18+, free ROM without deficits | 1 | Active | Standing | Microsoft Kinect | 2 | Same day |
| Huber et al., 2015 | 10 | 22.1 (±0.9) | 60 | No shoulder pathology, pain-free | 1 | Active | Standing | Microsoft Kinect | 1 | Same day |
| Hwang et al., 2017 | 8 | 36.5 (±13.7) | Not reported | Using a wheelchair for 1 yr, able to sit upright for at least 4 h/day, over 18 yrs old, use a wheelchair over 40 h/week | 1 | Active | Seated | Microsoft Kinect (V.2) | 2 | Same day |
| Kolber et al., 2011 | 30 | 25.9 (3.1) | 40 | Asymptomatic adults | 2 | Active | Seated, Supine, Prone | Digital inclinometer (Acumar) | 2 | 2 days |
| Kolber et al., 2012 | 30 | 26 (4.2) | 30 | No cervical spine or upper extremity pain or recent shoulder surgery on the dominant arm | 2 | Active | Seated | Digital inclinometer (Acumar) | 2 | 1 day |
| Lim et al., 2015 | 47 | 24.9 (±3.5) | 59.6 | No shoulder injuries or history of musculoskeletal and nervous system damage that could affect ROM, no pain around the shoulder, and no specialized shoulder muscle stretches or exercises in the preceding 6 months | 2 | Passive | Supine, Side-lying | Smartphone (iPhone 5) | 2 | 2 days |
| Mejia-Hernandez et al., 2018 | 75 | 46 | 72 | Older than 18 yrs, documented current shoulder diseases | 2 | Active & Passive | Seated, Supine | Smartphone (iPhone 5) | 2 | Same day |
| Milgrom et al., 2016 | 5 | Not reported | 80 | Possess ability to self-propel a manual wheelchair, uses a wheelchair for at least 75% of daily activities, ≥18 yrs of age | 3 Kinect sensors (“individual rater”) | Active | Seated | Microsoft Kinect (V.1.8) | 2 | Same day |
| Mitchell et al., 2014 | 94 | Not reported | 0 | No shoulder pathology | 5 | Active | Supine | Smartphone (iPhone 4) | 2 | At least 15 min (<30 min) |
| Picerno et al., 2015 | 45 | M: 27 (±8), F: 22 (±3) | 55.6 | No previous or current shoulder impairment, no involvement in competitive sports at a professional level | 1 | Active | Seated | Inertial sensor (FreeSense), sampling frequency 200 Hz | 2 | Same day |
| Poser et al., 2015 | 23 | 44 | 39.1 | Asymptomatic people attending a Pilates gym | 3 | Active | Supine, Seated, Side-lying | Digital inclinometer (J-Tech) | 2 | Days (unspecified) |
| Ramos et al., 2019 | 54 | 26.3 (6) healthy, 25 (6) shoulder pain | 25.9 | Healthy: not reported. Shoulder pain: symptoms for at least 6 months and positive clinical tests for shoulder impingement | 1 | Active | Seated | Mobile application (iPod) | 2 | 7 days |
| Rigoni et al., 2019 | 30 | 32.8 | 40 | Healthy volunteers | 2 | Active | Standing | Inertial sensor (Biokin) | 1 | Same day |
| Schiefer et al., 2015 | 20 | 37.4 (±9.9) | 70 | Healthy subjects without known functional deficits, free of musculoskeletal complaints for at least one week before examination | 3 | Passive | Not reported | Inertial sensor (CUELA system) | 1 | 1 day |
| Scibek et al., 2013 | 11 | 21.4 (±1.4) | 55.6 | Healthy, reporting no history of neck or upper extremity pathology in the last six months | Not reported | Active | Seated | Digital inclinometer (Pro 360, Baseline) | 2 | 12–48 h |
| Shin et al., 2012 | 41 | 52.7 (±17.5) | 48.8 | Unilateral symptomatic shoulders | 3 | Active & Passive | Standing, Supine | Smartphone (Galaxy S) | 2 | Same day |
| Walker et al., 2016 | 17 | 17 (±3) | 47 | Healthy, competitive swimmers, at least five swim sessions per week | 2 | Active | Supine, Standing | Digital inclinometer (Dualer, J-Tech) | 2 | 30 min |
| Werner et al., 2014 | 24 | Not reported | 37.5 | Healthy and symptomatic shoulders, college students | 5 | Active | Supine, Standing | Smartphone (iPhone) | 2 | Same day |
Abd = abduction, Add = adduction, ELE = elevation, ER = external rotation, E = extension, F = flexion, GH = glenohumeral, Hor = horizontal, IR = internal rotation, Max = maximum, MP = medical physician, PT = physiotherapist, ROM = range of motion.
Intra-rater and Inter-rater reliability (95% CI) for measurement of shoulder range of motion by device and movement direction.
| Device | Author | Intra-Rater Reliability | Inter-Rater Reliability | Level of Reliability |
|---|---|---|---|---|
| Microsoft Kinect | | | | |
| Shoulder | | | | |
| Flexion | Da Cunha Neto et al. (2018) | ICC 0.97 | ICC 0.91 | |
| Extension | Da Cunha Neto et al. (2018) | ICC 0.97 | ICC 0.97 | |
| Abduction | Bonnechère et al. (2014) | ICC 0.73 | ICC 0.94 (0.72–0.99) | |
| Adduction | Hawi et al. (2014) | ICC 0.99 | | |
| External rotation | Huber et al. (2015) | ICC 0.98 | | |
| Internal rotation | Çubukçu et al. (2020) | ICC 0.97 | | |
| Microsoft Kinect & Inertial Sensor | | | | |
| Shoulder | | | | |
| Flexion | Beshara et al. (2016) | ICC 0.84 (0.45–0.96), 0.93 (0.72–0.98) | | |
| Abduction | Beshara et al. (2016) | ICC 0.52 (−0.17–0.87), 0.85 (0.47–0.96) | | |
| Inertial Sensor | | | | |
| Shoulder | | | | |
| Flexion | Rigoni et al. (2019) | ICC 0.68, 0.87, 0.91 | ICC 0.88 (0.80–0.92) | |
| Extension | Chen et al. (2020) | ICC 0.68, 0.87, 0.91 | ICC 0.77 (0.64–0.87), 0.80 (0.68–0.89) | |
| Abduction | Cuesta-Vargas et al. (2016) | ICC 0.78 (0.40–0.93), 0.98 (0.94–0.99) | ICC 0.49 (0.08–0.82), 0.99 (0.98–1.00) | |
| Adduction | De Baets et al. (2020) | ICC 0.73, 0.95 | ICC 0.74, 0.80, 0.93 | |
| External rotation | Schiefer et al. (2015) | ICC 0.85, 0.87, 0.89, 0.90 | ICC 0.71, 0.76, 0.81, 0.86 | |
| Internal rotation | Schiefer et al. (2015) | ICC 0.85, 0.87, 0.89, 0.90 | ICC 0.68, 0.78, 0.87, 0.98 | |
| Smartphone/Mobile App | | | | |
| Shoulder | | | | |
| Flexion | Chan et al. (2010) | ICC 0.99 | ICC 0.99 | |
| Abduction | Lim et al. (2015) | ICC 0.72, 0.89, 0.95, 0.97 | ICC 0.79, 0.94 | |
| Glenohumeral abduction | Mejia-Hernandez et al. (2018) | ICC 0.98 (0.97–0.99), 0.97 (0.95–0.99) | | |
| External rotation | Chan et al. (2010) | ICC 0.94, 0.96 | ICC 0.88, 0.98 | |
| Internal rotation | Shin et al. (2012) | ICC 0.79, 0.97, 0.90, 0.93, 0.99 | ICC 0.63, 0.66, 0.67, 0.68 | |
| Scaption | Ramos et al. (2019) | ICC −0.04, 0.10, 0.12, 0.31, 0.32, 0.39, 0.40, 0.45, 0.47, 0.52, 0.57, 0.63 | ICC −0.17, −0.06, 0.03, 0.07, 0.23, 0.26 | |
| Digital Inclinometer/Goniometer | | | | |
| Shoulder | | | | |
| Flexion | Dougherty et al. (2015) | ICC 0.77, 0.82 | ICC 0.58 | |
| Elevation | Walker et al. (2016) | ICC 0.91, 0.92, 0.93, 0.95 | | |
| Glenohumeral flexion | Dougherty et al. (2015) | ICC 0.75, 0.77 | ICC 0.14, 0.35, 0.43, 0.63, 0.65, 0.69 | |
| Abduction | de Winter et al. (2004) | ICC 0.91 | ICC 0.28, 0.78, 0.83 | |
| Glenohumeral abduction | Dougherty et al. (2015) | ICC 0.60, 0.75 | | |
| Horizontal abduction | Poser et al. (2015) | ICC 0.66, 0.81, 0.91, 0.94, 0.96 | ICC 0.17, 0.18, 0.24, 0.28, 0.31 | |
| External rotation | Awan et al. (2002) | ICC 0.58, 0.67 | ICC 0.41, 0.51 | |
| Internal rotation | Awan et al. (2002) | ICC 0.64, 0.65, 0.72 | ICC 0.50, 0.52, 0.62, 0.66 | |
| Scaption | Kolber et al. (2012) | ICC 0.88 | ICC 0.89 | |
ICC = intra-class correlation coefficient. Level of reliability determined by the criteria identified by Swinkels et al. [87].
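The ICC values tabulated above come in several forms; a common one for agreement between raters is ICC(2,1) (two-way random effects, absolute agreement, single measurement, per Shrout and Fleiss, 1979). As a concrete illustration, it can be computed from a subjects × raters matrix; the shoulder-flexion values below are hypothetical, not data from any included study:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) array of scores (e.g. ROM in degrees).
    Uses the mean squares of the two-way ANOVA decomposition
    (Shrout & Fleiss, 1979).
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-rater means
    MSR = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    MSC = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    SSE = ((Y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    MSE = SSE / ((n - 1) * (k - 1))                        # residual error
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Hypothetical shoulder-flexion measurements (degrees) by two raters
rom = [[170, 172], [155, 158], [162, 160], [178, 180], [149, 151]]
print(round(icc_2_1(rom), 2))  # prints 0.98
```

With two columns per subject this is the inter-rater case; repeated sessions of a single rater can be analysed the same way to obtain intra-rater reliability.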
Assessment of reliability using the COSMIN standards for studies on reliability checklist.
| COSMIN standard | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG | A |
| 2. Was the time interval between the repeated measurements appropriate? | A | VG | VG | VG | A | A | A | A | VG | VG | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | A | VG | VG | VG | VG | VG | A |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | VG | VG | A | VG | VG | VG | VG | A | A | A |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | VG | VG | A | VG | VG | VG | VG | A | A | A |
| 6. Were there any other important flaws in the design or statistical methods of the study? | D | D | VG | A | I | A | VG | VG | VG | VG | A |
| 7. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | A | VG | A | VG | A | VG | VG | VG | VG | VG | A |
| 8. For ordinal scores: was a (weighted) kappa calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| 9. For dichotomous/nominal scores: was kappa calculated for each category against the other categories combined? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall rating | D | D | A | A | I | A | A | A | A | A | A |

| COSMIN standard | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | A | VG | VG | VG | VG | VG | VG | VG | VG |
| 2. Was the time interval between the repeated measurements appropriate? | A | A | VG | A | A | A | A | VG | VG | A | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | A | A | VG | A | VG | A | VG | VG | VG | VG | A |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | A | A | VG | A | VG | A | VG | VG | VG | VG | A |
| 6. Were there any other important flaws in the design or statistical methods of the study? | A | VG | VG | D | A | D | VG | VG | VG | VG | D |
| 7. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | VG | A | A | VG | VG | A | VG | VG | VG | A | A |
| 8. For ordinal scores: was a (weighted) kappa calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| 9. For dichotomous/nominal scores: was kappa calculated for each category against the other categories combined? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall rating | A | A | A | D | A | D | A | VG | VG | A | D |

| COSMIN standard | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 2. Was the time interval between the repeated measurements appropriate? | A | A | VG | VG | A | A | A | A | A | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | A | A | A | VG | VG | A | VG | VG | VG |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | A | A | A | VG | VG | A | VG | VG | VG |
| 6. Were there any other important flaws in the design or statistical methods of the study? | VG | VG | A | VG | VG | A | A | VG | A | A |
| 7. For continuous scores: was an intraclass correlation coefficient (ICC) calculated? | VG | VG | VG | A | VG | VG | VG | VG | VG | VG |
| 8. For ordinal scores: was a (weighted) kappa calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| 9. For dichotomous/nominal scores: was kappa calculated for each category against the other categories combined? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall rating | A | A | A | A | A | A | A | A | A | A |
Abbreviations: VG: very good; A: adequate; D: doubtful; I: inadequate; N/A: not applicable.
Assessment of measurement error using the COSMIN standards for studies on measurement error checklist.
| COSMIN standard | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG | A |
| 2. Was the time interval between the repeated measurements appropriate? | A | VG | A | VG | A | A | A | A | VG | VG | D |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | A | VG | VG | VG | VG | VG | A |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | VG | VG | A | VG | VG | VG | VG | A | A | A |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | VG | VG | A | VG | VG | VG | VG | A | A | A |
| 6. Were there any other important flaws in the design or statistical methods of the study? | D | D | VG | D | I | VG | VG | VG | VG | VG | VG |
| 7. For continuous scores: was the Standard Error of Measurement (SEM), Smallest Detectable Change (SDC), Limits of Agreement (LoA) or Coefficient of Variation (CV) calculated? | I | VG | VG | I | I | VG | VG | VG | VG | I | I |
| 8. For dichotomous/nominal/ordinal scores: was the percentage specific (e.g., positive and negative) agreement calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall rating | I | D | A | I | I | A | A | A | A | I | I |

| COSMIN standard | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | A | VG | VG | VG | VG | VG | VG | VG | VG |
| 2. Was the time interval between the repeated measurements appropriate? | A | A | VG | A | A | A | A | VG | VG | A | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | A | VG | VG | VG | VG | VG | VG | VG |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | A | A | VG | A | VG | A | VG | VG | VG | VG | A |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | A | A | VG | A | VG | A | VG | VG | VG | VG | A |
| 6. Were there any other important flaws in the design or statistical methods of the study? | VG | VG | VG | D | VG | D | VG | VG | D | VG | D |
| 7. For continuous scores: was the Standard Error of Measurement (SEM), Smallest Detectable Change (SDC), Limits of Agreement (LoA) or Coefficient of Variation (CV) calculated? | VG | N/A | VG | I | VG | VG | VG | VG | I | VG | I |
| 8. For dichotomous/nominal/ordinal scores: was the percentage specific (e.g., positive and negative) agreement calculated? | N/A | VG | A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall rating | A | A | A | I | A | D | A | VG | I | A | I |

| COSMIN standard | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| 1. Were patients stable in the time between repeated measurements on the construct to be measured? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 2. Was the time interval between the repeated measurements appropriate? | A | A | VG | VG | A | A | A | A | A | A |
| 3. Were the measurement conditions similar for the repeated measurements, except for the condition being evaluated as a source of variation? | VG | VG | VG | VG | VG | VG | VG | VG | VG | VG |
| 4. Did the professional(s) administer the measurement without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | A | A | A | VG | VG | A | VG | VG | VG |
| 5. Did the professional(s) assign scores or determine values without knowledge of scores or values of other repeated measurement(s) in the same patients? | VG | A | A | A | VG | VG | A | VG | VG | VG |
| 6. Were there any other important flaws in the design or statistical methods of the study? | VG | VG | VG | VG | VG | VG | D | VG | VG | VG |
| 7. For continuous scores: was the Standard Error of Measurement (SEM), Smallest Detectable Change (SDC), Limits of Agreement (LoA) or Coefficient of Variation (CV) calculated? | I | I | VG | VG | VG | VG | I | VG | VG | VG |
| 8. For dichotomous/nominal/ordinal scores: was the percentage specific (e.g., positive and negative) agreement calculated? | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Overall rating | I | I | A | A | A | A | I | A | A | A |
Abbreviations: VG: very good; A: adequate; D: doubtful; I: inadequate; N/A: not applicable.
Figure 2. Meta-analysis for intra-rater AROM. DG = digital goniometer, DI = digital inclinometer, IS = inertial sensor, K = Kinect, SP = smartphone, Abd = abduction, AROM = active range of motion, Int = internal rotation, Ext = external rotation.
Figure 3. Meta-analysis for intra-rater PROM. DI = digital inclinometer, PROM = passive range of motion, Flex = flexion, Int = internal rotation, Ext = external rotation.
Figure 4. Meta-analysis for inter-rater AROM. DG = digital goniometer, DI = digital inclinometer, IS = inertial sensor, K = Kinect, SP = smartphone, Abd = abduction, Int = internal rotation.
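Figures 2–4 pool ICC estimates across studies. The record does not spell out the pooling model used, but one common sketch is fixed-effect inverse-variance pooling of Fisher-z-transformed coefficients; note the n − 3 weights below are the standard approximation for plain Pearson correlations and only approximate for ICCs, and the inputs are hypothetical:

```python
import math

def pool_icc_fisher(iccs_and_ns):
    """Pool ICC estimates across studies via Fisher's z transform.

    iccs_and_ns: list of (icc, n_subjects) pairs. Each coefficient is
    transformed with z = atanh(r), weighted by n - 3 (approximate inverse
    variance of z), and the weighted mean is back-transformed with tanh.
    """
    zs = [math.atanh(r) for r, _ in iccs_and_ns]
    ws = [n - 3 for _, n in iccs_and_ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Hypothetical intra-rater ICCs for shoulder flexion from three studies
print(round(pool_icc_fisher([(0.91, 30), (0.85, 48), (0.97, 10)]), 2))  # prints 0.89
```

The back-transform keeps the pooled estimate inside (−1, 1), which a plain weighted mean of ICCs would not guarantee once confidence intervals are added.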
Anatomical landmarks by device.
| Device | Author | Anatomical Landmarks |
|---|---|---|
| Microsoft Kinect | Bonnechère et al. (2014) | Shoulder girdle centre, elbow, wrist, hand |
| | Hawi et al. (2014) | Shoulder centre and elbow |
| | Huber et al. (2015) | Positions of shoulder and elbow joints relative to the trunk for flexion and abduction; position of elbow and hand relative to the trunk for external rotation |
| | Milgrom et al. (2016) | Angle between the humerus vector (shoulder to elbow) and the torso vector (neck to shoulder midpoint) |
| | Cai et al. (2019) | X: unit vector perpendicular to the Y- and Z-axes, pointing anteriorly; Y: unit vector from the elbow joint centre to the shoulder joint centre; Z: unit vector perpendicular to the plane formed by the Y-axis of the upper arm and the long axis vector of the forearm |
| Microsoft Kinect & Inertial Sensor | Beshara et al. (2016) | Two 3D vectors: one from the shoulder joint centre (below the acromion process) to the elbow centre (between the medial and lateral epicondyles), and one from the shoulder joint centre to a point on the 6th rib along the midaxillary line of the trunk |
| Inertial Sensor | Cuesta-Vargas et al. (2016) | Middle third of the humerus, slightly posterior, and on the flat part of the sternum |
| | Picerno et al. (2015) | Arbitrary point of the upper arm |
| | Schiefer et al. (2015) | Laterally on the upper arms and on the forearms close to the wrist, and on the dorsum of the hand; sensors were placed in the middle of the segments |
| | Rigoni et al. (2019) | 10 cm distal to the lateral epicondyle |
| | De Baets et al. (2020) | |
| Smartphone/Mobile App | Chan et al. (2010) | Acromion, humeral axis |
| | Lim et al. (2015) | Front centre of humerus |
| | Mitchell et al. (2014) | Superior border of the mid-ulna |
| | Ramos et al. (2019) | Attached below the deltoid muscle origin |
| | Mejia-Hernandez et al. (2018) | Distal portion of the humerus for seated movements; wrist for supine movements |
| | Shin et al. (2012) | Ventral side of the patient’s forearm at wrist level |
| Digital Inclinometer | Dougherty et al. (2015) | |
| | Kolber et al. (2011) | |
| | Kolber et al. (2012) | |
| | Poser et al. (2015) | |
| | Scibek et al. (2013) | |
| | Walker et al. (2016) | |
| | Correll et al. (2018) | |
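Several of the constructions above reduce shoulder ROM to the angle between two 3D vectors, e.g. Milgrom et al.'s humerus vector (shoulder to elbow) measured against a torso vector (neck to shoulder midpoint). A minimal sketch with hypothetical camera-space joint coordinates (the specific positions below are invented for illustration, not taken from any study):

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def shoulder_elevation(shoulder, elbow, neck, shoulder_mid):
    """Elevation-style angle following the humerus-vs-torso construction:
    humerus vector (shoulder -> elbow) against the torso vector
    (neck -> shoulder midpoint). Inputs are (x, y, z) joint centres,
    e.g. from a Kinect skeleton stream."""
    humerus = [e - s for s, e in zip(shoulder, elbow)]
    torso = [m - n for n, m in zip(neck, shoulder_mid)]
    return angle_between(humerus, torso)

# Hypothetical joint positions (metres): torso points straight down,
# humerus held horizontal, i.e. the arm abducted to about 90 degrees
neck, mid = (0.0, 1.50, 2.0), (0.0, 1.40, 2.0)
shoulder, elbow = (0.18, 1.45, 2.0), (0.48, 1.45, 2.0)
print(round(shoulder_elevation(shoulder, elbow, neck, mid), 1))  # prints 90.0
```

With the arm hanging down the humerus vector aligns with the torso vector and the angle goes to 0°, so the value behaves like an elevation/abduction ROM reading.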