Stuart J Warden1, Allie C Kemp2, Ziyue Liu3, Sharon M Moe4. 1. Department of Physical Therapy, School of Health and Human Sciences, Indiana University, Indianapolis, IN, United States; Indiana Center for Musculoskeletal Health, Indiana University, Indianapolis, IN, United States; La Trobe Sport and Exercise Medicine Research Centre, La Trobe University, Bundoora, Victoria, Australia. Electronic address: stwarden@iu.edu. 2. Department of Physical Therapy, School of Health and Human Sciences, Indiana University, Indianapolis, IN, United States. 3. Department of Biostatistics, Richard M. Fairbanks School of Public Health, Indiana University, Indianapolis, IN, United States. 4. Indiana Center for Musculoskeletal Health, Indiana University, Indianapolis, IN, United States; Division of Nephrology, Department of Medicine, School of Medicine, Indiana University, Indianapolis, IN, United States.
Abstract
BACKGROUND: There is a clinical need to reliably detect meaningful changes (0.1 to 0.2 m/s) in usual gait speed (UGS), as reduced gait speed is associated with morbidity and mortality. RESEARCH QUESTION: What is the impact of the tester on UGS assessment, and the influence of test repetition (trial 1 vs. 2), timing method (manual stopwatch vs. automated timing), and starting condition (stationary vs. dynamic start) on the ability to detect changes in UGS and fast gait speed (FGS)? METHODS: UGS and FGS were assessed in 725 participants on an 8-m course with infrared timing gates positioned at 0, 2, 4, and 6 m. Testing was performed by one of 13 testers trained by a single researcher. Time to walk 4 m from a stationary start (i.e., from 0 m to 4 m) was measured manually using a stopwatch and automatically via the timing gates at 0 m and 4 m. Time to walk 4 m with a dynamic start was measured during the same trial by recording the time to walk between the timing gates at 2 m and 6 m (i.e., after 2 m of acceleration). RESULTS: Testers differed for UGS measured using manual vs. automated timing (p = 0.02), with five testers recording slower and two recording faster UGS with manual timing. The 95% limits of agreement for trial 1 vs. 2, manual vs. automated timing, and dynamic vs. stationary start ranged from ±0.15 m/s to ±0.20 m/s, coinciding with the range for a clinically meaningful change. Limits of agreement for FGS were larger, ranging from ±0.26 m/s to ±0.35 m/s. SIGNIFICANCE: Repeat testing of UGS should be performed by the same tester or using an automated timing method to control for tester effects. The test protocol should remain constant both between and within participants, as protocol deviations may produce an artifactual change that appears clinically meaningful.
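The two quantities at the core of the abstract are straightforward to compute: gait speed is walk distance divided by elapsed time between gates, and agreement between two timing methods is summarized by Bland-Altman 95% limits of agreement (mean difference ± 1.96 × SD of the differences). The sketch below illustrates both, using entirely hypothetical paired times; the values are not from the study, and the study's actual analysis may differ.

```python
import statistics

def gait_speed(distance_m: float, time_s: float) -> float:
    """Gait speed (m/s) for a timed walk over a fixed distance."""
    return distance_m / time_s

# Hypothetical paired times (s) for the same 4-m walks, recorded by
# a manual stopwatch and by automated timing gates (illustrative only).
manual_s = [3.2, 4.1, 3.8, 3.5, 4.4, 3.9]
auto_s = [3.0, 4.0, 3.9, 3.4, 4.2, 3.8]

manual_speed = [gait_speed(4.0, t) for t in manual_s]
auto_speed = [gait_speed(4.0, t) for t in auto_s]

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 * SD.
diffs = [m - a for m, a in zip(manual_speed, auto_speed)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:+.3f} m/s, 95% LoA = ({loa[0]:+.3f}, {loa[1]:+.3f}) m/s")
```

In the study's protocol the same logic applies to the dynamic-start condition by taking the time between the 2-m and 6-m gates instead of the 0-m and 4-m gates; the distance remains 4 m in both cases.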