Rachel S. Tappan¹, Lois D. Hedman, Roberto López-Rosado, Heidi R. Roth. ¹Department of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University, 645 N. Michigan Ave., Suite 1100, Chicago, IL 60611, USA. Tel: 312-503-2184; fax: 312-908-0741. Email: rachel-tappan@northwestern.edu.
Abstract
BACKGROUND: Grading rubrics used to assess physical therapy students' clinical skills should be developed using a method that promotes validity. This study applied a systematic approach to the development of rubrics for assessing student performance within a Doctor of Physical Therapy curriculum. PARTICIPANTS: Ten faculty members participated. METHODS: Checklist-style rubrics covering four clinical skills were developed using a five-step process: 1) evidence-based rubric item development; 2) multiple Delphi review rounds to achieve consensus on item content; 3) pilot testing and formatting of rubrics; 4) final Delphi review; 5) weighting of rubric sections. Consensus in the Delphi review was defined as: ≥75% of participants rating each item Agree or Strongly Agree in two consecutive rounds, no statistically significant difference between Likert ratings on the final two rounds for each item using the Wilcoxon signed-rank test (p>0.05), and a reduction in participant comments between the first and last rounds. RESULTS: All rubric items achieved consensus, with 100% agreement, no statistically significant difference between the two final sets of ratings (p=0.102 to 1.000), and a decrease in the number of comments from 81 in Round 1 to 21 in Round 5. CONCLUSION: This method of rubric development produced rubrics with validity and acceptability in a time-efficient manner.
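The per-item consensus rule in METHODS can be expressed computationally. The sketch below is a minimal pure-Python illustration, not the authors' analysis code: it checks the ≥75% agreement threshold across two rounds and tests rating stability with a Wilcoxon signed-rank test. The normal approximation to the test statistic, the coding of Agree/Strongly Agree as 4 and 5 on a 5-point Likert scale, and the function names are all assumptions for illustration (the paper's exact test procedure, e.g. exact vs. approximate p-values, is not specified in the abstract).

```python
import math

def wilcoxon_signed_rank_p(x, y):
    """Two-sided p-value for the Wilcoxon signed-rank test on paired ratings,
    using the normal approximation (an assumption; the paper may have used an
    exact method). Zero differences are dropped; tied |differences| share the
    average rank, with the usual tie correction to the variance."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    if n == 0:
        return 1.0  # identical ratings across rounds: no evidence of change
    # Rank the absolute differences, averaging ranks within tied groups.
    order = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and order[j + 1][0] == order[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k][1]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    tie_sizes = {}
    for d in diffs:
        tie_sizes[abs(d)] = tie_sizes.get(abs(d), 0) + 1
    var = (n * (n + 1) * (2 * n + 1) / 24
           - sum(t ** 3 - t for t in tie_sizes.values()) / 48)
    if var == 0:
        return 1.0
    z = (w_plus - mean) / math.sqrt(var)
    # Two-sided p-value from the standard normal CDF via math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def item_reached_consensus(prev_round, final_round, threshold=0.75, alpha=0.05):
    """Consensus as defined in the abstract: >=threshold of participants rate
    the item Agree (4) or Strongly Agree (5) in both rounds, and the ratings
    show no significant shift between rounds (p > alpha)."""
    def agree_fraction(ratings):
        return sum(v >= 4 for v in ratings) / len(ratings)
    stable = wilcoxon_signed_rank_p(prev_round, final_round) > alpha
    return (agree_fraction(prev_round) >= threshold
            and agree_fraction(final_round) >= threshold
            and stable)
```

For example, ten hypothetical participants whose ratings are all 4s and 5s and nearly unchanged between rounds would satisfy both criteria, while an item rated mostly Neutral in the earlier round would fail the agreement threshold regardless of stability.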