
Validation of the 25-Item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness.

Marcy Mintz, Danielle A Southern, William A Ghali, Irene W Y Ma.

Abstract

CONSTRUCT: The 25-item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness assesses clinical teaching effectiveness.

BACKGROUND: Valid and reliable ratings of teaching effectiveness are helpful for providing faculty with feedback. The 25-item Stanford Faculty Development Program Tool on Clinical Teaching Effectiveness was intended to evaluate seven dimensions of clinical teaching, but the factor structure of the tool had not previously been confirmed.

APPROACH: This study sought to validate the tool using confirmatory factor analysis, testing the intended 7-factor model and comparing its goodness of fit with that of a modified model. Acceptability of the tool was assessed with a 6-item survey completed by final-year medical students (N = 119 of 156 students; 76%).

RESULTS: Goodness-of-fit testing indicated that the 7-factor model performed poorly, χ²(254) = 457.4, p < .001 (root mean square error of approximation [RMSEA] = 0.08; comparative fit index [CFI] = 0.91; non-normed fit index [NNFI] = 0.89). Only the standardized root mean square residual (SRMR) indicated acceptable fit (0.06). Further exploratory analysis identified 10 items that cross-loaded on 2 factors; the remaining items loaded on the factors as originally intended. After removing these 10 items, a repeat confirmatory factor analysis of the modified 15-item, 5-factor model demonstrated better fit than the original model: χ²(80) = 150.1, p < .001; RMSEA = 0.09; CFI = 0.93; NNFI = 0.91; SRMR = 0.075. Although 75% of participants stated they were willing to complete the tool on their preceptors on a biweekly basis, only 25% were willing to do so weekly.

CONCLUSIONS: Our study failed to confirm the factor structure of the 25-item tool. A modified tool with fewer, more conceptually distinct items was best fit by a 5-factor model. Further, the acceptability of the 25-item tool may be poor for rotations with a new preceptor each week; the abbreviated tool may be preferable in that setting.
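The RMSEA values reported above follow from the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))). As a minimal sketch (assuming N = 119, the number of student respondents, as the sample size underlying the model fit), the reported statistics reproduce both RMSEA values:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation from a chi-square statistic.

    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported in the abstract; N = 119 raters is an assumption.
original = rmsea(457.4, 254, 119)  # 7-factor, 25-item model
modified = rmsea(150.1, 80, 119)   # 5-factor, 15-item model

print(round(original, 2))  # 0.08, matching the reported RMSEA
print(round(modified, 2))  # 0.09, matching the reported RMSEA
```

Both rounded values agree with the abstract, which supports the assumed sample size; the comparison also illustrates why RMSEA can worsen slightly (0.08 → 0.09) even as CFI and NNFI improve, since RMSEA penalizes the loss of degrees of freedom in the smaller model.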

Keywords:  assessment; teaching effectiveness; undergraduate medical education

Year:  2015        PMID: 25893939     DOI: 10.1080/10401334.2015.1011645

Source DB:  PubMed          Journal:  Teach Learn Med        ISSN: 1040-1334            Impact factor:   2.414


Related articles:  5 in total

1.  Measuring and assessing the competencies of preceptors in health professions: a systematic scoping review.

Authors:  Andrew D Bartlett; Irene S Um; Edward J Luca; Ines Krass; Carl R Schneider
Journal:  BMC Med Educ       Date:  2020-05-24       Impact factor: 2.463

2.  Milestone-Based Tool for Learner Evaluation of Faculty Clinical Teaching.

Authors:  Karyn Kassis; Rebecca Wallihan; Larry Hurtubise; Sara Goode; Margaret Chase; John D Mahan
Journal:  MedEdPORTAL       Date:  2017-09-18

3.  Peer Coaching as a Faculty Development Tool: A Mixed Methods Evaluation.

Authors:  Kristy Carlson; Allison Ashford; Marwa Hegagi; Chad Vokoun
Journal:  J Grad Med Educ       Date:  2020-04

4.  A behaviorally anchored assessment tool for bedside teaching in the emergency department.

Authors:  Hamza Ijaz; Matthew Stull; Erin McDonough; Robbie Paulsen; Jeffrey Hill
Journal:  AEM Educ Train       Date:  2022-08-11

5.  Shadowing emergency medicine residents by medical education specialists to provide feedback on non-medical knowledge-based ACGME sub-competencies.

Authors:  Anna L Waterbrook; Karen C Spear Ellinwood; T Gail Pritchard; Karen Bertels; Ariel C Johnson; Alice Min; Lisa R Stoneking
Journal:  Adv Med Educ Pract       Date:  2018-05-04
