
Development of and Preliminary Validity Evidence for the EFeCT Feedback Scoring Tool.

Shelley Ross, Deena Hamza, Rosslynn Zulla, Samantha Stasiuk, Darren Nichols.

Abstract

BACKGROUND: Narrative feedback, like verbal feedback, is essential to learning. Regardless of form, all feedback should be of high quality. This is becoming even more important as programs incorporate narrative feedback into the constellation of evidence used for summative decision-making. Continuously improving the quality of narrative feedback requires both tools for evaluating it and time to score it. A tool that does not require clinical educator expertise is needed so that scoring can be delegated to others.
OBJECTIVE: To develop an evidence-based tool to evaluate the quality of documented feedback that could be reliably used by clinical educators and non-experts.
METHODS: Following a literature review to identify elements of high-quality feedback, an expert consensus panel developed the scoring tool. Messick's unified concept of construct validity guided the collection of validity evidence throughout development and piloting (2013-2020).
RESULTS: The Evaluation of Feedback Captured Tool (EFeCT) contains 5 categories considered to be essential elements of high-quality feedback. Preliminary validity evidence supports the content, substantive, and consequential validity facets. Generalizability evidence shows that EFeCT scores assigned to feedback samples had consistently high interrater reliability across 5 scoring sessions, regardless of raters' level of medical education or clinical expertise (Session 1: n=3, ICC=0.94; Session 2: n=6, ICC=0.90; Session 3: n=5, ICC=0.91; Session 4: n=6, ICC=0.89; Session 5: n=6, ICC=0.92).
CONCLUSIONS: There is preliminary validity evidence for the EFeCT as a useful tool for scoring the quality of documented feedback captured on assessment forms. Generalizability evidence indicated comparable EFeCT scores by raters regardless of level of expertise.

Year:  2022        PMID: 35222824      PMCID: PMC8848874          DOI: 10.4300/JGME-D-21-00602.1

Source DB:  PubMed          Journal:  J Grad Med Educ        ISSN: 1949-8357

