
Evaluation of a novel assessment form for observing medical residents: a randomised, controlled trial.

Anthony A Donato, Louis Pangaro, Cynthia Smith, Joseph Rencic, Yvonne Diaz, Janell Mensinger, Eric Holmboe.

Abstract

CONTEXT: Teaching faculty cannot reliably distinguish between satisfactory and unsatisfactory resident performances, and the feedback they give is often non-specific.
OBJECTIVES: This study aimed to test whether a novel rating form can improve faculty accuracy in detecting unsatisfactory performances, generate more rater observations and improve feedback quality.
METHODS: Participants included two groups of 40 internal medicine residency faculty staff. Both groups received 1-hour training on how to rate trainees in the mini-clinical evaluation exercise (mini-CEX) format. The intervention group was given a new rating form structured with prompts, space for free-text comments, behavioural anchors and fewer scoring levels, whereas the control group used the current American Board of Internal Medicine Mini-CEX form. Participants watched and scored six scripted videotapes of resident performances 2-3 weeks after the training session.
RESULTS: Intervention group participants were more accurate in discriminating satisfactory from unsatisfactory performances (85% versus 73% correct; odds ratio [OR] 2.13, 95% confidence interval [CI] 1.16-3.14, P = 0.02) and yielded more correctly identified unsatisfactory performances (96% versus 52% correct; OR 25.35, 95% CI 9.12-70.46), but were less accurate in identifying satisfactory performances (73% versus 95% correct; OR 0.15, 95% CI 0.05-0.39). Intervention group participants averaged one fewer declared intended feedback item (4.7 versus 5.7) and showed no difference in the amount of feedback that was above minimal in quality. Intervention group participants generated more written evaluative observations (10.8 versus 5.7). Inter-rater agreement improved with the new form (Fleiss' kappa, 0.52 versus 0.30).
CONCLUSIONS: Modifying the currently used direct observation process may produce more recorded observations, increase inter-rater agreement and improve overall rater accuracy, but it may also increase severity error.


Year:  2008        PMID: 19120955     DOI: 10.1111/j.1365-2923.2008.03230.x

Source DB:  PubMed          Journal:  Med Educ        ISSN: 0308-0110            Impact factor:   6.251


  6 in total

1.  Validity and Feasibility of the Minicard Direct Observation Tool in One Training Program.

Authors:  Anthony A Donato; Yoon Soo Park; David L George; Alan Schwartz; Rachel Yudkowsky
Journal:  J Grad Med Educ       Date:  2015-06

2.  In-training assessment using direct observation of single-patient encounters: a literature review. (Review)

Authors:  E A M Pelgrim; A W M Kramer; H G A Mokkink; L van den Elsen; R P T M Grol; C P M van der Vleuten
Journal:  Adv Health Sci Educ Theory Pract       Date:  2010-06-18       Impact factor: 3.853

3.  Design of a clinical competency committee to maximize formative feedback.

Authors:  Anthony A Donato; Richard Alweis; Suzanne Wenderoth
Journal:  J Community Hosp Intern Med Perspect       Date:  2016-12-15

4.  Guidelines: The do's, don'ts and don't knows of direct observation of clinical skills in medical education.

Authors:  Jennifer R Kogan; Rose Hatala; Karen E Hauer; Eric Holmboe
Journal:  Perspect Med Educ       Date:  2017-10

5.  Longitudinal Faculty Development Program to Promote Effective Observation and Feedback Skills in Direct Clinical Observation.

Authors:  Sheira Schlair; Lawrence Dyche; Felise Milan
Journal:  MedEdPORTAL       Date:  2017-10-30

6.  Exploring the influence of gender, seniority and specialty on paper and computer-based feedback provision during mini-CEX assessments in a busy emergency department.

Authors:  Yu-Che Chang; Ching-Hsing Lee; Chien-Kuang Chen; Chien-Hung Liao; Chip-Jin Ng; Jih-Chang Chen; Chung-Hsien Chaou
Journal:  Adv Health Sci Educ Theory Pract       Date:  2016-04-25       Impact factor: 3.853

