
Developing expert-derived rating standards for the peer assessment of lectures.

Lori R Newman, Dara D Brodsky, David H Roberts, Stephen R Pelletier, Anna Johansson, Charles M Vollmer, K Meredith Atkins, Richard M Schwartzstein.

Abstract

PURPOSE: For peer review of teaching to be credible and reliable, peer raters must be trained to identify and measure teaching behaviors accurately. Peer rater training, therefore, must be based on expert-derived rating standards of teaching performance. The authors sought to establish precise lecture rating standards for use in peer rater training at their school.
METHOD: From 2008 to 2010, a panel of experts, who had previously helped to develop an instrument for the peer assessment of lecturing, met to observe, discuss, and rate 40 lectures, using a consensus-building model to determine key behaviors and levels of proficiency for each of the instrument's 11 criteria. During this process, the panelists supplemented the original instrument with precise behavioral descriptors of lecturing. The reliability of the derived rating standards was assessed by having the panelists score six sample lectures independently.
RESULTS: Intraclass correlation coefficients of the panelists' ratings of the lectures ranged from 0.75 to 0.96. There was a moderate to high positive association between 10 of the instrument's 11 criteria and the overall performance score (r = 0.752-0.886). There were no statistically significant differences among raters in terms of leniency or stringency of scores.
CONCLUSIONS: Two relational themes, content and style, were identified within the instrument's variables. Recommendations for developing expert-derived rating standards include using an interdisciplinary group for observation, discussion, and verbal identification of behaviors; asking members to consider views that contrast with their own; and noting key teaching behaviors for use in future peer rater training.


Year:  2012        PMID: 22281550     DOI: 10.1097/ACM.0b013e3182444fa3

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


Related articles:  7 in total

1.  What Traditional Lectures Can Learn From Podcasts.

Authors:  Holland Kaplan; Divya Verma; Zaven Sargsyan
Journal:  J Grad Med Educ       Date:  2020-06

2.  Quantitative Study of the Characteristics of Effective Internal Medicine Noon Conference Presentations.

Authors:  Traci Fraser; Zaven Sargsyan; Travis P Baggett; Meridale Baggett
Journal:  J Grad Med Educ       Date:  2016-05

3.  Development and Validation of a Lecture Assessment Tool for Emergency Medicine Residents.

Authors:  Jeffery Hill; Matthew Stull; Brian Stettler; Robbie Paulsen; Kimberly Hart; Erin McDonough
Journal:  AEM Educ Train       Date:  2018-09-17

4.  Impact of peer feedback on the performance of lecturers in emergency medicine: a prospective observational study.

Authors:  Miriam Ruesseler; Faidra Kalozoumi-Paizi; Anna Schill; Matthias Knobe; Christian Byhahn; Michael P Müller; Ingo Marzi; Felix Walcher
Journal:  Scand J Trauma Resusc Emerg Med       Date:  2014-12-04       Impact factor: 2.953

5.  Are respiratory specialist registrars trained to teach?

Authors:  Emer Kelly; Sinead M Walsh; Jeremy B Richards
Journal:  ERJ Open Res       Date:  2015-07-06

6.  Does Faculty Follow the Recommended Structure for a New Classroom-based, Daily Formal Teaching Session for Anesthesia Residents?

Authors:  Anjum Anwar; Pedro Tanaka; Matias V Madsen; Alex Macario
Journal:  Cureus       Date:  2016-10-06

7.  The effect of written standardized feedback on the structure and quality of surgical lectures: A prospective cohort study.

Authors:  Jasmina Sterz; Sebastian H Höfer; Bernd Bender; Maren Janko; Farzin Adili; Miriam Ruesseler
Journal:  BMC Med Educ       Date:  2016-11-14       Impact factor: 2.463

