
What are the associations between the quantity of faculty evaluations and residents' perception of quality feedback?

Joseph M Blankush, Brijen J Shah, Scott H Barnett, Gaber Badran, Amanda Mercado, Reena Karani, David Muller, I Michael Leitman.

Abstract

OBJECTIVES: To determine whether there is a correlation between the number of evaluations submitted by faculty and the perception of the quality of feedback reported by trainees on a yearly survey.
METHOD: 147 ACGME-accredited training programs sponsored by a single medical school were included in the analysis. Eighty-seven programs (49 core residency programs and 38 advanced training programs) with 4 or more trainees received ACGME survey summary data for academic year 2013-2014. Resident ratings of satisfaction with feedback were analyzed against the number of evaluations completed per resident during the same period. R-squared values were calculated using Pearson correlation coefficients.
RESULTS: 177,096 evaluations were distributed to the 87 programs, of which 117,452 were completed (66%). On average, faculty submitted 33.9 evaluations per resident. Core residency programs had a greater number of evaluations per resident than fellowship programs (39.2 vs. 27.1, respectively, p = 0.15). The average score for the "satisfied with feedback after assignment" survey question was 4.2 (range 2.2-5.0). There was no correlation between the number of evaluations per resident and residents' perception of feedback from faculty, whether analyzed overall or among medical, surgical, and hospital-based programs.
CONCLUSIONS: Resident perception of feedback is not correlated with the number of faculty evaluations. An emphasis on faculty summative evaluation of resident performance is important but appears to miss the mark as a replacement for ongoing, data-driven, structured resident feedback. Understanding the difference between evaluation and feedback is important for all medical educators and learners.

Keywords:  Accreditation; Evaluation; Faculty; Feedback; GME; Graduate medical education; Residency

Year:  2017        PMID: 28386393      PMCID: PMC5369264          DOI: 10.1016/j.amsu.2017.03.001

Source DB:  PubMed          Journal:  Ann Med Surg (Lond)        ISSN: 2049-0801


Introduction

Appropriately structured and timely feedback has a significant impact on learning and achievement [1]. At the same time, the content, format, and frequency of feedback have been investigated and debated at length [2], [3], [4], [5], [6]. Trainees across all levels of medical education frequently identify feedback as an area needing improvement in their educational programs, as they typically want more feedback than they receive [7], [8]. The Accreditation Council for Graduate Medical Education (ACGME) Resident Survey provides programs with annual data on resident satisfaction with feedback after assignments, and programs must aggressively address non-compliance because the Residency Review Committees (RRCs) have begun to issue citations and concerns based on non-compliant responses, with implications for accreditation status. Faculty evaluation of trainee performance is one assessment tool that programs use. Recently, the ACGME's shift to competency-based educational directives [9] has placed greater emphasis on data-driven assessment [10], and the availability of centralized, online evaluation tools has made it easier than ever to distribute numerous summative evaluations. These evaluations might be replacing ongoing, structured feedback in graduate medical education, and this concern is not limited to medical training programs accredited by the ACGME [11]. We hypothesized that if faculty complete a large number of evaluations, trainees' perception of feedback would also be favorable [12]. The purpose of this study is to determine the correlation between the number of faculty evaluations residents receive upon completion of clinical rotations and their perception of faculty feedback, as measured by a standardized resident survey.

Methods

The Accreditation Council for Graduate Medical Education (ACGME) is responsible for the oversight of graduate medical education in the United States. One hundred forty-seven ACGME-accredited training programs within a consortium of 12 hospitals sponsored by a single, private medical school were included in the analysis. Eighty-seven of these programs (49 core residency programs and 38 advanced training programs) had 4 or more residents and thus received summary data from the 2013–2014 ACGME resident survey (Table 1). These 87 programs represented a total of 2137 residents and fellows.
Table 1

Details of programs analyzed during the study.

Program Details
Total number of programs: 147
Total programs with 4+ trainees with ACGME Survey Summary Data: 93
Total programs with ACGME data and evaluations completed: 87
Total residency programs included in the analysis: 49
Total fellowship programs included in the analysis: 38
Total residents and fellows included in the analysis: 2137
Total surgical programs included in the analysis: 17
Total surgical residency programs included in the analysis: 16
Total surgical fellowship programs included in the analysis: 1
Total medicine programs included in the analysis: 52
Total medicine residency programs included in the analysis: 19
Total medicine fellowship programs included in the analysis: 33
Total hospital-based programs included in the analysis: 18
Total hospital-based residency programs included in the analysis: 14
Total hospital-based fellowship programs included in the analysis: 4


The ACGME survey is administered to every ACGME-approved residency and fellowship program between January and June each year to monitor graduate medical clinical education and provide early warning of potential non-compliance with ACGME accreditation standards. All specialty and subspecialty programs (regardless of size) are mandated to participate, and a 70% completion rate is required of each program. Residents and fellows complete the survey anonymously using a 5-point Likert scale. The survey includes questions in the following content areas: Duty Hours, Faculty, Evaluation, Educational Content, Resources, Patient Safety, and Teamwork. The responses to the question in the evaluation section, "how satisfied are you with the written or electronic feedback you receive after you complete a rotation or major assignment?", were analyzed against the number of faculty evaluations completed per trainee during the same time period using data from New Innovations (Uniontown, OH). The Institutional Review Board at the Icahn School of Medicine reviewed the protocol and deemed this study exempt. Pearson correlation coefficients, R-squared values, and p-values were calculated using Microsoft Excel (Microsoft, Redmond, WA) and SPSS Version 15.0 (IBM Corporation, Armonk, New York).
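The correlation analysis described above can be sketched in a few lines of code. The sketch below is illustrative only: the per-program values are hypothetical (the study's actual program-level data are not reproduced here), and the Pearson coefficient is computed directly from its definition, with R-squared obtained by squaring r.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical program-level data (NOT the study's actual values):
# mean evaluations completed per trainee, and mean score on the
# 5-point Likert "satisfied with feedback" survey question.
evals_per_trainee = [12.0, 25.5, 33.9, 41.2, 58.7, 29.3, 47.1, 36.4]
satisfaction = [4.1, 3.8, 4.4, 4.0, 4.6, 4.2, 3.9, 4.3]

r = pearson_r(evals_per_trainee, satisfaction)
r_squared = r ** 2  # proportion of variance in satisfaction explained
print(f"r = {r:.3f}, R^2 = {r_squared:.3f}")
```

In practice, a statistics package (as the authors used Excel and SPSS) would also report a two-sided p-value for the null hypothesis of zero correlation.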

Results

During this time period, 177,096 evaluations were electronically distributed across the 87 programs, of which 117,452 (66%) were completed. On average, faculty submitted 53.0 evaluations per trainee during this one-year period. Core residency programs had a greater average number of evaluations per trainee than advanced training (fellowship) programs (39.2 vs. 27.1, respectively, p = 0.15). The average score for the ACGME Annual Resident Survey question on satisfaction with feedback after assignments was 4.2 (range 2.2–5.0; national mean 3.9). There was no correlation between the number of evaluations per trainee and residents' perception of feedback from faculty (R2 = 0.006, p = 0.53) (Table 2). The correlation varied minimally among medical (R2 = 0.034, p = 0.72), surgical (R2 = 0.055, p = 0.53), and hospital-based (R2 = 0.151, p = 0.23) programs. Advanced training programs showed a small positive correlation (R2 = 0.084, p = 0.47), while core residency programs showed a small negative correlation (R2 = 0.048, p = 0.55).
Table 2

Summary of correlations between evaluations per trainee and overall trainee satisfaction with feedback.

Correlation analysis                                                      R2      P-value
All Programs – Overall                                                    0.01    0.82
All Programs – Residency Programs                                         0.05    0.55
All Programs – Fellowship Programs                                        0.08    0.47
Surgical Programs – Overall                                               0.06    0.53
Medicine Programs – Overall                                               0.03    0.72
Medicine Programs – Residency Programs                                    0.08    0.48
Medicine Programs – Fellowship Programs                                   0.01    0.79
Hospital-Based Programs – Overall                                         0.15    0.23
Hospital-Based Programs – Residency Programs                              0.00    0.92
Hospital-Based Programs – Fellowship Programs                             0.02    0.77
Correlation between program size and satisfaction with feedback           0.13    0.36
Correlation between program size and number of evaluations per resident   0.26    0.10


Large programs were slightly more likely to have higher numbers of evaluations per resident or fellow (R2 = 0.259, p = 0.10). There was a small, negative correlation between the number of residents in the program and resident satisfaction with feedback (R2 = 0.135, p = 0.36).

Discussion

Our study shows that the quantity of faculty evaluation, as assessed by formal written evaluations, does not correlate with resident or fellow satisfaction with feedback after assignments as measured by the ACGME survey. In other words, this process measure does not correlate with resident satisfaction with feedback. This trend was seen irrespective of program size or type of training program. Our data suggest that programs should not focus on measures such as completing more end-of-rotation evaluations in an effort to improve resident satisfaction with feedback, a natural target when trying to respond to this domain in the ACGME Resident Survey. An emphasis on post-assignment faculty evaluations of resident performance is an important part of resident education but, according to residents' perceptions, misses the mark as a replacement for ongoing, data-driven, structured resident feedback. More structured and formal feedback should be incorporated into residency training.

In addition, residents and fellows may not recognize feedback given in the day-to-day process of caring for patients. This lack of awareness may explain the low satisfaction if frequency of feedback is the main driver. Low satisfaction may also be due to the lack of utility or the low quality of feedback contained in written faculty evaluations. Trainees often will not engage with written feedback [13], which limits the influence written evaluations have on trainee development. To increase satisfaction, educators may need to employ multiple modes or sources of feedback.

The strength of this study is the large number of diverse programs in a single sponsoring institution. The study is limited by the absence of a standardized faculty evaluation tool and by the inability to assess what percentage of the evaluations were reviewed by residents. Although most evaluations did not contain qualitative comments, this study did not assess the quality of the written comments that were provided.
The resident and fellow survey tool presents additional limitations because the specific question requires a global, perceptual response rather than specific examples of feedback. One explanation for our findings lies in the trend toward using digital templates in place of verbal debriefing sessions at the end of the day or the end of the rotation. Many time-pressured faculty educators feel that completing detail-oriented electronic evaluation templates, which give the evaluator the opportunity to add text-based narrative about performance, provides ample "feedback" to learners, because these evaluations are immediately available to residents along with the identity of the evaluator [14]. Ende's seminal paper on feedback highlighted the importance and essential components of feedback in clinical medical education and offers some explanation for our findings [15]. There are fundamental differences between true formative feedback and evaluation that explain why evaluation alone misses the mark (Table 3). According to Ende, feedback timing is important and should not be constrained to the end of a given performance period. To be most effective, detailed feedback and the opportunity to gradually improve performance are marks of an effective training environment [16], [17]. One could argue that the ACGME question regarding "feedback after assignment" incorrectly implies that the end of the rotation is the optimal time for feedback, let alone evaluation. Our data support this concept. Programs should not reflexively seek to increase end-of-rotation evaluations in order to raise resident satisfaction. Summative evaluations such as these are, by their nature, generalizations and often require evaluators to seek input from other faculty to complete an evaluation appropriately [18].
The work of Ende and Ericsson provides more appropriate targets for improving residents' perception of feedback after assignments.
Table 3

Comparison of essential feedback characteristics and characteristics of end of rotation evaluations.

Essential feedback characteristics, and whether each holds true of end-of-rotation evaluations:

Feedback should be undertaken with the teacher and trainee working as allies with common goals.
    Evaluations: As with feedback, this depends on the educator, but without ongoing discussions of goals and performance, a common direction may be difficult to ascertain.
Feedback should be well-timed and expected.
    Evaluations: Evaluation timing is often limited to the end of a rotation; the timing is expected, but the content may not be.
Feedback should be based on first-hand data.
    Evaluations: First-hand data may be more difficult to recall and recount when completing end-of-rotation evaluations.
Feedback should be regulated in quantity and limited to behaviors that are remediable.
    Evaluations: Often the goal of evaluations is to increase the quantity of feedback; online tools make it difficult to regulate the behaviors that are referenced.
Feedback should be phrased in descriptive, non-evaluative language.
    Evaluations: By definition, an evaluation is meant to be evaluative, and evaluations are often intended to compare a trainee to peers.
Feedback should deal with specific performances, not generalizations.
    Evaluations: Evaluations are summative and deal in generalizations about a trainee's performance.
Feedback should offer subjective data, labeled as such.
    Evaluations: Depends on the evaluation type and the educator.
Feedback should deal with decisions and actions, rather than assumed intentions or interpretations.
    Evaluations: Depends on the evaluation type and the educator.

Adapted from Ende J. [15].

Our rate of completion of evaluations was similar to other published studies: the percentage of evaluations that faculty complete in a timely fashion ranges from a few percent to as much as 93%, depending upon the program [19]. Authors have suggested additional strategies to enhance resident and fellow feedback after assignments. Peer evaluations of residents might provide even greater value than those from faculty [20], [21]. Holmes and others suggested a structured method to provide formative feedback at the end of a clinical experience [22], [23]. In psychiatry programs, direct observation of clinical work is an excellent opportunity to provide formative feedback to residents; Dalack and co-workers suggested that regular and proper use of random sampling of clinical work, followed by immediate feedback, could help to develop, enhance, and encourage good clinical skills or highlight the need for remediation [24], [25]. This strategy would be enhanced by including the verbal feedback provided in the written summative faculty evaluation. Evaluation provides insight into trainee performance relative to a standard, a common desire in the context of the Next Accreditation System (NAS) and Milestones from the ACGME. We must ensure that the desire to gather numerous faculty evaluations does not hinder our ability to provide meaningful trainee feedback and to improve individual learning and skill building within our programs. We should seek further trainee input in designing and implementing our assessment systems to improve our learners' satisfaction with feedback.

Conclusions

The quantity of faculty evaluations does not correlate with residents' perception of quality feedback. Greater emphasis is needed on instructing faculty to provide regular, timely, and data-driven feedback to residents and fellows, with specific comments on performance. Faculty summative evaluation of resident performance is important, but all stakeholders must understand that it is not a replacement for structured feedback for medical trainees.

Ethical approval

Exempt from Institutional Review.

Funding

None.

Author contribution

Joseph M. Blankush, MD: study design, data analysis, writing. I. Michael Leitman, MD: study design, data collection, data analysis, writing. Brijen J. Shah, MD: study design, writing. Scott H. Barnett, MD: study design, data collection, data analysis. Gaber Badran: study design, data collection, data analysis. Amanda Mercado: data collection. Reena Karani, MD, MHPE: data analysis, writing. David Muller, MD: data analysis, writing.

Conflicts of interest

None.

Guarantor

I. Michael Leitman, MD.

Research registration unique identifying number (UIN)

researchregistry1711.
References (25 in total)

1.  Advancing resident assessment in graduate medical education.

Authors:  Susan R Swing; Stephen G Clyman; Eric S Holmboe; Reed G Williams
Journal:  J Grad Med Educ       Date:  2009-12

2.  Giving feedback on clinical skills: are we starving our young?

Authors:  Peter A M Anderson
Journal:  J Grad Med Educ       Date:  2012-06

3.  Deliberate practice and acquisition of expert performance: a general overview.

Authors:  K Anders Ericsson
Journal:  Acad Emerg Med       Date:  2008-09-05       Impact factor: 3.451

4.  Why medical educators may be failing at feedback.

Authors:  Robert G Bing-You; Robert L Trowbridge
Journal:  JAMA       Date:  2009-09-23       Impact factor: 56.272

5.  Reliability and validity of assessing subspecialty level of faculty anesthesiologists' supervision of anesthesiology residents.

Authors:  Gildasio S De Oliveira; Franklin Dexter; Jane M Bialek; Robert J McCarthy
Journal:  Anesth Analg       Date:  2015-01       Impact factor: 5.108

6.  Deliberate practice as a framework for evaluating feedback in residency training.

Authors:  Stephen Gauthier; Rodrigo Cavalcanti; Jeannette Goguen; Matthew Sibbald
Journal:  Med Teach       Date:  2014-12-16       Impact factor: 3.650

7.  Milestone myths and misperceptions.

Authors:  Wallace A Carter
Journal:  J Grad Med Educ       Date:  2014-03

8.  Giving feedback in medical education: verification of recommended techniques.

Authors:  M G Hewson; M L Little
Journal:  J Gen Intern Med       Date:  1998-02       Impact factor: 5.128

9.  Consultation skills of young doctors: I--Benefits of feedback training in interviewing as students persist.

Authors:  P Maguire; S Fairbairn; C Fletcher
Journal:  Br Med J (Clin Res Ed)       Date:  1986-06-14

10.  Qualitative study about the ways teachers react to feedback from resident evaluations.

Authors:  Thea van Roermund; Marie-Louise Schreurs; Henk Mokkink; Ben Bottema; Albert Scherpbier; Chris van Weel
Journal:  BMC Med Educ       Date:  2013-07-16       Impact factor: 2.463

