Constructing a Shared Mental Model for Feedback Conversations: Faculty Workshop Using Video Vignettes Developed by Residents.

Alex Moroz, Anna King, Baruch Kim, Heidi Fusco, Kristin Carmody.

Abstract

Introduction: Providing feedback is a fundamental principle in medical education; however, as educators, our community lacks the necessary skills to give meaningful, impactful feedback to those under our supervision. By improving our feedback-giving skills, we provide concrete ways for trainees to optimize their performance, ultimately leading to better patient care.
Methods: In this faculty development workshop, faculty groups used six feedback video vignettes scripted, enacted, and produced by residents to arrive at a shared mental model of feedback. During workshop development, we used qualitative analysis for faculty narratives combined with the findings from a focused literature review to define dimensions of feedback.
Results: Twenty-three faculty (physical medicine and rehabilitation and neurology) participated in seven small-group workshops. Analysis of group discussion notes yielded 343 codes that were collapsed into 25 coding categories. After incorporating the results of a focused literature review, we identified 48 items grouped into 10 dimensions of feedback. Online session evaluation indicated that faculty members liked the workshop's format and thought they were better at providing feedback to residents as a result of the workshop.
Discussion: Small faculty groups were able to develop a shared mental model of dimensions of feedback that was also grounded in medical education literature. The theme of specificity of feedback was prominent and echoed recent medical education research findings. Defining performance expectations for feedback providers in the form of a practical and psychometrically sound rubric can enhance reliable scoring of feedback performance assessments and should be the next step in our work.

Keywords:  Faculty Development; Feedback; Physical Medicine and Rehabilitation; Workshop

Year:  2019        PMID: 31139740      PMCID: PMC6519682          DOI: 10.15766/mep_2374-8265.10821

Source DB:  PubMed          Journal:  MedEdPORTAL        ISSN: 2374-8265


Educational Objectives

By the end of this activity, learners will be able to:
1. Identify key behaviors in faculty-resident feedback conversations.
2. Describe dimensions of meaningful and impactful feedback.
3. Discuss effective strategies for managing challenging feedback conversations.

Introduction

The ultimate goal of assessment practices in health professional education is improved health care. High-quality, credible feedback is necessary for assessment to provide a meaningful mechanism through which physicians can be expected to grow[1] and provide better care. Feedback is fundamental to everything physicians do—it is an essential part of every framework, every curriculum, every teaching interaction. When we explored the experiences of physical medicine and rehabilitation (PM&R) residents who were part of a structured program of self-assessment followed by faculty feedback,[2] we found that both residents and faculty receiving feedback felt they were not ready for the challenges encountered in real-life feedback conversations. The impetus for improving feedback-givers’ skills was born out of the study findings, as it turned out that learners were not easily fooled by substandard feedback.[3] Interestingly, a similar theme emerged from a mixed-methods study that focused on faculty receiving feedback from residents.[4] These combined findings across specialties and professional roles reinforced our belief that the development of feedback-giving skills is an underexplored area of medical education delivery and scholarship; strengthening these skills may allow us to take another step towards improving physicians’ performance and, ultimately, providing better patient care.[5] Meaningful and impactful feedback conversations are not easy for most feedback providers.
On the one hand, faculty members’ tensions impact not only the feedback process—reaching the right balance of constructive and positive feedback, dealing with their own perceptions of self-efficacy, as well as the receptiveness, insight, potential, and skill of the residents—but also the resident-faculty relationship and contextual factors.[6] On the other hand, faculty from both university-based and community-based programs reported inadequate training and incomplete understanding of the best ways to deliver feedback[6] despite the availability of excellent practical guides.[7,8] This does not appear to be just a perception issue—a recent qualitative study of simulated feedback encounters suggested that faculty skills do not in fact match recommended practices in several areas.[9] A number of curricula to improve faculty feedback-giving skills have been described by others, including several MedEdPORTAL publications. Some of these focus on peer or near-peer feedback. For example, Brown, Rangachari, and Melia developed an interactive multimodal feedback coaching workshop for residents giving feedback to interns.[10] Tews and colleagues created a course to improve resident skills in providing feedback to medical students.[11] Others have experimented with improving faculty feedback skills. 
For example, Schlair, Dyche, and Milan described a longitudinal faculty development program designed by faculty leaders and found positive impact on feedback quality as perceived by residents.[12] Sargeant and colleagues described a workshop based on a thoroughly researched[8,13-15] coaching model of feedback.[16] Although the imposition of predeveloped frameworks and top-down faculty development is tempting and may be an efficient approach, the very process of creating the framework and developing a holistic shared mental model may be essential for learning and participant buy-in.[17] Our goal, therefore, was to develop a workshop that would allow faculty groups to construct a shared mental model that was locally grown and sensitive to specialty and institution, yet based on general principles supported by medical education. This workshop adds to the existing body of work in one other way. Other scholars have identified variability in feedback recipients (four resident challenges) and suggested adjusting feedback provider approaches accordingly.[9] Since our workshop is based on scenarios that were selected, scripted, and enacted by feedback recipients (residents), it allowed us to also explore variability in feedback provider (faculty) behaviors. We believe that using more than one perspective in developing a shared mental model for feedback allowed us to highlight multiple facets of the feedback construct and understand it more fully. Most importantly, in addition to promoting a deeper understanding of the development of the knowledge, attitudes, skills, and habits necessary for providing meaningful and impactful feedback, improving faculty feedback-giving skills may allow us to take another step towards improving physicians’ performance and patient care.

Methods

Local Feedback Context

After surveying residents in our large (N = 36) PM&R program regarding feedback they had received[18] in 2015, we implemented a feedback bundle that included resident self-assessment, a faculty assessment, and a joint conversation. This process was facilitated by using an iPad app called PRIMES (locally developed based on the RIME [Reporter, Interpreter, Manager, Educator] framework,[19] with the addition of Professionalism and procedural Skills). Using their iPad, the residents first self-assessed (on their own) across the PRIMES dimensions and created three learning goals. They then met with, and passed the device to, the faculty, who assessed the resident blindly, using the same framework. After faculty submission of their assessment, the app compared resident self-assessment and faculty assessment, resulting in visual highlighting of areas of agreement and disagreement. This served as a starting point for a feedback conversation. Each resident was required to engage in this process at least once a month and was encouraged to do so mid-rotation. This was reinforced during semiannual meetings with the program director and via emails from chief residents. During 14 monthly rotations, 48 residents and 16 faculty members completed 343 PRIMES encounters. Each faculty member participated in a median of four encounters. Average resident compliance with the once-a-month requirement during the same time period was 71%, with most feedback encounters clustering during the last week of the rotation.
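The app's comparison step can be sketched as follows. This is a minimal illustration only: PRIMES is a locally developed app whose internals are not published, so the dimension names used as dictionary keys, the numeric ratings, and the exact-match agreement rule are all assumptions made for the sketch.

```python
# Illustrative sketch of the PRIMES comparison step. The dimension names,
# rating values, and exact-match agreement rule are assumptions; the real
# app's scale and matching logic are not described in the source.
PRIMES_DIMENSIONS = [
    "Professionalism", "Reporter", "Interpreter",
    "Manager", "Educator", "procedural Skills",
]

def compare_assessments(self_assessment, faculty_assessment):
    """Label each PRIMES dimension 'agreement' or 'disagreement'.

    Both arguments map a dimension name to a numeric rating. The app
    highlights matching and non-matching dimensions visually; here that
    is modeled as a simple per-dimension label.
    """
    return {
        dim: ("agreement"
              if self_assessment[dim] == faculty_assessment[dim]
              else "disagreement")
        for dim in PRIMES_DIMENSIONS
    }

# Hypothetical ratings for a single encounter (resident first, then faculty).
resident = {"Professionalism": 4, "Reporter": 3, "Interpreter": 3,
            "Manager": 2, "Educator": 2, "procedural Skills": 3}
faculty = {"Professionalism": 4, "Reporter": 3, "Interpreter": 2,
           "Manager": 2, "Educator": 3, "procedural Skills": 3}

# Dimensions labeled "disagreement" become natural starting points
# for the feedback conversation.
highlights = compare_assessments(resident, faculty)
```

In this sketch, the dimensions where the two ratings differ (here, Interpreter and Educator) are the ones the app would visually flag to open the conversation.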

Development of Feedback Vignettes

A resident developed six vignette scripts based on some of the issues identified in our qualitative study[2]—lack of faculty engagement or time (Distracted Attending, Impersonal Attending), challenging resident behaviors (Cocky Connor, Defensive Debbie, Self-Effacing Sammy), and a portrayal of a constructive conversation from the resident perspective (Appendix B). The scripts were reviewed by one of us (Alex Moroz), who made minor edits. We subsequently recruited two residents (Baruch Kim, Anna King) and a faculty member (Heidi Fusco), who enacted the six vignettes. These were recorded and produced (Appendices C–H).

Workshop Implementation

We conducted this 1-hour workshop (in small groups) with faculty in the departments of PM&R and neurology who were feedback providers for our residents (using the PRIMES feedback bundle described above). For convenience, faculty at each clinical site (e.g., inpatient brain injury) or resident rotation (e.g., electrodiagnosis) formed distinct groups. All workshops were conducted by one of us (Alex Moroz), who took handwritten notes documenting the faculty discussion.

Development of a Shared Mental Model

To accomplish the goal of developing a shared mental model of useful feedback conversations, each group of faculty viewed the video vignettes and afterward had a discussion that focused on faculty behavior in the vignette, aiming to answer the following two questions: “What did faculty do well?” “What could faculty do better?”

Qualitative Analysis and Literature Review

Group responses were recorded as positive statements to distill strong feedback practices (e.g., “She did not maintain eye contact” became “maintaining eye contact”) and scanned into qualitative analysis software (ATLAS.ti V.1.5.3; Scientific Software Development GmbH, Berlin, Germany). Using segments of text identified in the notes, we defined coding categories and created a coding scheme that identified individual codes and their relationships. We reached consensus by discussing this scheme and individual coding categories. In order to ensure both sensitivity to specialty and institutional context and conformance to general principles supported by medical education theory, we conducted a focused literature review and used the results to group our items (codes) into broader dimensions of feedback.

Session Evaluation

Faculty participants received an online session-evaluation survey (three items, developed by the authors) focusing on the first two levels of Kirkpatrick's hierarchy,[20] as well as suggestions for improving the workshop (Appendix I). After analyzing responses, the session evaluation was revised to include five questions and better align with educational objectives.

Materials/Logistics/Setup Needed for Implementation

In order to implement this workshop locally, the following materials and local resources are necessary:
- Buy-in and support from the division head or department chair.
- Facilitator's guide (Appendix A).
- Faculty small-group facilitator and faculty champion (can be the same person).
- Support staff to schedule the workshop and remind faculty.
- Sufficient space to accommodate small-group discussions.
- Working audiovisual equipment capable of playing video on a sufficiently large screen.

Results

Twenty-three faculty members participated in seven small-group workshops. Average group size was 3.3 (range: two to five participants), and there were 16 male and seven female faculty participants. Of the seven workshops, six were with PM&R faculty groups, and one was with a neurology faculty group. Initial qualitative analysis of group discussion notes yielded 343 codes that were collapsed into 25 coding categories (Table). Each cell represents the number of times faculty (all groups) mentioned a particular coding category during discussion of the case in question.
Table. Distributions of Coding Categories Among the Six Vignettes

Vignettes: CC = Cocky Connor; CV = Constructive Conversation; DD = Defensive Debbie; DA = Distracted Attending; IA = Impersonal Attending; SS = Self-Effacing Sammy.

Coding Category | CC | CV | DD | DA | IA | SS | Total
Being confident and staying in control | 3 | 0 | 5 | 0 | 0 | 0 | 8
Being constructive without offending | 6 | 2 | 1 | 3 | 1 | 1 | 14
Being honest about not enough facts or not enough time | 0 | 0 | 1 | 1 | 5 | 0 | 7
Being organized and completing the encounter | 1 | 0 | 1 | 1 | 0 | 1 | 4
Being polite and respectful | 4 | 1 | 2 | 3 | 2 | 1 | 13
Being positive, using positive language | 2 | 6 | 8 | 2 | 1 | 6 | 25
Being prepared | 0 | 1 | 0 | 0 | 7 | 0 | 8
Being present, engaged and paying attention | 1 | 0 | 2 | 8 | 5 | 3 | 19
Being specific and giving examples | 6 | 17 | 15 | 4 | 1 | 8 | 51
Being warm, approachable, supportive, encouraging, reassuring | 4 | 5 | 3 | 7 | 4 | 10 | 33
Confronting wrong perceptions and inappropriate behaviors | 14 | 0 | 3 | 0 | 0 | 1 | 18
Dedicating time and minimizing disruptions | 0 | 0 | 0 | 6 | 4 | 0 | 10
Defining expectations, reviewing performance over time | 1 | 2 | 2 | 0 | 0 | 1 | 6
Discussing areas of improvement, action plan, and follow-up | 4 | 8 | 1 | 0 | 0 | 1 | 14
Ensuring quiet, private, appropriate environment | 3 | 0 | 0 | 1 | 0 | 1 | 5
Knowing the resident and basing feedback on objective facts | 0 | 1 | 1 | 3 | 7 | 1 | 13
Listening and having a dialogue | 1 | 2 | 1 | 0 | 0 | 4 | 8
Making eye contact and leaning forward | 2 | 3 | 3 | 4 | 0 | 4 | 16
Not just “going through the motions” | 0 | 0 | 0 | 4 | 0 | 0 | 4
Putting things in perspective | 3 | 0 | 2 | 0 | 0 | 0 | 5
Reacting to resident answers, probing deeper | 6 | 4 | 5 | 0 | 0 | 17 | 32
Redirecting and disarming | 3 | 0 | 2 | 0 | 0 | 0 | 5
Referring up for wellness and psychiatric concerns | 0 | 0 | 0 | 0 | 0 | 3 | 3
Starting with self-assessment | 0 | 2 | 1 | 1 | 5 | 5 | 14
Staying calm, composed, and nonconfrontational | 4 | 1 | 3 | 0 | 0 | 0 | 8
Vignette total | 68 | 55 | 62 | 48 | 42 | 68 | 343
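The Table's tallies can be re-checked mechanically; the following is a verification sketch added for this write-up (not part of the original analysis), with counts transcribed from the Table. It confirms that the per-category and per-vignette totals sum to the 343 codes reported in the text.

```python
# Re-tally of counts transcribed from the Table (verification sketch only).
# Column order: Cocky Connor, Constructive Conversation, Defensive Debbie,
# Distracted Attending, Impersonal Attending, Self-Effacing Sammy.
counts = {
    "Being confident and staying in control": [3, 0, 5, 0, 0, 0],
    "Being constructive without offending": [6, 2, 1, 3, 1, 1],
    "Being honest about not enough facts or not enough time": [0, 0, 1, 1, 5, 0],
    "Being organized and completing the encounter": [1, 0, 1, 1, 0, 1],
    "Being polite and respectful": [4, 1, 2, 3, 2, 1],
    "Being positive, using positive language": [2, 6, 8, 2, 1, 6],
    "Being prepared": [0, 1, 0, 0, 7, 0],
    "Being present, engaged and paying attention": [1, 0, 2, 8, 5, 3],
    "Being specific and giving examples": [6, 17, 15, 4, 1, 8],
    "Being warm, approachable, supportive, encouraging, reassuring": [4, 5, 3, 7, 4, 10],
    "Confronting wrong perceptions and inappropriate behaviors": [14, 0, 3, 0, 0, 1],
    "Dedicating time and minimizing disruptions": [0, 0, 0, 6, 4, 0],
    "Defining expectations, reviewing performance over time": [1, 2, 2, 0, 0, 1],
    "Discussing areas of improvement, action plan, and follow-up": [4, 8, 1, 0, 0, 1],
    "Ensuring quiet, private, appropriate environment": [3, 0, 0, 1, 0, 1],
    "Knowing the resident and basing feedback on objective facts": [0, 1, 1, 3, 7, 1],
    "Listening and having a dialogue": [1, 2, 1, 0, 0, 4],
    "Making eye contact and leaning forward": [2, 3, 3, 4, 0, 4],
    "Not just going through the motions": [0, 0, 0, 4, 0, 0],
    "Putting things in perspective": [3, 0, 2, 0, 0, 0],
    "Reacting to resident answers, probing deeper": [6, 4, 5, 0, 0, 17],
    "Redirecting and disarming": [3, 0, 2, 0, 0, 0],
    "Referring up for wellness and psychiatric concerns": [0, 0, 0, 0, 0, 3],
    "Starting with self-assessment": [0, 2, 1, 1, 5, 5],
    "Staying calm, composed, and nonconfrontational": [4, 1, 3, 0, 0, 0],
}

# Row totals (per coding category) and column totals (per vignette).
category_totals = {cat: sum(row) for cat, row in counts.items()}
vignette_totals = [sum(col) for col in zip(*counts.values())]
grand_total = sum(vignette_totals)  # equals 343, matching the reported code count
```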
After incorporating the results of a focused literature review, we identified 48 items that we grouped into 10 dimensions of feedback, supported by both our data and published literature (Figure; see Appendix J for the supporting literature references). Faculty groups identified 11 out of 32 items discussed in the literature (indicated by italics in the Figure).
Figure.

Feedback dimensions and items identified by small faculty groups. Items in italics are those discussed in the published literature and also identified by the small faculty groups. See Appendix J for the published literature references.

The theme of specificity of feedback emerged as a prominent finding when we explored how the coding categories distributed across the feedback vignette scenarios (Table). Faculty agreed that the cocky resident needed confronting, the defensive resident needed specific examples, and the self-effacing resident needed exploration and support. Four participants completed the online session-evaluation questionnaire (Appendix I). All four reported that they liked (two) or liked a lot (two) the workshop's format, and all four thought they were better at providing feedback to residents as a result of the workshop. Recommendations for improvement included providing more videos on good feedback technique (including an optimal feedback session), “keeping it up,” and removing the titles of the videos, as they might bias viewers.

Discussion

In this faculty development workshop, small groups of faculty viewed feedback video vignettes together and arrived at a shared mental model of feedback. We used qualitative analysis for faculty narratives combined with the findings from a focused literature review to define dimensions of feedback and shared these with the participating faculty. In a short time (1 hour), small faculty groups were able to develop a shared mental model of dimensions of meaningful and impactful feedback that was also grounded in medical education literature. While individual group discussions varied, the characteristics of feedback that each group thought to be most valuable were similar across all groups. Faculty groups identified 11 out of 32 items discussed in the published literature (Appendix J). The largest number of missed items clustered around the dimension of Starting With Self-Assessment, perhaps because the structured feedback encounter automatically included resident self-assessment and was taken for granted as part of the process by the participating faculty. Another explanation is that physicians, similar to other professions, do not accurately self-assess or include this process in their daily practice or cultural norms.[21] There were two dimensions (Preparation, Engagement, Investment and Confidence, Direction, Correction) that received more attention from the participating faculty than from the previously published literature. We hypothesize that this may have been a result of the specific scenarios selected for video vignettes. However, considering that these vignettes were created by residents and deemed representative of relevant feedback interactions, it is likely that these dimensions will be generalizable to other settings. 
In fact, the theme of specificity of feedback not only appeared as one of the 10 dimensions (Individualizing the Conversation), it was also a prominent finding when we explored how the coding categories distributed across the feedback vignette scenarios (Table). Faculty agreed that the cocky resident needed confronting, the defensive resident needed specific examples, and the self-effacing resident needed exploration and support. This closely echoes the findings of Roze des Ordons, Cheng, Gaudet, Downar, and Lockyer,[22] who explored the challenges that faculty experienced and the approaches taken in adapting feedback conversations to different residents. While the faculty questioned their ability, they were able to adapt their approach to feedback, drawing on techniques of coaching for highly performing residents, directing for residents demonstrating insight gaps, mentoring and support for emotionally distressed residents, and mediation for overly confident residents. Our work has several important limitations. As the workshop was conducted within a single institution and specialty, the items and dimensions identified may not be directly generalizable to other settings. Similarly, we decided not to change or remove the vignette titles as these represented our resident perceptions of the process. We think that including findings from a focused literature review may have mitigated this somewhat. In addition, feedback is a universal skill necessary across all specialties, and therefore, it would be expected that other settings would encounter similar challenges. Similarly, our work was limited to the graduate medical education setting, and we do not know if our findings are generalizable to the contexts of undergraduate or postgraduate medical education or broader health professions education. There were also challenges we encountered in our program evaluation approach. 
The response rate (four out of 23) was rather low, and in retrospect, we should have followed the initial survey invitation with several reminders (lesson learned). Additionally, our session-evaluation questionnaire focused on lower levels of Kirkpatrick's hierarchy, rather than on changes in faculty behaviors or, better yet, resident behavior changes or patient outcomes. While we acknowledge the rising difficulty of measuring outcomes at each step up the hierarchy, we also recognize the matching increase in the meaningfulness of the findings. We believe that while these findings have deepened our theoretical understanding of the dimensions of feedback, defining performance expectations for feedback providers in the form of a practical and psychometrically sound rubric can increase reliability of scoring for feedback assessments and may be the next logical step in our work. Although rubrics may not ensure validity of judgment of feedback ratings per se, they can potentially promote learning and make teaching feedback providers easier by clarifying both the criteria and the expectations, thereby facilitating feedback and self-assessment.[23] We offer the following recommendations to readers who might consider replicating this workshop. We do not think that a formal analysis of the participants’ discussions, as was done here, is necessary after each workshop; on the other hand, an explicit and frank discussion and a local consensus-building process are paramount. By limiting the size of small groups to three to seven participants, the workshop can be scaled up to any number of faculty by manipulating the number of groups. We do not think that increasing group size is as effective because it would be difficult to ensure engagement of all group members in larger groups.
Finally, we found our workshop evaluation response rate to be very low with a single administration of the survey; we suggest at least two strategically timed reminders after the initial request for feedback.

Appendices
A. Facilitator Guide.docx
B. Vignette Scripts.docx
C. Cocky Connor.mp4
D. Constructive Conversation.mp4
E. Defensive Debbie.mp4
F. Distracted Attending.mp4
G. Impersonal Attending.mp4
H. Self-Effacing Sammy.mp4
I. Session Evaluation.docx
J. Dimensions and Items.docx
All appendices are peer reviewed as integral parts of the Original Publication.
References (17 in total; 10 shown below)

1.  Attending and resident satisfaction with feedback in the emergency department.

Authors:  Lalena M Yarris; Judith A Linden; H Gene Hern; Cedric Lefebvre; David M Nestler; Rongwei Fu; Esther Choo; Joseph LaMantia; Patrick Brunett
Journal:  Acad Emerg Med       Date:  2009-12       Impact factor: 3.451

2.  AM Last Page. Mapping the ACGME competencies to the RIME framework.

Authors:  Rechell G Rodriguez; Louis N Pangaro
Journal:  Acad Med       Date:  2012-12       Impact factor: 6.893

3.  Features of assessment learners use to make informed self-assessments of clinical performance.

Authors:  Joan Sargeant; Kevin W Eva; Heather Armson; Ben Chesluk; Tim Dornan; Eric Holmboe; Jocelyn M Lockyer; Elaine Loney; Karen V Mann; Cees P M van der Vleuten
Journal:  Med Educ       Date:  2011-06       Impact factor: 6.251

4.  Exploring Faculty Approaches to Feedback in the Simulated Setting: Are They Evidence Informed?

Authors:  Amanda Lee Roze des Ordons; Adam Cheng; Jonathan E Gaudet; James Downar; Jocelyn M Lockyer
Journal:  Simul Healthc       Date:  2018-06       Impact factor: 1.929

5.  The R2C2 Model in Residency Education: How Does It Foster Coaching and Promote Feedback Use?

Authors:  Joan Sargeant; Jocelyn M Lockyer; Karen Mann; Heather Armson; Andrew Warren; Marygrace Zetkulic; Sophie Soklaridis; Karen D Könings; Kathryn Ross; Ivan Silver; Eric Holmboe; Cindy Shearer; Michelle Boudreau
Journal:  Acad Med       Date:  2018-07       Impact factor: 6.893

6.  How faculty members experience workplace-based assessment rater training: a qualitative study.

Authors:  Jennifer R Kogan; Lisa N Conforti; Elizabeth Bernabeo; William Iobst; Eric Holmboe
Journal:  Med Educ       Date:  2015-07       Impact factor: 6.251

7.  Accuracy of physician self-assessment compared with observed measures of competence: a systematic review.

Authors:  David A Davis; Paul E Mazmanian; Michael Fordis; R Van Harrison; Kevin E Thorpe; Laure Perrier
Journal:  JAMA       Date:  2006-09-06       Impact factor: 56.272

8.  Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes.

Authors:  Kevin W Eva; Heather Armson; Eric Holmboe; Jocelyn Lockyer; Elaine Loney; Karen Mann; Joan Sargeant
Journal:  Adv Health Sci Educ Theory Pract       Date:  2011-04-06       Impact factor: 3.853

9.  Longitudinal Faculty Development Program to Promote Effective Observation and Feedback Skills in Direct Clinical Observation.

Authors:  Sheira Schlair; Lawrence Dyche; Felise Milan
Journal:  MedEdPORTAL       Date:  2017-10-30

10.  Beyond the Sandwich: From Feedback to Clinical Coaching for Residents as Teachers.

Authors:  Lorrel E Brown; Deepa Rangachari; Michael Melia
Journal:  MedEdPORTAL       Date:  2017-09-18
Cited by (2 in total)

1.  Feedback Focused: A Learner- and Teacher-Centered Curriculum to Improve the Feedback Exchange in the Obstetrics and Gynecology Clerkship.

Authors:  Natasha R Johnson; Andrea Pelletier; Celeste Royce; Ilona Goldfarb; Tara Singh; Treven C Lau; Deborah D Bartz
Journal:  MedEdPORTAL       Date:  2021-03-25

2.  Can You Hear Me Now? Helping Faculty Improve Feedback Exchange for Internal Medicine Subspecialty Fellows.

Authors:  Sonia Ananthakrishnan; Mara Eyllon; Craig Noronha
Journal:  MedEdPORTAL       Date:  2021-02-17