
Evaluating Family Medicine Resident Narrative Comments Using the RIME Scheme.

Destiny Folk1, Christian Ryckeley2, Michelle Nguyen3, Jeremiah J Essig4, Gary L Beck Dallaghan4, Catherine Coe4.   

Abstract

Background: In 2013, the Accreditation Council for Graduate Medical Education (ACGME) launched the Next Accreditation System, which required explicit documentation of trainee competence in six domains. To capture narrative comments, the University of North Carolina Family Medicine Residency Program developed a mobile application for documenting real-time observations. Objective: The objective of this work was to assess whether the Reporter, Interpreter, Manager, Educator (RIME) framework could be applied to these narrative comments to convey a degree of competency.
Methods: From August to December 2020, seven individuals analyzed the narrative comments of four family medicine residents. The narrative comments were collected from July to December 2019. Each individual applied the RIME framework to the comments, and the team then met to discuss the results. Comments on which 5 of 7 individuals agreed were not discussed further. All other comments were discussed until consensus was achieved.
Results: 102 unique comments were assessed. Of those comments, 25 (25.5%) met threshold for assessor agreement after independent review. Group discussion about discrepancies led to consensus about the appropriate classification for 92 (90.2%). General comments on performance were difficult to fit into the RIME framework. Conclusions: Application of the RIME framework to narrative comments may add insight into trainee progress. Further faculty development is needed to ensure comments have discrete elements needed to apply the RIME framework and contribute to overall evaluation of competence.
© The Author(s) 2022.

Keywords:  family medicine residency; mobile application; qualitative research; reporter-interpreter-manager-educator

Year:  2022        PMID: 35356418      PMCID: PMC8958670          DOI: 10.1177/23821205221090162

Source DB:  PubMed          Journal:  J Med Educ Curric Dev        ISSN: 2382-1205


Introduction

Competence is multi-dimensional and dynamic, changing with time and linked to experience and setting. The Accreditation Council for Graduate Medical Education (ACGME) defined six domains of competence expected of every resident.[2,3] Programs individually developed methods to gather assessments of trainees’ progress to guide promotion decisions. Medical education evaluations often rely on rating scales defining trainee performance.[4,5] Numerical ratings create a ranking system that can be used to benchmark trainee progress, but this reductionist approach has come under scrutiny because of rating inflation and poor correlations with narrative comments.[7,8] Where competency-based performance is concerned, evaluations that depend on numerical systems fall short in capturing and evaluating progress in complex tasks and roles. Questions have arisen about the validity and reliability of numeric ratings and scoring systems, and about whether the qualities and capabilities essential for good performance after graduation can be assessed using grades alone.

Narrative-based evaluations of clinical performance provide context for numerical ratings. With accreditation systems increasingly requiring programs to document progress, reliable systems for evaluation are needed more than ever. Entrustable professional activities (EPAs) have been advocated as a more advanced way of evaluating competence, but “entrustment” still elicits confusion among clinician educators. The Reporter-Interpreter-Manager-Educator (RIME) model[14,15] is a developmental framework for assessing trainees in clinical settings. RIME suggests that trainees progress through four stages, each requiring more complex application of the skills attained at the previous level. The model offers a descriptive nomenclature readily understood and accepted by trainees and preceptors. Ryan and colleagues reported on the reliability of the RIME framework used as a numeric rating with medical students.
Narrative comments modelled on this framework can provide a richness of detail about progression that numerical scales cannot. In response to the ACGME requirements, and to better document narrative-based descriptions of learners, the University of North Carolina Family Medicine Residency Program developed the Mobile Medical Milestones application (M3App©), which allows faculty to document real-time direct observations and provide formative feedback to residents (Figure 1). We sought to apply a process developed by Hanson et al to evaluate narrative comments from the M3App© using the RIME framework, determining developmental progress from the feedback. The specific questions explored in this study were:
Figure 1.

M3APP© feedback process. When completing feedback on a resident, faculty have the opportunity to enter a comment (A). They are then asked to identify a broad competency (B), followed by choosing a detailed competency (C). Therefore, multiple competencies can be selected for a single narrative comment. The figure is an adaptation of the M3APP©.

How accurately can independent reviewers assign RIME categories to narrative comments? What challenges emerge from applying the RIME model to narrative comments?

Methods

Narrative comments for four family medicine residents (two PGY-1 and two PGY-2) were chosen for inclusion in this exploratory study. Narrative comments from July to December 2019 were obtained and de-identified to blind the researchers to the residents as well as the evaluators (Figure 2). Because the M3App© allows preceptors to choose which ACGME Milestones a comment relates to, duplicate comments appeared in the download for each resident; these were coded only once. This study was reviewed and approved by the university institutional review board.
Figure 2.

Example output of resident performance from the M3App©. Faculty and resident names were removed to ensure anonymity.

From August to December 2020, we analyzed narrative comments from the M3App©. Our team consisted of a medical education researcher, a family medicine physician, an internal medicine intern, and four senior medical students. Prior to the narrative analysis, background material about the RIME framework was discussed to ensure all members of the team understood each classification. Example comments were presented so that the team had a shared mental model. All narrative comments were independently coded deductively based on the RIME framework. If a comment appeared to fit more than one category, multiple RIME categories were selected. If a comment was unclear or simply a compliment, it was categorized as not applicable. The research team then met to discuss the individual coding results. Narrative comments on which 5 of 7 coders agreed were not discussed further. All other comments were discussed until consensus was achieved.

Results

For four residents, 221 narratives were obtained. After removing duplicates, 102 unique narrative comments remained. For the first research question, rater agreement was analyzed. Only 25 (25.5%) records met our threshold for assessor agreement. Inter-rater reliability for the independent review resulted in a Cronbach's alpha = .427. After discussion, 92 (90.2%) evaluations achieved consensus among assessors (Table 1).
Table 1.

Narrative comment rater agreement.

# of Assessors Agreeing    Independent Review Agreement    Agreement After Consensus Building
7/7                        5                               12
6/7                        8                               24
5/7                        12                              56
≤ 4/7                      77                              10
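The 5-of-7 agreement screen and the rater-as-item reliability calculation can be sketched in a few lines of Python. This is an illustrative sketch only: the ratings are made-up values, not the study data, and the ordinal coding of RIME categories (R=1 through E=4, with 0 for not applicable) is an assumption about how codes could be numbered for a Cronbach's alpha computation.

```python
from collections import Counter

# Hypothetical RIME codes from 7 raters for three comments
# (R=1, I=2, M=3, E=4, 0 = not applicable) -- illustrative values only.
ratings = [
    [1, 1, 1, 1, 1, 2, 1],   # 6/7 agree -> no discussion needed
    [2, 2, 2, 3, 2, 2, 1],   # 5/7 agree -> no discussion needed
    [1, 2, 3, 2, 4, 0, 2],   # 3/7 agree -> discuss until consensus
]

def meets_threshold(codes, minimum=5):
    """True when at least `minimum` raters chose the same code."""
    return Counter(codes).most_common(1)[0][1] >= minimum

def cronbach_alpha(data):
    """Alpha treating each rater as an 'item' and each comment as a case."""
    k = len(data[0])                      # number of raters
    cols = list(zip(*data))               # per-rater score lists

    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(row) for row in data]   # per-comment total across raters
    return k / (k - 1) * (1 - sum(var(c) for c in cols) / var(totals))

# Comments failing the threshold go to group discussion.
flagged = [r for r in ratings if not meets_threshold(r)]
```

With the full 7-rater matrix for all 102 comments in place of `ratings`, `flagged` would list the 77 comments needing discussion and `cronbach_alpha` would give the reported independent-review reliability.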
For the second research question, reviewers debriefed about the process and the challenges faced when coding comments. Vague comments that used words such as “great” or “excellent” to describe an action without providing more specific feedback were difficult to assess and fit into the RIME framework. Examples included “Great presentation with fellows” and “from MICU, one attending called to praise his excellent care.” These items were considered more of a compliment and rated as not applicable. Comments on technical skills often described the particular skill undertaken in a matter-of-fact manner. For example, “…RESIDENT performed 3 excisional biopsies today. He demonstrated good technique and appropriate caution. We worked on refining his technique for buried sutures…” This made codifying a procedural skill based on RIME impossible.

Discussion

Narrative comments on resident performance facilitate assessment of competence. This study, however, demonstrated that it is difficult to assign RIME categories by independently reading narrative feedback, primarily because many narratives lack specificity. Pangaro and ten Cate indicated that comments need to be clear to communicate progress, a quality many of our narratives lacked. Based on our study, there remains a need for faculty development related to writing narrative comments.[7,11] The RIME framework offers a vocabulary readily understood by clinician educators. Training faculty to write narratives with the RIME framework in mind should also help evaluators; in doing so, they may offer suggestions for how a trainee can progress to the next level. During our consensus process, it also became evident that contextual features such as the setting add clarity to a narrative. The authors intend to repeat this process following faculty development on writing specific, actionable feedback that includes more contextual information. Establishing a shared mental model of trainee expectations to improve feedback supports the application of a framework like RIME, which reflects the work and skill of a physician. Additionally, a more in-depth analysis linking RIME to the competency ratings will be conducted to determine whether the narrative comments are congruent.

Conclusion

Narrative comments reveal strengths and weaknesses of trainees, information that is difficult to attain from a single summative score. Applying a framework such as RIME to narrative comments can offer insights into trainee progress toward independent practice, allowing for meaningful feedback to trainees. As a future step, faculty development on writing comments would help ensure that the RIME framework can be applied and competence further determined.
References (19 in total)

1.  The next GME accreditation system--rationale and benefits.

Authors:  Thomas J Nasca; Ingrid Philibert; Timothy Brigham; Timothy C Flynn
Journal:  N Engl J Med       Date:  2012-02-22       Impact factor: 91.245

2.  Competency-based medical education: theory to practice.

Authors:  Jason R Frank; Linda S Snell; Olle Ten Cate; Eric S Holmboe; Carol Carraccio; Susan R Swing; Peter Harris; Nicholas J Glasgow; Craig Campbell; Deepak Dath; Ronald M Harden; William Iobst; Donlin M Long; Rani Mungroo; Denyse L Richardson; Jonathan Sherbino; Ivan Silver; Sarah Taber; Martin Talbot; Kenneth A Harris
Journal:  Med Teach       Date:  2010       Impact factor: 3.650

3.  Using the RIME model for learner assessment and feedback.

Authors:  Dan Sepdham; Manjula Julka; Laura Hofmann; Alison Dobbie
Journal:  Fam Med       Date:  2007-03       Impact factor: 1.756

4.  What criteria do faculty use when rating students as potential house officers?

Authors:  Kimberly Hoffman; Michael Hosokawa; Joe Donaldson
Journal:  Med Teach       Date:  2009-09       Impact factor: 3.650

5.  Piloting the Mobile Medical Milestones Application (M3App©): A Multi-Institution Evaluation.

Authors:  Cristen Page; Alfred Reid; Catherine L Coe; Janalynn Beste; Blake Fagan; Erica Steinbacher; Warren P Newton
Journal:  Fam Med       Date:  2017-01       Impact factor: 1.756

6.  Frameworks for learner assessment in medicine: AMEE Guide No. 78.

Authors:  Louis Pangaro; Olle ten Cate
Journal:  Med Teach       Date:  2013-05-16       Impact factor: 3.650

7.  Writing medical student and resident performance evaluations: beyond "performed as expected".

Authors:  Alison Volpe Holmes; Christopher B Peltier; Janice L Hanson; Joseph O Lopreiato
Journal:  Pediatrics       Date:  2014-04-14       Impact factor: 7.124

8.  When do supervising physicians decide to entrust residents with unsupervised tasks?

Authors:  Anneke Sterkenburg; Paul Barach; Cor Kalkman; Mathieu Gielen; Olle ten Cate
Journal:  Acad Med       Date:  2010-09       Impact factor: 6.893

9.  Workplace-Based Assessments Using Pediatric Critical Care Entrustable Professional Activities.

Authors:  Amanda R Emke; Yoon Soo Park; Sushant Srinivasan; Ara Tekian
Journal:  J Grad Med Educ       Date:  2019-08

10.  Evaluating the Reliability and Validity Evidence of the RIME (Reporter-Interpreter-Manager-Educator) Framework for Summative Assessments Across Clerkships.

Authors:  Michael S Ryan; Bennett Lee; Alicia Richards; Robert A Perera; Kellen Haley; Fidelma B Rigby; Yoon Soo Park; Sally A Santen
Journal:  Acad Med       Date:  2021-02-01       Impact factor: 7.840

