Milestones: a rapid assessment method for the Clinical Competency Committee.

Christopher Nabors1, Leanne Forman1, Stephen J Peterson2, Melissa Gennarelli1, Wilbert S Aronow1, Lawrence DeLorenzo3, Dipak Chandy3, Chul Ahn4, Sachin Sule1, Gary W Stallings1, Sahil Khera1, Chandrasekar Palaniswamy1, William H Frishman1.   

Abstract

INTRODUCTION: Educational milestones are now used to assess the developmental progress of all U.S. graduate medical residents during training. Twice annually, each program's Clinical Competency Committee (CCC) makes these determinations and reports its findings to the Accreditation Council for Graduate Medical Education (ACGME). The ideal way to conduct the CCC is not known. After finding that deliberations reliant upon the new milestones were time intensive, our internal medicine residency program tested an approach designed to produce rapid but accurate assessments.
MATERIAL AND METHODS: For this study, we modified our usual CCC process to include pre-meeting faculty ratings of resident milestones progress with in-meeting reconciliation of those ratings. Data were considered largely via a standard report and presented in a pre-arranged pattern. Participants were surveyed regarding their perceptions of the data management strategies and use of milestones. Reliability of competence assessments was estimated by comparing pre-/post-intervention class rank lists produced by individual committee members with a master class rank list produced by the collective CCC after full deliberation.
RESULTS: Use of the study CCC approach reduced committee deliberation time from 25 min to 9 min per resident (p < 0.001). Committee members believed milestones improved their ability to identify and assess expected elements of competency development (p = 0.026). Individual committee member assessments of trainee progress agreed well with collective CCC assessments.
CONCLUSIONS: Modification of the clinical competency process to include pre-meeting competence ratings with in-meeting reconciliation of these ratings led to shorter deliberation times, improved evaluator satisfaction and resulted in reliable milestone assessments.

Keywords:  clinical competency committee; educational milestones

Year:  2016        PMID: 28144272      PMCID: PMC5206368          DOI: 10.5114/aoms.2016.64045

Source DB:  PubMed          Journal:  Arch Med Sci        ISSN: 1734-1922            Impact factor:   3.318


Introduction

All US residency training programs are required to assess the development of their trainees using educational milestones [1]. Twice annually, each program's Clinical Competency Committee (CCC) must review and report each resident's progress to the Accreditation Council for Graduate Medical Education (ACGME). Certain features of the process are mandated by the ACGME, while others are left to the discretion of individual programs. The ideal way to make these determinations is not currently known. Our internal medicine residency gained early experience with milestones as a participant in the Educational Innovations Project (EIP) [2, 3]. Our initial work focused on a set of detailed or "curricular" milestones that were released to the internal medicine community in 2009 [4, 5]. Prior to implementation of the Next Accreditation System (NAS), we pilot tested a second set of milestones that are now used for ACGME reporting. We (and others) found their use to be time intensive. This prompted us to devise and test a new approach that we hypothesized would permit rapid but accurate clinical competency assessments. This report describes the new approach.

Material and methods

The study took place as an EIP initiative within a medium-sized residency program at an academic medical center [6]. The study drew on the academic files of 13 categorical residents (then interns); participants included a clinical competency committee chair, three program leaders and six faculty members. We modified our usual clinical competency process to incorporate several new elements and tested the new approach during a special clinical competency session in 2013. Outcomes included pre- and post-intervention deliberation times, participant survey results and a comparison, via class rank lists, of assessments made by individual committee members and by the collective committee.

The intervention

Our usual CCC included a chair, the program director and eight faculty members. Meetings lasted 2–3 h, during which one class of 13 residents was evaluated. Faculty reviewed portfolios de novo in the meeting and presented evaluation data and other information to the group; reference to information not necessarily in the portfolio helped to inform decisions. During this study, the committee composition was similar, but six faculty members were assigned to serve as "presenters." As such, they were tasked with reviewing a standard data report for 2–3 assigned subjects prior to the meeting and rating their progress along the 22 internal medicine milestone subcompetencies. The program director did the same for each member of the class. At the meeting, the presenters announced the subcompetency ratings (and underlying reasoning) for each of their assigned subjects in sequence. The program director provided his rating for the same subcompetency, and through negotiated consensus the group reconciled any discrepancies between the committee member's and program director's ratings to achieve final scores for each subcompetency. Other information supplemented the discussions as needed. Key distinctions between the traditional and new approaches are displayed in Table I.
Table I

Clinical competency deliberations

Feature | Usual deliberation process | Study deliberation process
Data sources | Custom data report in tandem with portfolio | Custom data report as primary source, with portfolio as backup
Data review timing | In meeting | Pre-meeting emphasis, with less in-meeting review
Data presenters | Random faculty committee members | Assigned faculty committee members and program director
Data presentation | Entire portfolio: curricular milestones, non-milestones, general competency | Milestone subcompetency scores and the basis therefor; less emphasis on remainder of portfolio
Analytic framework | General competencies | Milestone subcompetencies
Analytic process | General discussion | Reconciliation of reviewer ratings, supplemented by other discussion

Evaluation data

Standard data reports used by the CCC were generated in Microsoft Access (Microsoft Corp., Redmond, WA) using data exported from New Innovations (New Innovations, Uniontown, Ohio). One report was generated for each study subject. Data were organized by core competency, milestone subcompetency, curricular milestone, rotation type, time frame and evaluator type (Figure 1). Characteristics of the resident evaluation data set are displayed in Table II. Figure 2 displays the CCC data consideration process.
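As an illustration only (the authors built their actual report in Microsoft Access, not code), the kind of grouping such a report performs can be sketched as follows; the field names and sample rows here are hypothetical:

```python
# Hypothetical sketch of the report's grouping logic, not the authors' system.
# Groups exported evaluation rows by milestone subcompetency and evaluator
# type, then averages the ratings within each group.
from collections import defaultdict

# Hypothetical rows, as they might be exported from an evaluation system
evaluations = [
    {"subcompetency": "PC-1", "evaluator_type": "faculty", "rating": 4},
    {"subcompetency": "PC-1", "evaluator_type": "nursing", "rating": 5},
    {"subcompetency": "MK-2", "evaluator_type": "faculty", "rating": 3},
]

grouped = defaultdict(list)
for row in evaluations:
    grouped[(row["subcompetency"], row["evaluator_type"])].append(row["rating"])

# One summary line per (subcompetency, evaluator type) group
for (subcompetency, evaluator_type), ratings in sorted(grouped.items()):
    print(subcompetency, evaluator_type, sum(ratings) / len(ratings))
```

In practice the same pivot can be produced directly by a database query or the reporting features of residency management software; the point is only that each report cell aggregates all ratings sharing a subcompetency, evaluator type and time frame.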
Figure 1

Sample reporting milestones report (subcompetencies 20 and 21)

Table II

Characteristics of evaluation system

Time frame – July 1, 2012 to March 1, 2013 | All evaluations | Milestone-based | Non-milestone-based
Total evaluation tools employed | 31 | 29 | 2
Total evaluations for class | 710 | 513 | 197
Total faculty evaluations for class | 228 | 180 | 48
Percent of requested faculty evaluations completed | 228/288 (79%) | – | –
Directly observed faculty evaluations for class | 80 | 80 | 0
Supervising peer evaluations (PGY-2 or PGY-3) | 31 | 31 | 0
Same training level peer evaluations (PGY-1) | 135 | 0 | 135
Nursing evaluations for class | 48 | 48 | 0
Self-assessments for class | 87 | 87 | 0
Patient satisfaction | 20 | 20 | 0
Commendations or concerns | 6 | 0 | 6
Clinical Competency Committee evaluations | 13 | 13 | 0
Synthetic evals (program director formative; clinic director ready for distance supervision) | 26 | 26 | 0
Documentation reviews (progress notes/H&P, discharge summary) | 66 | 66 | 0
Avg. no. of milestones used to rate each resident in PGY-1 year | 64 | – | –
Individual milestone-based ratings for class (excludes self-assessments and chart reviews) | 7608 | – | –
Avg. no. of milestone-based ratings per resident | 585 | – | –
Figure 2

Study clinical competency data collection process


Deliberation times

Baseline CCC deliberation times for each subject were established during an ACGME/ABIM reporting milestones feasibility pilot. During this study, deliberation times were recorded by the program coordinator. Time spent by committee members and the program director during pre-meeting assessments was recorded by members and compiled by leadership.

Class rank lists

We compared class rank lists made during this study to estimate the reliability of competence determinations. A gold-standard or "master" rank list was generated by the CCC just after deliberations. Using open discussion to achieve consensus, the group ranked each member of the subject class from most (rank 1) to least (rank 13) competent. Competence was defined as aggregate effectiveness in the application of knowledge, skills and attitudes within the scope of medical training across all possible settings. We compared the master list with rank lists produced by individual presenters prior to any data review (pre-meeting) and after full deliberations (post-meeting), but prior to the discussions that led to the master list. Additional rank lists were compiled from overall rotation evaluation scores (Faculty Rotation Evaluations) and from overall scores assigned to subcompetencies during the CCC (CCC Milestone Subcompetencies) (see Table III). Rank lists by two program leaders were excluded because of their substantial familiarity with the subjects' evaluations prior to the study.
Table III

Comparison of rank lists

Variable 1 | Variable 2 | Correlation | P-value
Master rank list | Faculty 1 post-meeting rank list | 0.74725 | 0.0033*
Master rank list | Faculty 1 pre-meeting rank list | 0.06593 | 0.8305
Master rank list | Faculty 2 post-meeting rank list | 0.67582 | 0.0112*
Master rank list | Faculty 2 pre-meeting rank list | 0.45055 | 0.1223
Master rank list | Faculty 3 post-meeting rank list | 0.58791 | 0.0346*
Master rank list | Faculty 3 pre-meeting rank list | 0.43956 | 0.1329
Master rank list | Faculty 4 post-meeting rank list | 0.93407 | < 0.0001*
Master rank list | Faculty 4 pre-meeting rank list | 0.40110 | 0.1744
Master rank list | Faculty rotation evaluations | 0.73626 | 0.0041*
Master rank list | CCC milestone subcompetencies | 0.76374 | 0.0024*

*P-value < 0.05.


Participant surveys

At meeting close, non-leadership faculty (6/6) completed a voluntary and anonymous survey. To help establish content validity, items were patterned after a questionnaire used in a prior report [3] and were piloted for clarity by a former program director. Focus sections included: 1) demographics; 2) comparison of milestones-based versus non-milestone-based clinical competency deliberations; and 3) effectiveness of data organization and analytic approaches.

Statistical analysis

Survey results and deliberation times were analyzed using the Mann-Whitney U test, with significance accepted for p < 0.05. Survey results were not adjusted for multiple comparisons. Reliability estimates for competence assessments were made by comparing the CCC master rank list ordering with rank lists based on faculty pre-/post-meeting assessments, faculty ward ratings and aggregated CCC milestone subcompetency ratings, using Bland-Altman analysis with significance accepted for p < 0.05. Associations between the master list and other rank lists were investigated using Spearman's rank correlation coefficient. Statistical analysis was conducted at the University of Texas Southwestern Medical Center using SAS software version 9.3 (SAS Institute, Cary, North Carolina). The project was approved by the Institutional Review Board of New York Medical College and by the Office of Clinical Trials at Westchester Medical Center.
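For readers who wish to reproduce the rank-list comparisons described above, Spearman's coefficient for two tie-free rank lists reduces to a simple closed form. The sketch below uses hypothetical ranks for a 13-resident class, not the study's data:

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman's rank correlation for two rank lists with no ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical master list and one faculty post-meeting list
# (rank 1 = most competent, for a 13-resident class)
master = list(range(1, 14))
faculty_post = [2, 1, 3, 5, 4, 6, 8, 7, 9, 11, 10, 12, 13]

print(round(spearman_rho(master, faculty_post), 3))  # -> 0.978
```

With ties or when a p-value is needed, a statistics package (e.g. `scipy.stats.spearmanr` in Python, or SAS as the authors used) is the appropriate tool; the closed form above is only valid for untied ranks.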

Results

Participants rated overall satisfaction with clinical competency deliberations prior to and after milestones introduction at 6.0 and 7.7 (p = 0.041). Members found that use of milestones for CCC deliberations improved their ability to know and identify expected elements of competency development from 6.2 to 8.3 (p = 0.015), their ability to specifically assess a resident’s competency development from 5.5 to 7.8 (p = 0.026) and their ability to identify particular strengths or weaknesses from 5.3 to 7.5 (p = 0.026) (Table IV).
Table IV

Clinical Competency Committee Member perceptions of milestones use (N = 6)

Rate your level of satisfaction with clinical competency deliberation prior to and after introduction of milestones (10 = maximally satisfied, 1 = minimally satisfied).

Item | Mean | Range^a | P-value
The Committee's format (milestones versus no milestones) for evaluating resident performance:
 Pre-milestones | 6.0 | 3–8 | 0.041
 Using milestones | 7.7 | 7–8 |
How data were presented to you for advancement decisions:
 Pre-milestones | 5.8 | 3–8 | 0.093
 Using milestones | 7.5 | 6–9 |
Your ability to know and identify expected elements of competency development (knowledge, skills and attitudes) for use in competency committee deliberations:
 Pre-milestones | 6.2 | 4–7 | 0.004
 Using milestones | 8.3 | 7–9 |
Your ability to specifically assess a resident's level of competency development (i.e., attainment of required knowledge, skills and attitudes):
 Pre-reporting milestones | 5.5 | 4–7 | 0.015
 Using milestones | 7.8 | 6–9 |
Your ability to identify residents ready for an accelerated training curriculum (complete training in 2 years rather than 3 years):
 Pre-milestones | 5.0 | 3–7 | 0.026
 Using milestones | 7.3 | 6–9 |
Your ability to describe and quantify differences in level of performance between house officers at the same level of training:
 Pre-milestones | 6.0 | 4–9 | 0.132
 Using milestones | 7.7 | 7–8 |
Your ability to identify particular strengths or weaknesses in the trainees' developmental progress:
 Pre-milestones | 5.3 | 3–7 | 0.026
 Using milestones | 7.5 | 6–9 |

^a Ten-point scale where 10 = maximally satisfied, 1 = minimally satisfied.

Faculty rated their ability to rate competence at 6.2 using chart review alone, 7.3 using only data collected into a standard report, and 8.3 using the study deliberation process, which included pre-meeting review coupled with in-meeting committee reconciliation of disparate ratings (Table V). Committee members agreed strongly (2 of 6, 33%) or somewhat (4 of 6, 67%) that a standard data report facilitated deliberations; all agreed strongly (4 of 6, 67%) or somewhat (2 of 6, 33%) that such a report facilitated rapid CCC data analysis. All agreed somewhat (3 of 6, 50%) or strongly (3 of 6, 50%) that consideration of curricular milestone evaluations within the current milestone framework represented an effective evaluation strategy (Table VI).
Table V

Data consideration method

Rate how well you were able to assess trainee competence | Mean | Range^a | N
Based on manual chart review | 6.2 | 5–8 | 6
Using data from a standard milestones report | 7.3 | 5–10 | 6
Based on pre-meeting review with in-meeting reconciliation | 8.3 | 7–10 | 6

^a Ten-point scale where 10 = very well, 1 = very poorly.

Table VI

Effectiveness of standard data report

Variable | Disagree strongly (–2) | Disagree somewhat (–1) | Uncertain or neutral (0) | Agree somewhat (+1) | Agree strongly (+2) | Total (N) | Average rating
The milestones report structure (milestone, core competency, curricular milestones, time frame, clinical rotation) effectively facilitates deliberations | 0% (0) | 0% (0) | 0% (0) | 67% (4) | 33% (2) | 6 | 1.33
A standard report permits more rapid data analysis and presentation during clinical competency deliberations than is possible with manual chart review | 0% (0) | 0% (0) | 0% (0) | 33% (2) | 67% (4) | 6 | 1.67
The milestones report permits more effective data analysis and presentation during clinical competency deliberations than is possible with manual chart review | 0% (0) | 0% (0) | 33% (2) | 33% (2) | 33% (2) | 6 | 1.00
Most data central to consideration of progress along milestones is contained within the milestones data report | 0% (0) | 0% (0) | 0% (0) | 67% (4) | 33% (2) | 6 | 1.33
The aggregation of curricular milestones-based data and consideration within the framework of reporting milestones represents an effective evaluation strategy | 0% (0) | 0% (0) | 0% (0) | 50% (3) | 50% (3) | 6 | 1.50

Deliberation time

Study CCC deliberation time for 13 subjects was 2 h and 5 min (9.6 min per subject). Using our prior approach, per-subject deliberation time was 25 min. The difference in review time (15.4 min) was statistically significant (p ≤ 0.001). During this study, presenters and the program director required a mean of 32 min and 30 min of pre-meeting time, respectively, to rate subject milestone subcompetency achievement. Five man-hours per subject (25 min × 12 members) were required to generate milestone ratings using our usual approach, while approximately 3 man-hours per subject (32 min faculty presenter, 30 min program director, ~10 min × 12 members) were required using the study approach.
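The man-hour figures above follow from simple arithmetic; the sketch below merely re-derives them from the per-person times quoted in the text (the ~10 min in-meeting figure is the rounded per-subject deliberation time):

```python
# Re-deriving the per-subject committee-time estimates quoted above
# (all durations in minutes, figures taken from the text)
MEMBERS = 12  # committee members deliberating on each subject

usual = 25 * MEMBERS            # de novo in-meeting review by all members
study = 32 + 30 + 10 * MEMBERS  # presenter prep + program director prep
                                # + ~10 min in-meeting time per member

print(usual / 60)  # -> 5.0 man-hours per subject (usual approach)
print(study / 60)  # roughly 3 man-hours per subject (study approach)
```

The study approach thus shifts effort from the meeting itself to individual pre-meeting preparation, cutting total committee time per subject by roughly 40%.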

Clinical competency rank lists

Four faculty pre-/post-intervention class rank lists were compared with the committee's master list. While none of the four pre-meeting rank lists showed statistically significant agreement with the master rank list, each of the four post-meeting rank lists agreed with the master list (Table III). Significant correlation was also noted between the master list and class rank lists based on current-academic-year faculty rotation evaluations and on each subject's summed CCC reporting-milestone subcompetency scores.

Discussion

In this study, CCC deliberations that were informed by a custom data report and guided by pre-meeting milestone assessments with in-meeting reconciliation produced rapid and reliable milestone ratings. Committee deliberation time was reduced from 25 min to less than 10 min per subject using the new method. Time savings resulted primarily from the committee’s ability to focus on resolving discrepancies between program director and faculty competence determinations rather than on conducting a de novo data review. Despite the reduced deliberation time, committee members believed the new process improved their ability to assess a resident’s level of competency development and that consideration of evaluation data through a standard data report was useful. Support for these perceptions came from comparison of class rank lists produced during the study. Lists generated by faculty prior to data review lacked significant association with the master class rank list generated by the collective CCC. On the other hand, each of the faculty generated rank lists which followed the CCC deliberation process bore a close statistical association with the master list. This suggests that analysis within the milestone framework permits development of a shared mental model [7, 8] of competence attainment during CCC deliberations. Because CCC deliberations in the NAS rest on the application of the criterion-based milestone narratives, our finding of agreement between the CCC’s milestone-based rank list and the committee’s (Gestalt-based) master list was significant. This concordance between traditional and NAS-based deliberations provides preliminary evidence that application of the internal medicine milestone narratives permits evaluators to effectively discern differing developmental trajectories among trainees. 
Further work will be needed to verify this finding and to determine whether application of individual subcompetencies can permit discrimination of more granular features of competence attainment.

This study had several limitations. First, it was completed at a single program which was atypical in some respects. At the time of the study, our program had already gained experience with milestone-based evaluations and had developed a robust mechanism by which to aggregate data for use in CCC deliberations. As such, the creation of a streamlined CCC process may have required less groundwork than would be necessary to generate a comparable process elsewhere. On the other hand, our findings appear to have general application in that current residency software systems such as New Innovations now offer data-aggregating capability similar to that achieved by the custom data report used herein. A second limitation was that this study relied on a small sample size and lacked a control group. Third, lack of benchmark milestone data limited our ability to compare our findings with those of others. Finally, the survey results derived from a non-validated instrument and were not fully substantiated by other objective measures. Future studies at our facility will focus on gathering data to further assess survey perceptions and to permit correlation of milestone ratings with objective patient care outcome measures.

In conclusion, the use of a modified clinical competency process which included pre-meeting assessment of milestone-based developmental progress and employed a custom data report significantly reduced committee deliberation time and permitted reliable milestone-based assessment.
References

1.  The next GME accreditation system--rationale and benefits.

Authors:  Thomas J Nasca; Ingrid Philibert; Timothy Brigham; Timothy C Flynn
Journal:  N Engl J Med       Date:  2012-02-22       Impact factor: 91.245

2.  Charting the road to competence: developmental milestones for internal medicine residency training.

Authors:  Michael L Green; Eva M Aagaard; Kelly J Caverzagie; Davoren A Chick; Eric Holmboe; Gregory Kane; Cynthia D Smith; William Iobst
Journal:  J Grad Med Educ       Date:  2009-09

3.  Internal medicine's Educational Innovations Project: improving health care and learning.

Authors:  Jeanette Mladenovic; Roger Bush; John Frohna
Journal:  Am J Med       Date:  2009-04       Impact factor: 4.965

4.  Playing with curricular milestones in the educational sandbox: Q-sort results from an internal medicine educational collaborative.

Authors:  Lauren B Meade; Kelly J Caverzagie; Susan R Swing; Ron R Jones; Cheryl W O'Malley; Kenji Yamazaki; Aimee K Zaas
Journal:  Acad Med       Date:  2013-08       Impact factor: 6.893

5.  Early feedback on the use of the internal medicine reporting milestones in assessment of resident performance.

Authors:  Eva Aagaard; Gregory C Kane; Lisa Conforti; Sarah Hood; Kelly J Caverzagie; Cynthia Smith; Davoren A Chick; Eric S Holmboe; William F Iobst
Journal:  J Grad Med Educ       Date:  2013-09

6.  Faculty development in assessment: the missing link in competency-based medical education.

Authors:  Eric S Holmboe; Denham S Ward; Richard K Reznick; Peter J Katsufrakis; Karen M Leslie; Vimla L Patel; Donna D Ray; Elizabeth A Nelson
Journal:  Acad Med       Date:  2011-04       Impact factor: 6.893

7.  The internal medicine reporting milestones and the next accreditation system.

Authors:  Kelly J Caverzagie; William F Iobst; Eva M Aagaard; Sarah Hood; Davoren A Chick; Gregory C Kane; Timothy P Brigham; Susan R Swing; Lauren B Meade; Hasan Bazari; Roger W Bush; Lynne M Kirk; Michael L Green; Kevin T Hinchey; Cynthia D Smith
Journal:  Ann Intern Med       Date:  2013-04-02       Impact factor: 25.391

8.  Operationalizing the internal medicine milestones-an early status report.

Authors:  Christopher Nabors; Stephen J Peterson; Leanne Forman; Gary W Stallings; Arif Mumtaz; Sachin Sule; Tushar Shah; Wilbert Aronow; Lawrence Delorenzo; Dipak Chandy; Stuart G Lehrman; William H Frishman; Eric Holmboe
Journal:  J Grad Med Educ       Date:  2013-03