
Sustainability of physical exam skills in a resident-led curriculum in a large internal medicine program with competency based medical education.

Don Thiwanka Wijeratne1, Siddhartha Srivastava1, Barry Chan1, Wilma Hopman1, Benjamin Thomson1,2.   

Abstract

BACKGROUND: Competency Based Medical Education (CBME) designates physical examination competency as an Entrustable Professional Activity (EPA). Considerable concern persists regarding the increased time burden CBME may place on educators. We developed a novel physical examination curriculum that shifted the burden of physical examination case preparation and performance assessment from faculty to residents. Our first objective was to determine if participation led to sustainable improvements in physical examination skills. The second objective was to determine if resident peer assessment was comparable to faculty assessment.
METHODS: We selected physical exam case topics based on the Objectives of Training in the Specialty of Internal Medicine as prescribed by the Royal College of Physicians and Surgeons of Canada. Internal Medicine residents compiled evidence-based physical exam checklists that faculty reviewed before distribution to all learners. Physical exam practice sessions with whole-group demonstration followed by small-group practice sessions were performed weekly. We evaluated this pilot curriculum with a formative OSCE, during which a resident peer and a faculty member simultaneously observed and assessed examinee performance.
RESULTS: Participation in the novel curriculum practice sessions improved OSCE performance (faculty score mean 78.96 vs. 62.50, p<0.05). Peer assessment overestimated faculty scores (76.2 vs. 65.7, p<0.001), but peer and faculty assessments were highly correlated (R2 = 0.73, 95% CI 0.50-0.87).
CONCLUSION: This novel physical examination curriculum leads to sustainable improvement of physical examination skills. Peer assessment correlated well with the gold standard faculty assessment. This resident-led physical examination curriculum enhanced physical examination skills in a CBME environment, with minimal time commitment from faculty members.


Year:  2018        PMID: 30498546      PMCID: PMC6260518     

Source DB:  PubMed          Journal:  Can Med Educ J        ISSN: 1923-1202


Introduction

With the advent of Competency Based Medical Education (CBME), physical examination is a core skill designated as a consistent milestone of many Entrustable Professional Activities (EPA).[1] However, since the 1960s, physical examination proficiency amongst trainees has remained below expectation,[2-7] and its importance and emphasis have waned over the decades.[8] Numerous educational interventions have shown variable success, including structured curricula,[8] multimedia-assisted teaching,[9] simulation training,[10] feedback,[11] and instructor variation.[12-14] The Objectives of Training in the Specialty of Internal Medicine of the Royal College of Physicians and Surgeons of Canada (RCPSC) state that trainees should be able to perform “a focused physical examination that is relevant and accurate for… diagnosis and/or management.” However, there is no standardized curriculum defining the breadth of scope and depth of knowledge.[15] Thus, trainees continue to have varying expectations for an ill-defined standard. Furthermore, creating a new physical examination curriculum concordant with CBME poses the practical challenge of sustainability, given limitations in manpower, organization, finances, and faculty time. Faculty remain concerned that CBME demands greater time investment when they have other competing priorities.[16-18] Thus, how to teach physical examination skills effectively in a CBME environment, in a way that minimizes time commitments for faculty educators, remains an unresolved yet pressing concern. The authors constructed, implemented, and evaluated a pilot, two-phased, structured, resident-led physical examination curriculum for the Core Internal Medicine and General Internal Medicine Fellowship trainees (PGY 1-4) to address CBME requirements. The curriculum was dependent on resident learners, with minimal faculty involvement.
The first objective of our study was to determine if participation in the pilot curriculum led to sustainable improvements in physical examination skills, measured by performance on a formative OSCE. The secondary objective was to determine if peer assessments were comparable to faculty assessments, determined by simultaneous peer and faculty assessments in a formative OSCE.

Methods

Setting

We developed a physical examination curriculum for the Internal Medicine Training Program (PGY 1-4) of Queen’s University (Kingston, Canada) which consists of 67 residents.

Development of physical examination curriculum

We selected physical exam topics based on the Objectives of Training in the Specialty of Internal Medicine as prescribed by the RCPSC. Internal Medicine residents in their second or third post-graduate year volunteered to compile evidence-based physical exam checklists from a number of recommended physical exam references, including The Rational Clinical Examination: Evidence-based clinical diagnosis,[19] Evidence-Based Physical Diagnosis,[20] and Clinical Examination: A Systematic Guide to Physical Diagnosis.[21] They organized checklists into general inspection, then system-based examination (e.g., Cardiac, Neurological, Abdominal, etc.), and finally evidence-based special tests. Special tests were not considered part of the standard system-based physical examination; we described them in the checklist document, with references provided. Checklists were evaluated, refined, and finalized by Internal Medicine faculty prior to distribution to all learners (Appendix A).

Implementation of physical examination curriculum

Practice physical examination sessions were incorporated into weekly academic half-day sessions. A single physical examination checklist for a specific physical exam topic (e.g., the physical exam for Myasthenia Gravis) was distributed to all learners at least two days prior to the learning activity, with paper copies provided during the activity. These physical exam practice sessions began with a large-group demonstration, facilitated by one to two faculty members who were Fellows of the RCPSC in Internal Medicine. The faculty facilitator introduced a clinical scenario pertinent to the topic, then the resident checklist creator demonstrated the physical exam scenario with a resident peer as the standardized patient. Follow-up questions were then discussed with the group, and the faculty members provided feedback on the performance. The checklist was also reviewed after the physical exam scenario was performed, to ensure all physical exam maneuvers were demonstrated correctly. This was followed by small-group breakout practice sessions, subdivided into one of eight available examination rooms and facilitated by faculty. Twenty residents contributed one topic each, so twenty topics were taught using this pilot physical examination curriculum from July 2016 to February 2017.

Assessment of physical examination curriculum

In March 2017, a voluntary formative Objective Structured Clinical Examination (OSCE) was organized to evaluate physical examination performance on four of the twenty topics that had been taught and practiced during the pilot curriculum. Thirty-six residents participated in the voluntary formative OSCE. Scoring sheets for these stations aligned with the checklists developed for the practice sessions, with follow-up questions added to each station to assess critical thinking and knowledge application. The checklist and follow-up questions were used to calculate a raw score. A 10-point rubric was used to assign a global, general impression score, with the highest score being 10 and the lowest score being 0 (Appendix B). The raw and global scores were converted to percentages and added to determine combined scores. Two residents were paired to complete the examination circuit, consisting of four stations. The two residents alternated between being the examiner, marking physical examination performance on the checklist, and the examinee, who performed the physical exam upon the resident examiner. No standardized patients were used in any of the OSCE stations. Specific instructions were provided at each station, with two minutes to read the instructions and stem, 10 minutes for the station, and three minutes for feedback. Three identical circuits of four OSCE stations ran simultaneously. In one of the circuits, faculty assessed each station by observing through a one-way window that had an unimpeded view of the entire room and listening via headphones to all sounds produced within the room. Faculty members at each examination station used the same scoring sheet as resident peer examiners, and there was no communication between faculty and resident peer examiners. Residents who created checklists were permitted to participate in the formative OSCE.
However, if a resident had created the checklist for an OSCE physical exam station, he or she served as the examiner for that station but was not examined on it.

Data Analysis- Objective 1

Multivariable regression models were used to determine if participation in the novel curriculum physical exam practice sessions improved examinee performance on the formative OSCE, after adjusting for the clinical scenario and PGY level of the examinee. Performance on the formative OSCE was defined as the peer raw score (regression model 1) or the faculty raw score (regression model 2).
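The adjusted models described above can be illustrated with a minimal sketch. This is not the study's actual analysis code, and the factor levels and data in the usage example are synthetic placeholders; the structure (outcome regressed on scenario dummies, PGY dummies, and a participation indicator) mirrors the description in the text.

```python
# Illustrative sketch only (not the study's code): ordinary least squares for
# OSCE raw score ~ scenario dummies + PGY dummies + participation indicator.

def ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gauss-Jordan elimination."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [row + [b] for row, b in zip(XtX, Xty)]  # augmented matrix
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))  # partial pivoting
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

def design_row(scenario, pgy, participated, scenarios, pgys):
    """Intercept plus dummy codes; the first listed scenario and PGY level
    serve as the reference categories."""
    return ([1.0]
            + [1.0 if scenario == s else 0.0 for s in scenarios[1:]]
            + [1.0 if pgy == p else 0.0 for p in pgys[1:]]
            + [1.0 if participated else 0.0])
```

With the study's factor levels, `scenarios` would hold the four station topics and `pgys` the training years; the final coefficient is then the adjusted effect of practice-session participation (the +19.8-point term reported for Model 2).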

Data Analysis- Objective 2

Raw, global, and combined scores were calculated as percentages. Peer and faculty scores were reported as means and standard deviations (SD) by examinee PGY level and station topic. As the scores were normally distributed, group scores were compared using paired t-tests. Statistical significance was set at p<0.05, and the 95% confidence intervals of these comparisons were derived. The correlation between peer and faculty scores was determined using Pearson's correlation coefficient.
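For reference, the two core computations named above (a paired t-statistic on matched peer/faculty scores, and Pearson's r) can be sketched as follows. This is a generic illustration of the statistics, not the study's analysis code:

```python
import math

def paired_t(a, b):
    """Paired t-statistic for matched score lists (e.g., peer vs. faculty
    scores on the same encounters); returns (t, degrees of freedom)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

def pearson_r(x, y):
    """Pearson's correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

In practice the t-statistic would be compared against the t distribution with n-1 degrees of freedom to obtain the p-value; a statistics package handles that lookup.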

Ethics

Ethics approval was obtained through Queen’s University Health Sciences Research Ethics Board, Identification number 6022756. Informed consent was waived as per the Research Ethics Board approval.

Results

Participants

There were 72 encounters in the formative OSCE assessment. Faculty observed and evaluated 38.9% of encounters (28/72). Examinee participation in the facilitated practice physical exam sessions that corresponded to the OSCE stations was 26.4% (19/72) (Table 1).
Table 1

Formative OSCE characteristics

Formative OSCE Encounter Type                                         N (%)
Total                                                                 72 (100%)
Faculty Observed                                                      28 (38.9%)
Examinee Participated in Matched Pilot Curriculum Practice Session    19 (26.4%)
Examinee Post-Graduate Training Year: 1                               22 (30.5%)
                                      2                               16 (22.2%)
                                      3                               18 (25.0%)
                                      4                               16 (22.2%)

Objective 1: Sustainability of Physical Examination Skills

Participation in the curriculum practice physical examination sessions was associated with higher faculty raw (79.0 vs 62.5, p<0.05) and combined (75.3 vs 61.8, p<0.05) scores (Table 2).
Table 2

Effect of pilot curriculum physical examination practice on formative OSCE performance

                            Prior Exposure
                        Yes       No        P
Peer Assessment
  Number                19        53
  Raw Score             81.4      75.2      0.16
  Global Score          78.5      82.6      0.17
  Combined Score        76.9      81.9      0.12
Faculty Assessment
  Number                8         20
  Raw Score             79.0      62.5      <0.05
  Global Score          72.5      61.0      0.08
  Combined Score        75.3      61.8      <0.05

Bolded numbers are statistically significant to p<0.05

In a multivariate model adjusted for the clinical scenario and PGY level of the examinee, residents who participated in novel curriculum practice sessions showed a non-statistically significant trend toward improvement in peer raw score (p=0.06, difference = +6.5 points). Resident participation in novel curriculum practice sessions was associated with higher faculty raw scores on the formative OSCE in another multivariate model with similar adjustments (p=0.01, difference = +19.8 points) (Table 3).
Table 3

Multivariate regression model data for peer raw (Model 1) and faculty raw (Model 2) scores

                                                                  95% Confidence Interval
                                          Beta      P         Lower Limit    Upper Limit
MODEL 1 (Peer Raw Scores)
  Constant                                62.31     0.00      54.50          70.11
  SCENARIO: Meningitis                    18.06     0.00      9.74           26.38
            Splenomegaly                  28.17     0.00      19.86          36.49
            Osteoporosis                  16.29     0.00      8.05           24.52
  POST-GRADUATE YEAR**: 1                 -0.23     0.96      -8.54          8.08
                        2                 -4.35     0.33      -13.15         4.44
                        3                 -7.13     0.11      -15.80         1.54
  Examinee Participation in Pilot
  Curriculum Practice Session             6.50      0.06      -0.28          13.28
MODEL 2 (Faculty Raw Scores)
  Constant                                51.22     0.00      34.60          67.85
  SCENARIO: Meningitis                    17.68     0.05      -0.02          35.37
            Splenomegaly                  11.31     0.19      -6.17          28.79
            Osteoporosis                  -3.33     0.69      -20.60         13.94
  POST-GRADUATE YEAR***: 1                8.95      0.32      -9.14          27.04
                         2                4.66      0.49      -9.27          18.59
  Examinee Participation in Pilot
  Curriculum Practice Session             19.79     <0.01     4.98           34.60

Reference for Scenario (Models 1 and 2) is Osteoarthritis versus Rheumatoid Arthritis

Reference for Model 1 Post-Graduate Year is Post-Graduate Year 4

Reference for Model 2 Post-Graduate Year is Post-Graduate Year 3


Objective 2: Peer versus Faculty Assessments

In the formative OSCE exam, peer assessment scores exceeded faculty assessment scores for raw (74.2 vs. 67.2, p<0.01), global (78.2 vs. 64.3, p<0.001) and combined (76.2 vs. 65.7, p<0.001) scores (Table 1). Peer assessment scores were higher than faculty assessment scores for post-graduate year 2 raw (75.1 vs. 68.9, p<0.05), global (80.0 vs. 64.2, p<0.05) and combined (77.6 vs. 66.5, p<0.05) scores, and for post-graduate year 3 global scores (77.0 vs. 64.0, p<0.01). Peer assessment scores were also higher than faculty assessment scores for the meningitis scenario global scores (80.0 vs. 67.1, p<0.05); the splenomegaly raw (90.0 vs. 74.3, p=0.01), global (84.3 vs. 64.3, p<0.05) and combined (87.1 vs. 69.3, p=0.01) scores; and the osteoporosis global (81.4 vs. 60.0, p<0.05) and combined (75.5 vs. 59.0, p<0.05) scores (Figure 1). Peer assessment scores were not lower than faculty assessment scores in any post-graduate year or clinical scenario. Faculty and peer assessment scores did not differ across post-graduate years (data not shown, p>0.05 for all comparisons).
Figure 1

Peer and faculty assessment scores for matched physical exam stations on formative OSCE

Peer and faculty raw scores were highly correlated, with an intra-class correlation (ICC) of 0.73 (95% confidence interval (CI) 0.50-0.87). Peer and faculty global scores were not well correlated (ICC = 0.39).

Discussion

The advent of CBME in RCPSC-certified internal medicine programs has designated physical examination skills as EPAs. How to teach physical examination skills effectively remains a challenge, with a number of techniques showing potential promise: structured curricula,[8] multimedia-assisted teaching,[9] simulation training,[10] feedback,[11] and instructor variation.[12-14] Considerable concerns persist that adoption of CBME will increase demands on faculty time,[16-18] and thus a physical examination curriculum, to be compatible with CBME in a sustainable fashion, would benefit from decreased time demands on faculty. This study evaluated the feasibility of a novel pilot curriculum designed to decrease the burden on faculty. The study showed a strong correlation between peer and faculty assessments for the raw, but not global, scores. The raw score markings were based upon the combination of an evidence-based physical examination checklist and questions that assessed critical thinking and knowledge application, whereas the 10-point Likert scale was a subjective global rating of the candidate. Considerable bias has been reported in peer assessment in both medical and non-medical settings, including halo, horns, leniency, strictness, and similar-to-me biases.[13,22,23] Within the medical education literature, the halo and friendship marking effects inflate peer assessment scores compared to faculty scores,[24-26] and inflated peer-assessment scores may be the norm in high-stakes settings such as medical schools.[24] The magnitude of peer assessment score inflation was greater for the subjective global scores (13.0 points) than for the more objective raw scores (7.0 points), likely because the checklist reduced the halo and friendship marking effects.
On the other hand, the faculty assessor's view into the formative OSCE room was farther than, and intermittently blocked by, the peer assessor's, so it is possible that this impaired the faculty assessor's ability to provide an accurate assessment of physical examination maneuvers. However, none of the four faculty members in this trial thought the view was ever hindered sufficiently to impair faculty assessment. The medical literature confirms that correlation between faculty and peer assessment may be low,[27,28] medium,[26,29-32] or high.[33] The extent of correlation is largely predicted by the effect of biases in the assessment tool. In this study, peer raw scores overestimated faculty raw scores to a lesser extent than global scores did, and peer raw scores (but not global scores) strongly correlated with faculty scores. Thus, implementation of an evidence-based physical examination checklist may neutralize common biases associated with peer assessment. Therefore, wider adoption of this curriculum and subsequent physical examination assessment should be based upon raw, rather than global, scores. This study confirms that physical examination skills can be learned in the novel curriculum in a sustainable fashion, with improved performance on a formative OSCE for those trainees who had previously been taught the physical examination topic. There are a number of reasons why trainees' performance may have improved. Firstly, trainees created the evidence-based physical examination checklists.
This was done to decrease the workload for faculty physicians, but the act of creating checklists may in itself enhance knowledge retention.[34] Secondly, the pilot curriculum sessions included both single-trainee demonstration with faculty and multiple peers practicing, a combination that has been shown to enhance physical examination skills.[13] Thirdly, physical examination teaching by persons other than faculty physicians may be equally or more effective.[35-37] Consequently, adoption of this curriculum should include all components so that both intended and unintended benefits are mobilized. It is plausible that additional improvements in physical examination skills may be realized by supplementing this curriculum with other interventions such as physical examination videos; this remains a topic for ongoing research. It is well established that learners tend to overestimate their skills in self-assessment, yet the extent of this overestimation decreases with experience.[38-40] The same is true for peer assessment, which overestimated faculty scores by 7.0 (raw), 13.0 (global) and 10.5 (combined) points in this study. Peer assessment tends to approximate faculty assessment scores as peer assessors become more experienced.[41,42] This has important implications for curriculum design; peer assessment alone could lead to false confirmation of physical examination competency. Two potential solutions may address this. Firstly, a correction factor could be derived to adjust the peer raw score to approximate the faculty raw score. However, this would require intermittent faculty assessment to validate the correction factor prospectively. The alternative solution is to hold frequent faculty assessment without peer assessment. The first option may be more feasible where faculty time is in short supply, as is the reality in many academic institutions. However, the required frequency of intermittent validation warrants further study.
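The correction-factor proposal could, in principle, take the form of a simple linear calibration fitted on the subset of encounters that faculty also assessed. The sketch below is hypothetical, not something implemented in the study, and the function names and data are illustrative only:

```python
# Hypothetical illustration (not from the study): calibrate peer raw scores
# toward faculty raw scores with a least-squares line faculty ~ a + b * peer.

def fit_calibration(peer, faculty):
    """Fit faculty = a + b * peer by simple linear regression on the
    doubly-assessed encounters."""
    n = len(peer)
    mp, mf = sum(peer) / n, sum(faculty) / n
    b = (sum((p - mp) * (f - mf) for p, f in zip(peer, faculty))
         / sum((p - mp) ** 2 for p in peer))
    a = mf - b * mp
    return a, b

def adjust(peer_score, a, b):
    """Apply the fitted calibration to a new peer-assessed score."""
    return a + b * peer_score
```

As the text notes, periodic faculty re-assessment would still be needed to re-validate the fitted constants prospectively.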
There are a number of important strengths in this study. Firstly, it is a strong report of a physical examination curriculum that can be adopted within the CBME framework and leads to sustainable improvements in trainee physical examination skills. Secondly, this study suggests that many of the biases of peer assessment may be diminished by use of raw scores from an evidence-based checklist, rather than subjective global scores. Thirdly, the curriculum could be generalized to other academic centers within the CBME framework without substantial strain on academic resources, including faculty time. In terms of weaknesses, firstly, this is a single-center study consisting of one internal medicine program's trainees. However, the Queen's Internal Medicine program includes residents with diverse cultural backgrounds and subspecialty interests, and thus the results are likely generalizable to other large internal medicine programs. Secondly, the number of formative OSCE scenarios that were evaluated by both peers and faculty was low. On the other hand, the scores were normally distributed and the key comparisons were both meaningful and statistically significant, so the small sample did not diminish the importance of the study. Thirdly, this study was unable to determine which components of the curriculum led to the improvements in physical examination skills. However, the curriculum was designed to combine multiple educational methods (self-directed learning, small-group learning, and faculty-led demonstration) while maintaining long-term sustainability within restricted faculty and department resources. Thus, determining which specific component is responsible is less important than knowing that the entire curriculum can be replicated and delivered in other internal medicine training programs.

Conclusion

This study confirms that peer assessments using raw scores, based on an evidence-based checklist, correlate well with faculty assessments, with an allowance for overestimation. A physical examination curriculum using checklists and demonstrations is sustainable in the CBME framework, with minimal faculty time commitment, and leads to sustainable improvements in physical examination skills. Further research is required to determine how other cooperative, peer-run educational interventions could improve physical examination skills even further in this setting.
Table 1:

Physical Exam Findings Differentiating Pleural Effusions versus Pneumonia

Physical Finding      Pleural Effusion                                  Pneumonia
Tracheal Deviation    Deviates Contralaterally                          Deviates Ipsilaterally
Tactile Fremitus      Reduced                                           Increased
Percussion            Reduced ('stony dullness')                        Reduced
Breath Sounds         Reduced                                           Bronchial
Adventitious Sounds   Pleural Friction Rub (if minimal fluid with       Coarse Crackles; Rhonchi
                      pleurisy); Crackles (heard superior to effusion)
Special Tests         Vocal Fremitus (e.g. egophony, bronchophony,
                      whispered pectoriloquy)
                                       Sens (%)    Spec (%)    LR+         LR-
Jolt Accentuation (based on 1 study)   97          54          2.4         0.05
Fever                                  43          48          0.82        1.2
Neck Stiffness                         3-15        68          0.94-6.6    0.83-1.0
Brudzinski                             5           95          0.97        1.0
Kernig                                 5-9         95          0.97-4.2    0.92-1.0
Altered mental status                  69          -           -           -
Focal neuro findings                   21          -           -           -
Rash                                   61          -           -           -

Consolidated from JAMA RCE + Update


