
Medical student perceptions of assessment systems, subjectivity, and variability on introductory dermatology clerkships.

Jaewon Yoon1, Jordan T Said1, Leah L Thompson1, Gabriel E Molina1, Jeremy B Richards2, Steven T Chen1.   

Abstract

BACKGROUND: Elective introductory clerkships in dermatology serve a critical function in providing formative experiences to medical students interested in the field. Although dermatology clerkships play a pivotal role in students' career choices and residency preparation, the assessment systems used to evaluate students on these clerkships are widely different and likely affect student experiences.
OBJECTIVE: This study aimed to explore the relationship between dermatology clerkship assessment systems and student experiences through interviews with students about their clerkship reflections and perceptions of assessment.
METHODS: The authors contacted clerkship directors via the Association of Professors of Dermatology mailing list and invited them to provide a description of the assessment system at their institution. Via these clerkship directors, the authors then invited students who had completed an introductory dermatology clerkship between 2018 and 2019 to describe the assessment system at their institution and to participate in a qualitative interview about their experiences with assessment systems. The authors then iteratively synthesized interview transcripts using phenomenological analysis, in which a templated approach was used to achieve comprehensive thematic categorization.
RESULTS: Prior to clerkship onset, students expressed a limited understanding of their clinical role and the assessment system. During the clerkship, students endorsed variable expectations across preceptors, limited feedback experiences, and pressures to perform for evaluators. After their clerkship, students continued to perceive assessment systems as nontransparent, subjective, and preordained.
CONCLUSION: Medical students perceived assessment systems on introductory dermatology clerkships to be unclear and arbitrary. Encouragingly, students also viewed these challenges in assessment as malleable, identifying several opportunities for educational reform in dermatology clerkships.
© 2021 Published by Elsevier Inc. on behalf of Women's Dermatologic Society.

Keywords:  Assessment of clinical performance; Clinical education; Medical student education; Medical student perceptions; Qualitative research; Undergraduate medical education

Year:  2021        PMID: 34222591      PMCID: PMC8243165          DOI: 10.1016/j.ijwd.2021.01.003

Source DB:  PubMed          Journal:  Int J Womens Dermatol        ISSN: 2352-6475


Introduction

Medical students often have limited exposure to the field of dermatology through preclinical coursework and core clinical clerkships required to graduate (Griffith et al., 2000, Hauer et al., 2008, Jagadeesan et al., 2014, Lefevre et al., 2010, Meurer, 1995). Extracurricular opportunities to explore dermatology beyond the required curriculum, such as through student-led dermatology interest groups, exist across medical schools, but these organizations have widely inconsistent offerings for students across institutions (Quirk et al., 2016, Wang et al., 2017). As such, elective rotations in dermatology serve a practical and pivotal function, providing formative exposure to confirm nascent interest in the field (Benson et al., 2015, Coates et al., 2008). Beyond supporting specialty choice, clinical clerkships in dermatology are also vital to prepare committed students for the residency application process, offering important opportunities for faculty connection, performance evaluation, and letters of recommendation (Benson et al., 2015, Hauer et al., 2008). In recent years, dermatology residency programs have relied increasingly on applicants’ publication records when evaluating and ranking candidates, as reflected in the significant year-over-year increase in publication counts among accepted applicants (Cline et al., 2021, Ezekor et al., 2020, Wang and Keller, 2016). Dermatology clerkships serve an important function in preparing students for this component of the application process as well, connecting them with both mentors and publishable academic opportunities. Thus, clinical clerkships in dermatology strongly influence students’ career choices and residency preparation, but the assessment systems used on these and other subspecialty clerkships lack standardization (Jagadeesan et al., 2014, Lindeman et al., 2013, O’Connor et al., 2017, Westerman et al., 2019).
Across institutions, available evidence indicates inconsistent use of pass–fail and tiered (i.e., high honors/honors/pass/fail) assessment outputs, as well as persistent subjectivity in evaluation methods (Ange et al., 2018, Hauer and Lucey, 2019, Spring et al., 2011, ten Cate and Regehr, 2019, Westerman et al., 2019). The impact of these variable and subjective assessment methods on medical students’ experiences during introductory dermatology clerkships remains underexplored. Existing data underscore that nontransparency and significant variations in clerkship assessments may detract from medical students’ well-being, both in preclinical coursework and clinical clerkships (Spring et al., 2011, Wasson et al., 2016). These issues have been explored most extensively in surgical fields, with prior work indicating a relationship between the lack of a standardized assessment system and reduced student competency and confidence (Lindeman et al., 2013). In the field of emergency medicine, a survey administered to clinical clerkship directors (CDs) regarding medical student assessment revealed a wide breadth of assessment tools in use. Furthermore, 90% of respondents preferred the development of a nationally standardized assessment tool to the existing inconsistent evaluation of medical students (Lawson et al., 2016). Variable and subjective assessment methods have thus been demonstrated in other fields, but the literature investigating these phenomena and possible targeted interventions in dermatology is lacking. Furthermore, these variable and subjective assessment methods may be more consequential in competitive subspecialties, such as dermatology, which rely heavily on institutional evaluations and narrative reports to evaluate applicants for residency programs (Gorouhi et al., 2014, Wang et al., 2018).
To optimally support students considering careers in dermatology, medical educators may benefit from additional information about the relationship between assessment systems and learner perceptions of assessment on dermatology clerkships. To address this need, we qualitatively investigated medical students’ perceptions of subjectivity and variability in their clerkships’ assessment systems, aiming to identify persistent student concerns and opportunities for potential improvement. Furthermore, to better understand transparency of information passed from CD to student, we also evaluated concordance between these groups’ reports of their clerkships’ assessment components.

Methods

Sample selection

In April and May of 2018, we invited CDs via the Association of Professors of Dermatology mailing list to complete an initial survey describing the assessment system for the introductory dermatology clerkship at their institution. CDs indicated whether they would be willing to be contacted for a qualitative study requiring student involvement. At programs indicating interest in further participation, CDs were asked to invite students who completed an introductory dermatology clerkship between 2018 and 2019 to participate in a semi-structured interview regarding their experiences with assessment. We used purposive sampling to select a set of student–CD dyads for the qualitative component of the study, representing a range of geographic regions, clerkship structures, and assessment systems. Students who opted to enroll participated in a recorded, semi-structured phone interview between February 1, 2019 and October 1, 2019. Participating students provided informed consent prior to the interview and were offered a $15 gift card for their time. If needed, students were e-mailed for clarification about their responses within 3 months of the original interview.

Qualitative data collection

Student demographic information was collected prior to the interview (Supplement 1, Question 1). Interviews lasted 15 to 20 minutes. The interview included 10 questions (Supplement 1, Questions 2–11) exploring the following topics: students’ clerkship expectations, relationships and experiences receiving feedback from faculty, understanding of the clerkship assessment system, and perceptions of the effect of assessment on clerkship experiences. Feedback experiences were neutrally defined as formal if they were planned, structured, and scheduled (e.g., a mid-clerkship feedback meeting) or informal if they were immediately provided to the student (e.g., critiquing examination skills or oral presentations; Tuma and Nassar, 2020). Assessment systems encompassed all aspects of an assessment, including components (i.e., methods of evaluation, such as clinical performance, presentations, and examinations), sources (i.e., individuals assessing students, such as faculty and residents), and outputs (i.e., final grade and written evaluations).

Data extraction and analysis

Interview recordings were transcribed verbatim. Consistent with prior work, we analyzed interview transcripts using a template analysis approach derived from traditional phenomenology (Brooks et al., 2015, Corr et al., 2017, Hardy et al., 2014, Symon and Cassell, 2017, Symon et al., 2017). This approach entailed the construction of an initial coding template, developed by two researchers (JY, JTS) using a subset of two transcripts. As more transcripts were analyzed, the coding template was iteratively developed. The final coding template was then validated by a third researcher (LLT). Using this final coding template, iterative thematic categorization was performed by two blinded coders (JY, JTS). The coding was repeated until all themes were comprehensively described with complete intercoder agreement, both within and across individual interview transcripts. This study was approved by the Massachusetts General Hospital and Harvard Medical School institutional review boards.

Results

Among the six CDs who indicated interest in study participation, five ultimately responded with student interviewees (response rate: 5 of 6 CDs; 83.3%). A total of 10 medical students (median age: 26 years; 70% female) from five different institutions who completed an introductory dermatology clerkship, along with their corresponding CDs, provided information regarding assessment components, sources, and outputs. Demographic information for the students and institutions is provided in Table 1. The 10 students also completed semi-structured, qualitative interviews on their overall experiences with the assessment system. To protect student privacy, students were randomly assigned numbers, which are used throughout this work.
Table 1

Student and institution characteristics.

Student characteristics (N = 10)

Student number | Sex | Age at time of interview, years | Ultimate choice of pursued specialty | Institution number
1 | Female | 25 | Dermatology | 1
2 | Female | 26 | Pediatrics | 2
3 | Female | 26 | Dermatology | 2
4 | Female | 25 | Dermatology | 3
5 | Male | 29 | Dermatology | 4
6 | Female | 35 | Dermatology | 5
7 | Female | 27 | Dermatology | 4
8 | Male | 24 | Dermatology | 3
9 | Female | 25 | Dermatology | 4
10 | Male | 28 | Dermatology | 2

Institution characteristics (N = 5)

Institution number | Geographic region | Public/private status | Number of full-time faculty (<1500; 1500–3000; >3000) | Number of dermatology faculty (<25; 25–50; >50)
1 | Northeast | Private | 1500–3000 | >50
2 | West | Public | 1500–3000 | >50
3 | Southeast | Private | <1500 | <25
4 | Northeast | Private | >3000 | >50
5 | Midwest | Public | <1500 | <25

Preclerkship: Limited understanding of clinical role and assessment system

Few students (3 of 10) received clerkship-specific materials delineating their expected clinical role prior to clerkship onset, and most (7 of 10) sought collateral information from unofficial sources, such as peers. Notably, even when clerkship-specific materials were available, the practical implications of this information were often unclear. As Student 5 articulated: “I knew that the course catalog had a paragraph on [clinical] expectations, but it wasn't clear whether that had any real-world relevance” (Table 2). Student 7 echoed these sentiments: “I think especially for clinic, we didn’t know what the expectations were. I wasn’t sure […] if I should be politely shadowing and not interfering with clinic flow because they’re very busy or if I should be aggressively offering to see patients independently to seem enthusiastic. It was very unclear.”
Table 2

Pre-clerkship: Limited understanding of clinical role and assessment system.

Themes | Representative quotes

A) Clinical role
“I think especially for clinic, we didn’t know what the expectations were. I wasn’t sure […] if I should be politely shadowing and not interfering with clinic flow because they’re very busy or if I should be aggressively offering to see patients independently to seem enthusiastic. It was very unclear.” (Student 7)
“I knew that the course catalog had a paragraph on [clinical] expectations, but it wasn't clear whether that had any real-world relevance.” (Student 5)
“I was told that I would be observing and not much else.” (Student 10)

B) Assessment system
“Not having a concrete idea of how [I was] being graded [was] worrisome.” (Student 8)
“If I’m working more with one person […] does their evaluation get weighted more […] [do] they ask them to look at your progress throughout?” (Student 4)
“I knew it would be similar to the other advanced clerkships […]. [But] in terms of knowing what was going to go into being graded, the grade itself, I didn’t know […]. I didn’t know the weights of resident and faculty comments, or how much clinic would matter […]; it would have shaped my preparation if there’d been more detail about the grading system.” (Student 7)
Students’ pre-clerkship understanding of assessment systems was similarly limited. Although 8 of 10 students reported that they knew the assessment output, substantial confusion existed regarding the components and sources of the assessment, as well as their relative weights. As Student 4 recalled, it was unclear “if faculty evaluations were weighted more than resident evaluations” and whether duration of time or frequency of contact mattered: “If I’m working more with one person […], does their evaluation get weighted more, […] [do] they ask them to look at your progress throughout?” (Table 2). In addition, some students articulated that limited detail regarding the relative weight of different assessment components left them unsure about how to apportion their efforts, despite significant concern over how their assessment outputs would be assigned. As Student 7 summarized: “I knew it would be similar to the other advanced clerkships […]. In terms of knowing what was going to go into being graded, the grade itself, I didn’t know […]. I didn’t know the weights of resident and faculty comments, or how much clinic would matter […]; it would have shaped my preparation if there’d been more detail about the grading system.”

During clerkship: Variable expectations, limited feedback, and performative pressures

During the clerkship, variability among different residents and faculty members often led to student confusion. As Student 4 articulated, expectations could be “really different depending on who you were working with,” with dynamics varying from “a shadowing experience” to “seeing patients on my own and presenting to an attending” (Table 3). Confusion surrounding variable expectations could persist well into the rotation, causing longitudinal distress. As Student 2 reflected: “I only really understood what my role [was] two weeks into the rotation […], [which] was hard for me as a student.”
Table 3

During clerkship: Variable expectations, scant feedback, and performance pressure.

Themes | Representative quotes

A) Variable expectations
“It was really different depending on who you were working with. Some people it was more of a shadowing experience […], then other times I think, probably more so for me because people knew me a little bit better […]; I got to do a lot of seeing patients on my own and then presenting to the attending.” (Student 4)
“I only really understood what my role [was] two weeks into the rotation […], [which] was hard for me as a student.” (Student 2)
“I felt like [my resident] was a great resident in the sense that she was very clear in what her expectations were and really outlined what she felt would be above and beyond. And so I always felt with her it was pretty clear what I needed to do in order to make her happy […]. I knew she liked really, really thorough notes and lots of literature citations. And so, because she was very clear, while I was staying late, it wasn’t nebulous what I needed to do. But then, on my last week of the rotation, she switched and someone else came in. And he was very different. Very different residents, very different styles.” (Student 9)

B) Limited feedback
“There wasn’t a formalized feedback session […] this aspect was entirely self-directed […] with no focus on improving specific skills and receiving feedback around them.” (Student 1)
“I just got a lot of informal feedback as I went […]. I just kind of took what I could get when I got it […], but there was no formal feedback session.” (Student 4)
“The feedback I was getting was pretty positive […], but it’s always a toss-up. You never really know until you see the grade […] there wasn’t a clear sense of the benchmarks you need to make.” (Student 8)

C) Performative pressures
“I was very surprised […] when we were going through [cases], how much [attendings] would look stuff up, and then put their notes down and teach everyone, then and there—that was really impressive […]. Everyone was very nice, and willing to teach and encouraging. It was just wonderful.” (Student 6)
“I always kind of knew these were the people I wanted to impress, [who] would be interviewing me, [and] hopefully ranking me to match […]. I wanted to weasel my way in there […] and make people remember me […]. I had an advantage, because I knew everyone [in the department] pretty well already, and so [during the rotation] there were just a couple I still had to seek out […] [to get] great exposure to everyone.” (Student 4)
“I’ve never liked the idea that as a medical student I have to perform for all these people […]. I don’t like feeling like the entire thing is this show. It fe[els] disingenuous and uncomfortable. For me, I’d rather just do a really good job of caring for my patients.” (Student 3)
“I was stressed about getting people to like me enough to write nice things about me that would go into my Dean’s letter.” (Student 9)
“I felt increased pressure to prioritize my goal of performing well enough to impress faculty over my goal of getting a feel for clinical dermatology and learning for my future patients’ sake.” (Student 5)
In addition, students reported receiving minimal structured feedback from faculty and residents during the rotation. Of the seven students who commented on receiving feedback, only one reported receiving formal feedback outside of the final evaluation, three received intermittent informal feedback throughout the rotation, and three received no feedback (Fig. 1A). As Student 1 recalled, “there wasn’t a formalized feedback session […], this aspect was entirely self-directed.”
Fig. 1

Types of feedback received and students’ interpretations of assessment outputs, including (A) reported types of feedback received during clerkship (of the 10 interviewed students, seven commented on receiving feedback; the percentages displayed are for the seven students who provided information on the types of feedback received during their introductory dermatology clerkships) and (B) student perceptions of values reflected in assessment output.

Even when feedback was provided, students perceived a lack of specificity and actionability. As Student 1 further articulated, there was “no focus on improving specific skills and receiving feedback around them” (Table 3). Student 8 echoed these sentiments, reflecting that “the feedback I was getting was pretty positive […] but it’s always a toss-up. You never really know until you see the grade […]; there wasn’t a clear sense of the benchmarks.”

With regard to performance pressures, students connected strongly with faculty: most (8 of 10) reported positive interactions, a few (2 of 10) reported neutral interactions, and none reported negative interactions. However, beneath these positive connections, students often perceived significant pressure to perform for and impress faculty. As Student 4 recalled: “I always kind of knew these were the people I wanted to impress, [who] would be interviewing me, [and] hopefully ranking me to match […]. I wanted to weasel my way in there […] and make people remember me […]. I had an advantage, because I knew everyone [in the department] pretty well already, and so [during the rotation] there were just a couple I still had to seek out […] [to get] great exposure to everyone” (Table 3). Some students viewed these pressures with ambivalence, characterizing the dynamic as one of inauthentic showmanship. As Student 3 articulated: “I’ve never liked the idea that as a medical student I had to perform for all these people […]. I didn’t like feeling like the entire thing was this show. It felt disingenuous and uncomfortable. For me, I’d rather just do a really good job of caring for my patients.”

Postclerkship: Nontransparent, subjective, and preordained assessment system with opportunities for feasible change

With regard to nontransparency, even after completing the clerkship and receiving final assessment outputs, students continued to express a persistent lack of clarity regarding assessment systems. As Student 8 summarized, there was “a lack of very concrete things […] you need[ed] to achieve in order to get honors” (Table 4). Student 4 echoed these sentiments: “It’d be neat to be more transparent […]; I don’t even know if that final presentation counted.”
Table 4

Post-clerkship: Nontransparent, subjective, and preordained assessment system, with opportunities for feasible change.

Themes | Representative quotes

A) Nontransparent
“The assessment methods were fairly unclear.” (Student 7)
“A lack of very concrete things […] you need[ed] to achieve in order to get honors.” (Student 8)
“It was unclear to me, for example, whether residents had a formal evaluative role.” (Student 5)

B) Subjective
“You never really know what people think of you until that final clerkship grade comes out […], [but] we know that [grades] are subjective […]. It represented whether people liked me or not.” (Student 2)
“I don’t know how arbitrary the Derm Sub-I is, but I would imagine it’s extraordinarily arbitrary. I kind of knew it would be a lot of not just working hard, but also how well I got along with my resident.” (Student 9)
“[Grades] were an aggregate of largely subjective but still meaningful snapshots of my ability to be helpful to and well-liked by attendings, residents, and support staff in dermatology clinical settings.” (Student 5)
“It was your personality they were looking at, not some score on a paper. That was one of the things I loved about it.” (Student 6)
“I felt anxious that things like sitting in the wrong seat at grand rounds, asking a question at the wrong time in meetings, or saying something awkward in the charting room, would be more likely to affect others’ opinions of me than my clinical judgment, medical knowledge, or interactions with patients.” (Student 2)
“There’s a disconnect between your book knowledge and your knowledge that you’ve shown in front of the attending […], and everyone has a different sense of what’s excellent versus what’s not.” (Student 10)

C) Pre-ordained
“I had the assumption that if they like you, and they know you're going into derm[atology], you'll get honors.” (Student 4)
“It had the reputation that if you… show up and are interested, you'll—most people get honors.” (Student 8)
“My understanding of the grade was that everyone received the highest score… [so] to me, the course was functionally pass-fail, which made for a positive experience.” (Student 4)
“I had also kind of heard that [the medical school] doesn’t really not give [the highest grade] for these types of rotations.” (Student 9)

D) Opportunities for feasible change
“I think it would definitely have shaped my preparation if there had been more detail about the grading system. If they told us ‘read X, Y, and Z, and your clinic experience will count for this much,’ that would have made a difference.” (Student 7)
“I developed a sense of what [faculty] were looking for over the course of the rotation, based on their reaction to my presentations […]. It worked out fine, but it would have ease[d] the nerves if [there’d been] a little bit more concreteness in terms of clinical evaluations, what key people [were] looking for.” (Student 8)
“It’d be neat to be more transparent […]. I don’t even know if that presentation counted for anything.” (Student 4)
Within programs, students’ postclerkship perceptions of assessment components and sources were frequently discordant with one another. For example, two students from the same institution (Students 2 and 10) disagreed on whether a final examination and resident reports were incorporated into the assessment, and two students from another program (Students 4 and 8) disagreed on the role of didactic participation in determining the final grade (Table 5).
Table 5

Student- and clerkship director-reported assessment components, sources, and outputs (N = 10).

Student assigned number | Assessment output scale* | Perceived assessment components and sources reported by students | Actual assessment components and sources reported by clerkship directors | Student–clerkship director concordance
1 | Three-tiered | Final presentation; faculty reports | Faculty reports; resident reports | No
2 | Two-tiered | Final exam; final presentation; faculty reports; resident reports | Final exam; final presentation; resident reports | No
3 | Two-tiered | Final exam; final presentation; faculty reports; resident reports | Final exam; final presentation; resident reports | No
4 | Four-tiered | Final presentation; faculty reports; resident reports | Faculty reports; resident reports | No
5 | Four-tiered | Final presentation; faculty reports; resident reports | Faculty reports; resident reports | No
6 | Three-tiered | Final presentation | Final exam; faculty reports; resident reports | No
7 | Four-tiered | Final presentation; faculty reports; resident reports | Faculty reports; resident reports | No
8 | Four-tiered | Final presentation; faculty reports; resident reports; didactic participation | Faculty reports; resident reports | No
9 | Four-tiered | Final presentation; faculty reports; resident reports | Faculty reports; resident reports | No
10 | Two-tiered | Final presentation; faculty reports | Final exam; final presentation; resident reports | No

* Tiers refer to possible assessment outputs. Three-tiered systems had three potential final outputs: honors, pass, and fail. Four-tiered systems had four potential final outputs: honors with distinction, honors, pass, and fail.

Interestingly, there was also substantial discordance between student and CD characterizations of assessment components and sources. Although students in all five programs reported a final presentation as a key component of the assessment, only 1 of 5 CDs reported this to be a part of the assessment (Table 5). Furthermore, none of the 10 students correctly identified all relevant assessment components and sources (Table 5). In addition to the lack of transparency, all reporting students (7 of 7) identified subjectivity and arbitrariness as significant challenges underlying the current assessment systems. Furthermore, only 2 of 10 students believed that clerkship grades reflected clinical competency. As Student 10 articulated: “There’s a disconnect between your book knowledge and your kind of knowledge that you’ve shown in front of the attending […] and everyone has a different sense of what’s excellent versus what’s not” (Table 4). Student 9 echoed these thoughts, endorsing “a feeling that grades were a very arbitrary decision.” Nearly all students (8 of 10) viewed these subjective distinctions, and assessment outputs more generally, as reflecting their cultural fit and/or global likability in the specialty (Fig. 1B). Some students viewed this as a positive component of assessment, such as Student 6, who articulated: “It was your personality they were looking at, not some score on a paper. That was one of the things I loved about it.” For others, perceptions of assessment as subjective measures of likability became a source of anxiety and distraction.
As Student 2 articulated: “I felt anxious that things like sitting in the wrong seat at grand rounds, asking a question at the wrong time in meetings, or saying something awkward in the charting room, would be more likely to affect others’ opinions of me than my clinical judgment, medical knowledge, or interactions with patients.” Student 5 echoed these sentiments while also articulating the hierarchy of his rotation goals more explicitly: “I felt increased pressure to prioritize my goal of performing well enough to impress faculty over my goal of getting a feel for clinical dermatology and learning for my future patients’ sake.” Some students (3 of 10) viewed assessment outputs as not only subjective measures of cultural fit and likability but also preordained, assuming an honors output was assigned to all rotators. As Student 4 recalled, “I had the assumption that if they like you, and they know you're going into derm[atology], you'll get honors” (Table 4). With regard to opportunities for feasible change, students identified several areas for improvement to address the challenges of assessment variability. First, students articulated the need for increased clarity regarding clerkship expectations, especially surrounding clinical roles. As Student 8 noted, substantial opportunities exist for “more concreteness in terms of the clinical evaluations […] [and] what key people are looking for” (Table 4). Second, most students (8 of 10) explicitly stated a desire for increased transparency regarding assessment components and sources. More specifically, students identified the need for this information at the outset to help shape their approach to the clerkship. As Student 7 reflected: “I think it would definitely have shaped my preparation if there had been more detail about the grading system. If they told us, ‘read X, Y, and Z, and your clinic experience will count for this much,’ that would have made a difference.” Finally, students identified the need for more actionable feedback, supported by elements such as structured feedback forms and protected time for faculty–student discussion of clinical performance. As Student 8 asserted, students were only able to develop a “sense of what [faculty] were looking for” by “the second week or third week,” so it would have helped to “ease the nerves […] if it had been a little more concrete,” leveraging features such as “midterm feedback forms […] where faculty member[s] write down things you’re doing well, [and] things you could work on.” Globally, although students articulated significant concerns regarding lack of transparency, subjectivity, and limited feedback, they also viewed these challenges as opportunities for feasible change, with 7 of 10 students suggesting a change related to one of these themes. Furthermore, despite these challenges, 8 of 10 students reported feeling more positive about dermatology as a specialty after the clerkship.

Discussion

This study was an exploratory effort to qualitatively characterize student perceptions of assessment systems used on introductory dermatology clerkships. We found that students reported persistently limited understanding of these assessment systems, characterizing them as nontransparent, subjective, and arbitrary. Encouragingly, however, students also viewed these challenges as malleable, identifying contributing factors before, during, and after the clerkship that merit change. Corroborating prior work in other specialties, students frequently entered their dermatology clerkship with limited understanding of the assessment system and their clinical role (Bosch et al., 2017, O’Brien et al., 2007). Notably, even after completing the clerkship, no student could correctly identify all assessment components and sources, as highlighted by the universal discordance between student-reported and CD-reported assessment components and sources. Available evidence underscores that this durable paucity of information may impair clerkship preparation and impede the educational transition from observer to care provider (Bosch et al., 2017, Surmon et al., 2016). Importantly, these stressors may also undermine students’ well-being, heightening anxieties in the setting of perceived scrutiny and contributing to long-term burnout (Benbassat et al., 2011, Bosch et al., 2017, Dyrbye et al., 2009). Promisingly, prior work suggests potential solutions to better support students exploring dermatology, such as standardizing orientation materials and providing detailed information about clerkship logistics, faculty contacts, and assessment systems (components, sources, and outputs) prior to clerkship onset (Atherley et al., 2016, Coates, 2004).
Given that students’ clerkship experiences have been shown to influence specialty selection, perceived subjectivity and bias in student assessment might also conceivably dissuade some students from pursuing a career in dermatology (Benson et al., 2015, Coates et al., 2008). This is particularly relevant when considering the experiences of medical students underrepresented in medicine (URM), defined as students with an African-American/Black, Latino/Hispanic, American Indian/Native Alaskan, or Native Hawaiian/Pacific Islander racial/ethnic background (Low et al., 2019). Prior single-institution work has shown that both URM and non-URM minority students (e.g., Asian-American students) receive disproportionately lower-quality medical student performance evaluations compared with white students, and similar grading disparities favoring white students exist in most core clinical clerkships (Low et al., 2019). Because these disparities exist in clerkships outside dermatology that face similar criticism for subjectivity and variability, students may suspect analogous bias against URM and non-URM minority students in dermatology clerkships’ assessment methods. Dermatology is a notably nondiverse medical specialty; nonwhite and URM students are disproportionately underrepresented among applicants to dermatology residency programs and among dermatology residents (Akhiyat et al., 2020, Vasquez et al., 2020, Van Voorhees and Enos, 2017). Pursuing assessment standardization and improved feedback to address perceived subjectivity and bias may help dismantle existing barriers to a career in dermatology for these student populations. To create a more supportive environment for specialty differentiation and clinical growth, assessment systems must be delineated proactively, with transparency bolstered through a consistently implemented assessment method combined with frequent, structured feedback.
In other specialty fields, medical school CDs have made nationwide efforts to introduce standardized assessment tools to address the inconsistent assessment practices used across institutions. In 2018, a Delphi consensus process among stakeholders resulted in the development of the National Clinical Assessment Tool for Medical Students in Emergency Medicine, an easily accessible, standardized tool for medical educators in the field (Jung et al., 2018). CDs and other leaders in dermatology medical education may consider developing a similar tool to combat the subjectivity and variability in assessment perceived by students, both within and across institutions. Substantial discordance between student and faculty perceptions of assessment systems underscores a lack of transparency and significant limitations in existing feedback mechanisms. As work in other clerkship settings highlights, optimal student feedback should be timely, expected, data-driven, and actionable (Bernard et al., 2011). However, our findings indicate that rotating students rarely received feedback meeting these parameters. This limited feedback, coupled with students’ unclear pre-clerkship expectations, may have created undue stress, detracting from meaningful self-assessment (Kluger and DeNisi, 1996). In addition, all students in our study fall within the millennial age range. Prior work has demonstrated that, among dermatology trainees, millennial learners value abundant feedback that is both thorough and consistently offered (Wang et al., 2019). This preference stands in contrast to the limited quantity and inconsistent delivery of organized formal feedback reported by the dermatology clerkship students interviewed for our study. Enhancing feedback systems may also address students’ perceptions of assessment systems as subjective structures rewarding likability.
As students articulated, these perceptions precipitated significant performance pressures that at times eclipsed knowledge-centered learning objectives (Alikhan et al., 2009, Wu and Tyring, 2003). The implementation of formal mid-rotation feedback sessions with faculty members has been shown to support behavior change and substantive clinical development in other settings (Delzell et al., 2011). To support these sessions, training both medical students and faculty to communicate effectively during feedback can reduce perceptions of subjectivity, creating a less-pressured learning environment (Kogan and Shea, 2008, Konopasek et al., 2016, Milan et al., 2011, Schartel, 2012). Improving communication between students and faculty may also strengthen faculty–student relationships, enriching the dermatology learning environment for all stakeholders. We summarize our recommendations for feasibly changing dermatology clerkship assessment methods in Fig. 2.
Fig. 2

Summary of proposed changes to assessment methods.

Finally, we sought to examine the relationship between assessment system variability, perceived subjectivity, and student experiences. Reassuringly, although a lack of transparency, subjectivity, and limited feedback were identified as areas for feasible change by 7 of 10 students, more students overall (8 of 10) reported a positive dermatology clerkship experience. Neither of the two remaining students identified these clerkship phenomena as reasons for an overall nonpositive experience, suggesting that other positive aspects of the clerkship outweighed unclear assessment methods for nearly all students. Limitations of this study include the sample size, which may limit the generalizability of our findings. In addition, recall bias may have affected our interview data because interviews were conducted after students had received their assessments and among a cohort of students who almost entirely chose to pursue residency training in dermatology. Similarly, student interviewees were identified through their CDs; conceivably, students who had more positive experiences on their clerkship may have been more likely to participate. We acknowledge that no students in our sample reported overall negative experiences on their dermatology clerkship, which may be an artifact of recall bias or interviewer bias during data collection.

Conclusion

Our results highlight novel facets of student perspectives regarding assessment systems in dermatology. By qualitatively characterizing student perceptions of assessment systems, our study provides scaffolding to improve educational experiences and support feasible reform in dermatology medical student clerkships. Future studies should evaluate the impact of assessment standardization on educators’ ability to accurately assess and provide feedback to medical students in dermatology clerkships. More broadly, we hope that these findings support the design and implementation of novel educational tools to combat subjectivity and variability in medical student assessments.
References (47 in total)

1. Dyrbye LN, Thomas MR, Harper W, Massie FS, Power DV, Eacker A, Szydlo DW, Novotny PJ, Sloan JA, Shanafelt TD. The learning environment and medical student burnout: a multicentre study. Med Educ. 2009.

2. Low D, Pollack SW, Liao ZC, Maestas R, Kirven LE, Eacker AM, Morales LS. Racial/ethnic disparities in clinical grading in medical school. Teach Learn Med. 2019.

3. Wang JV, McGuinn K, Keller M. Optimizing visiting clerkships in dermatology: a dual perspective approach. Dermatol Online J. 2018.

4. Ten Cate O, Regehr G. The power of subjectivity in the assessment of medical trainees. Acad Med. 2019.

5. Konopasek L, Norcini J, Krupat E. Focusing on the formative: building an assessment system aimed at student growth and development. Acad Med. 2016.

6. Wang JV, Keller M. Pressure to publish for residency applicants in dermatology. Dermatol Online J. 2016.

7. Jagadeesan VS, Raleigh DR, Koshy M, Howard AR, Chmura SJ, Golden DW. A national radiation oncology medical student clerkship survey: didactic curricular components increase confidence in clinical competency. Int J Radiat Oncol Biol Phys. 2014.

8. Akhiyat S, Cardwell L, Sokumbi O. Why dermatology is the second least diverse specialty in medicine: how did we get here? Clin Dermatol. 2020.

9. Corr M, Roulston G, King N, Dornan T, Blease C, Gormley GJ. Living with ‘melanoma’ … for a day: a phenomenological analysis of medical students’ simulated experiences. Br J Dermatol. 2017.

10. Bosch J, Maaz A, Hitzblech T, Holzhausen Y, Peters H. Medical students’ preparedness for professional activities in early clerkships. BMC Med Educ. 2017.
