Daniel Walden1, Meagan Rawls2, Sally A Santen2,3, Moshe Feldman2, Anna Vinnikova4, Alan Dow4. 1. Virginia Commonwealth University School of Medicine, Richmond, VA USA. 2. Office of Assessment, Evaluation, and Scholarship, Virginia Commonwealth University School of Medicine, Richmond, VA USA. 3. University of Cincinnati College of Medicine, Cincinnati, USA. 4. Department of Internal Medicine, Virginia Commonwealth University School of Medicine, Richmond, VA USA.
Abstract
Introduction: Medical schools vary in their approach to providing feedback to faculty. The purpose of this study was to test the effects of rapid student feedback in a course utilizing novel virtual learning methods. Methods: Second-year medical students were supplied with an optional, short questionnaire at the end of each class session and asked to provide feedback within 48 h. At the close of each survey, results were emailed to faculty. After the course, students and faculty were asked to rate the effectiveness of this method. This study did not affect administration of the usual end-of-course summative evaluations. Results: Ninety-one percent of students who participated noted increased engagement in the feedback process, but only 18% on average chose to participate. Faculty rated rapid feedback as more actionable than summative feedback (67%), 50% rated it as more specific, and 42% rated it as more helpful. Some wrote that comments were too granular, and others noted a negative personal emotional response. Conclusion: Rapid feedback engaged students, provided actionable feedback, and increased communication between students and instructors, suggesting that this approach added value. Care must be taken to reduce the student burden and support relational aspects of the process.
Although student feedback on pre-clinical medical education is widely used and has been shown to be a valid and reliable source of critical information for teaching and curricular administration [1-5], the best approach to providing feedback to teaching faculty remains uncertain. Most medical curricula and faculty are assessed, and feedback delivered, through summative evaluations that students complete at the end of each block, weeks after many learning activities [2]. However, delayed feedback has been shown to be less accurate [6]. While the general impression of each lecture persists over time [7], students lose the specifics [8]. For example, in one study, over half of the students rated a lecture that had been canceled and was never delivered [9]. Further, when courses are offered annually, course improvement is delayed until the next year. Perhaps for this reason, US accreditors require feedback to be “scheduled and timely” [10]. How best to construct a feedback program for courses remains understudied.
Feedback given and received during a course allows faculty to address student concerns immediately [11]. With more rapid delivery of feedback, a few studies have demonstrated increases in voluntary student feedback response rates of over 35% [7, 12], perhaps because students appreciate that their comments will be used to improve their own education, not just the education of future students [13]. In addition, feedback should be specific to individual learning activities and instructors [14, 15], and rapid feedback has been purported to be more specific [11, 16]. More granular feedback may not always be better, however, and studies have not investigated how the quality of rapid feedback differs from that of summative feedback. Rapid feedback has also been shown to have a small, negative effect on session ratings [17].
Rapid feedback may have become more socially desirable and acceptable as younger generations have come to expect the feedback process to move more quickly [18-20]. Despite these reasons and expert recommendations that the feedback process be concurrent with learning activities in preclinical curricula [12], this approach has been studied and reported in only a few examples: as paper surveys immediately after each lecture [5, 7], as “continuous feedback” implemented with regular meetings [11, 16], and for the purpose of correlating student perceptions with test scores [21]. Many of these studies occurred before medical curricula changed over the last decade to incorporate more active learning and asynchronous learning through recorded lectures [22, 23]. Understanding how best to obtain and provide feedback on newer approaches to instruction is also a gap in the medical education literature.
“Best practices” for feedback “should be considered as a set of tools that can be selected amongst and used strategically…” [24], suggesting that there are many ways the feedback process can work effectively. Here we examine rapid feedback as one tool in the toolkit. The purpose of this study was to test the effects of rapid feedback in a course utilizing traditional and novel learning methods. This study explored whether an approach to rapid feedback was feasible for students, whether receipt of this feedback was acceptable and useful to faculty, and how rapid feedback compared to the traditional, summative approach in terms of its actionability, specificity, and overall helpfulness.
Methods
Study Setting and Existing Feedback Process
The School of Medicine at Virginia Commonwealth University is part of a large, urban academic medical center with approximately 180 matriculants per year. The school employs an exclusively summative end-of-course feedback model throughout the preclinical curriculum. Evaluations for courses and professors are mandatory for all students, open for completion for several weeks during each learning block, and due within days after each final exam. There is one evaluation per professor, whether the professor is involved in one or multiple learning activities. Feedback is given concurrently with the overall course evaluation. Most students complete these evaluations just before they are due, that is, after the course has finished. Feedback is then compiled by a small group of students and presented at a meeting, usually within 6 weeks of the end of the block, to course directors and a representative from the curriculum office. The following month, at a meeting of the Curriculum Council, student and faculty leaders review the course as a whole and recommend changes based on student and faculty input. Course directors are responsible for delivering feedback to individual professors. Course improvements based on that feedback occur the following year.
The renal course at Virginia Commonwealth University is a multi-week learning block delivered every October to second-year undergraduate medical students. In 2020, 18 instructors provided up to four hours of instruction per day to 187 students through traditional lectures, graded and nongraded team-based learning sessions, and flipped classroom learning activities. This course was chosen because of its variety of learning methods. With the exception of one team-based learning session, attendance at all sessions was optional. As is standard practice, all sessions were recorded and made available to students online within 24 h. Due to the COVID-19 pandemic, all sessions were available live online via Zoom video conferencing.
Many sessions were still delivered in the usual classrooms, and for those sessions, students were welcome to attend in person. Lecture-based sessions were coded as passive sessions, while labs, workshops, and team-based learning sessions were considered active. Patient panels were not considered active or passive. There were 33 passive learning sessions (70%) and 14 active sessions (30%).
Feedback Approach
Rapid feedback was obtained in this study through forms created on the web tool Qualtrics and provided to students. Each form was linked to a specific session and included 3 questions:

1. “How and when did you participate in this session?”
2. A 5-point Likert scale for rating the session
3. A text box for specific comments

Students were able to access each feedback form via any of 3 methods: a QR code displayed after the session for at least 15 s, a link posted near the end of the session in the online chat box for that session, and a link on the students’ online learning platform (“eCurriculum”) available at all times, including before and after the session. To allow students participating asynchronously to provide rapid feedback, each form was closed to new answers approximately 48 h after the session. The form did not collect any identifying information, and all questions were optional. Reports of the rapid feedback were then provided to faculty by email. Items are listed in Appendix 1.
Assessment Approach
At the end of the course, the effectiveness of rapid feedback was assessed through surveys of students and faculty. Because feedback is subjective, student feedback provided during the course was intentionally not analyzed by study authors to avoid introducing bias. Rather, instructors judged whether feedback was helpful to them and indicated that on the post-course survey. A qualitative analysis was performed on these comments.
The student survey was sent out after the course final exam via email and posted in the private class group chat by a student co-investigator. The student survey contained 9 questions total: 5 closed-ended items and 4 open-response items. Students were asked to indicate how often they provided feedback (1, never; 5, always) and how they provided feedback (link in Zoom chat, link on learning platform, QR code). Students were also asked to indicate their agreement (1, strongly disagree; 5, strongly agree) with the following statements:

1. Providing feedback during the course helped me feel more engaged with the feedback process.
2. Being asked to provide feedback so often became a burden.
3. Faculty made improvements or changes to the course as a result of student feedback.

The survey was available for one week, and students were reminded about it once during that period.
The faculty survey included 10 items total: 6 Likert-type questions and 4 open-ended questions. It was emailed to faculty following the final exam and was available for 1 week. Both instruments are available in Appendices 2 and 3.
Participant Engagement
Students and faculty were informed about the study prior to the course. A student co-investigator officially introduced the project to the faculty via email, with an option for them to reply to the email to opt out. Faculty were told that they would be emailed the results for each session in a PDF report 48 h after the session and that they were not required to read the report. Reminders about the project and ongoing communication with faculty throughout the course were provided by the course director.
Students were briefed on the study by the first author, who was also a peer. Students were introduced to the project via email prior to implementation, informed about the purpose of the study, and asked to be constructive and professional in their comments. They were reminded of the project numerous times throughout the course, on average once per week, via group chat messages. With each communication, students were reminded that participation was optional and anonymous, and professionalism was emphasized. Faculty were welcome to provide students with updates during the course on how they were using feedback that had been submitted.
Attendance at each session could not be measured precisely because of constantly fluctuating numbers on the virtual platform, so it was estimated based on the number of students present for the majority of the session. To account for asynchronous learners, the number of students who clicked “play” on the recording for each session was tracked. When each survey was closed a few days after its session, the number of clicks on that session’s recording was added to the number of live participants for an estimated total attendance.
This study did not affect administration of the usual summative evaluations for the course and faculty that all students were required to fill out during or shortly after the course. The study was approved by the Institutional Review Board.
Because the study was low risk, with breach of confidentiality as the primary risk, the requirement for informed consent was waived for all parties. Students were informed that information they provided would be used in a research study but would not be identifiable in any way.
Outcomes
The primary outcome of the study was quantitative survey data from students and faculty, drawn from the post-session forms and the post-course surveys. Descriptive statistics were used to explore survey results, including bivariate correlation analysis and a two-tailed t test. In addition to survey data, attendance was tracked to quantify participation rates. Qualitative data from feedback forms submitted during the course were intentionally not analyzed. Qualitative data from the post-course survey were assigned a positive, neutral, or negative valence, and comments were critically analyzed by the lead investigator. To understand student perceptions of rapid feedback, three metrics were analyzed quantitatively throughout the course:

1. Number of feedback forms submitted for each session compared to session rating
2. Ratings for active learning sessions compared to ratings for passive learning sessions
3. Number of feedback forms submitted before and after faculty or peer encouragement
Results
A total of 187 students were enrolled in the course. There were 19 instructors total for the course (Fig. 1).
Fig. 1
The timeline for obtaining and assessing rapid feedback
Fig. 2
Student participation with session material and rapid feedback, on average
Student Feedback After Each Session (Fig. 2)
There were 50 total sessions in the course. Forty-one individual sessions were rated, and 9 multi-instructor sessions were rated. The first session was dropped from analysis because of changes made to the survey items. An average of 117 students engaged with session material while each rapid feedback survey was open. The number of students responding ranged from 1 to 74, with an average of 21 responses per session (18% of attendees).
Of 1039 total feedback forms, 1003 (96.5%) were submitted by students who attended the session live, either in person or via Zoom video conferencing. Almost all forms were submitted within 10 min of the end of each session. Six hundred sixty-seven forms (64%) were accessed through a link and 372 (36%) via QR code. The average session rating was 4.64 (SD = 0.45). There was no correlation between the number of feedback forms submitted for each session and session rating. Students rated passive learning sessions more highly (mean = 4.58, SD = 0.31) than active sessions (mean = 4.12, SD = 0.57) (t16 = 2.86, p < 0.05). There was no relationship between encouragement by faculty and the number of feedback forms submitted for the subsequent day’s sessions.
Fig. 3
Student perceptions of rapid feedback, from 53 responses to the post-course survey
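For readers who wish to reproduce the statistical comparisons described in the Methods, the two key computations (a bivariate Pearson correlation and a two-tailed two-sample t test) can be sketched in plain Python. The per-session form counts and ratings below are hypothetical placeholders; the summary statistics are those reported for passive and active sessions, with group sizes assumed to be the 33 passive and 14 active sessions described in the Methods. This is a minimal sketch, not the authors' actual analysis code.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def welch_t(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic and Welch-Satterthwaite df from summary stats."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical per-session data: forms submitted vs. mean session rating
forms = [12, 30, 21, 8, 45]
ratings = [4.2, 4.8, 4.6, 4.1, 4.9]
r = pearson_r(forms, ratings)

# Summary statistics reported for passive vs. active sessions;
# group sizes (33 and 14) assumed from the session counts in Methods
t, df = welch_t(4.58, 0.31, 33, 4.12, 0.57, 14)
```

With these assumed group sizes, the unequal-variance (Welch) test yields t of about 2.85 on roughly 16 degrees of freedom, consistent with the reported t16 = 2.86.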
Student Perceptions of Rapid Feedback (Fig. 3)
Out of 187 total students, 53 (28%) responded to the post-course feedback survey. A minority (15/53; 28%) agreed that being asked to provide feedback so often became a burden, 48 of 53 respondents (91%) indicated that they felt more involved in the feedback process, and 37 of 53 (70%) agreed that improvements or changes were made to the course as a result of their feedback. Twenty of 30 student comments (67%) were assigned a positive valence in support of rapid feedback.
Fig. 4
Faculty perceptions of rapid feedback
Faculty Perceptions of Rapid Feedback (Fig. 4)
A total of 13 of 19 instructors responded partially or fully to the post-course feedback survey, for a response rate of 68%. Half of respondents (6/12; 50%) agreed that feedback was more specific compared to previous years. Forty-two percent (5/12) indicated that rapid feedback was more helpful. A majority of faculty (9/13; 69%) indicated that they would like to see rapid feedback implemented in future courses. Half (5/10; 50%) of the free-text comments supported rapid feedback. Nine of the 13 respondents taught multiple sessions in the course, and most of these (6/9; 67%) agreed they were able to implement feedback in a way that improved subsequent sessions. Changes included spending less time waiting on audience-response questions and refining alternative lecture formats when they did not translate well to online learning.
The course director noted that she found rapid feedback helpful for course administration and organization, such as when students requested that in-person lecturers wear a mask.
Unintended Consequences of Rapid Feedback
Some professors who taught the majority of the sessions in the course wrote in their comments that student feedback was sometimes unprofessional and overly negative. One wrote that the anticipation around finding negative comments was “terrifying.”

“Because there will always be at least 1 comment from a student who hated something, I had a feeling of ‘wings clipped’ and did not have as much inspiration to tell stories etc.”

“What’s coming, what am I going to read that might harm the teaching spirit that’s so critical? That spirit is far more critical than fixing some small thing that bothered one student.”
Discussion
A rapid feedback approach had both positives and negatives and identified some directions for future research. Overall impressions were generally positive from both students and faculty, but uptake by students was limited, and some comments were perceived as hurtful to faculty. These findings suggest that a rapid feedback program can add value but must also be carefully constructed, as discussed below.
Despite praise for the rapid feedback process from the majority of students who completed the surveys, most students chose not to participate, even though surveys were short and easily accessible. We did not see the large increases in participation that have been demonstrated by similar studies [7, 12]. Potentially, virtual and asynchronous learning may have made it difficult for some students to engage with class material and the feedback process. In addition, though only 28% of participating students indicated that such frequent requests for feedback became a burden, the volume of learning activities for second-year students and the ongoing requirement for summative evaluations may have limited participation. To improve response rates, leaders could incentivize or require participation, schedule time at the end of activities for feedback, or designate representative subgroups to provide feedback at different times. How higher response rates affect impact should be studied further.
While most faculty agreed that rapid feedback was actionable and made immediate changes, many indicated that the process had drawbacks. About half of faculty members indicated that the rapid feedback was more specific than summative feedback in past years, and some wrote that the details were too specific. These granular tidbits (e.g., “You say ‘okay’ too much”) helped adjust teaching in small ways but may have mired faculty in the weeds of subjective, individual criticisms.
Some faculty noted that rapid feedback may not have allowed students to appreciate the bigger picture of the course’s construction. Training for students on how to provide effective feedback might make comments more relevant. In addition, higher participation rates may help faculty understand whether nitpicky concerns are shared by multiple students and whether they should make the significant effort to adapt their teaching in real time [25].
While most faculty did not find rapid feedback to be more helpful than summative feedback, most also indicated that they would like to see a rapid feedback system implemented in the future. This finding suggests that both rapid feedback and summative feedback have merit, potentially in different ways. For example, the course director noted that rapid feedback increased communication with students and thus helped with course administration. It may be that rapid feedback is most effective when applied to specific topics, instructors, or learning modalities. This area should be further examined to determine the situations in which rapid feedback is most useful.
The context for feedback matters [14, 26, 27]. Feedback is received more effectively in well-established, longitudinal relationships, recently described as the “educational alliance” [15, 24, 28]. The credibility of the evaluator is especially important for the receipt of feedback [29]. Rapid feedback may be received before an educational alliance is established and before credibility is built for students as evaluators. This may increase the potential for mistrust and misinterpretation.
There are numerous ways to support the feedback process, such as platforms for discussion, meetings to build relationships and trust between faculty and students (or, at least, student leaders), and training on how to give and receive feedback. These supports may be even more important with a rapid feedback system.
In addition, summative feedback at our institution is normally delivered to course directors before being sent to faculty, which buffers faculty from any unfair or overtly negative comments. This approach may help mitigate the negative consequences of rapid feedback but also may slow the process. The impact of buffering feedback through an intermediary before it reaches its final recipients should be studied.
Limitations
This study was conducted in a single course. Several instructors only taught one session, which limited their ability to implement changes. Further, the majority of sessions were taught by three instructors, which makes the overall impact of the feedback on all instructors less clear. The low response rate may have exacerbated any response bias among students: for example, students who chose to fill out the optional form may have felt more strongly than the average student. Finally, almost all sessions were delivered online due to COVID-19, limiting generalizability of the results.
Conclusion
A rapid feedback approach had both positives and negatives. This approach increased communication between students and faculty and resulted in real-time changes to the course, and most professors indicated they would like to see a similar system implemented in the future. However, there were significant drawbacks, including low response rates from students and unintended emotional side effects on faculty. Rapid feedback mechanisms appear to add value, but they should be carefully employed to limit negative impacts and studied further to determine how they can be most helpful in improving educational outcomes.