Clare Guilding, Rachel Emma Pye, Stephanie Butler, Michael Atkinson, Eimear Field.
Abstract
Multiple choice questions (MCQs) are a common form of assessment in medical schools, and students seek opportunities to engage with formative assessment that reflects their summative exams. Formative assessment with feedback and active learning strategies improve student learning outcomes, but a challenge for educators, particularly those with large class sizes, is how to provide students with such opportunities without overburdening faculty. To address this, we enrolled medical students in the online learning platform PeerWise, which enables students to author and answer MCQs, rate the quality of other students' contributions, and discuss content. A quasi-experimental mixed methods research design was used to explore PeerWise use and its impact on the learning experience and exam results of fourth year medical students who were studying courses in clinical sciences and pharmacology. Most students chose to engage with PeerWise following its introduction as a noncompulsory learning opportunity. While students perceived benefits in authoring and peer discussion, they engaged most with answering questions, noting that this helped them identify gaps in knowledge, test their learning, and improve exam technique. Detailed analysis of the 2015 cohort (n = 444) with hierarchical regression models revealed a significant positive predictive relationship between answering PeerWise questions and exam results, even after controlling for previous academic performance; this was further confirmed in a follow-up multi-year analysis (2015–2018, n = 1693). These 4 years of quantitative data corroborated students' belief in the benefit of answering peer-authored questions for learning.
Keywords: MCQ; PeerWise; assessment for learning; collaborative learning; formative assessment; gamification; medical education; peer learning; single best answer
Year: 2021 PMID: 34309243 PMCID: PMC8311910 DOI: 10.1002/prp2.833
Source DB: PubMed Journal: Pharmacol Res Perspect ISSN: 2052-1707
Engagement of students with question answering, authoring, commenting, and rating in PeerWise during Stage 4 Semester 1 (September–December 2015). The “number engaged” column is the number of students who participated in each element of PeerWise
Clinical sciences (359 students engaged, 81% of the class of 444):

| Activity measure | Number engaged | Mean | Median | Max | Total |
|---|---|---|---|---|---|
| Questions answered | 359 | 162 | 137 | 343 | 59,920 |
| Questions authored | 47 | 0.5 | 0 | 49 | 203 |
| Question comments | 97 | 1.4 | 0 | 48 | 535 |
| Questions rated | 272 | 107 | 62 | 338 | 39,749 |

Clinical pharmacology (310 students engaged, 70% of the class of 444):

| Activity measure | Number engaged | Mean | Median | Max | Total |
|---|---|---|---|---|---|
| Questions answered | 308 | 99 | 99 | 184 | 31,124 |
| Questions authored | 28 | 0.3 | 0 | 23 | 104 |
| Question comments | 75 | 0.8 | 0 | 21 | 260 |
| Questions rated | 242 | 66 | 48.5 | 180 | 20,753 |

Percentages are relative to the total class of 444. Only students who activated their PeerWise account are included.
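Summary statistics of the kind shown above (number engaged, mean, median, max, and total per course) can be derived from a raw per-student activity export. The snippet below is a minimal illustrative sketch, not the authors' analysis code; the file name and the columns `student_id`, `course`, and `answers` are assumptions about how such an export might be organized.

```python
# Illustrative sketch only -- not the authors' code. Assumes a hypothetical
# long-format export with one row per student per course and a column giving
# the number of PeerWise questions that student answered.
import pandas as pd

log = pd.read_csv("peerwise_activity.csv")  # hypothetical export file

summary = (
    log.groupby("course")["answers"]
       .agg(
           number_engaged=lambda s: (s > 0).sum(),  # students with >= 1 answer
           mean="mean",
           median="median",
           max="max",
           total="sum",
       )
       .round(1)
)
print(summary)
```

Equivalent aggregations over questions authored, comments, and ratings would reproduce the remaining rows of the engagement table.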
FIGURE 1: Categorization of 50 clinical sciences and 50 clinical pharmacology questions according to Bloom's taxonomy. The y-axis shows the number of questions within each category.
FIGURE 2: Mean Stage 3 and Stage 4 knowledge exam scores over four cohorts: two cohorts before PeerWise introduction (2012, 2013) and two after (2015, 2016). The y-axis shows the mean exam score for each cohort; error bars are standard error of the mean.
Pearson's correlations between Stage 4 and Stage 3 exam scores, and PeerWise mean values of reputation and answering across the two target courses for 2015–2018 cohorts
| | Stage 4 | Stage 3 | Mean reputation |
|---|---|---|---|
| Stage 3 | .659 | | |
| Mean reputation | .077 | .083 | |
| Mean answer | .205 | .106 | .284 |
p < .01.
p < .001.
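The hierarchical regression described in the abstract, in which answering PeerWise questions predicted Stage 4 exam scores even after controlling for Stage 3 (prior) performance, is consistent with the correlations above. The sketch below shows how such a two-step model is typically fitted; it is illustrative only, and the DataFrame and column names (`stage3`, `stage4`, `mean_answered`) are hypothetical rather than taken from the study.

```python
# Illustrative sketch only -- not the authors' analysis code. Assumes a
# hypothetical per-student DataFrame with Stage 3 and Stage 4 exam scores and
# mean PeerWise questions answered across the two target courses.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peerwise_cohort.csv")  # hypothetical file

# Step 1: baseline model with prior attainment only.
base = smf.ols("stage4 ~ stage3", data=df).fit()

# Step 2: add PeerWise answering on top of prior attainment.
full = smf.ols("stage4 ~ stage3 + mean_answered", data=df).fit()

# The increase in R-squared from base to full indicates variance in Stage 4
# scores explained by answering beyond previous academic performance.
print(f"R2 (Stage 3 only):     {base.rsquared:.3f}")
print(f"R2 (+ mean answering): {full.rsquared:.3f}")
print(full.params, full.pvalues, sep="\n")
```

A significant positive coefficient on `mean_answered` in the second step, together with a meaningful increase in R², would correspond to the predictive relationship reported in the abstract.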
Student ratings of perceived relative benefit of different aspects of PeerWise for learning on a 5‐point Likert scale, 1 being the least benefit, 5 being the most
| | N | Mean | SEM |
|---|---|---|---|
| Answering questions | 167 | 4.40 | 0.07 |
| Reading explanations of answers | 167 | 4.37 | 0.06 |
| Writing explanations | 108 | 4.05 | 0.1 |
| Writing questions | 102 | 3.77 | 0.12 |
| Evaluating quality of questions | 152 | 2.97 | 0.1 |
| Commenting on questions | 128 | 2.84 | 0.1 |
Abbreviations: N, number of student responses; SEM, standard error of the mean.
Themes and subthemes identified from thematic analysis of student survey responses
| Theme | Subtheme (n) |
|---|---|
| Answering: benefits | Identifying gaps in knowledge (43); Testing and consolidation of learning (41); Exam technique and practice (29); Explanations improve knowledge/understanding (15); Wide range of curriculum-relevant questions (13); Benchmarking against peers (9); Novel, active revision method (9) |
| Answering: limitations | Questions too niche/difficult (15); Inconsistent question quality (13); Questions didn't reflect exam (4); Difficulty using the site (2) |
| Authoring: benefits | Stimulated learning through in-depth research into topic (13); Consolidation of knowledge/understanding (12); Insight into how exam questions are composed (9); Ensured thorough understanding of topic for good quality, error-free question (8); Writing distractors and explanations helps identify confounding information (4); Identification of gaps in knowledge (4) |
| Authoring: reasons for not authoring | Lack of time (41); Concern about question writing ability (32); Concern over negative peer feedback (7); Used other revision method (6); Difficult/time-consuming to write questions (5); Bank already full of questions (5); Unsure of benefit of authoring (4) |
| Commenting: reasons for commenting | For clarification of question/answers (24); Correction of incorrect knowledge/understanding (13); To help and encourage peers (11); Generates peer discussion (10); Explaining answers reinforces learning (7) |
| Commenting: reasons for not commenting | Comments already covered what student would have raised (8); Did not feel the need (6); Sought clarification in course materials (3); Concerned comment may be incorrect (2) |
Abbreviation: n, number of comments within each subtheme.