Kirsten K. Deane-Coe, Mark A. Sarvary, Thomas G. Owens.
Abstract
In an undergraduate introductory biology laboratory course, we used a summative assessment to directly test the learning objective that students will be able to apply course material to increasingly novel and complex situations. Using a factorial framework, we developed multiple true-false questions to fall along axes of novelty and complexity, which resulted in four categories of questions: familiar content and low complexity (category A); novel content and low complexity (category B); familiar content and high complexity (category C); and novel content and high complexity (category D). On average, students scored more than 70% on all questions, indicating that the course largely met this learning objective. However, students scored highest on questions in category A, likely because they were most similar to course content, and lowest on questions in categories C and D. While we anticipated students would score equally on questions for which either novelty or complexity was altered (but not both), we observed that student scores in category C were lower than in category B. Furthermore, students performed equally poorly on all questions for which complexity was higher (categories C and D), even those containing familiar content, suggesting that application of course material to increasingly complex situations is particularly challenging to students.
Year: 2017 PMID: 28130270 PMCID: PMC5332046 DOI: 10.1187/cbe.16-06-0195
Source DB: PubMed Journal: CBE Life Sci Educ ISSN: 1931-7913 Impact factor: 3.325
FIGURE 1. Factorial framework used for generating and categorizing questions in four categories (A–D) along axes of novelty and complexity.
FIGURE 2. Discrimination index (DI) values for questions in categories A–D administered in Spring 2014 (S14, open symbols), Fall 2014 (F14, gray symbols), and Spring 2015 (S15, black symbols). DI values did not differ significantly across categories or semesters (p > 0.05). Gray horizontal lines and shaded boxes represent the mean ± SD for each question category.
FIGURE 3. Mean (± SE) student scores (% correct) for questions in categories A–D over three semesters (n = 1139). Data were pooled from the three semesters to illustrate trends because semester was not a significant main effect in the mixed-effects model used. Different lowercase letters represent significant differences between categories (p < 0.05).
Mixed-effects model to explain the drivers of student scores on multiple true–false questions^a

| Fixed effect | Parameter value | SE | t | p^b |
|---|---|---|---|---|
| Novelty | 0.001 | 0.0067 | 0.137 | 0.891 |
| Complexity | **0.079** | 0.0067 | 11.74 | |
| Semester | 0.001 | 0.0065 | 0.570 | 0.305 |
| Question set | 0.004 | 0.0089 | 0.411 | 0.681 |
| Novelty × complexity | **−0.058** | 0.0095 | −6.147 | |
^a Novelty, complexity, their interaction (novelty × complexity), semester, and question set were included as fixed effects, and student nested within lab section was included as a random effect.
^b Values in bold indicate significant effects (p < 0.05).
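The model structure described in the footnote (categorical fixed effects with an interaction term, plus a grouping-based random effect) can be sketched in Python with `statsmodels`. This is a minimal illustration on simulated data, not the authors' analysis: the effect sizes below are invented for the simulation, and for brevity the random effect is a single lab-section intercept rather than the full student-within-section nesting used in the study.

```python
# Sketch: a simplified mixed-effects model of the kind described above,
# fit to simulated data. All effect sizes here are assumptions for the
# simulation; the random-effect structure is reduced to a lab-section
# intercept (the study nested student within lab section).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sections, n_students, n_questions = 10, 20, 8

rows = []
for sec in range(n_sections):
    sec_eff = rng.normal(0, 0.02)          # lab-section random intercept
    for stu in range(n_students):
        stu_eff = rng.normal(0, 0.05)      # student-level noise
        for q in range(n_questions):
            novelty = q % 2                # 0 = familiar, 1 = novel
            complexity = (q // 2) % 2      # 0 = low, 1 = high
            # Simulated effects: higher complexity lowers scores, with a
            # negative novelty x complexity interaction (consistent with
            # the abstract's findings; magnitudes are illustrative only).
            score = (0.85
                     + 0.001 * novelty
                     - 0.079 * complexity
                     - 0.058 * novelty * complexity
                     + sec_eff + stu_eff + rng.normal(0, 0.1))
            rows.append(dict(section=sec, student=f"{sec}-{stu}",
                             novelty=novelty, complexity=complexity,
                             score=score))
df = pd.DataFrame(rows)

# Fixed effects: novelty, complexity, and their interaction;
# random intercept grouped by lab section.
model = smf.mixedlm("score ~ novelty * complexity", df,
                    groups=df["section"])
result = model.fit()
print(result.params[["novelty", "complexity", "novelty:complexity"]])
```

With 1,600 simulated observations, the fitted coefficients for `complexity` and `novelty:complexity` come out clearly negative, mirroring the qualitative pattern in Figure 4's interaction plots.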
FIGURE 4. Interaction plots for the effect of complexity on scores for questions that differed in novelty (A), and the effect of novelty on scores for questions that differed in complexity (B). Points indicate the mean values (n = 1139) in each of the categories, and changes in slope of the connecting lines indicate the strength of interaction between the factors.