Špela Smrkolj, Enja Bančov, Vladimir Smrkolj.
Abstract
Certainty-Based Marking (CBM) involves asking students not only for the answer to an objective question but also for how certain they are that their answer is correct. In a mixed-methods design employing an embedded approach with a quasi-experimental design, we examined the use of CBM during a 5-week Gynaecology and Obstetrics course. The study was conducted as a non-mandatory revision exam with two additional questionnaires on Moodle. The majority of students perceived CBM as fair (78%) and useful (94%). Most students would want CBM to be used immediately for revision exams, but more practice would be needed before CBM could be used in graded exams. The lowest self-evaluation of knowledge was mostly seen among the worst achievers (less than 70% Accuracy) and the best achievers (more than 90% Accuracy); the worst achievers probably have knowledge gaps, and the best achievers probably correctly guessed at least one question. Our findings indicate that CBM does not discriminate against any learner type (p = 0.932) and does not change the general distribution of exam scores, since there was no significant difference between the Certainty-Based Score (M = 80.4%, SD = 10.4%) and Accuracy (M = 79.8%, SD = 11.1%); t(176) = 0.8327, p = 0.4061. These findings are widely applicable, as learner-type study models are used extensively in education. In the future, larger samples should be studied, and the implementation of CBM on question types other than MCQs should be investigated.
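As a minimal sketch of how CBM scoring works: each answer carries a stated certainty level, and the mark depends on both correctness and certainty. The mark scheme below is the widely used Gardner-Medwin scheme (certainty 1/2/3 → +1/+2/+3 if correct, 0/−2/−6 if wrong); the exact scheme used in this study is not stated in this record, so it is an assumption here.

```python
# Sketch of Certainty-Based Marking (CBM) scoring.
# ASSUMPTION: the Gardner-Medwin mark scheme; the study's exact scheme
# is not given in this record.

# certainty level -> (mark if correct, mark if wrong)
CBM_SCHEME = {1: (1, 0), 2: (2, -2), 3: (3, -6)}

def cbm_mark(correct: bool, certainty: int) -> int:
    """Mark for a single question, given correctness and stated certainty."""
    reward, penalty = CBM_SCHEME[certainty]
    return reward if correct else penalty

def average_cbm_mark(responses) -> float:
    """Average CBM mark = total points / number of questions."""
    return sum(cbm_mark(c, cert) for c, cert in responses) / len(responses)

# Example: three questions all answered at the highest certainty, one wrong.
responses = [(True, 3), (True, 3), (False, 3)]
print(average_cbm_mark(responses))  # (3 + 3 - 6) / 3 = 0.0
```

Note how a single confident wrong answer cancels two confident correct ones, which is what motivates honest self-evaluation under CBM.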
Keywords: certainty-based marking; confidence-based learning; online exam; self-evaluation; undergraduate medical education
Year: 2022 PMID: 35162729 PMCID: PMC8834968 DOI: 10.3390/ijerph19031706
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 3.390
Figure 1. Students’ responses to the questionnaire before the CBM exam. The labels on the left represent the question or a series of questions that were given in the questionnaire. The text above the stacked bars represents the possible answers, while the width of the bar represents the portion of students who selected that answer. The data for the Learner type questions have been analysed, and only the determined proportions for each learner type are represented.
Figure 2. Students’ responses to the questionnaire after the CBM exam. The labels on the left represent the question or a series of questions that were given in the questionnaire. The text above the stacked bars represents the possible answers, while the width of the bar represents the portion of students who selected that answer.
Results of mean-comparison tests between different student qualities (Learner type, Learning start, Rehearsal times, Self-evaluated knowledge) and performance indicators (Accuracy, CBM Bonus, and Certainty-Based Score). Values are p-values.
| Independent Variable | Accuracy | CBM Bonus | Certainty-Based Score |
|---|---|---|---|
| Learner type | 0.900 | 0.711 | 0.932 |
| Learning start | 0.512 | 0.940 | 0.453 |
| Rehearsal times | 0.593 | 0.862 | 0.423 |
| Self-evaluated knowledge | 0.901 | 0.492 | 0.718 |
Figure 3. Average CBM mark (total points divided by the number of questions) versus Accuracy, showing the deviation from the standard curve, which represents random guessing.
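The relationship between average CBM mark and accuracy can be sketched as follows, assuming every answer is given at one fixed certainty level and assuming the Gardner-Medwin mark scheme (certainty 1/2/3 → +1/+2/+3 if correct, 0/−2/−6 if wrong; the study's exact scheme is not stated in this record). The sketch shows how the expected average mark departs from raw accuracy depending on stated certainty.

```python
# Expected average CBM mark as a function of accuracy, for a fixed
# certainty level. ASSUMPTION: Gardner-Medwin mark scheme.

# certainty level -> (mark if correct, mark if wrong)
CBM_SCHEME = {1: (1, 0), 2: (2, -2), 3: (3, -6)}

def expected_avg_mark(accuracy: float, certainty: int) -> float:
    """Expected per-question mark when a fraction `accuracy` of answers
    is correct and all answers carry the same certainty level."""
    reward, penalty = CBM_SCHEME[certainty]
    return accuracy * reward + (1 - accuracy) * penalty

# Expected marks at certainty levels 1, 2, 3 for several accuracies:
for a in (0.5, 0.8, 1.0):
    print(a, [round(expected_avg_mark(a, c), 2) for c in (1, 2, 3)])
```

Below roughly 80% accuracy, the highest certainty level stops paying off relative to the middle one, which illustrates why the average CBM mark curves away from a straight accuracy line.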
Figure 4. (a) Distribution of Accuracy scores. (b) Distribution of Certainty-Based Scores.