Anja J. Boevé, Rob R. Meijer, Casper J. Albers, Yta Beetsma, Roel J. Bosker.
Abstract
The introduction of computer-based testing in high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams yield results similar to paper-based exams, and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance of, and change in acceptance of, computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams, and about a quarter preferred a computer-based exam. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice with and become familiar with this new mode of test administration.
Year: 2015 PMID: 26641632 PMCID: PMC4671535 DOI: 10.1371/journal.pone.0143616
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Studies investigating performance differences between paper-based and computer-based tests with multiple-choice questions.
| Number of multiple-choice questions | Randomized | High-stakes | Effect size (Cohen's d) | Result in favor of |
|---|---|---|---|---|
| 40 | yes | | .685 | paper-based |
| 100 | yes | | .755 | computer-based |
| 20 | yes | | .146 | computer-based |
| 6 | unclear | | .323 | paper-based |
| 30 | unclear | | .185 | computer-based |
| 25 | yes | | not possible | |
| 3 | unclear | | not possible | |
| unclear | yes | | not possible | |
| unclear | unclear | | not possible | |
Table notes: (a) the test counted for 15% of the final grade; (b) 5 MC items, but reported means for the MC test are larger than 5.
Fig 1. Flow chart illustrating response from the initial randomization to the actual outcome.
Evaluations of students' test-taking experience and acceptance of computer-based exams.
| Student acceptance of computer-based exams | |
|---|---|
| Questions | Sub-questions |
| In this computer-based exam | I was able to work in a structured manner |
| | I had a good overview of my progress in the exam |
| | I was able to concentrate well |
| In paper-based exams in general | I am able to work in a structured manner |
| | I have a good overview of my progress in the exam |
| | I am able to concentrate well |
| I prefer a: paper-based exam, computer-based exam, no preference | |
| Did your opinion about computer-based exams change as a result of taking this exam? | |
Mean number of questions correct in the different exam conditions for the midterm and final exam.
| Exam | Computer-based n | Computer-based M (SD) | Paper-based n | Paper-based M (SD) | t (df) | p |
|---|---|---|---|---|---|---|
| Midterm | 126 | 28.56 (5.3) | 157 | 28.50 (4.6) | -0.1 (281) | .92 |
| Final | 157 | 29.92 (4.6) | 126 | 29.50 (4.3) | -0.78 (281) | .44 |
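As a rough check, the t statistics above can be reproduced from the reported summary statistics alone, assuming a pooled-variance independent-samples t-test (the degrees of freedom, 281 = 126 + 157 − 2, are consistent with that assumption; the sign convention below, paper-based minus computer-based, is inferred from the reported values):

```python
import math

def pooled_t(m1, sd1, n1, m2, sd2, n2):
    """Independent-samples t statistic from summary statistics,
    using the pooled-variance estimate."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))             # SE of the mean difference
    return (m2 - m1) / se, df                           # PB minus CB (assumed sign)

# Midterm row: CB n=126, M=28.56, SD=5.3; PB n=157, M=28.50, SD=4.6
t, df = pooled_t(28.56, 5.3, 126, 28.50, 4.6, 157)
print(round(t, 1), df)  # matches the reported -0.1 (281)
```

The final-exam row reproduces similarly to within rounding of the reported means and standard deviations.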
Fig 2. Mean scores and 95% confidence intervals for student approaches to completing the computer-based exam, and paper-based exams in general, for the midterm and final exam.
Mean difference between computer-based and paper-based exam evaluation, with dependent-sample t-test results and effect-size.
| CB − PB mode M (SD) | 95% CI | t (df) | p | Cohen's d |
|---|---|---|---|---|
| -0.9 (1.4) | -1.1, -0.8 | -10.7 (268) | <.001 | 0.64 |
| -0.5 (1.5) | -0.7, -0.4 | -6.2 (269) | <.001 | 0.33 |
| -0.7 (1.5) | -0.9, -0.6 | -8.1 (269) | <.001 | 0.46 |
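The effect sizes in this table are consistent with Cohen's d for dependent samples computed as the mean of the pairwise differences divided by their standard deviation (an assumption; the paper's exact formula is not shown in this record):

```python
def cohens_d_paired(mean_diff, sd_diff):
    """Cohen's d for a dependent-samples comparison: absolute mean of the
    pairwise differences divided by their standard deviation."""
    return abs(mean_diff) / sd_diff

# Rows of the table: (CB - PB mean difference, SD of the differences)
rows = [(-0.9, 1.4), (-0.5, 1.5), (-0.7, 1.5)]
for m, sd in rows:
    print(round(cohens_d_paired(m, sd), 2))
```

The computed values agree with the table's 0.64, 0.33, and 0.46 to within rounding of the reported means and standard deviations.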