Pedro Tadao Hamamoto Filho, Pedro Luiz Toledo de Arruda Lourenção, Joélcio Francisco Abbade, Dario Cecílio-Fernandes, Jacqueline Teixeira Caramori, Angélica Maria Bicudo.
Abstract
Several methods have been proposed for analyzing differences between test scores, such as using mean scores, cumulative deviation, and mixed-effect models. Here, we explore the pooled analysis of retested Progress Test items to monitor the performance of first-year medical students who were exposed to a new curriculum design. This was a cross-sectional study of students in their first year of a medical program who participated in the annual interinstitutional Progress Tests from 2013 to 2019. We analyzed the performance of first-year students on the 2019 test and compared it with that of first-year students who took the tests from 2013 to 2018 and encountered the same items. For each item, we calculated odds ratios (OR) with 95% confidence intervals (CI); we then performed fixed-effect meta-analyses for each content area in the pooled analysis. In all, we used 63 items, which were divided into basic sciences, internal medicine, pediatrics, surgery, obstetrics and gynecology, and public health. Significant differences were found between groups in basic sciences (OR = 1.172 [95% CI 1.005–1.366], p = 0.043) and public health (OR = 1.54 [95% CI 1.25–1.897], p < 0.001), which may reflect the characteristics of the new curriculum. Thus, pooled analysis of pretested items may provide indicators of performance differences. This method may complement the analysis of score differences on benchmark assessments.
Year: 2021 PMID: 34506599 PMCID: PMC8432842 DOI: 10.1371/journal.pone.0257293
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
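The per-item comparison and pooling described in the abstract can be sketched in Python. This is a minimal illustration, not the authors' analysis code: each item is summarized as a 2×2 table of correct/incorrect counts for the new-curriculum and old-curriculum cohorts (the counts below are hypothetical), the odds ratio gets a Wald 95% CI on the log scale, and items within a content area are combined with inverse-variance fixed-effect weighting.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = correct (new curriculum),  b = incorrect (new curriculum),
    c = correct (old curriculum),  d = incorrect (old curriculum)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

def fixed_effect_pool(tables, z=1.96):
    """Inverse-variance fixed-effect pooling of per-item log odds ratios.
    `tables` is a list of (a, b, c, d) tuples, one per retested item."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of log(OR)
        num += log_or / var                   # weight = 1 / variance
        den += 1 / var
    pooled = num / den
    se = math.sqrt(1 / den)
    return math.exp(pooled), math.exp(pooled - z * se), math.exp(pooled + z * se)

# Hypothetical counts for two retested items in one content area:
items = [(70, 20, 60, 30), (55, 35, 50, 40)]
print(fixed_effect_pool(items))  # pooled OR with 95% CI
```

An OR above 1 with a CI excluding 1 would correspond to the significantly better new-curriculum performance reported for basic sciences and public health.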
Distribution of the number of students and number of items previously tested.

| | 2013 | 2014 | 2016 | 2017 | 2018 | Total |
|---|---|---|---|---|---|---|
| Number of students | 88 | 96 | 90 | 95 | 90 | |
| Number of items | 2 | 4 | 20 | 16 | 21 | 63 |
| Basic sciences | 2 | 4 | 4 | 4 | 3 | 17 |
| Internal medicine | 0 | 0 | 4 | 1 | 4 | 9 |
| Pediatrics | 0 | 0 | 1 | 2 | 5 | 8 |
| Surgery | 0 | 0 | 2 | 5 | 4 | 11 |
| Obstetrics & gynecology | 0 | 0 | 3 | 3 | 3 | 9 |
| Public health | 0 | 0 | 6 | 1 | 2 | 9 |
Fig 1. Forest plots of the pooled analysis according to the exam's different content areas.
The vertical axis represents the item number on the 2019 exam. Each point represents the OR, with horizontal bars marking its 95% CI. Points to the left of the vertical line at 1 indicate that students exposed to the old curriculum performed better; points to the right indicate better performance by students under the new curriculum. Significant differences were found in basic sciences and public health.
I² statistics for heterogeneity evaluation in the six content areas of the exam.
| Content Area | I² value (%) | 95% CI | p value |
|---|---|---|---|
| Basic sciences | 61.79 | 35.33–77.42 | 0.0004 |
| Internal medicine | 55.90 | 7.00–79.09 | 0.0202 |
| Pediatrics | 47.44 | 0.00–76.63 | 0.0647 |
| Surgery | 64.92 | 33.25–81.56 | 0.0015 |
| Obstetrics & gynecology | 65.73 | 30.36–83.14 | 0.0029 |
| Public health | 80.26 | 63.31–89.38 | < 0.0001 |
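The I² values above quantify between-item heterogeneity within each content area. A short sketch of the standard calculation (assuming the usual Cochran's Q route; this is an illustration, not the authors' pipeline): Q is the weighted sum of squared deviations of per-item log odds ratios from the pooled estimate, and I² = max(0, (Q − df) / Q) × 100.

```python
def i_squared(log_ors, variances):
    """Cochran's Q and I^2 (%) for per-item log odds ratios
    with known sampling variances (inverse-variance weights)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
    df = len(log_ors) - 1
    # I^2 is the share of total variability attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

Identical per-item effects give I² = 0; the high I² seen for public health (80.26%) means most of the observed variability across items exceeds what sampling error alone would produce.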