Timothy J Cleary, Abigail Konopasky, Jeffrey S La Rochelle, Brian E Neubauer, Steven J Durning, Anthony R Artino.
Abstract
To be safe and effective practitioners and learners, medical professionals must be able to accurately assess their own performance to know when they need additional help. This study explored the metacognitive judgments of 157 first-year medical students; in particular, the study examined students' self-assessments, or calibration, as they engaged in a virtual-patient simulation targeting clinical reasoning practices. Examining two key subtasks of a patient encounter, history (Hx) and physical exam (PE), the authors assessed the level of variation in students' behavioral performance (i.e., effectiveness and efficiency) and judgments of performance (i.e., calibration bias and accuracy) across the two subtasks. Paired t tests revealed that the Hx subtask was more challenging than the PE subtask in terms of both actual and perceived performance: students not only performed worse on the Hx subtask than on the PE subtask, they also perceived their Hx performance to be poorer. Interestingly, across both subtasks, the majority of participants overestimated their performance (98% of participants for Hx and 95% for PE). Correlation analyses revealed that participants' overall level of accuracy in metacognitive judgments was moderately stable across the Hx and PE subtasks. Taken together, the findings underscore the importance of assessing medical students' metacognitive judgments at different points during a clinical encounter.
Keywords: Calibration; Clinical reasoning; Metacognition; Microanalytic assessment; Self-assessment; Self-regulated learning
Year: 2019 PMID: 31098845 PMCID: PMC6775028 DOI: 10.1007/s10459-019-09897-2
Source DB: PubMed Journal: Adv Health Sci Educ Theory Pract ISSN: 1382-4996 Impact factor: 3.853
Summary of descriptive statistics for Hx and PE performance, post-diction, and calibration scores
| Clinical task | Effectiveness M % (SD) | Efficiency M % (SD) | Post-diction M % (SD) | Bias M % (SD) | Accuracy M % (SD) |
|---|---|---|---|---|---|
| Patient history (Hx) | 43.07 (17.36) | 17.24 (7.03) | 63.23 (21.17) | 45.99 (20.43) | 53.95 (20.29) |
| Physical exam (PE) | 71.63 (17.06) | 34.19 (11.78) | 69.83 (20.14) | 35.64 (20.53) | 62.72 (17.71) |
| Difference (Hx − PE) | − 28.56* (18.03) | − 16.95* (11.15) | − 6.60* (19.71) | 10.35* (22.01) | − 8.78* (20.76) |
| Cohen’s d | 1.57 | 2.21 | .33 | .48 | .40 |
Positive bias scores indicate overestimation; higher accuracy scores indicate greater levels of accuracy
*p < .05
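The abstract reports bias and accuracy scores but does not spell out their formulas. In the calibration literature these are commonly computed as the signed difference between a post-diction judgment and actual performance (bias, positive = overestimation) and as 100 minus the absolute judgment–performance discrepancy (accuracy, higher = more accurate). The sketch below uses those conventional definitions as an assumption; the study's exact item-level scoring may differ, and the input values are illustrative only, not taken from the study's raw data.

```python
# Hedged sketch of conventional calibration metrics (an assumption --
# the paper's exact scoring procedure is not given in the abstract).

def calibration_bias(judgment: float, performance: float) -> float:
    """Signed judgment-performance gap on a 0-100 scale.
    Positive values indicate overestimation (judgment > performance)."""
    return judgment - performance

def calibration_accuracy(judgment: float, performance: float) -> float:
    """0-100 scale; higher scores indicate a closer match between
    judged and actual performance."""
    return 100.0 - abs(judgment - performance)

# Illustrative values (hypothetical, not the study's data):
# a student judges their history-taking at 63% but scores 43%.
bias = calibration_bias(judgment=63.0, performance=43.0)       # 20.0 -> overestimation
accuracy = calibration_accuracy(judgment=63.0, performance=43.0)  # 80.0
```

Under these definitions, a student who overestimates by 20 percentage points receives a bias of +20 and an accuracy of 80; a perfectly calibrated student would receive a bias of 0 and an accuracy of 100.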
Correlations among calibration bias and accuracy scores
| | Bias: history | Bias: physical exam | Accuracy: history |
|---|---|---|---|
| Bias: history | – | | |
| Bias: physical exam | .18* | – | |
| Accuracy: history | − .31* | − .22* | – |
| Accuracy: physical exam | .08 | − .28* | .41* |
Bias scores were based on a dichotomous dummy variable: 0 (under-estimators), 1 (over-estimators)
Accuracy scores were based on a scale ranging from 0 to 100, with higher scores indicating greater levels of accuracy
*p < .05