Daniel P Walsh, Kadhiresan R Murugappan, Achikam Oren-Grinberg, Vanessa T Wong, John D Mitchell, Robina Matyal.
Abstract
Interactive online learning tools have revolutionized graduate medical education and can impart echocardiographic image interpretation skills. We created self-paced, interactive online training modules using a repository of echocardiography videos of left ventricles with normal function and varying degrees of dysfunction. In this study, we tested the feasibility of this learning tool. Thirteen anesthesia interns took a pre-test and then had 3 weeks to complete the training modules on their own time before taking a post-test. The average score on the post-test (74.6% ± 11.08%) was higher than the average score on the pre-test (57.7% ± 9.27%) (P < 0.001). Scores did not differ between extreme function (severe dysfunction or hyperdynamic function) and non-extreme function (normal function or mild or moderate dysfunction) questions on either the pre-test (P = 0.278) or the post-test (P = 0.093). The interns scored higher on the post-test than the pre-test on both extreme (P = 0.0062) and non-extreme (P = 0.0083) questions. After using an online educational tool that allowed learning at their own time and pace, trainees improved their ability to correctly categorize left ventricular systolic function. Left ventricular systolic function is often a key echocardiographic question that can be difficult to master. The promising performance of this educational resource may lead to more time- and cost-effective methods for improving diagnostic accuracy among learners.
Keywords: anesthesia; echocardiography; education; image interpretation; interactive resource; left ventricle; left ventricular; left ventricular assessment; left ventricular function; left ventricular systolic function; online resource; residents; systolic function
Year: 2020 PMID: 32190341 PMCID: PMC7077518 DOI: 10.1530/ERP-19-0053
Source DB: PubMed Journal: Echo Res Pract ISSN: 2055-0464
Figure 1. Pre-test and post-test scores. (A) Average pre-test and post-test scores. The average score on the pre-test was 57.7% ± 9.27% correct. After using the educational tool, the average score on the post-test was 74.6% ± 11.08% correct, a statistically significant increase from the pre-test (P < 0.001 based on a paired t-test with α = 0.05). In comparison, the average score of experts (70% ± 10.61%) was significantly higher than the interns’ average pre-test score (P = 0.027 based on an unpaired two-sample t-test with α = 0.05) and not significantly different from the interns’ average post-test score (P = 0.435 based on an unpaired two-sample t-test with α = 0.05). On the post-test, all the interns scored within one s.d. of the experts’ average score or higher. (B) Individual pre-test and post-test scores. Scores improved from pre-test to post-test for all but three interns, who scored the same on the post-test as on the pre-test.
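As a rough illustration of what the paired t-test in this caption computes, here is a minimal pure-Python sketch. The sample scores below are invented for demonstration, and a real analysis would use a statistics package (e.g. scipy.stats.ttest_rel) to obtain the P value as well:

```python
import math

def paired_t(pre, post):
    """Paired t-test statistic for matched pre/post scores.

    Computes the mean of the per-subject differences, its standard
    error, and the resulting t statistic with n - 1 degrees of
    freedom. Illustrative only; hypothetical input data.
    """
    assert len(pre) == len(post), "scores must be paired per subject"
    diffs = [b - a for a, b in zip(pre, post)]   # per-subject change
    n = len(diffs)
    mean = sum(diffs) / n
    # sample variance of the differences (n - 1 denominator)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Invented example scores (percent correct), not the study's data:
pre_scores = [57, 50, 64, 57, 43, 71, 57, 64, 50, 57, 71, 57, 50]
post_scores = [79, 71, 79, 64, 57, 86, 71, 79, 64, 79, 86, 71, 64]
t_stat, df = paired_t(pre_scores, post_scores)
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom to obtain the P value reported in the figure.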
Figure 2. Median and interquartile ranges (IQR) for the pre- and post-test based on extreme or non-extreme function. On the pre-test, there was no significant difference between the extreme (median: 57; IQR: 57 to 71) and non-extreme (median: 62; IQR: 54 to 62) questions (P = 0.278 based on a Wilcoxon signed-rank test with α = 0.05). Likewise, on the post-test, there was no significant difference between the extreme (median: 86; IQR: 71 to 86) and non-extreme (median: 69; IQR: 54 to 85) questions (P = 0.093 based on a Wilcoxon signed-rank test with α = 0.05). The interns scored significantly higher on the post-test (median: 86; IQR: 71 to 86) than the pre-test (median: 57; IQR: 57 to 71) on the extreme questions (P = 0.0062 based on a Wilcoxon signed-rank test with α = 0.05). They also scored significantly higher on the post-test (median: 69; IQR: 54 to 85) than the pre-test (median: 62; IQR: 54 to 62) on the non-extreme questions (P = 0.0083 based on a Wilcoxon signed-rank test with α = 0.05). The medians are indicated by the red lines in the figure.
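The Wilcoxon signed-rank test used throughout this caption ranks the absolute per-subject differences and compares the rank sums of positive and negative differences. A minimal pure-Python sketch of the test statistic W (zero differences dropped, ties given average ranks) might look like the following; a real analysis would use a statistics package such as scipy.stats.wilcoxon, which also supplies the P value:

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic W for paired samples x, y.

    Drops zero differences, ranks the remaining |differences| with
    tied values sharing the average rank, and returns the smaller of
    the positive and negative rank sums. Illustrative sketch only.
    """
    diffs = [b - a for a, b in zip(x, y) if b != a]  # drop zero diffs
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        # extend j over the run of tied absolute differences
        j = i
        while (j + 1 < len(order)
               and abs(diffs[order[j + 1]]) == abs(diffs[order[i]])):
            j += 1
        avg_rank = (i + j) / 2 + 1   # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

W would then be compared against the Wilcoxon signed-rank distribution for the number of non-zero pairs to obtain the P values reported above.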