Brittany N Hasty1, James N Lau, Ara Tekian, Sarah E Miller, Edward S Shipper, Sylvia Bereknyei Merrell, Edmund W Lee, Yoon Soo Park. 1. B.N. Hasty was a surgical education fellow, Department of Surgery, Stanford University School of Medicine, Stanford, California, at the time of writing. She is currently a resident in general surgery, Loyola University Medical Center, Maywood, Illinois. J.N. Lau is professor of surgery, Department of Surgery, Stanford University School of Medicine, Stanford, California. A. Tekian is professor and associate dean for international affairs, Department of Medical Education, University of Illinois, Chicago, Chicago, Illinois; ORCID: http://orcid.org/0000-0002-9252-1588. S.E. Miller was a fourth-year medical student, Stanford University School of Medicine, Stanford, California, at the time of writing. She is currently a resident in obstetrics and gynecology, Stanford University School of Medicine, Stanford, California. E.S. Shipper was a general surgery resident, University of Texas Health Science Center at San Antonio, San Antonio, Texas, at the time of writing. He is currently a research fellow, National Trauma Institute, San Antonio, Texas. S. Bereknyei Merrell is director of research, Goodman Surgical Education Center, and research scholar, Stanford-Surgery Policy Improvement Research & Education Center (S-SPIRE), Stanford University School of Medicine, Stanford, California. E.W. Lee was a surgical education fellow, Department of Surgery, Stanford University School of Medicine, Stanford, California. He is currently a resident in general surgery, Inova Fairfax Medical Campus, Falls Church, Virginia. Y.S. Park is associate professor and associate head, Department of Medical Education, University of Illinois, Chicago, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335.
Abstract
PURPOSE: To examine the validity evidence for a scrub training knowledge assessment tool to demonstrate the utility and robustness of a multimodal, entrustable professional activity (EPA)-aligned, mastery learning scrub training curriculum. METHOD: Validity evidence was collected for the knowledge assessment used in the scrub training curriculum at Stanford University School of Medicine from April 2017 to June 2018. The knowledge assessment had 25 selected-response items that mapped to curricular objectives, EPAs, and operating room policies. A mastery passing standard was established using the Mastery Angoff and Patient-Safety approaches. Learners were assessed precurriculum, postcurriculum, and 6 months after the curriculum. RESULTS: From April 2017 to June 2018, 220 medical and physician assistant students participated in the scrub training curriculum. The mean pre- and postcurriculum knowledge scores were 74.4% (standard deviation [SD] = 15.6) and 90.1% (SD = 8.3), respectively, yielding a Cohen's d = 1.10, P < .001. The internal reliability of the assessment was 0.71. Students with previous scrub training performed significantly better on the precurriculum knowledge assessment than those without previous training (81.9% [SD = 12.6] vs 67.0% [SD = 14.9]; P < .001). The mean item difficulty was 0.74, and the mean item discrimination index was 0.35. The Mastery Angoff overall cut score was 92.0%. CONCLUSIONS: This study describes the administration of and provides validity evidence for a knowledge assessment tool for a multimodal, EPA-aligned, mastery-based curriculum for scrub training. The authors support the use of scores derived from this test for assessing scrub training knowledge among medical and physician assistant students.