Katherine Hiller1, Julianna Jung2, Luan Lawson3, Rebecca Riddell4, Doug Franzen5. 1. Department of Emergency Medicine, University of Arizona, Tucson, AZ, USA. 2. Department of Emergency Medicine, Johns Hopkins University, Baltimore, MD, USA. 3. Department of Emergency Medicine, East Carolina University, Greenville, NC, USA. 4. Office of Assessment and Evaluation, Johns Hopkins University, Baltimore, MD, USA. 5. Department of Emergency Medicine, University of Washington, Seattle, WA, USA.
Abstract
OBJECTIVES: Uniformly training physicians to provide safe, high-quality care requires reliable assessment tools to ensure learner competency. The consensus-derived National Clinical Assessment Tool in Emergency Medicine (NCAT-EM) has been adopted by clerkships across the country. Analysis of large-scale deidentified data from a consortium of users is reported. METHODS: Thirteen sites entered data into a Web-based platform resulting in over 6,400 discrete NCAT-EM assessments from 748 students and 704 assessors. Reliability, internal consistency analysis, and factorial analysis of variance for hypothesis generation were performed. RESULTS: All categories on the NCAT-EM rating scales and professionalism subdomains were used. Clinical rating scale and global assessment scores were positively skewed, similar to other assessments commonly used in emergency medicine (EM). Professionalism lapses were noted in <1% of assessments. Cronbach's alpha was >0.8 for each site; however, interinstitutional variability was significant. M4 students scored higher than M3 students, and EM-bound students scored higher than non-EM-bound students. There were site-specific differences based on number of prior EM rotations, but no overall association. There were differences in scores based on assessor faculty rank and resident training year, but not by years in practice. There were site-specific differences based on student sex, but overall no difference. CONCLUSIONS: To our knowledge, this is the first large-scale multi-institutional implementation of a single clinical assessment tool. This study demonstrates the feasibility of a unified approach to clinical assessment across multiple diverse sites. Challenges remain in determining appropriate score distributions and improving consistency in scoring between sites.
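The internal-consistency result reported above (Cronbach's alpha >0.8 at each site) can be illustrated with a minimal sketch. This is not the authors' analysis code; the rating data and item structure below are hypothetical, and the function simply applies the standard Cronbach's alpha formula (using population variances) to per-item score lists.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of per-item score lists, all the same length
    (one score per completed assessment).
    """
    k = len(items)  # number of rating-scale items
    # total score per assessment, summed across items
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 1-5 ratings: 3 NCAT-EM items scored on 6 assessments.
ratings = [
    [4, 5, 3, 4, 5, 4],
    [4, 4, 3, 5, 5, 4],
    [5, 5, 3, 4, 4, 4],
]
alpha = cronbach_alpha(ratings)
print(round(alpha, 2))  # prints 0.77
```

With real NCAT-EM data, the per-site alphas would be computed this way over each site's full set of rating-scale items, with values above 0.8 indicating that the items measure a coherent underlying construct.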