Mamta K Singh, Greg Ogrinc, Karen R Cox, Mary Dolansky, Julie Brandt, Laura J Morrison, Beth Harwood, Greg Petroski, Al West, Linda A Headrick. Dr. Singh is associate professor of medicine, Division of General Medicine, Louis Stokes Veterans Affairs Medical Center, Case Western Reserve University, Cleveland, Ohio. Dr. Ogrinc is associate professor of community and family medicine and of medicine, VA Medical Center, White River Junction, Vermont, and Geisel School of Medicine, Hanover, New Hampshire. Dr. Cox is manager, Quality Improvement, Office of Clinical Effectiveness, University of Missouri Health Care, Columbia, Missouri. Dr. Dolansky is associate professor, Frances Payne Bolton School of Nursing, Case Western Reserve University, Cleveland, Ohio. Dr. Brandt is associate director of quality improvement, School of Medicine, University of Missouri, Columbia, Missouri. Dr. Morrison is currently director of palliative medicine education, Department of Medicine, Yale University School of Medicine, New Haven, Connecticut, but was at Baylor College of Medicine in the Division of Geriatrics at the time of this study. Ms. Harwood is research associate, Geisel School of Medicine, Hanover, New Hampshire. Dr. Petroski is assistant professor of biostatistics, School of Medicine, University of Missouri, Columbia, Missouri. Dr. West is biostatistician, Department of Veterans Affairs, VA Medical Center, White River Junction, Vermont. Dr. Headrick is senior associate dean for education and professor of medicine, School of Medicine, University of Missouri, Columbia, Missouri.
Abstract
PURPOSE: Quality improvement (QI) has been part of medical education for over a decade. Assessment of QI learning remains challenging. The Quality Improvement Knowledge Application Tool (QIKAT), developed a decade ago, is widely used despite its subjective nature and inconsistent reliability. From 2009 to 2012, the authors developed and assessed the validity of a revised QIKAT, the "QIKAT-R." METHOD: Phase 1: Using an iterative, consensus-building process, a national group of QI educators developed a scoring rubric with defined language and elements. Phase 2: Five scorers pilot tested the QIKAT-R to assess validity and inter- and intrarater reliability using responses to four scenarios, each with three different levels of response quality: "excellent," "fair," and "poor." Phase 3: Eighteen scorers from three countries used the QIKAT-R to assess the same sets of student responses. RESULTS: Phase 1: The QI educators developed a nine-point scale that uses dichotomous answers (yes/no) for each of three QIKAT-R subsections: Aim, Measure, and Change. Phase 2: The QIKAT-R showed strong discrimination between "poor" and "excellent" responses, and the intra- and interrater reliability were strong. Phase 3: The discriminative validity of the instrument remained strong between excellent and poor responses. The intraclass correlation was 0.66 for the total nine-point scale. CONCLUSIONS: The QIKAT-R is a user-friendly instrument that maintains the content and construct validity of the original QIKAT but provides greatly improved interrater reliability. The clarity within the key subsections aligns the assessment closely with QI knowledge application for students and residents.