Anthony R. Artino Jr, Andrew W. Phillips, Amol Utrankar, Andrew Q. Ta, Steven J. Durning. A.R. Artino Jr is professor of medicine and deputy director of graduate programs in health professions education, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland; ORCID: http://orcid.org/0000-0003-2661-7853. A.W. Phillips is adjunct clinical professor of emergency medicine, Department of Emergency Medicine, University of North Carolina, Chapel Hill, North Carolina. A. Utrankar is a fourth-year medical student, Vanderbilt University School of Medicine, Nashville, Tennessee. A.Q. Ta is a second-year medical student, University of Illinois College of Medicine, Chicago, Illinois. S.J. Durning is professor of medicine and pathology and director of graduate programs in health professions education, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland.
Abstract
PURPOSE: Surveys are widely used in health professions education (HPE) research, yet little is known about the quality of the instruments employed. Poorly designed survey tools containing unclear or poorly formatted items can be difficult for respondents to interpret and answer, yielding low-quality data. This study assessed the quality of published survey instruments in HPE. METHOD: In 2017, the authors performed an analysis of HPE research articles published in three high-impact journals in 2013. They included articles that employed at least one self-administered survey. They designed a coding rubric addressing five violations of established best practices for survey item design and used it to collect descriptive data on the validity and reliability evidence reported and to assess the quality of available survey items. RESULTS: Thirty-six articles met inclusion criteria and included the instrument for coding; one article used two surveys, yielding 37 unique surveys. Authors reported validity evidence for 13 (35.1%) surveys and reliability evidence for 8 (21.6%). The item-quality assessment revealed that a substantial proportion of published survey instruments violated established best practices in the design and visual layout of Likert-type rating items. Overall, 35 (94.6%) of the 37 survey instruments analyzed contained at least one violation of best practices. CONCLUSIONS: The majority of articles failed to report validity and reliability evidence, and a substantial proportion of the survey instruments violated established best practices in survey design. The authors suggest areas of future inquiry and provide several improvement recommendations for HPE researchers, reviewers, and journal editors.