F Reed Johnson1, Jui-Chen Yang2, Shelby D Reed2. 1. Preference Evaluation Research (PrefER) Group, Duke Clinical Research Institute, Duke University, Durham, NC, USA. Electronic address: reed.johnson@duke.edu. 2. Preference Evaluation Research (PrefER) Group, Duke Clinical Research Institute, Duke University, Durham, NC, USA.
Abstract
OBJECTIVES: To develop a tool for testing the internal validity of discrete choice experiment (DCE) data, deploy the program, and collect summary test results from a sample of active health researchers to demonstrate the tool's practical utility across a wide range of health applications.
METHODS: A previously developed Gauss program had been in use for testing internal validity. The program was translated to MATLAB, adapted, compiled, and deployed. Sixty-seven authors who had coauthored one or more published DCE studies between 2013 and 2016 were contacted by email; provided access to the tool, instructions, and an example data file; and invited to submit test summaries for tabulation.
RESULTS: Twenty-one researchers from 10 countries contributed test results from a total of 55 DCE data sets. Fifty-one studies included at least two of a possible six tests. Attribute dominance was the most common test, and stability had the highest failure incidence. Only three summaries included a transitivity test, and no failures were detected.
CONCLUSIONS: It was possible to evaluate multiple internal-validity checks for most data sets, even when the experimental design did not explicitly include tests. Nevertheless, internal validity is rarely reported. Free availability of the tool for testing data quality could improve reporting and encourage more careful design of DCE studies, helping researchers validate and interpret stated preference data.
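Two of the internal-validity checks named in the abstract, attribute dominance and choice stability, can be illustrated with a minimal sketch. This is not the authors' MATLAB tool; the data layout (attribute levels where higher is better, a two-alternative task, and a repeated task) is a hypothetical simplification for illustration only.

```python
def fails_dominance(levels_a, levels_b, chose_a):
    """Flag a dominance failure: alternative A is at least as good as B
    on every attribute and strictly better on at least one (assuming
    higher levels are better), yet the respondent did not choose A."""
    at_least_as_good = all(a >= b for a, b in zip(levels_a, levels_b))
    strictly_better = any(a > b for a, b in zip(levels_a, levels_b))
    return at_least_as_good and strictly_better and not chose_a


def fails_stability(choice_first, choice_repeat):
    """Flag a stability failure: the same choice task, shown twice,
    elicits different answers."""
    return choice_first != choice_repeat


# Hypothetical example: A dominates B on every attribute,
# but the respondent chose B, so the dominance test fails.
print(fails_dominance([3, 2, 5], [3, 1, 4], chose_a=False))  # True

# Same answer on the repeated task, so the stability test passes.
print(fails_stability("A", "A"))  # False
```

A transitivity test would extend this idea across three pairwise tasks (A over B, B over C should imply A over C), which is why it requires the experimental design to include the right combination of tasks and, per the abstract, was reported in only three summaries.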
Authors: Sebastian Heidenreich; Andrea Phillips-Beyer; Bruno Flamion; Melissa Ross; Jaein Seo; Kevin Marsh Journal: Patient Date: 2020-11-11 Impact factor: 3.883
Authors: Hannah B Lewis; Melanie Schroeder; Necdet B Gunsoy; Ellen M Janssen; Samuel Llewellyn; Helen A Doll; Paul W Jones; Afisi S Ismaila Journal: Int J Chron Obstruct Pulmon Dis Date: 2020-03-18
Authors: T Wilson; P Javaheri; J Finlay; G Hazlewood; S B Wilton; T Sajobi; A Levin; W Pearson; C Connolly; M T James Journal: Can J Kidney Health Dis Date: 2021-01-27
Authors: Glen S Hazlewood; Gyanendra Pokharel; Robert Deardon; Deborah A Marshall; Claire Bombardier; George Tomlinson; Christopher Ma; Cynthia H Seow; Remo Panaccione; Gilaad G Kaplan Journal: PLoS One Date: 2020-01-16 Impact factor: 3.240