Joanna C Thorn, Sian M Noble, William Hollingworth. MRC ConDuCT Hub, School of Social and Community Medicine, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK. joanna.thorn@bristol.ac.uk
Abstract
BACKGROUND AND AIMS: Little is known about the extent and nature of publication bias in economic evaluations. Our objective was to determine whether economic evaluations are subject to publication bias by considering whether economic data are as likely to be reported, and reported as promptly, as effectiveness data. METHODS: Trials that intended to conduct an economic analysis and ended before 2008 were identified in the International Standard Randomised Controlled Trial Number (ISRCTN) register; a random sample of 100 trials was retrieved. Fifty comparator trials were randomly drawn from those not identified as intending to conduct an economic study. The trial start and end dates, estimated sample size and funder type were extracted. For trials planning economic evaluations, effectiveness and economic publications were sought; publication dates and journal impact factors were extracted. Effectiveness abstracts were assessed for whether they reached a firm conclusion that one intervention was most effective. Primary investigators were contacted about reasons for non-publication of results, or reasons for differential publication strategies for effectiveness and economic results. RESULTS: Trials planning an economic study were more likely to be funded by government (p = 0.01) and were larger (p = 0.003) than other trials. The trials planning an economic evaluation had a mean of 6.5 (range 2.7-13.2) years since the trial end in which to publish their results. Effectiveness results were reported by 70 % of trials, while only 43 % published economic evaluations (p < 0.001). Reasons for non-publication of economic results included the intervention being ineffective, and staffing issues. Funding source, time since trial end and length of study were not associated with a higher probability of publishing the economic evaluation. However, studies that were small or of unknown size were significantly less likely to publish economic evaluations than large studies (p < 0.001).
The authors' confidence in labelling one intervention clearly most effective did not affect the probability of publication. Where both were published, the mean time to publication was 0.7 years longer for cost-effectiveness data than for effectiveness data (p = 0.001). The median journal impact factor was 1.6 points higher for effectiveness publications than for the corresponding economic publications (p = 0.01). Reasons for publishing in different journals included editorial decision-making and the additional time that economic evaluation takes to conduct. CONCLUSIONS: Trials that intend to conduct an economic analysis are less likely to report economic data than effectiveness data. Where economic results do appear, they are published later, and in journals with lower impact factors. These results suggest that economic output may be more susceptible than effectiveness data to publication bias. Funders, grant reviewers and trialists themselves should ensure economic evaluations are prioritized and adequately staffed to avoid potential problems with bias.