
Biased under-reporting of research reflects biased under-submission more than biased editorial rejection.

Iain Chalmers, Kay Dickersin

Abstract

Stephen Senn challenges Ben Goldacre's assertion in 'Bad Pharma' that biased editorial acceptance of reports with 'positive' findings is not a cause of biased under-reporting of research. We agree with Senn that biased editorial decisions may contribute to reporting bias, but Senn ignores the evidence that biased decisions by researchers to submit reports for possible publication are the main causes of the problem.


Year:  2013        PMID: 24358860      PMCID: PMC3782352          DOI: 10.12688/f1000research.2-1.v1

Source DB:  PubMed          Journal:  F1000Res        ISSN: 2046-1402


Stephen Senn challenges Ben Goldacre's assertion in 'Bad Pharma' [1] that biased editorial acceptance of reports with 'positive' findings is not a cause of biased under-reporting of research, and concludes that "the prospects for disentangling cause and effect when it comes to publication bias are not great" [2]. Senn apparently overlooks the studies – including controlled experiments – that have investigated reporting biases. These are summarised in an article [3] from which the following is an excerpt:

"Reporting bias can be due to researchers and sponsors failing to submit study findings for publication, or to journal editors and others rejecting reports for publication. Numerous surveys of investigators have left little doubt that almost all failure to publish is due to the failure of investigators to submit reports for publication [4, 5], with only a small proportion of studies remaining unpublished because of rejection by journals [6], although positive-outcome bias has been demonstrated among peer reviewers [7]. Qualitative studies of editorial discussion indicate that a study's scientific rigour is the area of greatest concern [8]. Researchers report that the reason they do not write up and submit reports of their research for publication is usually because they are "not interested" in the results ("editorial rejection by journals" is only rarely given as a cause of failure to publish). Even investigators who have initially published their results as (conference) abstracts are less likely to submit their findings for full publication unless the results are 'significant' [9]. Investigations of biased reporting of research began with surveys of journal articles, which revealed improbably high proportions of published studies showing statistically significant differences [10–14]. Subsequent surveys of authors and peer reviewers showed that research that had yielded 'negative' results was less likely than other research to be submitted or recommended for publication [15–18]. These findings have been reinforced by the results of experimental studies, which showed that studies with no reported statistically significant differences were less likely to be accepted for publication [7, 19–21]."

Senn's use of the term 'publication bias' in his commentary suggests that he is restricting it to editorial bias whereas, as indicated above, reporting bias is largely due to researchers' decisions not to submit, not editorial decisions not to accept. The analyses of observational data cited by Ben Goldacre in his book 'Bad Pharma' [1] do not detect editorial bias, but neither do they support a confident conclusion that no editorial bias exists. However, we believe Goldacre is correct to castigate researchers and research sponsors as being more culpable than editors in betraying their responsibility to the patients who have participated in trials.

The controlled experiments suggest that it is the results of studies, not their quality, that predispose them to editorial bias. Senn believes that any editorial bias that exists can be 'very plausibly explained' by preferential publication of 'positive' studies, and that it "seems plausible that higher quality studies are more likely to lead to a positive result". Unless he is using the word 'positive' to mean something other than 'a beneficial effect', however, Senn appears to be overlooking substantial evidence challenging the plausibility of his belief (see, for example, reference [22]). Given the estimated likelihood of new treatments proving superior to standard treatments [23], it surprises us that, "as a statistician", Senn would find this evidence "unpalatable".
Referee reports

I would just add one anecdotal observation, concerning second studies that replicate the findings of a study already published in a journal. An editor may turn down the second study because 'nothing new' is being said, although most would argue that replication is important.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

The authors comment on an article by Stephen Senn, who questions Ben Goldacre's assertion in the book "Bad Pharma" that the editorial process is not the main cause of publication bias. They present a large amount of evidence from the literature that researchers are the main cause of publication bias, through selectively submitting papers for publication. They provide a lot of convincing information in this short reaction.

However, some sentences are very difficult to read, especially for readers who have not read the book by Goldacre, the comment by Senn, and some of the other references. I had to reread the first sentence about five times before I understood it; it is especially difficult to read because it contains a double negation. Splitting it into the statement of Ben Goldacre and the comment of Stephen Senn may help. The last sentence of the comment is also difficult to understand, especially when the reader is unaware of the conclusion of reference 23. The second part of the citation, "the prospects for disentangling cause and effect when it comes to publication bias are not great", is difficult to understand and, as far as I can see, does not come back in the comment. Consider whether that part can be omitted, or refer to it again at the end of the comment. The last section starts with 'The controlled experiments'; it is not clear to which experiments this refers – to the 'studies – including controlled experiments' mentioned in the first section?

In conclusion, this is a very important and informative comment. However, the readability should be improved to make it easier to understand for readers who have not read all the previous papers.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
References (11 in total; 10 shown)

1.  Publication bias in editorial decision making.

Authors:  Carin M Olson; Drummond Rennie; Deborah Cook; Kay Dickersin; Annette Flanagin; Joseph W Hogan; Qi Zhu; Jennifer Reiling; Brian Pace
Journal:  JAMA       Date:  2002-06-05       Impact factor: 56.272

2.  Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO.

Authors:  Kay Dickersin; Iain Chalmers
Journal:  J R Soc Med       Date:  2011-12       Impact factor: 5.344

3.  Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial.

Authors:  Gwendolyn B Emerson; Winston J Warme; Fredric M Wolf; James D Heckman; Richard A Brand; Seth S Leopold
Journal:  Arch Intern Med       Date:  2010-11-22

4.  Full publication of results initially presented in abstracts. (Review)

Authors:  R W Scherer; P Langenberg; E von Elm
Journal:  Cochrane Database Syst Rev       Date:  2007-04-18

5.  Dissemination and publication of research findings: an updated review of related biases. (Review)

Authors:  F Song; S Parekh; L Hooper; Y K Loke; J Ryder; A J Sutton; C Hing; C S Kwok; C Pang; I Harvey
Journal:  Health Technol Assess       Date:  2010-02       Impact factor: 4.014

6.  Publication bias and clinical trials.

Authors:  K Dickersin; S Chan; T C Chalmers; H S Sacks; H Smith
Journal:  Control Clin Trials       Date:  1987-12

7.  Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials.

Authors:  Jelena Savović; Hayley E Jones; Douglas G Altman; Ross J Harris; Peter Jüni; Julie Pildal; Bodil Als-Nielsen; Ethan M Balk; Christian Gluud; Lise Lotte Gluud; John P A Ioannidis; Kenneth F Schulz; Rebecca Beynon; Nicky J Welton; Lesley Wood; David Moher; Jonathan J Deeks; Jonathan A C Sterne
Journal:  Ann Intern Med       Date:  2012-09-18       Impact factor: 25.391

8.  What do the JAMA editors say when they discuss manuscripts that they are considering for publication? Developing a schema for classifying the content of editorial discussion.

Authors:  Kay Dickersin; Elizabeth Ssemanda; Catherine Mansell; Drummond Rennie
Journal:  BMC Med Res Methodol       Date:  2007-09-25       Impact factor: 4.615

9.  Misunderstanding publication bias: editors are not blameless after all.

Authors:  Stephen Senn
Journal:  F1000Res       Date:  2012-12-04

10.  New treatments compared to established treatments in randomized trials.

Authors:  Benjamin Djulbegovic; Ambuj Kumar; Paul P Glasziou; Rafael Perera; Tea Reljic; Louise Dent; James Raftery; Marit Johansen; Gian Luca Di Tanna; Branko Miladinovic; Heloisa P Soares; Gunn E Vist; Iain Chalmers
Journal:  Cochrane Database Syst Rev       Date:  2012-10-17
