Abstract
In which journal a scientist publishes is considered one of the most crucial factors determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals. However, data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with the rank of the journal. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, the reliability of published research in several fields may decrease with increasing journal rank. The data supporting these conclusions circumvent confounding factors such as the increased readership and scrutiny these journals attract, focusing instead on quantifiable indicators of methodological soundness in the published literature and relying, in part, on semi-automated data extraction from often thousands of publications at a time. As this evidence accumulated over the last decade, so did the realization that the very existence of scholarly journals, due to their inherent hierarchy, constitutes one of the major threats to publicly funded science: hiring, promoting and funding scientists who publish unreliable science eventually erodes public trust in science.
Keywords: journal ranking; journals; reliability; reproducibility of results; science policy
Year: 2018 PMID: 29515380 PMCID: PMC5826185 DOI: 10.3389/fnhum.2018.00037
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Figure 1. Ranking journals according to crystallographic quality reveals high-ranking journals with the lowest-quality work. The quality metric (y-axis) is computed as a deviation from perfect; hence, lower values denote higher-quality work. Each dot denotes a single structure. The quality metric was normalized to the sample average, and journals were ranked according to their mean quality. Asterisks denote a significant difference from the sample average. Figure courtesy of Dr. Ramaswamy; methods in Brown and Ramaswamy (2007).
Figure 2. Relationship between impact factor (IF) and the extent to which an individual study overestimates the likely true effect. Data represent 81 studies of the association of various candidate genes with psychiatric traits. The bias score (y-axis) is the effect size of the individual study divided by the pooled effect size estimate from meta-analysis, on a log scale; a value greater than zero therefore indicates that the study overestimated the likely true effect size. This is plotted against the IF of the journal in which the study was published (x-axis). The size of each circle is proportional to the sample size of the individual study. Bias score correlates significantly positively with IF, and significantly negatively with sample size. Figure from Munafò et al. (2009).
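The bias score in Figure 2 reduces to a simple computation. Below is a minimal sketch in Python, with hypothetical effect sizes and IF values standing in for the actual data of Munafò et al. (2009), and with a Spearman rank correlation chosen purely for illustration rather than reflecting the authors' exact analysis:

```python
import numpy as np
from scipy import stats

# Hypothetical per-study effect sizes, the pooled meta-analytic estimate
# for the corresponding candidate gene, and the publishing journal's IF.
d_study = np.array([0.45, 0.30, 0.62, 0.18, 0.51])
d_pooled = np.array([0.20, 0.25, 0.22, 0.20, 0.24])
impact_factor = np.array([31.0, 4.2, 12.5, 2.1, 14.8])

# Bias score as defined in the caption: log of the ratio of the individual
# effect size to the pooled estimate; values > 0 indicate overestimation.
bias_score = np.log(d_study / d_pooled)

# Rank correlation between bias score and journal IF.
rho, p = stats.spearmanr(bias_score, impact_factor)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```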
Figure 3. No association between statistical power and journal IF. The statistical power of 650 eligible neuroscience studies is plotted as a function of the IF of the publishing journal. Each red dot denotes a single study. Figure from Brembs et al. (2013).
Figure 4. High-ranking journals show no greater tendency to report randomization or blinding in animal experiments. Prevalence of reporting of randomization and blinded assessment of outcome in 2,671 publications describing the efficacy of interventions in animal models of eight different diseases, identified in the context of systematic reviews. Figure modified from Macleod et al. (2015).
Figure 5. Journals with an above-average error rate rank higher than journals with a lower error rate. Shown is the prevalence of gene name errors in supplementary Excel files, as the percentage of publications with supplementary gene lists in Excel files that are affected by gene name errors. Figure modified from Ziemann et al. (2016).
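The error class behind Figure 5 is mechanical enough to screen for automatically: spreadsheet software silently converts gene symbols such as SEPT2 into dates ("2-Sep"). Below is a minimal sketch of such a screen; the regex and the gene list are illustrative, not the exact pipeline of Ziemann et al. (2016):

```python
import re

# Date-like patterns that Excel produces from gene symbols,
# e.g. SEPT2 -> "2-Sep", MARCH1 -> "1-Mar" (locale-dependent).
DATE_LIKE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$",
    re.IGNORECASE,
)

def flag_converted_gene_names(gene_column):
    """Return entries in a supplementary gene list that look like
    Excel date conversions rather than valid gene symbols."""
    return [g for g in gene_column if DATE_LIKE.match(str(g).strip())]

# Illustrative column from a hypothetical supplementary file.
genes = ["TP53", "2-Sep", "BRCA1", "1-Mar", "MYC"]
print(flag_converted_gene_names(genes))  # ['2-Sep', '1-Mar']
```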
Figure 6. p-value reporting errors correlate significantly with journal rank. Shown is the correlation between journal IF and the median percentage of erroneous articles (left; a single article can contain multiple erroneous records) or erroneous individual records (right) in a given journal. Both linear and logarithmic (log[journal IF]) trend lines are shown. Figure redrawn from Szucs and Ioannidis (2016).
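Errors of the kind counted in Figure 6 can be detected by recomputing a p-value from the reported test statistic and degrees of freedom and comparing it with the reported value. Below is a minimal sketch for two-sided t-tests; the rounding tolerance is an illustrative assumption, not the criterion used by Szucs and Ioannidis (2016):

```python
from scipy import stats

def p_value_inconsistent(t, df, reported_p, tol=0.005):
    """Recompute the two-sided p for a reported t(df) statistic and
    flag it if it disagrees with the reported p beyond rounding."""
    recomputed = 2 * stats.t.sf(abs(t), df)
    return abs(recomputed - reported_p) > tol

# "t(28) = 2.20, p = .04" -> recomputed p ~ .036, consistent with rounding
print(p_value_inconsistent(2.20, 28, 0.04))  # False
# "t(28) = 2.20, p = .01" -> clearly inconsistent: a reporting error
print(p_value_inconsistent(2.20, 28, 0.01))  # True
```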
Table 1. Overview of the cited literature on journal rank and methodological soundness.
| Field | Criteria | Outcome | References |
|---|---|---|---|
| Biomedicine | Image duplications | Higher-ranking journals show a lower incidence of image duplications | Bik et al. (2016) |
| Crystallography | Quality of computer models | Five high-ranking journals significantly below average quality | Brown and Ramaswamy (2007) |
| Molecular psychiatry | Sample sizes and effect sizes | Higher-ranking journals overestimated effect sizes with smaller sample sizes | Munafò et al. (2009) |
| Neuroscience, psychology | Statistical power | Either no correlation of journal rank with statistical power or a negative correlation | Brembs et al. (2013) |
| Animal disease models | Reporting of randomization and blinded assessment of outcome | Lower reporting of randomization in higher-ranking journals and no correlation with reporting of blinded assessment of outcome | Macleod et al. (2015) |
| Genomics, cognitive neuroscience and psychology | Gene name and p-value reporting errors | More errors in higher-ranking journals | Ziemann et al. (2016); Szucs and Ioannidis (2016) |
| Medicine | Criteria for evidence-based medicine | Two studies found that higher-ranking journals met more criteria, while two failed to detect such an effect | Obremskey et al. (2005) |
| Psychology | Three reliability metrics: p-curve, TIVA and R-index | All three metrics indicate that the higher-ranking of two journals publishes less reliable work | Bittner and Schönbrodt |
| Biomedicine | Reproducibility of experiments | Reproducibility is low; not even “top” journals stand out | Scott et al. (2008) |
Judged by criteria such as peer review, the methodology employed, and the spread of journal ranks covered, the studies cited in the top six rows can be considered better supported than those in the bottom three rows.