
How does under-reporting of negative and inconclusive results affect the false-positive rate in meta-analysis? A simulation study.

Michal Kicinski

Abstract

OBJECTIVE: To investigate the impact of a higher publishing probability for statistically significant positive outcomes on the false-positive rate in meta-analysis.
DESIGN: Meta-analyses of different sizes (N=10, N=20, N=50 and N=100), levels of heterogeneity and levels of publication bias were simulated.
PRIMARY AND SECONDARY OUTCOME MEASURES: The type I error rate for the test of the mean effect size (ie, the rate at which the meta-analyses showed that the mean effect differed from 0 when it in fact equalled 0) was estimated. Additionally, the power and type I error rate of publication bias detection methods based on the funnel plot were estimated.
RESULTS: In the presence of a publication bias characterised by a higher probability of including statistically significant positive results, the meta-analyses frequently concluded that the mean effect size differed from zero when it actually equalled zero. The magnitude of the effect of publication bias increased with an increasing number of studies and between-study variability. A higher probability of including statistically significant positive outcomes introduced little asymmetry to the funnel plot. A publication bias of a sufficient magnitude to frequently overturn the meta-analytic conclusions was difficult to detect by publication bias tests based on the funnel plot. When statistically significant positive results were four times more likely to be included than other outcomes and a large between-study variability was present, more than 90% of the meta-analyses of 50 and 100 studies wrongly showed that the mean effect size differed from zero. In the same scenario, publication bias tests based on the funnel plot detected the bias at rates not exceeding 15%.
CONCLUSIONS: This study adds to the evidence that publication bias is a major threat to the validity of medical research and supports the usefulness of efforts to limit publication bias.

Keywords:  EGGER'S TEST; FUNNEL PLOT; META-ANALYSIS; PUBLICATION BIAS; TYPE I ERROR

Year:  2014        PMID: 25168036      PMCID: PMC4156818          DOI: 10.1136/bmjopen-2014-004831

Source DB:  PubMed          Journal:  BMJ Open        ISSN: 2044-6055            Impact factor:   2.692


This is the first study to evaluate both the impact of publication bias on the conclusions from meta-analysis and the ability of publication bias methods to detect that bias within the same meta-analysis samples. The model for publication bias was realistic because it was based on empirical research on publication bias in the medical literature. Selection models were not considered in this study because their relatively large computational burden made it impossible to incorporate them in the simulations, which involved analysing hundreds of thousands of samples.

Introduction

The tendency to decide whether to publish a study based on its results is commonly referred to as publication bias. Clearly, when some study outcomes are more likely to be reported than others, the available literature may be misleading. The phenomenon of research under-reporting has long been recognised as a potential source of bias.1–3 Meta-analysis, a tool that allows researchers to summarise the findings from multiple studies in a single estimate, plays an important role in the era of evidence-based decision-making. A key assumption of the standard meta-analysis model is that the sample of retrieved studies is representative of all conducted studies.4–6 One consequence of publication bias is that it affects the sample of studies available for a meta-analysis, thereby violating that assumption.7 Indeed, a growing body of evidence suggests that publication bias is present in many meta-analyses.8–11 Deciding whether to publish a study based on the statistical significance and the direction of the effect is the best-documented form of publication bias in the medical literature.12 13 Investigators who followed research projects from the submission of study protocols to ethics committees and medical agencies through to the publication of the results demonstrated that statistically significant positive results are often several times more likely to be published than other results.13–15 Consistent with this evidence, a recent study observed that statistically significant findings favouring treatment were often several times more likely to enter meta-analyses of clinical trials than other findings.16 The effect of publication bias on the validity of meta-analytic conclusions remains largely unexplored. Hedges17 showed that censoring all non-significant results induces a strong bias when conclusions are drawn from multiple studies.
Simulation studies have demonstrated that the standard meta-analysis model produces biased estimates of the mean effect size when publication bias is present.18–20 The conclusions from meta-analyses are sometimes inconsistent with the results of large studies, and publication bias is a likely cause of this inconsistency.21–24 The validity of any statistical procedure requires a low rate of false-positive findings. In the case of meta-analysis, a low type I error rate (ie, the rate at which a meta-analysis leads to the conclusion that the mean effect differs from 0 when it in fact equals 0) is particularly important because a meta-analytic conclusion is assumed to summarise the existing evidence. In the context of a meta-analysis of clinical trials, a false-positive result may lead to the conclusion of a beneficial effect from a treatment that is in fact less effective than the available alternatives.25 In general, a false-positive finding from a meta-analysis misinforms doctors, scientists and policymakers, potentially causing wastefulness or even harm. The aim of this study was to investigate the impact of a higher publishing probability for statistically significant positive outcomes on the type I error rate in meta-analysis. A simulation approach was used because the effect of publication bias on the conclusions from meta-analysis can only be evaluated when the exact nature of the selection process is known.

Methods

Data from individual studies

Meta-analyses of clinical trials with two arms and a binary outcome were simulated. However, the results of the simulations are applicable to other study designs as well because the distribution of the log-OR is approximately normal, as is the distribution of other commonly used effect size measures. Similar to another simulation study,26 the sample size was modelled using the exponential of a normal distribution. This approach gives a right-skewed distribution, which is a realistic model. Based on the characteristics of the meta-analyses from the Cochrane Database of Systematic Reviews,27 a mean of 4.51 and a variance of 1.47 were chosen. With these values, the median sample size equalled 91 and the IQR was 166. Following other simulation studies,19 20 26 28 29 equal sizes were used for the treatment group and control group. As in other simulation studies,19 20 26 the probability of the event in the control group (pC) was sampled from a uniform distribution U (0.3, 0.7). The probability of the event in the treatment group (pT) was calculated from the equation logit (pT)=logit (pC)+δ+θ, where δ was the effect of study-specific characteristics on the log-OR, and θ was the mean effect size. The mean effect size equalled 0 because the effect of publication bias on the type I error rate for the test of the mean effect size was investigated. I sampled δ from a normal distribution N (0, τ2). For the between-study variability, τ2, the values 0.02, 0.12 and 0.9 were considered. These values are the 10th, 50th and 90th centiles of the predictive distribution of the between-study variability in the meta-analyses of clinical trials from the Cochrane database.30 The size of the between-study variability is often expressed in terms of I2, defined as the proportion of the total variability due to heterogeneity.31 The considered values of τ2 correspond to I2=17%, I2=56% and I2=90%.
The number of events in the treatment and control group was sampled from a binomial distribution.
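The data-generating model above can be sketched in a few lines. This is an illustrative Python translation (the paper's simulations were run in R); the function name and the use of the standard library `random` module are my choices, not the author's code.

```python
import math
import random

def simulate_study(tau2, theta=0.0):
    """Draw one simulated two-arm trial under the model described above.
    Illustrative sketch; the original simulations were written in R."""
    # Per-arm sample size: exponential of a normal with mean 4.51 and
    # variance 1.47 (median sample size ~91), at least 2 participants
    n = max(2, int(round(math.exp(random.gauss(4.51, math.sqrt(1.47))))))
    # Event probability in the control group: uniform on (0.3, 0.7)
    p_c = random.uniform(0.3, 0.7)
    # Study-specific deviation of the log-OR, delta ~ N(0, tau^2)
    delta = random.gauss(0.0, math.sqrt(tau2))
    # logit(pT) = logit(pC) + delta + theta
    logit_pt = math.log(p_c / (1 - p_c)) + delta + theta
    p_t = 1 / (1 + math.exp(-logit_pt))
    # Binomial event counts in each (equal-sized) arm
    events_t = sum(random.random() < p_t for _ in range(n))
    events_c = sum(random.random() < p_c for _ in range(n))
    return n, events_t, events_c
```

With θ=0 the true mean log-OR is zero, so any systematic deviation of the pooled estimate from zero in later steps is attributable to the selection process.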

Selection process

The relative risk (RR) was defined as the ratio of the probability of including statistically significant positive results to the probability of including other results. However, the conclusions of the study are equally applicable to the case of a higher publishing probability for statistically significant negative outcomes. A conventional two-sided significance level of 0.05 was assumed. Three values of RR were considered: 1, 4 and 10. For RR=1, no publication bias was present. A value of four was chosen because multiple studies on publication bias estimated the ratio of the probability of publishing studies showing statistically significant positive results to the probability of publishing other results as close to four.13–15 A value of 10 represents a strong publication bias and is still relevant in the light of the empirical research on publication bias in the medical literature.13 16 32
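The selection process can be expressed as an inclusion rule: statistically significant positive results are always retained, while all other results are retained with probability 1/RR, which reproduces the stated ratio. This is a minimal sketch under my own parameterisation, not the author's exact code.

```python
import random

def include_study(log_or, se, rr):
    """Sketch of the one-sided selection process: statistically significant
    positive results are rr times more likely to be included than others."""
    z = log_or / se
    # Two-sided significance level of 0.05, positive direction only
    significant_positive = z > 1.96
    p_include = 1.0 if significant_positive else 1.0 / rr
    return random.random() < p_include
```

With rr=1 every result is included (no publication bias); with rr=4 or rr=10 non-significant and negative results are censored at the corresponding rate.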

Publication bias detection

A meta-analysis is often accompanied by an investigation of the presence of publication bias. Therefore, publication bias tests were incorporated in the simulations. The funnel plot is a scatter plot of effect estimates against some measure of precision. In the absence of a bias, the effect estimates from smaller studies scatter widely at the bottom of the funnel plot, with the spread narrowing among larger studies, so that the plot resembles a symmetrical inverted funnel.33 If there is a bias, funnel plots are often asymmetrical.33 34 Since funnel plot asymmetry is commonly used to investigate the presence of publication bias,35 the funnel plots were inspected visually and using the following formal tests: Egger's test, ‘Egger’;34 the rank correlation test, ‘Rank’;36 a modified Egger's test based on the efficient score, ‘Harbord’;28 a regression test based on sample size, ‘Peters’;26 a rank correlation test for binary data, ‘Schwarzer’;37 Egger's test based on the arcsine transformation, ‘Arc-Egger’;38 a rank correlation test based on the arcsine transformation, ‘Arc-rank’;38 and the trim and fill method, ‘Trim’.39 For all tests, a significance level of 0.05 was used. For ‘Egger’, ‘Rank’, ‘Harbord’, ‘Peters’, ‘Schwarzer’, ‘Arc-Egger’ and ‘Arc-rank’, two-sided tests were used. For the trim and fill method, the presence of publication bias was indicated when the number of missing studies estimated by the R estimator in the first step of the algorithm was greater than 3.39
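Of these, Egger's test is the most widely used: it regresses the standardized effect estimate (effect/SE) on precision (1/SE) and tests whether the intercept differs from zero, since asymmetry shifts the intercept away from the origin. A minimal hand-rolled OLS sketch (in practice one would use an established meta-analysis package):

```python
import math

def egger_test(effects, ses):
    """Sketch of Egger's regression test. Returns the t statistic for the
    regression intercept; compare |t| with the t(n-2) critical value."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual variance and the standard error of the intercept
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx))
    return intercept / se_int
```

For a perfectly symmetrical set of effect estimates the intercept, and hence the t statistic, is zero.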

Meta-analysis

The mean log-OR was estimated using the random effects model proposed by DerSimonian and Laird, which is a widely used approach to conduct a meta-analysis.40 Four sizes of meta-analyses were considered: N=10, N=20, N=50 and N=100. Meta-analyses including fewer than 10 studies were not considered because publication bias tests are not recommended in this case owing to low power.33
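The DerSimonian-Laird method first computes a fixed-effect pooled estimate, uses Cochran's Q to obtain a method-of-moments estimate of the between-study variance τ², and then re-weights each study by the inverse of its total variance. A compact sketch (illustrative, not the author's implementation):

```python
def dersimonian_laird(effects, variances):
    """Minimal sketch of the DerSimonian-Laird random-effects estimate
    of the mean effect size and its variance."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    # Fixed-effect (inverse-variance) pooled estimate
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments estimate of tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights and pooled estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    mean = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    var_mean = 1.0 / sum(w_star)
    return mean, var_mean, tau2
```

The test of the mean effect size then compares mean/sqrt(var_mean) with the standard normal critical value (1.96 for a two-sided 0.05 level).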

Simulations

Four sample sizes (N=10, N=20, N=50 and N=100), three sizes of the between-study variability (τ2=0.02, τ2=0.12 and τ2=0.9), and three levels of publication bias (RR=1, RR=4 and RR=10) were considered, resulting in 36 simulation scenarios. For each scenario, the estimates of the mean effect size were evaluated in terms of the bias and the mean squared error. The effect of publication bias on the type I error rate for the test of the mean effect size was estimated for a grid of values within the considered ranges of the level of publication bias and the size of between-study variability. A two-sided significance level of 0.05 was assumed. For each scenario, the power and the type I error rate for the publication bias tests were also investigated. Additionally, I estimated the type I error rate for the test of the mean effect size using only those samples where no publication bias was found. The purpose of this analysis was to investigate the effect of a one-sided selection process based on the statistical significance on the false-positive rate in meta-analysis in situations where publication bias detection methods cannot identify the bias. All reported estimates are based on 10 000 simulations. The analysis was conducted in R (V.2.15.0). The R code used to perform the simulations is available online (see data sharing statement).
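The overall logic of one scenario can be illustrated with a deliberately simplified, self-contained sketch: it draws normally distributed effect estimates directly instead of binomial trial data, and uses a small number of replications. The uniform range for the within-study variances and all names are my simplifications, so the output only qualitatively mirrors the reported results.

```python
import math
import random

def simulate_type1(n_studies, tau2, rr, n_sims=200, seed=1):
    """Simplified sketch of one simulation scenario: true mean effect 0,
    one-sided selection on significance, DerSimonian-Laird test.
    Returns the estimated type I error rate."""
    random.seed(seed)
    rejections = 0
    for _ in range(n_sims):
        effects, variances = [], []
        while len(effects) < n_studies:
            v = random.uniform(0.01, 0.25)            # within-study variance
            e = random.gauss(0.0, math.sqrt(tau2 + v))  # observed effect
            sig_pos = e / math.sqrt(v) > 1.96
            # Significant positive results are rr times more likely to enter
            if sig_pos or random.random() < 1.0 / rr:
                effects.append(e)
                variances.append(v)
        # DerSimonian-Laird random-effects estimate
        w = [1.0 / v for v in variances]
        sw = sum(w)
        fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sw
        q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
        c = sw - sum(wi ** 2 for wi in w) / sw
        t2 = max(0.0, (q - (n_studies - 1)) / c)
        ws = [1.0 / (v + t2) for v in variances]
        mean = sum(wi * ei for wi, ei in zip(ws, effects)) / sum(ws)
        # Two-sided test of the mean effect at the 0.05 level
        if abs(mean) * math.sqrt(sum(ws)) > 1.96:
            rejections += 1
    return rejections / n_sims
```

With rr=1 the estimated rate should sit near the nominal 0.05; raising rr inflates it, in line with the results below.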

Results

Validity of the mean effect size estimates

Figure 1 shows the type I error rates for the test of the mean effect size for the range of the level of publication bias and the amount of between-study variability considered in the study. In the presence of a selection process characterised by a higher probability of including statistically significant positive results, the meta-analyses frequently concluded that the mean effect size differed from zero when it in fact equalled zero. The magnitude of the effect of publication bias increased with an increasing number of studies and the amount of between-study variability. When statistically significant positive results were four times more likely to be included than other results, the type I error rate was between 11% and 100%. When statistically significant positive results were 10 times more likely to be included, between 25% and 100% of the meta-analyses concluded that the mean effect size differed from zero when it in fact equalled 0 (figure 1).
Figure 1

The effect of a higher probability of inclusion for statistically significant positive outcomes on the type I error rate for the test of the mean effect size in a meta-analysis of (A) 10 studies, (B) 20 studies, (C) 50 studies, (D) 100 studies. RR: the ratio of the probability of including statistically significant positive outcomes to the probability of including negative and/or not statistically significant outcomes.

A higher probability of including statistically significant positive outcomes led to a drastic increase of the bias and the mean squared error, especially when a large between-study variability was present (table 1). When statistically significant positive results were four times more likely to be included than other results and 90% of the variability was due to between-study differences, the random-effects meta-analysis overestimated the mean log-OR by approximately 0.5 on average. When statistically significant positive results were 10 times more likely to be included and the same amount of between-study variability was present, the random-effects meta-analysis overestimated the mean log-OR by 0.83 on average. The mean squared error was especially large when the between-study variability was large (table 1).
Table 1

Validity of estimates of the mean effect size

Publication bias | N   | τ2   | I2 (%) | Bias | MSE
None             | 10  | 0.02 | 17     | 0.00 | 0.01
None             | 10  | 0.12 | 56     | 0.00 | 0.03
None             | 10  | 0.90 | 90     | 0.00 | 0.11
None             | 20  | 0.02 | 17     | 0.00 | 0.01
None             | 20  | 0.12 | 56     | 0.00 | 0.01
None             | 20  | 0.90 | 90     | 0.00 | 0.05
None             | 50  | 0.02 | 17     | 0.00 | 0.00
None             | 50  | 0.12 | 56     | 0.00 | 0.01
None             | 50  | 0.90 | 90     | 0.00 | 0.02
None             | 100 | 0.02 | 17     | 0.00 | 0.00
None             | 100 | 0.12 | 56     | 0.00 | 0.00
None             | 100 | 0.90 | 90     | 0.00 | 0.01
RR=4             | 10  | 0.02 | 17     | 0.07 | 0.02
RR=4             | 10  | 0.12 | 56     | 0.16 | 0.06
RR=4             | 10  | 0.90 | 90     | 0.50 | 0.36
RR=4             | 20  | 0.02 | 17     | 0.06 | 0.01
RR=4             | 20  | 0.12 | 56     | 0.16 | 0.04
RR=4             | 20  | 0.90 | 90     | 0.49 | 0.30
RR=4             | 50  | 0.02 | 17     | 0.06 | 0.01
RR=4             | 50  | 0.12 | 56     | 0.16 | 0.03
RR=4             | 50  | 0.90 | 90     | 0.49 | 0.27
RR=4             | 100 | 0.02 | 17     | 0.06 | 0.01
RR=4             | 100 | 0.12 | 56     | 0.16 | 0.03
RR=4             | 100 | 0.90 | 90     | 0.49 | 0.25
RR=10            | 10  | 0.02 | 17     | 0.16 | 0.05
RR=10            | 10  | 0.12 | 56     | 0.34 | 0.15
RR=10            | 10  | 0.90 | 90     | 0.83 | 0.78
RR=10            | 20  | 0.02 | 17     | 0.16 | 0.04
RR=10            | 20  | 0.12 | 56     | 0.34 | 0.13
RR=10            | 20  | 0.90 | 90     | 0.83 | 0.73
RR=10            | 50  | 0.02 | 17     | 0.16 | 0.03
RR=10            | 50  | 0.12 | 56     | 0.34 | 0.12
RR=10            | 50  | 0.90 | 90     | 0.83 | 0.71
RR=10            | 100 | 0.02 | 17     | 0.16 | 0.03
RR=10            | 100 | 0.12 | 56     | 0.34 | 0.12
RR=10            | 100 | 0.90 | 90     | 0.83 | 0.70

All estimates were based on 10 000 samples.

Bias, the average estimate of the mean effect size; MSE, mean squared error.

Next, I investigated whether a one-sided selection process based on the statistical significance (which caused a drastic increase of the false-positive rate of the meta-analyses, as described in the previous section) was detectable by different publication bias methods. Figure 2 shows data from simulations without publication bias (A and B) and simulations in which statistically significant positive results were 10 times more likely to be included than other results (C and D). A visual examination of the funnel plots indicated that a one-sided selection process based on the statistical significance introduced little asymmetry to the funnel plot both when the between-study variability was small (compare figure 2A, C) and large (compare figure 2B, D). In other words, the funnel plot provided no evidence of publication bias when positive statistically significant results were 10 times more likely to be included than other results.
Figure 2

A funnel plot of simulated data when: (A) the probability of inclusion was the same for all outcomes and a small between-study variability was present (τ2=0.02), (B) the probability of inclusion was the same for all outcomes and a large between-study variability was present (τ2=0.9), (C) statistically significant positive outcomes were 10 times more likely to be included than other outcomes and a small between-study variability was present (τ2=0.02), (D) statistically significant positive outcomes were 10 times more likely to be included than other outcomes and a large between-study variability was present (τ2=0.9).

Table 2 gives the proportions of the meta-analyses in which the presence of publication bias was indicated by formal tests. The scenarios with publication bias (RR=4 and RR=10) provide estimates of the power of different tests to detect a one-sided selection process based on the statistical significance. The scenarios without publication bias provide estimates of the type I error rate (the rate at which publication bias was indicated when no publication bias was present). When statistically significant positive results were four times more likely to be included than other results, all methods indicated the presence of publication bias in not more than 15% of the meta-analyses for all simulation settings (table 2). When statistically significant positive results were 10 times more likely to be included, the power of publication bias detection methods did not exceed 30% for any simulation setting. The type I error rates for the ‘Egger’, ‘Harbord’ and ‘Arc-Egger’ tests substantially exceeded 0.05 for some simulation settings, especially when a large between-study variability was present.
Table 2

Power and type I error rate of publication bias detection methods

Publication bias | N   | τ2   | I2 (%) | Egger | Rank | Harbord | Peters | Schwarzer | Arc-Egger | Arc-rank | Trim
None             | 10  | 0.02 | 17     | 0.06  | 0.03 | 0.06    | 0.04   | 0.03      | 0.06      | 0.03     | 0.01
None             | 10  | 0.12 | 56     | 0.06  | 0.02 | 0.06    | 0.04   | 0.02      | 0.06      | 0.02     | 0.00
None             | 10  | 0.90 | 90     | 0.06  | 0.01 | 0.07    | 0.03   | 0.01      | 0.07      | 0.01     | 0.00
None             | 20  | 0.02 | 17     | 0.06  | 0.03 | 0.06    | 0.05   | 0.02      | 0.06      | 0.03     | 0.02
None             | 20  | 0.12 | 56     | 0.08  | 0.02 | 0.08    | 0.04   | 0.02      | 0.08      | 0.02     | 0.01
None             | 20  | 0.90 | 90     | 0.09  | 0.01 | 0.11    | 0.03   | 0.01      | 0.11      | 0.01     | 0.00
None             | 50  | 0.02 | 17     | 0.07  | 0.02 | 0.07    | 0.04   | 0.02      | 0.07      | 0.02     | 0.03
None             | 50  | 0.12 | 56     | 0.11  | 0.02 | 0.11    | 0.04   | 0.02      | 0.11      | 0.02     | 0.02
None             | 50  | 0.90 | 90     | 0.13  | 0.01 | 0.15    | 0.03   | 0.01      | 0.15      | 0.01     | 0.01
None             | 100 | 0.02 | 17     | 0.08  | 0.03 | 0.08    | 0.05   | 0.02      | 0.08      | 0.02     | 0.04
None             | 100 | 0.12 | 56     | 0.14  | 0.02 | 0.14    | 0.05   | 0.02      | 0.14      | 0.02     | 0.03
None             | 100 | 0.90 | 90     | 0.15  | 0.02 | 0.18    | 0.04   | 0.01      | 0.18      | 0.02     | 0.02
RR=4             | 10  | 0.02 | 17     | 0.05  | 0.02 | 0.05    | 0.04   | 0.02      | 0.05      | 0.03     | 0.00
RR=4             | 10  | 0.12 | 56     | 0.06  | 0.02 | 0.06    | 0.03   | 0.02      | 0.06      | 0.02     | 0.00
RR=4             | 10  | 0.90 | 90     | 0.05  | 0.01 | 0.06    | 0.03   | 0.02      | 0.06      | 0.02     | 0.00
RR=4             | 20  | 0.02 | 17     | 0.06  | 0.02 | 0.05    | 0.03   | 0.02      | 0.05      | 0.02     | 0.01
RR=4             | 20  | 0.12 | 56     | 0.07  | 0.01 | 0.07    | 0.03   | 0.01      | 0.08      | 0.01     | 0.00
RR=4             | 20  | 0.90 | 90     | 0.08  | 0.01 | 0.09    | 0.03   | 0.02      | 0.09      | 0.02     | 0.00
RR=4             | 50  | 0.02 | 17     | 0.08  | 0.02 | 0.08    | 0.04   | 0.02      | 0.08      | 0.02     | 0.04
RR=4             | 50  | 0.12 | 56     | 0.10  | 0.02 | 0.11    | 0.04   | 0.02      | 0.11      | 0.02     | 0.02
RR=4             | 50  | 0.90 | 90     | 0.11  | 0.02 | 0.12    | 0.03   | 0.05      | 0.13      | 0.05     | 0.00
RR=4             | 100 | 0.02 | 17     | 0.08  | 0.02 | 0.08    | 0.04   | 0.02      | 0.08      | 0.02     | 0.05
RR=4             | 100 | 0.12 | 56     | 0.13  | 0.03 | 0.15    | 0.05   | 0.04      | 0.14      | 0.04     | 0.03
RR=4             | 100 | 0.90 | 90     | 0.13  | 0.03 | 0.15    | 0.04   | 0.13      | 0.15      | 0.11     | 0.00
RR=10            | 10  | 0.02 | 17     | 0.05  | 0.02 | 0.05    | 0.04   | 0.02      | 0.05      | 0.02     | 0.00
RR=10            | 10  | 0.12 | 56     | 0.05  | 0.02 | 0.05    | 0.03   | 0.02      | 0.05      | 0.02     | 0.00
RR=10            | 10  | 0.90 | 90     | 0.08  | 0.03 | 0.07    | 0.04   | 0.02      | 0.07      | 0.03     | 0.00
RR=10            | 20  | 0.02 | 17     | 0.05  | 0.02 | 0.05    | 0.03   | 0.02      | 0.05      | 0.02     | 0.02
RR=10            | 20  | 0.12 | 56     | 0.06  | 0.02 | 0.06    | 0.03   | 0.02      | 0.06      | 0.02     | 0.00
RR=10            | 20  | 0.90 | 90     | 0.11  | 0.04 | 0.09    | 0.04   | 0.02      | 0.09      | 0.03     | 0.00
RR=10            | 50  | 0.02 | 17     | 0.06  | 0.02 | 0.06    | 0.03   | 0.02      | 0.06      | 0.02     | 0.05
RR=10            | 50  | 0.12 | 56     | 0.07  | 0.02 | 0.07    | 0.03   | 0.02      | 0.08      | 0.02     | 0.01
RR=10            | 50  | 0.90 | 90     | 0.19  | 0.06 | 0.14    | 0.04   | 0.03      | 0.14      | 0.03     | 0.00
RR=10            | 100 | 0.02 | 17     | 0.07  | 0.02 | 0.06    | 0.04   | 0.01      | 0.06      | 0.02     | 0.09
RR=10            | 100 | 0.12 | 56     | 0.09  | 0.03 | 0.09    | 0.03   | 0.05      | 0.09      | 0.04     | 0.03
RR=10            | 100 | 0.90 | 90     | 0.30  | 0.08 | 0.19    | 0.06   | 0.03      | 0.18      | 0.03     | 0.00

All proportions were based on 10 000 samples.


False-positive rate in meta-analyses in which no publication bias was found

For the completeness of the study, I repeated the investigation of the effect of a selection process based on the statistical significance on the type I error rate for the test of the mean effect size using only those samples in which a certain publication bias test did not show evidence of publication bias. The aim of this analysis was to study whether a one-sided selection process based on the statistical significance threatened the validity of those meta-analyses where no evidence of publication bias was apparent. For example, meta-analyses were simulated until 10 000 samples were identified in which the ‘Egger’ test did not show any evidence of publication bias. Next, those samples were used to estimate the rate at which the meta-analysis led to the conclusion that the mean effect size differed from 0 when it actually did not, under a selection process based on the statistical significance that could not be detected by the ‘Egger’ test. Table 3 compares the proportion of meta-analyses incorrectly showing that the mean effect size differed from zero among all samples (column ‘All’) and among samples where no publication bias was found. There was little difference in the type I error rate for the test of the mean effect size between the meta-analyses without evidence of publication bias and all meta-analyses.
Table 3

Type I error rate for the test for the mean effect size when no evidence of bias was present

Publication bias | N   | τ2   | I2 (%) | All  | Egger | Rank | Harbord | Peters | Schwarzer | Arc-Egger | Arc-rank | Trim
None             | 10  | 0.02 | 17     | 0.06 | 0.06  | 0.06 | 0.06    | 0.06   | 0.06      | 0.06      | 0.06     | 0.06
None             | 10  | 0.12 | 56     | 0.10 | 0.10  | 0.10 | 0.09    | 0.10   | 0.10      | 0.10      | 0.10     | 0.10
None             | 10  | 0.90 | 90     | 0.10 | 0.10  | 0.10 | 0.10    | 0.10   | 0.10      | 0.10      | 0.10     | 0.10
None             | 20  | 0.02 | 17     | 0.07 | 0.07  | 0.07 | 0.07    | 0.07   | 0.07      | 0.07      | 0.07     | 0.07
None             | 20  | 0.12 | 56     | 0.08 | 0.08  | 0.08 | 0.08    | 0.08   | 0.08      | 0.08      | 0.08     | 0.08
None             | 20  | 0.90 | 90     | 0.08 | 0.08  | 0.08 | 0.08    | 0.08   | 0.08      | 0.08      | 0.08     | 0.08
None             | 50  | 0.02 | 17     | 0.06 | 0.06  | 0.06 | 0.06    | 0.06   | 0.06      | 0.06      | 0.06     | 0.06
None             | 50  | 0.12 | 56     | 0.06 | 0.06  | 0.06 | 0.06    | 0.06   | 0.06      | 0.06      | 0.06     | 0.06
None             | 50  | 0.90 | 90     | 0.06 | 0.06  | 0.06 | 0.06    | 0.06   | 0.06      | 0.06      | 0.06     | 0.06
None             | 100 | 0.02 | 17     | 0.07 | 0.06  | 0.06 | 0.07    | 0.07   | 0.07      | 0.07      | 0.07     | 0.07
None             | 100 | 0.12 | 56     | 0.06 | 0.06  | 0.06 | 0.06    | 0.06   | 0.06      | 0.06      | 0.06     | 0.06
None             | 100 | 0.90 | 90     | 0.06 | 0.06  | 0.06 | 0.06    | 0.06   | 0.06      | 0.06      | 0.06     | 0.06
RR=4             | 10  | 0.02 | 17     | 0.11 | 0.11  | 0.11 | 0.11    | 0.11   | 0.11      | 0.11      | 0.11     | 0.11
RR=4             | 10  | 0.12 | 56     | 0.22 | 0.22  | 0.22 | 0.22    | 0.22   | 0.22      | 0.22      | 0.22     | 0.22
RR=4             | 10  | 0.90 | 90     | 0.41 | 0.41  | 0.42 | 0.41    | 0.41   | 0.42      | 0.41      | 0.41     | 0.41
RR=4             | 20  | 0.02 | 17     | 0.14 | 0.14  | 0.14 | 0.14    | 0.14   | 0.14      | 0.14      | 0.14     | 0.14
RR=4             | 20  | 0.12 | 56     | 0.29 | 0.28  | 0.29 | 0.28    | 0.28   | 0.29      | 0.28      | 0.28     | 0.29
RR=4             | 20  | 0.90 | 90     | 0.61 | 0.61  | 0.61 | 0.61    | 0.60   | 0.61      | 0.61      | 0.61     | 0.61
RR=4             | 50  | 0.02 | 17     | 0.21 | 0.21  | 0.21 | 0.21    | 0.21   | 0.21      | 0.21      | 0.21     | 0.21
RR=4             | 50  | 0.12 | 56     | 0.54 | 0.53  | 0.54 | 0.53    | 0.53   | 0.54      | 0.53      | 0.53     | 0.54
RR=4             | 50  | 0.90 | 90     | 0.91 | 0.91  | 0.91 | 0.91    | 0.91   | 0.91      | 0.91      | 0.91     | 0.91
RR=4             | 100 | 0.02 | 17     | 0.35 | 0.35  | 0.35 | 0.34    | 0.35   | 0.35      | 0.35      | 0.35     | 0.35
RR=4             | 100 | 0.12 | 56     | 0.81 | 0.80  | 0.81 | 0.80    | 0.80   | 0.81      | 0.80      | 0.80     | 0.81
RR=4             | 100 | 0.90 | 90     | 1.00 | 1.00  | 1.00 | 1.00    | 1.00   | 1.00      | 1.00      | 1.00     | 1.00
RR=10            | 10  | 0.02 | 17     | 0.25 | 0.25  | 0.25 | 0.25    | 0.25   | 0.25      | 0.25      | 0.25     | 0.25
RR=10            | 10  | 0.12 | 56     | 0.54 | 0.53  | 0.54 | 0.54    | 0.53   | 0.54      | 0.53      | 0.54     | 0.54
RR=10            | 10  | 0.90 | 90     | 0.79 | 0.79  | 0.79 | 0.79    | 0.79   | 0.79      | 0.79      | 0.79     | 0.79
RR=10            | 20  | 0.02 | 17     | 0.37 | 0.37  | 0.37 | 0.37    | 0.37   | 0.37      | 0.37      | 0.37     | 0.37
RR=10            | 20  | 0.12 | 56     | 0.77 | 0.77  | 0.77 | 0.77    | 0.77   | 0.77      | 0.77      | 0.77     | 0.77
RR=10            | 20  | 0.90 | 90     | 0.96 | 0.96  | 0.96 | 0.96    | 0.96   | 0.96      | 0.96      | 0.96     | 0.96
RR=10            | 50  | 0.02 | 17     | 0.71 | 0.71  | 0.71 | 0.71    | 0.71   | 0.71      | 0.71      | 0.70     | 0.71
RR=10            | 50  | 0.12 | 56     | 0.98 | 0.98  | 0.98 | 0.98    | 0.98   | 0.98      | 0.98      | 0.98     | 0.98
RR=10            | 50  | 0.90 | 90     | 1.00 | 1.00  | 1.00 | 1.00    | 1.00   | 1.00      | 1.00      | 1.00     | 1.00
RR=10            | 100 | 0.02 | 17     | 0.94 | 0.94  | 0.94 | 0.94    | 0.94   | 0.94      | 0.94      | 0.94     | 0.94
RR=10            | 100 | 0.12 | 56     | 1.00 | 1.00  | 1.00 | 1.00    | 1.00   | 1.00      | 1.00      | 1.00     | 1.00
RR=10            | 100 | 0.90 | 90     | 1.00 | 1.00  | 1.00 | 1.00    | 1.00   | 1.00      | 1.00      | 1.00     | 1.00

The column ‘All’ shows the type I error rates for the test for the mean effect size based on all samples. The remaining columns show the type I error rates based on meta-analyses, in which no publication bias was detected by the test in the column heading.


Discussion

The results of these realistic simulations demonstrate that when a one-sided selection process based on the statistical significance is present, the false-positive rate in meta-analysis increases dramatically. The magnitude of the problem increases with the number of studies and the amount of heterogeneity. When statistically significant positive results were four times more likely to be included in the meta-analyses than other results, the false-positive rate was between 11% and 100%. When statistically significant positive results were 10 times more likely to be included, between 25% and 100% of the meta-analyses wrongly concluded that the mean effect size differed from zero. Publication bias tests based on the funnel plot were unlikely to detect a publication bias of a sufficient magnitude to frequently overturn the meta-analytic conclusions. For example, when statistically significant positive results were four times more likely to be included and a large between-study variability was present, more than 90% of the meta-analyses of 50 and 100 studies wrongly concluded that the mean effect size differed from zero. In the same scenario, all publication bias tests based on the funnel plot detected the bias at rates not exceeding 15%. The power of the tests did not exceed 30% for any simulation setting. In general, Egger's test,34 the modified Egger's test based on the efficient score28 and Egger's test based on the arcsine transformation38 showed the highest power. However, the type I error rate of these tests substantially exceeded 0.05, especially when a large between-study variability was present. Many selection processes are known to introduce a considerable amount of asymmetry to the funnel plot.
For example, when studies with the most extreme negative effect estimates fail to enter a meta-analysis, a test based on the R estimator from the trim and fill method provides a powerful tool to detect this bias.39 In addition to the type of selection process, the mean effect size also determines the performance of publication bias detection methods. Several studies considering different selection processes have observed that tests based on the funnel plot are characterised by a low power when the mean effect size equals zero.26 41 The current study shows that this is also the case for a one-sided publication bias based on the statistical significance. A higher probability of including statistically significant positive results caused a large increase of the type I error rate for the test of the mean effect size even in those meta-analyses where publication bias tests did not detect the bias. This result demonstrates that under-reporting of negative and non-significant results is also a threat to the validity of those meta-analyses where publication bias cannot be found by the methods based on the funnel plot. The most common approaches to address publication bias in a meta-analysis include ignoring the issue and applying methods based on the funnel plot.35 The current study demonstrates that when a one-sided publication bias based on the statistical significance is possibly present, the issue should never be ignored because this bias causes a severe increase of the false-positive rate in meta-analysis. Moreover, the study shows that the methods based on the funnel plot are not appropriate to address the problem because a selection process based on the statistical significance introduces little asymmetry to the funnel plot when the mean effect size equals zero. Parametric16 42 43 and non-parametric44 45 selection models may be an attractive alternative to the methods based on the funnel plot.
In a recent study with settings based on characteristics of large meta-analyses from major medical journals, a Bayesian hierarchical selection model outperformed methods based on the funnel plot.16 Future research should compare the performance of different selection models and methods based on the funnel plot in a wider range of scenarios. Selection models were not considered in this study because their relatively large computational burden made it impossible to incorporate them in the simulations, which involved analysing hundreds of thousands of samples. Many recent developments promote the complete and unbiased reporting of clinical trials. The International Committee of Medical Journal Editors began to require trial registration as a condition for publication in 2005. In 2008, the 59th World Medical Association (WMA) General Assembly stated that clinical trials must be registered prospectively and declared the public disclosure of positive, negative and inconclusive results an author's duty. The results of this study add to the evidence that publication bias is a major threat to the validity of conclusions from medical research and strongly support the usefulness of the efforts to limit publication bias.

Conclusions

Under-reporting of negative and inconclusive results, which was demonstrated by studies on publication bias, represents a major threat to the validity of meta-analysis. A higher probability of including statistically significant positive outcomes causes a severe increase of the false-positive rate in meta-analysis. Moreover, a one-sided selection process based on the statistical significance of a sufficient magnitude to dramatically bias meta-analysis conclusions is poorly detectable by publication bias methods based on the funnel plot when the mean effect size equals 0. Future research is needed to compare the performance of these methods with selection models. The study supports the usefulness of initiatives aiming to reduce publication bias in the medical literature.
  35 in total

1.  A comparison of methods to detect publication bias in meta-analysis.

Authors:  P Macaskill; S D Walter; L Irwig
Journal:  Stat Med       Date:  2001-02-28       Impact factor: 2.373

2.  Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.

Authors:  S Duval; R Tweedie
Journal:  Biometrics       Date:  2000-06       Impact factor: 2.571

3.  Quantifying heterogeneity in a meta-analysis.

Authors:  Julian P T Higgins; Simon G Thompson
Journal:  Stat Med       Date:  2002-06-15       Impact factor: 2.373

4.  A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints.

Authors:  Roger M Harbord; Matthias Egger; Jonathan A C Sterne
Journal:  Stat Med       Date:  2006-10-30       Impact factor: 2.373

5.  A test for publication bias in meta-analysis with sparse binary data.

Authors:  Guido Schwarzer; Gerd Antes; Martin Schumacher
Journal:  Stat Med       Date:  2007-02-20       Impact factor: 2.373

6.  Selection models with monotone weight functions in meta analysis.

Authors:  Kaspar Rufibach
Journal:  Biom J       Date:  2011-05-12       Impact factor: 2.207

7.  Operating characteristics of a rank correlation test for publication bias.

Authors:  C B Begg; M Mazumdar
Journal:  Biometrics       Date:  1994-12       Impact factor: 2.571

8.  Predictive ability of meta-analyses of randomised controlled trials.

Authors:  J Villar; G Carroli; J M Belizán
Journal:  Lancet       Date:  1995-03-25       Impact factor: 79.321

9.  Selective publication of antidepressant trials and its influence on apparent efficacy.

Authors:  Erick H Turner; Annette M Matthews; Eftihia Linardatos; Robert A Tell; Robert Rosenthal
Journal:  N Engl J Med       Date:  2008-01-17       Impact factor: 91.245

10.  Characteristics of meta-analyses and their component studies in the Cochrane Database of Systematic Reviews: a cross-sectional, descriptive analysis.

Authors:  Jonathan Davey; Rebecca M Turner; Mike J Clarke; Julian P T Higgins
Journal:  BMC Med Res Methodol       Date:  2011-11-24       Impact factor: 4.615
