| Literature DB >> 34173943 |
Rubén López-Nicolás, José Antonio López-López, María Rubio-Aparicio, Julio Sánchez-Meca.
Abstract
Meta-analysis is a powerful and important tool to synthesize the literature about a research topic. Like other kinds of research, meta-analyses must be reproducible to be compliant with the principles of the scientific method. Furthermore, reproducible meta-analyses can be easily updated with new data and reanalysed applying new and more refined analysis techniques. We attempted to empirically assess the prevalence of transparency and reproducibility-related reporting practices in published meta-analyses from clinical psychology by examining a random sample of 100 meta-analyses. Our purpose was to identify the key points that could be improved, with the aim of providing some recommendations for carrying out reproducible meta-analyses. We conducted a meta-review of meta-analyses of psychological interventions published between 2000 and 2020. We searched PubMed, PsycInfo and Web of Science databases. A structured coding form to assess transparency indicators was created based on previous studies and existing meta-analysis guidelines. We found major issues concerning: reporting of completely reproducible search procedures, specification of the exact method to compute effect sizes, choice of weighting factors and estimators, lack of availability of the raw statistics used to compute the effect size and of interoperability of available data, and a practically total absence of analysis script code sharing. Based on our findings, we conclude with recommendations intended to improve the transparency, openness, and reproducibility-related reporting practices of meta-analyses in clinical psychology and related areas.
Keywords: Data sharing; Meta-analysis; Meta-science; Reproducibility; Transparency and openness practices
Year: 2021 PMID: 34173943 PMCID: PMC8863703 DOI: 10.3758/s13428-021-01644-z
Source DB: PubMed Journal: Behav Res Methods ISSN: 1554-351X
Fig. 1 Percentage of a meta-analyses preregistered, b preregistration locations, c protocol availability, d guidelines adherence, e competing interest statements, f funding statements, and g accessibility of meta-analyses. N indicates total number of meta-analyses assessed for each indicator
Fig. 2 Percentage reported of systematic review methods by a eligibility criteria and literature search, and b data collection process, showing different indicators for each category. N indicates total number of meta-analyses assessed for each indicator
Fig. 3 Percentage reported of meta-analysis methods by a effect measures, and b synthesis and analysis methods, showing different indicators for each category. N indicates total number of meta-analyses assessed for each indicator
Fig. 4 Percentage of a meta-analyses that reported some raw data, b what data were available and whether these were in interoperable formats, and c meta-analyses that shared the analysis script code; each interoperability bar corresponds to the primary data represented over it. N indicates total number of meta-analyses assessed for each indicator
Odds ratios and 95% CI between predictors and transparency and reproducibility-related indicators
| Indicator | Year | | Preregistration | | Guideline adherence statement | |
|---|---|---|---|---|---|---|
| | Simple | Multiple | Simple | Multiple | Simple | Multiple |
| Specify the year for first date searched | 1.06 [0.96–1.17] | 1.04 [0.93–1.17] | 1.24 [0.50–3.25] | 0.64 [0.21–1.93] | ||
| Report the full search strategy | 1.10 [1.00–1.22] | 1.05 [0.94–1.17] | 2.50 [0.82–9.38] | 1.41 [0.40–5.76] | ||
| Specify the eligibility criteria operatively | 1.19 [0.95–1.50] | |||||
| Describe the screening process | 2.44 [0.37–48.35] | |||||
| List all variables for which data were sought | 1.12 [0.99–1.26] | 2.97 [0.90–13.55] | 1.56 [0.33–11.25] | 2.27 [0.61–11.18] | ||
| Describe methods used for assessing risk of bias of individual studies | 1.10 [0.97–1.25] | |||||
| Identify the statistical model assumed | 1.31 [0.28–9.34] | 0.25 [0.03–2.29] | ||||
| Identify the estimation method of τ2 | 1.15 [0.96–1.47] | 1.06 [0.89–1.34] | 3.12 [0.71–13.41] | 3.18 [0.88–12.03] | 1.97 [0.47–8.43] | |
| Describe any methods to assess reporting biases (including publication bias) | 3.79 [0.99–25.08] | 4.73 [0.97–38.21] | 0.81 [0.32–2.14] | |||
| Mention the software used to carry out the statistical analyses | 2.54 [0.44–48.04] | 1.58 [0.19–35.96] | 2.07 [0.49–14.13] | 0.99 [0.17–8.25] | ||
| Statistics used to compute the effect size are available | 1.09 [0.98–1.24] | 1.05 [0.93–1.20] | 2.08 [0.72–5.86] | 1.38 [0.43–4.26] | 2.04 [0.74–5.66] | |
Odds ratios and CIs not interpretable due to separation were omitted. Odds ratio 95% CIs are presented in brackets. Bolded values indicate CIs that do not contain the null value
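The footnote's "null value" criterion can be made concrete: for an odds ratio, a 95% CI is derived from the logistic-regression coefficient and its standard error, and a CI containing 1 means the predictor's association is not distinguishable from the null. A minimal Python sketch, using a made-up coefficient and standard error (not values from the table):

```python
import math

# Hypothetical logistic-regression output: log-odds coefficient and its
# standard error (e.g., change per publication year; numbers are made up).
b, se = 0.093, 0.048

# Odds ratio and Wald-type 95% confidence interval on the OR scale
or_ = math.exp(b)
lo = math.exp(b - 1.96 * se)
hi = math.exp(b + 1.96 * se)

# If the interval [lo, hi] contains 1, the CI includes the null value,
# so the odds ratio would not be bolded under the table's convention.
contains_null = lo <= 1.0 <= hi
```

With these illustrative numbers the interval straddles 1, which is the pattern seen in most cells of the table above.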
Summary of results and recommendations on the key points lacking transparency
| Point | Reporting rate | Why is it important? | Recommendations |
|---|---|---|---|
| Completely reproducible electronic search | 37% [28%–47%] | Facilitates the evaluation of the comprehensiveness of the review and its updating along the same lines. | Always report the full search strategy for ALL databases consulted, detailing dates, limits, specific terms, and Boolean connectors. To save space, these details can be reported as supplementary material hosted by the journal or in online repositories. |
| Specify effect measure formula | 15% [9%–24%] | Due to the variety of approaches to define standardized and unstandardized mean differences, specification of the formula used is required to ensure the reproducibility of results. | Always report the specific formula in the paper itself or refer readers to a reference (including the equation number and/or the book/article page where the formula can be found). |
| Identify the weighting factor | 30% [22%–40%] | Although inverse variances are the most popular weighting scheme, other alternatives are available, and the choice can have an impact on the results. | Always specify the weighting factor used. Note that this should only take a few words. |
| Identify the estimation method of the heterogeneity variance, τ2 | 13% [7%–21%] | The between-studies (or heterogeneity) variance is used in random-effects weights and prediction intervals, as well as in the calculation of popular indices in meta-analysis such as I2 and pseudo-R2. Many estimators of τ2 have been proposed, and the resulting estimates often show important discrepancies among estimators. | Always report and justify the estimation method of the heterogeneity variance. The choice should be based on the data set features along with recommendations from simulation studies under conditions similar to those of the meta-analytic database. |
| Open availability of statistics used to compute the effect size | 30% [21%–39%] | These are the primary raw data used to calculate the effect measures. Availability of this information, along with the effect measure formula, allows the analytic reproducibility of primary effect measures. | Always share ALL coded raw data, prior to any data handling, in easily computer-readable formats. Online repositories are very useful for this (OSF, Figshare, Zenodo, GitHub…), but other options include journal or personal websites. |
| Interoperability of data sharing format | 3% [1–9%], 3% [1–9%], 4% [1–10%], 4% [1–12%], 7% [2–22%], 4% [1–12%] | Significantly increases the efficiency of data reuse through the use of computer-readable, non-proprietary file formats, avoiding the error-prone process of manually recoding available data for reproduction or reuse attempts. | Always share data in interoperable, non-proprietary formats. |
| Open availability of analysis script code | 1% [0–5%] | It contains a detailed step-by-step description of the analyses performed. Sharing it is the best way to ensure analytic reproducibility and to avoid the ambiguities of verbal descriptions. | Always share the analysis script code; Moreau and Gamble offer practical guidance on this point. Again, online repositories, personal websites or journal hosting are very useful for hosting the files. |
95% CIs are presented in brackets
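Several of the table's key points (effect measure formula, weighting factor, τ² estimator) concern quantities that are cheap to state explicitly. As an illustration of why, here is a minimal Python sketch of a random-effects meta-analysis pipeline using invented summary statistics: Hedges' g as the effect measure, inverse-variance weights, and the DerSimonian–Laird estimator of τ². All study numbers are hypothetical; a real analysis would typically use a dedicated package (e.g., metafor in R), and a different formula or estimator choice at any of these three steps would change the pooled result.

```python
import math

# Hypothetical per-study summaries: (mean1, sd1, n1, mean2, sd2, n2)
studies = [
    (12.0, 4.0, 25, 10.0, 4.5, 25),
    (15.0, 5.0, 40, 11.0, 5.5, 38),
    (9.0, 3.0, 30, 8.5, 3.2, 32),
]

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # Cohen's d (pooled SD)
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)     # correction factor J
    g = j * d
    # Approximate sampling variance of g
    v = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, v

effects = [hedges_g(*s) for s in studies]
g = [e[0] for e in effects]
v = [e[1] for e in effects]

# DerSimonian-Laird estimator of the between-studies variance tau^2,
# based on Cochran's Q under fixed-effect (inverse-variance) weights
w = [1 / vi for vi in v]
k = len(g)
gbar = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
q = sum(wi * (gi - gbar) ** 2 for wi, gi in zip(w, g))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)          # truncated at zero

# Random-effects pooled estimate with inverse-variance weights 1/(v + tau^2)
w_re = [1 / (vi + tau2) for vi in v]
mu = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
```

Reporting each of these choices (g rather than d, inverse-variance weights, DerSimonian–Laird rather than, say, REML) takes one sentence each, yet is exactly what the low reporting rates above show is usually missing.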
Summary of results and recommendations on different practices related to promoting transparency
| Point | Practice rate | Why is it important? | Recommendations |
|---|---|---|---|
| Use of reporting guidelines | 30% [20–40%] | A very helpful tool that facilitates the transparent reporting of all relevant points of the rationale, methods and results of a systematic review or meta-analysis. It also standardizes the report, facilitating the readability, assessment and updating of the systematic review and/or meta-analysis. | Use well-established, up-to-date reporting guidelines intended for meta-analyses, such as the recently updated PRISMA 2020 (Page et al.). |
| Preregistration | 19% [12–17%] | It prevents result-based bias by stating the main hypotheses, design and analysis plan prior to obtaining the results. Furthermore, it can provide a transparent project timeline, workflow and record of the decision-making process. | Specialized registries such as PROSPERO can be helpful since they are tailored to the SR/MA design. General repositories such as OSF are also useful, as they provide space to store all relevant material related to the project. Note that preregistration does not restrict flexibility: deviations from the preregistered protocol are common, and they should simply be reported. |