Marcus R Munafò1,2, Brian A Nosek3,4, Dorothy V M Bishop5, Katherine S Button6, Christopher D Chambers7, Nathalie Percie du Sert8, Uri Simonsohn9, Eric-Jan Wagenmakers10, Jennifer J Ware11, John P A Ioannidis12,13,14.
Abstract
Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.
Year: 2017 PMID: 33954258 PMCID: PMC7610724 DOI: 10.1038/s41562-016-0021
Source DB: PubMed Journal: Nat Hum Behav ISSN: 2397-3374
Figure 1. Threats to reproducible science.
An idealized version of the hypothetico-deductive model of the scientific method is shown. Various potential threats to this model exist (indicated in red), including lack of replication[5], hypothesizing after the results are known (HARKing)[7], poor study design, low statistical power[2], analytical flexibility[51], P-hacking[4], publication bias[3] and lack of data sharing[6]. Together these will serve to undermine the robustness of published research, and may also impact on the ability of science to self-correct.
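To make one of these threats concrete, the short simulation below illustrates how analytical flexibility and P-hacking inflate the false-positive rate. This is an illustrative sketch, not taken from the paper; the sample sizes, the number of outcomes and the alpha level are assumptions chosen purely for demonstration. All data are pure noise, so a single pre-specified outcome should be "significant" at roughly the nominal 5% rate, whereas reporting whichever of five outcomes gives the smallest p-value is "significant" far more often.

```python
# Illustrative sketch (not from the paper) of P-hacking via outcome switching:
# testing several outcomes and reporting whichever gives the smallest p-value.
# The null is true throughout, so the nominal false-positive rate is 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_simulations = 10_000   # simulated "studies"
n_per_group = 20         # participants per group (assumed for demonstration)
n_outcomes = 5           # outcomes the flexible analyst can choose among
alpha = 0.05

honest_hits = 0    # significant results when one outcome is fixed in advance
phacked_hits = 0   # significant results when the best of five is reported

for _ in range(n_simulations):
    # Null is true: both groups are drawn from the same distribution.
    group_a = rng.normal(size=(n_outcomes, n_per_group))
    group_b = rng.normal(size=(n_outcomes, n_per_group))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue

    honest_hits += p_values[0] < alpha      # pre-specified outcome only
    phacked_hits += p_values.min() < alpha  # cherry-picked best outcome

print(f"Pre-registered single outcome: {honest_hits / n_simulations:.1%} false positives")
print(f"Best of {n_outcomes} outcomes: {phacked_hits / n_simulations:.1%} false positives")
```

With five independent outcomes the flexible strategy yields roughly a 20% false-positive rate (about 1 - 0.95^5), four times the nominal level, which is why pre-registration of a single primary outcome is among the safeguards proposed in the table below.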
Table 1. A manifesto for reproducible science.
| Theme | Proposal | Examples of initiatives/potential solutions (extent of current adoption) | Stakeholder(s) |
|---|---|---|---|
| Methods | Protecting against cognitive biases | All of the initiatives listed below (* to ****); blinding (**) | J, F |
| | Improving methodological training | Rigorous training in statistics and research methods for future researchers (*) | I, F |
| | Independent methodological support | Involvement of methodologists in research (**); independent oversight (*) | F |
| | Collaboration and team science | Multi-site studies/distributed data collection (*); team-science consortia (*) | I, F |
| Reporting and dissemination | Promoting study pre-registration | Registered Reports (*) | J, F |
| | Improving the quality of reporting | Use of reporting checklists (**) | J |
| | Protecting against conflicts of interest | Disclosure of conflicts of interest (***) | J |
| Reproducibility | Encouraging transparency and open science | Open data, materials, software and so on (* to **) | J, F, R |
| Evaluation | Diversifying peer review | Preprints (* in biomedical/behavioural sciences, **** in physical sciences) | J |
| Incentives | Rewarding open and reproducible practices | Badges (*) | J, I, F |
Estimated extent of current adoption: *, <5%; **, 5–30%; ***, 30–60%; ****, >60%. Abbreviations for key stakeholders: J, journals/publishers; F, funders; I, institutions; R, regulators.
Figure 2. The impact of introducing badges for data sharing.
In January 2014, the journal Psychological Science (PSCI) introduced badges for articles with open data. Immediately afterwards, the proportion of articles with open data increased steeply, and by October 2015, 38% of articles in Psychological Science had open data. For comparison journals (Clinical Psychological Science (CPS), Developmental Psychology (DP), Journal of Experimental Psychology: Learning, Memory and Cognition (JEPLMC) and Journal of Personality and Social Psychology (JPSP)) the proportion of articles with open data remained uniformly low. Figure adapted from ref. 75, PLoS.