Anne L Plant, Chandler A Becker, Robert J Hanisch, Ronald F Boisvert, Antonio M Possolo, John T Elliott.
Abstract
The current push for rigor and reproducibility is driven by a desire for confidence in research results. Here, we suggest a framework for a systematic process, based on consensus principles of measurement science, to guide researchers and reviewers in assessing, documenting, and mitigating the sources of uncertainty in a study. All study results have associated ambiguities that are not always clarified by simply establishing reproducibility. By explicitly considering sources of uncertainty, noting aspects of the experimental system that are difficult to characterize quantitatively, and proposing alternative interpretations, the researcher provides information that enhances comparability and reproducibility.
Year: 2018 PMID: 29684013 PMCID: PMC5933802 DOI: 10.1371/journal.pbio.2004299
Source DB: PubMed Journal: PLoS Biol ISSN: 1544-9173 Impact factor: 8.029
Identifying, reporting, and mitigating sources of uncertainty in a research study.
Clearly articulate the goals of the study and the basis for any generalizability to other settings, species, conditions, etc., claimed in the conclusions. State the experimental design, including the variables to be tested, the number of samples, the statistical models to be used, how sampling is performed, etc. Provide preliminary data or evaluations that support the selection of protocols and statistical models. Identify and evaluate assumptions related to the anticipated experiments, theories, and methods for analyzing results.
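The sample-size planning described above can be sketched with a standard normal-approximation power calculation for comparing two group means. The effect size, anticipated standard deviation, alpha, and power below are illustrative assumptions, not values from the study:

```python
import math
from statistics import NormalDist

# Illustrative planning assumptions (not from the study):
alpha, power = 0.05, 0.80
delta = 1.0      # smallest effect worth detecting (same units as sigma)
sigma = 1.5      # anticipated per-group standard deviation

# Normal-approximation sample size per group for a two-sided,
# two-sample comparison of means: n = 2 * ((z_a + z_b) * sigma / delta)^2
z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_b = NormalDist().inv_cdf(power)           # ~0.84
n_per_group = 2 * ((z_a + z_b) * sigma / delta) ** 2

print(f"planned samples per group: {math.ceil(n_per_group)}")
```

Stating such a calculation (and its assumed effect size and variance) in advance documents why the chosen number of samples should be sufficient.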
Characterize reagents and control samples (e.g., composition, purity, activity, etc.). Ensure that experimental equipment is responding correctly (e.g., through use of calibration materials and verification of vendor specifications). Show that positive and negative control samples are appropriate in composition, sensitivity, and other characteristics to be meaningful indicators of the variables being tested. Evaluate the experimental environment (e.g., laboratory conditions such as temperature and temperature fluctuations, humidity, vibration, electronic noise, etc.).
Acquire supplementary data that provide indicators of the quality of experimental data. These indicators include precision (i.e., repeatability, with statistics such as standard deviation and variance), accuracy (which can be assessed by applying alternative [orthogonal] methods or by comparison to a reference material), sensitivity to environmental or experimental perturbants (by testing for assay robustness to putatively insignificant experimental protocol changes), and the dynamic range and response function of the experimental protocol or assay (and assuring that data points are within that valid range). Reproduce the data using different technicians, laboratories, instruments, methods, etc. (i.e., meet the conditions for reproducibility as defined in the VIM).
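The repeatability statistics mentioned above can be computed directly from replicate measurements. A minimal sketch, using illustrative values rather than data from the study:

```python
import statistics

# Hypothetical replicate measurements of one quantity (arbitrary units);
# the values are illustrative only.
replicates = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)            # sample standard deviation
variance = statistics.variance(replicates)   # sample variance
cv_percent = 100 * sd / mean                 # coefficient of variation

print(f"mean = {mean:.3f}, sd = {sd:.3f}, "
      f"variance = {variance:.4f}, CV = {cv_percent:.2f} %")
```

Reporting such summary statistics alongside the primary data gives readers a quantitative indicator of precision.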
Justify the basis for the selected statistical analyses. Quantify the combined uncertainties of the measured values using methods described in the GUM. Evaluate the robustness and accuracy of algorithms, code, software, and analytical models to be used in analysis of data (e.g., by testing against reference datasets). Compare data and results with previous data and results (yours and others’). Identify other uncontrolled potential sources of bias or uncertainty in the data. Consider feasible alternative interpretations of the data. Evaluate the predictive power of models used.
Make available all supplementary material that fully describes the experiment/simulation and its analysis. Release well-documented data and code used in the study. Collect and archive metadata that provide documentation related to process details, reagents, and other variables; include with numerical data as part of the dataset. |
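One simple way to keep metadata with the numerical data, as recommended above, is a machine-readable sidecar file archived alongside the dataset. A minimal sketch; the file names and metadata fields are hypothetical, not prescribed by the study:

```python
import csv
import json
from pathlib import Path

# Illustrative data and metadata (hypothetical values).
data = [{"sample_id": "S1", "value": 10.2},
        {"sample_id": "S2", "value": 9.8}]
metadata = {
    "protocol": "assay v1.2",
    "reagent_lot": "LOT-0042",
    "instrument": "plate reader, calibrated 2018-01-15",
    "temperature_C": 22.5,
}

# Write the numerical data and its metadata side by side, so the
# process details travel with the dataset.
out = Path("dataset")
out.mkdir(exist_ok=True)
with open(out / "measurements.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["sample_id", "value"])
    writer.writeheader()
    writer.writerows(data)
with open(out / "metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Keeping metadata in a structured format (rather than only in a lab notebook) makes the archived dataset self-describing for later reuse.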
Abbreviations: GUM, Guide to the Expression of Uncertainty in Measurement; VIM, International Vocabulary of Basic and General Terms in Metrology.