Abstract
Analysis of scientific data involves many components, one of which is often statistical testing with the calculation of p-values. However, researchers too often pepper their papers with p-values in the absence of critical thinking about their results. In fact, statistical tests in their various forms address just one question: does an observed difference exceed that which might reasonably be expected solely as a result of sampling error and/or random allocation of experimental material? Such tests are best applied to the results of designed studies with reasonable control of experimental error and sampling error, as well as acquisition of a sufficient sample size. Nevertheless, attributing an observed difference to a specific treatment effect requires critical thinking on the part of the scientist. Observational studies involve data sets whose size is usually a matter of convenience, with results that reflect a number of potentially confounding factors. In this situation, statistical testing is not appropriate and p-values may be misleading; other, more modern statistical tools should be used instead, including graphic analysis, computer-intensive methods, regression trees, and other procedures broadly classified as bioinformatics, data mining, and exploratory data analysis. In this review, the utility of p-values calculated from designed experiments and observational studies is discussed, leading to the formation of a decision tree to aid researchers and reviewers in understanding both the benefits and limitations of statistical testing.
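The single question the abstract says a statistical test answers (could this difference arise solely from random allocation of experimental material?) can be made concrete with a permutation test, one of the "computer-intensive methods" the abstract mentions. The sketch below is illustrative only, not a method from the review itself; the group measurements are invented example numbers.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sample permutation test: the p-value is the fraction of
    random re-allocations of the pooled data whose absolute mean
    difference is at least as large as the one actually observed."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # simulate random allocation to groups
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical treated vs. control measurements (made-up data)
treated = [5.1, 4.9, 5.6, 5.8, 5.3]
control = [4.2, 4.5, 4.1, 4.8, 4.4]
p = permutation_test(treated, control)
```

A small p here only says the difference is unlikely under random allocation alone; as the abstract stresses, attributing it to the treatment still requires critical thinking about design and confounding.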
Year: 2005 PMID: 16018352
Source DB: PubMed Journal: Aviat Space Environ Med ISSN: 0095-6562