Klaus Fiedler, Linda McCaughey, Johannes Prager.
Abstract
The current debate about how to improve the quality of psychological science revolves, almost exclusively, around the subordinate level of statistical significance testing. In contrast, research design and strict theorizing, which are superordinate to statistics in the methods hierarchy, are sorely neglected. The present article is devoted to the key role assigned to manipulation checks (MCs) for scientific quality control. MCs not only afford a critical test of the premises of hypothesis testing but also (a) prompt clever research design and validity control, (b) carry over to refined theorizing, and (c) have important implications for other facets of methodology, such as replication science. On the basis of an analysis of the reality of MCs reported in current issues of the Journal of Personality and Social Psychology, we propose a future methodology for the post-p < .05 era that replaces scrutiny in significance testing with refined validity control and diagnostic research designs.
Keywords: attention check; demand effect; diagnostic design; manipulation check; scientific scrutiny; significance testing; validity
Year: 2021 PMID: 33440127 PMCID: PMC8273363 DOI: 10.1177/1745691620970602
Source DB: PubMed Journal: Perspect Psychol Sci ISSN: 1745-6916
Ratings Averaged Across Raters and All Studies/Articles
| MC classification | Studies in this category | Articles with at least one study in this category |
|---|---|---|
| Included MC (of some kind) | 40% | 50% |
| Included an MC subject to a | 10% | 9% |
| Included | 5% | 6% |
| Included a | 19% | 22% |
| Tried to realize a | 5% | 9% |
| Did not include MC | 60% | |
| Because of | 15% | |
| MC was | 38% | |
| Inclusion of MC explicitly | 0% | |
| Apparently deemed MC to be | 6% | |
| Study | 1% | |
Fig. 1. The logic of scientific inference in Bayesian odds notation. The prior odds Ωprior on the left are the ratio of the probability p(Htrue) that the focal theoretical hypothesis is true to the complementary probability p(Hfalse) that the hypothesis is false. Ωprior highlights the need for a priori theorizing, as it reflects the theoretical expectations before the assessment of empirical data D. The posterior odds Ωposterior = p(Htrue|D)/p(Hfalse|D) on the right indicate the updated ratio in the light of the new data. The updating factor, or likelihood ratio (LR), reflects diagnosticity: the ratio of p(D|Htrue), the likelihood of D given Htrue, to p(D|Hfalse), the likelihood that an alternative hypothesis accounts for the data D.
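The odds-updating rule in the figure caption can be sketched numerically. The prior probability 0.2 and likelihood ratio 4.0 below are hypothetical values chosen for illustration only; they are not taken from the article.

```python
# Bayesian updating of the odds on a hypothesis, per the caption:
# posterior odds = likelihood ratio (LR, diagnosticity) x prior odds.

def posterior_odds(prior_odds, likelihood_ratio):
    """Return Omega_posterior = LR * Omega_prior."""
    return likelihood_ratio * prior_odds

p_h_true = 0.2                        # assumed prior p(H_true)
prior = p_h_true / (1 - p_h_true)     # prior odds: 0.2 / 0.8 = 0.25
lr = 4.0                              # assumed p(D|H_true) / p(D|H_false)

post = posterior_odds(prior, lr)      # -> 1.0, i.e., even odds after D
p_post = post / (1 + post)            # odds back to probability -> 0.5
```

Note that only diagnostic data (LR far from 1) move the odds; an LR of 1 leaves Ωposterior equal to Ωprior, which is why the caption stresses diagnosticity.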