| Literature DB >> 21843884 |
John R Crawford, Paul H Garthwaite.
Abstract
Five inferential methods employed in single-case studies to compare a case to controls are examined; all of these make use of a t-distribution. It is shown that three of these ostensibly different methods are in fact strictly equivalent and are not fit for purpose; they are associated with grossly inflated Type I errors (these exceed even the error rate obtained when a case's score is converted to a z score and the latter used as a test statistic). When used as significance tests, the two remaining methods (Crawford and Howell's method and a prediction interval method first used by Barton and colleagues) are also equivalent and achieve control of the Type I error rate (the two methods do, however, differ in other important respects). A number of broader issues also arise from the present findings, namely: (a) they underline the value of accompanying significance test results with the effect size for the difference between a case and controls, (b) they suggest that less care is often taken over statistical methods than over other aspects of single-case studies, and (c) they indicate that some neuropsychologists have a distorted conception of the nature of hypothesis testing in single-case research (it is argued that this may stem from a failure to distinguish between group studies and single-case studies).
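A minimal sketch may clarify the contrast the abstract draws. Crawford and Howell's (1998) method treats the control sample statistics as estimates rather than population parameters: the case's score is tested with t = (x* − x̄)/(s·√((n+1)/n)) on n − 1 degrees of freedom, whereas the z approach criticized above divides by s alone. The control scores below are illustrative, not from the paper.

```python
from math import sqrt
from statistics import mean, stdev

def crawford_howell_t(case, controls):
    """Crawford & Howell's modified t-test for comparing a single case
    to a small control sample.  Returns (t, df); the p-value is obtained
    from a t-distribution with n - 1 degrees of freedom."""
    n = len(controls)
    m, s = mean(controls), stdev(controls)  # sample mean and SD (n - 1 denominator)
    t = (case - m) / (s * sqrt((n + 1) / n))
    return t, n - 1

def naive_z(case, controls):
    """Converts the case's score to a z score using the control sample
    statistics as if they were population parameters -- the approach the
    abstract notes is associated with an inflated Type I error rate."""
    return (case - mean(controls)) / stdev(controls)

controls = [102, 95, 110, 98, 104, 100, 97, 105]  # hypothetical control scores
t, df = crawford_howell_t(82, controls)
z = naive_z(82, controls)
# |t| is always smaller than |z| for the same data, because the denominator
# is widened by sqrt((n+1)/n) to reflect uncertainty in the control mean and SD.
```

With small control samples (n often under 10 in single-case studies) the √((n+1)/n) correction and the use of t rather than normal critical values both matter, which is why the z approach over-rejects.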
Year: 2011 PMID: 21843884 DOI: 10.1016/j.cortex.2011.06.021
Source DB: PubMed Journal: Cortex ISSN: 0010-9452 Impact factor: 4.027