James Liley1. 1. Department of Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK. ajl88@medschl.cam.ac.uk.
Abstract
BACKGROUND: High-dimensional case-control studies are ubiquitous in the biological sciences, particularly genomics. To maximise power while constraining cost, and to minimise type-1 error rates, researchers typically seek to replicate findings in a second experiment on independent cohorts before proceeding with further analyses. This can be an expensive procedure, particularly when control samples are difficult to recruit or ascertain; for example, in inter-disease comparisons or studies on degenerative diseases. RESULTS: This paper presents a method in which control (or case) samples from the discovery cohort are re-used in a replication study. The theoretical implications of this method are discussed, and simulated genome-wide association study (GWAS) tests are used to compare performance against the standard approach in a range of circumstances. Using similar methods, a procedure is proposed for 'partial replication' using a new independent cohort consisting of only controls. This method can be used to provide some validation of findings when a full replication procedure is not possible. The new method has differing sensitivity to confounding in study cohorts compared to the standard procedure, which must be considered in its application. Type-1 error rates in these scenarios are analytically and empirically derived, and an online tool for comparing power and error rates is provided. CONCLUSIONS: In several common study designs, a shared-control method allows a substantial improvement in power while retaining type-1 error rate control. Although careful consideration must be made of all necessary assumptions, this method can enable more efficient use of data in GWAS and other applications.
High-dimensional case-control studies have become a mainstay of investigation of pathophysiology in complex diseases and traits. An important part of their analysis is the process of replication [1], in which the results of a high-dimensional study are used to inform the design of a second study at a subset of the original variables, with a joint analysis used to determine overall association. Replicating studies in this way has the advantage of increasing the effective study sample size without requiring measurement of all variables in all samples. It also protects against false positives due to systematic errors in the original datasets, by re-testing association in a second nominally independent dataset.

Replication has a significant cost, and can require large numbers of samples, especially when associated variables have small effects (e.g. [2]). There is therefore a need to minimise the number of additional samples which need to be analysed. This paper presents a method to perform replication by combining controls in both the original 'discovery' and second 'replication' datasets, potentially reducing the number of new samples required. Shared-control approaches can improve study efficiency in many related applications in which studies are compared [3-8].

Results from discovery and replication datasets for which some or all controls are shared cannot be directly compared, because sharing controls induces correlation between the test statistics even under the null hypothesis [5]; using the same thresholds in a shared-control design as in an independent-controls design will therefore inflate type-1 error rates.
This paper demonstrates a simple adaptation of a standard design that accounts for the changed correlation structure and retains control of the type-1 error rate, requiring a change to only one p-value threshold.

An important purpose of replication is protection against false positives arising from variables for which confounding causes an apparent case-control difference in one of the discovery- or replication-phase experiments, but not the other. Sharing control samples changes the spectrum of sensitivity to variables of this type. It necessitates a sacrifice of type-1 error rate control for variables for which confounding affects the discovery-phase control cohort, but improves type-1 error rate control for variables for which confounding affects the replication-phase control cohort. The type-1 error rate is largely equivalent to an independent-controls design for variables affected by confounding in either case cohort.

This new spectrum of false-positive rates can be advantageous when control samples in the replication cohort are less well-ascertained than those in the discovery cohort. This may be the case in studies of degenerative disease, where control ascertainment is generally uncertain and population-sourced controls may be used for replication. The shared-control design can reduce power losses from mis-specified controls in the replication cohort, as well as reducing false-positive rates caused by confounding in that cohort.

When used with shared cases instead of controls, this method can be adapted to a 'partial replication' procedure in which only a new control set is used. Although not equivalent to a full replication in an independent dataset, the procedure enables improvement in type-1 error rates and control over confounding.
This is applicable in studies of rare traits, where all available samples must be included in the discovery analysis for adequate power.

Throughout this paper, GWAS terminology will be used (single-nucleotide polymorphisms (SNPs), allele frequencies, variants, etc.), although the method is applicable to any high-dimensional case-control study. 'Controls' will generally be taken to be samples unaffected by a disease or trait of interest, although the method can be applied with case/control labels swapped, or applied to comparisons between subgroups of a case group.

Differences in power (at fixed type-1 error rate) between the standard (independent-controls) and new (shared-control) methods are established by considering hypothesis tests typical of those found in GWAS. Asymptotic analytical results are established where possible, but all type-1/type-2 error rates are readily tractable empirically to good accuracy given study sizes and proposed p-value thresholds, and a tool for doing this is provided at https://wallacegroup-liley.shinyapps.io/replication_shared/.
Results
Overview of method
We assume a GWAS dataset comprising a set of cases C1 and controls C0 used in the 'discovery' phase of a GWAS or similar study, and corresponding sets of cases C1′ and controls C0′ in the replication phase. We assume that C0 and C1 are genotyped at a set of SNPs S, and C0′ and C1′ at a set S′⊆S. For each SNP we designate μ1, μ0, μ1′, μ0′ as the population minor allele frequencies in the corresponding groups, and m1, m0, m1′, m0′ as the observed allele frequencies (so E(m)=μ). We designate two null hypotheses: H0, under which μ1=μ0 or μ1′=μ0′, and H0′, under which μ1=μ0 and μ1′=μ0′, noting that H0′⊆H0. In a typical conservative GWAS approach, we seek to test against H0, since μ1≠μ0 or μ1′≠μ0′ may hold at non-disease-associated SNPs due to confounding in the original or replication studies respectively. The alternative null hypothesis H0′, which implies H0 and is implied by μ1=μ0=μ1′=μ0′, is more appropriate than H0 in cases where replication is performed in a different population than discovery. However, this situation is not adaptable to a shared-control design.

A typical two-stage genetic testing procedure [9], which we will refer to as method A, begins by comparing genotypes of C1 and C0 at SNPs S, generating p-values pd (discovery stage). A subset S′ of SNPs reaching putative significance level pd<α are genotyped in C0′ and C1′, with genotypes compared to generate p-values pr (replication stage). Finally, genotypes are compared between C1∪C1′ and C0∪C0′ at SNPs S′ to generate p-values pm (meta-analytic stage). SNPs are designated as 'hits' if pd<α, pr<β, pm<γ for some β, γ, and all effects have the same direction. The values α, β, γ may not be explicitly stated in some study designs, although they are usually implicitly present. This is discussed further in the "Choice of thresholds" section below.

The main modification proposed in this paper, denoted method B, differs at the replication stage in that C1′ is compared with the pooled controls C0∪C0′ at S′ instead of with C0′ alone (Fig. 1). The p-values resulting from the modified replication stage are termed ps, and the criterion to designate a hit is changed to pd<α, ps<β∗, pm<γ, with all effects in the same direction.
The threshold β∗ is chosen to conserve the type-1 error rate between methods (see "Methods" section and Additional file 1: Appendix 1). This requires the systematic correlation between Z-scores, which may be estimated either empirically or (in some cases) analytically.
Fig. 1
Diagram of methods A, B, and C. Method B differs by comparing C1′ to the pooled controls C0 and C0′ at the replication stage to generate p-value Ps instead of Pr. Method C also pools controls at the discovery phase, comparing C1 to the pooled C0 and C0′ to generate p-values Pc instead of Pd. A 'hit' is declared in method A if Pd<α, Pr<β, Pm<γ; in method B if Pd<α, Ps<β∗, Pm<γ; and in method C if Pc<α, Ps<β⊥, Pm<γ
A second modification, denoted method C, combines C0 and C0′ at both the discovery and replication phases (see Fig. 1). This is analogous to a situation in which only a single control cohort is available, and a choice must be made to either split it between the discovery and replication procedures or use it for both. In this case, the pooled controls C0∪C0′ are compared with C1 at SNPs S in the discovery phase to produce p-values pc, then with C1′ at SNPs S′ in the replication phase and with C1∪C1′ at the meta-analytic stage, producing p-values ps and pm as before. A hit is determined by pc<α, ps<β⊥, pm<γ, with all effects in the same direction. Again, β⊥ is chosen to maintain the type-1 error rate between methods.
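The three testing schemes can be illustrated with simple allelic two-proportion tests. The sketch below is illustrative only: the cohort sizes, allele frequencies, and the `prop_z` helper are hypothetical, and real GWAS summary statistics would typically additionally adjust for covariates and strata.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def prop_z(k_a, n_a, k_b, n_b):
    """Signed two-proportion z-test on minor-allele counts k out of n alleles."""
    f_a, f_b = k_a / n_a, k_b / n_b
    f = (k_a + k_b) / (n_a + n_b)                  # pooled frequency under the null
    se = np.sqrt(f * (1 - f) * (1 / n_a + 1 / n_b))
    z = (f_a - f_b) / se
    return z, 2 * stats.norm.sf(abs(z))

# Hypothetical cohort sizes (individuals) and population MAFs
N0, N1, N0r, N1r = 6000, 4000, 3000, 3000      # C0, C1, C0', C1'
mu0, mu1 = 0.10, 0.13                           # control / case MAF (associated SNP)

# Minor-allele counts (2 alleles per individual)
k0  = rng.binomial(2 * N0,  mu0)   # discovery controls C0
k1  = rng.binomial(2 * N1,  mu1)   # discovery cases C1
k0r = rng.binomial(2 * N0r, mu0)   # replication controls C0'
k1r = rng.binomial(2 * N1r, mu1)   # replication cases C1'

z_d, p_d = prop_z(k1, 2 * N1, k0, 2 * N0)                              # discovery
z_r, p_r = prop_z(k1r, 2 * N1r, k0r, 2 * N0r)                          # method A replication
z_s, p_s = prop_z(k1r, 2 * N1r, k0 + k0r, 2 * (N0 + N0r))              # method B: pooled controls
z_m, p_m = prop_z(k1 + k1r, 2 * (N1 + N1r), k0 + k0r, 2 * (N0 + N0r))  # meta-analytic
```

Method A declares a hit if p_d<α, p_r<β and p_m<γ with consistent effect directions; method B replaces the p_r<β criterion with p_s<β∗.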
General properties
For SNPs in H0 at which no cohort is aberrant, the overall type-1 error rate is conserved between methods by the definition of β∗, β⊥ (Eq. 4) at a level P0. It is shown in Additional file 1: Appendix 2.2 that β>β∗>β⊥. For SNPs in H0 at which some cohort is aberrant, the type-1 error rates differ between methods. Such SNPs may be characterised by the group(s) amongst C0, C1, C0′, C1′ in which their expected minor allele frequency (MAF) is aberrant from the expected MAF in the population which the group ostensibly represents. 'Aberrance' is taken to mean an incorrect expected value arising from systematic measurement error or uncorrected confounding, rather than random deviation around a correct expected value.

Bounds on type-1 error rates with aberrance in each group are shown in Table 1. Methods B and C necessitate sacrificing bounds on error rates with aberrance in C0 and C0′ respectively. The bound on error with aberrance in C1 improves through methods A-C. In the "Methods" section, it is shown that the type-1 error with aberrance in C0′ decreases from method A to B, and the error with aberrance in C1′ increases from A through C, although the upper bound is the same for both.
Table 1
Upper bounds on type 1 error rates with aberrance in cohorts, with β>β∗>β⊥
Bias in effect size estimates
If a set of variants in a study is selected on the basis of p-value (either by ordering all p-values and selecting some number, or by choosing all with a p-value below some threshold), the observed case-control odds ratios at those variants are upward-biased when used as estimates of the true population odds ratios between cases and controls [10]. This bias is greatest amongst variants for which the true log-odds ratio is 0 (non-associated variants).

A standard replication procedure can be considered as enabling an unbiased effect size estimate [11]; for non-associated variants, this estimate has expectation 0. If controls are re-used in the replication procedure, the estimate of effect size for associated variants from the replication procedure is no longer unbiased (since the original control samples are re-used), and summary statistics from the replication procedure cannot be used directly as estimates of effect size (although estimates can still be made from the summary statistics pd, ps if these can be calculated). After sharing controls, the effect size estimate for null variants in the replication procedure is similarly biased, and the adjustment β→β∗/β⊥ corresponds to an adjustment for this effect.
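The selection-induced bias can be seen directly by simulation. The following sketch (all numbers are hypothetical) draws purely null SNPs, selects those with small p-values, and compares the magnitude of their observed log-odds ratios against the unselected average:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_snps, N, mu = 200_000, 2000, 0.1   # null SNPs; individuals per cohort; common MAF

# Observed minor-allele frequencies in cases and controls (no true association)
m1 = rng.binomial(2 * N, mu, n_snps) / (2 * N)
m0 = rng.binomial(2 * N, mu, n_snps) / (2 * N)

log_or = np.log(m1 / (1 - m1)) - np.log(m0 / (1 - m0))
se = np.sqrt(1 / (2 * N * m1 * (1 - m1)) + 1 / (2 * N * m0 * (1 - m0)))
p = 2 * stats.norm.sf(np.abs(log_or) / se)

sel = p < 1e-3   # 'discovery' selection step
# Among selected null SNPs the apparent effect is far from zero on average,
# even though the true log-odds ratio is 0 for every SNP simulated.
print(np.abs(log_or[sel]).mean(), np.abs(log_or).mean())
```

Replicating in an independent cohort resets this expectation to 0 for null variants; re-using controls partially carries the bias forward, which is what the β→β∗ adjustment compensates for.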
Differences in power between methods
The power difference between methods B and A was analysed systematically by considering the behaviour of GWAS data across a range of values of n0, n1, n0′, n1′. In each calculation, genetic data were considered for a single common SNP with average minor allele frequency across cases and controls equal to 0.1, with a given effect size between cases and controls quantified by the log-odds ratio. Varying ratios n0/n1 and n0′/n1′ were considered, with the total sample size held constant at 20,000 (Fig. 3).

At large effect sizes (in GWAS terms, large allelic differences between case and control cohorts) both methods have power approaching 1, so the difference is slight. Similarly, at very small effect sizes, both methods have power near zero. Since the only power differences occur at moderate effect sizes, the main metric for power difference used in this paper was the average power difference (Fig. 2). Considering the powers of methods A and B as functions PowerA(x), PowerB(x) of an underlying log-odds ratio x, the average power difference was defined as the integral of PowerB(x)−PowerA(x) with respect to x.
Fig. 2
The power of both methods is near-identical at high effect sizes (high absolute log-odds ratios) and at low effect sizes (log-odds ratio near zero). The main region in which power can differ is at moderate effect sizes. A good metric for the difference in power is the average difference in power (marked 'i'). The maximum difference in power (marked 'ii') is also considered. This plot shows analytic rather than simulated results
The maximum power difference, the maximum over x of PowerB(x)−PowerA(x), was also considered.

Figure 3 shows the average power difference at various study sizes for typical α, β, γ values (α=5×10−6, β=5×10−4, γ=5×10−8). The difference is typically highest when the ratio of controls to cases is high in the discovery cohort and low or equal in the replication cohort, and the number of cases in the discovery cohort is larger than the number in the replication cohort. Power to detect SNPs in H1 is typically highest in method C, second-highest in method B, and lowest in method A.
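Given power curves for the two methods on a grid of log-odds ratios, both summary metrics are straightforward to compute. The sketch below uses hypothetical logistic-shaped curves purely to illustrate the calculation; real curves would come from the analytic or simulated power computations described above.

```python
import numpy as np

x = np.linspace(0.0, 0.6, 301)   # grid of log-odds ratios

# Hypothetical power curves: method B's curve rises slightly earlier
power_A = 1 / (1 + np.exp(-25 * (x - 0.30)))
power_B = 1 / (1 + np.exp(-25 * (x - 0.28)))

diff = power_B - power_A
# Average (integrated) power difference, by the trapezoidal rule
avg_diff = float(np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(x)))
# Maximum power difference over the grid
max_diff = float(diff.max())
print(f"integrated difference {avg_diff:.4f}, max difference {max_diff:.4f}")
```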
Fig. 3
General power differences (%) between methods A and B. The mean power difference is taken as the integral of the power difference between methods B and A (see above) with respect to log-odds ratio. In all cases, 20,000 samples are used overall for a SNP with MAF 0.1, with cutoffs α=5×10−6, β=5×10−4, γ=5×10−8
Recommended applications
To demonstrate areas where this approach is applicable, several examples are constructed or sourced from the GWAS field in which sharing controls or cases improves the power or type-1 error profile of the two-stage testing procedure, or enables some form of orthogonal replication to be performed.
Assumptions
In order to use method B or C, it must be assumed that cohorts C0 and C0′ are sampled from populations similar enough to be comparable to C1 and C1′. A reasonable check on whether the method is appropriate is whether the cohorts C0 and C0′ could be interchanged without compromising matching between cases and controls in the discovery or validation studies (possibly with the inclusion of strata or covariates in the genetic risk model). An important caveat of methods B and C is the sacrifice of control over errors arising from aberrance in C0 (method B) or C0′ (method C), so an assumption must be made that variables affected by confounding or measurement error in these cohorts can be distinguished from true associations by quality-control measures alone. Variants which are aberrant in the same direction in both the discovery and replication control cohorts - that is, in both C0 and C0′ - cannot be distinguished from true associations without the use of external data. Post-hoc assessment of all putative hits should be performed to check for genotyping errors [12] and to assess whether each hit could have arisen from aberrance in C0.
Conventional GWAS
Method B is applicable in several settings in large conventional GWAS, particularly when the ratio of controls to cases in the discovery cohort is larger than that in the replication cohort. In a relatively recent GWAS on rheumatoid arthritis [13] with comparable sample populations for the discovery and replication cohorts, method B could be used to attain greater power than method A at a fixed type-1 error rate. Assuming that summary statistics are well-approximated by binomial tests of allelic differences (so that covariates and strata used in computing summary statistics have only small effects), the improvement in power is around 4% for SNPs with an odds ratio of 1.3 and MAF 0.1, and is positive across all odds ratios. More than 2000 additional controls in C0′ would be needed to increase power by this amount (Fig. 4, top left).
Fig. 4
Examples of comparisons of the power of methods A and B. In all panels, a positive odds ratio corresponds to a deleterious mutation and the average MAF is 10%. The top two panels compare method B with fixed cohort sizes against method A with a varying number of replication controls. The top left panel uses sample sizes from a GWAS on RA [13], and the top right panel a different configuration of cohort sizes. Both panels use (α,β,γ)=(5×10−6,5×10−4,5×10−8). The bottom left panel demonstrates the effect of false ascertainment (F.A.) in C0′, when cases are mis-ascertained as controls. In this case, (α,β,γ)=(1×10−4,1×10−3,5×10−8), reflecting values used in [14]. The bottom right panel demonstrates a prospective scenario with 10,000 samples available for replication; method B with (n0,n1) as above is more powerful than any design using method A (grey region)
Small power advantages such as this may make minimal difference in a single study, but since they come at no extra cost, they are worth attaining where possible. The power of method B is generally considerably higher than that of method A when n0>n1 and the ratio of controls to cases in the replication cohort is low or equal. Power advantages may be more substantial in some cases; for example, in the study configuration shown in Fig. 4 (top right panel), method B enables a power increase of up to 8%. To achieve comparable performance with method A, around 2000 additional controls would be necessary in the replication cohort. Method B is also more powerful than method A would be if controls were divided equally between C0 and C0′ (see Fig. 4, top right panel).
Difficult control ascertainment
An important application of the method presented in this paper is in studies for which 'control' samples are expensive or difficult to ascertain. This is often the case in comparative studies between disease subtypes. In such studies, sharing controls can improve power substantially, especially if a proportion of samples in the replication cohort are falsely assigned to the control cohort (see "Methods" section).

An international GWAS on fronto-temporal dementia in 2014 [14] is an example in which sharing controls may be beneficial; the study's sample sizes are given in [14]. Control samples in the discovery phase were assessed for current neurological disease, and had been used in previous studies on Parkinson's disease, indicating a high degree of reliability. Control samples in the replication phase were collected from the same geographic distribution as cases, but had not been explicitly used in previous neurological studies, suggesting better control ascertainment in the discovery cohort.

In this study, sharing controls could allow for a more strongly-ascertained control cohort, and reduce the effects of confounders affecting C0′ (see Fig. 4, bottom left panel). At typical values α=1×10−4, β=1×10−3, γ=5×10−8, power is nearly equivalent between the two methods if all controls are genuine. However, with 10% mis-ascertainment in C0′, the power advantage of method B is up to 5%. Given the near-identical distribution of cases in the discovery and validation cohorts, cases could alternatively be shared, leading to a power increase of up to 6%.
Prospective study design
Studies may be planned and powered under the assumption that samples will be shared. For certain restrictions on sample numbers, this can provide greater power than would be attainable under any independent-controls design. For instance, if we seek to validate hits from a GWAS with 10,000 controls and 5000 cases, and can afford to genotype a further 10,000 samples, power is higher after recruiting 4000 additional controls and 6000 additional cases and sharing controls than can be achieved from any independent-control study design (Fig. 4, bottom right panel). This may be a common scenario when controls are sourced from an existing database rather than recruited specifically for the study.
Partial replication
In circumstances where case recruitment is difficult, as in studies of rare diseases, an assessment of replicability may be made by re-testing results from a discovery phase with a new control set only. This can enable the use of control cohorts which only partially match the case cohort.

In a GWAS on pemphigus vulgaris [15], a rare disease primarily affecting individuals of Ashkenazi Jewish ancestry, the discovery cohorts were sampled from Jewish populations, with age- and population-matched controls. Control cohorts were small, potentially due to the difficulty of recruiting both ethnically- and geographically-matched controls.

Method C could be used in this instance to enable a larger control set and greater power. If a control cohort of Ashkenazi individuals could be assembled without requiring geographic matching with the case set, it would be inappropriate to use it as a sole control cohort against the existing case cohort, due to the potential for geographic confounding. However, such a cohort could be used as either C0 or C0′ in method C, with the existing ethnically- and geographically-matched controls serving as the other cohort. In this way, the power advantage of the larger cohort could be gained while maintaining control over potential aberrance in the larger control group.

Method C enables computation of power and type-1 error rates, and comparison to alternative designs in which cases are split into smaller independent discovery and validation cohorts (method A). Testing a case cohort against two separate control cohorts is almost always more powerful at a fixed type-1 error rate than splitting the case cohort in two and performing method A (see Additional file 1: Figures S1 and S2).
Choice of thresholds
The designation of explicit thresholds α, β, γ in a two-stage study may not appear to reflect many real-life designs, but in general most studies use such thresholds in some form, even if they are not directly stated. Heuristically, α is used as an initial 'triage' step to reduce data dimensionality; β (which is usually less stringent than α, to allow for some regression to the mean in true associations) is used as a check; and γ is used as a definitive test for association amongst candidate variants.

Because studies are usually limited by cost or resources, a given number of variants is typically selected to pass through to the replication step, rather than following up all variants passing a predetermined threshold, which complicates assessment of summary statistics [11]. However, in practice, researchers will have an implicit or explicit maximum allowable p-value for a variant to proceed to replication. If, for example, resources were available to follow up 100 variants, but the 100th smallest Bonferroni-corrected p-value was >1, that variant would not generally be followed up. It is this implicit threshold - the maximum p-value which would be deemed acceptable - which is considered to be α. A similar implicit threshold at the replication stage is the effective value of β. If no thresholds α, β are used (that is, α=β=1), then the procedure can be considered a standard meta-analysis of the discovery and replication studies, and cannot be improved upon by combining controls at the replication stage.

If the method proposed in this paper is to be considered in a study, the values α, β, γ should be determined by the values which would otherwise have been used in a standard replication procedure. In the context of GWAS analysis, the threshold γ=5×10−8 should be retained, and the values α, β should reflect the implicit maximum allowable levels above. The corresponding β∗/β⊥ values can then be determined.
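The notion of an implicit threshold can be made concrete. The helper below is a hypothetical illustration (not from the paper): it returns the effective α when at most `budget` variants are taken forward, subject to a maximum acceptable p-value.

```python
import numpy as np

def implicit_alpha(pvals, budget, p_max):
    """Effective discovery threshold: the largest p-value actually followed up
    when at most `budget` variants are taken forward, and no variant with
    p > p_max would be followed up regardless of budget."""
    p = np.sort(np.asarray(pvals, dtype=float))
    k = min(budget, int(np.searchsorted(p, p_max, side="right")))
    return float(p[k - 1]) if k > 0 else 0.0

pvals = [2e-9, 5e-7, 3e-5, 8e-4, 0.02, 0.4]
print(implicit_alpha(pvals, budget=4, p_max=1e-3))  # → 0.0008
```

Here the budget of 4 is binding rather than p_max, so the effective α is the fourth-smallest p-value.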
As in any statistical procedure, the overall false-positive rate should be considered along with the cost of following up false-positives.
Discussion
This paper proposes a method to improve the efficiency of data use in a replication procedure, adding to the body of methods for comparison of high-dimensional case-control studies. For many common study sizes, the method can reduce the cost of replication, or increase the power of discovery. The adapted method is simple to apply, requiring modification of only a single association threshold.

A standard replication procedure (or more general comparison of case-control studies) with independent control datasets does not make use of the information that the unconditional expected values of variables in the control datasets are, in principle, the same. Conditional on pd≤α, m0 is biased away from m1 (since the effect size is biased upwards), and this bias is greatest for non-associated variants. If the observed difference m1−m0 is large even accounting for this bias, and the observed difference m1′−m0′ is small but consistent in direction with m1−m0, we intuitively expect that the variant is disease-associated, with the observed m1−m0 value being larger than its unconditional expectation and the m1′−m0′ value being smaller. In a standard replication procedure, the variant would be declared null on the basis of m1′−m0′ being small, but in the shared-controls procedure some information from the first study is allowed to propagate through to the second. A meta-analysis in which observed values of both m1 and m0 are allowed to propagate information is stronger still, but cannot in itself detect aberrance in individual cohorts. Correspondingly, a more stringent threshold β∗/β⊥ is needed to account for the bias introduced by conditioning on pd<α, and the differential in power between the standard replication procedure and the two proposed here relates to the trade-off between these two effects.
By considering which method has the highest power in a given circumstance, the same dataset can in theory yield more information when controls are shared, while retaining some of the systematic error-detecting properties of the standard replication procedure.

The most important caveat of these methods is the loss of systematic type-1 error rate control for null SNPs which are aberrant in C0. Control of such errors must not be sacrificed entirely, but in some circumstances it may be satisfactory to assess such errors on a SNP-by-SNP basis. Such assessment is important and standard for all proposed GWAS hits under any method [16] in the interests of quality control. In method C, control over aberrance in C0′ is additionally lost; however, since this method is largely applicable when the pooled controls form a single homogeneous control (or case) cohort, there is no way that aberrance in that cohort can be systematically identified by comparison with other cohorts.

Somewhat better control of the type-1 error rate can often be achieved for SNPs with aberrance in C1 or C1′. This may incentivise the use of the method when confidence in the representativeness of these cohorts is low compared to that of C0. The type-1 error rate is somewhat increased for SNPs with aberrance in C1′, although as it remains bounded by α, this increase is not a major problem.

The two-stage validation procedure is similar to a meta-analysis of the discovery and validation experiments, for which several adaptations to shared-control designs have been proposed [3, 4]. However, there are several important distinctions which necessitate an alternative approach in this case. Firstly, not all variables are measured in the second (replication) study; we are restricted to analysis of variables reaching a given observed effect size.
Secondly, the studies to be 'meta-analysed' are not complete, in the sense that there may be residual confounding; a strong effect size in the meta-analysis alone is not adequate evidence for association, and some level of association (with consistent direction) is additionally required in both constituent studies.

The method is inapplicable when replication is performed on cohorts from completely distinct geographic groups, although some difference in geographic distribution between control sets is tolerable if it is controlled for in computing summary statistics. The method is most applicable when control groups are sampled from similar populations and genotyped on similar platforms. The method proposed in this paper is not universally applicable, and may only yield a modest increase in power, at the cost of changed sensitivity to different types of errors. However, it is in the interest of all researchers to use data as efficiently as possible, and methods such as this, which may provide improvements without additional resource costs, should be considered as analytical options.

The widespread discoveries of the GWAS field have led to corresponding increases in the complexity of phenotypic definitions, with ever-finer delineations of disease types of ever-rarer prevalence. The genetic analysis of such complex phenotypes is necessarily comparative; there is little use in understanding the genetics of a rare disease subtype except in the context of the genetics of the disease in general. Such analyses necessitate GWAS and other comparative studies between rare phenotypic types [17], with 'controls' in this sense meaning the better-characterised disease subphenotype, as well as between cases and controls. Rare disease subtypes are often afflicted by ascertainment difficulties, leading to varying degrees of expected aberrance in disease cohorts. Within this paradigm, the applicability of this method is likely to expand.
Conclusions
This paper details a method in which controls are shared in the replication phase of a two-stage association study. Sharing controls can improve the power of the two-stage procedure at a fixed type-1 error rate. Sharing controls also changes the spectrum of sensitivity to systematic errors caused by confounders affecting one of the study cohorts, and this should be accounted for if the shared-control design is used. Adaptations of the method can enable a partial replication to be performed with only a new control cohort, or provide robustness to mis-ascertainment of control samples in the replication cohort.
Methods
Definitions
Denote z_d, z_r, z_s, z_m, z_c as the signed z-scores corresponding to p_d, p_r, p_s, p_m and p_c respectively (where subscripts d, r, s, m, c are as defined in the “Results” section), so z_d = ±Φ^{-1}(p_d/2) and so on (where Φ, Φ^{-1} are the standard normal CDF and quantile functions). Define z_α, z_β, z_{β∗}, z_{β⊥}, z_γ as the positive corresponding thresholds for α, β, β∗, β⊥, γ respectively, so z_x = −Φ^{-1}(x/2) etc. Other than (z_d, z_r), all pairs of z-scores are correlated under the null, with correlation estimable from sample sizes, or empirically if covariates are used (Additional file 1: Appendix 1). Denote ρ_{ij} as the correlation between z_i and z_j, (i,j) ∈ {d,r,s,m,c}², and let Σ denote the corresponding correlation matrix. For i ∈ {d,r,s,m,c} define ζ_i = E(z_i), where the expectation is conditional on the SNP in question. For non-aberrant null SNPs, ζ_i ≡ 0 for all i, but this may not hold for null SNPs which are aberrant in some cohort. In theoretical working, aberrance in groups is characterised by the values ζ_i rather than by log-odds ratios. Define R_A, R_B, R_C as the false-positive rates for a SNP of interest in methods A, B and C respectively.
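These conversions between two-sided p-values and signed z-scores are elementary but easy to get wrong by a sign; a minimal sketch in Python (the function names are ours, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def signed_z(p, direction):
    """Signed z-score z = ±Φ⁻¹(p/2) from a two-sided p-value p.
    `direction` (+1 or -1) is the sign of the observed effect."""
    return -direction * norm.ppf(p / 2.0)  # norm.ppf(p/2) < 0 for p < 1

def threshold(x):
    """Positive threshold z_x = -Φ⁻¹(x/2) for a two-sided level x."""
    return -norm.ppf(x / 2.0)

# A two-sided p of 0.05 with a positive effect sits exactly at z_0.05:
z = signed_z(0.05, +1)  # ≈ 1.96
```

A variant then exceeds a two-sided level x exactly when |z| ≥ threshold(x), with the sign carrying the effect direction.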
General type 1 error rate
The values β∗, β⊥ are chosen to satisfy Eq. (4), thus conserving the type-1 error rate (denoted P0) between methods (Fig. 5). If no threshold is used on p (i.e. γ=1), then β∗, β⊥ satisfy the simpler condition Eq. (5).
Fig. 5
Replication with shared controls. Red and blue shaded areas are regions where a pair of observed Z scores are deemed a ‘hit’ in the (+,+) quadrant under methods A and B respectively. The shared-control Z score is almost linearly dependent on the discovery and replication Z scores (Additional file 1: Appendix 1). Solid red/blue ellipses indicate contours of the distribution of observed Z scores for a typical non-null SNP under methods A and B, and dashed ellipses indicate contours for a null SNP
Definition Eq. (4) will be considered a generalisation of definition Eq. (5), with results established first for β∗ as per definition Eq. (5) and extended where possible to definition Eq. (4). For β∗ defined as per definition Eq. (5) we have an asymptotic expression for z_{β∗} (see Additional file 1: Appendix 2), with the limit approached from above. The thresholds are also asymptotically linear in z_α, z_β, z_γ as the former two tend to ∞, with some constraints (Additional file 1: Appendix 2.1), although in this case the limit does not necessarily approach from above. For both definitions, β⊥ < β∗ < β (Additional file 1: Appendix 2.2).
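The choice of β∗ can also be made numerically. The sketch below is our own illustration, not the paper's implementation: it uses a simplified rejection rule (two-sided discovery at level α, two-sided same-signed replication), simulates null z-score pairs with a correlation ρ induced by control sharing (the value 0.4 is illustrative), and bisects on the replication threshold until the shared-control false-positive rate matches P0:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def two_stage_fpr(z, alpha, beta):
    """FPR of the simplified two-stage rule: two-sided discovery at alpha,
    two-sided replication at beta with a consistent effect direction."""
    zd, zr = z[:, 0], z[:, 1]
    za, zb = -norm.ppf(alpha / 2), -norm.ppf(beta / 2)
    hit = (np.abs(zd) > za) & (np.abs(zr) > zb) & (np.sign(zd) == np.sign(zr))
    return hit.mean()

def null_scores(rho, n):
    """Null (z_discovery, z_replication) pairs with correlation rho."""
    return rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)

n, alpha, beta = 1_000_000, 0.05, 0.05
P0 = two_stage_fpr(null_scores(0.0, n), alpha, beta)  # method A: independent

# With shared controls the null scores are positively correlated, so the
# same beta gives an inflated FPR; bisect for the stricter beta* < beta.
z_shared = null_scores(0.4, n)
lo, hi = 1e-10, beta
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if two_stage_fpr(z_shared, alpha, mid) > P0 else (mid, hi)
beta_star = 0.5 * (lo + hi)  # satisfies FPR(beta_star) ≈ P0, with beta_star < beta
```

The bisection makes the ordering β∗ < β concrete: positive correlation means a null discovery ‘hit’ drags the shared replication score with it, so a stricter replication level is needed to hold P0 fixed.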
Empirical computations
Define N_Σ(z) as the pdf of the multivariate normal distribution with mean 0 and covariance Σ, evaluated at z. Determination of the covariance is described in Additional file 1: Appendix 1. Given the values ζ_i, the probability of rejecting the null for a given SNP is computed as an integral of N_Σ(z − ζ) over the rejection region of the relevant method (method A or method B). If no covariates are used, the matrix Σ is singular (Additional file 1: Appendix 1), in which case z_s is a linear combination of z_d and z_r and the expression above may be reduced to a two-dimensional integral over a more complex region (Fig. 5). The corresponding covariance matrix for method C is generally singular, so the analogous formula is used to reduce the integral in a similar way. A similar formula may be used if Σ is nearly singular. In Fig. 3, the mean power difference is determined as the integral of the power difference with respect to the log-odds ratio over the real line, as discussed in the “Results” section.
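When only two statistics are involved, the rejection probability is a bivariate normal integral over the ‘hit’ region of Fig. 5, which standard routines can evaluate directly. The sketch below is our own illustration (the simplified two-statistic rule and all parameter values are assumptions, not the paper's computation): it integrates the bivariate normal with means (ζ_d, ζ_rep) and correlation ρ over the two sign-consistent quadrant tails.

```python
from scipy.stats import multivariate_normal, norm

def reject_prob(zeta_d, zeta_rep, rho, alpha, beta):
    """P(|z_d| > z_alpha, |z_rep| > z_beta, signs agree) for bivariate
    normal z-scores with means (zeta_d, zeta_rep) and correlation rho."""
    a, b = -norm.ppf(alpha / 2), -norm.ppf(beta / 2)
    mvn = multivariate_normal([zeta_d, zeta_rep], [[1.0, rho], [rho, 1.0]])
    # (+,+) tail by inclusion-exclusion: P(X > a, Y > b)
    pp = (1.0 - norm.cdf(a, loc=zeta_d) - norm.cdf(b, loc=zeta_rep)
          + mvn.cdf([a, b]))
    nn = mvn.cdf([-a, -b])  # (-,-) tail: P(X < -a, Y < -b)
    return pp + nn

# Null SNP, independent replication: reduces to alpha * beta / 2
p_null = reject_prob(0.0, 0.0, 0.0, 0.05, 0.05)  # ≈ 1.25e-3
# Power for a non-null SNP with zeta_d = zeta_rep = 3 and rho = 0.4:
p_alt = reject_prob(3.0, 3.0, 0.4, 0.05, 0.05)
```

The same inclusion-exclusion trick extends to the higher-dimensional regions used by the full method, at the cost of more cdf terms.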
Study sizes, odds ratios and allele frequencies
Consider a study with n0 controls and n1 cases, with underlying allele frequencies μ0 and μ1 in controls and cases respectively, and observed allele frequencies m0, m1. Let Z be a signed Z-score derived from a GWAS p-value against the null hypothesis μ0 = μ1. Considering Z to be proportional to the log-odds ratio divided by its standard error, and setting δ = m1 − m0 with m̄ a pooled estimate of the common allele frequency, we have approximately

Z ≈ δ / sqrt( m̄(1 − m̄)(1/(2n0) + 1/(2n1)) ),

since for small δ the log-odds ratio is approximately δ/(m̄(1 − m̄)) and its standard error is approximately sqrt( (1/(2n0) + 1/(2n1)) / (m̄(1 − m̄)) ). The precise form of m̄ varies between definitions, though it is taken to be approximately equal in each.
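As a concrete sketch of this approximation (the function name and numerical values are ours):

```python
import numpy as np

def expected_z(mu0, mu1, n0, n1):
    """Approximate expected z-score for an allele-frequency comparison of
    n0 controls (frequency mu0) and n1 cases (mu1): delta over its SE,
    counting 2n alleles per n samples."""
    delta = mu1 - mu0
    mbar = (n0 * mu0 + n1 * mu1) / (n0 + n1)  # pooled allele frequency
    se = np.sqrt(mbar * (1 - mbar) * (1.0 / (2 * n0) + 1.0 / (2 * n1)))
    return delta / se

# A SNP with control MAF 0.30 and case MAF 0.33 in a 2000 v 2000 study:
zeta = expected_z(0.30, 0.33, 2000, 2000)  # ≈ 2.9
```

Doubling both sample sizes multiplies the expected z-score by sqrt(2), which is the usual square-root scaling of power with sample size.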
Estimation of covariance between Z scores
Correlation between Z-scores under H0 can be computed analytically (with ρ_{dr} = 0, since the discovery and standard replication studies share no samples). For example, for two Z-scores whose underlying comparisons share only the control group C0, with respective case sample sizes n1 and n1′, the correlation is approximately sqrt( n1 n1′ / ((n0 + n1)(n0 + n1′)) ). More general formulae are given in Additional file 1: Appendix 1.
Empirical estimation of covariance and ζ values
The above formulae allow ζ and ρ to be estimated in empirical computations. The estimates may be poor if covariates or strata are used in the computation of z-scores. Correlation may instead be estimated in several ways:

1. If strata alone are used, or covariates are adjusted for in a way analogous to strata, the correlations ρ between z-scores are estimable using analytic formulas (see Additional file 1: Appendix 1).
2. If a set of variants known to be null is available, the sample correlation between observed z-scores at these variants can be used as an estimator of the values ρ.
3. A set of genotypes can be simulated for each sample at a set of null variants. Z-scores corresponding to these variants can then be computed under the same correlation structure, and the sample correlation between these Z-scores used as an estimator.

Estimates of the values ζ corresponding to given log-odds ratios and minor allele frequencies can be obtained in a similar way to method 3; that is, by simulating variants with given underlying odds ratios between cases and controls, computing z-scores using the same method and covariance structure as used in the main study, and setting the relevant value of ζ to the observed mean z-score.
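Estimation method 3 can be sketched as follows (our own minimal illustration; the sample sizes, MAF and the simple allele-count z-test are assumptions). It simulates independent null genotypes, forms a discovery z-score and a shared-control replication z-score against the same control genotypes, and estimates their correlation, which can be compared with the analytic value sqrt( n1 n1′ / ((n0 + n1)(n0 + n1′)) ):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_null_rho(n0, n1, n1r, n_snps=4000, maf=0.3):
    """Estimate corr(z_d, z_s) by simulation: z_d compares discovery cases
    to controls, z_s compares replication cases to the SAME controls."""
    def geno(n):  # genotypes ~ Binomial(2, maf); n samples x n_snps SNPs
        return rng.binomial(2, maf, size=(n, n_snps)).astype(np.int8)
    g0, g1, g1r = geno(n0), geno(n1), geno(n1r)

    def z(ga, gb):  # allele-count z-test between two genotype matrices
        na, nb = len(ga), len(gb)
        ma, mb = ga.mean(axis=0) / 2, gb.mean(axis=0) / 2
        m = (ga.sum(axis=0) + gb.sum(axis=0)) / (2.0 * (na + nb))
        se = np.sqrt(m * (1 - m) * (1.0 / (2 * na) + 1.0 / (2 * nb)))
        return (ma - mb) / se

    return np.corrcoef(z(g1, g0), z(g1r, g0))[0, 1]

rho_hat = simulate_null_rho(n0=1500, n1=1000, n1r=1000)
# analytic value: sqrt(1000 * 1000 / (2500 * 2500)) = 0.4
```

The simulated estimate should agree with the analytic value to within Monte Carlo error; the simulation route remains usable when covariates make the analytic formula inapplicable.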
False ascertainment
In general, for a true association, the values ζ_i are non-zero with consistent sign. If some proportion κ of samples in the replication control group are incorrectly assigned and in fact come from the case population, the effective control allele frequency becomes a mixture of the case and control frequencies. This lowers the absolute values of the ζ values for statistics involving that cohort, reducing the probability that the z-score for the SNP will reach the requisite threshold β and hence reducing the power to detect the SNP using method A. This loss of power is reduced when using methods B or C.
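The attenuation can be quantified directly: if a fraction κ of the nominal control group is drawn from the case population, the effective control allele frequency is a mixture, and the case-control frequency gap (and hence the expected z-score) shrinks by the factor (1 − κ). A small sketch (the notation is ours):

```python
def contaminated_control_freq(mu0, mu1, kappa):
    """Effective allele frequency of a 'control' cohort in which a
    fraction kappa of samples actually come from the case population."""
    return (1 - kappa) * mu0 + kappa * mu1

mu0, mu1, kappa = 0.30, 0.36, 0.10
gap = mu1 - contaminated_control_freq(mu0, mu1, kappa)
# gap == 0.9 * (mu1 - mu0): expected z-scores for statistics using this
# cohort, and hence power at a fixed threshold beta, shrink proportionally.
```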
Type 1 error rates
Aberrance in C1
For SNPs aberrant in only C1, the ζ values of the statistics involving C1 are non-zero and the remainder are zero. R_A, R_B and R_C can be considered as functions of the degree of aberrance ζ. As ζ → 0, R_A, R_B, R_C → P0 (Eq. 4). As ζ → ±∞, the rates approach fixed limits (Additional file 1: Appendix 3). For positive ζ both R_A and R_B are increasing (and both are symmetric in ζ). Since β⊥ < β∗ < β (often substantially), methods B and C are generally better at rejecting such SNPs. In the simplified case where γ = 1, this advantage holds universally (Additional file 1: Appendix 3.1), and it typically holds in general, except for small deviations in pathological cases. Throughout, we consider aberrance which is still present after any strata or covariates have been accounted for in the computation of z-scores. If strata or covariates remove the effective aberrance between groups, the type-1 error rate is equivalent to that for a non-aberrant null SNP.
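The limiting behaviour of the false-positive rate as a function of ζ can be checked by simulation. This sketch is our own simplified model (two correlated z-scores, with aberrance shifting only the discovery score; all numerical values are illustrative), not the paper's computation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def fpr_aberrant(zeta, rho, alpha, beta, n=1_000_000):
    """Two-stage FPR for a null SNP whose discovery z-score is shifted by
    zeta (aberrance in the discovery cases); replication mean stays 0."""
    z = rng.multivariate_normal([zeta, 0.0], [[1.0, rho], [rho, 1.0]], n)
    za, zb = -norm.ppf(alpha / 2), -norm.ppf(beta / 2)
    hit = (np.abs(z[:, 0]) > za) & (np.abs(z[:, 1]) > zb) \
          & (np.sign(z[:, 0]) == np.sign(z[:, 1]))
    return hit.mean()

alpha, beta = 0.01, 0.05
r0 = fpr_aberrant(0.0, 0.0, alpha, beta)    # ≈ P0 = alpha * beta / 2
r3 = fpr_aberrant(3.0, 0.0, alpha, beta)    # rises with |zeta| ...
rm3 = fpr_aberrant(-3.0, 0.0, alpha, beta)  # ... and is symmetric in zeta
```

Within Monte Carlo error this reproduces the three qualitative claims: convergence to P0 as ζ → 0, growth with |ζ|, and symmetry in ζ.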
Aberrance in the replication cohorts
For SNPs aberrant only in the replication case cohort, ζ_d = 0 and the ζ values of the statistics involving that cohort are non-zero. Again, R_A, R_B, R_C → P0 as the aberrance ζ → 0. As ζ → ±∞, R_B and R_C approach limits bounded above by α. Although R_B and R_C are typically higher than R_A in this case, since both have the same (typically conservative) upper bound, this is not a large sacrifice in type-1 error. In the simplified case where γ = 1, an approximate upper bound on R_B − R_A is derived in Additional file 1: Appendix 4. In practice, there is typically a similarly small difference between R_A, R_B and R_C in the general case.

For SNPs aberrant in the replication control cohort, ζ_d = 0 and the remaining ζ values are non-zero. As above, R_A, R_B, R_C → P0 as ζ → 0, and as ζ → ±∞, R_B and R_C approach limits bounded above by α. R_A, however, tends to 1 as ζ → ∞. In method B the cohort C0 has a correcting effect on the replication study, meaning |ζ_s| < |ζ_r| and R_B < R_A. For the simplified case where γ = 1, a bound similar to that of Appendix 4 holds for the difference R_A − R_B (note that the signs are reversed). The improvement in type-1 error rate for a SNP with aberrance in the replication control cohort is generally larger than the loss from the same aberrance in the replication case cohort (see above), meaning that if aberrances are of similar prevalence and size in the two replication cohorts, method B will typically have a lower type-1 error rate than method A.
Aberrance in C0
Aberrance in C0 represents a serious problem in case-control study comparison. False-positive rates are generally worse under method B, and tend to 1 as the aberrance in C0 increases. If aberrances of this type are expected to be very frequent, this may preclude the use of methods B or C.

However, aberrances of this type may be best detected retrospectively, by examining differences between control groups at SNPs declared ‘hits’. This procedure is already a necessary quality-control step in method A [12, 16], as method A does not provide any control over differences between C0 and the replication control group. The number of SNPs reaching significance in the two-stage procedure is usually small enough that this examination is readily tractable.
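The retrospective check described above can be as simple as a control-versus-control allele-frequency test at each declared hit. A minimal sketch (our own, using an allele-count z-test and illustrative numbers):

```python
import numpy as np
from scipy.stats import norm

def control_vs_control_p(m0, n0, m0r, n0r):
    """Two-sided p-value for a difference in allele frequency between the
    discovery controls (m0, n0) and replication controls (m0r, n0r).
    A small p at a declared 'hit' flags possible aberrance in C0."""
    m = (n0 * m0 + n0r * m0r) / (n0 + n0r)  # pooled frequency
    se = np.sqrt(m * (1 - m) * (1.0 / (2 * n0) + 1.0 / (2 * n0r)))
    return 2 * norm.sf(abs(m0 - m0r) / se)

# Identical control frequencies give p = 1; a large discrepancy at a hit
# (e.g. 0.30 v 0.40 with 2000 samples each) strongly suggests aberrance.
p_ok = control_vs_control_p(0.30, 2000, 0.30, 2000)
p_bad = control_vs_control_p(0.30, 2000, 0.40, 2000)
```

Because only the small set of declared hits needs testing, this check adds negligible computational or multiple-testing burden.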
Aberrance in two or more cohorts
If SNPs are aberrant in both C1 and the replication case cohort, or in both C0 and the replication control cohort, the effects on R_A and R_B are similar. If both cohorts are aberrant in the same direction, there is no way to differentiate the SNP from a genuine association on the basis of the genotype data alone. If the cohorts are aberrant in different directions, then in both methods the type-1 error rate is lower than for a null SNP with no aberration, or with aberration in only one cohort, as effect sizes for the discovery and replication cohorts are biased in opposite directions. The same typically holds if the replication control cohort and C1, or C0 and the replication case cohort, are biased in the same direction.

If both replication cohorts, or both discovery cohorts (C0 and C1), are biased in the same direction, R_B is generally lower than R_A, and both remain bounded in this case. In addition, a systematic bias in both replication groups (or both discovery groups) is likely to be due to a known confounder, the effect of which can be removed by performing a stratified test (as is typically good practice when confounders are known). Aberrance in opposite directions leads to R_B > R_A in the first case, and to a scenario similar to aberrance in C0 in the second case.

Aberrance in three or more cohorts corresponds to a chaotic scenario in which none of methods A, B or C will reliably provide control of the false-positive rate. Aberrance of this extent is typically detectable and removable using quality-control procedures.

Additional file

Additional file 1: Supplementary figures and appendices. Supplementary figures showing additional power comparisons, and appendices pertaining to the method. (PDF 941 kb)
References

Bhattacharjee S, Rajaraman P, Jacobs KB, et al. A subset-based approach improves power and interpretation for the combined analysis of genetic association studies of heterogeneous traits. Am J Hum Genet. 2012.

Anderson CA, Pettersson FH, Clarke GM, Cardon LR, Morris AP, Zondervan KT. Data quality control in genetic case-control association studies. Nat Protoc. 2010.

Fuchsberger C, Flannick J, Teslovich TM, et al. The genetic architecture of type 2 diabetes. Nature. 2016.