
The harmonic mean p-value for combining dependent tests.

Daniel J Wilson

Abstract

Analysis of "big data" frequently involves statistical comparison of millions of competing hypotheses to discover hidden processes underlying observed patterns of data, for example, in the search for genetic determinants of disease in genome-wide association studies (GWAS). Controlling the familywise error rate (FWER) is considered the strongest protection against false positives but makes it difficult to reach the multiple testing-corrected significance threshold. Here, I introduce the harmonic mean p-value (HMP), which controls the FWER while greatly improving statistical power by combining dependent tests using generalized central limit theorem. I show that the HMP effortlessly combines information to detect statistically significant signals among groups of individually nonsignificant hypotheses in examples of a human GWAS for neuroticism and a joint human-pathogen GWAS for hepatitis C viral load. The HMP simultaneously tests all ways to group hypotheses, allowing the smallest groups of hypotheses that retain significance to be sought. The power of the HMP to detect significant hypothesis groups is greater than the power of the Benjamini-Hochberg procedure to detect significant hypotheses, although the latter only controls the weaker false discovery rate (FDR). The HMP has broad implications for the analysis of large datasets, because it enhances the potential for scientific discovery.
Copyright © 2019 the Author(s). Published by PNAS.

Keywords:  big data; false positives; model averaging; multiple testing; p-values

Year:  2019        PMID: 30610179      PMCID: PMC6347718          DOI: 10.1073/pnas.1814092116

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


Analysis of big data has great potential, for instance by transforming our understanding of how genetics influences human disease (1), but it presents unique challenges. One such challenge faces geneticists designing genome-wide association studies (GWAS). Individuals have typically been typed at around 600,000 variants spread across the 3.2 billion base-pair genome. With the rapidly decreasing costs of DNA sequencing, whole-genome sequencing is becoming routine, raising the possibility of detecting associations at ever more variants (2, 3). However, increasing the number of tests of association conventionally requires more stringent p-value correction for multiple testing, reducing the probability of detecting any individual association. The idea that analyzing more data may lead to fewer discoveries is counterintuitive and suggests a flaw of logic. The problem of testing many hypotheses while controlling the appropriate false positive rate is a long-standing issue. The familywise error rate (FWER) is the probability of falsely rejecting a null in favor of an alternative hypothesis in one or more of all tests performed. Controlling the FWER in the presence of some true positives is challenging and considered the strongest form of protection against false positives (4). Unfortunately, the simple and widely used Bonferroni method for controlling the FWER is conservative, especially when the individual tests are positively correlated (5). Model selection is an important setting affected by correlated tests, in which the same data are used to evaluate many competing alternative hypotheses. Reanalysis of the same outcomes across tests in GWAS causes dependence because of correlations between regressors in different models (6). Other phenomena, such as unmeasured confounders, can induce dependence, even when alternative hypotheses are not mutually exclusive, such as in gene expression analyses (7). 
The conservative nature of Bonferroni correction, particularly when tests are correlated, exacerbates the stringent criterion of controlling the FWER, jeopardizing sensitivity to detect true signals. Simulations may be used to identify thresholds that are less stringent yet control the FWER. However, simulating can be time consuming; model-based simulations require knowledge of the dependency structure, which may be limited; and permutation-based procedures are not always appropriate (8). The false discovery rate (FDR) offers an alternative to the FWER. Controlling the FDR guarantees that, among the significant tests, the proportion in which the null hypothesis is incorrectly rejected in favor of the alternative is limited (9). The widely used Benjamini–Hochberg (BH) procedure (9) for controlling the FDR shares with the Bonferroni method a robustness to positive correlation between tests (10) but is less conservative. These advantages have made FDR a popular alternative to FWER, in practice trading off larger numbers of false positives for more statistical power. Combined tests offer a different way to improve power. By aggregating multiple hypothesis tests, combined tests are sensitive to signals that may be individually too subtle to detect, especially after multiple testing correction. Their conclusions, therefore, apply collectively rather than to individual tests. Fisher’s method (11) is perhaps the best known and has been widely used in gene set enrichment analysis, but it makes the strong assumption that tests are independent. Bayesian model averaging offers a way to combine alternative hypotheses in the model selection setting. By comparing groups of alternative hypotheses against a common null, the null hypothesis may be ruled out collectively. In the case of GWAS, even if no individual variant shows sufficient evidence of association in a region, the model-averaged signal across that region may yet achieve sufficiently strong posterior odds (12, 13). 
Combining tests in this way makes an asset of more data by creating the potential for more fine-grained discovery when the signal is strong enough without the liability of requiring that all hypotheses are evaluated individually at the higher level of statistical stringency. In this paper, I use Bayesian model averaging to develop a method, the harmonic mean p-value (HMP), for combining dependent p-values while controlling the strong-sense FWER. The method is derived in the model selection setting and is best interpreted as offering a complementary method to Fisher’s that combines tests by model averaging when they are mutually exclusive, not independent. However, the HMP is applicable beyond model selection problems, because it assumes only that the p-values are valid. It enjoys several remarkable properties that offer benefits across a wide range of big data problems.

Methods

Model-Averaged Mean Maximum Likelihood.

The original idea motivating this paper was to develop a classical analogue to the model-averaged Bayes factor by deriving the null distribution for the mean maximized likelihood ratio,
$$\bar{R} = \sum_{i=1}^{L} w_i\, R_i,$$
with maximized likelihood ratios $R_1, \dots, R_L$ and weights $w_1, \dots, w_L$, where $\sum_{i=1}^{L} w_i = 1$. The maximized likelihood ratio is a classical analogue of the Bayes factor and measures the evidence for the alternative hypothesis $M_i$ against the null $M_0$ given the data $x$:
$$R_i = \frac{\max_{\theta_i} \mathrm{lik}(M_i \mid x)}{\max_{\theta_0} \mathrm{lik}(M_0 \mid x)}.$$
In a likelihood ratio test, the p-value is calculated as the probability of obtaining an $R_i'$ as or more extreme if the null hypothesis were true:
$$p_i = \Pr\left(R_i' \ge R_i \mid M_0\right).$$
For nested hypotheses ($M_0 \subset M_i$), Wilks' theorem (14) approximates the null distribution of $2\log R_i$ as $\chi^2_{\nu_i}$ when there are $\nu_i$ degrees of freedom; equivalently, $R_i$ is LogGamma distributed. The distribution of $\bar{R}$ cannot be approximated by central limit theorem, because the LogGamma distribution is heavy tailed, with undefined variance. Instead, generalized central limit theorem can be used (15), which states that, for equal weights ($w_i = 1/L$) and independent and identically distributed $R_i$s,
$$\bar{R} \xrightarrow{d} \mu_L + \sigma_L X,$$
where $\mu_L$ and $\sigma_L$ are constants and $X$ follows a Stable distribution with tail index $\alpha = 1$. The specific Stable distribution is a type of Landau distribution (16) whose location and scale parameters depend on $L$. Theory, supported by detailed simulations in SI Appendix, shows that (i) the assumptions of equal weights, independence, and identical degrees of freedom can be relaxed and that (ii) the Landau distribution approximation performs best when $\nu_i = 2$.
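Why the classical central limit theorem fails here can be made explicit with a short moment calculation (my own sketch from the standard chi-square moment generating function, consistent with the Wilks approximation above):

```latex
% Under the null, Wilks' theorem gives 2\log R_i \sim \chi^2_{\nu}, so
\mathbb{E}\left[R_i^{k}\right]
  = \mathbb{E}\left[e^{(k/2)\,\chi^2_{\nu}}\right]
  = (1-k)^{-\nu/2}, \qquad k < 1,
% and the moments diverge for k \ge 1: R_i has no finite mean or variance.
% Its upper tail is regularly varying with index -1,
\Pr\left(R_i > r\right) = \Pr\left(\chi^2_{\nu} > 2\log r\right)
  \sim \frac{(\log r)^{\nu/2 - 1}}{\Gamma(\nu/2)}\, r^{-1},
% which places \bar{R} in the domain of attraction of a Stable law with
% tail index \alpha = 1, i.e., the Landau family.
```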

The Harmonic Mean p-Value.

Notably, when $\nu_i = 2$ and the assumptions of Wilks' theorem are met, the p-value equals the inverse maximized likelihood ratio:
$$p_i = R_i^{-1},$$
and therefore, the mean maximized likelihood ratio equals the inverse HMP:
$$\bar{R} = \sum_{i=1}^{L} \frac{w_i}{p_i} = \mathring{p}^{-1}.$$
Under these conditions, interpreting $\bar{R}$ and the HMP is exactly equivalent. This equivalence motivates use of the HMP more generally because of the following. The Landau distribution gives an excellent approximation for $\bar{R}$ with $\nu_i = 2$, and hence for $\mathring{p}^{-1}$. Wilks' theorem can be replaced with the simpler assumption that the p-values are well calibrated. The HMP will capture similar information to $\bar{R}$ for any degrees of freedom. Combining $p_i$s rather than $R_i$s automatically accounts for differences in degrees of freedom. A combined p-value, which becomes exact as the number of p-values increases, can be calculated as
$$p_{\mathring{p}} = \int_{\mathring{p}^{-1}}^{\infty} f_{\mathrm{Landau}}\left(x \mid \mu_L, \sigma_L\right) dx,$$
with $f_{\mathrm{Landau}}$ the Landau distribution probability density function. Remarkably, however, the HMP can be directly interpreted, because it is approximately well calibrated when small. Using the theory of regularly varying functions (see ref. 17),
$$\Pr\left(\mathring{p} \le x\right) \to x \quad \text{as } x \to 0.$$
This property suggests the following test, which controls the strong-sense FWER at level approximately $\alpha$ for an HMP calculated on a subset $\mathcal{R}$ of the p-values: reject the null hypothesis when
$$\mathring{p}_{\mathcal{R}} \le \alpha\, w_{\mathcal{R}},$$
where $w_{\mathcal{R}} = \sum_{i \in \mathcal{R}} w_i$. Directly interpreting the HMP in this way constitutes a multilevel test in the sense that any significant subset of hypotheses implies that the HMP of the whole set is also significant, because
$$\mathring{p}^{-1} = \sum_{i=1}^{L} \frac{w_i}{p_i} \ge \sum_{i \in \mathcal{R}} \frac{w_i}{p_i} = w_{\mathcal{R}}\, \mathring{p}_{\mathcal{R}}^{-1}.$$
Conversely, if the "headline" HMP $\mathring{p}$ is not significant, nor is the HMP for any subset $\mathcal{R}$. The significance thresholds apply no matter how many subsets are combined and tested. The above properties show that directly interpreting the HMP (i) is a closed testing procedure (4) that controls the strong-sense FWER (SI Appendix); (ii) is more powerful than Bonferroni and Simes correction, because the HMP is always smaller than the p-values for those tests (SI Appendix); and therefore, (iii) produces significant results whenever the Simes-based BH procedure does, although BH only controls the less stringent FDR.
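As a concrete sketch of the direct-interpretation test (hypothetical p-values and equal weights of my own choosing; `hmp` is an ad hoc helper, not the R package), the subset rule and the multilevel property can be checked in a few lines:

```python
def hmp(pvalues, weights):
    """Weighted harmonic mean p-value: sum(w) / sum(w/p)."""
    return sum(weights) / sum(w / p for w, p in zip(weights, pvalues))

# Hypothetical p-values from L = 4 tests, with equal weights summing to 1.
p = [1e-6, 0.20, 0.80, 0.50]
w = [0.25] * 4
alpha = 0.05

# Direct-interpretation test on the subset R = {test 1, test 2}:
# reject when hmp_R <= alpha * w_R.
p_R = hmp(p[:2], w[:2])    # ~2.0e-6
w_R = sum(w[:2])           # 0.5
assert p_R <= alpha * w_R  # the subset is significant

# Multilevel property: because 1/hmp is a sum of positive terms w_i/p_i,
# a significant subset guarantees the headline HMP is significant too.
assert hmp(p, w) <= alpha
print("subset significant => headline HMP significant")
```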
While direct interpretation of the HMP controls the strong-sense FWER, the level at which it does so is only approximately $\alpha$; it is in fact anticonservative, although only very slightly for small $\alpha$ and small $L$. Assessing the adjusted HMP, $\mathring{p}_{\mathcal{R}}/w_{\mathcal{R}}$, against an adjusted level $\alpha_{|\mathcal{R}|}$ calculated by inverting the Landau distribution approximation permits a test that is exact up to the order of that approximation (Table 1). (Equivalently, one can compare the exact combined p-value from the Landau distribution with $\alpha\, w_{\mathcal{R}}$.) Simulations suggest that this exact test remains more powerful than Bonferroni, Simes, and therefore, BH (SI Appendix).
Table 1.

Significance thresholds $\alpha_{|\mathcal{R}|}$ for the adjusted HMP, $\mathring{p}_{\mathcal{R}}/w_{\mathcal{R}}$, for varying numbers of alternative hypotheses $|\mathcal{R}|$ and false positive rates $\alpha$

|R|              α=0.05    α=0.01    α=0.001
10               0.040     0.0094    0.00099
100              0.036     0.0092    0.00099
1,000            0.034     0.0090    0.00099
10,000           0.031     0.0088    0.00098
100,000          0.029     0.0086    0.00098
1,000,000        0.027     0.0084    0.00098
10,000,000       0.026     0.0083    0.00098
100,000,000      0.024     0.0081    0.00098
1,000,000,000    0.023     0.0080    0.00097
I recommend the use of this asymptotically exact test, available in the R package "harmonicmeanp" (https://CRAN.R-project.org/package=harmonicmeanp), on which all subsequent analyses in Results are based. Analyses based on direct interpretation of the HMP are also presented and reveal the practical differences between the two approaches to be small.
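A quick Monte Carlo sketch (my own check, not taken from the paper) illustrates why the adjusted thresholds in Table 1 sit below $\alpha$: under a global null of independent uniform p-values, the directly interpreted HMP rejects slightly more often than the nominal rate.

```python
import numpy as np

rng = np.random.default_rng(1)
L, trials, alpha = 100, 50_000, 0.05

# Global null: L independent Uniform(0,1) p-values per trial.
# With equal weights, the HMP is L / sum(1/p).
p = rng.uniform(size=(trials, L))
hmp = L / (1.0 / p).sum(axis=1)

rate = (hmp <= alpha).mean()
print(f"rejection rate at nominal alpha=0.05: {rate:.3f}")
# Slightly above 0.05: the direct test is mildly anticonservative,
# consistent with Table 1 (exact threshold 0.036 for |R| = 100).
```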

Choice of Weights.

I anticipate that the HMP will usually be used with equal weights, as are procedures such as Bonferroni correction and Simes' test. SI Appendix considers optimal weights. Based on Bayesian (18) and classical arguments, and assuming that all tests have good power, the optimal weight $w_i$ is found to be proportional to the product of the prior probability of alternative hypothesis $M_i$ and the expected p-value under that alternative. This optimal weighting would favor alternatives that are more probable a priori while penalizing those associated with more powerful tests. Consequently, the use of equal weights can be interpreted as assuming that all alternative hypotheses are equally likely a priori and that all tests are equally powerful. If tests are not equally powerful for a given "effect size," the equal power assumption implies that alternatives associated with inherently less powerful tests are expected to have larger effect sizes a priori, a testable assumption that has been used often in GWAS (19).

Results

The main result of this paper is that the weighted harmonic mean p-value of any subset $\mathcal{R}$ of the p-values $p_1, \dots, p_L$,
$$\mathring{p}_{\mathcal{R}} = \frac{\sum_{i \in \mathcal{R}} w_i}{\sum_{i \in \mathcal{R}} w_i / p_i},$$
(i) combines the evidence in favor of the group of alternative hypotheses $\mathcal{R}$, (ii) is an approximately well-calibrated p-value when small, and (iii) controls the strong-sense FWER at level approximately $\alpha$ when compared against the threshold $\alpha\, w_{\mathcal{R}}$, no matter how many other subsets of the same p-values are tested (Methods and SI Appendix). An asymptotically exact test based on the Landau distribution is also available (Methods). The HMP has several helpful properties that arise from generalized central limit theorem. It is:
- More powerful than Bonferroni and Simes (5) correction. The advantage over Simes' test means that whenever the BH procedure (9), which controls only the FDR, finds significant hypotheses, the HMP will find significant hypotheses or groups of hypotheses.
- Complementary to Fisher's method for combining independent p-values (11), because the HMP is more appropriate when (i) rejecting the null implies that only one alternative hypothesis may be true and not all of them or (ii) the p-values might be positively correlated and cannot be assumed to be independent.
- Robust to positive dependency between the p-values.
- Insensitive to the exact number of tests.
- Robust to the distribution of weights $w_i$.
- Most influenced by the smallest p-values.
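The claimed dominance can be verified numerically for arbitrary inputs: with equal weights, the HMP is never larger than the Simes combined p-value, which in turn is never larger than the Bonferroni-adjusted minimum p-value. A self-contained check on random inputs of my own choosing:

```python
import random

def hmp(p):
    """Equal-weight harmonic mean p-value."""
    return len(p) / sum(1.0 / x for x in p)

def bonferroni(p):
    """Bonferroni-adjusted minimum p-value."""
    return min(1.0, len(p) * min(p))

def simes(p):
    """Simes combined p-value: min over i of L * p_(i) / i."""
    return min(len(p) * x / (i + 1) for i, x in enumerate(sorted(p)))

random.seed(0)
for _ in range(1000):
    p = [random.random() for _ in range(random.randint(2, 50))]
    assert hmp(p) <= simes(p) + 1e-12 <= bonferroni(p) + 2e-12
print("HMP <= Simes <= Bonferroni held in all 1,000 trials")
```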

HMP Enables Adaptive Multiple Testing Correction by Combining p-Values.

That the Bonferroni method for controlling the FWER can be overly stringent, especially when the tests are nonindependent, has long been recognized. In Bonferroni correction, a p-value is deemed significant if $p \le \alpha/L$, which becomes more stringent as the number of tests $L$ increases. Since human GWAS began routinely testing millions of variants by statistically imputing untyped variants, a new convention was adopted in which a p-value is deemed significant if $p \le 5 \times 10^{-8}$, a rule that implies that the effective number of tests is no more than one million. Several lines of argument were used to justify this threshold (20–22), most applicable specifically to human GWAS. In contrast, the HMP affords strong control of the FWER while avoiding both simulation studies and the undue stringency of Bonferroni correction, an advantage that increases when tests are nonindependent. To show how the HMP can recover significant associations among groups of tests that are individually nonsignificant, I reanalyzed a GWAS of neuroticism (23), defined as a tendency toward intense or frequent negative emotions and thoughts (24). Genotypes were imputed genome-wide in 170,911 individuals. I used the HMP to perform model-averaged tests of association between neuroticism and the variants within contiguous regions of 10 kb, 100 kb, 1,000 kb, 10 Mb, entire chromosomes, and the whole genome, assuming equal weights across variants (SI Appendix). Fig. 1 shows the combined p-value for each region $\mathcal{R}$, adjusted by a factor $w_{\mathcal{R}}^{-1}$ to enable direct comparison with the significance threshold $\alpha = 0.05$. Similar results were obtained from direct interpretation of the HMP (SI Appendix). Model averaging tends to make significant and near-significant adjusted p-values more significant. For example, for every variant significant after Bonferroni correction, the model-averaged p-value for the corresponding chromosome was found to be at least as significant.
Fig. 1.

Results of a GWAS of neuroticism in 170,911 people (23). This Manhattan plot shows the significance of the association between neuroticism and individual variants (dark and light gray points) and overlapping regions of lengths 10 kb (blue bars), 100 kb (cyan bars), 1,000 kb (green bars), 10,000 kb (yellow bars), entire chromosomes (orange bars), and the whole genome (red bar). Significance is defined as the adjusted p-value, where the combined p-value for each region $\mathcal{R}$ is adjusted by a factor $w_{\mathcal{R}}^{-1}$ to enable direct comparison with the threshold $\alpha = 0.05$ (black dashed line). The conventional threshold of $5 \times 10^{-8}$ is shown for comparison (gray dotted line).

Model averaging increases significance more when combining a group of comparably significant p-values, e.g., the top hits in chromosome 9. The least improvement is seen when one p-value is much more significant than the others, e.g., the top hit in chromosome 3. This behavior is predicted by the tendency of harmonic means to be dominated by the smallest values. In the extreme case that one p-value dominates all others, the HMP test becomes equivalent to Bonferroni correction, which implies that Bonferroni correction might not be improved on for "needle-in-a-haystack" problems. Conversely, dependency among tests actually improves the sensitivity of the HMP, because one significant test may be accompanied by other correlated tests that collectively reduce the harmonic mean p-value. In some cases, the HMP found significant regions in which no individual variant was significant. For example, no variants on chromosome 12 were significant by Bonferroni correction nor by the conventional genome-wide threshold of $5 \times 10^{-8}$. However, the HMP found significant 10-Mb regions spanning several peaks of individually nonsignificant p-values. One of these regions contained variant rs7973260, whose individual p-value for association with neuroticism fell short of significance but which had been reported as also associated with depressive symptoms.
Such cross-association or "quasireplication," in which a variant is nearly significant for the trait of interest and significant for a related trait, can be regarded as providing additional support for the variant's involvement in the trait of interest (23). In chromosome 3, individual variants were found to be significant by the conventional threshold of $5 \times 10^{-8}$, but neither Bonferroni correction nor the HMP agreed that those variants or regions were significant at an FWER of $\alpha = 0.05$. Indeed, the HMP found chromosome 3 nonsignificant as a whole. Variant rs35688236, which had the smallest p-value on chromosome 3, had not validated when tested in a quasireplication exercise that involved testing variants associated with neuroticism for association with subjective wellbeing or depressive symptoms (23). These observations illustrate that the HMP adaptively combines information among groups of similarly significant tests where possible, while leaving lone significant tests subject to Bonferroni-like stringency, providing a general approach to combining p-values that does not require specific knowledge of the dependency structure between tests.
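These two regimes, a group of comparable signals versus a needle in a haystack, can be illustrated with stylized numbers (hypothetical p-values and equal weights, not taken from the GWAS itself):

```python
def hmp(p):
    """Equal-weight harmonic mean p-value."""
    return len(p) / sum(1.0 / x for x in p)

L, alpha = 1000, 0.05

# (a) A group of comparable, individually nonsignificant signals:
# 1,000 tests all at p = 1e-3. The Bonferroni-adjusted p-value is 1.0,
# yet the group combines to an HMP of 1e-3.
group = [1e-3] * L
assert min(1.0, L * min(group)) == 1.0   # Bonferroni: not significant
assert hmp(group) <= alpha               # HMP: significant as a group

# (b) Needle in a haystack: one p = 1e-3 among 999 null results.
# The HMP is dominated by the haystack and, like Bonferroni,
# finds nothing.
needle = [1e-3] + [0.5] * (L - 1)
print(f"needle HMP = {hmp(needle):.2f}")  # ~0.33, not significant
```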

HMP Allows Large-Scale Testing for Higher-Order Interactions Without Punitive Thresholds.

Scientific discovery is currently hindered by avoidance of large-scale exploratory hypothesis testing for fear of attracting multiple testing correction thresholds that render signals found by more limited testing no longer significant. A good example is the approach to testing for pairwise or higher-order interactions between variants in GWAS. The Bonferroni threshold for testing all pairwise interactions is many times more stringent than the threshold for testing variants individually, and strictly speaking, it must be applied to every test, even though this is highly conservative because of the dependency between tests. The alternative of controlling the FDR risks a high probability of falsely detecting artifacts among any genuine associations discovered. Therefore, interactions are not usually tested for. To show how model averaging using the HMP greatly alleviates this problem, I reanalyzed human and pathogen genetic variants from a GWAS of pretreatment viral load in hepatitis C virus (HCV)-infected patients (25) (SI Appendix). Jointly analyzing the influence of human and pathogen variation on infection is an area of great interest, but it requires a Bonferroni threshold of $\alpha/(L_1 L_2)$ when there are $L_1$ and $L_2$ variants in the human and pathogen genomes, respectively, compared with $\alpha/(L_1 + L_2)$ if testing the human and pathogen variants separately. In this example, $L_1 = 399{,}420$ and $L_2 = 827$, giving 330,320,340 pairwise tests. In the original study, a known association with viral load was replicated at human chromosome 19 variant rs12979860 in IFNL4, below the Bonferroni threshold of $0.05/399{,}420 = 1.3 \times 10^{-7}$ for the 399,420 human variants. The most significant pairwise interaction that I found, assuming equal weights, involved the adjacent variant rs8099917. However, it did not meet the more stringent Bonferroni threshold of $0.05/330{,}320{,}340 = 1.5 \times 10^{-10}$ for 330 million tests (Fig. 2). If the original study's authors had performed and reported 330 million tests, they could have been compelled to declare the marginal association in IFNL4 nonsignificant, despite what intuitively seems like a clear signal.
Fig. 2.

Joint human–pathogen GWAS reanalysis of viral load in 410 HCV genotype 3a-infected white Europeans (25). All pairs of human nucleotide variants and viral amino acid variants were tested for association. Interactions between human and virus variants' effects on viral load were not constrained to be additive. (A) Significance of the 330,320,340 pairwise tests plotted by position of both the human and the viral variants. (B) Significance of the 399,420 human variants model averaged using the HMP over every possible interaction with the 827 viral variants, and vice versa. The significance thresholds controlling the FWER at $\alpha = 0.05$ are indicated (black dashed lines): $0.05/330{,}320{,}340 = 1.5 \times 10^{-10}$, $0.05/399{,}420 = 1.3 \times 10^{-7}$, and $0.05/827 = 6.0 \times 10^{-5}$.

Model averaging using the HMP reduces this disincentive to perform additional related tests. Fig. 2 shows that, despite no individually significant pairwise tests involving rs8099917, model averaging recovered a significant combined p-value, below the multiple testing threshold of $0.05/399{,}420 = 1.3 \times 10^{-7}$ for the 399,420 model-averaged tests. Additionally, two viral variants produced statistically significant model-averaged p-values at polyprotein positions 10 and 2,061, in the capsid and NS5a zinc finger domain (GenBank accession no. AQW44528), below the multiple testing threshold of $0.05/827 = 6.0 \times 10^{-5}$ for the 827 model-averaged tests. These results show how model averaging using the HMP can assist discovery making by (i) encouraging tests for higher-order interactions when they otherwise would not be attempted and (ii) recovering lost signals of marginal associations after performing an "excessive" number of tests.
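The thresholds quoted in this section follow from the subset rule $\alpha\, w_{\mathcal{R}}$ with equal weights over all pairwise tests; the arithmetic can be verified in a few lines ($\alpha = 0.05$ assumed, as elsewhere in the reanalysis):

```python
alpha = 0.05
n_human, n_virus = 399_420, 827
n_pairs = n_human * n_virus
assert n_pairs == 330_320_340           # all human x virus pairwise tests

# Bonferroni threshold for every individual pairwise test.
print(f"per pairwise test: {alpha / n_pairs:.2e}")   # ~1.5e-10

# Averaging one human variant over its 827 viral interactions gives a
# subset weight of 827 / n_pairs = 1 / n_human, so the HMP threshold
# alpha * w_R matches a per-human-variant Bonferroni threshold.
w_R = n_virus / n_pairs
print(f"per human variant: {alpha * w_R:.2e}")       # ~1.3e-7
print(f"per viral variant: {alpha / n_virus:.2e}")   # ~6.0e-5
```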

Untangling the Signals Driving Significant Model-Averaged p-Values.

When more than one alternative hypothesis is found to be significant, either individually or as part of a group, it is desirable to quantify the relative strength of evidence in favor of the competing alternatives. This is particularly true when disentangling the contributions of a group of individually nonsignificant alternatives that are significant only in combination. Sellke et al. (18) proposed a conversion from p-values to Bayes factors which, when combined with prior information and test power through the model weights, produces posterior model probabilities and credible sets of alternative hypotheses. SI Appendix details how the resulting Bayes factors are approximately proportional to the weighted inverse p-values. This linearity mirrors the HMP itself, the inverse of which is an arithmetic mean of the inverse p-values. After conditioning on rejection of the null hypothesis by normalizing the approximate model probabilities to sum to 100%, the probability that the association involved human variant rs8099917 was 54.4%. This signal was driven primarily by the three viral variants with the highest probability of interacting with rs8099917 in their effect on pretreatment viral load: position 10 in the capsid (10.9%), position 669 in the E2 envelope (8.7%), and position 2,061 in the NS5a zinc finger domain (11.4%) (Fig. 3). Even though the model-averaged p-value for the envelope variant was not itself significant, this analysis revealed a plausible interaction between it and the most significant human variant, rs8099917.
Fig. 3.

In the joint human–HCV GWAS, the approximate posterior probability of association with rs8099917 was 54.4% in total, with the most probable interactions involving three polyprotein positions.

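The normalization step described above reduces to dividing each weighted inverse p-value by their sum; a minimal sketch with made-up p-values (not the study's actual values) and equal prior weights:

```python
# Approximate posterior model probabilities, conditional on rejecting
# the null: normalize the (equal-weight) inverse p-values to sum to 1,
# following the Sellke et al.-style proportionality described above.
p = [2e-9, 5e-8, 4e-8, 1e-3]           # hypothetical p-values
inv = [1.0 / x for x in p]
total = sum(inv)
prob = [x / total for x in inv]

for i, pr in enumerate(prob):
    print(f"alternative {i}: {pr:.1%}")
assert abs(sum(prob) - 1.0) < 1e-12    # probabilities sum to 100%
```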

Discussion

The HMP provides a way to calculate model-averaged p-values, yielding a powerful and general method for combining tests while controlling the strong-sense FWER. It offers an alternative to both the overly conservative Bonferroni control of the FWER and the lower stringency of FDR control. The HMP allows the incorporation of prior information through the model weights and is robust to positive dependency between the p-values. The HMP is approximately well calibrated when small, while a null distribution, derived from generalized central limit theorem, is easily computed. When the HMP is not significant, neither is any subset of the constituent tests. The HMP is more appropriate for combining p-values than Fisher's method when the alternative hypotheses are mutually exclusive, as in model comparison. When the alternative hypotheses all share the same nested null hypothesis, the HMP is interpreted in terms of a model-averaged likelihood ratio test. However, the HMP can be used more generally to combine tests that are not necessarily mutually exclusive but that may have positive dependency, with the caveat that more powerful approaches may be available depending on the context. The HMP can be used alone or in combination: for example, with Fisher's method to combine model-averaged p-values between groups of independent data.

The theory underlying the HMP provides a fundamentally different way to think about controlling the FWER through multiple testing correction. The stringency of the Bonferroni threshold increases linearly with the number of tests, whereas the HMP is the reciprocal of the mean of the inverse p-values. To maintain significance with Bonferroni correction, the minimum p-value must decrease linearly as the number of tests increases. This strongly penalizes exploratory and follow-up analyses.
In contrast, when the false positive rate is small, maintenance of significance with the HMP requires only that the mean inverse p-value remains constant as the number of tests increases. This does not penalize exploratory and follow-up analyses so long as the “quality” of the additional hypotheses tested, measured by the inverse p-value, does not decline. Through example applications to GWAS, I have shown that the HMP combines tests adaptively, producing Bonferroni-like adjusted p-values for needle-in-a-haystack problems when one test dominates, but able to capitalize on numerous strongly significant tests to produce smaller adjusted p-values when warranted. I have shown how model averaging using the HMP encourages exploratory analysis and can recover signals of significance among groups of individually nonsignificant tests, properties that have the potential to enhance the scientific discovery process.
References: 13 in total

1.  Modeling interactions with known risk loci-a Bayesian model averaging approach.

Authors:  Teresa Ferreira; Jonathan Marchini
Journal:  Ann Hum Genet       Date:  2010-11-30       Impact factor: 1.670

2.  The (in)famous GWAS P-value threshold revisited and updated for low-frequency variants.

Authors:  João Fadista; Alisa K Manning; Jose C Florez; Leif Groop
Journal:  Eur J Hum Genet       Date:  2016-01-06       Impact factor: 4.246

3.  A haplotype map of the human genome.

Authors: 
Journal:  Nature       Date:  2005-10-27       Impact factor: 49.962

4.  To permute or not to permute.

Authors:  Yifan Huang; Haiyan Xu; Violeta Calian; Jason C Hsu
Journal:  Bioinformatics       Date:  2006-07-26       Impact factor: 6.937

5.  Estimation of the multiple testing burden for genomewide association studies of nearly all common variants.

Authors:  Itsik Pe'er; Roman Yelensky; David Altshuler; Mark J Daly
Journal:  Genet Epidemiol       Date:  2008-05       Impact factor: 2.135

6.  A general framework for multiple testing dependence.

Authors:  Jeffrey T Leek; John D Storey
Journal:  Proc Natl Acad Sci U S A       Date:  2008-11-24       Impact factor: 11.205

7.  Genome-wide association analyses identify new risk variants and the genetic architecture of amyotrophic lateral sclerosis.

Authors:  Wouter van Rheenen; Aleksey Shatunov; Annelot M Dekker; Russell L McLaughlin; Frank P Diekstra; Sara L Pulit; Rick A A van der Spek; Urmo Võsa; Simone de Jong; Matthew R Robinson; Jian Yang; Isabella Fogh; Perry Tc van Doormaal; Gijs H P Tazelaar; Max Koppers; Anna M Blokhuis; William Sproviero; Ashley R Jones; Kevin P Kenna; Kristel R van Eijk; Oliver Harschnitz; Raymond D Schellevis; William J Brands; Jelena Medic; Androniki Menelaou; Alice Vajda; Nicola Ticozzi; Kuang Lin; Boris Rogelj; Katarina Vrabec; Metka Ravnik-Glavač; Blaž Koritnik; Janez Zidar; Lea Leonardis; Leja Dolenc Grošelj; Stéphanie Millecamps; François Salachas; Vincent Meininger; Mamede de Carvalho; Susana Pinto; Jesus S Mora; Ricardo Rojas-García; Meraida Polak; Siddharthan Chandran; Shuna Colville; Robert Swingler; Karen E Morrison; Pamela J Shaw; John Hardy; Richard W Orrell; Alan Pittman; Katie Sidle; Pietro Fratta; Andrea Malaspina; Simon Topp; Susanne Petri; Susanne Abdulla; Carsten Drepper; Michael Sendtner; Thomas Meyer; Roel A Ophoff; Kim A Staats; Martina Wiedau-Pazos; Catherine Lomen-Hoerth; Vivianna M Van Deerlin; John Q Trojanowski; Lauren Elman; Leo McCluskey; A Nazli Basak; Ceren Tunca; Hamid Hamzeiy; Yesim Parman; Thomas Meitinger; Peter Lichtner; Milena Radivojkov-Blagojevic; Christian R Andres; Cindy Maurel; Gilbert Bensimon; Bernhard Landwehrmeyer; Alexis Brice; Christine A M Payan; Safaa Saker-Delye; Alexandra Dürr; Nicholas W Wood; Lukas Tittmann; Wolfgang Lieb; Andre Franke; Marcella Rietschel; Sven Cichon; Markus M Nöthen; Philippe Amouyel; Christophe Tzourio; Jean-François Dartigues; Andre G Uitterlinden; Fernando Rivadeneira; Karol Estrada; Albert Hofman; Charles Curtis; Hylke M Blauw; Anneke J van der Kooi; Marianne de Visser; An Goris; Markus Weber; Christopher E Shaw; Bradley N Smith; Orietta Pansarasa; Cristina Cereda; Roberto Del Bo; Giacomo P Comi; Sandra D'Alfonso; Cinzia Bertolin; Gianni Sorarù; Letizia Mazzini; Viviana Pensato; Cinzia Gellera; Cinzia 
Tiloca; Antonia Ratti; Andrea Calvo; Cristina Moglia; Maura Brunetti; Simona Arcuti; Rosa Capozzo; Chiara Zecca; Christian Lunetta; Silvana Penco; Nilo Riva; Alessandro Padovani; Massimiliano Filosto; Bernard Muller; Robbert Jan Stuit; Ian Blair; Katharine Zhang; Emily P McCann; Jennifer A Fifita; Garth A Nicholson; Dominic B Rowe; Roger Pamphlett; Matthew C Kiernan; Julian Grosskreutz; Otto W Witte; Thomas Ringer; Tino Prell; Beatrice Stubendorff; Ingo Kurth; Christian A Hübner; P Nigel Leigh; Federico Casale; Adriano Chio; Ettore Beghi; Elisabetta Pupillo; Rosanna Tortelli; Giancarlo Logroscino; John Powell; Albert C Ludolph; Jochen H Weishaupt; Wim Robberecht; Philip Van Damme; Lude Franke; Tune H Pers; Robert H Brown; Jonathan D Glass; John E Landers; Orla Hardiman; Peter M Andersen; Philippe Corcia; Patrick Vourc'h; Vincenzo Silani; Naomi R Wray; Peter M Visscher; Paul I W de Bakker; Michael A van Es; R Jeroen Pasterkamp; Cathryn M Lewis; Gerome Breen; Ammar Al-Chalabi; Leonard H van den Berg; Jan H Veldink
Journal:  Nat Genet       Date:  2016-07-25       Impact factor: 41.307

8.  Estimation of significance thresholds for genomewide association scans.

Authors:  Frank Dudbridge; Arief Gusnanto
Journal:  Genet Epidemiol       Date:  2008-04       Impact factor: 2.135

9.  The genetic architecture of type 2 diabetes.

Authors:  Christian Fuchsberger; Jason Flannick; Tanya M Teslovich; Anubha Mahajan; Vineeta Agarwala; Kyle J Gaulton; Clement Ma; Pierre Fontanillas; Loukas Moutsianas; Davis J McCarthy; Manuel A Rivas; John R B Perry; Xueling Sim; Thomas W Blackwell; Neil R Robertson; N William Rayner; Pablo Cingolani; Adam E Locke; Juan Fernandez Tajes; Heather M Highland; Josee Dupuis; Peter S Chines; Cecilia M Lindgren; Christopher Hartl; Anne U Jackson; Han Chen; Jeroen R Huyghe; Martijn van de Bunt; Richard D Pearson; Ashish Kumar; Martina Müller-Nurasyid; Niels Grarup; Heather M Stringham; Eric R Gamazon; Jaehoon Lee; Yuhui Chen; Robert A Scott; Jennifer E Below; Peng Chen; Jinyan Huang; Min Jin Go; Michael L Stitzel; Dorota Pasko; Stephen C J Parker; Tibor V Varga; Todd Green; Nicola L Beer; Aaron G Day-Williams; Teresa Ferreira; Tasha Fingerlin; Momoko Horikoshi; Cheng Hu; Iksoo Huh; Mohammad Kamran Ikram; Bong-Jo Kim; Yongkang Kim; Young Jin Kim; Min-Seok Kwon; Juyoung Lee; Selyeong Lee; Keng-Han Lin; Taylor J Maxwell; Yoshihiko Nagai; Xu Wang; Ryan P Welch; Joon Yoon; Weihua Zhang; Nir Barzilai; Benjamin F Voight; Bok-Ghee Han; Christopher P Jenkinson; Teemu Kuulasmaa; Johanna Kuusisto; Alisa Manning; Maggie C Y Ng; Nicholette D Palmer; Beverley Balkau; Alena Stančáková; Hanna E Abboud; Heiner Boeing; Vilmantas Giedraitis; Dorairaj Prabhakaran; Omri Gottesman; James Scott; Jason Carey; Phoenix Kwan; George Grant; Joshua D Smith; Benjamin M Neale; Shaun Purcell; Adam S Butterworth; Joanna M M Howson; Heung Man Lee; Yingchang Lu; Soo-Heon Kwak; Wei Zhao; John Danesh; Vincent K L Lam; Kyong Soo Park; Danish Saleheen; Wing Yee So; Claudia H T Tam; Uzma Afzal; David Aguilar; Rector Arya; Tin Aung; Edmund Chan; Carmen Navarro; Ching-Yu Cheng; Domenico Palli; Adolfo Correa; Joanne E Curran; Denis Rybin; Vidya S Farook; Sharon P Fowler; Barry I Freedman; Michael Griswold; Daniel Esten Hale; Pamela J Hicks; Chiea-Chuen Khor; Satish Kumar; Benjamin Lehne; Dorothée Thuillier; Wei Yen Lim; Jianjun Liu; Yvonne T van der Schouw; Marie Loh; Solomon K Musani; Sobha Puppala; William R Scott; Loïc Yengo; Sian-Tsung Tan; Herman A Taylor; Farook Thameem; Gregory Wilson; Tien Yin Wong; Pål Rasmus Njølstad; Jonathan C Levy; Massimo Mangino; Lori L Bonnycastle; Thomas Schwarzmayr; João Fadista; Gabriela L Surdulescu; Christian Herder; Christopher J Groves; Thomas Wieland; Jette Bork-Jensen; Ivan Brandslund; Cramer Christensen; Heikki A Koistinen; Alex S F Doney; Leena Kinnunen; Tõnu Esko; Andrew J Farmer; Liisa Hakaste; Dylan Hodgkiss; Jasmina Kravic; Valeriya Lyssenko; Mette Hollensted; Marit E Jørgensen; Torben Jørgensen; Claes Ladenvall; Johanne Marie Justesen; Annemari Käräjämäki; Jennifer Kriebel; Wolfgang Rathmann; Lars Lannfelt; Torsten Lauritzen; Narisu Narisu; Allan Linneberg; Olle Melander; Lili Milani; Matt Neville; Marju Orho-Melander; Lu Qi; Qibin Qi; Michael Roden; Olov Rolandsson; Amy Swift; Anders H Rosengren; Kathleen Stirrups; Andrew R Wood; Evelin Mihailov; Christine Blancher; Mauricio O Carneiro; Jared Maguire; Ryan Poplin; Khalid Shakir; Timothy Fennell; Mark DePristo; Martin Hrabé de Angelis; Panos Deloukas; Anette P Gjesing; Goo Jun; Peter Nilsson; Jacquelyn Murphy; Robert Onofrio; Barbara Thorand; Torben Hansen; Christa Meisinger; Frank B Hu; Bo Isomaa; Fredrik Karpe; Liming Liang; Annette Peters; Cornelia Huth; Stephen P O'Rahilly; Colin N A Palmer; Oluf Pedersen; Rainer Rauramaa; Jaakko Tuomilehto; Veikko Salomaa; Richard M Watanabe; Ann-Christine Syvänen; Richard N Bergman; Dwaipayan Bharadwaj; Erwin P Bottinger; Yoon Shin Cho; Giriraj R Chandak; Juliana C N Chan; Kee Seng Chia; Mark J Daly; Shah B Ebrahim; Claudia Langenberg; Paul Elliott; Kathleen A Jablonski; Donna M Lehman; Weiping Jia; Ronald C W Ma; Toni I Pollin; Manjinder Sandhu; Nikhil Tandon; Philippe Froguel; Inês Barroso; Yik Ying Teo; Eleftheria Zeggini; Ruth J F Loos; Kerrin S Small; Janina S Ried; Ralph A DeFronzo; Harald Grallert; Benjamin Glaser; Andres Metspalu; Nicholas J Wareham; Mark Walker; Eric Banks; Christian Gieger; Erik Ingelsson; Hae Kyung Im; Thomas Illig; Paul W Franks; Gemma Buck; Joseph Trakalo; David Buck; Inga Prokopenko; Reedik Mägi; Lars Lind; Yossi Farjoun; Katharine R Owen; Anna L Gloyn; Konstantin Strauch; Tiinamaija Tuomi; Jaspal Singh Kooner; Jong-Young Lee; Taesung Park; Peter Donnelly; Andrew D Morris; Andrew T Hattersley; Donald W Bowden; Francis S Collins; Gil Atzmon; John C Chambers; Timothy D Spector; Markku Laakso; Tim M Strom; Graeme I Bell; John Blangero; Ravindranath Duggirala; E Shyong Tai; Gilean McVean; Craig L Hanis; James G Wilson; Mark Seielstad; Timothy M Frayling; James B Meigs; Nancy J Cox; Rob Sladek; Eric S Lander; Stacey Gabriel; Noël P Burtt; Karen L Mohlke; Thomas Meitinger; Leif Groop; Goncalo Abecasis; Jose C Florez; Laura J Scott; Andrew P Morris; Hyun Min Kang; Michael Boehnke; David Altshuler; Mark I McCarthy
Journal:  Nature       Date:  2016-07-11       Impact factor: 69.504

10.  Genome-to-genome analysis highlights the effect of the human innate and adaptive immune systems on the hepatitis C virus.

Authors:  M Azim Ansari; Vincent Pedergnana; Camilla L C Ip; Andrea Magri; Annette Von Delft; David Bonsall; Nimisha Chaturvedi; Istvan Bartha; David Smith; George Nicholson; Gilean McVean; Amy Trebes; Paolo Piazza; Jacques Fellay; Graham Cooke; Graham R Foster; Emma Hudson; John McLauchlan; Peter Simmonds; Rory Bowden; Paul Klenerman; Eleanor Barnes; Chris C A Spencer
Journal:  Nat Genet       Date:  2017-04-10       Impact factor: 41.307

Cited by:  50 in total

1.  Recent advances toward understanding the mysteries of the acute to chronic pain transition.

Authors:  Theodore J Price; Pradipta R Ray
Journal:  Curr Opin Physiol       Date:  2019-06-04

2.  Reply to Held: When is a harmonic mean p-value a Bayes factor?

Authors:  Daniel J Wilson
Journal:  Proc Natl Acad Sci U S A       Date:  2019-03-19       Impact factor: 11.205

3.  On the Bayesian interpretation of the harmonic mean p-value.

Authors:  Leonhard Held
Journal:  Proc Natl Acad Sci U S A       Date:  2019-03-19       Impact factor: 11.205

4.  Cell Type-Specific Transcriptomics Reveals that Mutant Huntingtin Leads to Mitochondrial RNA Release and Neuronal Innate Immune Activation.

Authors:  Hyeseung Lee; Robert J Fenster; S Sebastian Pineda; Whitney S Gibbs; Shahin Mohammadi; Jose Davila-Velderrain; Francisco J Garcia; Martine Therrien; Hailey S Novis; Fan Gao; Hilary Wilkinson; Thomas Vogt; Manolis Kellis; Matthew J LaVoie; Myriam Heiman
Journal:  Neuron       Date:  2020-07-17       Impact factor: 17.173

5.  Treefrogs exploit temporal coherence to form perceptual objects of communication signals.

Authors:  Saumya Gupta; Mark A Bee
Journal:  Biol Lett       Date:  2020-09-23       Impact factor: 3.703

6.  The harmonic mean p-value: Strong versus weak control, and the assumption of independence.

Authors:  Jelle J Goeman; Jonathan D Rosenblatt; Thomas E Nichols
Journal:  Proc Natl Acad Sci U S A       Date:  2019-10-29       Impact factor: 11.205

7.  MotifAnalyzer-PDZ: A computational program to investigate the evolution of PDZ-binding target specificity.

Authors:  Jordan Valgardson; Robin Cosbey; Paul Houser; Milo Rupp; Raiden Van Bronkhorst; Michael Lee; Filip Jagodzinski; Jeanine F Amacher
Journal:  Protein Sci       Date:  2019-11-01       Impact factor: 6.725

8.  Rethinking carcinogenesis: The detached pericyte hypothesis.

Authors:  Stuart G Baker
Journal:  Med Hypotheses       Date:  2020-06-30       Impact factor: 1.538

9.  Adaptive group-regularized logistic elastic net regression.

Authors:  Magnus M Münch; Carel F W Peeters; Aad W Van Der Vaart; Mark A Van De Wiel
Journal:  Biostatistics       Date:  2021-10-13       Impact factor: 5.899

10.  Reply to Goeman et al.: Trade-offs in model averaging using multilevel tests.

Authors:  Daniel J Wilson
Journal:  Proc Natl Acad Sci U S A       Date:  2019-10-29       Impact factor: 11.205

