
A comparison of aggregate p-value methods and multivariate statistics for self-contained tests of metabolic pathway analysis.

Matthew W Mitchell

Abstract

For pathway analysis of genomic data, the most common methods involve combining p-values from individual statistical tests. However, there are several multivariate statistical methods that can be used to test whether a pathway has changed. Because of the large number of variables and pathway sizes in genomics data, some of these statistics cannot be computed. In metabolomics data, however, the number of variables and pathway sizes are typically much smaller, making such computations feasible. Of particular interest is being able to detect changes in pathways that may not be detected for the individual variables. We compare the performance of both the p-value methods and multivariate statistics for self-contained tests with an extensive simulation study and a human metabolomics study. Permutation tests, rather than asymptotic results, are used to assess the statistical significance of the pathways. Furthermore, both one- and two-sided alternative hypotheses are examined. In the human metabolomics study, many pathways were statistically significant, although the majority of the individual variables in the pathways were not. Overall, the p-value methods perform at least as well as the multivariate statistics for these scenarios.


Year:  2015        PMID: 25927705      PMCID: PMC4415974          DOI: 10.1371/journal.pone.0125081

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

In metabolomics, we are interested not only in changes for individual metabolites, but also in groups of related metabolites, which may be metabolites in the same class, metabolites in the same pathway, or other groups of bio-signatures. For simplicity, all of these categories will henceforth be referred to as “pathways.” In particular, we are interested in the cases where the individual metabolites miss the cut-off for statistical significance in univariate analyses, but in aggregate are found to be statistically significant. For example, suppose we observe eight metabolites in a pathway, each with a p-value of 0.07. If the pair-wise correlations (unless otherwise stated, “correlation” refers to the Pearson correlation) are 0.99, then we expect the aggregate p-value to be similar to an individual p-value. However, if the metabolites are statistically independent, then the Fisher meta-analysis [1] p-value is 0.0003. So the aggregate p-value could range from 0.07 (all correlations equal to 1) to 0.0003. Hence, it is desirable to formally test whether a pathway has changed.

In genomics, there are two categories of pathway analysis: (1) competitive tests and (2) self-contained tests [2, 3]. Competitive tests assess whether there are more changes for the genes in a given pathway than for those not in that pathway. Hence, the scenario in the previous paragraph would not be detected if a p-value < 0.05 is used to define a metabolite as “changing.” Self-contained tests assess whether the pathway has changed without regard to genes in other pathways. These tests are of more interest for our applications and will be the focus of this paper. For genomics pathway analysis, the methods for self-contained tests often involve combining the p-values, and in some cases “data modeling” (e.g., logistic regression, principal components analysis) [2-5] is performed, but various multivariate statistics, such as Hotelling’s T² statistic [6], are not considered.
Conversely, in the multivariate methods literature, the aforementioned p-value methods are not considered; instead, multivariate statistics are employed [7-14]. Some of these methods, such as Hotelling’s T² statistic, require the inversion of the sample covariance matrix, which is not possible when the number of observations is less than the number of variables, as is typically the case for -omics data. However, other multivariate statistics, such as those proposed by Dempster [7, 8], Bai and Saranadasa [9], and Srivastava and Du [10], do not require the inversion of the sample covariance matrix. The distributions of some of these statistics, such as the Bai-Saranadasa statistic [9], rely on asymptotic results, which require even larger sample sizes. Thus, in genomics, many of these statistics will not apply. However, while genomics data sets typically have over 10,000 variables, metabolomics data sets often have fewer than 1,000 (chemo-centric, rather than ion-centric [15]). Furthermore, the pathway sizes for metabolomics are also considerably smaller, with many pathways containing fewer than 20 metabolites. Thus, these multivariate statistics can apply in many cases for metabolomics data. In the next section, we describe various methods that aggregate p-values and methods that employ multivariate statistics. Only one “data modeling” approach is considered, because for the global models mentioned in [3], logistic regression performed on multiple principal components often failed to converge for the smaller sample sizes in the simulation study. Many of the tests described are two-sided tests; however, we often have pre-specified alternatives, i.e., we expect the metabolites in a pathway to change in a certain direction, so statistics that can take this into account are also desirable. We thus adapt several methods for one-sided alternatives. These statistical tests are compared in a simulation study and then in an application to a human metabolomics study.
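The Fisher-combination arithmetic from the opening example can be checked directly. The sketch below (plain Python, no external libraries; the function name is our own) combines m independent p-values via $-2\sum_i \ln p_i$ and evaluates the chi-squared survival function, which has a closed form for even degrees of freedom:

```python
from math import exp, log

def fisher_pvalue(pvals):
    """Fisher's combined p-value, assuming independent tests.

    The statistic x = -2 * sum(log(p_i)) is chi-squared with 2m degrees
    of freedom; for even df the survival function has the closed form
    P(chi2_{2m} > x) = exp(-x/2) * sum_{k=0}^{m-1} (x/2)^k / k!.
    """
    m = len(pvals)
    half = -sum(log(p) for p in pvals)  # x/2
    term, total = 1.0, 1.0
    for k in range(1, m):
        term *= half / k
        total += term
    return exp(-half) * total

# Eight metabolites, each with p = 0.07, treated as independent:
p = fisher_pvalue([0.07] * 8)
```

For eight independent p-values of 0.07 this returns approximately 0.0003, matching the value quoted above; with a single p-value it reduces to that p-value itself.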

Materials and Methods

Let X11, X12, …, X1,n1 be random vectors from a distribution with mean μ1 = (μ11, μ21, …, μm1), and let X21, X22, …, X2,n2 be random vectors from a distribution with mean μ2 = (μ12, μ22, …, μm2). Then, the hypotheses of interest are H0: μ1 = μ2 vs. Ha: μ1 ≠ μ2. Here n1 and n2 represent the two sample sizes, and m represents the number of variables. Hence, we are testing that no metabolites are truly differential against the alternative that at least one metabolite is truly differential.

Test Statistics for Pathway Analysis

The following methods are compared for testing the above hypothesis.

Fisher’s method. Fisher’s method [1] is often used in meta-analysis for testing the same hypothesis across different data sets, so in this context each member of the pathway is seen as a representative of the same biology. Hence, this test will not work well in the case where one variable changes but the others do not; however, that case is not of primary interest for our applications. Let p_i, i = 1, 2, …, m, be the set of p-values for the m members of the pathway. For example, these could be the p-values that result from performing two-sided t-tests for each metabolite. If the variables are independent, then the p-values have a Uniform(0,1) distribution, and thus the statistic

(1)  $X_F = -2 \sum_{i=1}^{m} \ln p_i$

has a chi-squared distribution with 2m degrees of freedom. Since, in the scenarios examined, we are considering related metabolites, which often have strong correlations, we use a permutation test as described by Fridley [3] instead of the chi-squared test.

Tail strength. The tail strength measure [4] was developed to give an overall measure of significance for a set of hypotheses and has also been used to test the significance of pathways [2, 3]. The test statistic compares the ordered p-values to the expected moments of the Uniform(0,1) distribution, i.e.,

(2)  $TS = \frac{1}{m} \sum_{i=1}^{m} \left(1 - p_{(i)} \frac{m+1}{i}\right)$,

where p_(i) is the ith ordered p-value, i.e., p_(1) is the smallest p-value and p_(m) is the largest. This statistic is approximately normal when m is large, which is not the case in our scenarios, so as with (1) we determine its distribution with a permutation test. Alternatively, since the variables are correlated, the moments could also be estimated from the permutation distribution, but this was not done here.

Adaptive rank truncated product. For the adaptive rank truncated product method (ARTP) [5], the J smallest p-values are used for the statistic

(3)  $W_J = \prod_{j=1}^{J} p_{(j)}$,

where J is chosen to minimize the p-value.
More specifically, first permute the group labels B times to generate B permuted data sets. Next, for each of these data sets, compute the m p-values p^(b)_(1), p^(b)_(2), …, p^(b)_(m). Then, for each truncation point J, 1 ≤ J ≤ m, and each data set b, 0 ≤ b ≤ B (b = 0 corresponds to the original data set), compute

$W_J^{(b)} = \prod_{j=1}^{J} p_{(j)}^{(b)}$.

Now for each b and J, compute the estimated p-value with Ge’s algorithm,

$\hat{s}_J^{(b)} = \frac{1}{B} \sum_{b'=1}^{B} I\left(W_J^{(b')} \le W_J^{(b)}\right)$,

where I is the usual indicator function. Next, let $MinP^{(b)} = \min_{1 \le J \le m} \hat{s}_J^{(b)}$. Then the p-value for the adaptive rank truncated product is given by

(4)  $p_{ARTP} = \frac{1}{B} \sum_{b=1}^{B} I\left(MinP^{(b)} \le MinP^{(0)}\right)$.

Principal components analysis (PCA). There are various ways a principal components analysis (PCA) can be performed: one may simply choose a set number of components, or choose the first k components so that a certain percentage (such as 80%) of the total variation is accounted for [3], and then perform logistic regression on these components. Because the numbers of variables in the pathways under consideration are low, we simply choose the first component and then perform a two-sample t-test. One advantage of this method is that a single score can be obtained for each sample and fold changes computed.

Hotelling’s T². Let X11, X12, …, X1,n1 be i.i.d. random vectors from a multivariate normal distribution with mean μ1 = (μ11, μ21, …, μm1) and covariance matrix Σ, and let X21, X22, …, X2,n2 be i.i.d. random vectors from a multivariate normal distribution with mean μ2 = (μ12, μ22, …, μm2) and the same covariance matrix Σ. The test statistic is

(5)  $T^2 = \frac{n_1 n_2}{n_1 + n_2} (\bar{X}_1 - \bar{X}_2)' S_p^{-1} (\bar{X}_1 - \bar{X}_2)$,

where $\bar{X}_1$ and $\bar{X}_2$ represent the usual vectors of sample means and $S_p = \frac{(n_1 - 1) S_1 + (n_2 - 1) S_2}{n_1 + n_2 - 2}$ represents the pooled estimate of the covariance matrix, with S1 and S2 representing the usual sample covariance matrices [6]. Under the null hypothesis, $\frac{n_1 + n_2 - m - 1}{(n_1 + n_2 - 2)\, m}\, T^2$ has an F-distribution with m and n1 + n2 − m − 1 degrees of freedom. As one can see, this requires multivariate normality, the same covariance matrix for each group, and that the total number of observations (n1 + n2) be greater than m + 1. If the normality assumption is questionable, a permutation distribution can be used instead of the F-distribution. Dempster’s test.
Dempster [7, 8] also developed a multivariate statistic for these scenarios. Let n = n1 + n2 − 2. The test statistic is given by

(6)  $T_D = \frac{\frac{n_1 n_2}{n_1 + n_2} \left\| \bar{X}_1 - \bar{X}_2 \right\|^2}{\mathrm{tr}(S_p)}$,

where tr represents the trace function. When the data are sampled from a multivariate normal distribution, this has an approximate F-distribution with r and nr degrees of freedom, where r = [tr(Σ)]² / tr(Σ²), which can be estimated from the sample covariance matrix [9, 10]. Rather than using this approximation for the test statistic’s distribution, which relies on both the F-approximation and the estimated degrees of freedom, a permutation-based p-value is used instead. Like Hotelling’s T², this requires the same covariance matrix for each group, but it has a strong advantage over Hotelling’s T² because the test statistic does not require the inversion of the sample covariance matrix.

Bai-Saranadasa test. Bai and Saranadasa [9] also developed a test statistic that does not require the inversion of the sample covariance matrix. Let n = n1 + n2 − 2. Their test statistic is given by

(7)  $Z = \frac{\frac{n_1 n_2}{n_1 + n_2} \left\| \bar{X}_1 - \bar{X}_2 \right\|^2 - \mathrm{tr}(S_p)}{\sqrt{\frac{2 (n+1) n}{(n+2)(n-1)} \left[ \mathrm{tr}(S_p^2) - \frac{1}{n} \left( \mathrm{tr}\, S_p \right)^2 \right]}}$.

This statistic has an asymptotic N(0,1) distribution, but since our sample sizes may be too small for the approximation, we again use a permutation distribution of the statistic to determine the p-value.

Srivastava-Du test. Srivastava and Du [10] also developed a multivariate test statistic that does not require the inversion of the sample covariance matrix. This test statistic is given by

(8)  $T_{SD} = \frac{\frac{n_1 n_2}{n_1 + n_2} (\bar{X}_1 - \bar{X}_2)' D_S^{-1} (\bar{X}_1 - \bar{X}_2) - \frac{n m}{n - 2}}{\sqrt{2 \left[ \mathrm{tr}(R^2) - \frac{m^2}{n} \right] c}}$,

where D_S is the matrix with the sample pooled variances on the diagonal and zeros elsewhere, c = 1 + tr(R²)/m^{3/2}, and R = D_S^{-1/2} S_p D_S^{-1/2}, with D_S^{-1/2} the matrix with the reciprocals of the pooled standard deviations on the diagonal and zeros elsewhere. As with (1)-(3), (6), and (7), a permutation distribution is used to compute the p-values.
For one-sided alternatives, we propose a simple modification of statistics of the form (2.4) given in Sen [11], which were used in the context of union-intersection tests for one-sample hypotheses of the form H0: θ = 0 and are built from the usual one-sample mean vector and covariance matrix. We propose the natural two-sample analog. Let a represent the vector of coefficients related to the hypothesized direction of change. For example, if one is considering four variables and expects them all to increase under one experimental condition (i.e., one expects the means for group 2 to be greater than those for group 1), then a = (1, 1, 1, 1); if one expects the first two variables to increase but the last two to decrease, then a = (1, 1, -1, -1). The proposed statistic for one-sided alternatives is

(9)  $T_0 = \frac{a' (\bar{X}_2 - \bar{X}_1)}{\sqrt{a' S_p a}}$.

As with (6)-(8), this does not require inversion of the sample covariance matrix. Furthermore, we are not concerned with its asymptotic distribution, as a permutation distribution will be used. We also consider one-sided versions of (1)-(3): where the individual two-sided p-values from t-tests were used to compute the statistics in (1)-(3), we now compute those statistics from the one-sided p-values instead.

Additional one-sided alternatives. In the literature, there are several publications concerning tests of whether the parameters fall in the positive orthant, i.e., where at least one of the mean differences is positive and the others are non-negative [12-14]. A common form of this test uses the two-sided test statistic in conjunction with the condition that the sum of the sample mean differences is positive: for a level of 0.05, reject if the test statistic has a p-value less than 0.10 and the sum of the mean differences is positive [12].
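As a concrete illustration, the one-sided statistic and its permutation p-value can be sketched as follows. This is a minimal sketch, assuming T0 takes the projected-mean-difference form described above scaled by a'S_p a; the function names and the simulated data are our own choices, not taken from the paper:

```python
import numpy as np

def t0_statistic(X1, X2, a):
    """One-sided pathway statistic: project the mean difference onto the
    hypothesized direction a and scale by the pooled covariance.
    Only a quadratic form a'S_p a is needed, so the covariance matrix is
    never inverted."""
    n1, n2 = len(X1), len(X2)
    diff = X2.mean(axis=0) - X1.mean(axis=0)
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)
    return a @ diff / np.sqrt(a @ Sp @ a)

def permutation_pvalue(X1, X2, a, B=1000, seed=0):
    """Empirical p-value: proportion of label permutations whose
    statistic is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([X1, X2])
    n1 = len(X1)
    obs = t0_statistic(X1, X2, a)
    count = 0
    for _ in range(B):
        idx = rng.permutation(len(pooled))
        count += t0_statistic(pooled[idx[:n1]], pooled[idx[n1:]], a) >= obs
    return count / B

# Example with a clear shift in the hypothesized direction (simulated data):
rng = np.random.default_rng(1)
X1 = rng.normal(0.0, 0.3, size=(10, 4))
X2 = rng.normal(0.6, 0.3, size=(10, 4))
pval = permutation_pvalue(X1, X2, np.ones(4), B=200, seed=2)
```

With a large shift in the direction a, the observed statistic exceeds essentially all permuted values and the empirical p-value is near zero.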

Data

Simulation Study

For the simulation study, we use sample sizes and numbers of variables often seen in metabolomic animal studies, which typically have low numbers of animals (of the 100 most recently analyzed data sets at Metabolon, 39 were rodent studies with median group size 5, and 38 of the 39 had group sizes from 2 to 8 animals) and small pathway sizes. We use n1 = n2 = 5, n1 = n2 = 10, and m = 4, 8. The data are simulated from multivariate normal distributions with the same covariance matrix for each group. The correlation matrices have all off-diagonal elements equal to ρ; let R1, R2, R3, R4 represent the correlation matrices with ρ = 0, 0.5, 0.7, and 0.9, respectively. Let 1_k and 0_k represent the vectors of length k given by (1, 1, …, 1) and (0, 0, …, 0), respectively. The population standard deviations are σ11 = 0.3·1_4, σ12 = (0.15, 0.25, 0.35, 0.45), σ21 = 0.3·1_8, and σ22 = (σ12, σ12) = (0.15, 0.25, 0.35, 0.45, 0.15, 0.25, 0.35, 0.45). The differences in means assessed are m10 = 0_4, m11 = 0.15·1_4, m12 = 0.3·1_4, m13 = (0.3, 0, 0, 0), m20 = 0_8, m21 = 0.15·1_8, m22 = 0.3·1_8, and m23 = (0.3, 0, 0, 0, 0, 0, 0, 0). For each permutation test, 1,000 permutations were used to determine the p-value, i.e., the group labels were permuted 1,000 times and the test statistic was computed for each permuted data set. The p-value is computed as the proportion of these values at least as large as the observed value. More formally, let Y_0 represent the observed test statistic (from (1)-(13)), and let Y_1, Y_2, …, Y_1000 represent the test statistics from the permuted data sets; then the empirical p-value is

$p = \frac{1}{1000} \sum_{j=1}^{1000} I(Y_j \ge Y_0)$.

Next, 1,000 simulation runs were performed to determine the empirical type I error and power for each combination of parameters under consideration, i.e., the power or type I error is estimated as

$\frac{1}{1000} \sum_{j=1}^{1000} I(p_j \le \alpha)$,

where α is the level of the test and p_j is the p-value computed from the jth simulation run.
For the one-sided alternatives, the tests used a = 1_4 or a = 1_8 (i.e., all mean changes are positive). All simulations were performed in R [16], and Hotelling’s T² was computed with the package “Hotelling” [17].
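The empirical p-value and power estimators above amount to simple counting; a minimal sketch (the helper names are our own):

```python
import numpy as np

def empirical_pvalue(y0, y_perm):
    """p = (1/B) * sum I(Y_b >= Y_0): the proportion of permuted
    statistics at least as large as the observed statistic Y_0."""
    return float(np.mean(np.asarray(y_perm) >= y0))

def empirical_power(pvalues, alpha=0.05):
    """Proportion of simulation runs whose p-value is at most alpha;
    under the null hypothesis this estimates the type I error."""
    return float(np.mean(np.asarray(pvalues) <= alpha))
```

For example, an observed statistic of 2.0 against permuted values (1.0, 2.5, 0.5, 3.0) gives p = 0.5, and simulated p-values (0.01, 0.2, 0.04, 0.5) give an empirical power of 0.5 at α = 0.05.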

Insulin Resistance Human Metabolomic Study

Next, we apply these methods to a large human metabolomics data set concerning insulin resistance [18], which has been previously published in this journal. The comparison is between insulin resistant (“IR”) subjects (n1 = 138) and insulin sensitive (“IS”) subjects (n2 = 261), as defined in that paper. There are many more subjects in this study than in most metabolomics data sets, as it was used for human biomarker discovery. More details on the data processing are given in that paper. All statistics are performed on the log-transformed data. This data set illustrates many of the difficulties of performing metabolic pathway analysis in practice, many of which are similar to problems encountered in genomics pathway analysis. For example, just as genes belong to multiple pathways [19, 20], some metabolites belong to multiple pathways: using KEGG [21], alanine is listed in 17 pathways, many of which overlap. Furthermore, incomplete, limited, or erroneous annotations pose problems in classifying genes into pathways [19, 20], and the same occurs in metabolomics. For example, in KEGG [21] some metabolites are not assigned to any pathway, such as 1-methyladenosine, which has one reaction listed but no pathways. Additionally, in metabolomics, not every element of a pathway may be observed, depending on the concentration of the metabolite and the technology used to measure its levels. Note that since the statistics used are functional class scoring methods [19], specific relationships within a pathway are not modeled [19], and the analyses performed do not take into account the relationships of the pathways to each other [19, 20]. For this data set, each metabolite was assigned to a single pathway as defined by in-house experts, who made use of public databases such as KEGG [21].
As mentioned previously, “pathway” is being used to refer to any group of related metabolites, so metabolites in the same class may be grouped even though they do not belong to the same physical pathway. Pathways are larger in genomics studies, where the minimum and maximum numbers of elements in the pathways considered are often 10 and 100–200 genes, respectively [20]. Virtually none of the metabolic pathways would satisfy these criteria, as metabolic pathways tend to be much smaller. Here, pathways with only one observed metabolite were excluded from the analysis. Since this data set has large sample sizes, the permutation distributions for each statistic were determined from 10,000 permutations, rather than the 1,000 permutations used for the simulation study.

Results

Simulation Study

For all tables, MU is the mean difference; σ is the vector of standard deviations; ρ is the pair-wise correlation; N = n1 = n2; FX = Fisher’s statistic using the chi-squared distribution; FP = Fisher’s statistic using the permutation distribution; TS = tail strength statistic; ARTP = adaptive rank truncated product; PCA = the two-sample t-test on the first principal component; HT = Hotelling’s T²; BSN = Bai-Saranadasa statistic using the normal approximation; BSP = Bai-Saranadasa statistic using the permutation distribution; DM = Dempster’s statistic; SD = Srivastava and Du’s statistic; and T0 = the two-sample analog of Sen’s statistic given in (9).

We first assess the type I error of each test under the various parameter combinations listed in the previous section. The chi-squared statistic for (1) (FX) and the normal approximation for (7) (BSN) are also examined in order to see how close they come to the desired type I error. A level of α = 0.05 was used for each test. As we can see from Tables 1 and 2, the non-permutation-based measures (FX, BSN) have type I errors larger than the nominal level, especially with highly correlated variables, as expected. All of the others (the permutation-based measures) have type I errors at the nominal level. The results for the one-sided tests (not shown) were similar. Note that if the true type I error is 0.05, then the standard error of its estimate is $\sqrt{0.05 \cdot 0.95 / 1000} \approx 0.007$.
Table 1

Empirical Type I Error, 4 variables, two-sided tests.

σ     ρ    N    FX     FP     TS     ARTP   PCA    HT     BSN    BSP    DM     SD
σ11   0.9  5    0.136  0.061  0.061  0.061  0.046  0.052  0.116  0.059  0.059  0.059
σ12   0.9  5    0.141  0.059  0.060  0.062  0.045  0.057  0.120  0.058  0.058  0.060
σ11   0.7  5    0.098  0.051  0.049  0.055  0.043  0.036  0.099  0.039  0.039  0.039
σ12   0.7  5    0.104  0.052  0.052  0.047  0.034  0.048  0.105  0.060  0.060  0.057
σ11   0.5  5    0.099  0.065  0.072  0.062  0.054  0.059  0.123  0.059  0.061  0.064
σ12   0.5  5    0.072  0.042  0.046  0.045  0.032  0.044  0.107  0.055  0.054  0.044
σ11   0    5    0.051  0.055  0.054  0.055  0.051  0.041  0.096  0.053  0.048  0.052
σ12   0    5    0.039  0.045  0.046  0.058  0.034  0.044  0.097  0.059  0.056  0.039
σ11   0.9  10   0.111  0.041  0.040  0.044  0.038  0.040  0.078  0.044  0.044  0.045
σ12   0.9  10   0.127  0.043  0.044  0.043  0.042  0.052  0.080  0.043  0.043  0.044
σ11   0.7  10   0.115  0.052  0.048  0.054  0.048  0.048  0.091  0.037  0.036  0.036
σ12   0.7  10   0.103  0.043  0.042  0.050  0.041  0.043  0.083  0.036  0.036  0.036
σ11   0.5  10   0.095  0.059  0.056  0.064  0.049  0.047  0.093  0.054  0.053  0.051
σ12   0.5  10   0.067  0.045  0.048  0.044  0.041  0.051  0.078  0.053  0.052  0.048
σ11   0    10   0.050  0.051  0.049  0.049  0.044  0.041  0.080  0.035  0.036  0.041
σ12   0    10   0.066  0.065  0.050  0.064  0.055  0.052  0.080  0.055  0.057  0.045
Table 2

Empirical Type I Error, 8 variables, two-sided tests.

σ     ρ    N    FX     FP     TS     ARTP   PCA    HT     BSN    BSP    DM     SD
σ21   0.9  5    0.212  0.056  0.057  0.068  0.050  0.059  0.132  0.049  0.049  0.049
σ22   0.9  5    0.194  0.057  0.057  0.058  0.045  0.045  0.120  0.048  0.048  0.047
σ21   0.7  5    0.148  0.051  0.052  0.056  0.047  0.057  0.111  0.046  0.045  0.049
σ22   0.7  5    0.150  0.052  0.049  0.053  0.047  0.058  0.107  0.044  0.046  0.049
σ21   0.5  5    0.101  0.038  0.042  0.044  0.033  0.050  0.089  0.050  0.049  0.050
σ22   0.5  5    0.140  0.063  0.058  0.064  0.048  0.045  0.107  0.052  0.049  0.051
σ21   0    5    0.053  0.061  0.050  0.064  0.058  0.050  0.097  0.056  0.053  0.057
σ22   0    5    0.039  0.046  0.044  0.052  0.043  0.049  0.084  0.041  0.039  0.063
σ21   0.9  10   0.202  0.060  0.060  0.055  0.059  0.043  0.094  0.045  0.045  0.045
σ22   0.7  10   0.157  0.053  0.059  0.054  0.052  0.058  0.094  0.051  0.051  0.053
σ21   0.5  10   0.132  0.057  0.053  0.054  0.051  0.056  0.100  0.050  0.049  0.050
σ22   0    10   0.049  0.047  0.046  0.047  0.049  0.045  0.077  0.048  0.047  0.051
σ21   0.9  10   0.166  0.051  0.051  0.054  0.049  0.052  0.088  0.050  0.051  0.050
σ22   0.7  10   0.136  0.044  0.045  0.047  0.044  0.058  0.078  0.043  0.045  0.039
σ21   0.5  10   0.121  0.050  0.052  0.051  0.047  0.046  0.080  0.049  0.048  0.045
σ22   0    10   0.048  0.043  0.048  0.054  0.052  0.038  0.083  0.045  0.043  0.044
Because of their inflated type I errors, FX and BSN are not considered henceforth. The empirical power is computed for the combinations of means and covariances defined in the previous section. Both Bai and Saranadasa [9] and Srivastava and Du [10] have shown that, asymptotically, Dempster’s statistic has the same power as the Bai-Saranadasa statistic. For the small sample sizes used in the simulation runs, these two statistics also have essentially equivalent performance, as seen in Fig 1, which plots the empirical power of one against the other (the line y = x is shown for comparison) for all the combinations shown in S1 and S2 Tables (the two-sided tests). Since these two tests are essentially equivalent, the remaining comparisons reference only the BSP values.
Fig 1

Comparison of the empirical power for Dempster’s test (DM) and Bai-Saranadasa (BSP) for all of the two-sided tests.

The line y = x is plotted for comparison.

In the comparison of the p-value methods performed by Fridley et al., the Fisher statistic (FP) was the top overall performer [3]. Fig 2 shows the performance of the other tests relative to FP for the two-sided tests. Those above the line y = x have higher power than FP, and those below the line have lower power. From Fig 2, it is clear that Hotelling’s test (HT) performs much differently from the others. In many of the parameter combinations, it has much lower power than the other methods. To see this more specifically, Fig 3 shows the comparison of the empirical power for two groups of 5 subjects and 8 variables when all 8 elements of the mean vector are 0.3 (m22) and the standard deviations are 0.3 (σ21). The power to detect the change for a single variable is 0.29, and HT performs even worse than that (also see S1, S2, and S3 Tables). This low power is the result of the low sample size relative to the number of variables: 10 subjects (two groups of 5) is the minimum number required for the inversion of the sample covariance matrix, and HT is the only method that uses the inverse of the sample covariance matrix in its computation.
Fig 2

Comparison of the empirical power of Fisher’s statistic (FP) to the other statistics for the two-sided tests.

The line y = x is shown for comparison to Fisher’s empirical power.

Fig 3

Comparison of the empirical power for the tests for 8 variables and two groups of 5 when all the mean changes are 0.3 and all of the standard deviations are 0.3.

The vertical line shows the power to detect a single variable with a mean difference of 0.3 and standard deviation 0.3 (or the same ratio).

There are some cases where the power for HT is much larger than the others. For example, when there are two groups of 10 subjects each, the power of HT to detect the mean difference m13 = (0.3, 0, 0, 0) with all standard deviations 0.3 (σ11) is much higher than that of the others, and much higher than its ability to detect m12 = (0.3, 0.3, 0.3, 0.3)! Furthermore, this power increases with increasing correlation, when a decrease would be expected (see the bottom four rows of S1, S2, S4, and S5 Tables). This is very counterintuitive, but by examining the two-variable case, we can gain some insight. Suppose there are two variables with correlation ρ and standard deviation 1, and substitute the population means and covariance matrix into the test statistic, i.e., evaluate the quadratic form $\delta' \Sigma^{-1} \delta$, where δ is the vector of mean differences. When the mean difference for each variable is μ, this reduces to $\frac{2\mu^2}{1+\rho}$, and if the mean difference for variable one is μ but for variable two is 0, we have $\frac{\mu^2}{1-\rho^2}$. For independent variables this is 2μ² vs. μ². However, when ρ = 0.9, the first expression is approximately 1.05μ², while the second is approximately 5.26μ²! This is definitely not a desirable property for the power. The other multivariate statistics behave as desired. From Fig 2, we see that SD and ARTP appear to be the most competitive methods with FP, while PCA, TS, and BSP often underperform compared to the others. To see some of the patterns more clearly, the empirical power for SD compared to BSP is shown in Fig 4 for the two-sided tests for all combinations except those where the means had only one non-zero element (m13 and m23).
From Fig 4, one can clearly see that in the cases where the standard deviations of the variables are all equal (σ11 and σ21), the two methods are comparable; however, when the standard deviations are mixed (σ12 and σ22), SD clearly outperforms BSP. In Fig 5, a similar comparison is made for the p-value methods, with the line y = x indicating deviations from FP. For the mixed standard deviations, TS clearly underperforms FP and ARTP, and ARTP often outperforms FP; in many of the cases where the standard deviations are equal, however, ARTP underperforms FP. Fig 6 shows a comparison of just the top 3 performers: FP, ARTP, and SD. Finally, to see the effect of larger sample sizes, FP was compared to SD for four variables with two groups of 20 subjects each and two groups of 50 subjects each: their empirical powers are virtually identical (see S6 Table).
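The counterintuitive behaviour of HT in the two-variable case discussed above can be reproduced numerically by evaluating the population quadratic form δ'Σ⁻¹δ (a small sketch; the helper name is our own):

```python
import numpy as np

def ht_noncentrality(delta, rho):
    """Evaluate delta' Sigma^{-1} delta for two variables with unit
    variance and correlation rho: the population analogue of Hotelling's
    T^2, up to a sample-size factor."""
    sigma = np.array([[1.0, rho], [rho, 1.0]])
    delta = np.asarray(delta, dtype=float)
    # Solve the linear system instead of forming the explicit inverse.
    return float(delta @ np.linalg.solve(sigma, delta))

mu = 1.0
both = ht_noncentrality([mu, mu], 0.9)   # 2*mu^2/(1+rho), about 1.05*mu^2
one = ht_noncentrality([mu, 0.0], 0.9)   # mu^2/(1-rho^2), about 5.26*mu^2
```

With ρ = 0.9, the single-variable shift yields a much larger quadratic form than the shift in both variables, reproducing the reversal described in the text; at ρ = 0 the values are simply 2μ² and μ².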
Fig 4

Comparison of the empirical power of Srivastava-Du’s test (SD) to Bai-Saranadasa’s test (BSP) for two-sided tests.

This includes all the data from S1 and S2 Tables except for the cases of one strong mean change (m13 and m23).

Fig 5

Comparison of the three p-value methods for the two-sided tests.

This includes all the data from S1 and S2 Tables except for the cases of one strong mean change (m13 and m23).

Fig 6

Comparison of the empirical power for the top 3 methods for two-sided tests.

For the one-sided tests, comparing S1 Table to S4 Table and S2 Table to S5 Table, one can clearly see the improvement in power over the two-sided tests, as expected. Next, we compare the top 3 performers from the two-sided tests (FP, ARTP, SD) and T0. These are shown in Fig 7, with FP compared against the other three. Here, all combinations were used except those where the mean vector contained only one non-zero element (m13 and m23). T0 generally underperforms against the others, except for independent variables. The others are fairly comparable, but FP consistently outperforms SD for the independent variables (although these two are similar for the two-sided tests; see the ρ = 0 rows of S1 and S2 Tables).
Fig 7

Comparison of the empirical power of Fisher’s test to the other top performers for one-sided alternatives.

This includes all the data from S4 and S5 Tables except for the cases of one strong mean change (m13 and m23).

Overall, we see that many of the methods are comparable for many of the combinations, but HT often has much lower power, sometimes lower than the power for a single variable. In most cases, the p-value methods perform at least as well as the multivariate statistics. In particular, FP is usually among the top-ranking statistics. ARTP is competitive with FP when the variances are unequal. SD is typically the top performer among the multivariate statistics and is often comparable to the top p-value methods, with the exception of independent variables for the one-sided tests.

Human Metabolomics Study

Table 3 shows a summary of the results from performing Welch’s two-sample t-test for each metabolite. As mentioned previously, any pathway where only one metabolite was observed was dropped from the analysis, leaving 39 pathways. Of these, only five contain at least 10 metabolites, with 24 the maximum. There is only one pathway in which every member is significant at the 0.05 level (P02, benzoate metabolism; see Table 3 and S7 Table), and as expected, the p-values for assessing the significance of this pathway are very strong (Table 4). Pathways such as P01, where none of the individual members are even close to significant, and pathways such as P17, which has many strong p-values, have the pathway statistics one would expect. The results for some of the other pathways are not obvious from simply examining the individual p-values. In particular, there are some cases where none of the individual p-values reach the threshold for significance, but the pathway statistic is indeed significant. Two such pathways are P13 (p-values 0.1334, 0.0540) and P31 (p-values 0.1332, 0.0563), whose p-values from Fisher’s statistic are 0.0449 and 0.0475, respectively. The individual p-values for P13 and P31 are similar to those for P08 (0.1444, 0.0633), but the p-value for P08 from Fisher’s statistic is 0.0857. The difference between these results can be explained by the differences in the correlations, which are positive in general (see S1 Fig). For the P13 and P31 pathways, the pair-wise correlations (computed from the pooled covariance matrices) are -0.09 and -0.08, respectively, so for both of these pathways the two members are essentially independent. However, for P08, the pair-wise correlation between its two members is 0.81, showing that these two variables are highly redundant; combining the signals therefore does not strengthen the overall result.
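For the two-metabolite pathways discussed above, the chi-squared version of Fisher's method (exact only under independence) closely tracks the reported permutation results. A small sketch (plain Python; the function name is our own):

```python
from math import exp, log

def fisher_chi2_pvalue(pvals):
    """Chi-squared version of Fisher's method (independence assumed);
    uses the closed-form chi-squared survival function, which is exact
    for even degrees of freedom 2m."""
    m = len(pvals)
    half = -sum(log(p) for p in pvals)  # x/2, where x = -2*sum(log p)
    term, total = 1.0, 1.0
    for k in range(1, m):
        term *= half / k
        total += term
    return exp(-half) * total

p13 = fisher_chi2_pvalue([0.1334, 0.0540])  # near-independent members
p08 = fisher_chi2_pvalue([0.1444, 0.0633])  # highly correlated members
```

The near-independent P13 members give approximately 0.043, close to the reported permutation value of 0.0449, while for the highly correlated P08 members the independence-based value (approximately 0.052) understates the permutation p-value of 0.0857, illustrating why the permutation distribution is used.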
Table 3

Summary of individual p-values.

PATHWAY | m | sig a | p-values
P01 | 4 | 0 | 0.6721, 0.6190, 0.5413, 0.4519
P02 c | 3 | 3 | 0.0386, 1.17E-05, 2.41E-06
P03 | 2 | 0 | 0.4179, 0.2534
P04 | 2 | 1 | 0.0713, 2.95E-06
P05 | 2 | 1 | 0.3630, 0.0002
P06 | 8 | 4 | 0.6591, 0.6140, 0.4535, 0.2111, 0.0407, 0.0366, 0.0002, 3.81E-05
P07 | 2 | 0 | 0.7851, 0.5707
P08 | 2 | 0 | 0.1444, 0.0633
P09 | 6 | 1 | 0.9781, 0.6044, 0.5859, 0.1883, 0.1103, 0.0032
P10 | 3 | 1 | 0.8279, 0.4513, 4.25E-07
P11 | 7 | 3 | 0.3994, 0.2128, 0.1814, 0.0534, 0.0468, 0.0354, 0.0272
P12 | 3 | 0 | 0.7530, 0.2264, 0.1326
P13 | 2 | 0 | 0.1334, 0.0540
P14 | 5 | 4 | 0.9990, 0.0358, 0.0145, 0.0043, 1.60E-07
P15 | 5 | 2 | 0.4057, 0.3830, 0.0572, 0.0085, 9.70E-05
P16 | 5 | 1 | 0.4169, 0.2648, 0.1696, 0.1653, 0.0008
P17 | 13 | 8 | 0.6672–0.1693 (5) b, 0.0429, 0.0082, 0.0006, 0.0019, 0.0006–3.22E-05 (4)
P18 | 11 | 4 | 0.7849–0.2877 (5), 0.1214, 0.0502, 0.0040, 0.0006, 0.0004, 6.70E-06
P19 | 4 | 0 | 0.8485, 0.4272, 0.3245, 0.2271
P20 | 24 | 14 | 0.7215–0.2160 (4), 0.1445–0.0618 (6), 0.0427–0.001 (9), < 0.0001 (5)
P21 | 7 | 2 | 0.9093, 0.4074, 0.1114, 0.0844, 0.0601, 0.0051, 0.0099
P22 | 5 | 3 | 0.9603, 0.3958, 0.0005, 2.20E-05, 1.73E-19
P23 | 2 | 1 | 0.2578, 0.0323
P24 | 2 | 1 | 0.5845, 1.50E-06
P25 | 8 | 3 | 0.9331, 0.6834, 0.6588, 0.3732, 0.1910, 0.0139, 0.0038, 1.24E-05
P26 | 2 | 1 | 0.0019, 0.3110
P27 | 3 | 2 | 0.3674, 0.0375, 0.003
P28 | 10 | 5 | 0.8412–0.5434, 0.3493, 0.1509, 0.0345, 0.0223, 0.0137, 0.0035, 5.15E-06
P29 | 3 | 0 | 0.7889, 0.5844, 0.5531
P30 | 3 | 1 | 0.4557, 0.0629, 2.15E-06
P31 | 2 | 0 | 0.1332, 0.0563
P32 | 2 | 0 | 0.7619, 0.2288
P33 | 6 | 0 | 0.9291, 0.7553, 0.5283, 0.2887, 0.2235, 0.0614
P34 | 14 | 5 | 0.7938–0.5069 (6), 0.1521–0.1046, 0.0361, 0.0207, 0.0196, 0.0065, 0.0042
P35 | 3 | 0 | 0.8001, 0.4088, 0.1320
P36 | 4 | 3 | 0.1851, 0.0291, 0.0204, 0.0201
P37 | 8 | 1 | 0.9430, 0.5743, 0.5072, 0.3653, 0.2682, 0.0911, 0.0629, 5.74E-05
P38 | 9 | 2 | 0.8732, 0.7545, 0.5775, 0.3838, 0.2843, 0.2286, 0.1660, 0.0408, 0.0082
P39 | 4 | 1 | 0.8879, 0.4693, 0.3702, 0.0140

a refers to the number of p-values < 0.05

b (k) in the p-value column indicates that there are (k) p-values in the range

c in bold are pathways significant with Fisher’s statistic

Table 4

p-values of the pathway statistics for the human metabolomics study.

CODE | m | FP | TS | ARTP | PCA | HT | BS | PDM | SD
P01 | 4 | 0.7924 | 0.8284 | 0.8734 | 0.6562 | 0.8335 | 0.7825 | 0.7825 | 0.8547
P02 | 3 | <0.0001 | 5.00E-04 | <0.0001 | 7.05E-07 | 4.21E-06 | <0.0001 | <0.0001 | <0.0001
P03 | 2 | 0.3364 | 0.2839 | 0.3821 | 0.2270 | 0.4663 | 0.4228 | 0.4228 | 0.3682
P04 | 2 | 1.00E-04 | 0.0026 | <0.0001 | 4.72E-05 | 1.34E-05 | <0.0001 | <0.0001 | 1.00E-04
P05 | 2 | 0.0021 | 0.0648 | 0.0011 | 0.0058 | 0.0006 | 3.00E-04 | 3.00E-04 | 0.001
P06 | 8 | 0.0048 | 0.0495 | 0.0018 | 0.0850 | 9.83E-11 | 0.0023 | 0.0023 | 0.0044
P07 | 2 | 0.8006 | 0.8016 | 0.7885 | 0.5578 | 0.8249 | 0.8563 | 0.8563 | 0.8167
P08 | 2 | 0.0857 | 0.0776 | 0.0855 | 0.0782 | 0.1434 | 0.074 | 0.074 | 0.0743
P09 | 6 | 0.0459 | 0.1229 | 0.0209 | 0.2546 | 0.0364 | 0.0098 | 0.0098 | 0.0279
P10 | 3 | <0.0001 | 0.2276 | <0.0001 | 0.0007 | 1.34E-05 | 0.0386 | 0.0374 | <0.0001
P11 | 7 | 0.0411 | 0.0282 | 0.0741 | 0.0359 | 0.2915 | 0.0583 | 0.0583 | 0.046
P12 | 3 | 0.2697 | 0.2383 | 0.2667 | 0.3460 | 0.2085 | 0.1939 | 0.1938 | 0.3025
P13 | 2 | 0.0449 | 0.0211 | 0.0828 | 0.0190 | 0.0496 | 0.0301 | 0.0301 | 0.0403
P14 | 5 | <0.0001 | 0.0065 | <0.0001 | 3.70E-05 | 2.90E-08 | <0.0001 | <0.0001 | <0.0001
P15 | 5 | 1.00E-04 | 0.0022 | <0.0001 | 0.0001 | 0.0005 | 0.0282 | 0.0281 | 3.00E-04
P16 | 5 | 0.0504 | 0.049 | 0.0363 | 0.6688 | 0.0003 | 0.0138 | 0.0137 | 0.0532
P17 | 13 | <0.0001 | 2.00E-04 | 1.00E-04 | 0.0009 | 9.69E-11 | <0.0001 | <0.0001 | <0.0001
P18 | 11 | 0.0048 | 0.0595 | 0.0018 | 0.0090 | 1.41E-06 | 0.0114 | 0.0113 | 0.0046
P19 | 4 | 0.5114 | 0.4696 | 0.5776 | 0.7579 | 0.4641 | 0.3254 | 0.3254 | 0.54
P20 | 24 | 1.00E-04 | 5.00E-04 | 1.00E-04 | 0.0010 | 1.53E-14 | 1.00E-04 | 1.00E-04 | 1.00E-04
P21 | 7 | 0.0199 | 0.0326 | 0.0167 | 0.0205 | 0.0152 | 0.0434 | 0.0435 | 0.0282
P22 | 5 | <0.0001 | 0.0138 | <0.0001 | 5.46E-10 | 0 | <0.0001 | <0.0001 | <0.0001
P23 | 2 | 0.0511 | 0.0409 | 0.0428 | 0.0398 | 0.0852 | 0.106 | 0.106 | 0.0577
P24 | 2 | <0.0001 | 0.11 | <0.0001 | 0.0040 | 7.06E-06 | <0.0001 | <0.0001 | <0.0001
P25 | 8 | 3.00E-04 | 0.0468 | <0.0001 | 0.7287 | 0.0002 | 0.0022 | 0.0021 | 1.00E-04
P26 | 2 | 0.0062 | 0.0294 | 0.0035 | 0.0064 | 0.0062 | 0.002 | 0.002 | 0.0062
P27 | 3 | 0.0039 | 0.0295 | 0.0016 | 0.6465 | 3.22E-14 | 0.0133 | 0.0129 | 0.005
P28 | 10 | 0.0064 | 0.0506 | 0.0026 | 0.0110 | 0.0001 | 0.0092 | 0.0091 | 0.006
P29 | 3 | 0.8175 | 0.8381 | 0.8701 | 0.7428 | 0.7852 | 0.8295 | 0.8295 | 0.8555
P30 | 3 | <0.0001 | 0.0118 | <0.0001 | 0.0015 | 6.94E-06 | 0.0301 | 0.0298 | <0.0001
P31 | 2 | 0.0475 | 0.0222 | 0.0861 | 0.0222 | 0.0704 | 0.1178 | 0.1178 | 0.0618
P32 | 2 | 0.4858 | 0.4783 | 0.4397 | 0.3329 | 0.4987 | 0.3606 | 0.3606 | 0.4864
P33 | 6 | 0.3595 | 0.3355 | 0.2710 | 0.3099 | 0.3664 | 0.3533 | 0.3533 | 0.361
P34 | 14 | 0.0344 | 0.0613 | 0.0202 | 0.3507 | 2.29E-05 | 0.017 | 0.0167 | 0.0287
P35 | 3 | 0.3927 | 0.3534 | 0.3282 | 0.1892 | 0.3932 | 0.1291 | 0.1291 | 0.3595
P36 | 4 | 0.0019 | <0.0001 | 0.0420 | 0.0048 | 0.0083 | 0.0222 | 0.0219 | 0.0023
P37 | 8 | 0.0078 | 0.0641 | 0.0024 | 0.0315 | 0.0011 | 0.014 | 0.0137 | 0.0044
P38 | 9 | 0.0604 | 0.0641 | 0.0321 | 0.0465 | 0.1798 | 0.1107 | 0.111 | 0.0581
P39 | 4 | 0.1835 | 0.2813 | 0.1436 | 0.4823 | 4.51E-05 | 0.0911 | 0.0904 | 0.1482
Although in most cases the p-values are similar for the different methods, there are a few exceptions. For example, for pathway P39 none of the pathway statistics is significant at the 0.05 level (e.g., Fisher's p-value is 0.1835), except for Hotelling's statistic, which has a p-value < 0.0001. Of the four metabolites in this pathway, there is only one strong p-value, and the pair-wise correlations are 0.48, 0.51, 0.57, 0.84, 0.85, and 0.88. This mirrors the results from the simulation study, where Hotelling's statistic had stronger empirical power than the others for the case of one strong change and highly correlated variables. Overall, using the pathway analyses discussed, more than half the pathways are significant at the 0.05 level (before correcting for multiple comparisons), as seen in Table 4, and several pathways are statistically significant even though fewer than half their members reached the 0.05 level. As the sample sizes are very high here, it is not surprising that many of the measures give very similar results, with the exception of PCA, which in some cases performed much worse. The top performers from the simulation for two-sided tests, Fisher's statistic (FP), the adaptive rank truncated product (ARTP), and Srivastava-Du (SD), give very similar p-values.
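The statistic that behaves differently on P39, Hotelling's T2, is the classical two-sample form T2 = (n1 n2 / (n1 + n2)) d'S^-1 d, where d is the difference of group mean vectors and S is the pooled covariance matrix. A pure-Python sketch for p = 2 variables follows, with made-up illustrative data (the 2x2 inverse is written out directly; a real implementation would use a linear-algebra library):

```python
def mean_vec(rows):
    """Column-wise mean of a list of equal-length tuples."""
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(len(rows[0]))]

def scatter(rows, m):
    """2x2 matrix of sums of squared deviations and cross-products."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - m[0], r[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def hotelling_t2(g1, g2):
    """Two-sample Hotelling's T^2 for p = 2 variables."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = mean_vec(g1), mean_vec(g2)
    s1, s2 = scatter(g1, m1), scatter(g2, m2)
    # Pooled covariance estimate (assumes a common covariance matrix).
    S = [[(s1[i][j] + s2[i][j]) / (n1 + n2 - 2) for j in range(2)]
         for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    d = [m1[0] - m2[0], m1[1] - m2[1]]
    quad = sum(d[i] * Sinv[i][j] * d[j] for i in range(2) for j in range(2))
    return (n1 * n2 / (n1 + n2)) * quad

# Hypothetical data: two groups of four observations on two variables.
g1 = [(1, 2), (2, 1), (3, 4), (4, 3)]
g2 = [(3, 4), (4, 3), (5, 6), (6, 5)]
t2 = hotelling_t2(g1, g2)  # -> 6.0 for this data
```

Because T2 weights the mean difference by the inverse covariance, it can amplify a single strong change among highly correlated variables, consistent with its behavior on P39; significance would be assessed by permuting the group labels, as in the paper.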

Discussion

An extensive simulation study was performed to compare various statistics for assessing the statistical significance of a pathway. The statistics assessed were based either on an aggregation of p-values or on multivariate statistics. While all the multivariate statistics require that the covariance matrix be the same for each group, most of the multivariate statistics considered do not require inversion of the sample covariance matrix and hence can be applied even for small sample sizes. The sample sizes and numbers of variables in the simulation study were chosen to represent sizes seen in animal metabolomics studies. The empirical powers for the p-value aggregation methods (the top performers were generally Fisher's statistic and the ARTP statistic) are at least as good as those for the multivariate statistics (the top performer was the Srivastava-Du statistic). Furthermore, power can be improved if appropriate one-sided statistics are implemented. With these sample sizes, permutation distributions are recommended over asymptotic approximations. Permutation statistics are also recommended over test statistics that assume independent variables, such as the chi-squared reference distribution for Fisher's test, since the variables within a pathway are often highly correlated. The main alternatives considered were changes for all metabolites in the pathway, rather than alternatives in which one metabolite is differential while the others are not; for such alternatives, the multivariate statistics may have an advantage. The p-value methods also have the advantage of being easier to adapt to various statistical designs, i.e., experimental designs more complicated than two independent groups.
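The recommended permutation approach can be wrapped around any of the pathway statistics: shuffle the group labels, recompute the statistic, and take the upper-tail proportion. A generic stdlib sketch follows; `mean_diff_stat` is a hypothetical placeholder (sum of absolute mean differences across variables), not one of the statistics compared in the paper, and the data are made up.

```python
import random

def mean_diff_stat(g1, g2):
    """Placeholder pathway statistic: sum over variables of the absolute
    difference in group means. Any statistic (FP, HT, SD, ...) could be
    substituted here."""
    p = len(g1[0])
    total = 0.0
    for j in range(p):
        m1 = sum(row[j] for row in g1) / len(g1)
        m2 = sum(row[j] for row in g2) / len(g2)
        total += abs(m1 - m2)
    return total

def permutation_pvalue(g1, g2, statistic, n_perm=999, seed=0):
    """Permutation p-value: proportion of label shuffles whose statistic
    meets or exceeds the observed value."""
    rng = random.Random(seed)
    observed = statistic(g1, g2)
    pooled = list(g1) + list(g2)
    n1 = len(g1)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if statistic(pooled[:n1], pooled[n1:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Hypothetical, clearly separated groups: 6 samples each, 2 variables.
g1 = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.2), (0.1, 0.1), (0.0, 0.2), (0.2, 0.0)]
g2 = [(10.0, 10.1), (10.1, 10.0), (10.2, 10.2),
      (10.1, 10.1), (10.0, 10.2), (10.2, 10.0)]
p_value = permutation_pvalue(g1, g2, mean_diff_stat)  # small, well below 0.05
```

Because the null distribution is built from the data themselves, this procedure automatically accounts for the correlation among variables within a pathway, which the chi-squared reference distribution for Fisher's test does not.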

Supporting Information

S1 Fig. Distribution of the pair-wise correlations from the 39 pathways for the insulin resistance data set. (TIFF)

S1 Data. Metabolomics data set used for the analysis. (CSV)

S1 Table. Empirical power, 4 variables, two-sided tests. (DOCX)

S2 Table. Empirical power, 8 variables, two-sided tests. (DOCX)

S3 Table. Power for single variables for the two-sample t-test with the pooled variance estimate. (DOCX)

S4 Table. Empirical power, 4 variables, one-sided tests. (DOCX)

S5 Table. Empirical power, 8 variables, one-sided tests. (DOCX)

S6 Table. Two-sided t-tests, 4 variables, large samples: Fisher vs. Srivastava-Du. (DOCX)

S7 Table. Pathway code from the human metabolomics study. (DOCX)

References

1. Taylor J, Tibshirani R. A tail strength measure for assessing the overall univariate significance in a dataset. Biostatistics. 2005.

2. Ramanan VK, Shen L, Moore JH, Saykin AJ. Pathway analysis of genomic data: concepts, methods, and prospects for future development. Trends Genet. 2012.

3. Gall WE, Beebe K, Lawton KA, Adam KP, Mitchell MW, Nakhle PJ, Ryals JA, Milburn MV, Nannipieri M, Camastra S, Natali A, Ferrannini E. Alpha-hydroxybutyrate is an early biomarker of insulin resistance and glucose intolerance in a nondiabetic population. PLoS One. 2010.

4. Yu K, Li Q, Bergen AW, Pfeiffer RM, Rosenberg PS, Caporaso N, Kraft P, Chatterjee N. Pathway analysis by adaptive combination of P-values. Genet Epidemiol. 2009.

5. Fridley BL, Jenkins GD, Biernacka JM. Self-contained gene-set analysis of expression data: an evaluation of existing and novel methods. PLoS One. 2010.

6. Khatri P, Sirota M, Butte AJ. Ten years of pathway analysis: current approaches and outstanding challenges. PLoS Comput Biol. 2012.

7. Evangelou M, Rendon A, Ouwehand WH, Wernisch L, Dudbridge F. Comparison of methods for competitive tests of pathway analysis. PLoS One. 2012.

8. Kanehisa M, Goto S, Sato Y, Kawashima M, Furumichi M, Tanabe M. Data, information, knowledge and principle: back to metabolism in KEGG. Nucleic Acids Res. 2013.

