
A robust gene regulatory network inference method based on Kalman filter and linear regression.

Jamshid Pirgazi, Ali Reza Khanteymoori.

Abstract

The reconstruction of the topology of gene regulatory networks (GRNs) from high-throughput genomic data, such as microarray gene expression data, is an important problem in systems biology. The main challenge with gene expression data is the high number of genes and the low number of samples; the data are also often contaminated with noise. In this paper, to deal with the noisy data, a Kalman-filter-based method that can incorporate prior knowledge when learning the network was used. In the proposed method, namely KFLR, the noisy regulations with low correlations were first removed using mutual information. The proposed method then used a new closed-form solution to compute the posterior probabilities of the edges from regulators to the target gene within a hybrid framework of Bayesian model averaging and linear regression. To demonstrate its efficiency, the proposed method was compared with several well-known methods. The results of the evaluation indicate that the proposed method improved inference accuracy and recovered regulatory relations better on noisy data.


Year:  2018        PMID: 30001352      PMCID: PMC6044105          DOI: 10.1371/journal.pone.0200094

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

The study of gene regulatory network (GRN) structure is important for understanding cellular function. GRNs are typically represented by graphs in which the nodes represent genes and the edges show the regulatory interactions between genes. Many computational methods have been proposed in the literature to model GRNs. These methods can be classified into co-expression-based methods [1], supervised learning-based methods [2,3], model-based methods [4,5] and information theory-based methods [6,7]. Co-expression-based methods have low complexity but cannot infer the direction of an interaction. Supervised learning methods such as GENIES [8] and SIRENE [9] require information about some known interactions in order to learn their models. Model-based methods can be categorized into ordinary differential equations [10], multiple linear regression [11], Boolean networks [12] and probabilistic graphical models, including Bayesian networks (BN) and dynamic Bayesian networks (DBN) [13]. They infer GRNs with high accuracy and can identify the direction of interactions. However, these methods are time consuming and require many parameters to be set, and thus cannot be used for large-scale networks. There are two suggestions for addressing this problem: searching for the optimal graph among all possible graphs, and using a decomposition technique in regression-based methods for network structure inference. The inference of regulatory interactions for N genes is decomposed into N independent sub-problems, each sub-problem inferring the regulators of one target gene. Narimani et al. proposed a new Bayesian network reverse-engineering method using ordinary differential equations with the ability to include non-linearity; in this method, expectation propagation is used for approximate Bayesian inference [14].
Because Bayesian network (BN) methods cannot handle large-scale networks, [15] presents a novel method, local Bayesian network (LBN), to infer GRNs from gene expression data using a network decomposition strategy and a false-positive edge elimination scheme. There are several significant advantages to using Bayesian network models. Bayesian networks are easily understood and allow researchers to use their domain expertise to determine the network structure. When the sample size is small, Bayesian networks are less affected, and they use probability theory, which is suitable for dealing with the noise in biological data. Furthermore, Bayesian networks can produce relatively accurate predictions even when complete data are not available. A few disadvantages exist, however, such as computational complexity and the need to set many parameters, so they cannot be used for large-scale networks. To address this problem, this paper presents a new method that uses Bayesian model averaging based on a Kalman filter and linear regression to infer GRNs. In this method, a new solution is applied to calculate the posterior probabilities of the edges from possible regulators to the target gene, which leads to high prediction accuracy and high computational efficiency. This method is the best performer among well-known existing methods on the DREAM4 in silico challenge and the IRMA dataset [16-17]. Another important category of GRN inference methods is based on regression, where one target gene is predicted from one or more input genes, such as artificial neural networks (ENFRN) [18], support vector machines (SIRENE), rotation forest (GENIRF) [19], random forests (GENIE3) [20] and Bayesian model averaging for linear regression (BMALR) [21]. Furthermore, information theory-based methods are used for inferring GRNs, such as conditional mutual information (CMI) [6] and mutual information (MI) [15].
These methods can be used for large-scale networks. MI measures the dependency between two genes: a higher mutual information value for two genes indicates that one gene is related to the other. However, MI cannot distinguish indirect regulators from direct ones, which leads to possible false positives [22]. Although CMI-based methods are able to distinguish indirect regulators from direct ones, they cannot determine the directions of interactions in the network and in some cases underestimate interaction strength. Network inference methods such as Context Likelihood of Relatedness (CLR) [23], Weighted Gene Co-Expression Network Analysis (WGCNA) [24], Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNE) [25], Relevance Networks (RN) [26] and Minimum-Redundancy Maximum-Relevance Network (MRNET) [27] assume that correlation between gene expression profiles is indicative of a regulatory interaction. In [28], a new mutual-information-based Boolean network inference (MIBNI) method is proposed to capture coarse-grained dynamics. In this method, mutual information is first used to select a set of initial regulatory genes; the dynamics prediction accuracy is then improved by iteratively swapping a pair of genes between the selected regulatory genes and the other genes. The rest of this paper is organized as follows. Details of the Kalman filter are given in section two. Conditional mutual information is described in section three. The proposed method is presented in section four. In section five, the results of the proposed method on the DREAM4 data collection and other datasets are shown. Finally, conclusions are summarized in section six.

Kalman filter

To infer a gene regulatory network, one way is to find the Bayesian network structure. This is normally achieved by maximizing the likelihood of the observed dataset (maximum likelihood) or the posterior probability of the structure given the observed data (maximum a posteriori). In this paper, because the data are time series and contain noise, the Kalman filter is used to find the Bayesian network structure [29]. The Kalman filter is an algorithm that uses a series of measurements observed over time and containing statistical noise. Applied for this purpose, with state x and observation y, the model is given as:

x_t = F x_(t-1) + G w_t
y_t = H x_t + D v_t

where F is the state transition model applied to the previous state x, G is the control matrix applied to w, and w is the process noise, assumed to be drawn from a zero-mean multivariate normal distribution with covariance Q. H is the observation model, which maps the true state space into the observed space, D is the control matrix applied to v, and v is the observation noise, assumed to be zero-mean Gaussian white noise with covariance R [29]. The details of the Kalman filter are shown in Fig 1. First, the prior probability p(x|y) is initialized, either randomly or from prior knowledge.
Fig 1

Kalman filter phases in the proposed method.

The prediction and update phases are applied alternately to calculate the posterior probability. In the prediction phase, p(x|y) is obtained as follows:

x̂_(t|t-1) = F x̂_(t-1|t-1)
P_(t|t-1) = F P_(t-1|t-1) F^T + G Q G^T

that is, instead of predicting the probability itself, its mean x and variance P are predicted. In the update phase, p(x|y) is obtained as follows:

x̂_(t|t) = x̂_(t|t-1) + K_t (y_t - H x̂_(t|t-1))
P_(t|t) = (I - K_t H) P_(t|t-1)

In effect, in this step the mean and variance parameters of the posterior probability are updated. This process is repeated over time until the posterior probability is obtained. Here K is the Kalman gain, calculated as:

K_t = P_(t|t-1) H^T (H P_(t|t-1) H^T + R)^(-1)
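The prediction/update recursion described above can be sketched in a few lines. The following is a generic discrete-time Kalman filter step in the notation of Fig 1 (F, G, H, Q, R), not the authors' exact implementation:

```python
import numpy as np

def kalman_step(x, P, y, F, G, H, Q, R):
    """One prediction/update cycle of a discrete-time Kalman filter."""
    # Prediction phase: propagate the posterior mean and covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # Kalman gain ("Kalman rate" K).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update phase: correct the prediction with the new observation y.
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Each call shrinks the posterior covariance P whenever the observation is informative, which is why repeated application converges to a stable posterior.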

Conditional mutual information

Mutual Information (MI) and Conditional Mutual Information (CMI) have been used to construct GRNs [30] owing to their ability to detect nonlinear dependencies between genes under Gaussian noise. MI measures the mutual dependence between two genes X_i and X_j, so its value can be used to evaluate the strength of their interaction. To measure the conditional dependency between two genes X_i and X_j given another gene X_k, CMI can be used, which quantifies the undirected regulation. For discrete variables X and Y, MI is defined as [31]:

MI(X, Y) = Σ_{x,y} p(x, y) log [ p(x, y) / (p(x) p(y)) ] = H(X) + H(Y) - H(X, Y)

where p(x) and p(y) are the marginal probability distributions of X and Y, respectively, p(x, y) is the joint probability distribution of X and Y, H(X, Y) is the joint entropy of X and Y, and H(X) and H(Y) are the entropies of X and Y, respectively. The CMI between two variables X and Y given variable Z is defined as [31]:

CMI(X, Y | Z) = Σ_{x,y,z} p(x, y, z) log [ p(x, y | z) / (p(x | z) p(y | z)) ] = H(X, Z) + H(Y, Z) - H(Z) - H(X, Y, Z)

where H(X, Z), H(Y, Z) and H(X, Y, Z) are joint entropies, and p(x, y|z), p(x|z) and p(y|z) are the conditional probability distributions, respectively.
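For discrete data, these entropy-based identities can be computed directly from empirical counts. A minimal sketch (function names are illustrative):

```python
import numpy as np
from collections import Counter

def entropy(*cols):
    """Joint Shannon entropy (in nats) of one or more discrete sequences."""
    counts = Counter(zip(*cols))
    p = np.array(list(counts.values())) / sum(counts.values())
    return float(-(p * np.log(p)).sum())

def mutual_information(x, y):
    # MI(X, Y) = H(X) + H(Y) - H(X, Y)
    return entropy(x) + entropy(y) - entropy(x, y)

def conditional_mutual_information(x, y, z):
    # CMI(X, Y | Z) = H(X, Z) + H(Y, Z) - H(Z) - H(X, Y, Z)
    return entropy(x, z) + entropy(y, z) - entropy(z) - entropy(x, y, z)
```

Note that MI of a variable with itself equals its entropy, and conditioning on either variable drives the CMI to zero, which is the behaviour the pruning phase below relies on.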

Proposed method

KFLR constructs GRNs using a Bayesian network and linear regression, which leads to a directed graph of regulatory interactions between genes with high accuracy. The method consists of three distinct phases. In the first phase, prior knowledge is extracted from the data using MI; in the next phase, a Bayesian network is constructed based on this prior knowledge and the Kalman filter; and in the last phase, the network is modified using CMI. The proposed method is outlined in Fig 2. In the next subsections, a detailed description of each phase is presented.
Fig 2

Schematic diagram of proposed method.

Phase 1: Knowledge extraction with MI

In this step, the MI values between all pairs of genes are computed and the knowledge matrix is created. If MI(i, j) is smaller than a threshold, cell (i, j) of the knowledge matrix is set to zero; otherwise it is set to one.
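A sketch of this phase, assuming for illustration that expression profiles are first discretized into equal-width bins before MI estimation (the binning scheme is an assumption, not specified here):

```python
import numpy as np

def knowledge_matrix(expr, threshold, bins=3):
    """Binary prior-knowledge matrix from pairwise MI.

    expr: (samples x genes) expression matrix; each gene is discretized
    into `bins` equal-width bins before MI estimation (an assumption).
    """
    n_genes = expr.shape[1]
    # Discretize each gene's expression profile.
    disc = np.stack([np.digitize(g, np.histogram_bin_edges(g, bins)[1:-1])
                     for g in expr.T])

    def H(*rows):
        # Joint entropy of discrete rows from empirical column counts.
        _, counts = np.unique(np.stack(rows), axis=1, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    K = np.zeros((n_genes, n_genes), dtype=int)
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            mi = H(disc[i]) + H(disc[j]) - H(disc[i], disc[j])
            # Cell (i, j) is 1 only if MI reaches the threshold.
            K[i, j] = K[j, i] = int(mi >= threshold)
    return K
```

A pair of identical profiles yields MI equal to the profile's entropy and survives the threshold, while a constant (uninformative) gene yields zero MI to everything.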

Phase 2: Building Bayesian network using Kalman filter

To infer the gene regulatory network, the proposed idea builds on the prior knowledge from the knowledge matrix, and the Kalman filter is used to construct the Bayesian network. The proposed method integrates Bayesian model averaging with a linear regression approach, and a new Kalman-filter-based method is used to calculate the posterior probabilities of the edges. In the proposed method, linear regression is applied to the target gene with all combinations of the other genes. The final score of the edge between a parent gene and the target gene is the sum of the posterior probabilities of all the linear regression models containing this edge.

Bayesian model averaging

One method for inferring a gene regulatory network is to find the Bayesian network structure N that best explains the data. There are several ways to find a Bayesian network structure, such as maximizing the likelihood of the observed data (maximum likelihood, ML) or the posterior probability of the structure N given the observed data (maximum a posteriori, MAP). This paper uses the Kalman filter to obtain the posterior probability. When the number of observations in gene expression data is limited, many Bayesian network structures describe the data equally well. The best structure could be found with a heuristic search, but heuristic search methods have high computational complexity and do not guarantee a global optimum. Thus, Bayesian model averaging can be used instead of searching for the single best structure. In other words, the probability of an edge feature f between nodes i and j, given the observed dataset D, can be calculated as the posterior probability of f:

P(f | D) = Σ_N f(N) P(N | D)

This is the posterior probability of f given the observed dataset D; here f(N) equals 1 if the Bayesian network N contains edge f, and 0 otherwise. Accordingly, 100 Bayesian networks with different structures are built using the Kalman filter, and the final score of the edge from node i to node j is obtained from the posterior probabilities of this edge across these structures. In the construction of Bayesian networks with the Kalman filter, each node X_i has a probability distribution P(X_i | Parents(X_i)) that quantifies the effect of the parent nodes on this node. In this step, the parent sets obtained from the prior knowledge of the first phase are checked, and genes whose MI with gene X_i is smaller than a threshold are not selected as its parents.
To obtain an accurate estimate of the posterior probability of each edge, the network is decomposed into a set of smaller sub-networks with the k-nearest-neighbor (kNN) method, according to the relationships among genes in the network. In the graph structure, the k nearest neighbors of each gene are selected according to their shortest-path distance. In this paper, the k nearest neighbors with k = 2, containing the Markov blanket of the gene, are used for each gene in order to decompose the global network into a set of sub-networks. Each target gene is modeled with linear regression:

X_i = Σ_j w_ij X_j + ε_i

where X_i is the expression level of gene i, and w_ij is a weight between genes i and j expressing the effect of gene j on gene i. If w_ij is zero, there is no edge from j to i in the gene regulatory network; if w_ij is non-zero, j is one of i's candidate regulators (parents). ε_i denotes the noise. The posterior probability of each edge is calculated as the sum of the posterior probabilities of all the substructures containing that edge [19]. The posterior probability of an edge feature f is calculated as:

P(f | D) = Σ_{Pa ∈ S} f(N_Pa) P(N_Pa | D_(X_i, Pa))

where S is the set of all possible parent sets of X_i, the target of the edge feature f, D_(X_i, Pa) denotes the data restricted to X_i and the genes in Pa, and N_Pa is the substructure composed of the edges from the genes in Pa, a parent set of gene X_i, to X_i. If the substructure N_Pa contains f, f(N_Pa) equals 1, otherwise it is 0.
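The edge-scoring scheme of this phase (sum the posteriors of all parent-set regression models containing an edge) can be sketched as follows. BIC weights approximate the model posteriors here; this is an assumption standing in for the paper's Kalman-filter-based closed form, and the function name is illustrative:

```python
import itertools
import numpy as np

def edge_posteriors(y, X, names):
    """Score candidate regulators of a target gene by Bayesian model
    averaging over all linear-regression parent sets.

    y: expression of the target gene (n samples)
    X: expression of the m candidate regulators (n x m)
    Model posteriors are approximated with BIC weights (an assumption).
    """
    n, m = X.shape
    models, logpost = [], []
    for r in range(m + 1):
        for parents in itertools.combinations(range(m), r):
            # Least-squares fit of the target on this parent set.
            A = np.column_stack([X[:, list(parents)], np.ones(n)])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = max(float(np.sum((y - A @ beta) ** 2)), 1e-12)
            # BIC penalizes extra parents; smaller is better.
            bic = n * np.log(rss / n) + (len(parents) + 1) * np.log(n)
            models.append(set(parents))
            logpost.append(-0.5 * bic)
    w = np.exp(np.array(logpost) - max(logpost))
    w /= w.sum()  # normalized approximate model posteriors
    # Edge score = summed posterior of every model containing that edge.
    return {names[j]: float(sum(wi for wi, pa in zip(w, models) if j in pa))
            for j in range(m)}
```

Because the score sums over all models containing an edge, a true regulator accumulates weight from every well-fitting parent set it belongs to, while a spurious one only picks up the residual weight of over-parameterized models.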

Phase 3: Modifying the network

After the gene regulatory network is inferred, the network is modified to achieve better results. The MI method generally cannot estimate the regulation degrees between genes: because it does not consider joint regulation by two or more genes, the rate of false-positive edges is high. In this phase, false-positive edges are removed by computing the first-order CMI(i, j|k) and the second-order CMI(i, j|k, l). If CMI(i, j|k) (or CMI(i, j|k, l)) is smaller than a threshold α, the edge between genes i and j is removed from the network.
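This pruning rule can be sketched as follows for the first-order case; extending it to the second-order CMI(i, j|k, l) only enlarges the conditioning set. Function names are illustrative:

```python
import numpy as np
from collections import Counter

def joint_entropy(*cols):
    """Joint Shannon entropy (in nats) of discrete sequences."""
    counts = Counter(zip(*cols))
    p = np.array(list(counts.values())) / sum(counts.values())
    return float(-(p * np.log(p)).sum())

def cmi(x, y, z):
    # CMI(X, Y | Z) = H(X, Z) + H(Y, Z) - H(Z) - H(X, Y, Z)
    return (joint_entropy(x, z) + joint_entropy(y, z)
            - joint_entropy(z) - joint_entropy(x, y, z))

def prune_network(adj, disc, alpha):
    """Remove edge (i, j) whenever some first-order CMI(i, j | k) < alpha."""
    out = adj.copy()
    n = adj.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j or not out[i, j]:
                continue
            if any(cmi(disc[i], disc[j], disc[k]) < alpha
                   for k in range(n) if k not in (i, j)):
                out[i, j] = 0
    return out
```

An edge whose endpoints are conditionally independent given some third gene is treated as an indirect (false-positive) regulation and dropped, while a direct edge retains positive CMI under every conditioning gene.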

Experimental results

Data set

The DREAM (Dialogue for Reverse Engineering Assessments and Methods) initiative organizes an annual reverse-engineering competition called the DREAM challenge [27]. The goal of the DREAM4 in silico network challenge is to reverse engineer gene regulation networks from simulated steady-state and time-series data. There are three sub-challenges, each consisting of five networks, called InSilico Size 10, InSilico Size 100, and InSilico Size 100 Multifactorial. In the time-series data, networks of size 10 have 5 different time series and networks of size 100 have 10; each time series has 21 time points [28]. All networks and data were generated with GeneNetWeaver (GNW) version 2.0 [32]. Network topologies were obtained by extracting sub-networks from the transcriptional regulatory networks of E. coli and S. cerevisiae (see S1 Data). The other dataset used is the IRMA dataset. The IRMA network is a subnetwork embedded in Saccharomyces cerevisiae consisting of 5 genes: CBF1, GAL4, SWI5, GAL80, and ASH1. The gene expression data are time series and include switch-off and switch-on data. The switch-off data are taken from 4 experiments and the switch-on data from 5 experiments, with a total of 142 samples measured (see S1 Data) [33].

Performance metrics

The proposed method is evaluated using the area under the precision-recall curve and the receiver operating characteristic (ROC) curve over the whole set of link predictions for a network. A precision-recall (PR) curve plots the fraction of retrieved instances that are relevant (precision) versus the fraction of relevant instances that are retrieved (recall), whereas a ROC curve plots the true positive rate versus the false positive rate [34]. To summarize these curves, the DREAM organizers proposed several statistics. AUPR and AUROC are the areas under the PR and ROC curves, respectively. The AUPR p-value and AUROC p-value are the probabilities that a random ordering of the potential links achieves an AUPR or AUROC at least as large as the given one. The overall p-values, p_aupr and p_auroc, of the five networks constituting each DREAM4 sub-challenge were defined as the geometric mean of the individual p-values, as in Eq 15 [35]:

p_overall = ( Π_{i=1}^{5} p_i )^(1/5)    (15)

The overall score for each method is the log-transformed geometric mean of the overall AUROC p-value and the overall AUPR p-value, as in Eq 16 [35]:

score = -(1/2) log10( p_aupr_overall · p_auroc_overall )    (16)
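The scoring defined by Eqs 15 and 16 amounts to the following short computation (a sketch; `overall_score` is an illustrative name):

```python
import numpy as np

def overall_score(p_aupr, p_auroc):
    """DREAM4 overall score from per-network AUPR/AUROC p-values.

    Eq 15: the overall p-value is the geometric mean of the per-network
    p-values. Eq 16: the score is -(1/2) * log10 of the product of the
    two overall p-values.
    """
    p_aupr_overall = np.exp(np.log(p_aupr).mean())    # geometric mean
    p_auroc_overall = np.exp(np.log(p_auroc).mean())  # geometric mean
    return float(-0.5 * np.log10(p_aupr_overall * p_auroc_overall))
```

Applied per network, this reproduces the entries of Table 4 from Table 3: for example, BMALR's NET1 p-values of 3.20E-28 (AUPR) and 3.30E-15 (AUROC) yield a score of about 21, i.e. 2.10E+01.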

Performance comparison on the DREAM4 dataset

In this section, the five inferred sub-networks are evaluated with the proposed method before and after adding noise to the data, and to demonstrate its performance, the proposed method is compared with thirteen common methods for gene regulatory network construction. The methods used for comparison are as follows. GENIE3 is an algorithm for inferring regulatory networks from expression data using tree-based methods; the Matlab code by its authors, with default parameters and protocols, is used [18]. BMALR is an algorithm for inferring cellular regulatory networks with Bayesian model averaging for linear regression; the authors' code is used [19]. The MRNET, CLR and ARACNE algorithms [21,23,25] are implemented by the minet package in the R language. BGRMI, Bayesian Gene Regulation Model Inference, is a model-based method for inferring GRNs from time-course gene expression data; it uses a Bayesian framework to calculate the probability of different GRN models and a heuristic search strategy to scan the model space efficiently [36]. G1DBN is a method based on dynamic Bayesian networks [37]. NARROMI is a noise and redundancy reduction technique that improves the accuracy of gene regulatory network inference [5]. TIGRESS solves the network inference problem by using a feature selection technique (LARS) combined with stability selection; its web-based platform is used [38]. GENIRF decomposes the prediction of a gene regulatory network among p genes into p different regression problems, each constructed with singular value decomposition and rotation forest [17]. MIBNI first selects a set of initial regulatory genes using mutual information and then improves the dynamics prediction accuracy by iteratively swapping a pair of genes between the selected regulatory genes and the other genes [26].
For MIBNI, the Java code by its authors, with default parameters and protocols, is used. FBISC uses expectation propagation for approximate Bayesian inference [14]; the C# code by its authors, with default parameters and protocols, is used. CMI2NI quantifies the mutual information between two genes given a third by calculating the Kullback-Leibler divergence between the postulated distributions including and excluding the edge between the two genes [6]; the Matlab code by its authors, with default parameters and protocols, is used. In the following, the results in the form of AUPR and AUROC values and ROC and PR curves are examined, and an overall score is calculated for each method. As mentioned earlier, the 5 sub-networks of the DREAM4 dataset were used for evaluation; the goal for each sub-network is to rank the edges and directed regulatory relations. Table 1 shows the AUPR and AUROC values of the different methods on the 5 sub-networks without noise. To compare stability against noise, Table 2 shows the AUPR and AUROC values of the different methods on noisy data; 10% Gaussian noise with mean 0 and standard deviation 1 was added to the data. The results show that the proposed method is more robust against noise than the other methods. They also show that the proposed method has higher accuracy, owing to the knowledge extraction phase in network construction and the removal of many false-positive edges. Furthermore, the probability theory underlying the Kalman filter can cope with noisy data, as the Kalman filter removes noisy regulations.
Table 1

AUPR and AUROC values of common GRN methods without noise.

Method  | NET1          | NET2          | NET3          | NET4          | NET5
        | AUPR   AUROC  | AUPR   AUROC  | AUPR   AUROC  | AUPR   AUROC  | AUPR   AUROC
BMALR   | 0.173  0.745  | 0.155  0.722  | 0.201  0.745  | 0.186  0.768  | 0.198  0.758
GENIE3  | 0.228  0.789  | 0.096  0.614  | 0.230  0.775  | 0.157  0.721  | 0.168  0.712
MRNET   | 0.143  0.584  | 0.075  0.579  | 0.124  0.683  | 0.128  0.708  | 0.095  0.611
ARACNE  | 0.165  0.634  | 0.108  0.611  | 0.174  0.679  | 0.143  0.709  | 0.154  0.621
BGRMI   | 0.245  0.804  | 0.118  0.710  | 0.185  0.696  | 0.213  0.784  | 0.154  0.643
CLR     | 0.179  0.782  | 0.109  0.635  | 0.238  0.787  | 0.154  0.712  | 0.163  0.705
G1DBN   | 0.089  0.589  | 0.055  0.612  | 0.155  0.678  | 0.153  0.705  | 0.117  0.631
NARROMI | 0.122  0.713  | 0.105  0.665  | 0.192  0.706  | 0.167  0.713  | 0.186  0.727
TIGRESS | 0.157  0.738  | 0.144  0.680  | 0.172  0.759  | 0.199  0.764  | 0.198  0.747
GENIRF  | 0.174  0.763  | 0.156  0.731  | 0.212  0.763  | 0.191  0.772  | 0.202  0.781
MIBNI   | 0.162  0.637  | 0.126  0.711  | 0.182  0.683  | 0.173  0.742  | 0.173  0.725
FBISC   | 0.167  0.635  | 0.173  0.598  | 0.263  0.650  | 0.228  0.664  | 0.206  0.685
CMI2NI  | 0.057  0.737  | 0.048  0.616  | 0.102  0.690  | 0.063  0.657  | 0.066  0.691
KFLR    | 0.194  0.812  | 0.195  0.823  | 0.235  0.803  | 0.236  0.813  | 0.221  0.797
Table 2

AUPR and AUROC values of common GRN methods with noise.

Method  | NET1          | NET2          | NET3          | NET4          | NET5
        | AUPR   AUROC  | AUPR   AUROC  | AUPR   AUROC  | AUPR   AUROC  | AUPR   AUROC
BMALR   | 0.155  0.721  | 0.125  0.689  | 0.185  0.724  | 0.162  0.692  | 0.173  0.678
GENIE3  | 0.192  0.718  | 0.058  0.537  | 0.201  0.788  | 0.135  0.642  | 0.143  0.612
MRNET   | 0.065  0.582  | 0.072  0.573  | 0.108  0.589  | 0.110  0.645  | 0.098  0.598
ARACNE  | 0.142  0.602  | 0.089  0.601  | 0.122  0.621  | 0.123  0.656  | 0.133  0.623
BGRMI   | 0.208  0.785  | 0.102  0.636  | 0.154  0.633  | 0.196  0.721  | 0.123  0.578
CLR     | 0.139  0.724  | 0.065  0.578  | 0.183  0.714  | 0.121  0.672  | 0.132  0.678
G1DBN   | 0.054  0.521  | 0.043  0.578  | 0.120  0.602  | 0.118  0.654  | 0.092  0.586
NARROMI | 0.102  0.703  | 0.087  0.680  | 0.182  0.688  | 0.159  0.696  | 0.172  0.709
TIGRESS | 0.146  0.722  | 0.132  0.671  | 0.163  0.741  | 0.187  0.748  | 0.186  0.736
GENIRF  | 0.162  0.712  | 0.136  0.682  | 0.189  0.743  | 0.173  0.711  | 0.168  0.691
MIBNI   | 0.143  0.609  | 0.094  0.682  | 0.157  0.609  | 0.153  0.692  | 0.146  0.674
FBISC   | 0.154  0.612  | 0.161  0.502  | 0.263  0.613  | 0.215  0.609  | 0.189  0.621
CMI2NI  | 0.042  0.702  | 0.044  0.583  | 0.094  0.598  | 0.061  0.611  | 0.061  0.626
KFLR    | 0.189  0.810  | 0.193  0.821  | 0.232  0.795  | 0.232  0.807  | 0.213  0.772
According to Tables 1 and 2, the improvement of KFLR in sub-network 1 is smaller, while the improvement in sub-networks 2, 3, 4 and 5 is larger, because more false-positive edges are removed there. In other words, the more accurate the knowledge obtained in the first phase, the better the network KFLR produces. In the Bayesian network construction phase using the Kalman filter, each node X_i has one conditional probability distribution P(X_i|Parents(X_i)), which quantifies the effect of the parents on this node. In this phase, parents are selected using the knowledge obtained in the first phase, and genes that are very similar to each other are not allowed to be selected together. This changes the strengths of the relationships between a gene and its parents compared with the networks inferred by the other algorithms; as the amount of extracted knowledge increases, greater improvement over the other algorithms is achieved. In KFLR, the network is then refined using the CMI coefficient. This phase improves the estimated regulatory relations between pairs of genes by exploiting the biologically significant relationships between them, and slightly improves the results on each sub-network. Table 3 shows the AUROC and AUPR p-values for each method and each subnet separately, demonstrating that the predictions of the proposed method are significantly better than a random guess compared with the other methods. The overall scores of the methods across the whole challenge are shown in Table 4; the results indicate that the proposed method performs better than the other methods.
Table 3

AUPR and AUROC p-values for DREAM4 challenge.

Method  | NET1                | NET2                | NET3                | NET4                | NET5
        | p-AUPR    p-AUROC   | p-AUPR    p-AUROC   | p-AUPR    p-AUROC   | p-AUPR    p-AUROC   | p-AUPR    p-AUROC
BMALR   | 3.20E-28  3.30E-15  | 3.10E-34  2.10E-22  | 3.52E-47  8.40E-32  | 4.21E-41  4.36E-30  | 3.20E-43  3.67E-33
GENIE3  | 3.40E-36  3.20E-19  | 8.40E-21  2.10E-16  | 2.76E-54  8.70E-34  | 2.73E-34  5.42E-28  | 7.41E-37  3.79E-29
MRNET   | 2.31E-11  1.98E-09  | 6.23E-22  6.11E-19  | 4.54E-33  4.21E-22  | 3.46E-30  5.02E-25  | 2.73E-28  9.93E-19
ARACNE  | 6.32E-21  4.11E-20  | 1.25E-22  1.23E-20  | 5.03E-37  4.05E-25  | 5.99E-32  8.15E-27  | 5.31E-37  7.22E-28
BGRMI   | 4.10E-37  2.50E-21  | 6.33E-28  5.43E-21  | 5.23E-39  5.86E-25  | 4.51E-48  6.31E-34  | 4.46E-33  4.52E-20
CLR     | 4.50E-31  3.20E-18  | 2.32E-24  4.52E-18  | 5.72E-55  6.85E-36  | 3.11E-31  4.26E-27  | 3.72E-36  5.31E-28
G1DBN   | 8.24E-10  8.23E-06  | 1.35E-15  2.23E-15  | 3.43E-36  1.10E-25  | 1.27E-31  4.32E-26  | 7.02E-27  8.76E-18
NARROMI | 9.13E-20  5.42E-18  | 1.76E-23  4.09E-20  | 1.32E-40  7.63E-28  | 2.21E-37  5.72E-27  | 3.25E-40  4.81E-31
TIGRESS | 4.30E-22  1.27E-20  | 7.18E-32  3.56E-20  | 3.86E-38  1.65E-32  | 4.20E-43  3.28E-29  | 7.26E-42  5.68E-33
GENIRF  | 3.31E-29  3.51E-18  | 5.41E-35  2.32E-23  | 2.60E-48  4.27E-31  | 3.17E-42  4.82E-32  | 2.62E-44  2.46E-32
MIBNI   | 6.51E-23  3.19E-22  | 3.18E-27  4.31E-21  | 4.27E-41  2.82E-28  | 2.21E-36  3.17E-28  | 1.93E-37  3.37E-30
FBISC   | 1.41E-27  1.60E-17  | 6.31E-36  4.12E-19  | 2.43E-37  4.22E-21  | 1.81E-32  6.32E-23  | 4.81E-36  2.17E-27
CMI2NI  | 1.28E-10  1.58E-17  | 2.61E-08  3.62E-09  | 5.09E-22  8.09E-18  | 2.44E-11  2.21E-12  | 2.55E-12  3.08E-16
KFLR    | 1.63E-33  1.21E-27  | 7.43E-46  4.12E-32  | 4.43E-58  3.22E-39  | 2.81E-53  7.63E-35  | 3.61E-51  1.35E-37
Table 4

Score of common GRN methods and our method for DREAM4.

Method  | NET1     | NET2     | NET3     | NET4     | NET5     | Total Score
BMALR   | 2.10E+01 | 2.76E+01 | 3.88E+01 | 3.49E+01 | 3.75E+01 | 1.60E+02
GENIE3  | 2.70E+01 | 1.79E+01 | 4.33E+01 | 3.04E+01 | 3.23E+01 | 1.51E+02
MRNET   | 9.67E+00 | 1.97E+01 | 2.69E+01 | 2.69E+01 | 2.28E+01 | 1.06E+02
ARACNE  | 1.98E+01 | 2.09E+01 | 3.03E+01 | 2.87E+01 | 3.17E+01 | 1.31E+02
BGRMI   | 2.85E+01 | 2.37E+01 | 3.13E+01 | 4.03E+01 | 2.58E+01 | 1.50E+02
CLR     | 2.39E+01 | 2.05E+01 | 4.47E+01 | 2.84E+01 | 3.14E+01 | 1.49E+02
G1DBN   | 7.08E+00 | 1.48E+01 | 3.02E+01 | 2.81E+01 | 2.16E+01 | 1.02E+02
NARROMI | 1.82E+01 | 2.11E+01 | 3.35E+01 | 3.14E+01 | 3.49E+01 | 1.39E+02
TIGRESS | 2.06E+01 | 2.53E+01 | 3.46E+01 | 3.54E+01 | 3.67E+01 | 1.53E+02
GENIRF  | 2.30E+01 | 2.85E+01 | 3.90E+01 | 3.64E+01 | 3.76E+01 | 1.64E+02
MIBNI   | 2.18E+01 | 2.34E+01 | 3.40E+01 | 3.16E+01 | 3.31E+01 | 1.44E+02
FBISC   | 2.18E+01 | 2.68E+01 | 2.85E+01 | 2.70E+01 | 3.10E+01 | 1.35E+02
CMI2NI  | 1.33E+01 | 8.01E+00 | 1.92E+01 | 1.11E+01 | 1.36E+01 | 6.52E+01
KFLR    | 2.99E+01 | 3.83E+01 | 4.79E+01 | 4.33E+01 | 4.37E+01 | 2.03E+02
Recall and precision are the ratios of the number of correctly inferred interactions to all interactions in the gold-standard networks and in the reconstructed networks, respectively. The area under the PR curve (AUPR) provides an unbiased scalar estimate of the accuracy of the reconstructed GRNs, and the area under the ROC curve (AUROC) measures the overall performance of a model. Therefore, for better comparison, ROC curves on the noisy data are drawn for three subnets and several methods in Figs 3-5. According to these figures, the KFLR method generally gives better results. The PR curves on the noisy data are shown in Figs 6-8 for several methods and the three subnets individually; according to these figures, the KFLR approach in general produces better and more accurate results.
Fig 3

ROC curves for different methods in sub network1.

Fig 5

ROC curves for different methods in sub network3.

Fig 6

PR curves for different methods in sub network1.

Fig 8

PR curves for different methods in sub network3.

Performance comparison on the IRMA dataset

The different GRN inference methods were applied to reconstruct the IRMA (In vivo Reverse-engineering and Modeling Assessment) network. Table 5 shows the AUPRs of the GRNs inferred from the noisy data and from the original data. On the original data, KFLR is competitive with the BGRMI method when inferring the network from the switch-on data; on the switch-off data, KFLR has the highest accuracy. On the noisy data, KFLR outperforms the other methods. These results show that KFLR performs well both on in-silico datasets and on in-vivo experimental data.
Table 5

AUPRs of the In Vivo IRMA network.

Method  | Without noise           | With noise
        | switch-on  switch-off   | switch-on  switch-off
BMALR   | 0.634      0.336        | 0.586      0.308
GENIE3  | 0.620      0.347        | 0.543      0.289
MRNET   | 0.417      0.324        | 0.358      0.217
ARACNE  | 0.472      0.358        | 0.412      0.271
BGRMI   | 0.904      0.574        | 0.762      0.354
CLR     | 0.423      0.372        | 0.353      0.254
G1DBN   | 0.600      0.313        | 0.521      0.211
NARROMI | 0.518      0.472        | 0.328      0.352
TIGRESS | 0.714      0.452        | 0.592      0.376
GENIRF  | 0.672      0.327        | 0.581      0.312
MIBNI   | 0.656      0.348        | 0.582      0.354
FBISC   | 0.478      0.372        | 0.434      0.292
CMI2NI  | 0.721      0.456        | 0.589      0.371
KFLR    | 0.896      0.721        | 0.834      0.709

Conclusion

In this paper, a new method was proposed to improve the accuracy of GRNs reconstructed from time-series gene expression data using two approaches: deletion of false-positive interactions and inference using model averaging. False-positive interactions were deleted using CMI and MI, and in the model averaging approach, a Kalman filter was proposed to compute the posterior probabilities of the edges from possible regulators to the target gene, combining Bayesian model averaging with linear regression. The Kalman filter is a linear state-space model that operates recursively on noisy time-series gene expression data to produce a statistically optimal estimate of the gene regulatory network. The results on the benchmark gene regulatory networks from the DREAM4 challenge and the in vivo IRMA network showed that the proposed method significantly outperforms other state-of-the-art methods. It was also shown that the method is more robust to noisy data.

S1 Data. Gene expression dataset of the DREAM4 and IRMA (RAR).

1.  Learning Bayesian networks with integration of indirect prior knowledge.

Authors:  Baikang Pei; David W Rowe; Dong-Guk Shin
Journal:  Int J Data Min Bioinform       Date:  2010       Impact factor: 0.667

2.  Lessons from the DREAM2 Challenges.

Authors:  Gustavo Stolovitzky; Robert J Prill; Andrea Califano
Journal:  Ann N Y Acad Sci       Date:  2009-03       Impact factor: 5.691

3.  SIRENE: supervised inference of regulatory networks.

Authors:  Fantine Mordelet; Jean-Philippe Vert
Journal:  Bioinformatics       Date:  2008-08-15       Impact factor: 6.937

4.  DDGni: dynamic delay gene-network inference from high-temporal data using gapped local alignment.

Authors:  Hari Krishna Yalamanchili; Bin Yan; Mulin Jun Li; Jing Qin; Zhongying Zhao; Francis Y L Chin; Junwen Wang
Journal:  Bioinformatics       Date:  2013-11-27       Impact factor: 6.937

5.  Inferring cellular regulatory networks with Bayesian model averaging for linear regression (BMALR).

Authors:  Xun Huang; Zhike Zi
Journal:  Mol Biosyst       Date:  2014-08

6.  Conditional mutual inclusive information enables accurate quantification of associations in gene regulatory networks.

Authors:  Xiujun Zhang; Juan Zhao; Jin-Kao Hao; Xing-Ming Zhao; Luonan Chen
Journal:  Nucleic Acids Res       Date:  2014-12-24       Impact factor: 16.971

7.  Cluster analysis and display of genome-wide expression patterns.

Authors:  M B Eisen; P T Spellman; P O Brown; D Botstein
Journal:  Proc Natl Acad Sci U S A       Date:  1998-12-08       Impact factor: 11.205

8.  GENIES: gene network inference engine based on supervised analysis.

Authors:  Masaaki Kotera; Yoshihiro Yamanishi; Yuki Moriya; Minoru Kanehisa; Susumu Goto
Journal:  Nucleic Acids Res       Date:  2012-05-18       Impact factor: 16.971

9.  BGRMI: A method for inferring gene regulatory networks from time-course gene expression data and its application in breast cancer research.

Authors:  Luis F Iglesias-Martinez; Walter Kolch; Tapesh Santra
Journal:  Sci Rep       Date:  2016-11-23       Impact factor: 4.379

10.  Wisdom of crowds for robust gene network inference.

Authors:  Daniel Marbach; James C Costello; Robert Küffner; Nicole M Vega; Robert J Prill; Diogo M Camacho; Kyle R Allison; Manolis Kellis; James J Collins; Gustavo Stolovitzky
Journal:  Nat Methods       Date:  2012-07-15       Impact factor: 28.547

