Literature DB >> 35710342

Multi-view heterogeneous molecular network representation learning for protein-protein interaction prediction.

Xiao-Rui Su1,2,3, Lun Hu4,5,6, Zhu-Hong You7, Peng-Wei Hu1,2,3, Bo-Wei Zhao1,2,3.   

Abstract

BACKGROUND: Protein-protein interaction (PPI) plays an important role in regulating cells and signals. Despite the ongoing efforts of bioassay groups, the data remain incomplete, which limits our ability to understand the molecular roots of human disease. Therefore, it is urgent to develop computational methods that predict PPIs from the perspective of the molecular system.
METHODS: In this paper, a highly efficient computational model, MTV-PPI, is proposed for PPI prediction based on a heterogeneous molecular network, learning inter-view protein sequences and intra-view interactions between molecules simultaneously. On the one hand, the inter-view feature is extracted from the protein sequence by the k-mer method. On the other hand, the popular embedding method LINE is used to encode the heterogeneous molecular network and obtain the intra-view feature. The protein representation used in MTV-PPI is thus constructed by aggregating its inter-view and intra-view features. Finally, a random forest classifier is used to predict potential PPIs.
RESULTS: To prove the effectiveness of MTV-PPI, we conduct extensive experiments on a collected heterogeneous molecular network, achieving an accuracy of 86.55%, a sensitivity of 82.49%, a precision of 89.79%, an AUC of 0.9301 and an AUPR of 0.9308. Further comparison experiments with various protein representations and classifiers confirm the effectiveness of MTV-PPI in predicting PPIs based on a complex network.
CONCLUSION: The experimental results illustrate that MTV-PPI is a promising tool for PPI prediction, which may provide a new perspective for future interaction-prediction research based on heterogeneous molecular networks.
© 2022. The Author(s).

Keywords:  Heterogeneous molecular network; LINE; Network representation learning; Protein sequence; Protein–protein interaction


Year:  2022        PMID: 35710342      PMCID: PMC9205098          DOI: 10.1186/s12859-022-04766-z

Source DB:  PubMed          Journal:  BMC Bioinformatics        ISSN: 1471-2105            Impact factor:   3.307


Background

Protein–protein interactions (PPIs) are essential for growth, development, differentiation and apoptosis [1]. As a result, studying PPIs is an important task and constitutes a major component of research on the cell's biochemical reaction network, which aims to reveal the functions of proteins at the molecular level. In general, interactions between proteins are detected by high-throughput biomedical experiments, such as yeast two-hybrid screens [2], tandem affinity purification [3] and mass spectrometric protein complex identification [4]. The results they achieve are reliable, but they cannot keep pace with the booming growth of data; moreover, they are usually time-consuming and costly. To address these limitations, low-cost and high-efficiency computational models for identifying PPIs are urgently needed. With the development of computer technology, a large number of machine learning-based methods have been proposed and widely applied in bioinformatics in recent years [5-9]. The core of most of these methods is feature extraction. At an early stage, computational methods could only extract characteristics from limited protein information, such as protein structures, phylogenetic profiles, literature knowledge, network topology and genome data [10-13], and then, given a pair of proteins, predict the probability of their interaction. However, limited by the availability of such extra information, these early methods were hard to apply without pre-existing knowledge. Thanks to the popularity of high-throughput sequencing technology, protein sequence data has become the most readily available information, so computational methods are now basically constructed on protein amino-acid sequences. Moreover, most existing works show that features extracted from protein sequence information alone are sufficient to predict PPIs, given their strong performance [9].
Sequence-based approaches typically represent the protein sequence as a vector via feature extraction and predict PPIs from the obtained vectors [14, 15]. For example, Romero et al. [16] extract protein sequence features with a general-purpose numerical codification of polypeptides, which transforms pairs of amino-acid sequences into a machine learning-friendly vector whose elements are numerical descriptors of residues, and then classify unknown protein pairs with an SVM. Shen et al. [17] develop another computational method that learns a conjoint-triad feature from protein amino acids and achieves a high predictive accuracy of 83.90% on a dataset containing 16,000 diverse PPIs. Although these protein sequence-based methods obtain promising results, there is still room for improvement by integrating multi-source protein information. For instance, Chen et al. [18] construct a hybrid feature representation composed of three kinds of protein-pair representations and then adopt a stacked generalization scheme that integrates five learning algorithms to predict PPIs. Wang et al. [19-21] explore protein evolutionary features from the perspective of image-processing techniques, which opens a new way of researching protein sequences. Though the above computational methods handle the PPI prediction task well, they consider PPI prediction only at the protein level, ignoring the associations between proteins and other molecules, such as miRNA, lncRNA, diseases or drugs. It is therefore promising to predict PPIs from the view of the molecular system. To address the above limitations, we propose a systematic and comprehensive model that predicts PPIs by capturing inter-view protein sequences and intra-view interactions between molecules simultaneously. We first collect a heterogeneous molecular network with nine proven kinds of interactions across four kinds of molecules and diseases.
Then, the protein inter-view feature is extracted from its sequence by the k-mer method, while the intra-view feature is obtained by encoding the heterogeneous network with the popular network embedding method LINE (Large-scale Information Network Embedding). Finally, the aggregation of the inter-view and intra-view features is fed into a Random Forest (RF) to predict potential PPIs. The contributions of this work are summarized as follows: (i) we develop a novel multi-view heterogeneous molecular network representation learning framework, MTV-PPI, to predict potential PPIs based on both inter-view and intra-view features; (ii) MTV-PPI models both protein sequences and interactions between molecules to generate highly representative aggregated features for PPI prediction; (iii) we conduct extensive experiments on a collected heterogeneous molecular network, and the results demonstrate the effectiveness of MTV-PPI.

Materials and methods

As shown in Fig. 1, MTV-PPI is composed of four steps, including i) heterogeneous molecular network construction, ii) inter-view feature extraction, iii) intra-view feature extraction, and iv) PPI prediction.
Fig. 1

The overview of MTV-PPI


Heterogeneous molecular network construction

To predict PPIs from a systematic perspective, we first collect nine existing validated protein-related association datasets to construct the heterogeneous molecular network, as shown in Table 1.
Table 1

The statistics of associations in the heterogeneous molecular network

Type of associations | Sources | Number
miRNA-LncRNA | lncRNASNP2 [22] | 8374
miRNA-Disease | HMDD [23] | 16,427
miRNA-Protein | miRTarBase [24, 25] | 4944
LncRNA-Disease | LncRNADisease [26], lncRNASNP2 [22] | 1264
Protein–Protein | STRING [27] | 19,237
Protein-Disease | DisGeNET [28] | 25,087
Drug-Protein | DrugBank [29] | 11,107
Drug-Disease | CTD [30] | 18,416
LncRNA-Protein | LncRNA2Target [31] | 690
Total | | 105,546
As shown in Table 1, there are 19,237 validated PPIs in the collected heterogeneous molecular network after identifier unification, de-redundancy, simplification and deletion of irrelevant items. The statistics of the nodes in the constructed heterogeneous molecular network are shown in Table 2.
Table 2

The statistics of nodes in the heterogeneous molecular network

Type of nodes | Number
Protein | 1649
LncRNA | 769
miRNA | 1023
Disease | 2062
Drug | 1025

Inter-view feature extraction

After constructing the network, we collect the protein sequences from the STRING dataset [27] to extract the inter-view feature. However, the original sequence is composed of amino acids, which a machine cannot process directly, so it is necessary to embed the protein sequence into a machine-understandable vector before extracting the inter-view feature. According to the polarity of the side chain, Shen et al. [17] categorized the 20 amino acids into four groups: (Ala, Val, Leu, Ile, Met, Phe, Trp, Pro), (Gly, Ser, Thr, Cys, Asn, Gln, Tyr), (Arg, Lys, His) and (Asp, Glu). Inspired by Shen, we encode the sequence of each protein into a 64 ($4^3$)-dimensional vector using the 3-mer method. Initially, the vector is set to 0. Then a sliding window of length 3 scans the whole protein sequence with a step of 1. During this process, the amino-acid sub-sequence inside the window is counted at the corresponding position of the vector. After the sliding is complete, the vector is normalized, so that each dimension is the frequency at which the corresponding 3-mer appears in the original protein sequence. The vector has 64 dimensions because there are $4^3 = 64$ possible 3-mers over the four groups. Finally, the vector obtained by the 3-mer method serves as the attribute (inter-view) feature. The whole process is shown in Fig. 2.
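The sliding-window encoding above can be sketched as follows (the single-letter amino-acid codes and the group labels A–D are our own notational choices, not from the paper):

```python
from itertools import product

# The four amino-acid groups from Shen et al., used to reduce the 20-letter
# alphabet to 4 symbols before counting 3-mers (4^3 = 64 dimensions).
GROUPS = {
    **{aa: "A" for aa in "AVLIMFWP"},  # Ala, Val, Leu, Ile, Met, Phe, Trp, Pro
    **{aa: "B" for aa in "GSTCNQY"},   # Gly, Ser, Thr, Cys, Asn, Gln, Tyr
    **{aa: "C" for aa in "RKH"},       # Arg, Lys, His
    **{aa: "D" for aa in "DE"},        # Asp, Glu
}
KMERS = ["".join(p) for p in product("ABCD", repeat=3)]  # all 64 possible 3-mers
INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_feature(sequence):
    """Slide a length-3 window (step 1) over the group-reduced sequence and
    return the normalized 64-dimensional 3-mer frequency vector."""
    reduced = "".join(GROUPS[aa] for aa in sequence)
    vec = [0.0] * 64
    for i in range(len(reduced) - 2):
        vec[INDEX[reduced[i:i + 3]]] += 1.0
    total = sum(vec)
    return [v / total for v in vec] if total else vec
```

For instance, `kmer_feature("MKTAY")` reduces the sequence to "ACBAB" and counts the three 3-mers ACB, CBA and BAB, each with frequency 1/3.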
Fig. 2

An illustration of the process of extracting inter-view feature


Intra-view feature extraction

In order to predict PPIs from a global perspective, network embedding, which aims to learn node representations by mapping an original high-dimensional space into a low-dimensional vector space, is adopted in the proposed model to extract the intra-view feature of proteins from the heterogeneous molecular network. Various network embedding methods have been proposed, and they can generally be grouped into three categories: Matrix Factorization (MF)-based models [32], Random Walk (RW)-based models [33, 34], and Neural Network (NN)-based models [35, 36]. Taking both efficiency and model complexity into consideration, LINE [35] is integrated into our model to learn the intra-view feature of proteins. LINE maps the nodes of a large network into a vector space according to the density of their relationships, so that closely connected nodes are projected to nearby locations and the tightness of two nodes in the network can be measured. LINE defines the first-order proximity (see Fig. 3A) and the second-order proximity (see Fig. 3B) to capture network structure at the local and global levels, respectively. The first-order proximity is the direct similarity between two nodes. For each undirected node pair $(v_i, v_j)$, the joint probability between nodes $v_i$ and $v_j$ is defined as

$$p_1(v_i, v_j) = \frac{1}{1 + \exp(-\vec{u}_i^{\,T} \cdot \vec{u}_j)}$$

where $p_1(v_i, v_j)$ denotes the first-order proximity between nodes $v_i$ and $v_j$, and $\vec{u}_i$ denotes the intra-view feature (embedding vector) of node $v_i$.
Fig. 3

An illustration of first-order proximity and second-order proximity in LINE

The second-order proximity between a pair of nodes is the similarity between their neighboring network structures. Formally, let $p_i = \big(p_1(v_i, v_1), \ldots, p_1(v_i, v_{|V|})\big)$ denote the first-order proximities between $v_i$ and all other nodes; then the second-order similarity between $v_i$ and $v_j$ is determined by $p_i$ and $p_j$. The second-order proximity assumes that nodes sharing neighbors are similar to each other. Each node plays two roles: the node itself and a neighbor (context) of other nodes. Thus, the probability that $v_j$ is a neighbor of $v_i$ is defined as

$$p_2(v_j \mid v_i) = \frac{\exp(\vec{u}_j^{\,\prime T} \cdot \vec{u}_i)}{\sum_{k=1}^{|V|} \exp(\vec{u}_k^{\,\prime T} \cdot \vec{u}_i)}$$

where $\vec{u}_j^{\,\prime}$ is the representation of $v_j$ when it is treated as a context and $|V|$ is the number of nodes. In our model, we use the above two types of proximity simultaneously to optimize the intra-view features of protein nodes.
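The two LINE proximities can be sketched numerically as follows (a minimal illustration of the formulas, not the optimized LINE implementation, which trains these vectors with negative sampling):

```python
import math

def first_order_proximity(u_i, u_j):
    """p1(vi, vj) = 1 / (1 + exp(-u_i . u_j)): sigmoid of the dot product of
    the two nodes' embedding vectors (local, pairwise closeness)."""
    dot = sum(a * b for a, b in zip(u_i, u_j))
    return 1.0 / (1.0 + math.exp(-dot))

def second_order_proximity(u_i, context_vectors, j):
    """p2(vj | vi) = exp(u'_j . u_i) / sum_k exp(u'_k . u_i): a softmax over
    the context vectors, so nodes with shared neighbors end up with similar
    embeddings."""
    scores = [math.exp(sum(a * b for a, b in zip(c, u_i)))
              for c in context_vectors]
    return scores[j] / sum(scores)
```

Training LINE maximizes these probabilities over observed edges, which pulls the embeddings of connected (and co-neighbored) nodes together.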

PPI prediction

After extracting the protein inter-view and intra-view features, a concatenation aggregation function is adopted to generate the final protein representation. Specifically, suppose the inter-view feature and intra-view feature of node $v_i$ are denoted as $x_i^{inter}$ and $x_i^{intra}$; then the final representation of $v_i$ is formulated as

$$h_i = W \cdot \big(x_i^{inter} \oplus x_i^{intra}\big) + b$$

where $\oplus$ denotes concatenation, $h_i$ denotes the final representation of $v_i$, and $W$ and $b$ are trainable parameters. In this study, PPI prediction is viewed as a binary classification task: given a protein pair, their final representations are fed into a classifier to predict whether the two proteins interact with each other. The effect of the classifier choice is discussed in a later section.
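A minimal sketch of the concatenation step follows. The shape of the linear map (W as a list of rows, b as offsets) and the way a pair's representations are composed for the classifier are our own illustrative assumptions; the paper only states that the two final representations are sent into the classifier:

```python
def aggregate(x_inter, x_intra, W, b):
    """h = W * (x_inter (+) x_intra) + b: concatenate the two views, then
    apply a (trainable) linear map. W is a list of rows, b a list of offsets."""
    x = x_inter + x_intra  # (+): plain list concatenation
    return [sum(w * xi for w, xi in zip(row, x)) + off
            for row, off in zip(W, b)]

def pair_feature(h_a, h_b):
    """Compose a protein pair's classifier input; here we simply concatenate
    the two final representations (an assumption, see lead-in)."""
    return h_a + h_b
```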

Performance evaluation indicators

The heterogeneous molecular network collected in this work contains 19,237 PPIs, all of which are regarded as positive samples in MTV-PPI. To prove the effectiveness of MTV-PPI, five-fold cross-validation is adopted. Specifically, the positive PPI samples are randomly divided into five equal subsets, and for each subset an equal number of negative samples is randomly selected from the complement set of the positive PPIs. During five-fold cross-validation, each subset in turn serves as the test set, with the remaining network (excluding the test-set PPIs) as the training set; the average over the five runs is taken as the final performance of MTV-PPI. Several criteria are used to evaluate the proposed method, including accuracy (Acc.), sensitivity (Sen.), precision (Pre.), Area Under the ROC Curve (AUC) and Area Under the Precision-Recall curve (AUPR). These criteria, defined below, assess the quality, robustness and predictability of the model from different perspectives:

$$Acc. = \frac{TP + TN}{TP + TN + FP + FN}, \qquad Sen. = \frac{TP}{TP + FN}, \qquad Pre. = \frac{TP}{TP + FP}$$

where FP, TP, FN and TN represent false positives, true positives, false negatives and true negatives, respectively.
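The three threshold-based criteria above can be computed directly from the confusion-matrix entries:

```python
def evaluate(tp, tn, fp, fn):
    """Accuracy, sensitivity and precision from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    sen = tp / (tp + fn)                   # sensitivity (recall)
    pre = tp / (tp + fp)                   # precision
    return acc, sen, pre
```

For example, with 80 true positives, 90 true negatives, 10 false positives and 20 false negatives, accuracy is 0.85 and sensitivity is 0.80.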

Results and discussion

Baseline algorithms

To demonstrate the effectiveness of MTV-PPI, we compare it with several state-of-the-art baseline algorithms, whose performances are also evaluated in the experiments. LR_PPI [37] is a sequence-based PPI prediction model that applies a stacked auto-encoder to encode protein sequences and then predicts PPIs. DPPI [38] is also a sequence-based PPI prediction model, applying a convolutional neural network combined with random projection and data augmentation. WSRC_GE [39] extracts features from protein sequences and then introduces a novel weighted sparse representation-based classifier for the PPI prediction task. LPPI [40] reconstructs a small-scale weighted network from basic protein information, learns the protein network representation by DeepWalk, and classifies the PPI samples with Logistic Regression (LR). PIPR [41] incorporates a deep residual recurrent convolutional neural network in a Siamese architecture to predict PPIs from protein sequences in an end-to-end way.

Experiment settings

MTV-PPI integrates RF with default parameters to classify PPIs. For the baseline algorithms, we first download the source code provided by the developers (or request it from them) and then apply it to the proposed heterogeneous molecular network under five-fold cross-validation on our machine. It should be noted that all the parameters used in these baseline algorithms are the same as in their original works. Moreover, we randomly divide all approved PPIs as positive samples, and the same number of negative samples is randomly selected from the complement set of the positive samples [42].
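A rough sketch of this fold-splitting and negative-sampling protocol (the function names and the rejection-sampling loop are our own; the paper only states that negatives are drawn from the complement of the positive set):

```python
import random

def five_fold_splits(positive_pairs, proteins, seed=0):
    """Shuffle the positives into five equal folds; for each fold, draw the
    same number of negatives from the complement of the positive set."""
    rng = random.Random(seed)
    pos = list(positive_pairs)
    rng.shuffle(pos)
    folds = [pos[i::5] for i in range(5)]
    positive_set = {frozenset(p) for p in positive_pairs}

    def draw_negative():
        # Rejection sampling: keep drawing random protein pairs until one is
        # found that never appears as a validated interaction.
        while True:
            a, b = rng.sample(proteins, 2)
            if frozenset((a, b)) not in positive_set:
                return (a, b)

    return [(fold, [draw_negative() for _ in fold]) for fold in folds]
```

Each fold then serves once as the balanced test set while the remaining folds train the model.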

Prediction performance of proposed model

In this section, we test the proposed model under five-fold cross-validation on the heterogeneous molecular network; Table 3 reports the results of each fold and the overall performance. The proposed model achieves 86.55% Acc., 82.49% Sen., 89.79% Pre., an AUC of 0.9301 and an AUPR of 0.9308. In addition, the standard deviation of each metric shows that the proposed model is stable: the average standard deviations are only 0.005 for Acc., 0.0085 for Sen., 0.0088 for Pre., 0.005 for AUC and 0.0045 for AUPR.
Table 3

Predictive performance under each fold on heterogeneous molecular network

Fold | Acc. | Sen. | Pre. | AUC | AUPR
0 | 0.8703 | 0.8264 | 0.9060 | 0.9341 | 0.9346
1 | 0.8732 | 0.8332 | 0.9056 | 0.9370 | 0.9378
2 | 0.8602 | 0.8181 | 0.8933 | 0.9234 | 0.9268
3 | 0.8617 | 0.8342 | 0.8828 | 0.9298 | 0.9270
4 | 0.8620 | 0.9124 | 0.9019 | 0.9262 | 0.9277
Overall | 0.8655 ± 0.0050 | 0.8249 ± 0.0085 | 0.8979 ± 0.0088 | 0.9301 ± 0.0050 | 0.9308 ± 0.0045

Best results are bolded


Comparison with baseline models

We reimplement all baseline models on our machine; the results are shown in Table 4 and Fig. 4. Comparing MTV-PPI with all baseline algorithms, we find that their performances vary greatly and that MTV-PPI achieves better results on most metrics. Compared with the sequence-based algorithms (LR_PPI, DPPI, PIPR and WSRC_GE), MTV-PPI yields the best performance, improving by approximately 7% on Acc., 5% on Sen., 15% on Pre., 0.08 on AUC and 0.08 on AUPR. This good performance is due to MTV-PPI's ability to learn complex features from the heterogeneous network and to aggregate them with the sequence-based feature. Moreover, although LPPI predicts PPIs based on a network, its performance is not as good as MTV-PPI's by and large. However, it achieves a better Sen., about 10% higher than MTV-PPI and higher than all other baseline algorithms. The possible reasons are twofold: (i) LPPI uses protein properties only to reduce the scale of the network, and these properties are not used in its further processing, while MTV-PPI integrates the protein attribute feature into the final feature, which enriches it to a certain extent; (ii) LPPI may lose information while reducing the size of the network when applied to the heterogeneous molecular network, while MTV-PPI is able to mine high-dimensional features from the whole heterogeneous molecular network.
Table 4

Results of various methods

Methods | Acc. | Sen. | Pre. | AUC | AUPR
LR_PPI | 0.7717 ± 0.0066 | 0.7551 ± 0.0090 | 0.7329 ± 0.0092 | 0.8482 ± 0.0060 | 0.8411 ± 0.0058
DPPI | 0.8007 ± 0.0087 | 0.7623 ± 0.0099 | 0.7677 ± 0.0090 | 0.8726 ± 0.0076 | 0.8903 ± 0.0078
WSRC_GE | 0.8225 ± 0.0105 | 0.7623 ± 0.0097 | 0.7987 ± 0.0123 | 0.9022 ± 0.0089 | 0.8975 ± 0.0086
LPPI | 0.8062 ± 0.0116 | 0.9275 ± 0.0124 | 0.7232 ± 0.0103 | 0.8424 ± 0.0173 | 0.8022 ± 0.0154
PIPR | 0.7536 ± 0.0090 | 0.7678 ± 0.0100 | 0.7456 ± 0.0098 | 0.8331 ± 0.0094 | 0.8246 ± 0.0096
MTV-PPI | 0.8655 ± 0.0050 | 0.8249 ± 0.0085 | 0.8979 ± 0.0088 | 0.9301 ± 0.0050 | 0.9308 ± 0.0045

Best results are bolded

Fig. 4

ROC and PR curves obtained by MTV-PPI and all baseline algorithms


Impact of aggregation function

The inter-view and intra-view features are aggregated by concatenation. To prove the effectiveness of the adopted aggregation function, we compare it with the widely used sum aggregation function [43], formulated as $h_i = W \cdot (x_i^{inter} + x_i^{intra}) + b$, where $W$ and $b$ are trainable weights. Figure 5 reports the results obtained by the two aggregators. It should be noted that all other parts of this variant are the same as MTV-PPI except the aggregator.
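The difference between the two aggregators is easy to see in a toy sketch (the sum aggregator assumes both views have been projected to the same dimension; the trainable linear map W, b is omitted for brevity):

```python
def concat_aggregate(x_inter, x_intra):
    """Concatenation keeps the two views in separate coordinates."""
    return x_inter + x_intra

def sum_aggregate(x_inter, x_intra):
    """Element-wise sum mixes the views coordinate by coordinate, which can
    blur information when the views come from unrelated sources."""
    return [a + b for a, b in zip(x_inter, x_intra)]
```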
Fig. 5

Predictive performances with two different aggregators

According to the results shown in Fig. 5, the concatenation aggregator adopted in MTV-PPI is superior to the sum aggregator. The possible reason is that the sum aggregator tends to detect the potential interaction between the two features [43], which may not be suitable for our model, since the features used in MTV-PPI are extracted from two separate views.

Impact of network representation learning algorithm

In MTV-PPI, the intra-view feature is learned by LINE, an NN-based representation learning method. In this section, we also implement Laplacian and DeepWalk, which belong to the MF-based and RW-based groups respectively, to validate the usefulness of LINE in the current task. Figure 6 summarizes the experimental results: neither Laplacian nor DeepWalk is as effective as LINE, likely because neither directly models the network topology. Laplacian learns low-dimensional representations of protein nodes by matrix factorization and DeepWalk learns representations through network paths, while LINE designs two kinds of topological similarities to learn low-dimensional representations for protein nodes.
Fig. 6

Results with different network embedding algorithms


Impact of various feature representation

As mentioned above, MTV-PPI models the inter-view and intra-view features simultaneously. In this section, we design two variants to assess the effect of each feature: the first takes only the inter-view feature into account, while the second predicts PPIs only from the intra-view feature. Both are trained and tested under five-fold cross-validation. Table 5 reports their performances and Fig. 7 shows their ROC and PR curves.
Table 5

Predictive performance with different feature type

Feature type | Acc. | Sen. | Pre. | AUC | AUPR
Inter-view feature | 0.7491 ± 0.0090 | 0.6945 ± 0.0109 | 0.7797 ± 0.0103 | 0.8206 ± 0.0080 | 0.8185 ± 0.0181
Intra-view feature | 0.8570 ± 0.0045 | 0.8130 ± 0.0105 | 0.8916 ± 0.0099 | 0.9240 ± 0.0046 | 0.9238 ± 0.0093
Aggregated feature | 0.8655 ± 0.0050 | 0.8249 ± 0.0085 | 0.8979 ± 0.0088 | 0.9301 ± 0.0050 | 0.9308 ± 0.0045

Best results are bolded

Fig. 7

ROC and PR curves obtained by various features

According to the results, the model with only the inter-view feature performs worst on all metrics, which indicates that the feature extracted from the protein sequence alone is insufficient for predicting PPIs on the heterogeneous molecular network. Compared with the inter-view feature, the intra-view feature improves the performance by 10.79% on Acc., 11.85% on Sen., 11.19% on Pre., 0.1034 on AUC and 0.1053 on AUPR, which demonstrates that the intra-view feature is more conducive to the PPI prediction task on the heterogeneous molecular network. Although the intra-view feature performs much better than the inter-view feature, the model with the aggregated feature achieves the best performance, because the aggregated feature contains both features and fuses them in an appropriate proportion.

Impact of various machine learning classifiers

In the proposed model, the RF classifier is integrated as the default classifier. To prove its effectiveness, we select several state-of-the-art machine learning classifiers, including SVM [44], LR [45], Naïve Bayes (NB) [46], AdaBoost [47] and XGBoost [48], and apply them on the same heterogeneous molecular network with the aggregated feature. All other parameters are the same as in the original works. Table 6 and Fig. 8 show the results of each classifier.
Table 6

Predictive performance with various classifiers

Classifier | Acc. | Sen. | Pre. | AUC | AUPR
SVM | 0.7103 ± 0.0078 | 0.7577 ± 0.0113 | 0.6921 ± 0.0074 | 0.7747 ± 0.0077 | 0.7686 ± 0.0074
LR | 0.7056 ± 0.0072 | 0.7452 ± 0.0119 | 0.6905 ± 0.0067 | 0.7733 ± 0.0078 | 0.7667 ± 0.0076
NB | 0.6772 ± 0.0084 | 0.7392 ± 0.0098 | 0.6578 ± 0.0090 | 0.7563 ± 0.0071 | 0.7827 ± 0.0075
AdaBoost | 0.6946 ± 0.0088 | 0.7306 ± 0.0115 | 0.6816 ± 0.0090 | 0.7669 ± 0.0094 | 0.7713 ± 0.0086
XGBoost | 0.8600 ± 0.0081 | 0.8867 ± 0.0063 | 0.8419 ± 0.0109 | 0.9326 ± 0.0051 | 0.9240 ± 0.0048
RF | 0.8655 ± 0.0050 | 0.8249 ± 0.0085 | 0.8979 ± 0.0088 | 0.9301 ± 0.0050 | 0.9308 ± 0.0045

Best results are bolded

Fig. 8

ROC and PR curves obtained by various classifiers

According to the results, the two linear classifiers (SVM and LR) have similar performances in predicting PPIs, but both are about 16% lower than the default classifier (RF) across all metrics on average, which indicates that a linear classifier is not suitable for processing features extracted from such a complex network. As for the generative model, NB, it performs worst, with an Acc. approximately 20% lower than that of RF. A possible reason is that the NB classifier is built on the assumption that the features of a sample are independent [49], which does not hold for the proposed task. Although AdaBoost, XGBoost and RF are all ensemble models, their performances differ considerably. Among the three, AdaBoost performs worst on classifying PPI samples, while XGBoost improves on it by about 17% on Acc., 15% on Sen., 16% on Pre., 0.17 on AUC and 0.15 on AUPR. A possible reason is that XGBoost introduces regularization and a pruning strategy to better fit the positive samples, which also explains its high Sen. and AUC. However, RF achieves the best results on most metrics and is more stable, with smaller standard deviations. As a result, we select RF as the default classifier of our model.

Impact of the type of heterogeneous molecular network

We have shown above that the heterogeneous molecular network helps to improve the performance of the PPI predictor. However, the network used in this paper contains five types of nodes, including miRNA, lncRNA, Drug, Disease and Protein, which makes it difficult to determine which types of nodes/edges benefit PPI prediction. To this end, we construct five sub-networks, as shown in Table 7, and apply MTV-PPI to them under five-fold cross-validation to determine which type of network is the most informative. Table 8 reports the experimental results on each sub-network, from which we observe that: (i) among the five sub-networks, DrPP contributes the most to PPI prediction, given its superior performance compared with MiPP, LncPP and DiPP; (ii) integrating miRNA into the protein–protein network also noticeably improves the performance of MTV-PPI; (iii) the effect of LncPP and DiPP is not obvious, even though their results are better than those of PP. In a word, DrPP is the most informative network for PPI prediction.
Table 7

The detail information of each subnetwork

Name | # Nodes | # Interactions
Protein–protein (PP) | 1649 | 19,237
miRNA–protein–protein (MiPP) | 2672 | 24,181
lncRNA–protein–protein (LncPP) | 2418 | 19,927
Disease–protein–protein (DiPP) | 3711 | 44,324
Drug–protein–protein (DrPP) | 2674 | 30,344
Table 8

Experimental results obtained on each sub-network

Sub-network | Acc. | Sen. | Pre. | AUC | AUPR
PP | 0.8348 ± 0.0065 | 0.7869 ± 0.0094 | 0.8704 ± 0.0075 | 0.9047 ± 0.0038 | 0.9144 ± 0.0041
MiPP | 0.8420 ± 0.0022 | 0.7936 ± 0.0055 | 0.8786 ± 0.0049 | 0.9095 ± 0.0020 | 0.9198 ± 0.0022
LncPP | 0.8350 ± 0.0044 | 0.7865 ± 0.0086 | 0.8711 ± 0.0081 | 0.9042 ± 0.0028 | 0.9143 ± 0.0028
DiPP | 0.8352 ± 0.0048 | 0.7796 ± 0.0064 | 0.8772 ± 0.0068 | 0.9053 ± 0.0031 | 0.9139 ± 0.0032
DrPP | 0.8537 ± 0.0034 | 0.8057 ± 0.0052 | 0.8913 ± 0.0026 | 0.9213 ± 0.0039 | 0.9291 ± 0.0040
All | 0.8655 ± 0.0050 | 0.8249 ± 0.0085 | 0.8979 ± 0.0088 | 0.9301 ± 0.0050 | 0.9308 ± 0.0045

Best results are bolded


Conclusion

In this paper, we propose a computational model, MTV-PPI, to predict PPIs on a heterogeneous molecular network by modeling the inter-view and intra-view features simultaneously. The inter-view feature characterizes the protein sequence information, while the intra-view feature describes the network structure. MTV-PPI aggregates the two features and predicts potential PPIs with an RF classifier. In this way, MTV-PPI takes both protein sequence information and network structure into account. The experimental results show that the aggregated feature contributes to improving model performance, and further experiments indicate that MTV-PPI is a promising tool for predicting PPIs based on the heterogeneous molecular network. In future work, we plan to expand the scale of the network by adding more molecules [50], incorporate relation semantics [51], and apply clustering techniques [52, 53] to reduce noise in the heterogeneous network.
References (39 in total; first 10 shown)

1.  Functional organization of the yeast proteome by systematic analysis of protein complexes.

Authors:  Anne-Claude Gavin; Markus Bösche; Roland Krause; Paola Grandi; Martina Marzioch; Andreas Bauer; Jörg Schultz; Jens M Rick; Anne-Marie Michon; Cristina-Maria Cruciat; Marita Remor; Christian Höfert; Malgorzata Schelder; Miro Brajenovic; Heinz Ruffner; Alejandro Merino; Karin Klein; Manuela Hudak; David Dickson; Tatjana Rudi; Volker Gnau; Angela Bauch; Sonja Bastuck; Bettina Huhse; Christina Leutwein; Marie-Anne Heurtier; Richard R Copley; Angela Edelmann; Erich Querfurth; Vladimir Rybin; Gerard Drewes; Manfred Raida; Tewis Bouwmeester; Peer Bork; Bertrand Seraphin; Bernhard Kuster; Gitte Neubauer; Giulio Superti-Furga
Journal:  Nature       Date:  2002-01-10       Impact factor: 49.962

2.  Systematic identification of protein complexes in Saccharomyces cerevisiae by mass spectrometry.

Authors:  Yuen Ho; Albrecht Gruhler; Adrian Heilbut; Gary D Bader; Lynda Moore; Sally-Lin Adams; Anna Millar; Paul Taylor; Keiryn Bennett; Kelly Boutilier; Lingyun Yang; Cheryl Wolting; Ian Donaldson; Søren Schandorff; Juanita Shewnarane; Mai Vo; Joanne Taggart; Marilyn Goudreault; Brenda Muskat; Cris Alfarano; Danielle Dewar; Zhen Lin; Katerina Michalickova; Andrew R Willems; Holly Sassi; Peter A Nielsen; Karina J Rasmussen; Jens R Andersen; Lene E Johansen; Lykke H Hansen; Hans Jespersen; Alexandre Podtelejnikov; Eva Nielsen; Janne Crawford; Vibeke Poulsen; Birgitte D Sørensen; Jesper Matthiesen; Ronald C Hendrickson; Frank Gleeson; Tony Pawson; Michael F Moran; Daniel Durocher; Matthias Mann; Christopher W V Hogue; Daniel Figeys; Mike Tyers
Journal:  Nature       Date:  2002-01-10       Impact factor: 49.962

3.  Discovering Variable-Length Patterns in Protein Sequences for Protein-Protein Interaction Prediction.

Authors:  Keith C C Chan
Journal:  IEEE Trans Nanobioscience       Date:  2015-05-21       Impact factor: 2.935

4.  In silico prediction of physical protein interactions and characterization of interactome orphans.

Authors:  Max Kotlyar; Chiara Pastrello; Flavia Pivetta; Alessandra Lo Sardo; Christian Cumbaa; Han Li; Taline Naranian; Yun Niu; Zhiyong Ding; Fatemeh Vafaee; Fiona Broackes-Carter; Julia Petschnigg; Gordon B Mills; Andrea Jurisicova; Igor Stagljar; Roberta Maestro; Igor Jurisica
Journal:  Nat Methods       Date:  2014-11-17       Impact factor: 28.547

5.  Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.

Authors:  Chi Wang; Xueqing Liu; Yanglei Song; Jiawei Han
Journal:  KDD       Date:  2015-08

6.  HMDD v3.0: a database for experimentally supported human microRNA-disease associations.

Authors:  Zhou Huang; Jiangcheng Shi; Yuanxu Gao; Chunmei Cui; Shan Zhang; Jianwei Li; Yuan Zhou; Qinghua Cui
Journal:  Nucleic Acids Res       Date:  2019-01-08       Impact factor: 16.971

7.  DrugBank 5.0: a major update to the DrugBank database for 2018.

Authors:  David S Wishart; Yannick D Feunang; An C Guo; Elvis J Lo; Ana Marcu; Jason R Grant; Tanvir Sajed; Daniel Johnson; Carin Li; Zinat Sayeeda; Nazanin Assempour; Ithayavani Iynkkaran; Yifeng Liu; Adam Maciejewski; Nicola Gale; Alex Wilson; Lucy Chin; Ryan Cummings; Diana Le; Allison Pon; Craig Knox; Michael Wilson
Journal:  Nucleic Acids Res       Date:  2018-01-04       Impact factor: 16.971

8.  PCVMZM: Using the Probabilistic Classification Vector Machines Model Combined with a Zernike Moments Descriptor to Predict Protein-Protein Interactions from Protein Sequences.

Authors:  Yanbin Wang; Zhuhong You; Xiao Li; Xing Chen; Tonghai Jiang; Jingting Zhang
Journal:  Int J Mol Sci       Date:  2017-05-11       Impact factor: 5.923

9.  miRBase: from microRNA sequences to function.

Authors:  Ana Kozomara; Maria Birgaoanu; Sam Griffiths-Jones
Journal:  Nucleic Acids Res       Date:  2019-01-08       Impact factor: 16.971

10.  LncRNA2Target v2.0: a comprehensive database for target genes of lncRNAs in human and mouse.

Authors:  Liang Cheng; Pingping Wang; Rui Tian; Song Wang; Qinghua Guo; Meng Luo; Wenyang Zhou; Guiyou Liu; Huijie Jiang; Qinghua Jiang
Journal:  Nucleic Acids Res       Date:  2019-01-08       Impact factor: 16.971

