
Prediction of protein-protein interaction with pairwise kernel support vector machine.

Shao-Wu Zhang, Li-Yang Hao, Ting-He Zhang.

Abstract

Protein-protein interactions (PPIs) play a key role in many cellular processes. Unfortunately, the experimental methods currently used to identify PPIs are both time-consuming and expensive. These obstacles could be overcome by developing computational approaches to predict PPIs. Here, we report two methods of amino acids feature extraction: (i) distance frequency with PCA reducing the dimension (DFPCA) and (ii) amino acid index distribution (AAID) representing the protein sequences. In order to obtain the most robust and reliable results for PPI prediction, pairwise kernel function and support vector machines (SVM) were employed to avoid the concatenation order of two feature vectors generated with two proteins. The highest prediction accuracies of AAID and DFPCA were 94% and 93.96%, respectively, using the 10 CV test, and the results of pairwise radial basis kernel function are considerably improved over those based on radial basis kernel function. Overall, the PPI prediction tool, termed PPI-PKSVM, which is freely available at http://159.226.118.31/PPI/index.html, promises to become useful in such areas as bio-analysis and drug development.


Year:  2014        PMID: 24566145      PMCID: PMC3958907          DOI: 10.3390/ijms15023220

Source DB:  PubMed          Journal:  Int J Mol Sci        ISSN: 1422-0067            Impact factor:   5.923


Introduction

Protein–protein interactions (PPIs) play an important role in such biological processes as the host immune response, the regulation of enzymes, signal transduction and the mediation of cell adhesion. Understanding PPIs will bring more insight into disease etiology at the molecular level and potentially simplify the discovery of novel drug targets [1]. Information about protein–protein interactions has also been used to address many biologically important problems [2-5], such as the prediction of protein function [2], regulatory pathways [3], signal propagation during colorectal cancer progression [4], and the identification of colorectal cancer related genes [5]. Experimental methods of identifying PPIs can be roughly categorized into low- and high-throughput methods [6]. However, PPI data obtained from low-throughput methods only cover a small fraction of the complete PPI network, and high-throughput methods often produce a high frequency of false PPI information [7]. Moreover, experimental methods are expensive, time-consuming and labor-intensive. The development of reliable computational methods to facilitate the identification of PPIs could overcome these obstacles. Thus far, a number of computational approaches have been developed for the large-scale prediction of PPIs based on protein sequence, structure and evolutionary relationships in complete genomes. These methods can be roughly categorized into those that are genomic-based [8,9], structure-based [10], and sequence-based [11-26]. Genomic- and structure-based methods cannot be implemented if prior information about the proteins is not available. Sequence-based methods are more universal, but they concatenate the two feature vectors of proteins P1 and P2 to represent the protein pair P1–P2, and the concatenation order of the two feature vectors affects the prediction results. For example, if we use feature vectors x1 and x2 to represent proteins P1 and P2, respectively, then the P1–P2 protein pair can be expressed as x = x1 ⊕ x2 or x = x2 ⊕ x1.
In general, however, x1 ⊕ x2 is not equal to x2 ⊕ x1. Furthermore, PPIs have a symmetrical character; that is, the interaction of protein P1 with protein P2 equals the interaction of protein P2 with protein P1. Under these circumstances, concatenating the two feature vectors of proteins P1 and P2 to represent the protein pair P1–P2 and then using a traditional kernel k(x1, x2) to predict PPIs is not workable. Therefore, in this paper, we introduce two feature extraction approaches, amino acid distance frequency with PCA dimension reduction (DFPCA) and amino acid index distribution (AAID), to represent the protein sequences, followed by the use of a pairwise kernel function and SVM to predict PPIs.

Results and Discussion

LIBSVM [27], downloaded from http://www.csie.ntu.edu.tw/~cjlin, is a library for Support Vector Machines (SVMs) and was used to design the classifier in this paper. The kernel program of the software was modified to implement the pairwise kernel functions, which were formed from the RBF kernel function K(x1, x2) in all experiments.
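The same setup can be reproduced without modifying the LIBSVM source by passing a precomputed Gram matrix to an SVM implementation. Below is a minimal sketch (our own, using scikit-learn rather than the authors' modified LIBSVM) of the KII pairwise radial basis kernel built from a base RBF kernel; the feature vectors and labels are random placeholders, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

def rbf(u, v, gamma=0.1):
    # Base RBF kernel K(u, v) = exp(-gamma * ||u - v||^2).
    return np.exp(-gamma * np.sum((u - v) ** 2))

def prbf(p, q, gamma=0.1):
    # KII pairwise radial basis kernel between protein pairs
    # p = (x1, x2) and q = (x3, x4).
    (x1, x2), (x3, x4) = p, q
    return (rbf(x1, x3, gamma) - rbf(x1, x4, gamma)
            - rbf(x2, x3, gamma) + rbf(x2, x4, gamma)) ** 2

# Placeholder data: 20 protein pairs, each protein a 6-dim feature vector.
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=6), rng.normal(size=6)) for _ in range(20)]
y = np.array([0, 1] * 10)  # interacting / noninteracting labels

# A precomputed Gram matrix stands in for LIBSVM's built-in kernels.
G = np.array([[prbf(p, q) for q in pairs] for p in pairs])
clf = SVC(kernel="precomputed").fit(G, y)
pred = clf.predict(G)
```

Because the pairwise kernel depends only on base-kernel evaluations, swapping in a different base kernel requires changing one function rather than the SVM solver.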

The Results of DFPCA and AAID with KII Pairwise Kernel Function SVM

In statistical prediction, the following three cross-validation methods are often used to examine a predictor's effectiveness in practical application: the independent dataset test, the K-fold crossover or subsampling test, and the jackknife test [28]. Of the three, the jackknife test is deemed the least arbitrary and always yields a unique result for a given benchmark dataset, as demonstrated by Equations (28)–(30) in [29]. Accordingly, the jackknife test has been increasingly and widely used by investigators to examine the quality of various predictors (see, e.g., [30-41]). However, to reduce the computational time, we adopted the 10-fold cross-validation (10 CV) test in this study, as done by many investigators using SVM as the prediction engine. The four feature vector sets Hf, Vf, Pf and Zf extracted with DFPCA, and the five feature vector sets LEWP710101, QIAN880138, NADH010104, NAGK730103 and AURR980116 extracted with AAID, were employed as the input feature vectors for the KII pairwise radial basis kernel function (PRBF) SVM. The results of DFPCA and AAID are summarized in Table 1.
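As a rough illustration of the 10 CV protocol (on synthetic stand-in data, not the paper's feature sets), the fold splitting and accuracy averaging can be sketched with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a feature-vector set such as Hf: 100 samples,
# 8 features, binary labels (interacting vs. noninteracting).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)),
               rng.normal(1.5, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# 10-fold cross-validation: train on 9 folds, test on the held-out fold,
# and report the mean accuracy over the 10 rotations.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
print(f"ACC = {scores.mean():.2%} +/- {scores.std():.2%}")
```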
Table 1.

Results of DFPCA and AAID with PRBF SVM in 10 CV test.

| Feature Set | Sn (%) | PPV (%) | ACC (%) | MCC |
| Hf | 95.94 ± 1.92 | 91.98 ± 2.88 | 93.78 ± 1.44 | 0.8765 |
| Vf | 95.66 ± 2.75 | 92.52 ± 2.40 | 93.96 ± 1.86 | 0.8798 |
| Pf | 95.78 ± 2.23 | 92.07 ± 1.69 | 93.76 ± 1.93 | 0.8760 |
| Zf | 96.06 ± 1.24 | 91.71 ± 3.13 | 93.69 ± 1.86 | 0.8747 |
| LEWP710101 | 95.86 ± 2.23 | 92.08 ± 4.32 | 93.80 ± 2.42 | 0.8768 |
| QIAN880138 | 96.06 ± 2.83 | 92.27 ± 1.50 | 94.00 ± 1.22 | 0.8808 |
| NADH010104 | 95.82 ± 2.98 | 92.04 ± 2.51 | 93.76 ± 1.66 | 0.8760 |
| NAGK730103 | 96.06 ± 2.83 | 92.09 ± 4.02 | 93.90 ± 3.31 | 0.8789 |
| AURR980116 | 95.94 ± 2.07 | 92.33 ± 1.42 | 93.98 ± 1.24 | 0.8804 |
From Table 1, we can see that the performances of the two feature extraction approaches, i.e., amino acid distance frequency with PCA (DFPCA) and amino acid index distribution (AAID), are nearly equal when using the KII pairwise kernel SVM. The total prediction accuracies are 93.69%~94%. As previously noted, we used just five amino acid indices, LEWP710101, QIAN880138, NADH010104, NAGK730103 and AURR980116, to produce the feature vector sets. When we tested the performance of AAID with the remaining 480 amino acid indices from AAindex, we found that the choice of amino acid index does affect the predictive results, with total prediction accuracies ranging over 79.4%~94%. As noted above, our original five indices thus performed at the top of the range achieved across AAindex. To account for the better performance of these five indices, we point to the physicochemical and biochemical properties of amino acids. Using single-linkage clustering, one of the agglomerative hierarchical clustering methods, Tomii and Kanehisa [42] divided the minimum spanning tree of these amino acid indices into six regions: α and turn propensities, β propensity, amino acid composition, hydrophobicity, physicochemical properties, and other properties. The indices LEWP710101, QIAN880138, NAGK730103 and AURR980116 fall into the region of α and turn propensities, while NADH010104 falls into the hydrophobicity region, indicating that α and turn propensities, together with hydrophobicity, contain more distinguishable information for predicting PPIs.

The Comparison of Pairwise Kernel Function with Traditional Kernel Function

In order to evaluate the performance of the pairwise kernel function, we compared the results of the pairwise radial basis kernel function (PRBF) and the radial basis function kernel (RBF) with the same feature vector sets. For RBF, we concatenated the two feature vectors of proteins P1 and P2 to represent the protein pair P1–P2; that is, the feature vector x = x1 ⊕ x2 was used as the input feature vector of RBF. The results of RBF and PRBF with DFPCA in the 10 CV test are listed in Table 2.
Table 2.

Results of RBF and PRBF with DFPCA in the 10 CV test.

| Feature Set | Kernel Function | Sn (%) | PPV (%) | ACC (%) |
| Hf | RBF | 89.96 ± 0.52 | 89.65 ± 2.17 | 89.88 ± 1.05 |
|    | PRBF | 95.94 ± 1.92 | 91.98 ± 2.88 | 93.78 ± 1.44 |
| Vf | RBF | 90.20 ± 1.31 | 89.33 ± 2.60 | 89.72 ± 1.72 |
|    | PRBF | 95.66 ± 2.75 | 92.52 ± 2.40 | 93.96 ± 1.86 |
| Pf | RBF | 89.32 ± 0.86 | 89.26 ± 2.91 | 89.28 ± 1.44 |
|    | PRBF | 95.78 ± 2.23 | 92.07 ± 1.69 | 93.76 ± 1.93 |
| Zf | RBF | 90.84 ± 1.85 | 88.79 ± 2.50 | 89.64 ± 1.18 |
|    | PRBF | 96.06 ± 1.24 | 91.71 ± 3.13 | 93.69 ± 1.86 |
Table 2 shows that the performance of PRBF is superior to that of RBF for predicting PPI. The total prediction accuracies of PRBF are 3.9%~4.48% higher than those of RBF.

The Comparison of DF and DFPCA Feature Extraction Approaches

For the feature extraction approach of distance frequency of amino acids grouped by their physicochemical properties, we compared the results of DF and DFPCA with the PRBF SVM to test the validity of adopting PCA. The reduced feature matrix was set to retain 99.9% of the information in the original feature matrix under PCA. The results of DF and DFPCA with PRBF SVM in the 10 CV test are listed in Table 3.
Table 3.

Results of DF and DFPCA with PRBF SVM in the 10 CV test.

| Feature Set | Feature Extraction Approach | Sn (%) | PPV (%) | ACC (%) | MCC |
| Hf | DF | 97.37 ± 2.55 | 66.67 ± 27.8 | 74.34 ± 24.3 | 0.5485 |
|    | DFPCA | 95.94 ± 1.92 | 91.98 ± 2.88 | 93.78 ± 1.44 | 0.8765 |
| Vf | DF | 97.21 ± 2.39 | 71.40 ± 23.0 | 78.17 ± 27.1 | 0.6093 |
|    | DFPCA | 95.66 ± 2.75 | 92.52 ± 2.40 | 93.96 ± 1.86 | 0.8798 |
| Pf | DF | 97.13 ± 4.70 | 69.48 ± 25.5 | 77.23 ± 27.2 | 0.5937 |
|    | DFPCA | 95.78 ± 2.23 | 92.07 ± 1.69 | 93.76 ± 1.93 | 0.8760 |
| Zf | DF | 97.65 ± 4.82 | 62.29 ± 29.5 | 69.26 ± 23.6 | 0.4680 |
|    | DFPCA | 96.06 ± 1.24 | 91.71 ± 3.13 | 93.69 ± 1.86 | 0.8747 |
From Table 3, we can see that the performance of DFPCA is superior to that of DF. The total prediction accuracies and MCC (defined in the Assessment of Prediction System section) of DFPCA are 15.79%~24.43% and 0.2705~0.4067 higher than those of DF, respectively. Although the sensitivities of DF are slightly higher (1.43%~1.59%) than those of DFPCA for the Hf, Vf, Pf and Zf feature sets, its positive predictive values are much lower (by 21%~29%) than those of DFPCA, which means that the DFPCA approach can largely reduce the false positives. These results show that the performance of DFPCA is superior to that of DF for predicting PPI. It should be noted that feature vectors generated with either DF or DFPCA contain statistical information about the amino acids in protein sequences, as well as information about amino acid position and physicochemical properties.

The Performance of the Predictive System Influenced by Randomly Sampling the Noninteracting Protein Subchain Pairs

To investigate the influence of randomly sampling the noninteracting protein subchain pairs, we randomly sampled 2510 noninteracting protein subchain pairs five times to construct five negative sets, and we used the DFPCA approach with the hydrophobicity property to predict PPI in the 10 CV test. The results, shown in Table 4, indicate that the random sampling of noninteracting protein subchain pairs to construct negative sets has little influence on the performance of PPI-PKSVM.
Table 4.

Effect of random sampling of the noninteracting protein subchain pairs on the performance of PPI-PKSVM with DFPCA and PRBF SVM in the 10CV test.

| Sampling Time | Sn (%) | PPV (%) | ACC (%) | MCC |
| 1 | 95.38 ± 3.35 | 91.20 ± 3.37 | 93.09 ± 3.45 | 0.8627 |
| 2 | 95.42 ± 1.39 | 91.52 ± 3.24 | 93.29 ± 1.65 | 0.8665 |
| 3 | 95.46 ± 3.03 | 91.21 ± 1.63 | 93.13 ± 2.29 | 0.8635 |
| 4 | 95.46 ± 3.03 | 91.49 ± 1.70 | 93.29 ± 2.13 | 0.8666 |
| 5 | 95.94 ± 1.92 | 91.98 ± 2.88 | 93.78 ± 1.44 | 0.8765 |

Comparison of Different Prediction Methods

To demonstrate the prediction performance of our method, we compared it with other methods [25] on a nonredundant dataset constructed by Pan and Shen [25], in which no protein pair has sequence identity higher than 25%. The number of positive links, i.e., interacting protein pairs, is 3899, composed of 2502 proteins, and the number of negative links, i.e., noninteracting protein pairs, is 4262, composed of 661 proteins. Among the prediction results of the different methods shown in Table 5, the performance of PPI-PKSVM stands out as the best. Compared to Shen's LDA-RF, the accuracies (defined in the Assessment of Prediction System section) and MCCs of LEWP710101/QIAN880138 and Hf-DFPCA are respectively 1.9% and 2%, and 0.038 and 0.039, higher. These results indicate that our method is a very promising computational strategy for predicting protein–protein interactions based on protein sequences.
Table 5.

Performance comparison of different PPI methods using Shen’s dataset a in the 10 CV test.

| Method | Sn (%) | Sp (%) | ACC (%) | MCC |
| LEWP710101 | 97.3 ± 0.04 | 99.2 ± 0.04 | 98.3 ± 0.00 | 0.966 ± 0.0006 |
| QIAN880138 | 97.3 ± 0.10 | 99.1 ± 0.10 | 98.3 ± 0.10 | 0.966 ± 0.002 |
| NADH010104 | 97.2 ± 0.07 | 99.2 ± 0.04 | 98.3 ± 0.05 | 0.965 ± 0.0007 |
| NAGK730103 | 97.2 ± 0.06 | 99.2 ± 0.04 | 98.2 ± 0.06 | 0.965 ± 0.0004 |
| AURR980116 | 97.3 ± 0.04 | 99.1 ± 0.06 | 98.2 ± 0.06 | 0.965 ± 0.0006 |
| Hf-DFPCA | 97.6 ± 0.20 | 99.1 ± 0.10 | 98.4 ± 0.10 | 0.967 ± 0.002 |
| Vf-DFPCA | 97.5 ± 0.10 | 98.9 ± 1.00 | 98.3 ± 0.80 | 0.965 ± 0.007 |
| Pf-DFPCA | 96.9 ± 0.10 | 99.5 ± 0.60 | 98.2 ± 0.60 | 0.964 ± 0.004 |
| Zf-DFPCA | 97.9 ± 0.90 | 96.0 ± 0.20 | 96.9 ± 1.10 | 0.939 ± 0.002 |
| LDA-RF b | 94.2 ± 0.40 | 98.0 ± 0.30 | 96.4 ± 0.30 | 0.928 ± 0.006 |
| LDA-RoF b | 93.7 ± 0.50 | 97.6 ± 0.60 | 95.7 ± 0.40 | 0.918 ± 0.007 |
| LDA-SVM b | 89.7 ± 1.30 | 91.5 ± 1.10 | 90.7 ± 0.90 | 0.813 ± 0.018 |
| AC-RF b | 94.0 ± 0.60 | 96.6 ± 0.40 | 95.5 ± 0.30 | 0.914 ± 0.007 |
| AC-RoF b | 93.3 ± 0.70 | 97.1 ± 0.70 | 95.1 ± 0.60 | 0.910 ± 0.009 |
| AC-SVM b | 94.0 ± 0.60 | 84.9 ± 1.70 | 89.3 ± 0.80 | 0.792 ± 0.014 |
| PseAAC-RF b | 94.1 ± 0.90 | 96.9 ± 0.30 | 95.6 ± 0.40 | 0.912 ± 0.007 |
| PseAAC-RoF b | 93.6 ± 0.90 | 96.7 ± 0.40 | 95.3 ± 0.50 | 0.907 ± 0.009 |
| PseAAC-SVM b | 89.9 ± 0.70 | 92.0 ± 0.40 | 91.2 ± 0.40 | 0.821 ± 0.006 |

a Shen’s dataset contains two subdatasets, C and D, which are available at http://www.csbio.sjtu.edu.cn/bioinf/LR_PPI/Data.htm;
b These results are taken from Table 4 of [25].

Experimental Section

Dataset

To construct the PPI dataset, we first obtained the subchain pair names of PPIs from the PRISM (Protein Interactions by Structural Matching) server (http://prism.ccbb.ku.edu.tr/prism/), which was used to explore protein interfaces, and we downloaded the corresponding sequences of these protein subchain pairs from the Protein Data Bank (PDB) database (http://www.rcsb.org/pdb/). According to PRISM [43], a subchain pair is defined as an interacting subchain pair if the interface residues of the two protein subchains exceed 10; otherwise, the subchain pair is defined as a noninteracting subchain pair. For example, suppose a protein complex has A, B, C and D subchains. If the interface residues of the AB, AC, and BD subchain pairs each total more than 10, while the interface residues of the AD, BC and CD subchain pairs each total fewer than 10, then the AB, AC, and BD subchain pairs are treated as interacting subchain pairs, while the AD, BC and CD subchain pairs are treated as noninteracting subchain pairs. All interacting protein subchain pairs were used in preparing the positive dataset, and all noninteracting subchain pairs were used in preparing the negative dataset. To reduce the redundancy and homology bias for methodology development, all protein subchain pairs were screened according to the following procedures [15]: (i) protein subchain pairs containing a protein subchain with fewer than 50 amino acids were removed; (ii) for subchain pairs having ≥40% sequence identity, only one subchain pair was kept. The ≥40% criterion may be understood as follows. Suppose protein subchain pair A is formed from protein subchains A1 and A2, and protein subchain pair B is formed from protein subchains B1 and B2. If the sequence identity between protein subchains A1 and B1 and between A2 and B2 is ≥40%, or the sequence identity between protein subchains A1 and B2 and between A2 and B1 is ≥40%, then the two protein subchain pairs are defined as having ≥40% sequence identity.
In our method, we retained only those subchain pairs having <40% sequence identity. After these screening procedures, the resultant positive set comprised 2510 interacting protein subchain pairs, while the resultant negative set contained far more noninteracting protein subchain pairs. To avoid unbalanced data between the positive and negative sets, we randomly sampled 2510 noninteracting protein subchain pairs to construct the negative set. Finally, a PPI dataset consisting of 2510 interacting protein subchain pairs and 2510 noninteracting protein subchain pairs was constructed.

Distance Frequency of Amino Acids Grouped with Their Physicochemical Properties

The frequency of the distance between two successive amino acids, or distance frequency, was used by Matsuda et al. [44] to predict subcellular location and can be described as follows. For a protein sequence P, the distance set d_A between two successive occurrences of a letter (e.g., A) in P can be represented as

d_A = {d_1, d_2, ..., d_(n-1)}

where n is the number of letter As appearing in protein sequence P, d_i is the distance from the ith letter A to the (i + 1)th letter A, and d_i is calculated in a left-to-right fashion. The distance frequency vector for letter A is then defined as

Df_A = [N_1, N_2, ..., N_j, ...]

where N_j represents the number of times that the jth distance unit appears in the set d_A. For example, for the protein sequence AACDAMMADA, the distance sets of letters A, C, D and M are d_A = {1, 3, 3, 2}, d_C = {}, d_D = {5} and d_M = {1}, respectively. As a result, the corresponding distance frequency vectors are Df_A = [1,1,2,0,0], Df_C = [0,0,0,0,0], Df_D = [0,0,0,0,1] and Df_M = [1,0,0,0,0]. The distance frequency vectors of the other 16 basic amino acids are the zero vector V = [0,0,0,0,0]. Thus, we can encode the protein sequence P with the feature vector x = [Df_A, Df_C, Df_D, ..., Df_Y], the concatenation of the 20 per-residue distance frequency vectors. In this work, we used the concept of distance frequency [44] and borrowed Dubchak's idea of representing the amino acid sequence with four physicochemical properties [45] to encode the protein subchain sequence. First, according to the amino acid values of such physicochemical properties as hydrophobicity [46], normalized van der Waals volume [47], polarity [48] and polarizability [49], the 20 natural amino acids can be divided into three groups [45], as listed in Table 6. For hydrophobicity, normalized van der Waals volume, polarity and polarizability, the amino acids in Groups 1, 2 and 3 are denoted H1, H2, H3; V1, V2, V3; P1, P2, P3; and Z1, Z2, Z3, respectively.
Second, each protein subchain sequence was translated into the appropriate three-symbol sequence, depending on the particular physicochemical property, be it H1−3, V1−3, P1−3 or Z1−3. For example, suppose that the original protein sequence is MKEKEFQSKP. Using the hydrophobicity symbols, this sequence is translated into H3H1H1H1H1H3H1H2H1H2, and the same procedure applies for V1−3, P1−3 and Z1−3. Third, the distance frequency of every symbol in the translated sequence was computed; in the above example, the H1, H2 and H3 distance frequencies would be computed from the sequence H3H1H1H1H1H3H1H2H1H2. Finally, every protein subchain sequence is encoded by the feature vector x = [Df_H1, Df_H2, Df_H3] (and analogously for the other three properties):
Table 6.

Amino acid groups classified according to their physicochemical value.

| Physicochemical property | Group 1 | Group 2 | Group 3 |
| Hydrophobicity | H1: R,K,E,D,Q,N | H2: G,A,S,T,P,H,Y | H3: C,V,L,I,M,F,W |
| van der Waals volume | V1: G,A,S,C,T,P,D | V2: N,V,E,Q,I,L | V3: M,H,K,F,R,Y,W |
| Polarity | P1: L,I,F,W,C,M,V,Y | P2: P,A,T,G,S | P3: H,Q,R,K,N,E,D |
| Polarizability | Z1: G,A,S,D,T | Z2: C,P,N,V,E,Q,I,L | Z3: K,M,H,F,R,Y,W |
Conveniently, the feature sets based on hydrophobicity, normalized van der Waals volume, polarity and polarizability are written as Hf, Vf, Pf and Zf, respectively. In general, the dimensions of two feature vectors generated separately from two protein subchains are unequal. To solve this issue, we enlarge (zero-pad) the feature vector of one protein subchain so that its dimension equals that of the other subchain's feature vector. For example, given the following protein subchain pair P1−P2:

Subchain P1 amino acid sequence: MKEKEFQSKP
Subchain P2 amino acid sequence: QNSLALHKVIMVGSG

If we adopt the property of hydrophobicity, then the P1 and P2 amino acid sequences are translated into the following symbol sequences, respectively:

Subchain P1: H3H1H1H1H1H3H1H2H1H2
Subchain P2: H1H1H2H3H2H3H2H1H3H3H3H3H2H2H2

Then the distance sets of subchain P1 are d_H1 = {1,1,1,2,2}, d_H2 = {2} and d_H3 = {5}, and those of subchain P2 are d_H1 = {1,6}, d_H2 = {2,2,6,1,1} and d_H3 = {2,3,1,1,1}; the distance frequency vectors of the two subchains follow as before, with the shorter one zero-padded to the common dimension. Hereinafter we use "DF" to denote this distance frequency method of grouping amino acids by their physicochemical properties. When DF is used to represent the protein subchain pair, the feature vector is sparse, and its dimension grows with the subchain length. To further compress the features, principal component analysis (PCA) was used to reduce the dimension; amino acid distance frequency combined with PCA dimension reduction is hereafter termed DFPCA.
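The DF encoding can be checked with a short script; this is an illustrative re-implementation (the function names are ours, not the authors'), reproducing the AACDAMMADA and MKEKEFQSKP examples from the text:

```python
from collections import defaultdict

# Hydrophobicity grouping from Table 6, mapped to single symbols 1/2/3.
HYDRO = {**{a: "1" for a in "RKEDQN"},
         **{a: "2" for a in "GASTPHY"},
         **{a: "3" for a in "CVLIMFW"}}

def distance_frequency(seq, alphabet, dim):
    """Distance frequency vectors: for each symbol, count how often
    each distance 1..dim occurs between successive occurrences."""
    positions = defaultdict(list)
    for i, ch in enumerate(seq, start=1):
        positions[ch].append(i)
    return {s: [sum(1 for a, b in zip(positions[s], positions[s][1:])
                    if b - a == j) for j in range(1, dim + 1)]
            for s in alphabet}

# Worked example from the text: distances of A in AACDAMMADA are 1, 3, 3, 2.
df = distance_frequency("AACDAMMADA", "ACDM", dim=5)
print(df["A"])  # [1, 1, 2, 0, 0]

# Property translation: MKEKEFQSKP -> H3H1H1H1H1H3H1H2H1H2.
translated = "".join(HYDRO[a] for a in "MKEKEFQSKP")
print("H" + "H".join(translated))  # H3H1H1H1H1H3H1H2H1H2
```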

Amino Acid Index Distribution (AAID)

Let I_1, I_2, ..., I_i, ..., I_20 be the amino acid physicochemical values of the 20 natural amino acids α_i (A, C, D, E, F, G, H, I, K, L, M, N, P, Q, R, S, T, V, W and Y), respectively, which can be accessed through the DBGET/LinkDB system by inputting an amino acid index (e.g., LEWP710101). An amino acid index is a set of 20 numerical values representing one of the many physicochemical and biochemical properties of amino acids; these indices can be downloaded from the AAindex database (http://www.genome.jp/aaindex/). For a given protein sequence P of length L, we replace each residue in the primary sequence by its physicochemical value, which results in a numerical sequence h_1, h_2, ..., h_i, ..., h_L (h_i ∈ {I_1, I_2, ..., I_20}). Then we define the following feature w_i of amino acid α_i to represent the protein sequence:

w_i = f_i · I_i

where f_i is the frequency with which amino acid α_i occurs in protein sequence P, I_i is the physicochemical value of amino acid α_i, and the symbol · denotes the simple product; f_i and I_i are mutually independent. Obviously, w_i includes the physicochemical and statistical information of amino acid α_i, but it loses the sequence-order information. Therefore, to let the feature vectors contain more sequence-order information, we introduced the 2-order center distance d_i, which accounts for the positions of amino acid α_i and is defined as

d_i = (1/N_i) Σ_(j=1..N_i) (k_j − k̄_i)²

where N_i is the total number of occurrences of amino acid α_i in protein sequence P, k_j (j = 1, 2, ..., N_i) is the jth position of amino acid α_i in the sequence, and k̄_i is the mean position of amino acid α_i. Now the feature d_i contains the physicochemical, statistical and sequence-order information of amino acid α_i, but it still fails to distinguish protein pairs in some cases. For example, consider two protein pairs P1−P2 and P3−P4.
The sequences of proteins P1, P2, P3 and P4 are respectively:

P1: MPPRNKPNRR; P2: MPNPRNNKPPGRKTR
P3: MPRRNPPNRK; P4: MGTRPPRNNKPNPRK

Obviously, P1 and P3, as well as P2 and P4, have the same w and d features. If we use the orthogonal sum vector, we cannot distinguish between the P1−P2 and P3−P4 protein pairs. To solve this problem, the 3-order center distance t_i of amino acid α_i was introduced, defined as

t_i = (1/N_i) Σ_(j=1..N_i) (k_j − k̄_i)³

Finally, we can represent protein sequence P by serializing the three features above into a combined feature vector:

x = [w_1, d_1, t_1, w_2, d_2, t_2, ..., w_20, d_20, t_20]

The protein pair P1−P2 can now be represented by the feature vector x = x_1 ⊕ x_2 or x = x_2 ⊕ x_1. Generally, vector x_1 ⊕ x_2 is not equal to vector x_2 ⊕ x_1; as such, if a query protein pair P1−P2 is represented by the two concatenations, the prediction results may differ. In this paper, we choose the pairwise kernel function to resolve this dilemma.
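A sketch of the three AAID features (with our own function name, a made-up amino acid index, and the 1/N moment normalization assumed from the "center distance" definitions) reproduces the behavior described above: the two example sequences MPPRNKPNRR and MPRRNPPNRK share every w and d value, and only the 3-order feature t separates them.

```python
def aaid(seq, index):
    """Per-amino-acid features: w = frequency x index value,
    d = 2-order center distance, t = 3-order center distance."""
    feats = {}
    for aa, val in index.items():
        pos = [i for i, ch in enumerate(seq, start=1) if ch == aa]
        n = len(pos)
        w = (n / len(seq)) * val
        mean = sum(pos) / n if n else 0.0
        d = sum((k - mean) ** 2 for k in pos) / n if n else 0.0
        t = sum((k - mean) ** 3 for k in pos) / n if n else 0.0
        feats[aa] = (w, d, t)
    return feats

# Hypothetical index values for the residues used in the example.
IDX = {"M": 1.0, "P": 2.0, "R": 3.0, "N": 4.0, "K": 5.0}
f1 = aaid("MPPRNKPNRR", IDX)
f3 = aaid("MPRRNPPNRK", IDX)
print(f1["P"], f3["P"])  # same w and d; t = +6 vs. -6
```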

Pairwise Kernel Function

Ben-Hur and Noble [13] first introduced a tensor product pairwise kernel function KI to measure the similarity between two protein pairs. The comparison between a pair (x1, x2) and another pair (x3, x4) under KI is done through the comparison of x1 with x3 and x2 with x4, on the one hand, and the comparison of x1 with x4 and x2 with x3, on the other:

KI((x1, x2), (x3, x4)) = K(x1, x3)K(x2, x4) + K(x1, x4)K(x2, x3)

However, the KI kernel does not consider differences between the elements of the compared pairs in the feature space; therefore, Vert [50] proposed the following metric learning pairwise kernel KII:

KII((x1, x2), (x3, x4)) = (K(x1, x3) − K(x1, x4) − K(x2, x3) + K(x2, x4))²

In particular, two protein pairs might be very similar under the KII kernel even if the patterns of the first protein pair are very different from those of the second, whereas the KI kernel could assign a large dissimilarity to the same two pairs. It is easy to prove that the KII kernel satisfies both Mercer's condition and the pairwise kernel function condition. In this paper, we use the KII kernel function to predict PPI.
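A minimal implementation of both pairwise kernels (built here on a base RBF kernel; the function names are ours) makes the symmetry property explicit: swapping the proteins within either pair leaves both kernel values unchanged.

```python
import numpy as np

def rbf(u, v, gamma=0.5):
    # Base kernel K(u, v) = exp(-gamma * ||u - v||^2).
    return np.exp(-gamma * np.sum((u - v) ** 2))

def k_I(p, q):
    # Tensor product pairwise kernel of Ben-Hur and Noble.
    (x1, x2), (x3, x4) = p, q
    return rbf(x1, x3) * rbf(x2, x4) + rbf(x1, x4) * rbf(x2, x3)

def k_II(p, q):
    # Metric learning pairwise kernel of Vert.
    (x1, x2), (x3, x4) = p, q
    return (rbf(x1, x3) - rbf(x1, x4) - rbf(x2, x3) + rbf(x2, x4)) ** 2

a, b = np.array([0.0, 1.0]), np.array([2.0, 0.0])
c, d = np.array([1.0, 1.0]), np.array([0.0, 2.0])
# Swapping the order inside a pair does not change either kernel:
print(np.isclose(k_I((a, b), (c, d)), k_I((b, a), (c, d))))   # True
print(np.isclose(k_II((a, b), (c, d)), k_II((b, a), (c, d)))) # True
```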

Assessment of Prediction System

Sensitivity (Sn), specificity (Sp), positive predictive value (PPV), total prediction accuracy (ACC) and the Matthews correlation coefficient (MCC) [39-41] were employed to measure the performance of PPI-PKSVM:

Sn = TP/(TP + FN)
Sp = TN/(TN + FP)
PPV = TP/(TP + FP)
ACC = (TP + TN)/(TP + TN + FP + FN)
MCC = (TP × TN − FP × FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP and TN are the numbers of correctly predicted subchain pairs of interacting and noninteracting proteins, respectively, and FP and FN are the numbers of incorrectly predicted subchain pairs of noninteracting and interacting proteins, respectively.
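These measures follow directly from the confusion-matrix counts; the helper below (our own, applied to illustrative counts rather than results from the paper) computes all five at once.

```python
def metrics(tp, tn, fp, fn):
    """Sn, Sp, PPV, ACC and MCC from binary confusion-matrix counts."""
    sn = tp / (tp + fn)                      # sensitivity
    sp = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                     # positive predictive value
    acc = (tp + tn) / (tp + tn + fp + fn)    # total prediction accuracy
    mcc = ((tp * tn - fp * fn) /
           ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    return sn, sp, ppv, acc, mcc

# Illustrative counts, not results from the paper.
sn, sp, ppv, acc, mcc = metrics(tp=90, tn=85, fp=15, fn=10)
print(f"Sn={sn:.3f} Sp={sp:.3f} PPV={ppv:.3f} ACC={acc:.3f} MCC={mcc:.3f}")
```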

Conclusions

In this work, we introduced two feature extraction approaches to represent the protein sequence: amino acid distance frequency with PCA dimension reduction, termed DFPCA, and amino acid index distribution based on the physicochemical values of amino acids, termed AAID. A pairwise kernel function SVM was employed as the classifier to predict PPIs. From the results, we can conclude that (i) the performance of DFPCA is better than that of DF; (ii) the prediction power of PRBF is superior to that of RBF, suggesting that designing a rational pairwise kernel function is important for predicting PPIs; and (iii) DFPCA and AAID with a pairwise kernel function SVM are effective and promising approaches for predicting PPIs and may complement existing methods. Since user-friendly and publicly accessible web servers represent the future direction in the development of predictors, we have provided a web server for PPI-PKSVM at http://159.226.118.31/PPI/index.html. In its present version, PPI-PKSVM evaluates one protein pair at a time; we will soon develop a newer online version able to predict large numbers of PPIs.