
Similarity-based future common neighbors model for link prediction in complex networks.

Shibao Li, Junwei Huang, Zhigang Zhang, Jianhang Liu, Tingpei Huang, Haihua Chen.

Abstract

Link prediction aims to predict the existence of unknown links from network information. However, most similarity-based algorithms only utilize the current common-neighbor information and cannot reach high prediction accuracy in evolving networks. This paper therefore first defines the future common neighbors: nodes that can turn into common neighbors as the network evolves. To analyse whether the future common neighbors contribute to current link prediction, we propose the similarity-based future common neighbors (SFCN) model for link prediction, which accurately locates all the future common neighbors, besides the current common neighbors, and effectively measures their contributions. We also design and run three MATLAB simulation experiments. The first, which adjusts the two parameter weights in the SFCN model, reveals that the future common neighbors contribute more than the current common neighbors in complex networks. The other two, which compare the SFCN model with eight algorithms on five networks, demonstrate that the SFCN model has higher accuracy and better performance robustness.

Entities:  

Year:  2018        PMID: 30451945      PMCID: PMC6242980          DOI: 10.1038/s41598-018-35423-2

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

Many social, biological, and food-chain systems can be well described by networks, where nodes denote individuals, biological elements, and so on, and links represent the relations between nodes. Complex networks have therefore become a popular focus of many branches of science. An attractive research topic is link prediction, whose purpose is to predict the possibility of forming links between unconnected node pairs from the information in complex networks[1,2]. Link prediction can thus recover links that exist but are unobserved (the missing links) and links that may appear in the future (the future links)[3,4]. With the amount of available data increasing, link prediction plays an ever more crucial role in recommender systems, data mining, complex networks, and so on. For instance, in the protein-protein interaction network of yeast, 80% of the molecular interactions are still unknown, and whether a link between two nodes exists must be demonstrated by field and/or laboratory experiments[5]. If accurate prediction results are used to guide the laboratory experiments instead of blindly checking all possible interactions, the costs can be sharply reduced[6]. In scientific collaboration networks, link prediction helps to find potential cooperation between scientists[7,8]. Link prediction is also employed to recommend friends in online social networks[9,10] and to identify spurious links in noisy environments[11,12]. Many indexes for link prediction have been proposed to date. They are generally classified into three main families: models based on Markov chains[13-15], models based on machine learning[16,17], and models based on the similarity of topological structure[1,18]. Though the first two achieve high prediction accuracy in many networks, they do not scale to large networks due to their high computational complexity.
Similarity-based models, by contrast, avoid these problems and need only easily obtained network information. For instance, the Common Neighbor (CN) index, the most widely used index, simply counts the number of common neighbors between a node pair. Newman[19] used this quantity in a study of collaboration networks, showing a positive correlation between the number of common neighbors and the probability that two scientists will collaborate in the future. By taking into account both the number of common neighbors and the degrees of the two nodes, Salton et al. proposed the Salton index[20], and Leicht, Holme and Newman proposed the LHN index[21]. To characterize the topological similarity between reactants in metabolic networks, Ravasz, Somera et al. proposed the hub promoted index (HPI)[22]; because its denominator is the lower of the two degrees, the HPI tends to assign high scores to links adjacent to hubs. To measure the opposite effect of hubs, Zhou, Lü et al. put forward the hub depressed index (HDI)[23]. They also proposed the resource allocation (RA) index[23]: motivated by resource-allocation dynamics on complex networks, the RA index effectively improves accuracy by restraining the contributions of large-degree common neighbors. Additionally, Liu et al. proposed the Local Naive Bayes (LNB) model[24], which holds that different common neighbors play different roles and make different contributions; based on the LNB model, they improved the CN, RA and AA indexes. Since similarity indexes can predict links in networks, they can also be used to evaluate the evolving mechanisms of evolving networks[1]. In short, similarity-based algorithms predict future links using the current common-neighbor information[25].
However, under the very principles of these similarity indexes, some nodes that are currently not common neighbors can turn into common neighbors in the future. These nodes raise a series of new questions worth exploring. First, do such nodes contribute to the current prediction for a node pair? Although previous algorithms have proved that the current common neighbors promote two nodes to connect, it remains unclear whether nodes that are not yet common neighbors but can become common neighbors are also helpful for current link prediction. Second, if they do contribute, how can we locate these nodes and measure their contributions simultaneously? Previous algorithms can easily count the current common neighbors by analyzing only the network topology. However, the nodes described above have not yet become common neighbors, and they form different topological structures with the node pair and its surrounding nodes, which makes it challenging to locate them and measure their contributions with a simple method. To address these problems, we first define the nodes that are currently not common neighbors but can turn into common neighbors in the future as the future common neighbors, and divide them into three types according to their topological structure with other nodes. Second, we propose the similarity-based future common neighbors (SFCN) model for link prediction. The SFCN model accurately finds all the future common neighbors, besides the current common neighbors, and simultaneously measures their contributions using only existing similarity indexes. We also design and run three MATLAB simulation experiments. First, we conduct a priori experiment on α and β in the FWFB network.
The results provide strong evidence that the future common neighbors contribute more than the current common neighbors in complex networks. Second, by comparing the SFCN model with eight similarity-based algorithms in five networks, we find that the SFCN model has higher prediction accuracy overall. Third, experiments in which we change the ratio of the training set to the probe set in the five networks demonstrate that the SFCN model also has better performance robustness. The proposed SFCN model therefore achieves higher accuracy and robustness than popular algorithms, and the future common neighbors need to be considered for link prediction in evolving networks.

Results

Network and problem description

A network can be represented by an undirected graph G(V, E) without self-connections or multiple links between a node pair. In G(V, E), V is the set of nodes and E is the set of links, so |V| is the number of nodes in V. Define the fully connected network U, which contains |V|(|V| − 1)/2 links; U − E is then the set of nonexistent links. To evaluate the prediction accuracy of algorithms, we randomly divide the observed link set E into a training set E^T and a probe set E^P. E^T is the known information, while E^P is treated as unknown. Obviously, E = E^T ∪ E^P and E^T ∩ E^P = ∅. The purpose of link prediction is to accurately detect the missing links or the future links in U − E. Each link between a node pair (x, y) in U is given a score s_xy calculated by the link prediction algorithm. All the nonexistent links are sorted in descending order of their scores, and the links at the top are the most likely to exist.
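The setup above can be sketched in a few lines of Python. The toy graph, split ratio and CN scorer below are illustrative assumptions, not one of the paper's datasets:

```python
import random

# Hypothetical toy network: an undirected graph stored as a set of frozenset links.
V = [0, 1, 2, 3, 4]
E = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]}

def split_edges(E, probe_ratio=0.1, seed=42):
    """Randomly divide the observed link set E into a training set E^T
    and a probe set E^P, with E = E^T ∪ E^P and E^T ∩ E^P = ∅."""
    links = sorted(tuple(sorted(e)) for e in E)
    random.Random(seed).shuffle(links)
    n_probe = max(1, round(len(links) * probe_ratio))
    probe = {frozenset(e) for e in links[:n_probe]}
    return E - probe, probe

def neighbors(edges):
    """Neighbor sets read off the training links only."""
    nbr = {v: set() for v in V}
    for e in edges:
        a, b = tuple(e)
        nbr[a].add(b)
        nbr[b].add(a)
    return nbr

def rank_candidates(E_train, score):
    """Score every link in U - E^T and sort in descending order;
    links at the top are predicted most likely to exist."""
    U = {frozenset((x, y)) for i, x in enumerate(V) for y in V[i + 1:]}
    return sorted(U - E_train, key=lambda e: score(*tuple(e)), reverse=True)

E_train, E_probe = split_edges(E)
nbr = neighbors(E_train)
ranked = rank_candidates(E_train, lambda x, y: len(nbr[x] & nbr[y]))  # CN score
```

Any similarity index can be dropped in for the scoring lambda; the ranking step is common to all of the algorithms compared later.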

The future common neighbors

Most similarity-based algorithms for link prediction predict future links by using the current common neighbors. However, under the prediction principles of the above similarity indexes, some nodes that are currently not common neighbors can turn into common neighbors in the future. To analyze whether such nodes contribute to the current prediction for a node pair, and to accurately locate these nodes and measure their contributions, we define them as the future common neighbors and propose the similarity-based future common neighbors model for link prediction in evolving networks. The future common neighbors are nodes that are currently not common neighbors but can turn into common neighbors in the future under the principle of the similarity index. According to their topology with other nodes, the future common neighbors are divided into the three types shown in Fig. 1, where x and y are the target node pair for link prediction. The first type, like node i in Fig. 1(a), has a direct link with x but no direct link with y. Currently, the similarity score between i and y is s_iy. According to the prediction principle of the similarity algorithm, i and y may form a link in the future (the greater s_iy, the greater the probability of forming a link). Therefore, i may connect with y and turn into a common neighbor of x and y at a future time. The second type, like node i in Fig. 1(b), has a direct link with y but no direct link with x. The third type connects with neither x nor y, as node i in Fig. 1(c). According to the existing similarity indexes, if s_ix and s_iy are great enough, i will form links with both x and y; thus the node i in Fig. 1(c) can also become a common neighbor of x and y at a future time.
Figure 1

Three types of the future common neighbors.

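The three topological types can be identified mechanically from the neighbor sets and a chosen similarity index. A sketch in Python, where the small graph and the use of CN as the index are illustrative assumptions (this is not the paper's Fig. 1):

```python
def classify_future_cn(x, y, nbr, sim):
    """Sort candidate nodes i ∉ {x, y} into the three types of future
    common neighbors described above. `nbr` maps each node to its
    neighbor set; `sim` is any similarity index playing the role of C2."""
    types = {1: [], 2: [], 3: []}
    for i in nbr:
        if i in (x, y) or (i in nbr[x] and i in nbr[y]):
            continue  # skip the endpoints and the current common neighbors
        if i in nbr[x] and sim(i, y) > 0:
            types[1].append(i)  # linked to x now, may link to y later
        elif i in nbr[y] and sim(i, x) > 0:
            types[2].append(i)  # linked to y now, may link to x later
        elif sim(i, x) > 0 and sim(i, y) > 0:
            types[3].append(i)  # may link to both x and y later
    return types

# Illustrative graph; CN as the similarity index.
nbr = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
cn = lambda a, b: len(nbr[a] & nbr[b])
future = classify_future_cn(3, 4, nbr, cn)
```

For the pair (3, 4) in this graph, node 1 is a first-type, node 2 a second-type and node 0 a third-type future common neighbor.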

Similarity-based future common neighbors model

Combining the future common neighbors' topology with the similarity-based indexes, this paper designs the similarity-based future common neighbors model. The model accurately finds all the future common neighbors in complex networks and effectively measures their contributions. Taking the chaotic network in Fig. 2 as an example, for each node i (i = 1, 2, 3, …, |V|), let s_xi be a similarity score between x and i, calculated by any classical similarity-based algorithm, which we mark as C2. Thus s_xi also symbolizes the possibility of forming a link between x and i under the principle of C2 as the network evolves. We let r_yi indicate whether i and y are connected (r_yi = 1 if they are connected, otherwise r_yi = 0), which can be read directly from the observed network. Similarly, s_yi is the similarity score between i and y calculated by C2, and r_xi indicates whether x and i are connected.
Figure 2

The process of identifying the future common neighbors from the chaotic network. The yellow nodes are the future common neighbors between x and y.

The SFCN model identifies the above three types of future common neighbors in the chaotic network by their topological rules. (1) Node i is the first type of future common neighbor only when r_xi = 1, r_yi = 0 and s_yi > 0; note that we set s_xi = s_yi = 0 and r_xi = r_yi = 0 when i = x or i = y, to keep the network free of self-connections. (2) The remaining rules follow by analogy: i is the second type of future common neighbor if r_yi = 1, r_xi = 0 and s_xi > 0. (3) And i is the third type of future common neighbor if and only if r_xi = r_yi = 0, s_xi > 0 and s_yi > 0. To accumulate the contributions of the future common neighbors meeting the above rules, we construct four vectors for x and y in eqs 1-4:

Γ_x = (r_x1, r_x2, …, r_x|V|),    S_x = (s_x1, s_x2, …, s_x|V|),    (1, 2)
Γ_y = (r_y1, r_y2, …, r_y|V|)^T,    S_y = (s_y1, s_y2, …, s_y|V|)^T,    (3, 4)

where the superscript T denotes matrix transposition. Γ_x stores the connections of x to all nodes, and S_x stores the similarity scores of x to all nodes; similarly, Γ_y stores the connections of y to all nodes, and S_y stores the similarity scores between y and all other nodes. We then obtain the similarity-based future common neighbors model as eq. 5:

s_xy^SFCN = α·s_xy^C1 + β·(Γ_x S_y + S_x Γ_y + S_x S_y),    (5)

where s_xy^C1 is the similarity score between x and y, calculated by any similarity algorithm that we temporarily mark as C1. C1 and C2 are two similarity algorithms and can be the same or different. The two free parameters α and β adjust the contributions of the current common neighbors and the future common neighbors, respectively. When α ≠ 0 and β = 0, the model considers only the current common neighbors' contributions; when α = 0 and β ≠ 0, it considers only the future common neighbors' contributions. As a special case, when both C1 and C2 are the CN algorithm and only the first type of future common neighbor is considered, the model degenerates into the LP index.
In a word, the SFCN model, employed in evolving networks, takes into account the contributions of the future common neighbors besides the current common neighbors.
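One reading of eqs 1-5, with the score written as an α-weighted C1 similarity for (x, y) plus a β-weighted sum of the typed products r_xi·s_yi, s_xi·r_yi and s_xi·s_yi over all other nodes i, can be sketched in Python. The toy graph and the choice of CN for both C1 and C2 are illustrative assumptions:

```python
def sfcn_score(x, y, nbr, sim_c1, sim_c2, alpha, beta):
    """SFCN score of node pair (x, y): alpha weights the C1 similarity of
    (x, y) itself; beta weights the accumulated contributions of the three
    types of future common neighbors, built from the connection indicators
    r and the C2 similarity scores s (one reading of eqs 1-5)."""
    future = 0.0
    for i in nbr:
        if i in (x, y):
            continue  # keep the network free of self-connections
        r_xi = 1 if i in nbr[x] else 0
        r_yi = 1 if i in nbr[y] else 0
        if r_xi and r_yi:
            continue  # i is already a current common neighbor
        s_xi, s_yi = sim_c2(x, i), sim_c2(y, i)
        if r_xi:            # type 1: linked to x, may link to y
            future += s_yi
        elif r_yi:          # type 2: linked to y, may link to x
            future += s_xi
        else:               # type 3: may link to both x and y
            future += s_xi * s_yi
    return alpha * sim_c1(x, y) + beta * future

# CN as both C1 and C2 on an illustrative graph.
nbr = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1}, 4: {2}}
cn = lambda a, b: len(nbr[a] & nbr[b])
score = sfcn_score(3, 4, nbr, cn, cn, alpha=9, beta=1)
```

Setting beta=0 recovers the plain C1 score, and alpha=0 isolates the future-common-neighbor term, matching the two limiting cases discussed in the experiments.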

Example 1

This section gives an example of how to find the future common neighbors of a node pair and how to measure the contributions of the three types. Suppose C1 and C2 are the LHN and RA algorithms, respectively. Take the network in Fig. 2 as an example and treat nodes (1, 3) as the target node pair (x, y); then we can write down the four vectors of eqs 1-4. From the calculation process (eq. 10), it is easy to observe that only node 4 is a first-type future common neighbor, and its contribution to (1, 3) is given by eq. 10. From eq. 11 we can also observe that only node 5 is a second-type future common neighbor. Finally, eq. 12 shows that nodes 8 and 10 are third-type future common neighbors. The total contribution of the future common neighbors is therefore the sum of the three terms in eqs 10-12.

Evaluation Metrics

In the experiments, we introduce two standard metrics to quantify prediction accuracy: the AUC[26] (area under the receiver operating characteristic curve) and precision[27]. The AUC evaluates performance over the whole ranked list: it can be interpreted as the probability that a link randomly chosen from the probe set E^P has a higher score than a link randomly chosen from the nonexistent link set U − E. In n independent comparisons, we select one link from E^P and one from U − E and denote their similarity scores by S1 and S2. When S1 > S2 we set n′ = n′ + 1; when S1 = S2 we set n″ = n″ + 1 (n′ and n″ are initialized to 0). So the AUC can be defined as eq. 13:

AUC = (n′ + 0.5·n″) / n.    (13)

Different from the AUC, precision focuses on the links with the top ranks, i.e. the highest scores. It is the ratio of correct links recovered among the top L links in the candidate list generated by each link predictor. Assume L_r links are accurately predicted among the top-L links. Then precision can be defined as eq. 14:

precision = L_r / L.    (14)
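Both metrics are short to implement; a minimal sketch in Python, where the score lists passed in are assumed to come from a predictor like those above:

```python
import random

def auc_score(probe_scores, nonexistent_scores, n=10000, seed=7):
    """AUC via n independent comparisons (eq. 13): draw one probe-link score
    and one nonexistent-link score; n' counts wins, n'' counts ties."""
    rng = random.Random(seed)
    n1 = n2 = 0
    for _ in range(n):
        s1 = rng.choice(probe_scores)
        s2 = rng.choice(nonexistent_scores)
        if s1 > s2:
            n1 += 1
        elif s1 == s2:
            n2 += 1
    return (n1 + 0.5 * n2) / n

def precision_at(ranked, probe_set, L):
    """Precision (eq. 14): the fraction of the top-L ranked candidate
    links that actually belong to the probe set."""
    return sum(1 for e in ranked[:L] if e in probe_set) / L
```

An AUC of 0.5 corresponds to random guessing (all ties), and 1.0 to a predictor that always scores probe links above nonexistent ones.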

Datasets of real networks

To compare the prediction accuracy of the SFCN model with the eight mainstream indexes mentioned in this paper, we run MATLAB simulation experiments on five real networks: the network of scientific collaboration (NS)[28], the US political blogs network (PB)[29], the protein interaction network (Yeast)[30], the neural network of C. elegans (CE)[31], and the food web network of Florida Bay (FWFB)[32]. All datasets of the five networks can be found in the electronic supplementary material. The basic features of these networks are summarized in Table 1.
Table 1

Details of networks.

Networks | |V| | |E| | <d> | <k> | <H> | <C> | <r>
CE | 297 | 2148 | 2.46 | 14.4646 | 1.8008 | 0.3079 | −0.163
FWFB | 128 | 2075 | 1.78 | 32.4219 | 1.2370 | 0.3346 | −0.112
NS | 379 | 914 | 4.93 | 4.82 | 1.66 | 0.798 | −0.082
PB | 1222 | 16714 | 2.74 | 27.3552 | 2.9707 | 0.3600 | −0.221
Yeast | 2375 | 11693 | 5.09 | 9.8467 | 3.4756 | 0.3883 | 0.454

|V| and |E| are the numbers of nodes and links, respectively. <d> is the average shortest distance between node pairs. <k> is the average degree, and <H> denotes the degree heterogeneity. <C> represents the clustering coefficient, and <r> is the assortative coefficient.

The metrics that characterize the networks are defined in the caption of Table 1. We find that NS, PB and CE have similar characteristics, including a high clustering coefficient. For the FWFB network, in contrast, the predator-prey relations give the network a larger average degree and a shorter average distance between node pairs.
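Statistics like those in Table 1 can be reproduced for small graphs in pure Python; this sketch (connected graph assumed) computes <k>, <C> and <d>, while <H> and <r> follow similarly from the degree sequence:

```python
from collections import deque

def network_stats(nbr):
    """Average degree <k>, clustering coefficient <C>, and average
    shortest distance <d> (via BFS) for a graph given as neighbor sets."""
    V = list(nbr)
    k_avg = sum(len(nbr[v]) for v in V) / len(V)

    def local_clustering(v):
        # fraction of a node's neighbor pairs that are themselves linked
        ns = list(nbr[v])
        d = len(ns)
        if d < 2:
            return 0.0
        links = sum(1 for i in range(d) for j in range(i + 1, d)
                    if ns[j] in nbr[ns[i]])
        return 2.0 * links / (d * (d - 1))

    c_avg = sum(local_clustering(v) for v in V) / len(V)

    total = 0
    for s in V:  # BFS from every source node
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in nbr[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    d_avg = total / (len(V) * (len(V) - 1))
    return k_avg, c_avg, d_avg

# A triangle: every metric is known exactly (<k> = 2, <C> = 1, <d> = 1).
k, c, d = network_stats({0: {1, 2}, 1: {0, 2}, 2: {0, 1}})
```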

Existing similarity indexes based on topological structure

Here, we introduce the eight mainstream similarity indexes compared with the SFCN model. Let Γ(x) be the set of neighbors of x and k_x the degree of node x.

CN. The CN index proposes that a node pair (x, y) is more likely to connect if the two nodes have more common neighbors, namely s_xy = |Γ(x) ∩ Γ(y)|.

Salton[20]. It is defined as s_xy = |Γ(x) ∩ Γ(y)| / √(k_x·k_y).

RA[23]. The RA index assumes each transmitter has a unit of resource that is equally distributed to all its neighbors, concluded as s_xy = Σ_{z ∈ Γ(x) ∩ Γ(y)} 1/k_z.

HPI[22]. It is defined as s_xy = |Γ(x) ∩ Γ(y)| / min(k_x, k_y).

HDI[23]. It is defined as s_xy = |Γ(x) ∩ Γ(y)| / max(k_x, k_y).

Leicht-Holme-Newman index (LHN)[21]. The LHN index is defined as s_xy = |Γ(x) ∩ Γ(y)| / (k_x·k_y).

LNBRA[24]. The LNBRA index improves the RA index with the LNB model, defined as s_xy = Σ_{z ∈ Γ(x) ∩ Γ(y)} (1/k_z)(log η + log R_z), where η is a constant of the network and R_z = (N_Δz + 1)/(N_∇z + 1), with N_Δz and N_∇z respectively the numbers of connected and disconnected node pairs that have the common neighbor z.

Local Path (LP)[33]. This index considers paths of different orders, defined as S = A² + α·A³, where α is an adjustable parameter, A is the adjacency matrix of the network, and (A^i)_xy is the number of paths of length i between x and y.
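Six of these indexes depend only on the neighbor sets and degrees, so they can be written compactly; a sketch in Python (LNBRA and LP need extra network-wide quantities and are left out, and degrees are assumed nonzero):

```python
from math import sqrt

def make_indexes(nbr):
    """The purely neighborhood-based indexes above, as closures over the
    neighbor sets `nbr` (node -> set of neighbors)."""
    k = {v: len(nbr[v]) for v in nbr}  # degrees
    def cn(x, y):     return len(nbr[x] & nbr[y])
    def salton(x, y): return cn(x, y) / sqrt(k[x] * k[y])
    def ra(x, y):     return sum(1.0 / k[z] for z in nbr[x] & nbr[y])
    def hpi(x, y):    return cn(x, y) / min(k[x], k[y])
    def hdi(x, y):    return cn(x, y) / max(k[x], k[y])
    def lhn(x, y):    return cn(x, y) / (k[x] * k[y])
    return {"CN": cn, "Salton": salton, "RA": ra,
            "HPI": hpi, "HDI": hdi, "LHN": lhn}

# Illustrative 4-node graph: nodes 0 and 3 share neighbors 1 and 2.
idx = make_indexes({0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}})
```

Any of these closures can serve as C1 or C2 in the SFCN model, which is how the combined algorithms (e.g. SFCN-CN-RA) in Tables 2 and 3 are named.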

Experiments and performance analysis

In this section, we run three experiments, with a corresponding analysis, for three purposes. In the first and second experiments, E^T contains 90% of the links, while the remaining 10% constitute E^P. All the following results are averaged over 100 independent runs. In the first experiment, to verify whether the contribution of the future common neighbors is necessary, we conduct a priori experiment on α and β in the FWFB network. Step 1: since the training set E^T is known, we divide E^T into a sub-training set and a sub-probe set, from which the values of α and β are learned in step 2. Step 2: we apply the SFCN model to the sub-training set to obtain the similarity scores of the sub-probe set and record the AUC as it varies with α and β. In this way, it is easy to select values of α and β with high AUC for the SFCN model. The experimental results are shown in Fig. 3. Two limiting cases must be considered first. When α = 0, the current common neighbors in the SFCN model make no contribution and only the future common neighbors contribute; when β = 0, only the current common neighbors contribute and the future common neighbors make no contribution to link prediction. Two results follow from Fig. 3. First, the AUC at β = 0 is much lower than at β ≠ 0. Second, the SFCN model obtains its highest AUC when α and β are adjusted to suitable values; for example, for the SFCN-CN-RA, SFCN-Salton-HDI, and SFCN-LNBRA-LHN algorithms, we should set α smaller and β larger to get higher prediction accuracy in the FWFB network. These two results illustrate the important contribution of the future common neighbors.
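The two-step priori experiment amounts to a grid search over (α, β) on a sub-split of the training set. A sketch in Python, where `split_fn`, `auc_fn` and `score_fn` are hypothetical helpers standing in for the sub-split, the AUC evaluation and the SFCN scorer (not the paper's exact interfaces):

```python
def tune_alpha_beta(E_train, split_fn, auc_fn, score_fn, alphas, betas):
    """Step 1: split the known training set into a sub-training set and a
    sub-probe set. Step 2: evaluate the AUC of the SFCN scorer for each
    (alpha, beta) on that split and keep the best pair."""
    E_sub_train, E_sub_probe = split_fn(E_train)
    best_auc, best_pair = float("-inf"), None
    for a in alphas:
        for b in betas:
            auc = auc_fn(E_sub_train, E_sub_probe,
                         lambda x, y: score_fn(x, y, a, b))
            if auc > best_auc:
                best_auc, best_pair = auc, (a, b)
    return best_pair

# Grids matching Fig. 3: alpha from 0 to 15 step 3, beta from 0 to 2 step 0.4.
alphas = range(0, 16, 3)
betas = [i * 0.4 for i in range(6)]
```

Because only E^T is split, the probe set E^P is never touched while learning α and β, so the final evaluation remains fair.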
Figure 3

AUC sensitivity analysis of the SFCN model in FWFB network. X-axis is the α value that is taken from 0 to 15 at intervals of 3. Y-axis is the β value that is taken from 0 to 2 at intervals of 0.4.

Therefore, in the second and third experiments below, we set α = 9 and β = 1, which meet the above condition. The second experiment compares the SFCN model with the eight other similarity-based indexes: CN, HDI, HPI, LP, RA, Salton, LNBRA and LHN. The AUC and precision results are listed in Tables 2 and 3, respectively. Most comparisons in Table 2 clearly demonstrate that the SFCN model has the best, or close to the best, AUC, especially in the FWFB and Yeast networks. Taking the FWFB network as an example, Table 1 shows that it has 2075 links but only 128 nodes, an average degree as high as 32.422, and a clustering coefficient as low as 0.3346, which indicates many random connections and obscure similarity between clusters. These are the reasons why all nodes in the FWFB network tend to gather and form unknown clusters as the network evolves. The SFCN model accounts for this evolution tendency through the principle of the similarity index: by regarding the future common neighbors as the evolution direction, it greatly improves the AUC in the FWFB network. Moreover, Table 3 shows that 90% of the precision results based on the SFCN model are equal to or higher than those of the corresponding original algorithms. For example, the precision of the SFCN-HDI-RA algorithm is much higher than that of the original HDI algorithm in most networks, because the contributions of the future common neighbors are taken into account.
Table 2

Prediction accuracy, measured by AUC, of classic indexes and the corresponding SFCN-based algorithms in five real networks.

AUC | CE | FWFB | NS | PB | Yeast
CN | 0.846 | 0.616 | 0.989 | 0.925 | 0.917
SFCN-CN-RA | 0.876 | 0.659 | 0.989 | 0.939 | 0.971
Salton | 0.802 | 0.532 | 0.984 | 0.880 | 0.914
SFCN-Salton-HDI | 0.851 | 0.793 | 0.984 | 0.937 | 0.974
RA | 0.871 | 0.598 | 0.977 | 0.927 | 0.924
SFCN-RA-HDI | 0.872 | 0.794 | 0.991 | 0.938 | 0.975
LP | 0.861 | 0.633 | 0.980 | 0.938 | 0.971
SFCN-LP-RA | 0.872 | 0.666 | 0.990 | 0.942 | 0.978
HPI | 0.804 | 0.528 | 0.979 | 0.855 | 0.912
SFCN-HPI-HPI | 0.811 | 0.762 | 0.985 | 0.901 | 0.972
SFCN-HPI-RA | 0.865 | 0.790 | 0.989 | 0.945 | 0.974
HDI | 0.775 | 0.527 | 0.980 | 0.873 | 0.914
SFCN-HDI-HDI | 0.849 | 0.782 | 0.985 | 0.935 | 0.976
SFCN-HDI-RA | 0.889 | 0.795 | 0.990 | 0.947 | 0.977
LNBRA | 0.863 | 0.659 | 0.980 | 0.928 | 0.920
SFCN-LNBRA-RA | 0.883 | 0.796 | 0.993 | 0.949 | 0.977
SFCN-LNBRA-HDI | 0.881 | 0.809 | 0.992 | 0.942 | 0.975
SFCN-LNBRA-LHN | 0.878 | 0.843 | 0.989 | 0.941 | 0.975
LHN | 0.725 | 0.390 | 0.974 | 0.766 | 0.906
SFCN-LHN-LHN | 0.810 | 0.891 | 0.983 | 0.891 | 0.974
SFCN-LHN-RA | 0.876 | 0.797 | 0.992 | 0.947 | 0.977
SFCN-LHN-LP | 0.806 | 0.704 | 0.974 | 0.928 | 0.961
SFCN-LHN-HDI | 0.839 | 0.798 | 0.984 | 0.936 | 0.976
Table 3

Prediction accuracy, measured by precision (top-100), of classic indexes and the corresponding SFCN-based algorithms in five real networks.

Precision | CE | FWFB | NS | PB | Yeast
CN | 0.198 | 0.094 | 0.396 | 0.460 | 0.678
SFCN-CN-RA | 0.202 | 0.108 | 0.404 | 0.464 | 0.766
Salton | 0.012 | 0.008 | 0.290 | 0.000 | 0.024
SFCN-Salton-HDI | 0.012 | 0.010 | 0.260 | 0.000 | 0.032
RA | 0.124 | 0.094 | 0.564 | 0.256 | 0.520
SFCN-RA-HDI | 0.130 | 0.098 | 0.566 | 0.278 | 0.440
LP | 0.124 | 0.112 | 0.312 | 0.400 | 0.654
SFCN-LP-RA | 0.140 | 0.112 | 0.324 | 0.410 | 0.696
HPI | 0.026 | 0.068 | 0.556 | 0.224 | 0.860
SFCN-HPI-HPI | 0.036 | 0.388 | 0.192 | 0.224 | 0.868
SFCN-HPI-RA | 0.116 | 0.382 | 0.162 | 0.566 | 0.896
HDI | 0.032 | 0.008 | 0.310 | 0.002 | 0.030
SFCN-HDI-HDI | 0.086 | 0.360 | 0.320 | 0.516 | 0.900
SFCN-HDI-RA | 0.106 | 0.360 | 0.294 | 0.590 | 0.882
LNBRA | 0.131 | 0.162 | 0.544 | 0.252 | 0.586
SFCN-LNBRA-RA | 0.136 | 0.166 | 0.554 | 0.250 | 0.580
SFCN-LNBRA-HDI | 0.130 | 0.164 | 0.564 | 0.278 | 0.602
SFCN-LNBRA-LHN | 0.132 | 0.154 | 0.580 | 0.260 | 0.586
LHN | 0.000 | 0.014 | 0.138 | 0.000 | 0.010
SFCN-LHN-LHN | 0.000 | 0.026 | 0.138 | 0.000 | 0.014
SFCN-LHN-RA | 0.000 | 0.014 | 0.138 | 0.000 | 0.012
SFCN-LHN-LP | 0.000 | 0.020 | 0.138 | 0.000 | 0.012
SFCN-LHN-HDI | 0.000 | 0.018 | 0.140 | 0.000 | 0.010
Finally, to explore robustness, we change the ratio of the training set to the probe set in the third experiment. The lower the ratio, the more link information must be predicted[34]; that is, fewer links are known and more are unknown when the ratio is small. Two results are evident from Fig. 4. On the one hand, at the same ratio, the algorithms based on the SFCN model achieve higher AUC than their corresponding original algorithms; for instance, the SFCN-LHN-RA, SFCN-LHN-LP and SFCN-LHN-HDI algorithms all outperform the original LHN algorithm at any given ratio. On the other hand, even when the ratio is low, the SFCN-based algorithms still obtain high AUC, which indicates that the SFCN model is more stable. Therefore, the SFCN model performs better in both prediction accuracy and stability even when little link information is available.
Figure 4

The AUC of different algorithms with different ratios of training set to probe set in real networks. X-axis is the ratio, and Y-axis is the AUC of each algorithm.


Discussion

Exploring which factors positively affect link prediction is an important and challenging problem. In this paper, we first identify the existence of the future common neighbors, which are classified into three types according to their topological structure with other nodes. Then, to investigate whether the future common neighbors make a positive contribution to current link prediction, we propose the similarity-based future common neighbors (SFCN) model, which accurately locates all the future common neighbors, besides the current common neighbors, and effectively measures their contributions in complex networks. We design three MATLAB simulation experiments for three different purposes. First, we conduct a priori experiment on α and β in the FWFB network; the results provide strong evidence that the future common neighbors contribute more than the current common neighbors in complex networks. In the second experiment, we compare the SFCN model with eight algorithms in five networks, finding that the SFCN model has higher prediction accuracy, especially the AUC in the FWFB and Yeast networks. Third, to verify whether the SFCN model maintains high accuracy when little link information is known, we change the ratio of the training set to the probe set in the five networks; the results show that the SFCN model has better performance robustness, even when the ratio is as low as 0.45, compared with the eight similarity-based algorithms. Therefore, the proposed SFCN model has higher accuracy and performance robustness than popular similarity-based algorithms, and the future common neighbors make a more positive contribution than the current common neighbors widely used today. Some extensions of this work deserve further exploration. One is that we are limited to the current common neighbors and the future common neighbors in evolving networks.
It would also be meaningful to study the contributions of future nodes and future links. For example, current path-based algorithms only consider the contribution of the currently existing paths, so it is worthwhile to explore whether, and how much, future paths, which do not exist currently but will exist after one round of prediction, can positively affect current link prediction.

Methods

Algorithm of the SFCN model for link prediction

The adjacency matrix E of the complex network is stored as a sparse matrix. The pseudocode of the SFCN model is presented in Algorithm 1, the proposed SFCN framework.

Complexity analysis

This part gives a simple complexity analysis of the proposed SFCN model. The most time-consuming step is computing the contribution of the future common neighbors. For a single node pair, each of the three products (Γ_x S_y, S_x Γ_y and S_x S_y) costs O(|V|); over all O(|V|²) node pairs, each term therefore costs O(|V|³), and the total time cost of the future common neighbors is 3·O(|V|³). Since a complex network can be represented as a sparse matrix, the final computational complexity is much less than 3·O(|V|³).