Literature DB >> 34248309

Scalable Estimation of Epidemic Thresholds via Node Sampling.

Anirban Dasgupta1, Srijan Sengupta2.   

Abstract

Infectious or contagious diseases can be transmitted from one person to another through social contact networks. In today's interconnected global society, such contagion processes can cause global public health hazards, as exemplified by the ongoing Covid-19 pandemic. It is therefore of great practical relevance to investigate the network transmission of contagious diseases from the perspective of statistical inference. An important and widely studied boundary condition for contagion processes over networks is the so-called epidemic threshold. The epidemic threshold plays a key role in determining whether a pathogen introduced into a social contact network will cause an epidemic or die out. In this paper, we investigate epidemic thresholds from the perspective of statistical network inference. We identify two major challenges that are caused by high computational and sampling complexity of the epidemic threshold. We develop two statistically accurate and computationally efficient approximation techniques to address these issues under the Chung-Lu modeling framework. The second approximation, which is based on random walk sampling, further enjoys the advantage of requiring data on a vanishingly small fraction of nodes. We establish theoretical guarantees for both methods and demonstrate their empirical superiority. © Indian Statistical Institute 2021.


Keywords:  Configuration model; Epidemic threshold; Epidemiology; Networks; Random walk; Sampling

Year:  2021        PMID: 34248309      PMCID: PMC8260572          DOI: 10.1007/s13171-021-00249-0

Source DB:  PubMed          Journal:  Sankhya Ser A        ISSN: 0976-836X


Introduction

Infectious diseases are caused by pathogens, such as bacteria, viruses, fungi, and parasites. Many infectious diseases are also contagious, which means the infection can be transmitted from one person to another when there is some interaction (e.g., physical proximity) between them. Today, we live in an interconnected world where such contagious diseases could spread through social contact networks to become global public health hazards. A recent example of this phenomenon is the Covid-19 outbreak caused by the so-called novel coronavirus (SARS-CoV-2) that has spread to many countries (Huang et al. 2020; Zhu et al. 2020; Wang et al. 2020; Sun et al. 2020). This recent global outbreak has caused serious social and economic repercussions, such as massive restrictions on movement and share market decline (Chinazzi et al. 2020). It is therefore of great practical relevance to investigate the transmission of contagious diseases through social contact networks from the perspective of statistical inference. Consider an infection being transmitted through a population of n individuals. According to the susceptible-infected-recovered (SIR) model of disease spread, the pathogen can be transmitted from an infected person (I) to a susceptible person (S) with an infection rate given by β, and an infected individual becomes recovered (R) with a recovery rate given by μ. This can be modeled as a Markov chain whose state at time t is given by a vector X(t) = (X_1(t), …, X_n(t)), where X_i(t) denotes the state of the i-th individual at time t, i.e., X_i(t) ∈ {S, I, R}. For the population of n individuals, the state space of this Markov chain becomes extremely large, with 3^n possible configurations, which makes it impractical to study the exact system. This problem was addressed in a series of three seminal papers by Kermack and McKendrick (1927, 1932, 1933).
Instead of modeling the disease state of each individual at a given point of time, they proposed compartmental models, where the goal is to model the number of individuals in a particular disease state (e.g., susceptible, infected, recovered) at a given point of time. Since their classical papers, there has been a tremendous amount of work on compartmental modeling of contagious diseases over the last ninety years (Hethcote, 2000; Van den Driessche and Watmough, 2002; Brauer and Castillo-Chavez, 2012). Compartmental models make the unrealistic assumption of homogeneity, i.e., each individual is assumed to have the same probability of interacting with any other individual. In reality, individuals interact with each other in a highly heterogeneous manner, depending upon various factors such as age, cultural norms, lifestyle, weather, etc. The contagion process can be significantly impacted by heterogeneity of interactions (Meyers et al. 2005; Rocha et al. 2011; Galvani and May, 2005; Woolhouse et al. 1997), and therefore compartmental modeling of contagious diseases can lead to substantial errors. In recent years, contact networks have emerged as a preferred alternative to compartmental models (Keeling, 2005). Here, a node represents an individual, and an edge between two nodes represents social contact between them. An edge connecting an infected node and a susceptible node represents a potential path for pathogen transmission. This framework can realistically represent the heterogeneous nature of social contacts, and therefore provide much more accurate modeling of the contagion process than compartmental models. Notable examples where the use of contact networks has led to improvements in prediction or understanding of infectious diseases include Bengtsson et al. (2015) and Kramer et al. (2016). Consider the scenario where a pathogen is introduced into a social contact network and it spreads according to an SIR model.
It is of particular interest to know whether the pathogen will die out or lead to an epidemic. This is dictated by a set of boundary conditions known as the epidemic threshold, which depends on the SIR parameters β and μ as well as the network structure itself. Above the epidemic threshold, the pathogen invades and infects a finite fraction of the population. Below the epidemic threshold, the prevalence (total number of infected individuals) remains infinitesimally small in the limit of large networks (Pastor-Satorras et al. 2015). There is growing evidence that such thresholds exist in real-world host-pathogen systems, and intervention strategies are formulated and executed based on estimates of the epidemic threshold (Dallas et al. 2018; Shulgin et al. 1998; Wallinga et al. 2005; Pourbohloul et al. 2005; Meyers et al. 2005). Fittingly, the last two decades have seen a significant emphasis on studying epidemic thresholds of contact networks in several disciplines, such as computer science, physics, and epidemiology (Newman 2002; Wang et al. 2003; Colizza and Vespignani 2007; Chakrabarti et al. 2008; Gómez et al. 2010; Wang et al. 2016, 2017). See Leitch et al. (2019) for a comprehensive survey on the topic of epidemic thresholds. Concurrently but separately, network data has rapidly emerged as a significant area in statistics. Over the last two decades, a substantial amount of methodological advancement has been accomplished in several topics in this area, such as community detection (Bickel and Chen, 2009; Zhao et al. 2012; Rohe et al. 2011; Sengupta and Chen, 2015), model fitting and model selection (Hoff et al. 2002; Handcock et al. 2007; Krivitsky et al. 2009; Wang and Bickel, 2017; Yan et al. 2014; Bickel and Sarkar, 2016; Sengupta and Chen, 2018), hypothesis testing (Ghoshdastidar and von Luxburg 2018; Tang et al. 2017a, 2017b; Bhadra et al. 2019), and anomaly detection (Zhao et al. 2018; Sengupta, 2018; Komolafe et al. 2019), to name a few.
The state-of-the-art toolbox of statistical network inference includes a range of random graph models and a suite of estimation and inference techniques. However, there has not been any work at the intersection of these two areas, in the sense that the problem of estimating epidemic thresholds has not been investigated from the perspective of statistical network inference. Furthermore, the task of computing the epidemic threshold based on existing results can be computationally infeasible for massive networks. In this paper, we address these gaps by developing a novel sampling-based method to estimate the epidemic threshold under the widely used Chung-Lu model (Aiello et al. 2000), also known as the configuration model. We prove that our proposed method has theoretical guarantees for both statistical accuracy and computational efficiency. We also provide empirical results demonstrating our method on both synthetic and real-world networks. The rest of the paper is organized as follows. In Section 2, we formally set up the problem statement and formulate our proposed methods for approximating the epidemic threshold. In Section 3, we describe the theoretical properties of our estimators. In Section 4, we report numerical results from synthetic as well as real-world networks. We conclude the paper with discussion and next steps in Section 5.
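The SIR dynamics sketched above can be made concrete with a short simulation. The code below is illustrative only: the discrete-time update scheme and all function names are our own, not taken from the paper.

```python
import numpy as np

def simulate_sir(A, beta, mu, seed_node=0, steps=50, rng=None):
    """Discrete-time SIR process on an adjacency matrix A.

    States: 0 = susceptible, 1 = infected, 2 = recovered.
    Each step, every infected node transmits independently to each
    susceptible neighbour with probability beta, then recovers with
    probability mu.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = A.shape[0]
    state = np.zeros(n, dtype=int)
    state[seed_node] = 1
    for _ in range(steps):
        infected = np.flatnonzero(state == 1)
        if infected.size == 0:
            break                                  # pathogen has died out
        # pressure[i] = number of infected neighbours of node i
        pressure = A[infected].sum(axis=0)
        susceptible = state == 0
        p_inf = 1.0 - (1.0 - beta) ** pressure     # prob. of >=1 transmission
        new_inf = susceptible & (rng.random(n) < p_inf)
        recovers = (state == 1) & (rng.random(n) < mu)
        state[new_inf] = 1
        state[recovers] = 2
    return state
```

For example, on a complete graph with beta = 1 and mu = 1, the seed infects all its neighbours in one step and everyone subsequently recovers, while beta = 0 leaves everyone except the seed susceptible.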

Epidemic Thresholds

Table 1 lists the common symbols used throughout the paper. Consider a set of n individuals labelled 1,…, n, and an undirected network (with no self-loops) representing interactions between them. This can be represented by an n-by-n symmetric adjacency matrix A, where A(i, j) = 1 if individuals i and j interact and A(i, j) = 0 otherwise. Consider a pathogen spreading through this contact network according to an SIR model. From existing work (Chakrabarti et al. 2008; Gómez et al. 2010; Prakash et al. 2010; Wang et al. 2016, 2017), we know that the boundary condition for the pathogen to become an epidemic is given by

β/μ > 1/λ(A),    (1)

where λ(A) is the spectral radius of the adjacency matrix A.
Table 1

Common symbols

Symbol            Definition and description
λ(A)              Spectral radius of the matrix A
d_i               Degree of node i of the network
δ_i               Expected degree of node i of the network
S(t), I(t), R(t)  Number of susceptible (S), infected (I), and recovered/removed (R) individuals in the population at time t
β                 Infection rate: probability of transmission of a pathogen from an infected individual to a susceptible individual per effective contact (e.g. contact per unit time in continuous-time models, or per time step in discrete-time models)
μ                 Recovery rate: probability that an infected individual will recover per unit time (in continuous-time models) or per time step (in discrete-time models)
The left hand side of Eq. 1 is the ratio of the infection rate to the recovery rate, which is purely a function of the pathogen and independent of the network. As this ratio grows larger, an epidemic becomes more likely, as new infections outpace recoveries. The right hand side of Eq. 1 is the reciprocal of the spectral radius of the adjacency matrix, which is purely a function of the network and independent of the pathogen. The larger the spectral radius, the more connected the network, and therefore the more likely an epidemic becomes. Thus, the boundary condition in Eq. 1 connects the two aspects of the contagion process: the pathogen transmissibility, quantified by β/μ, and the social contact network, quantified by the spectral radius. If β/μ < 1/λ(A), the pathogen dies out, and if β/μ > 1/λ(A), the pathogen becomes an epidemic. Given a social contact network, the inverse of the spectral radius of its adjacency matrix represents the epidemic threshold for the network. Any pathogen whose transmissibility ratio is greater than this threshold will cause an epidemic, whereas any pathogen whose transmissibility ratio is less than this threshold will die out. Therefore, a key problem in network epidemiology is to compute the spectral radius of the social contact network.
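The threshold condition can be checked directly once the adjacency matrix is available. A minimal sketch (the function names are ours, not the paper's):

```python
import numpy as np

def epidemic_threshold(A):
    """Epidemic threshold = 1 / spectral radius of the adjacency matrix."""
    lam = np.max(np.abs(np.linalg.eigvalsh(A)))   # A is symmetric
    return 1.0 / lam

def will_epidemic(beta, mu, A):
    """True if the transmissibility ratio beta/mu exceeds the threshold."""
    return beta / mu > epidemic_threshold(A)

# Example: the complete graph on 4 nodes has spectral radius 3,
# so its epidemic threshold is 1/3.
A = np.ones((4, 4)) - np.eye(4)
```

A pathogen with β/μ = 0.5 exceeds the 1/3 threshold of this network and would cause an epidemic; one with β/μ = 0.2 would die out.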

Problem Statement and Heuristics

Realistic urban social networks that are used in modeling contagion processes have millions of nodes (Eubank et al. 2004; Barrett et al. 2008). To compute the epidemic threshold of such networks, we need to find the largest (in absolute value) eigenvalue of the adjacency matrix A. This is challenging for two reasons. First, from a computational perspective, eigenvalue algorithms have computational complexity of Ω(n²) or higher. For massive social contact networks with millions of nodes, this can become too burdensome. Second, from a statistical perspective, eigenvalue algorithms require the entire adjacency matrix for the full network of n individuals. It can be challenging or expensive to collect interaction data on all n individuals of a massive population (e.g., an urban metropolis). Furthermore, eigenvalue algorithms typically require the full matrix to be stored in the random-access memory of the computer, which can be infeasible for massive social contact networks that are too large to store. The first issue could be resolved if we could compute the epidemic threshold in a computationally efficient manner; the second, if we could compute the epidemic threshold using data on only a small subset of the population. In this paper, we aim to resolve both issues by developing two approximation methods for computing the spectral radius. To motivate them, let us look at the spectral radius, λ(A), from the perspective of random graph models. The statistical model is given by A ~ P, which is short-hand for A(i, j) ~ Bernoulli(P(i, j)) independently for 1 ≤ i < j ≤ n. Then λ(A) converges to λ(P) in probability under some mild conditions (Chung and Radcliffe, 2011; Benaych-Georges et al. 2019; Bordenave et al. 2020). To make a formal statement regarding this convergence, we reproduce below a slightly paraphrased version (for notational consistency) of an existing result in this context.

Lemma 1 (Theorem 1 of Chung and Radcliffe (2011)).

Let Δ = max_i δ_i be the maximum expected degree, and suppose that for some 𝜖 > 0, Δ > (4/9) ln(2n/𝜖) for sufficiently large n. Then with probability at least 1 − 𝜖, for sufficiently large n,

|λ(A) − λ(P)| ≤ 2 √(Δ ln(2n/𝜖)).

To make note of a somewhat subtle point: from an inferential perspective it is tempting to view the above result as a consistency result, where λ(P) is the population quantity or parameter of interest and λ(A) is its estimator. However, in the context of epidemic thresholds, we are interested in the random variable λ(A) itself, as we want to study the contagion spread conditional on a given social contact network. Therefore, in the present context, the above result should not be interpreted as a consistency result. Rather, we can use the convergence result in a different way. For massive networks, the random variable λ(A), which we wish to compute but find it infeasible to do so, is close to the parameter λ(P). Suppose we can find a random variable T(A) which also converges in probability to λ(P), and is computationally efficient. Since T(A) and λ(A) both converge in probability to λ(P), we can use T(A) as an accurate proxy for λ(A). This would address the first of the two issues described at the beginning of this subsection. Furthermore, if T(A) can be computed from a small subset of the data, that would also solve the second issue. This is our central heuristic, which we are going to formalize next.

The Chung-Lu Model

So far, we have not made any structural assumptions on P; we have simply considered the generic inhomogeneous random graph model. Under such a general model, it is very difficult to formulate a statistic T(A) which is cheap to compute and converges to λ(P). Therefore, we now introduce a structural assumption on P, in the form of the well-known Chung-Lu model that was introduced by Aiello et al. (2000) and subsequently studied in many papers (Chung and Lu, 2002; Chung et al. 2003; Decreusefond et al. 2012; Pinar et al. 2012; Zhang et al. 2017). For a network with n nodes, let δ = (δ_1,…, δ_n)′ be the vector of expected degrees. Then under the Chung-Lu model,

P(i, j) = δ_i δ_j / Σ_k δ_k.    (2)

This formulation preserves E[d_i] = δ_i, where d_i is the degree of the i-th node, and is very flexible with respect to degree heterogeneity. Under model Eq. 2, note that rank(P) = 1, and we have

λ(P) = Σ_i δ_i² / Σ_i δ_i.    (3)

Recall that we are looking for some computationally efficient T(A) which converges in probability to λ(P). We now know that under the Chung-Lu model, λ(P) is equal to the ratio of the second moment to the first moment of the degree distribution. Therefore, a simple estimator of λ(P) is given by the sample analogue of this ratio, i.e.,

T_1(A) = Σ_i d_i² / Σ_i d_i.

We now want to demonstrate that approximating λ(A) by T_1(A) provides very substantial computational savings with little loss of accuracy. The approximation error can be quantified as

e_1(A) = |λ(A) − T_1(A)| / λ(A),    (4)

and our goal is to show that e_1(A) → 0 in probability, while the computational cost of T_1(A) is much smaller than that of λ(A). We will show this both from a theoretical perspective and an empirical perspective. We next describe the empirical results from a simulation study, postponing the theoretical discussion to Section 3 for organizational clarity. We used n = 5000, and constructed a Chung-Lu random graph model where P(i, j) = 𝜃_i 𝜃_j. The model parameters 𝜃_1,…, 𝜃_n determine the expected degrees. We used two models for generating 𝜃. In the Uniform model, the 𝜃_i were sampled uniformly from (0, 0.25).
In the PowerLaw model, the 𝜃_i were sampled from a power-law distribution with parameters x = 0.01, β = 3. Note that the second model leads to a heavy-tailed distribution. Then, we randomly generated 20 networks from the model, and computed λ(A) and T_1(A). The results are reported in Table 2. We observe that the runtimes for T_1(A) are orders of magnitude faster than computing the eigenvalue. The average error for T_1(A) is small, and so is the standard deviation (SD) of the errors. Thus, even for moderately sized networks, using T_1(A) as a proxy for λ(A) can reduce the computational cost to a great extent, without much loss of accuracy. For massive networks where n is in the millions, this advantage of T_1(A) over λ(A) is even greater; however, the computational burden for λ(A) becomes so large that this case is difficult to illustrate using standard computing equipment.
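A scaled-down version of this simulation can be sketched in a few lines. The code below is our own (using a smaller n = 2000 for speed, and the same simplified P(i, j) = θ_i θ_j construction as in the Uniform setting):

```python
import numpy as np

def chung_lu(theta, rng):
    """Sample a Chung-Lu-style graph with P(i, j) = theta_i * theta_j."""
    n = len(theta)
    P = np.outer(theta, theta)
    U = rng.random((n, n))
    A = np.triu(U < P, k=1).astype(float)   # upper triangle, no self-loops
    return A + A.T                          # symmetrize

def t1(A):
    """Degree-moment estimator T1(A) = sum(d_i^2) / sum(d_i)."""
    d = A.sum(axis=1)
    return (d ** 2).sum() / d.sum()

rng = np.random.default_rng(42)
theta = rng.uniform(0, 0.25, size=2000)     # the "Uniform" setting
A = chung_lu(theta, rng)
lam = np.max(np.abs(np.linalg.eigvalsh(A))) # exact spectral radius
rel_err = abs(lam - t1(A)) / lam            # relative approximation error
```

Even at this reduced size, the relative error of T1(A) is small, while computing it takes a negligible fraction of the eigenvalue computation's time.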
Table 2

Computational efficiency and statistical accuracy of T1(A)

Model      Mean time λ(A)   Mean time T1(A)   Mean error   SD error
Uniform    35.62 s          0.04 s            0.11%        0.03%
PowerLaw   33.45 s          0.04 s            3.66%        3.91%
Thus, T_1(A) provides us with a computationally efficient and statistically accurate method for finding the epidemic threshold. Comparing the results from Uniform and PowerLaw, we observe that errors are higher for the PowerLaw model. A likely explanation is that since the distribution is heavy-tailed, the moment-based estimator is less accurate. This is particularly true for larger n, since the impact of extreme values can shift the estimator heavily.

Sampling Based Approximation

The first approximation, T_1(A), provides us with a computationally efficient method for finding the epidemic threshold. This addresses the first issue pointed out at the beginning of Section 2.1. However, computing T_1(A) requires data on the degrees of all n nodes of the network. Therefore, this does not solve the second issue pointed out at the beginning of Section 2.1. We now propose a second alternative, T_2, to address the second issue. The idea behind this approximation is based on the same heuristic that was laid out in Section 2.2. Since λ(P) is a function of degree moments, we can estimate these moments using observed node degrees. In defining T_1(A), we used the observed degrees of all n nodes in the network. However, we can also estimate the degree moments by considering a small sample of nodes, based on random walk sampling. The algorithm for computing T_2 is given in Algorithm 1. Note that we only use (t∗ + r) randomly sampled nodes for computing T_2, which implies that we do not need to collect or store data on all n individuals. Therefore this method overcomes the second issue pointed out at the beginning of Section 2.1. The approximation error arising from this method can be defined as

e_2(A) = |λ(A) − T_2(A)| / λ(A),    (5)

and we want to show that e_2(A) → 0 in probability, while the data-collection cost of T_2(A) is much less than that of T_1(A). In the next section, we are going to formalize this.
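Algorithm 1 itself is not reproduced in this record, but a random-walk degree-moment estimator of this type can be sketched as follows (function and parameter names are ours). It exploits the fact that under the walk's stationary distribution π(v) = d_v/2m, the expected degree of a sampled node is Σ_v d_v²/2m = Σ_v d_v²/Σ_v d_v = T_1(A):

```python
import numpy as np

def t2_random_walk(adj_list, start, burn_in, num_samples, thin=1, rng=None):
    """Random-walk estimate of T1 = sum(d^2) / sum(d).

    The walk is degree-biased in stationarity, so the plain average of
    the degrees of sampled nodes estimates T1 directly.  burn_in steps
    let the walk mix; thin steps are taken between successive samples.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    v = start
    for _ in range(burn_in):                  # mixing / burn-in phase
        v = rng.choice(adj_list[v])
    degrees = []
    for _ in range(num_samples):
        for _ in range(thin):                 # thinning between samples
            v = rng.choice(adj_list[v])
        degrees.append(len(adj_list[v]))
    return float(np.mean(degrees))
```

On a regular graph the estimate is exact: every sampled degree equals the common degree, which is also T_1(A).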

Theoretical Results on Approximation Errors

In this section, we are going to establish that the approximation errors e_1(A) and e_2(A), defined in Eqs. 4 and 5, converge to zero in probability. From Theorem 2.1 of Chung et al. (2003), we know that when

Σ_i δ_i² / Σ_i δ_i ≫ √(max_i δ_i) · ln n    (6)

holds, then for any 𝜖 > 0, P(|λ(A) − λ(P)| > 𝜖 λ(P)) → 0. Therefore, under Eq. 6, it suffices to show that, for any 𝜖 > 0, P(|T(A) − λ(P)| > 𝜖 λ(P)) → 0. To interpret the condition given in Eq. 6, suppose that the expected degrees are all of the same order, i.e., δ_i = O(n^α) for some α ∈ (0,1). Then the left hand side of Eq. 6 is O(n^α), and the right hand side is O(n^{α/2} ln n), which means the condition is satisfied for any α > 0.

Convergence of T1(A)

First, consider e_1(A) = |λ(A) − T_1(A)| / λ(A), and recall that λ(P) = Σ_i δ_i² / Σ_i δ_i. For notational convenience, define δ̄ = (1/n) Σ_i δ_i. We would like to show that, under reasonable conditions, for any 𝜖 > 0, P(|T_1(A) − λ(P)| > 𝜖 λ(P)) → 0. Next, we state the theorem which establishes a sufficient condition for this to hold. Please see the Appendix for a proof of the theorem.

Theorem 2.

If the average of the expected degrees goes to infinity, i.e., δ̄ → ∞, and the spectral radius dominates the maximum-degree fluctuation term, i.e., λ(P)/√(Δ ln n) → ∞, then for any 𝜖 > 0, P(|T_1(A) − λ(P)| > 𝜖 λ(P)) → 0. Thus, we have established that the approximation error for T_1(A) goes to zero in probability. We have already observed in Section 2.2 that the runtime for T_1(A) is orders of magnitude faster than the runtime for λ(A). Therefore, T_1(A) is both computationally efficient and statistically accurate as an approximation of the epidemic threshold.

Convergence of T2(A)

Next, consider Algorithm 1. Let π denote the stationary distribution of the simple random walk on the given graph, and suppose the number of edges in the given graph is m. Recall that π is given by π(v) = d_v / 2m for all v. For brevity, we define the mixing time of the graph A, denoted tmix(A), to mean the number of steps required by the simple random walk to reach a distribution within a prescribed distance of π. Let T_2(A) be the estimate returned by Algorithm 1. We first show an easy lemma that characterizes the bias of the estimator T_2(A). Please see the Appendix for a proof.
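The stationarity claim π(v) = d_v/2m is easy to verify numerically. A minimal check on a small graph of our own choosing (a path on 4 nodes, where degrees are unequal):

```python
import numpy as np

# Simple random walk on an undirected graph: pi(v) = d_v / (2m).
# Numerical check on a path graph 0 - 1 - 2 - 3 with degrees (1, 2, 2, 1).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
Q = A / d[:, None]            # row-stochastic transition matrix D^{-1} A
pi = d / d.sum()              # claimed stationary distribution d_v / 2m
assert np.allclose(pi @ Q, pi)  # pi is invariant under one step of the walk
```

The identity holds for any undirected graph: (πQ)_j = Σ_i (d_i/2m)(A(i,j)/d_i) = d_j/2m = π_j.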

Lemma 3.

If x is a node that is randomly sampled from π, and d_x is its degree, then E[d_x] = Σ_v d_v² / 2m = T_1(A). Consequently, if q is a distribution sufficiently close to π and x is sampled from q, then E[d_x] is correspondingly close to T_1(A). Next, we show that the estimator v_RW is in fact concentrated around its expectation.

Theorem 4 (Lezaud (1998)).

Let (X_i) be an irreducible and reversible Markov chain on a finite set V with transition matrix Q, and let π be its stationary distribution. Let f : V → ℝ be such that E_π[f] = 0 and 0 < E_π[f²] ≤ b². Then, for any initial distribution q, any positive integer r, and all 0 < γ ≤ 1, the probability P_q((1/r) Σ_{i=1}^r f(X_i) ≥ γ) is bounded above by S times a term decaying exponentially in r, where ε(Q) = 1 − λ_2(Q), λ_2(Q) being the second largest eigenvalue of Q, and S = ∥q/π∥_2 (in the ℓ_2(π) norm). If γ ≪ b² and ε(Q) ≪ 1, then the upper bound becomes, up to lower-order terms, S exp(−r γ² ε(Q) / (12 b²)). Using the above result, we bound the sample complexity of our estimator in the following theorem. Please see the Appendix for a proof.

Theorem 5.

Let Q = D⁻¹A and let 𝜖, δ ∈ (0,1). Algorithm 1, using a sufficiently large number of samples r and burn-in t∗ ≥ tmix(G), returns an estimate v_RW that satisfies, with probability 1 − δ, |v_RW − T_1(A)| ≤ 𝜖 T_1(A). The number of nodes touched by the algorithm is O(t∗ + r). Note that Q = D⁻¹A has the same set of eigenvalues as the matrix D^{−1/2}AD^{−1/2}. For the Chung-Lu model, the eigenvalues of the matrix L = I − D^{−1/2}AD^{−1/2} can be bounded by the following result from Chung et al. (2003).

Theorem 6.

Let L = I − D^{−1/2}AD^{−1/2} denote the normalized Laplacian. Let A be a random graph generated from the given expected degrees model with expected degrees {δ_i}. If the minimum expected degree is sufficiently large, then with probability at least 1 − 1/n = 1 − o(1), all nontrivial eigenvalues λ_k of the Laplacian of G satisfy |1 − λ_k| = o(1). It follows from the above that ε(Q) = 1 − λ_2(Q) = 1 − λ_2(D^{−1/2}AD^{−1/2}) = λ(I − D^{−1/2}AD^{−1/2}) = 1 − o(1). Putting these together, we get the following corollary on the total number of node queries.

Corollary 6.1.

For a graph generated from the expected degrees model, with probability 1 − 1/n, Algorithm 1 needs to query O(t∗ + r) nodes in order to get a (1 ± 𝜖) estimate of T_1(A). This is a loose bound; better bounds can be derived for power-law degree distributions, for instance. Thus, we have proved that the approximation error for T_2(A) goes to zero in probability. In addition, Corollary 6.1 shows that the number of nodes that we need to query in order to obtain an accurate approximation is much smaller than n. Furthermore, computing T_2 only requires node sampling and counting degrees, and therefore the runtime is much smaller than that of eigenvalue algorithms. Therefore, T_2(A) is a computationally efficient and statistically accurate approximation of the epidemic threshold, while also requiring a much smaller data budget than T_1(A).

Numerical Results

In this section, we characterize the empirical performance of our sampling algorithm on three synthetic networks: two generated from the Chung-Lu model and one generated from the preferential attachment model of Barabási and Albert (1999).

Data

Our first dataset is a graph generated from the Chung-Lu model of expected degrees. We generated a power-law sequence (i.e., the fraction of nodes with degree d is proportional to d^{−β}) with exponent β = 2.5 and then generated a graph with this sequence as the expected degrees. Table 3 notes that, as expected, the spectral radius λ(A) is close to T_1(A).
Table 3

Statistics of the three synthetic datasets used

Data                 Nodes   Edges   λ(A)    T1(A)   Rel. error
Chung-Lu (β = 2.5)   50k     72k     43.83   48.33   0.102
Chung-Lu (uniform)   50k     130k    67.60   67.46   0.002
Pref-Attach          50k     250k    37      32.8    0.128
The second dataset is generated from the preferential attachment model (Barabási and Albert, 1999), where each incoming node adds 5 edges to the existing nodes, the probability of choosing a specific node as a neighbor being proportional to the current degree of that node. While the preferential attachment model naturally gives rise to a directed graph, we convert the graph to an undirected one before running our algorithm. It is interesting to note that even though the Chung-Lu model does not hold in this case, our first approximation, T_1(A), is close to λ(A).
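A minimal preferential-attachment generator can be sketched with the standard degree-weighted-urn scheme. This is our own implementation, at a much smaller scale than the 50k-node network used in the paper:

```python
import numpy as np

def preferential_attachment(n, m, rng):
    """Barabasi-Albert-style growth: each new node attaches m edges to
    existing nodes, chosen with probability proportional to degree."""
    adj = [set() for _ in range(n)]
    repeated = []                    # urn: each node appears once per degree
    targets = set(range(m))          # the first new node links to all seeds
    for v in range(m, n):
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = set()              # degree-biased targets for next node
        while len(targets) < m:
            targets.add(repeated[rng.integers(len(repeated))])
    return adj

rng = np.random.default_rng(7)
adj = preferential_attachment(2000, 5, rng)
d = np.array([len(a) for a in adj], dtype=float)
t1_est = (d ** 2).sum() / d.sum()    # degree-moment statistic T1(A)
```

The heavy-tailed degree distribution makes T1 substantially larger than the mean degree, reflecting the second-to-first degree-moment ratio.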

Implementation Details

In each of the networks, the random walk algorithm presented in Algorithm 1 was used for sampling. The random walk was started from an arbitrary node and every 10th node on the walk was sampled (to account for the mixing time). These samples were then used to calculate T_2(A). This experiment was repeated 10 times, giving estimates T_2^(1),…, T_2^(10). We then calculate two relative errors, for each i ∈ {1, 2,…, 10}:

𝜖_i^{T1} = |T_2^(i) − T_1(A)| / T_1(A),   𝜖_i^{λ} = |T_2^(i) − λ(A)| / λ(A).

By the triangle inequality, the two error metrics differ by at most the relative gap between T_1(A) and λ(A). We denote the averages of 𝜖_i^{T1} and 𝜖_i^{λ} as 𝜖̄^{T1} and 𝜖̄^{λ}, respectively; it is easy to observe that the same relation holds between the two average quantities. We plot the averages 𝜖̄^{T1} and 𝜖̄^{λ}, along with error bars that reflect the standard deviation, against the actual number of nodes seen by the random walk. Note that the x-axis accurately reflects how many times the algorithm actually queried the network, not just the number of samples used. Measuring the cost of uniform node sampling in this setting, for instance, would require keeping track of how many nodes are touched by a Metropolis-Hastings walk that implements the uniform distribution.
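The two error metrics above can be computed with a small helper (the function name is ours):

```python
import numpy as np

def relative_errors(t2_estimates, t1_value, lam_value):
    """Mean relative errors of repeated T2 estimates, measured against
    both the full-data statistic T1(A) and the spectral radius lambda(A)."""
    t2 = np.asarray(t2_estimates, dtype=float)
    err_t1 = np.abs(t2 - t1_value) / t1_value
    err_lam = np.abs(t2 - lam_value) / lam_value
    return err_t1.mean(), err_lam.mean()

# Example: two repeats bracketing T1 = 11 on a graph with lambda = 10.
mean_err_t1, mean_err_lam = relative_errors([10.0, 12.0], 11.0, 10.0)
```

For these toy numbers the mean errors are 1/11 against T1 and 0.1 against λ; the gap between the two metrics is bounded by the relative distance between T1(A) and λ(A).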

Results

In Fig. 1, we plot the mean relative errors, measured by 𝜖̄^{T1} and 𝜖̄^{λ}.
Figure 1

Results on three synthetic networks

For the two Chung-Lu networks, the algorithm is able to get a 10% approximation to the statistic T_1(A) by exploring at most 10% of the network. With more samples from the random walk, the mean relative errors settle to around 4–5%. However, once we measure the mean relative errors with respect to λ(A), it becomes clear that the estimator T_2(A) does better when the graph is closer to the assumed (i.e., Chung-Lu) model. For the Chung-Lu graph, the mean error 𝜖̄^{λ} is very similar to 𝜖̄^{T1}, which is to be expected. For the preferential attachment graph too, the estimate T_2 achieves better than 10% relative error in approximating λ(A). Note that, if we were instead counting only the nodes whose degrees were actually used for estimation, the fraction of the network used would be roughly 1–2% in all cases; the majority of the node query cost goes into making the random walk mix, via an initial burn-in period and by maintaining a certain number of steps between subsequent samples.

Discussion

In this work, we investigated the problem of computing SIR epidemic thresholds of social contact networks from the perspective of statistical inference. We considered the two challenges that arise in this context, due to the high computational and data-collection complexity of the spectral radius. For the Chung-Lu network generative model, the spectral radius can be characterized in terms of the degree moments. We utilized this fact to develop two approximations of the spectral radius. The first approximation is computationally efficient and statistically accurate, but requires data on the observed degrees of all nodes. The second approximation retains the computational efficiency and statistical accuracy of the first, while also reducing the number of queries, or the sample size, quite substantially. The results seem very promising for networks arising from the Chung-Lu and preferential attachment generative models. There are several interesting and important future directions. The methods proposed in this paper have provable guarantees only under the Chung-Lu model, although they also work very well under the preferential attachment model. This seems to indicate that the degree-based approximation might be applicable to a wider class of models. On the other hand, this leaves open the question of developing a better "model-free" estimator, as well as asking similar questions about other network features. In this work we only considered the problem of accurate approximation of the epidemic threshold. From a statistical as well as a real-world perspective, there are several related inference questions. These include uncertainty quantification, confidence intervals, one-sample and two-sample testing, etc. Social interaction patterns vary dynamically over time, and such network dynamics can have significant impacts on the contagion process (Leitch et al. 2019).
In this paper we only considered static social contact networks, and in future we hope to study epidemic thresholds for time-varying or dynamic networks. Finally, we note that the formulation in Eq. 1 is an approximation of the true epidemic threshold under the so-called quenched-mean-field approximation (Pastor-Satorras et al. 2015; Karrer et al. 2014). In recent work, Castellano and Pastor-Satorras (2020) showed that the SIS epidemic transition occurs at a point intermediate between λ(A) and T_1(A). In future work, we plan to extend our results to these more accurate expressions for the epidemic threshold.
References (28 in total)

1.  Modelling disease outbreaks in realistic urban social networks.

Authors:  Stephen Eubank; Hasan Guclu; V S Anil Kumar; Madhav V Marathe; Aravind Srinivasan; Zoltán Toroczkai; Nan Wang
Journal:  Nature       Date:  2004-05-13       Impact factor: 49.962

2.  Invasion threshold in heterogeneous metapopulation networks.

Authors:  Vittoria Colizza; Alessandro Vespignani
Journal:  Phys Rev Lett       Date:  2007-10-05       Impact factor: 9.161

3.  A nonparametric view of network models and Newman-Girvan and other modularities.

Authors:  Peter J Bickel; Aiyou Chen
Journal:  Proc Natl Acad Sci U S A       Date:  2009-11-23       Impact factor: 11.205

4.  Percolation on sparse networks.

Authors:  Brian Karrer; M E J Newman; Lenka Zdeborová
Journal:  Phys Rev Lett       Date:  2014-11-12       Impact factor: 9.161

5.  Unification of theoretical approaches for epidemic spreading on complex networks.

Authors:  Wei Wang; Ming Tang; H Eugene Stanley; Lidia A Braunstein
Journal:  Rep Prog Phys       Date:  2017-02-08

6.  Simulated epidemics in an empirical spatiotemporal network of 50,185 sexual contacts.

Authors:  Luis E C Rocha; Fredrik Liljeros; Petter Holme
Journal:  PLoS Comput Biol       Date:  2011-03-17       Impact factor: 4.475

7.  Spatial spread of the West Africa Ebola epidemic.

Authors:  Andrew M Kramer; J Tomlin Pulliam; Laura W Alexander; Andrew W Park; Pejman Rohani; John M Drake
Journal:  R Soc Open Sci       Date:  2016-08-03       Impact factor: 2.963

8.  Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study.

Authors:  Kaiyuan Sun; Jenny Chen; Cécile Viboud
Journal:  Lancet Digit Health       Date:  2020-02-20

9.  Network theory and SARS: predicting outbreak diversity.

Authors:  Lauren Ancel Meyers; Babak Pourbohloul; M E J Newman; Danuta M Skowronski; Robert C Brunham
Journal:  J Theor Biol       Date:  2005-01-07       Impact factor: 2.691

10.  Modeling control strategies of respiratory pathogens.

Authors:  Babak Pourbohloul; Lauren Ancel Meyers; Danuta M Skowronski; Mel Krajden; David M Patrick; Robert C Brunham
Journal:  Emerg Infect Dis       Date:  2005-08       Impact factor: 6.883

