
Pooled testing of traced contacts under superspreading dynamics.

Stratis Tsirtsis1, Abir De2, Lars Lorch3, Manuel Gomez-Rodriguez1.   

Abstract

Testing is recommended for all close contacts of confirmed COVID-19 patients. However, existing pooled testing methods are oblivious to the circumstances of contagion provided by contact tracing. Here, we build upon a well-known semi-adaptive pooled testing method, Dorfman's method with imperfect tests, and derive a simple pooled testing method based on dynamic programming that is specifically designed to use information provided by contact tracing. Experiments using a variety of reproduction numbers and dispersion levels, including those estimated in the context of the COVID-19 pandemic, show that the pools found using our method result in a significantly lower number of tests than those found using Dorfman's method. Our method provides the greatest competitive advantage when the number of contacts of an infected individual is small, or the distribution of secondary infections is highly overdispersed. Moreover, it maintains this competitive advantage under imperfect contact tracing and significant levels of dilution.


Year:  2022        PMID: 35344547      PMCID: PMC8989305          DOI: 10.1371/journal.pcbi.1010008

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.475


This is a PLOS Computational Biology Methods paper.

Introduction

As countries around the world learn to live with COVID-19, the use of testing, contact tracing and isolation has proven to be as important as social distancing for containing the spread of the disease [1,2]. However, as infection levels grow, their effectiveness reaches a tipping point and quickly degrades, since health authorities lack the resources to trace and test all contacts of a diagnosed individual [3]. In this context, there has been a flurry of interest in the use of pooled testing (testing pools of multiple samples simultaneously) to scale up testing under limited resources.

The literature on pooled testing methods has a rich history, starting with the seminal work by Dorfman [4]. However, the majority of existing methods [4-22], including those allowing for different individual infection probabilities [7,11-15] as well as those developed and used in the context of the COVID-19 pandemic [16-22], assume statistical independence of the samples to be tested. This assumption may seem justified by classical epidemiological models, where the number of infections caused by a single individual follows a Poisson distribution. However, for COVID-19, there is growing evidence that the number of secondary infections caused by a single individual is overdispersed: most individuals do not infect anyone, but a few superspreaders infect many in infection hotspots [23-26] (overdispersion has also been observed in MERS and SARS [27-30]). This suggests that the infection statuses of samples from close contacts of the same infected individual may be correlated.

Only very recently has a narrow line of work relaxed the above independence assumption [31-33]. However, these works only investigate to what extent the correlation between samples influences the expected number of tests in pooled testing, rather than proposing a method to find the optimal partition of correlated samples into pools. Furthermore, their investigations build upon infection probability distributions whose parameters may be difficult to estimate from real contact tracing data, reducing their potential applicability in practice.

In this work, we build upon a well-known semi-adaptive pooled testing method, Dorfman's method with imperfect tests [7,8,34]. In Dorfman's method, samples from multiple individuals are first pooled together and evaluated using a single test. If a pooled sample is negative, all individuals in the pooled sample are deemed negative. If the pooled sample is positive, each individual sample from the pool is then tested separately. To determine testing pools, Dorfman's method models the probability of individual samples being positive with independent and identically distributed (i.i.d.) Bernoulli distributions. Contrary to this, we assume that: (i) the samples to be tested are all the (close) contacts of a diagnosed individual during their infectious period, identified using contact tracing, and (ii) the number of true positive samples, i.e., secondary infections caused by the diagnosed individual, follows an overdispersed generalized negative binomial distribution, as commonly done in epidemiological studies quantifying the superspreading of infectious diseases [23-25,28,30]. We introduce a dynamic programming algorithm that efficiently finds a partition of the contacts into pools, possibly of different sizes, that optimally trades off the average number of tests, false negatives and false positives in polynomial time.
Under our assumptions, contacts are exchangeable within pools, hence the optimal pools can be filled and tested sequentially as samples from contacts become available, as for Dorfman’s method. Experiments using a variety of reproduction numbers and dispersion levels in secondary infections, including those observed for COVID-19, show that the pools found using our method result in a significantly lower average number of tests than those found using the standard Dorfman’s method. Our method provides the greatest competitive advantage when the number of contacts of an infected individual is small or the distribution of secondary infections is highly overdispersed. Moreover, it maintains this competitive advantage under imperfect contact tracing and significant levels of dilution.
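As a point of reference for the pool-size trade-off involved, consider the textbook analysis of Dorfman's scheme with perfect tests and i.i.d. infection probability p (a simplification of the setting above, included only for intuition): a pool of size s always consumes one pooled test and, with probability 1 − (1 − p)^s, another s individual retests, so the expected number of tests per individual is

\[
\frac{\mathbb{E}[\text{tests}]}{s} = \frac{1}{s} + 1 - (1-p)^{s},
\]

which is minimized at s ≈ 1/√p when p is small. The paper's objective generalizes this cost to imperfect tests, dilution and correlated infection statuses (refer to S1 Appendix).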

Methods

Modeling overdispersion of infected contacts

Previous work has mostly built on the assumption that the number of infections X caused by a single individual follows a Poisson distribution with mean R, so X ~ Poisson(R), where R is often called the effective reproduction number. However, having equal mean and variance, the Poisson distribution cannot capture settings where the number of cases exhibits higher variance. Following recent work in the context of COVID-19 [24,25], we instead model X using a generalized negative binomial distribution. In a (standard) negative binomial distribution, X ~ NBin(k, p) can be interpreted as the number of successes before the k-th failure in a sequence of Bernoulli trials with success probability p. In a generalized negative binomial distribution, k > 0 can take real values and the probability mass function is given by

P(X = n) = [Γ(n + k) / (Γ(k) n!)] p^n (1 − p)^k,  n = 0, 1, 2, …,

where k is called the dispersion parameter and small values of k correspond to higher variance of the distribution. Here, we assume that the number of secondary infections X is distributed as X ~ NBin(k, p) with p = R / (k + R), hence parameterizing X via its mean R and dispersion parameter k. Under this parameterization, Var[X] = R (1 + R / k), which is greater than the variance R of the Poisson for k < ∞. For k → ∞, the sequence of random variables X_k ~ NBin(k, R / (k + R)) converges in distribution to X ~ Poisson(R). Furthermore, since by assumption we identify all contacts of a diagnosed individual using contact tracing, we have prior information about the maximum number of possible infections N. More specifically, we can use the following truncated negative binomial distribution in our derivations:

q_{R,k,N}(n) = P(X = n | X ≤ N),    (Eq 1)

where X ~ NBin(k, R / (k + R)); note that P(X = n | X ≤ N) = P(X = n) / P(X ≤ N) if n ≤ N and 0 otherwise. In practice, identifying all contacts of a diagnosed individual might not always be feasible; however, our method remains competitive with respect to Dorfman's even if contact tracing is unable to identify all contacts of a diagnosed individual (refer to the Results section).
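For concreteness, the truncated distribution q_{R,k,N} is straightforward to compute numerically. The sketch below is an illustrative implementation (the function names are ours, not the paper's) and assumes SciPy's negative binomial parameterization, in which NBin(k, k / (k + R)) has mean R and dispersion k:

```python
import numpy as np
from scipy.stats import nbinom

def truncated_nbinom_pmf(R, k, N):
    """Return q_{R,k,N}(n) for n = 0..N, i.e. P(X = n | X <= N)."""
    dist = nbinom(k, k / (k + R))     # mean R, dispersion k; real-valued k allowed
    pmf = dist.pmf(np.arange(N + 1))  # P(X = n) for n = 0..N
    return pmf / pmf.sum()            # renormalize by P(X <= N)

def sample_secondary_infections(R, k, N, rng, size=1):
    """Sample the number of infected contacts among N traced contacts."""
    q = truncated_nbinom_pmf(R, k, N)
    return rng.choice(N + 1, p=q, size=size)

rng = np.random.default_rng(0)
print(truncated_nbinom_pmf(R=2.5, k=0.1, N=20).round(3))
print(sample_secondary_infections(2.5, 0.1, 20, rng, size=5))
```

With R = 2.5 and k = 0.1, most of the probability mass sits at n = 0 while a small tail extends to large n, which is exactly the superspreading pattern the paper exploits.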

Pooling contacts of a positively diagnosed individual

Our goal is to identify the infected individuals among all contacts of a positively diagnosed individual via testing, where V denotes the set of traced contacts and N = |V|. For each individual j ∈ V, we define the indicator random variable X_j ∈ {0, 1}, which takes the value 1 if j is infected, and, for each pool of individuals G ⊆ V, we define the number of infected in G as N_G = Σ_{j ∈ G} X_j. Moreover, following our assumption on the distribution of the number of secondary infections, we define N_V = Σ_{j ∈ V} X_j ~ q_{R,k,N}. Let T_G ∈ {0, 1} denote the outcome of a pooled test on G. To account for imperfect tests, we specify the sensitivity se (i.e., true positive probability) and the specificity sp (i.e., true negative probability) of individual testing. To capture the effect of dilution when testing a pool G, we adopt the model of Burns and Mauro [35] and parameterize the conditional probabilities as

P(T_G = 1 | N_G = n) = se · (n / |G|)^d  for n ≥ 1,
P(T_G = 1 | N_G = 0) = 1 − sp.

Here, d ∈ [0, 1] controls the effect that dilution has on a pooled test's sensitivity; the right-hand side of the top equation converges to se as d → 0. In the above, we implicitly assume that all infected individuals contribute equally to the concentration of viral load in a pool and that the probability of a false positive pooled test is independent of the size of the pool, since the concentration of the virus is zero in any case.
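In code, the dilution model reads as follows (a minimal sketch; the function name and default values, which match the parameters used later in the paper, are ours):

```python
def pooled_test_positive_prob(n_infected, pool_size, se=0.8, sp=0.98, d=0.0455):
    """P(pooled test positive | n_infected of pool_size samples are infected)."""
    if n_infected == 0:
        return 1.0 - sp  # false positive rate, independent of pool size
    # sensitivity decays as the infected fraction shrinks; d = 0 recovers se
    return se * (n_infected / pool_size) ** d
```

For example, with one infected sample in a pool of 10 and the defaults above, the pooled sensitivity is 0.8 · 0.1^0.0455 ≈ 0.72.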

Dorfman testing under overdispersion of infected contacts

Dorfman testing proceeds by pooling individuals into non-overlapping pools G_1, …, G_C that form a partition of V, and first testing the combined samples of each pool using a single test. Every member of a pool is marked as negative if their combined sample tests negative. In contrast, if the combined sample of a pool tests positive, each individual in the pool is subsequently tested separately to determine who exactly is marked positive in the pool. Let X̂_j denote the indicator random variable for the event that individual j is marked as infected in pool G after Dorfman testing, and let T_j ∈ {0, 1} denote the outcome of individual j's test. Then, its value can be expressed as

X̂_j = T_G · T_j,

i.e., it takes the value 1 if and only if the combined sample of pool G first tests positive and subsequently the sample of individual j tests positive. In the simple case of no dilution (d = 0), the pooled test behaves like an individual test regardless of the pool size, and we have, for instance, P(X̂_j = 1 | X_j = 1) = se², since both the pooled and the individual test must then return true positives.
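The two-stage procedure just described is easy to simulate. The following sketch (our illustration, not code from the paper) simulates Dorfman testing of a single pool under the dilution model above:

```python
import numpy as np

def dorfman_test_pool(statuses, se=0.8, sp=0.98, d=0.0455, rng=None):
    """Simulate Dorfman testing of one pool.

    statuses: 0/1 array of true infection states of the pool's members.
    Returns (number of tests used, 0/1 array of who is marked positive).
    """
    rng = rng or np.random.default_rng()
    statuses = np.asarray(statuses)
    n, size = int(statuses.sum()), len(statuses)
    p_pool = (1.0 - sp) if n == 0 else se * (n / size) ** d
    if rng.random() >= p_pool:            # pooled test negative: whole pool cleared
        return 1, np.zeros(size, dtype=int)
    p_ind = np.where(statuses == 1, se, 1.0 - sp)
    marked = (rng.random(size) < p_ind).astype(int)
    return 1 + size, marked               # pooled test plus one retest per member

tests, marked = dorfman_test_pool(np.array([1, 0, 0, 0, 0]))
print(tests, marked)
```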

Finding the optimal pool sizes

We first compute the expected numbers of tests, false negatives and false positives due to each pool under Dorfman testing and our above model of infected contacts. Their values only depend on the pool size (refer to S1 Appendix). Hence, for a given number of contacts N and a pool of size s, we overload notation and write T(s), FN(s) and FP(s) for the expected numbers of tests, false negatives and false positives, respectively. Let S_N be the set of all multisets {s_1, s_2, …, s_C} of positive integers such that C ≥ 1 and s_1 + s_2 + ⋯ + s_C = N. It is easy to see that every such multiset corresponds to a valid partition of the set of contacts into a set of pools with sizes s_1, s_2, …, s_C. In that context, our goal is to find the sizes of the pools that optimally trade off the expected numbers of tests, false negatives and false positives [8,35]:

minimize_{{s_1, …, s_C} ∈ S_N}  Σ_{c=1}^{C} f(s_c),   with f(s) = T(s) + λ1 FN(s) + λ2 FP(s),

where λ1 and λ2 are given nonnegative parameters that balance the penalty incurred by false negatives and false positives. Note that the parameters λ1, λ2 can be thought of as Lagrange multipliers for the problem of minimizing the expected number of tests subject to the expected numbers of false negatives and false positives being less than two given values. For a discussion of alternative objective functions and their benefits, we refer the interested reader to [36,37]. Perhaps surprisingly, we can solve the above problem in polynomial time using a simple dynamic programming procedure. To do so, we define the following recursive functions:

h(n) = min_{1 ≤ s ≤ n} [f(s) + h(n − s)]  and  g(n) = argmin_{1 ≤ s ≤ n} [f(s) + h(n − s)],

where h(0) = 0. Interpreting n as the number of individuals not yet assigned to a pool, using the two recursive functions, the (sizes of the) optimal sets of pools can be recovered by computing h(n) in increasing order of n, up to the value N, and backtracking through g. Refer to S2 Appendix for pseudocode summarizing the overall procedure and a formal proof of optimality. If the testing authorities wish to manually assign a given number x of contacts to pools based on some other criteria (e.g., household membership [38]), the optimal sets of pools for the remaining N − x contacts can be recovered by computing h(n) in increasing order of n, up to the value N − x.
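A compact implementation of this dynamic program is sketched below. Since the paper's exact expressions for T(s), FN(s) and FP(s) are derived in S1 Appendix, the cost f is passed in as a function; the usage example plugs in the classical Dorfman cost under perfect tests and i.i.d. infection probability p purely for illustration:

```python
def optimal_pool_sizes(N, f):
    """Partition N contacts into pools minimizing the sum of f(pool size).

    Returns (optimal cost h(N), list of optimal pool sizes). Runs in O(N^2).
    """
    h = [0.0] * (N + 1)   # h[n]: optimal cost of pooling n unassigned contacts
    g = [0] * (N + 1)     # g[n]: pool size chosen at state n (the argmin)
    for n in range(1, N + 1):
        h[n], g[n] = min((f(s) + h[n - s], s) for s in range(1, n + 1))
    sizes, n = [], N
    while n > 0:          # backtrack through the recorded argmins
        sizes.append(g[n])
        n -= g[n]
    return h[N], sizes

# Illustrative cost: classical Dorfman with perfect tests, i.i.d. infection prob p.
p = 0.05
f = lambda s: 1.0 if s == 1 else 1.0 + s * (1.0 - (1.0 - p) ** s)
print(optimal_pool_sizes(20, f))   # picks pools of size 5 when p = 0.05
```

Under the paper's correlated model, f would instead be built from the pool-size-dependent expectations T(s), FN(s) and FP(s) with the chosen λ1, λ2.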

Experimental design

We perform simulations to compare our method against Dorfman's method in terms of its ability to optimally trade off resources and false test outcomes in the presence of overdispersed distributions of secondary infections. Although it is possible to derive analytical expressions for each method's expected numbers of tests, false negatives and false positives, we resort to simulations to fully characterize and compare their (empirical) distributions. To evaluate the performance of the two methods, we generate the infection states of a set of contacts by first fixing the number of contacts N and sampling the number of secondary infections n ~ q_{R,k,N}(n), where q_{R,k,N}(n) is the truncated negative binomial distribution defined in Eq 1. Then, we select n of the N contacts at random and set their status to infected. To find the optimal pool sizes given by our method, we use our dynamic programming algorithm with the expected numbers of tests, false negatives and false positives computed under the same truncated negative binomial distribution of secondary infections (refer to S1 Appendix). To find the optimal pool sizes given by Dorfman's method, we use a variation of the dynamic programming algorithm in which the expected numbers of tests, false negatives and false positives are computed assuming an i.i.d. probability of infection for each individual contact, matched to the mean of the same truncated negative binomial distribution (refer to S3 Appendix).

Following the literature on COVID-19, we consider (PCR) tests with high specificity and moderate sensitivity [39]. It is worth noting that we distinguish between two types of sensitivity and specificity: analytic and clinical. The former reflects a test's accuracy in a controlled laboratory environment, while the latter is also affected by factors related to sample collection (e.g., stage of the disease at the time of collection, use of a throat or nasal swab) and is therefore typically lower [40]. Since we focus on pooled testing of samples obtained through contact tracing, we generally refer to clinical sensitivity and specificity unless otherwise specified. In this context, most studies report values in the range of 70%-98% and 97%-99% for individual tests' (clinical) sensitivity and specificity, respectively, with the exact value differing based on the method of sample collection and the laboratory protocol followed [41-43]. Informed by these values, we set se = 0.8 and sp = 0.98; we provide additional results for alternative se, sp values in the Supporting information section.

To set the value of the parameter d that controls the effect of dilution, we fit the parameterized expression of the conditional probability P(T_G = 1 | N_G = n) to real pooled testing data analyzed by Bateman et al. [44]. In this study, the authors report that a PCR test's analytic sensitivity for an undiluted sample equals 0.99, whereas this sensitivity drops to 0.93, 0.91 and 0.81 when a single infected sample is present in pools of 5, 10 and 50, respectively. The analytic specificity of the test is not reported, hence we assume that it is also equal to 0.99. Using these values, we obtain an estimate of d = 0.0455 via ridge regression, which we use throughout the paper. The resulting curve showing the effect of dilution on the sensitivity of a pooled test as a function of the virus's concentration is depicted in S1 Fig.
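The estimate of d can be approximately reproduced from the four sensitivities reported by Bateman et al. [44]. The sketch below assumes the model se_pool = se_analytic · (1/m)^d for a single infected sample in a pool of size m, and a small ridge penalty; the paper does not spell out its exact regression setup, so the penalty strength here is an assumption:

```python
import numpy as np

# Pooled sensitivities with one infected sample per pool, from Bateman et al. [44].
pool_sizes = np.array([1.0, 5.0, 10.0, 50.0])
sensitivities = np.array([0.99, 0.93, 0.91, 0.81])
se_analytic = 0.99

# Model: se_pool = se_analytic * (1 / pool_size)**d. In log space this is the
# linear model y = d * x with y = log(se_pool / se_analytic), x = -log(pool_size).
x = -np.log(pool_sizes)
y = np.log(sensitivities / se_analytic)
alpha = 1e-3                          # assumed (tiny) ridge penalty
d_hat = (x @ y) / (x @ x + alpha)     # closed-form 1-D ridge estimate
print(round(d_hat, 4))                # ~0.0465, close to the paper's 0.0455
```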

Results

Reduction in the average number of tests compared to Dorfman’s method

We first compare the performance of our method and Dorfman's method in finding the pools that minimize the average number of tests (i.e., λ1 = λ2 = 0) for fixed values of the reproductive number R and the dispersion parameter k, matching estimates obtained during the early phase of the COVID-19 pandemic [24]. Table 1 summarizes the results for different numbers of contacts N of the diagnosed individuals. The results show that our method achieves a lower average number of tests across all settings and indicate a greater competitive advantage when the number of contacts is small. These results hold across a variety of sensitivity and specificity values (refer to S1 and S2 Tables and S2 Fig). At the same time, the average numbers of false negatives and false positives are similar for the two methods. That said, we observe that both methods exhibit high variance, with ours being generally larger. Looking at the average pool sizes chosen by each method in Fig 1A, we observe that Dorfman's method chooses smaller pool sizes that increase with the number of contacts, while the ones chosen by our method remain relatively constant. This leads to significant differences between the distributions of the number of tests performed under the two methods. For example, as shown in Fig 1B, when the number of contacts is N = 20, our method is most likely to perform about 70% fewer tests than Dorfman's. However, due to the more conservative pool sizes selected by Dorfman's method, there is a small probability that our method ends up performing more tests, sometimes even double the amount.
Table 1

Average numbers of tests, false negatives and false positives achieved by our method (Dorf-OD) and classic Dorfman’s method (Dorf-Cl) for various values of the number of contacts N.

Here, we sample the number of secondary infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 [24] and, we set the sensitivity and specificity to se = 0.8, sp = 0.98. For each combination of method and parameter values, the averages and standard deviations are estimated using 10,000 samples.

| N | Avg # tests per contact (Dorf-Cl) | Avg # tests per contact (Dorf-OD) | Avg # false negatives per contact (Dorf-Cl) | Avg # false negatives per contact (Dorf-OD) | Avg # false positives per contact (Dorf-Cl) | Avg # false positives per contact (Dorf-OD) |
|---|---|---|---|---|---|---|
| 20 | 0.331 (σ: 0.250) | 0.245 (σ: 0.396) | 0.024 (σ: 0.070) | 0.025 (σ: 0.087) | 0.002 (σ: 0.009) | 0.003 (σ: 0.013) |
| 50 | 0.259 (σ: 0.220) | 0.219 (σ: 0.324) | 0.016 (σ: 0.050) | 0.016 (σ: 0.056) | 0.002 (σ: 0.007) | 0.003 (σ: 0.009) |
| 100 | 0.207 (σ: 0.184) | 0.180 (σ: 0.239) | 0.009 (σ: 0.030) | 0.009 (σ: 0.032) | 0.002 (σ: 0.005) | 0.002 (σ: 0.006) |
| 200 | 0.164 (σ: 0.149) | 0.148 (σ: 0.201) | 0.005 (σ: 0.016) | 0.005 (σ: 0.016) | 0.001 (σ: 0.004) | 0.002 (σ: 0.005) |
Fig 1

Performance of our method (Dorf-OD) and classic Dorfman’s method (Dorf-Cl) for various values of the number of contacts N of a diagnosed individual during their infectious period.

Panel (A) shows the average pool size. Panel (B) shows the empirical distribution of the percentage of tests saved by using our method instead of Dorfman’s method, where we exclude the highest and lowest 5% of observations and the purple dashed lines represent average values. In both panels, we sample the number of secondary infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 [24] and, we set the sensitivity and specificity to se = 0.8, sp = 0.98. For each combination of method and parameter values, the averages and quantiles in both panels are estimated using 10,000 samples.


Next, we investigate to what extent our method improves upon Dorfman's method for other values of the reproductive number R and dispersion parameter k, including those estimated by several COVID-19 studies [23-25,45-48]. Fig 2 summarizes the results, which show that our method offers the greatest competitive advantage whenever the reproductive number R is large and the number of secondary infections is overdispersed, i.e., k → 0. The results suggest that, for an infected individual with N = 100 contacts and under the estimated values of the reproductive number and dispersion parameter reported in the COVID-19 literature, our method would have saved 3%-30% of tests with respect to Dorfman's method. Similar findings hold for a variety of values of the number of contacts N, sensitivity se and specificity sp (refer to S3-S5 Figs).
Fig 2

Percentage of tests saved by using our method instead of Dorfman’s method for different values of the reproductive number R and dispersion parameter k.

Darker colors correspond to a higher average percentage of tests saved. To generate the contour, we evaluate the average percentage of tests saved using values in [0.25, 5.0] with step 0.05 for R and in [0.05, 1.0] with step 0.05 for k. The overlaid annotations indicate the average percentage of tests saved for several estimated values of the reproductive number and dispersion parameter reported in the COVID-19 literature [23–25,45–48]. Here, we set the number of contacts to N = 100 and the sensitivity and specificity to se = 0.8, sp = 0.98. In each experiment we estimate the average using 10,000 samples.


Balancing tests, false negatives and false positives

To explore the trade-off between the average number of tests that our method achieves and the false positive and negative rates, we experiment with different values of the parameters λ1, λ2 and the sensitivity se and specificity sp. Fig 3 summarizes the results, which show that to achieve lower false negative and false positive rates, more tests need to be performed. When trading off the number of tests with the number of false positives (λ1 = 0, λ2 > 0), our method gradually changes the average pool size, leading to many unique pool partitions across λ2 values. For small values of λ2, the optimal solution leads to pool sizes that mainly minimize the number of tests. For large values of λ2, the optimal solution consists of pools of two contacts. When balancing the number of tests with the number of false negatives (λ1 > 0, λ2 = 0) under the most realistic values of sensitivity and specificity, we observe that our method results in a small number of unique pool partitions across λ1 values. For small values of λ1, the optimal solution leads to pool sizes that mainly minimize the number of tests, similarly as in the previous case. For large values of λ1, our method reaches a tipping point, after which, the optimal solution corresponds to individual testing (i.e., pools of size one). In contrast, when both the sensitivity and specificity are high (se = sp = 0.99), we notice that the number of unique pool partitions increases. This indicates that when testing authorities have low tolerance for false negatives in the presence of significantly imperfect tests (i.e., when the value of λ1 is large), reducing the pool size contributes marginally towards the reduction of false negative outcomes and individual testing becomes necessary. For the exact partitions into pools given by our method as we vary the values of λ1 and λ2, refer to S4–S11 Tables. Finally, note that, for large values of λ1 (λ2), Dorfman’s method also results in pools of size one (two) and, therefore, the two methods become equivalent.
Fig 3

Average number of tests, false negative rate and false positive rate achieved by our method under different values of the parameters λ1 and λ2 and different levels of sensitivity se and specificity sp.

In each panel, we either penalize the false negative rate (i.e., we vary λ1 and set λ2 = 0) or the false positive rate (i.e., we vary λ2 and set λ1 = 0). Accordingly, for the former, we show the false negative rate vs average number of tests (in blue) and, for the latter, we show the false positive rate vs average number of tests (in purple). Here, we set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. For the exact sizes of the optimal pools corresponding to each point in the figure, refer to S4–S11 Tables.


On the effect of dilution

To assess the effect of dilution on the performance of our method and Dorfman's method at minimizing the average number of tests (i.e., λ1 = λ2 = 0), we experiment with different values of the parameter d, which controls the effect of dilution on the sensitivity of a pooled test. Fig 4 summarizes the results. As expected, Fig 4A shows that the average number of tests (false negatives) decreases (increases) as the level of dilution increases. However, we observe that our method presents a clear advantage when d < 0.6, which it achieves by favoring larger pool sizes, as shown in Fig 4B, and it never performs worse than Dorfman's method across the entire range of dilution levels. In this context, we point out that realistic values of the parameter d likely lie in the lower range of the spectrum: our estimate of the dilution parameter based on data by Bateman et al. [44] is d = 0.0455, while other studies in the context of COVID-19 report even weaker dilution effects (e.g., Yelin et al. [49] report an analytic sensitivity of 96% for pools of size 10). Therefore, we conclude that our method would retain a competitive advantage over Dorfman's method even if the dilution parameter d were slightly misspecified.
Fig 4

Performance of our method (Dorf-OD) and classic Dorfman’s method (Dorf-Cl) for various values of the dilution parameter d.

Panel (A) shows the average numbers of tests (solid lines) and false negatives (dashed lines). Panel (B) shows the average pool size. In both panels, shaded regions represent 95% confidence intervals. Here, we set N = 100, R = 2.5, k = 0.1, se = 0.8, sp = 0.98 and, for each combination of method and parameter value, the averages are estimated using 10,000 samples.


Performance in the presence of unreported contacts

So far, we have assumed that all close contacts of an infected individual are identified via contact tracing. Here, we study to what extent our method remains favorable over Dorfman's if contact tracing is incomplete, i.e., the true number of close contacts of an infected individual is underreported. As before, we sample the infection statuses for an individual with a set of contacts V, but we assume that only a random subset V′ ⊆ V of fixed size N is reported and tested. Fig 5 summarizes the results, which show that our method maintains its advantage at saving tests in comparison to Dorfman's method even when half of the individual's contacts are not reported to the contact tracing authorities. We also observe that the average percentage of tests saved by our method compared to Dorfman's increases as the effectiveness of contact tracing declines and the number of infected individuals among the set of traced contacts becomes smaller.
Fig 5

Performance of our method and Dorfman’s method under incomplete contact tracing.

Panel (A) shows the average percentage of tests saved by using our method instead of Dorfman's under various values of the number of traced contacts N and of the percentage of contacts who were successfully traced, i.e., N / |V|. Panel (B) shows the number of infected contacts in the traced subset V′. In both panels, error bars represent 95% confidence intervals. Here, we first sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 for a set of contacts V and then, we compute pool sizes and evaluate both methods based on a random subset V′ of size N. For each combination of method and parameter values, the averages in all panels are estimated using 10,000 samples.


Discussion

We have introduced a pooled testing method based on Dorfman's method that is specifically designed to use information provided by contact tracing. In comparison with Dorfman's method, we showed through realistic simulations that our method finds pools that lead to a significant reduction in the number of tests performed under a variety of epidemiological conditions, including those observed during the COVID-19 pandemic. Moreover, we demonstrated that our method maintains its competitive advantage with respect to Dorfman's method under imperfect contact tracing and significant levels of dilution. Our results have direct implications for the allocation of limited and imperfect testing resources in future pandemics whenever there exists evidence of substantial overdispersion in the number of secondary infections. However, we acknowledge that more research is needed to more accurately characterize the level of overdispersion in a pandemic, which is a prerequisite for our method to operate. In this context, it would be interesting to extend our approach using distributions other than the generalized negative binomial, which might reflect the number of secondary infections more faithfully in different contact tracing scenarios. Moreover, it would be worth exploring alternative dilution models and objective functions. Another limiting factor of our method, which, however, applies to many pooled testing methods, is the assumption that the algorithm deciding on the partition of contacts into pools has access to the true sensitivity and specificity, which may not be trivial to obtain in practice [50]. A potential avenue for future work would be to investigate the impact of different testing methods (including ours) on the evolution of an epidemic under a limited testing capacity, using individual-based models [51-53]. Finally, to make our method applicable and beneficial for real contact tracing and pooled testing operations, it would be interesting to validate its reduced consumption of tests with respect to Dorfman's in randomized controlled studies.

Effect of dilution on a pooled test’s sensitivity.

The two lines show the sensitivity of a pooled test as a function of the concentration of viral load, based on the parameterized dilution model described in the Methods section. The green line shows a pooled test's analytic sensitivity (high se, sp values), fitted to the dilution data by Bateman et al. [44], which gives an estimate of d = 0.0455 via ridge regression. The blue line shows a pooled test's clinical sensitivity (moderate se and high sp values) under the same value of the dilution parameter d. (TIF)

Performance of our method (Dorf-OD) and classic Dorfman’s method (Dorf-Cl) for various values of the number of contacts N, under additional levels of sensitivity se and specificity sp.

In panels (A, B), we set se = 0.7, sp = 0.97; in panels (C, D), we set se = 0.9, sp = 0.99; and, in panels (E, F), we set se = 0.99, sp = 0.99. Panels (A, C, E) show the average pool size. Panels (B, D, F) show the empirical distribution of the percentage of tests saved by using our method instead of Dorfman's method, where we exclude the highest and lowest 5% of observations and the purple dashed lines represent average values. In all panels, we sample the number of secondary infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 [24]. For each combination of method and parameter values, the averages and quantiles in all panels are estimated using 10,000 samples. (TIF)

Percentage of tests saved by using our method instead of Dorfman’s method for different values of the reproductive number R and dispersion parameter k, under additional levels of sensitivity se, specificity sp and numbers of contacts N.

In panels (A, B), we set N = 20 and N = 50, respectively, and, in both panels, we set the sensitivity and specificity to se = 0.8, sp = 0.98. Darker colors correspond to a higher average percentage of tests saved. To generate the contours, we evaluate the average percentage of tests saved using values in [0.25, 5.0] with step 0.05 for R and in [0.05, 1.0] with step 0.05 for k. The overlaid annotations indicate the average percentage of tests saved for several estimated values of the reproductive number and dispersion parameter reported in the COVID-19 literature [23-25,45-48]. In each experiment, we estimate the average using 10,000 samples. (TIF)

In panels (A, B, C), we set N = 20, N = 50 and N = 100, respectively, and, in all panels, we set the sensitivity and specificity to se = 0.7, sp = 0.97. Darker colors correspond to a higher average percentage of tests saved. To generate the contours, we evaluate the average percentage of tests saved using values in [0.25, 5.0] with step 0.05 for R and in [0.05, 1.0] with step 0.05 for k. The overlaid annotations indicate the average percentage of tests saved for several estimated values of the reproductive number and dispersion parameter reported in the COVID-19 literature [23-25,45-48]. In each experiment, we estimate the average using 10,000 samples. (TIF)

In panels (A, B, C), we set N = 20, N = 50 and N = 100, respectively, and, in all panels, we set the sensitivity and specificity to se = 0.9, sp = 0.99. Darker colors correspond to a higher average percentage of tests saved. To generate the contours, we evaluate the average percentage of tests saved using values in [0.25, 5.0] with step 0.05 for R and in [0.05, 1.0] with step 0.05 for k. The overlaid annotations indicate the average percentage of tests saved for several estimated values of the reproductive number and dispersion parameter reported in the COVID-19 literature [23-25,45-48]. In each experiment, we estimate the average using 10,000 samples. (TIF)

Average numbers of tests, false negatives and false positives of our method (Dorf-OD) and classic Dorfman’s method (Dorf-Cl) for various values of the number of contacts N, under additional levels of sensitivity se and specificity sp.

Here, we set the sensitivity and specificity to se = 0.7, sp = 0.97. We sample the number of secondary infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 [24] and, for each combination of method and parameter values, the averages and standard deviations are estimated using 10,000 samples. (DOCX)

Here, we set the sensitivity and specificity to se = 0.9, sp = 0.99. We sample the number of secondary infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 [24] and, for each combination of method and parameter values, the averages and standard deviations are estimated using 10,000 samples. (DOCX)

Here, we set the sensitivity and specificity to se = 0.99, sp = 0.99. We sample the number of secondary infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1 [24] and, for each combination of method and parameter values, the averages and standard deviations are estimated using 10,000 samples. (DOCX)

Pool partitions corresponding to the points of Fig 3A, resulting from penalizing the false negative rate.

Here, under the corresponding sensitivity and specificity values, we vary λ1 while fixing λ2 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3A, resulting from penalizing the false positive rate.

Here, under the corresponding sensitivity and specificity values, we vary λ2 while fixing λ1 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3B, resulting from penalizing the false negative rate.

Here, under the corresponding sensitivity and specificity values, we vary λ1 while fixing λ2 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3B, resulting from penalizing the false positive rate.

Here, under the corresponding sensitivity and specificity values, we vary λ2 while fixing λ1 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3C, resulting from penalizing the false negative rate.

Here, under the corresponding sensitivity and specificity values, we vary λ1 while fixing λ2 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3C, resulting from penalizing the false positive rate.

Here, under the corresponding sensitivity and specificity values, we vary λ2 while fixing λ1 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3D, resulting from penalizing the false negative rate.

Here, under the corresponding sensitivity and specificity values, we vary λ1 while fixing λ2 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Pool partitions corresponding to the points of Fig 3D, resulting from penalizing the false positive rate.

Here, under the corresponding sensitivity and specificity values, we vary λ2 while fixing λ1 = 0 and, for each resulting partition, we compute the average number of tests and the false negative/positive rates. We set the number of contacts to N = 100 and sample the number of positive infections from a truncated negative binomial distribution with reproductive number R = 2.5 and dispersion parameter k = 0.1. In each experiment, we estimate averages using 10,000 samples. Double entries in the first column correspond to cases where the set of contacts is partitioned into a combination of pools of two different sizes. (DOCX)

Derivations for Dorfman testing under overdispersion (Dorf-OD).

(DOCX)

Dynamic programming algorithm.

(DOCX)

Derivations for classic Dorfman’s method (Dorf-Cl).

(DOCX) Click here for additional data file. 2 Dec 2021 Dear Mr. Tsirtsis, Thank you very much for submitting your manuscript "Pooled Testing of Traced Contacts Under Superspreading Dynamics" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments. We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation. When you are ready to resubmit, please upload the following: [1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts. Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments. Sincerely, Alex Perkins Associate Editor PLOS Computational Biology Tom Britton Deputy Editor PLOS Computational Biology *********************** Reviewer's Responses to Questions Comments to the Authors: Please note here if the review is uploaded as an attachment. Reviewer #1: The authors present an interesting new method to optimize test pooling in the context of contact tracing procedures. The work is well executed, both from a formal and experimental analysis point of view. The paper is well written and clear. Some major remarks: - My major concern with this work, is that the impact on the actual epidemic is not investigated. If I understand it correctly, temporal (e.g. viral load progression) and contact-related aspects of the epidemic are all aggregated in the clinical sensitivity/specificity (line 170). However, missing contacts, especially in an overdispersed epidemic context, could have implications on the overall attack rate. The authors mention (in the discussion), that this could be investigated through randomized control studies, yet I would argue that this could also be analysed by using an individual-based model to investigate the impact of your testing strategy? To this end, perhaps one of these individual-based models could be used [2,3,4]? - One of your assumptions, early in the paper (line 100), where you assume that perfect tracing is possible, is quite a though one. 
While I understand that this assumption is necessary for your theoretical framework, I believe it would be good to acknowledge that this is not necessarily realistic and refer to the section where you empirically challenge this assumption (line 298). Some minor remarks: - Is there a reason why you use small r instead of R_0? This makes it an easier symbol to spot, and the symbol is quite commonly used in literature. - Did you consider any additional population structures next to the contact tracing contacts (e.g., households), and how would this fit in your work? - Adjacent to this, the related work section was quite complete, however I believe that the work on household-based pooling [1] could also be interesting to discuss. - I found Figure 1 (b) a bit strange and hard to interpret at first sight. It looks strange with the negative percentages, could you perhaps show the test distributions for both methods instead next to each other? - Why is Figure 2 ragged along the y-axis? Is this related to the experimental resolution you used to build this figure? References: [1] https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008688 [2] https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009149 [3] https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009146 [4] https://www.nature.com/articles/s41467-021-21747-7 Reviewer #2: Please see attached file. Reviewer #3: The paper studies the problem of pooled testing in which the goal is identify an optimal pooling scheme that balances between testing costs and misclassification errors. The novel component is that the authors assume that arrivals are all contact traced to one person. The author model this using a truncated negative binomial distribution. The authors then show that their formulation can be solved via dynamic programming. Finally, the authors conduct an extensive numerical study in which they measure the benefits of their approach compared to classical Dorfman testing. Overall the paper is well-written and generally easy to follow. I do have some concerns about the modeling assumptions in addition to some clarifications which I list below. I recommend a major revision. - Why use a truncated normal binomial and not some other mixture model? It would be interesting to observe the impact of that on the optimization outcome. The benefit of using a mixture model is that allows for more flexibility in choosing population variance which allows for the calibration of that variance using data. - Page 4/33 line 142, what are the bounds of the summation in the objective function? The reason I ask this question is that it seems that the number of summands in the objective varies as you add more pools, which in itself is a decision in this problem. - General comment on the case study. Since you are interested in minimizing a weighted sum of expected number of tests, false-negatives, and false-positives, then why don't you report the value of that objective function in your comparison of you approach with the Dorfman scheme? - General question about the Dorfman scheme. It is not clear how this was implemented in the paper. How did you partition the set of N patients into pools to be tested using a Dorfman scheme? Maybe adding a paragraph about this in the experimental design section would help clarify this. - Would it be possible to run your by only accounting for false negatives? 
Peer Review History

Round 1 review (attached as PCOMPBIOL-D-21-01826_Review.pdf). Reviewer comment: "It would be interesting to compare your approach to Dorfman testing if the only goal is to minimize negative misclassification errors." (A sketch of this comparison follows the review history below.) All three reviewers confirmed that the data and code underlying the findings were made fully available (Reviewer #1: Yes; Reviewer #2: Yes; Reviewer #3: Yes), and all three chose to keep their identities anonymous (Reviewer #1: No; Reviewer #2: No; Reviewer #3: No).

26 Jan 2022. Revision submitted (PLOS revision response.pdf).

28 Feb 2022. Editorial decision: accept pending minor revision. The editors wrote that, based on the reviews, the manuscript was likely to be accepted for publication provided the authors addressed the one minor comment by Reviewer #1, with a revised manuscript due within 30 days. Round 2 reviewer comments:

Reviewer #1: "I thank the authors for their answers and improvements of the manuscript. One more remark: on my comment regarding the impact of the testing strategy on the epidemic, the authors state: 'That being said, we agree with the reviewer that it would be very interesting to investigate the impact of different testing methods on the epidemic under a limited testing capacity using individual-based models.' It would be good to indeed mention this in the discussion."

Reviewer #2: "The authors addressed all my comments."

Reviewer #3: "The authors addressed all of my comments and I have no further comments to add."

As in round 1, all three reviewers confirmed full data and code availability and chose to remain anonymous. Signed: Alex Perkins (Associate Editor) and Tom Britton (Deputy Editor), PLOS Computational Biology.

1 Mar 2022. Second revision submitted (PLOS revision response 2.pdf).

10 Mar 2022. Editorial decision: provisionally accepted for publication in PLOS Computational Biology, pending formatting changes; from this point onward, only corrections to spelling, formatting, or significant scientific errors were permitted.

24 Mar 2022. Manuscript PCOMPBIOL-D-21-01826R2, "Pooled Testing of Traced Contacts Under Superspreading Dynamics", formally accepted for publication and passed to production (contact: Olena Szabo, PLOS Computational Biology, ploscompbiol@plos.org).
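The round-1 reviewer comment above contrasts the paper's method with plain Dorfman testing when the sole objective is minimizing false negatives. The following minimal sketch is not the authors' code; the function name `dorfman_stats`, the parameter values, and the no-dilution assumption are illustrative. It computes the expected number of tests and expected false negatives per individual for Dorfman's two-stage method with an imperfect test, under i.i.d. Bernoulli infections with prevalence p, test sensitivity se, and specificity sp:

```python
# Minimal sketch of Dorfman's two-stage method with an imperfect test.
# Assumptions (not from the paper): i.i.d. Bernoulli(p) infection statuses,
# pooled-test sensitivity equal to the individual-test sensitivity se
# (i.e., no dilution effect), and specificity sp at both stages.

def dorfman_stats(n: int, p: float, se: float, sp: float):
    """Expected tests and false negatives per individual for pools of size n."""
    if n == 1:
        # A pool of size 1 amounts to individual testing:
        # one test, and an infected sample is missed with probability 1 - se.
        return 1.0, p * (1.0 - se)
    q = (1.0 - p) ** n                            # P(no sample in the pool is infected)
    pool_pos = se * (1.0 - q) + (1.0 - sp) * q    # P(the pooled test comes back positive)
    tests_per_ind = (1.0 + n * pool_pos) / n      # 1 pooled test, plus n retests if positive
    # An infected sample is cleared unless BOTH stages detect it (prob se * se),
    # so the per-individual false-negative rate is p * (1 - se^2) for any n >= 2.
    fn_per_ind = p * (1.0 - se ** 2)
    return tests_per_ind, fn_per_ind

if __name__ == "__main__":
    p, se, sp = 0.05, 0.9, 0.99   # illustrative values only
    for n in (1, 2, 4, 8, 16):
        tests, fn = dorfman_stats(n, p, se, sp)
        print(f"pool size {n:2d}: {tests:.3f} tests/individual, "
              f"{fn:.4f} false negatives/individual")
```

Under these assumptions, any pool of size n >= 2 has a per-individual false-negative rate of p(1 - se^2), which always exceeds the p(1 - se) of individual testing, so pooling can only increase false negatives. This is presumably why the reviewer singles out that objective, and why the paper's formulation trades off the number of tests against false negatives and false positives jointly rather than minimizing any one of them alone.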
References: 44 in total (first 10 listed below)

1.  Assessing the dilution effect of specimen pooling on the sensitivity of SARS-CoV-2 PCR tests.

Authors:  Allen C Bateman; Shanna Mueller; Kyley Guenther; Peter Shult
Journal:  J Med Virol       Date:  2020-09-16       Impact factor: 2.327

2.  False Negative Tests for SARS-CoV-2 Infection - Challenges and Implications.

Authors:  Steven Woloshin; Neeraj Patel; Aaron S Kesselheim
Journal:  N Engl J Med       Date:  2020-06-05       Impact factor: 91.245

3.  Informative Dorfman screening.

Authors:  Christopher S McMahan; Joshua M Tebbs; Christopher R Bilder
Journal:  Biometrics       Date:  2011-07-15       Impact factor: 2.571

4.  Estimating the overdispersion in COVID-19 transmission using outbreak sizes outside China.

Authors:  Akira Endo; Sam Abbott; Adam J Kucharski; Sebastian Funk
Journal:  Wellcome Open Res       Date:  2020-07-10

5.  Characterizing superspreading events and age-specific infectiousness of SARS-CoV-2 transmission in Georgia, USA.

Authors:  Max S Y Lau; Bryan Grenfell; Michael Thomas; Michael Bryan; Kristin Nelson; Ben Lopman
Journal:  Proc Natl Acad Sci U S A       Date:  2020-08-20       Impact factor: 11.205

6.  Superspreading and the effect of individual variation on disease emergence.

Authors:  J O Lloyd-Smith; S J Schreiber; P E Kopp; W M Getz
Journal:  Nature       Date:  2005-11-17       Impact factor: 49.962

7.  Simple Questionnaires to Improve Pooling Strategies for SARS-CoV-2 Laboratory Testing.

Authors:  Sophie Schneitler; Philipp Jung; Florian Bub; Farah Alhussein; Sophia Benthien; Fabian K Berger; Barbara Berkó-Göttel; Janina Eisenbeis; Daphne Hahn; Alexander Halfmann; Katharina Last; Maximilian Linxweiler; Stefan Lohse; Cihan Papan; Thorsten Pfuhl; Jürgen Rissland; Sophie Roth; Uwe Schlotthauer; Jürg Utzinger; Sigrun Smola; Barbara C Gärtner; Sören L Becker
Journal:  Ann Glob Health       Date:  2020-11-18       Impact factor: 2.462

8.  Middle East Respiratory Syndrome Coronavirus Superspreading Event Involving 81 Persons, Korea 2015.

Authors:  Myoung-don Oh; Pyoeng Gyun Choe; Hong Sang Oh; Wan Beom Park; Sang-Min Lee; Jinkyeong Park; Sang Kook Lee; Jeong-Sup Song; Nam Joong Kim
Journal:  J Korean Med Sci       Date:  2015-10-16       Impact factor: 2.153

9.  Simulation of pooled-sample analysis strategies for COVID-19 mass testing.

Authors:  Andreas Deckert; Till Bärnighausen; Nicholas Na Kyei
Journal:  Bull World Health Organ       Date:  2020-07-06       Impact factor: 9.408

10.  Large-scale implementation of pooled RNA extraction and RT-PCR for SARS-CoV-2 detection.

Authors:  R Ben-Ami; A Klochendler; M Seidel; T Sido; O Gurel-Gurevich; M Yassour; E Meshorer; G Benedek; I Fogel; E Oiknine-Djian; A Gertler; Z Rotstein; B Lavi; Y Dor; D G Wolf; M Salton; Y Drier
Journal:  Clin Microbiol Infect       Date:  2020-06-23       Impact factor: 13.310

