Literature DB >> 36121879

Bayesian estimation of community size and overlap from random subsamples.

Erik K. Johnson, Daniel B. Larremore.

Abstract

Counting the number of species, items, or genes that are shared between two groups, sets, or communities is a simple calculation when sampling is complete. However, when only partial samples are available, quantifying the overlap between two communities becomes an estimation problem. Furthermore, to calculate normalized measures of β-diversity, such as the Jaccard and Sorensen-Dice indices, one must also estimate the total sizes of the communities being compared. Previous efforts to address these problems have assumed knowledge of total community sizes and then used Bayesian methods to produce unbiased estimates with quantified uncertainty. Here, we address the case of communities of unknown size and show that doing so produces systematically better estimates, both in terms of central estimates and quantification of uncertainty in those estimates. We further show how to use species, item, or gene count data to refine estimates of community size in a Bayesian joint model of community size and overlap.


Year:  2022        PMID: 36121879      PMCID: PMC9522272          DOI: 10.1371/journal.pcbi.1010451

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.779


This is a PLOS Computational Biology Methods paper.

Introduction

Quantifying the overlap between two groups, sets, or communities is a problem in many fields, including genetics, ecology, and computer science. When the two communities are fully known, one can simply count the size of their intersection. However, when populations are only partially observed, due to a subsampling or stochastic sampling process, the community overlap problem becomes one of inference. In ecology, the relationship between the diversity in one community and another is called β-diversity [1], an idea which has led to the creation of numerous indices and coefficients that seek to quantify it. For example, the canonical Jaccard index [2] and the Sorensen-Dice coefficient [3, 4] have the appealing properties that (i) they are based only on the number of shared species, s, and the numbers of species in each community, R_a and R_b, and (ii) they take the value zero when two communities are entirely unrelated and one when the communities are identical. However, these coefficients, as well as alternatives [5], have been shown to be biased when community sampling is incomplete [6, 7]. Furthermore, they provide no measure of statistical uncertainty because they provide only point estimates.

To address these issues, improvements in the quantification of β-diversity have been made in various ways. One direction of development recognizes that the measurement of β-diversity from the presence and absence of species fundamentally relies on counting the species shared by the two communities in the context of the numbers of species in each community separately, thus cataloguing the myriad ways in which these three integers might be reasonably combined, depending on the circumstances [5]. Another set of developments has been to work with species abundance data instead of binary presence-absence measurements [8].
A third set of developments has been to place observations of both abundance and presence-absence in the context of a probabilistic sampling process [6, 7], allowing for the appropriate quantification of uncertainty through confidence intervals or credible intervals. One key feature of the β-diversity measures that quantify uncertainty is that the assumptions of their underlying statistical models must be stated explicitly. This provides transparency and also reveals assumptions which may not hold in practice. In recent work, a Bayesian approach to β-diversity estimation was introduced which provides unbiased estimates of the overlap between two stochastically sampled communities, yet this approach assumes that the two original community sizes are known a priori [7]. In practice, however, overall community sizes may be unknown, or may vary widely, making this model and others like it misspecified from the outset to an unknown degree. Thus, while incorporating appropriate uncertainty into community overlap estimation is an improvement, doing so without recognizing uncertainty or misspecification in each individual community's size may nevertheless lead to biased, overconfident, and unreliable inferences. Here, we address this problem by leveraging an additional and often available source of data in presence-absence studies: the total number of independent samples taken from each community, i.e., the sampling depth or effort. Building on the same intuition as the estimation of total species richness from a species accumulation curve [9], we introduce a model for β-diversity calculations which produces joint estimates of s, R_a, and R_b in a Bayesian statistical framework.
Posterior samples of these quantities offer solutions to the issues identified above by providing unbiased central estimates, the quantification of uncertainty via credible intervals, and the construction of Bayesian versions of the canonical Jaccard and Sorensen-Dice coefficients (as well as 20 others which are based on s, R_a, and R_b [5]). Although estimating pairwise similarity is a problem in many fields, here we present the problem in the context of estimating the genetic similarity between pairs of malaria parasites of the species Plasmodium falciparum, the most virulent of the human malaria parasites. Because terminology varies by context, in the remainder of this manuscript we use the terms community, set, and repertoire to refer to the same fundamental thing: the collection of unique species, objects, or genes, respectively, in a group of interest. Our goal in all contexts will be to estimate the number of shared species, objects, or genes, and to simultaneously estimate the sizes of each of the two communities, sets, or repertoires being compared.

P. falciparum repertoire overlap problem

During the blood stage of malaria, P. falciparum parasites replicate inside erythrocytes and export a protein, called Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), to the erythrocyte surface. There, PfEMP1 allows the infected erythrocyte to bind to human endothelial cells, facilitating the sequestration of the infected erythrocyte away from free circulation. PfEMP1 proteins are encoded by the var gene family, and due to this important role, var genes have been widely studied and linked to malaria's virulence and duration of infection [10-14]. Rather than a single var gene (and thus a single PfEMP1), each P. falciparum genome contains a repertoire of hypervariable and mutually distinct var genes [15]. The var genes differ within and between parasites due to rapid recombination and reassortment [16, 17]. This variability in var genes, and thus in PfEMP1, facilitates immune evasion while preserving the ability to bind to different types of endothelial receptors. Critically, the number of var genes found in each parasite's repertoire varies considerably [18]. For instance, the reference parasite 3D7 has been measured to have 58 var genes [15], while the Dd2 and RAJ116 parasites have 48 and 39, respectively [19]. Studies of P. falciparum epidemiology and evolution have generated insights by comparing var repertoires between parasites through β-diversity calculations [20-27]. Theory suggests that if a human population has been exposed to particular var genes, then repertoires containing those var genes will have lower fitness than repertoires that are entirely unrecognized by local hosts, shaping the var population structure [23-25, 28-30]. Thus, these linked immunological, epidemiological, and evolutionary questions require careful consideration of the methods by which we estimate the extent to which var repertoires overlap.
However, traditional estimates of overlap between var repertoires suffer bias due to subsampling, mirroring similar observations for β-diversity measures more broadly [6]. Due to the massive diversity and recombinant structure of var genes, var studies typically use degenerate PCR primers to target a small "tag" sequence within a single var domain called DBLα [31]. These DBLα tags are widely used to study the structure and function of var genes [13, 20, 23, 31-36], but due to limited resources and/or time, DBLα PCR data are typically a random subsample from each parasite's var repertoire. These PCR-based subsampling procedures therefore produce both presence-absence data for various var types and counts reflecting the number of times each present var was observed. In this context, repertoire overlap is typically called pairwise type sharing [20] and is often quantified by the Sorensen-Dice coefficient,

PTS_ab = 2 n_ab / (n_a + n_b),

where n_a and n_b are the numbers of unique var types sampled from parasites a and b, respectively, and n_ab is the number of sampled types shared by both parasites (i.e., the empirical overlap). When repertoires are not fully sampled (as is overwhelmingly the case in existing studies [20-23, 25, 26]), the Sorensen-Dice coefficient underestimates the true overlap between repertoires. Problematically, this downward bias increases as n_a and n_b decrease [6, 7], which prevents direct comparisons between study sites with different sampling depths. The methods introduced in this paper, while targeted more broadly at the development of β-diversity quantification, are developed in the particular context of this P. falciparum repertoire overlap problem.
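To make the empirical estimator and its downward bias concrete, here is a minimal Python sketch of the Sorensen-Dice coefficient computed from random subsamples; the function and variable names are ours for illustration, not from any published code.

```python
import random

def empirical_sorensen_dice(sample_a, sample_b):
    """2 * n_ab / (n_a + n_b), computed on the observed unique types."""
    types_a, types_b = set(sample_a), set(sample_b)
    n_a, n_b = len(types_a), len(types_b)
    n_ab = len(types_a & types_b)
    return 2 * n_ab / (n_a + n_b)

# Two hypothetical repertoires of 50 genes each, sharing exactly 30 genes,
# so the true Sorensen-Dice coefficient is 2*30/(50+50) = 0.6.
shared = list(range(30))
repertoire_a = shared + list(range(100, 120))
repertoire_b = shared + list(range(200, 220))

# Shallow sampling with replacement, as in DBLa-tag PCR subsampling.
rng = random.Random(0)
subsample_a = [rng.choice(repertoire_a) for _ in range(25)]
subsample_b = [rng.choice(repertoire_b) for _ in range(25)]
estimate = empirical_sorensen_dice(subsample_a, subsample_b)
```

With complete sampling the function recovers the true value (0.6 here); under shallow subsampling the estimate is, on average, biased downward, which is exactly the problem the Bayesian model in this paper addresses.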

Methods

Setup

Our method for inferring overlap is based on two key observations. First, not all repertoires are the same size, but information about a repertoire's size can be gleaned from the rate at which more samples identify new repertoire elements [9]. Second, the observed overlap n_ab is a realization of a stochastic sampling process which depends not only on the true overlap but also on the true repertoire sizes. These observations lead us to use a hierarchical Bayesian approach (Fig 1).
Fig 1

Diagram of the model.

Two repertoire sizes, R_a and R_b, are generated by their priors. The overlap between the repertoires, s, is then generated by the prior on the overlap given the repertoire sizes. The repertoire sizes and overlap define the two parasites, a and b, from which we sample. Sampling m_a items with replacement from parasite a produces count data C_a, consisting of the genes sampled from parasite a and counts per gene. Sampling m_b items with replacement from parasite b produces count data C_b, consisting of the genes sampled from parasite b and counts per gene.

In brief, we model the stochastic process that generates the observed presence-absence data (n_a, n_b, and n_ab), which can be derived from the observed sample counts (i.e., observed abundances, C_a and C_b), from two parasites with repertoire sizes R_a and R_b and overlap s. The core of this stochastic sampling process is the assumption that sampling from each repertoire is done independently, uniformly at random, and with replacement, corresponding to PCR of var gDNA without substantial primer bias. From this model, we compute the joint posterior distribution of the unknown parameters s, R_a, and R_b. With this joint posterior distribution, p(s, R_a, R_b ∣ C_a, C_b), we can produce unbiased a posteriori point estimates of the repertoire sizes and overlap, and can quantify uncertainty in these point estimates via credible intervals. In the detailed methods that follow, we describe our choice of priors over the three parameters s, R_a, and R_b, derive the model likelihood, and review the steps required to make calculations efficient. An open-source implementation of these methods is freely available (see Code Availability statement).

Choice of prior distributions

Due to extensive sequencing and assembly efforts [18], the repertoire sizes for thousands of P. falciparum parasites have been characterized, leading us to choose a data-informed prior distribution for repertoire sizes R_a and R_b. We assume an informative Poisson prior for R_a and R_b, fit to the repertoire sizes from 2398 parasite isolates published by Otto et al. [18]. For β-diversity studies outside of P. falciparum, alternative informative priors can be chosen. Because the repertoire overlap s can take values between 0 and min{R_a, R_b}, we use an uninformative (uniform) prior for the repertoire overlap s,

p(s ∣ R_a, R_b) = 1 / (min{R_a, R_b} + 1),  for s ∈ {0, 1, …, min{R_a, R_b}}.
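These two prior choices can be sketched in a few lines of Python. The Poisson rate below is an illustrative placeholder; in the paper the rate is fit to the repertoire sizes of Otto et al. [18].

```python
import math

def repertoire_size_prior(R, lam=60.0):
    """Poisson prior p(R) on repertoire size.
    `lam` is an illustrative rate; in practice it is fit to data.
    Computed in log space to stay stable for large R."""
    return math.exp(-lam + R * math.log(lam) - math.lgamma(R + 1))

def overlap_prior(s, R_a, R_b):
    """Uniform prior over the feasible overlaps s in {0, ..., min(R_a, R_b)}."""
    upper = min(R_a, R_b)
    return 1.0 / (upper + 1) if 0 <= s <= upper else 0.0
```

Both functions return normalized probabilities: the overlap prior sums to one over its feasible range, and the Poisson prior sums to one over the nonnegative integers (up to negligible tail mass).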

Computing the joint posterior distribution p(s, R_a, R_b ∣ C_a, C_b)

The posterior distribution of the parameters given the count data is a product of three terms,

p(s, R_a, R_b ∣ C_a, C_b) = p(s ∣ n_a, n_b, n_ab, R_a, R_b) p(R_a ∣ C_a) p(R_b ∣ C_b),   (2)

a calculation shown in detail in S1 Text. The rest of this section is devoted to computing each of these terms, noting that the last two are mathematically identical but derived from different data.

To compute p(R ∣ C), the distribution of repertoire size given count data for a fixed but arbitrary total sampling effort m = m_a = m_b, we first calculate the likelihood of observing count data C given a repertoire size R, i.e., p(C ∣ R). Knowing how to compute p(C ∣ R) allows us to calculate p(R ∣ C) via Bayes' rule,

p(R ∣ C) = p(C ∣ R) p(R) / Σ_{R′} p(C ∣ R′) p(R′),   (3)

where p(R) is the prior on repertoire size and the sum in the denominator is computed over the support of p(R). For the unbounded support of the Poisson prior used here, we restrict the sum to only those terms above the numerical precision of the computer. In S2 Text, we prove that

p(C ∣ R) = [R! / (R − n)!] × [m! / (c_1! c_2! ⋯ c_n!)] × [1 / (f_1! f_2! ⋯ f_Q!)] × R^(−m),

where the c_i are the number of times each of the n sampled var types was observed and the f_q are the multiplicities of the Q unique numbers among the c_i. For instance, suppose the count data consist of five unique var types with counts (c_1, …, c_5) = (1, 1, 2, 2, 3); then there are three (Q = 3) unique numbers among the c_i: 1, 2, and 3. Further, 1's multiplicity in {1, 1, 2, 2, 3} is 2, 2's is 2, and 3's is 1, so (f_1, f_2, f_3) = (2, 2, 1). With the likelihood p(C ∣ R) in hand, it is straightforward to calculate the posterior p(R ∣ C) via Eq (3), and thus the second and third terms in Eq (2).

Conveniently, the remaining term of Eq (2), p(s ∣ n_a, n_b, n_ab, R_a, R_b), has been derived in the literature [7], but only under the restriction that R_a = R_b = 60. We therefore rederive this quantity for general but fixed R_a and R_b, summarizing the main steps here. Using Bayes' rule, we can write

p(s ∣ n_a, n_b, n_ab, R_a, R_b) ∝ p(n_ab ∣ n_a, n_b, s, R_a, R_b) p(s ∣ R_a, R_b),   (6)

where p(s ∣ R_a, R_b) is the user-specified prior described above.
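The closed-form likelihood p(C ∣ R) proved in S2 Text can be transcribed directly into Python. This is a sketch reflecting our reading of that result, not the paper's reference implementation; exact integer arithmetic is used for the factorial products.

```python
import math
from collections import Counter

def count_likelihood(counts, R):
    """p(C | R): probability of observing the count histogram `counts`
    (one entry per distinct gene; c_i = times that gene was drawn) after
    m = sum(counts) uniform draws with replacement from R equally likely
    genes."""
    n = len(counts)                  # distinct genes observed
    m = sum(counts)                  # total sampling effort
    if R < n:
        return 0.0                   # cannot observe more distinct genes than exist
    ways_genes = math.factorial(R) // math.factorial(R - n)   # R!/(R-n)!
    orderings = math.factorial(m)                             # m!/prod(c_i!)
    for c in counts:
        orderings //= math.factorial(c)
    mult = 1                                                  # prod(f_q!)
    for f in Counter(counts).values():
        mult *= math.factorial(f)
    return ways_genes * orderings / (mult * R**m)
```

As a sanity check, for a repertoire of R = 2 genes sampled m = 2 times, the two possible histograms (one gene seen twice, or two genes seen once each) occur with probability 1/2 each, and the function reproduces this.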
The other term, p(n_ab ∣ n_a, n_b, s, R_a, R_b), can be computed by considering the probability that two subsets of size n_a and n_b will have an intersection of size n_ab, given that they have been drawn uniformly from sets of total size R_a and R_b whose intersection is size s. To do so, we use the hypergeometric distribution, Hypergeometric(R, s, n), which is the distribution of the number of "special" objects drawn after n uniform draws without replacement from a set of R objects, s of which are "special." With this distribution in mind, note that observing n_ab shared var genes can be thought of as a two-step process. First, draw n_a var genes from parasite a's R_a total, in which s are special because they are shared with parasite b. The number of shared vars drawn is a random variable s′ ∼ Hypergeometric(R_a, s, n_a). Second, draw n_b genes from parasite b's R_b total, in which s′ are special because they are shared by both parasites and were drawn from parasite a. The number of shared vars captured after sampling from both parasites, n_ab, will be distributed according to Hypergeometric(R_b, s′, n_b). To generate a particular empirical overlap n_ab, first step 1 must happen and then, independently, step 2 must happen. We therefore multiply these two hypergeometric probabilities. However, because these two steps may occur for any value of the intermediate variable s′, we sum over all possible values of s′:

p(n_ab ∣ n_a, n_b, s, R_a, R_b) = Σ_{s′} Hypergeometric(s′; R_a, s, n_a) × Hypergeometric(n_ab; R_b, s′, n_b).

Plugging this into Eq (6) allows us to compute p(s ∣ n_a, n_b, n_ab, R_a, R_b).
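The two-step hypergeometric construction can be written out directly; this sketch uses a hand-rolled hypergeometric pmf and marginalizes over the intermediate variable s′ (function names are ours).

```python
import math

def hypergeom_pmf(k, N, K, n):
    """P(k special objects) after n draws without replacement from
    N objects, K of which are special."""
    if k < 0 or k > min(K, n) or (n - k) > (N - K):
        return 0.0
    return math.comb(K, k) * math.comb(N - K, n - k) / math.comb(N, n)

def overlap_likelihood(n_ab, n_a, n_b, s, R_a, R_b):
    """p(n_ab | n_a, n_b, s, R_a, R_b): sum over the intermediate s',
    the number of shared genes captured in the sample from parasite a."""
    return sum(
        hypergeom_pmf(s_prime, R_a, s, n_a)       # step 1: sample from a
        * hypergeom_pmf(n_ab, R_b, s_prime, n_b)  # step 2: sample from b
        for s_prime in range(min(s, n_a) + 1)
    )
```

Because each hypergeometric factor is a proper pmf, the resulting distribution over n_ab sums to one, and a true overlap of s = 0 forces n_ab = 0 with certainty.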

Inference method summary

We now have all the pieces in place to compute p(s, R_a, R_b ∣ C_a, C_b):

p(s, R_a, R_b ∣ C_a, C_b) ∝ p(s ∣ R_a, R_b) p(R_a) p(R_b) × p(n_ab ∣ n_a, n_b, s, R_a, R_b) p(C_a ∣ R_a) p(C_b ∣ R_b),   (9)

where the first three terms are the user-specified priors. With this joint posterior distribution, we can compute unbiased Bayesian estimates of s, R_a, and R_b as expectations over the posterior:

ŝ = E[s ∣ C_a, C_b],  R̂_a = E[R_a ∣ C_a, C_b],  R̂_b = E[R_b ∣ C_a, C_b].   (10-12)

Moreover, and importantly, we can compute unbiased Bayesian estimates of any functional combination of s, R_a, and R_b, such as Bayesian versions of the Jaccard index [2], the Sorensen-Dice coefficient [4], other coefficients based on s, R_a, and R_b [5], and the directional pairwise-type-sharing measures of He et al. [29]. For all of these measures, in addition to the point estimates, the ability to draw from the joint posterior distribution Eq (9) enables one to compute credible intervals to quantify uncertainty.
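Once the joint posterior has been evaluated on a finite grid, the posterior-mean point estimates are simple expectations. A minimal sketch, using a dict-of-tuples representation of the gridded posterior that is purely illustrative (not the paper's API):

```python
def posterior_expectations(posterior):
    """Posterior-mean estimates of s, R_a, and R_b from a joint posterior
    given as {(s, R_a, R_b): probability} over a finite grid."""
    s_hat = sum(p * s for (s, _, _), p in posterior.items())
    Ra_hat = sum(p * Ra for (_, Ra, _), p in posterior.items())
    Rb_hat = sum(p * Rb for (_, _, Rb), p in posterior.items())
    return s_hat, Ra_hat, Rb_hat
```

The same pattern extends to any function of (s, R_a, R_b): replace the summand with the function's value at each grid point, weighted by its posterior probability.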

Generation of simulated data

To facilitate numerical experiments in which we tested our inference method's ability to recover accurate estimates of s, R_a, and R_b, we generated synthetic data via simulation as follows. First, we selected a value of overlap s between 0 and 70, so that analyses could be stratified according to overlap. Next, we drew repertoire sizes R_a and R_b independently from the prior distribution, ensuring that R_a ≥ s and R_b ≥ s by redrawing as necessary. Next, we drew from the model (Fig 1) a set of m_a and m_b samples from repertoires of sizes R_a and R_b, respectively, with specified overlap s, to generate count data histograms C_a and C_b. This procedure stochastically created synthetic count data for a specified overlap s and sampling depth m = m_a = m_b, allowing us to test our method's accuracy and uncertainty quantification under various scenarios.
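The generative step of this procedure can be sketched as follows; the integer gene labels and the function name are illustrative choices of ours.

```python
import random
from collections import Counter

def simulate_count_data(s, R_a, R_b, m, seed=0):
    """Sample m genes with replacement from each of two repertoires of
    sizes R_a and R_b that share exactly s genes; return the two count
    histograms (gene label -> times observed)."""
    rng = random.Random(seed)
    repertoire_a = list(range(R_a))                           # genes 0..R_a-1
    repertoire_b = list(range(s)) + list(range(R_a, R_a + R_b - s))
    C_a = Counter(rng.choice(repertoire_a) for _ in range(m))
    C_b = Counter(rng.choice(repertoire_b) for _ in range(m))
    return C_a, C_b
```

By construction, the observed overlap len(set(C_a) & set(C_b)) can never exceed the true overlap s, and it approaches s only as the sampling effort m grows.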

Results

Inference

We first investigated how increasing the total number of independent samples improves our ability to correctly estimate the total size of a single repertoire (or, generally, community), by which we specifically mean the number of unique constituent genes (or, generically, species or objects). To do so, we conducted numerical experiments in which we presumed a repertoire size and then simulated samples from it to produce count data. An example of such an experiment shows how posterior estimates approach the true repertoire size as sampling effort increases (Fig 2). Here, because we focus on a single repertoire in isolation, we drop the a and b subscripts for the moment, referring simply to sampling effort m, repertoire size R, and count data C.
Fig 2

Repertoire size posterior estimates improve with increased sampling effort.

For a single repertoire with true size R = 52, the posterior distribution p(R ∣ C) is plotted for different sampling efforts m (see legend). For each value of m, count data C were generated by drawing m genes uniformly with replacement from a repertoire of 52 genes. As sampling effort increases, the posterior p(R ∣ C) concentrates around the true repertoire size 52. The m = 0 curve is the Poisson prior on repertoire size, p(R).

This experiment illustrates two related points. First, there is valuable information in knowing the total sampling effort m, even if some samples were duplicate observations of previously observed genes, simply because those sample counts inform repertoire size estimates. Second, increasing the sampling effort concentrates p(R ∣ C) around the true repertoire size, concretely linking sampling effort to estimation of not only repertoire size but, through decreased uncertainty, eventual overlap estimates as well.

Next, we examined in two steps whether the estimates ŝ, R̂_a, and R̂_b in Eqs (10)-(12) are accurate across a range of sampling efforts m. First, we simulated the sampling process for various values of s, R_a, and R_b to produce synthetic count data C_a and C_b with varying levels of overlap between the observed samples. Then, we evaluated our ability to recover s, R_a, and R_b by applying Eqs (10)-(12) to the synthetic data. We found that the overlap and repertoire estimates accurately reproduce the true parameter values, provided that sampling effort is sufficiently large. Furthermore, as sampling effort increases, estimates become increasingly accurate (Fig 3).
Fig 3

Accuracy of estimates across a range of true parameter values and sampling efforts.

For each overlap value s between 0 and 70, we performed three independent simulations to generate synthetic count data (Methods). Estimates of s (A,B,C) and R_a (D,E,F) from the resulting count data, using our statistical model, are shown. Estimates are shown for sampling efforts m = m_a = m_b = 50, 96, 192 across the left, middle, and right columns, respectively. Dashed black lines represent perfect unbiased inference.

However, we also observed that when the sampling effort is small but repertoires are large and highly overlapping (e.g., m = 50 and s > 50), ŝ underestimates the true values (Fig 3A). This phenomenon is due to a more general property of Bayesian inference: when there are fewer samples from which to infer, the prior distribution exerts a stronger effect on inferences. Here, the Poisson prior over repertoire sizes assigns low probability to repertoire sizes as large as 70 (p(R ≥ 70) = 0.03), and thus, in the absence of a sampling effort large enough to overwhelm that prior, surprisingly large repertoire sizes and overlaps require substantially more samples to establish. In real data from P. falciparum, repertoires (and thus repertoire overlaps) larger than 60 are rarely observed [18, 26], decreasing the potential impact of this issue for the study of repertoire overlap between individual parasites (though not for the study of overlap between infections containing multiple parasites; see Discussion).

Uncertainty

Bayesian methods also allow us to quantify uncertainty via credible intervals (CIs). To measure how well our CIs capture the true parameter values, we computed 95% highest density posterior intervals for parameter estimates in simulated data, where true values were known. As expected, uncertainty decreased as sampling effort increased, and approximately 95% of the 95% CIs captured the true parameter values, as designed (Fig 4). For instance, for sampling efforts of m = 50, m = 96, and m = 192, the proportions of the 95% CIs containing the true s were 0.975, 0.975, and 0.965, respectively. For the same three sampling efforts, the proportions of the 95% CIs that contained the true repertoire size R were 0.920, 0.950, and 0.955, respectively.
Fig 4

Credible intervals quantify uncertainty in overlap estimates.

For each overlap value s between 0 and 70, we performed one simulation to generate synthetic count data (Methods). Estimates from the resulting count data, using our statistical model, of s (A,B,C) and error in R_a and R_b (D,E,F) are shown. Estimates (dots) and 95% credible intervals (lines) are shown for sampling efforts m = 50, 96, 192 in the left, middle, and right columns, respectively.


Improving β-diversity indices

Over 20 different indices of β-diversity have been proposed which algebraically combine empirical estimates of R_a, R_b, and s [5], including the well-known Jaccard index and the Sorensen-Dice coefficient. The Sorensen-Dice coefficient is defined as the ratio of repertoire overlap to the average of the repertoire sizes,

S = 2s / (R_a + R_b).   (13)

Typically, in the absence of more sophisticated estimates of R_a, R_b, and s, empirical values are used,

Ŝ_emp = 2 n_ab / (n_a + n_b).   (14)

However, the joint posterior distribution Eq (9) over s, R_a, and R_b opens the door to a Bayesian reformulation of the Sorensen-Dice coefficient as

Ŝ_Bayes = E[ 2s / (R_a + R_b) ∣ C_a, C_b ],   (15)

with similar generalizations for the Jaccard coefficient or other combinations of s, R_a, and R_b [5]. This Bayesian Sorensen-Dice coefficient averages the values of the typical Sorensen-Dice coefficient over joint posterior estimates of s, R_a, and R_b.

We investigated the performance of the Bayesian Sorensen-Dice coefficient and its empirical counterpart by once more simulating the sampling process under known conditions and applying both formulas. As in our estimates of repertoire overlap, we again found that the Bayesian Sorensen-Dice estimates are consistent and unbiased with correct quantification of uncertainty via credible intervals (Fig 5), except when sampling effort is low (m = 50) while true repertoire overlap is extremely high (s > 50). Furthermore, the Bayesian estimates track the true Sorensen-Dice values better than direct empirical estimates across overlap values and sampling efforts; direct empirical estimates are biased increasingly downward as sampling effort decreases and as true overlap increases (Fig 5). While this illustrates how the Bayesian framework herein may be used to improve classical and commonly used estimators via Eq (15), an identical approach may be used to compute Bayesian Jaccard coefficients or other algebraic combinations of s, R_a, and R_b [5].
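Given any gridded representation of the joint posterior, the Bayesian Sorensen-Dice coefficient is a one-line expectation. A sketch, again using an illustrative dict-of-tuples posterior layout of our own choosing:

```python
def bayesian_sorensen_dice(posterior):
    """E[2s / (R_a + R_b)] over a joint posterior given as
    {(s, R_a, R_b): probability}."""
    return sum(p * 2 * s / (Ra + Rb) for (s, Ra, Rb), p in posterior.items())
```

A Bayesian Jaccard coefficient follows the same pattern with the summand replaced by p * s / (Ra + Rb - s).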
Fig 5

Bayesian vs empirical Sorensen-Dice estimates.

For each overlap value s between 0 and 70, we performed one independent simulation to generate synthetic count data (Methods) and estimated the Sorensen-Dice coefficient using estimates from our Bayesian framework as well as from the raw empirical data. The error in the Bayesian Sorensen-Dice estimate (Eq (15)) and accompanying 95% credible intervals are shown. The often-used empirical Sorensen-Dice estimate (Eq (14)) is also shown. The dashed black line at 0 represents the true Sorensen-Dice coefficient (Eq (13)).


Sample size calculations

Sample size calculations ask how many samples are needed to produce eventual estimates with a pre-specified level of (or upper bound on) statistical uncertainty. Such questions, while critical in the ethical study of human subjects, are also important when budgeting for studies in which additional samples require time, reagents, and funding. To assist in sample size calculations, we used simulations to quantify the relationship between increases in sampling effort and decreases in the typical width of the credible interval around the repertoire overlap estimate ŝ (Eq (10)). For many pairs of overlap and sampling effort, (s, m), we performed 300 independent replicates in which we generated synthetic data, computed the posterior distribution for s, and calculated the width of the 95% CI. We found that, as expected, increased sampling effort leads to decreased uncertainty across all values of overlap s (Fig 6). However, we also found that overlap plays a role as well, with larger overlap producing wider CIs. For instance, after m = 200 samples, a CI for overlap s = 70 is typically of width 8, while a CI for overlap s = 30 is typically of width 4. After m = 300 samples from each repertoire, median CI widths are 4 or lower for all overlap values. In short, it is easier to show with high confidence that two samples do not overlap than to show that they are highly overlapping.
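For planning purposes, the CI-width computation on a discrete posterior can be sketched as follows. Note this uses a simple central credible interval, a stand-in for the highest-density intervals used in the paper's figures; the name is ours.

```python
def central_ci_width(pmf, mass=0.95):
    """Width of a central credible interval for a discrete pmf given as
    {value: probability}: trim (1 - mass)/2 of probability from each tail."""
    tail = (1.0 - mass) / 2.0
    values = sorted(pmf)
    cdf = 0.0
    for lo in values:                 # lower bound: first value with cdf > tail
        cdf += pmf[lo]
        if cdf > tail:
            break
    cdf = 0.0
    for hi in reversed(values):       # upper bound: last value with upper-tail mass > tail
        cdf += pmf[hi]
        if cdf > tail:
            break
    return hi - lo
```

Repeating this over many simulated posteriors for a given (s, m) pair and taking the median reproduces the kind of curve shown in Fig 6.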
Fig 6

Quantifying the decrease in uncertainty from increased sequencing.

Constant-s curves show the median 95% credible interval (CI) width for the estimate ŝ as a function of the sampling effort m = m_a = m_b. For each (s, m) pair, the median is taken across 300 count-data generation simulations. This plot illustrates the intuition that additional laboratory effort (increasing m) leads to higher accuracy (smaller CIs).


Discussion

This manuscript presents a Bayesian solution to estimating the overlap between two communities, repertoires, or sets when only subsamples are available. Importantly, because the total community sizes bear on the inference of overlap, this method jointly estimates community sizes and overlap from the quantitative accumulation of evidence, improving inferences. Samples from the joint posterior distribution can be used to quantify uncertainty via credible intervals, or can be used in Bayesian versions of the Jaccard index, Sorensen-Dice coefficient, and other algebraic combinations of set sizes and intersections. By showing how the inclusion of total sampling effort can improve inferences, this study demonstrates the value of recording and reporting not only presence-absence but abundance as well, even when the true abundances are uniformly equal, as in the study of P. falciparum's var gene families.

In addition to the analysis of existing data, this approach can also be used prospectively to perform sample size calculations. Importantly, context-specific sample sizes can be estimated by including additional information in the Bayesian prior. For instance, in the context of malaria's var genes, it is known that parasites from South America tend to have smaller repertoires [37, 38] than samples from other regions [18], information which can be expressed through the prior distribution to influence (and in this case, decrease) sampling needs. Because additional sampling has financial and complexity costs, this allows researchers to weigh accuracy requirements against laboratory costs in the context of a particular study.

Beyond the study of P. falciparum, the approach introduced in this work sits between two existing classes of β-diversity measures in the ecology literature. One class of methods measures β-diversity in terms of species presence or absence [5], while the other further includes species abundance [6].
The present work uses abundance measurements (which we call count data) in order to improve presence-absence-based β-diversity estimates, but does not construct abundance-based similarity measures per se. By drawing inferences from both, this work also aligns with past efforts which rely in principle on the idea that one may draw inferences both from what is observed and from what is not observed [6, 7]. The tradeoffs for improved inferences are twofold. First, our approach requires abundance data (i.e., count data C_a and C_b) instead of presence/absence totals n_a, n_b, and n_ab. This limits the retrospective analysis of past work or meta-analyses to only those studies that meet a greater data-sharing burden. However, we also note that, as proven in S2 Text, full count data are not necessary: the posterior p(s, R_a, R_b ∣ C_a, C_b) can still be computed exactly when only the sampling efforts (m_a and m_b) and the presence/absence values (n_a, n_b, and n_ab) are known. The second tradeoff for improved inference is that one must specify a prior distribution for the total community sizes. In the case of the var gene repertoires of P. falciparum, data-informed prior distributions can be created from either global [18] or local [38] estimates. In this light, one may view past work on Bayesian methods for repertoire overlap [7, 24] as specifying point priors at a particular fixed repertoire size. In general, the choice of an appropriate prior is left to the user, which may require users to make explicit their prior beliefs about community size. There are limitations to our approach which relate to our assumptions about the sampling process that generates the count data. Specifically, we have assumed throughout this work that each time a new sample is generated, this sample is drawn independently and uniformly from a population in which unique genes, species, or objects are identically represented.
Thus, unlike abundance-based measures [6], which assume that some species are more likely to be sampled than others, we assumed each species' selection is equiprobable. In the sampling of var gene sequences, for instance, methodological artifacts such as PCR primer bias may cause non-uniform sampling. One avenue for future work could be to extend our rigorous probabilistic modeling to the non-uniform sampling regime. Another limitation, particularly for the study of P. falciparum, is that bulk sequencing methods may sample from multiple distinct parasite genomes when an individual's multiplicity of infection (MOI) is greater than one. Unfortunately, even if MOI is known, it is unclear how one should alter the prior P(R) for samples from that individual, because the two or more parasite genomes within a single host may themselves overlap to an unspecified degree. This may be possible to address with further assumptions and associated priors in future work, but as a consequence, the methods presented here are valid for the analysis of P. falciparum only when MOI equals one.
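The assumed sampling process is easy to state concretely. The sketch below (illustrative only, with hypothetical parameter values) draws m times independently and uniformly with replacement, so every gene or species is equally likely at every draw, exactly the equiprobable regime described above.

```python
import random

def sample_counts(R, m, rng=random):
    """Simulate m independent, uniform draws (with replacement) from a
    community of R identically represented items; return the observed
    count data as a mapping item -> number of times sampled."""
    counts = {}
    for _ in range(m):
        item = rng.randrange(R)  # equiprobable selection, per the model
        counts[item] = counts.get(item, 0) + 1
    return counts
```

The number of distinct keys, n = len(counts), is all that presence/absence methods retain, while the full dictionary is the count data that this approach additionally exploits. Modeling non-uniform sampling (e.g., PCR primer bias) would mean replacing the uniform `rng.randrange(R)` with a weighted draw, which is outside the scope of the present model.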

Factorization of the joint posterior distribution.


Theorems enabling efficient computations.

25 Oct 2021

Dear Dr. Larremore,

Thank you very much for submitting your manuscript "Bayesian estimation of population size and overlap from random subsamples" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly revised version that takes into account the reviewers' comments.

Both reviewers are overall very positive about the manuscript. One important point, raised by Reviewer #1, concerns the overlap of material with Larremore, PLoS CB (2019). Some sentences are perfectly overlapping (e.g. "Of the diverse multigene families of P. falciparum, the var family is the most heavily studied because of its direct links to both malaria's duration of infection and its virulence"). The overlap seems to be limited to introductory parts, and therefore does not affect the originality of the results. I suggest, however, rephrasing those parts that overlap strongly with the previous paper. More importantly, as raised by both reviewers, the code should be made available at review stage.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments. Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Jacopo Grilli, Associate Editor, PLOS Computational Biology
Nina Fefferman, Deputy Editor, PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.
Reviewer #1: This study looks at the issue of estimating the overlap between two sets when only partial samples are available from each, and - unlike previous work (Larremore 2019) - when the true repertoire sizes of both sets are unknown. The authors take a statistical modelling approach, starting by constructing a generative model and then describing how this can be used to calculate likelihoods and perform Bayesian inference. The final method is demonstrated to have nice properties, including reducing bias in cases where there is sufficient data to overwhelm the prior, and of course having a measure of uncertainty that is missing from traditional point estimates.

Overall I think this is a strong paper. The method is very clearly described, and the general gist of the method could be followed even by those without a strong statistical background. There are clear advantages to the Bayesian formulation of the problem, and the extension presented here opens the previously described method up to a wider class of problems. The general nature of this problem also means it could find application in many areas.

Other than some very minor points (see below), my only major criticism is around code availability. It is stated in the paper that an open-source implementation of the methods is freely available, and we're pointed to the code availability statement, which points us to a GitHub repository, which appears to be basically empty (I could just see a LICENSE, an empty README and a .gitignore at time of review). Given the great effort the authors have gone to to ensure there are no unnecessary mathematical obstacles to the user, it seems a shame not to have a nicely packaged tool that can be used out of the box.

There is some pretty heavy re-use of material from the earlier 2019 paper (e.g. the P. falciparum section). I didn't calculate the overlap between these two sets of text, but in some cases I suspect it's upwards of 90%, depending on the estimation method used. I leave it to the editor to decide how big of a problem this is for this particular journal. My recommendation is to accept with minor changes, which include the points below and also tidying up the code availability.

Minor points:

1. In the description immediately before formula (A6), consider using j to index the second series (i.e. "let f_j be the number of times u_j appears in C"). Otherwise it gives the sense that f_i somehow corresponds to c_i, when in fact these are different indices.

2. Typo: In formula (A7) a central dot is used as shorthand for multiplication, and also to mean the sequence of all values from c_2 to c_n. I think the latter should be a centre-justified ellipsis.

3. The derivation of the formula p(R | n), i.e. using just the number of unique groups and not complete counts, is mathematically correct but overly complicated and probably unnecessary. The distribution of unique items in a multinomial sample is a reasonably standard expression that has been described before; see for example (https://arxiv.org/abs/1602.05822), section "The distribution of unique items". Citing this common result would simplify the derivation here and avoid accidentally taking credit for past work.

4. In the "Inference" section, there is the statement "The true value of R is always contained within the inferred distributions". This might be picky, but I don't really like this statement. It's not clear what "contained within" really means here, as even something in the 99.99th centile is contained within the distribution. Something along the lines of "confidence increases with increasing m" seems more appropriate.

5. Grammatical mistake in the line "By drawing inferences both from this work also aligns with past efforts".

Reviewer #2: Johnson & Larremore proposed a new Bayesian estimation of beta-diversity that jointly estimates population sizes and the overlap.
They showed that the estimates are unbiased when sampling efforts are large enough compared to the population sizes. This is an extension of Larremore (2019) PLoS Comp Biol, where population sizes are a fixed known quantity. In general, I think the paper is clearly written, including the results. I still have several questions though regarding the definition and the scope of application.

1) It seems that population size and repertoire size are used interchangeably in the article when describing R. However, what R really refers to is not population size, but the number of species in the population (as defined in the introduction). In ecology and evolution, population size has a very different meaning, usually in the order of 10^4-10^7. Clearly, in this article, R really refers to different species/genes per population. I think the authors should be consistent in their definition of R, and make it relevant to ecologists.

2) It would be nice if the proposed measure can be applied with data from published papers to check how SD_bayesian performs, and whether it changes any conclusions from the studies. It would be better if it's applied to ecological data where beta-diversity is calculated.

3) On page 8, the authors stated that "In real data from P. falciparum, repertoires (and thus repertoire overlaps) larger than 60 are rarely observed [28, 15], decreasing the potential impact of this issue". However, many patients contain multiple infections of parasites, where the total isolate size is much larger than 60. It is worthwhile to discuss how that would influence the overlap estimation when the number of infections per isolate is unknown.

4) R = Ra+Rb, and m = ma+mb, or m = ma = mb? It is not clear to me from the manuscript how the difference between Ra, Rb or ma/mb would influence the results.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code - e.g. participant privacy or use of data from a third party - those must be specified.

Reviewer #1: No: Code availability points to a GitHub repository, which is empty at time of review and hence the code is not made available with the paper. This may be a simple mistake, for example updating the repository on a local branch but failing to push to main.

Reviewer #2: No: The link to the github repo does not contain any codes yet.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at .
Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

13 Apr 2022

Submitted filename: Revisions - Johson & Larremore - Bayesian - PLOS Comp Biol.pdf

28 Jul 2022

Dear Dr. Larremore,

We are pleased to inform you that your manuscript 'Bayesian estimation of community size and overlap from random subsamples' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note also that reviewer 1 noted an incorrect link to GitHub; please fix this. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Jacopo Grilli, Associate Editor, PLOS Computational Biology
Nina Fefferman, Deputy Editor, PLOS Computational Biology
Jason A. Papin, Editor-in-Chief, PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: I thank the authors for taking time to make the necessary changes to the paper and code. I am satisfied with all changes, except for one mistake - the GitHub link in code availability points to the wrong repo (vaccine efficacy rather than beta diversity).

Reviewer #2: The authors have addressed all my concerns from my previous review comments.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code - e.g. participant privacy or use of data from a third party - those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

25 Aug 2022

PCOMPBIOL-D-21-01287R1
Bayesian estimation of community size and overlap from random subsamples

Dear Dr Larremore,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Agnes Pap
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol
27 in total

1.  Var gene diversity in Plasmodium falciparum is generated by frequent recombination events.

Authors:  H M Taylor; S A Kyes; C I Newbold
Journal:  Mol Biochem Parasitol       Date:  2000-10       Impact factor: 1.759

2.  Immune characterization of Plasmodium falciparum parasites with a shared genetic signature in a region of decreasing transmission.

Authors:  Amy K Bei; Ababacar Diouf; Kazutoyo Miura; Daniel B Larremore; Ulf Ribacke; Gregory Tullo; Eli L Moss; Daniel E Neafsey; Rachel F Daniels; Amir E Zeituni; Iguosadolo Nosamiefan; Sarah K Volkman; Ambroise D Ahouidi; Daouda Ndiaye; Tandakha Dieye; Souleymane Mboup; Caroline O Buckee; Carole A Long; Dyann F Wirth
Journal:  Infect Immun       Date:  2014-11-03       Impact factor: 3.441

3.  A restricted subset of var genes mediates adherence of Plasmodium falciparum-infected erythrocytes to brain endothelial cells.

Authors:  Marion Avril; Abhai K Tripathi; Andrew J Brazier; Cheryl Andisi; Joel H Janes; Vijaya L Soma; David J Sullivan; Peter C Bull; Monique F Stins; Joseph D Smith
Journal:  Proc Natl Acad Sci U S A       Date:  2012-05-22       Impact factor: 11.205

4.  Plasmodium falciparum erythrocyte membrane protein 1 domain cassettes 8 and 13 are associated with severe malaria in children.

Authors:  Thomas Lavstsen; Louise Turner; Fredy Saguti; Pamela Magistrado; Thomas S Rask; Jakob S Jespersen; Christian W Wang; Sanne S Berger; Vito Baraka; Andrea M Marquard; Andaine Seguin-Orlando; Eske Willerslev; M Thomas P Gilbert; John Lusingu; Thor G Theander
Journal:  Proc Natl Acad Sci U S A       Date:  2012-05-22       Impact factor: 11.205

5.  A subset of group A-like var genes encodes the malaria parasite ligands for binding to human brain endothelial cells.

Authors:  Antoine Claessens; Yvonne Adams; Ashfaq Ghumra; Gabriella Lindergard; Caitlin C Buchan; Cheryl Andisi; Peter C Bull; Sachel Mok; Archna P Gupta; Christian W Wang; Louise Turner; Mònica Arman; Ahmed Raza; Zbynek Bozdech; J Alexandra Rowe
Journal:  Proc Natl Acad Sci U S A       Date:  2012-05-22       Impact factor: 11.205

6.  PfEMP1-DBL1alpha amino acid motifs in severe disease states of Plasmodium falciparum malaria.

Authors:  Johan Normark; Daniel Nilsson; Ulf Ribacke; Gerhard Winter; Kirsten Moll; Craig E Wheelock; Justus Bayarugaba; Fred Kironde; Thomas G Egwang; Qijun Chen; Björn Andersson; Mats Wahlgren
Journal:  Proc Natl Acad Sci U S A       Date:  2007-09-25       Impact factor: 11.205

7.  Genome sequence of the human malaria parasite Plasmodium falciparum.

Authors:  Malcolm J Gardner; Neil Hall; Eula Fung; Owen White; Matthew Berriman; Richard W Hyman; Jane M Carlton; Arnab Pain; Karen E Nelson; Sharen Bowman; Ian T Paulsen; Keith James; Jonathan A Eisen; Kim Rutherford; Steven L Salzberg; Alister Craig; Sue Kyes; Man-Suen Chan; Vishvanath Nene; Shamira J Shallom; Bernard Suh; Jeremy Peterson; Sam Angiuoli; Mihaela Pertea; Jonathan Allen; Jeremy Selengut; Daniel Haft; Michael W Mather; Akhil B Vaidya; David M A Martin; Alan H Fairlamb; Martin J Fraunholz; David S Roos; Stuart A Ralph; Geoffrey I McFadden; Leda M Cummings; G Mani Subramanian; Chris Mungall; J Craig Venter; Daniel J Carucci; Stephen L Hoffman; Chris Newbold; Ronald W Davis; Claire M Fraser; Bart Barrell
Journal:  Nature       Date:  2002-10-03       Impact factor: 49.962

8.  Networks of genetic similarity reveal non-neutral processes shape strain structure in Plasmodium falciparum.

Authors:  Qixin He; Shai Pilosof; Kathryn E Tiedje; Shazia Ruybal-Pesántez; Yael Artzy-Randrup; Edward B Baskerville; Karen P Day; Mercedes Pascual
Journal:  Nat Commun       Date:  2018-05-08       Impact factor: 14.919

9.  Bayes-optimal estimation of overlap between populations of fixed size.

Authors:  Daniel B Larremore
Journal:  PLoS Comput Biol       Date:  2019-03-29       Impact factor: 4.475

10.  Inferring malaria parasite population structure from serological networks.

Authors:  Caroline O Buckee; Peter C Bull; Sunetra Gupta
Journal:  Proc Biol Sci       Date:  2009-02-07       Impact factor: 5.349

