Literature DB >> 25747459

Estimating effective population size from temporally spaced samples with a novel, efficient maximum-likelihood algorithm.

Tin-Yu J Hui1, Austin Burt2.   

Abstract

The effective population size Ne is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating Ne have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator NB for contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying Ne is large. This article works around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator NB, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of Ne to several million, hence allowing the estimation of larger Ne. Finally, we demonstrate how this algorithm can cope with nonconstant Ne scenarios and be used as a likelihood-ratio test to test for the equality of Ne throughout the sampling horizon. An R package "NB" is now available for download to implement the method described in this article.
Copyright © 2015 by the Genetics Society of America.

Keywords:  effective population size; genetic drift; maximum-likelihood estimation

Year:  2015        PMID: 25747459      PMCID: PMC4423369          DOI: 10.1534/genetics.115.174904

Source DB:  PubMed          Journal:  Genetics        ISSN: 0016-6731            Impact factor:   4.562


THE effective size of a population is a key concept in population genetics that links together such seemingly disparate quantities as the equilibrium levels of genetic variation and linkage disequilibrium, the size of temporal changes in allele frequencies, the probability of fixation of a new mutation, and others (Charlesworth and Charlesworth 2010). Often Ne is estimated from information on mutation rates and standing levels of nucleotide variation (Charlesworth 2009). In most species there is some level of population differentiation (i.e., individuals from geographically distant areas are more genetically different than those from the same location), and in this case standing levels of genetic variation within a local population give estimates of the effective population size summed across all subpopulations of the species (Charlesworth and Charlesworth 2010). Standing levels of variation also reflect effective population sizes over many thousands or millions of generations. For some purposes it is more interesting to estimate the current (or recent) size of a local subpopulation. In these circumstances it is common to use fluctuations in allele frequencies over multiple generations to estimate Ne, with larger fluctuations indicating a smaller variance effective population size (Krimbas and Tsakas 1971; Waples 1989). This follows from the fact that the variance of genetic drift experienced in a population is a function of Ne and can be quantified under the Wright–Fisher model. The variance of genetic drift in one generation is p(1 − p)/(2Ne) for a diploid population with effective population size Ne and initial allele frequency p [for haploid populations, p(1 − p)/Ne]. Hence it is possible to estimate the effective population size of a closed population by investigating the magnitude of temporal changes in allele frequencies. One approach to estimating Ne from temporal samples is to use F-statistics (Krimbas and Tsakas 1971; Nei and Tajima 1981; Pollak 1983; Waples 1989; Jorde and Ryman 2007).
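The one-generation drift variance p(1 − p)/(2Ne) can be checked with a short Monte Carlo sketch (illustrative only; the function name and parameter values are not from the paper):

```python
import random

def wf_next_freq(p, ne):
    """One Wright-Fisher generation for a diploid population:
    binomial sampling of 2*Ne gene copies at frequency p."""
    copies = sum(1 for _ in range(2 * ne) if random.random() < p)
    return copies / (2 * ne)

random.seed(42)
ne, p, reps = 200, 0.3, 10000
changes = [wf_next_freq(p, ne) - p for _ in range(reps)]
mean_change = sum(changes) / reps
var_change = sum(d * d for d in changes) / reps
expected_var = p * (1 - p) / (2 * ne)  # p(1-p)/(2Ne) for a diploid population
print(round(mean_change, 4), round(var_change, 6), round(expected_var, 6))
```

Drift is directionless (mean change near 0), but its variance shrinks as Ne grows; this inverse relationship is exactly the signal the temporal estimators exploit.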
F-statistics can be obtained by calculating the standardized variance of gene frequency change, after sampling error is taken into consideration. The F-statistics are moment-based estimators, making them easy to compute. They tend to be slightly biased upward in general and suffer from large bias when rare alleles are used (Waples 1989; Wang 2001). Another class of temporal estimators uses the likelihood approach. Williamson and Slatkin (1999) proposed the full-likelihood approach to estimating Ne (see also Anderson et al. 2000; Wang 2001; Berthier et al. 2002). There are several advantages of using maximum likelihood over the F-statistics. For example, the maximum-likelihood estimator has a lower variance and smaller bias, resulting in more precise estimates (Williamson and Slatkin 1999; Wang 2001). It also allows a more flexible sampling scheme, in which allele frequency data can be collected from more than two time points. On the downside, the likelihood methods are computationally demanding compared to the F-statistics, because they make use of the full distributional information of the allele frequency across generations. Numerical maximization of the likelihood function is usually involved, and the associated computational difficulties increase with more loci and longer sampling horizons. As a result, the likelihood methods are not suitable for populations with large Ne. The Ne values used in previous simulation studies were limited to between 20 and 100 diploid individuals only (Williamson and Slatkin 1999; Anderson et al. 2000; Wang 2001; Berthier et al. 2002). It is unfortunate that computing difficulties limit the use of the current likelihood approaches, despite their precision and rigorous statistical basis. Waples (1989) and Pollak (1983) both commented that indirect (genetic) methods for estimating Ne are necessary only if Ne is large, but this is precisely the case in which the temporal methods are less reliable.
This study aims to provide an alternative likelihood-based estimator that solves the problems in the current likelihood methods. The new estimator should therefore (1) be computationally compact, (2) be able to work with a wide range of Ne, and (3) be mostly unbiased and have at least the same degree of precision as the other methods.

Theory

The full-likelihood model and MLNE

The full-likelihood model was developed by Williamson and Slatkin (1999) and is used as the basic model in this article. The full-likelihood function for two samples is

L(Ne) = P(x0, xt | Ne) = Σ_{p0} Σ_{pt} P(p0) P(x0 | p0) P(pt | p0, Ne) P(xt | pt)   (Equation 1)

(Williamson and Slatkin 1999, equation 4), where x0 and xt are the sampled allele counts and p0 and pt are the underlying true allele frequencies. For sampling, it is assumed that the samples are taken with replacement; hence x0 and xt are binomially and independently distributed with n being the number of sampled diploid individuals:

xi | pi ~ Binomial(2n, pi).   (Equation 2)

The probability P(pt | p0, Ne) is calculated using the forward transition matrix M. Each element of M, M_ij, is the probability of the population drifting from the state having i copies to j copies of an allele. Under the Wright–Fisher model, the transition matrix for biallelic loci can be obtained from a binomial distribution:

M_ij = C(2Ne, j) (i/(2Ne))^j (1 − i/(2Ne))^(2Ne − j).

As the possible number of alleles runs from 0 to 2Ne, the dimension of the transition matrix M is (2Ne + 1) × (2Ne + 1). Clearly a computational issue arises here. For a moderately large Ne, say Ne = 10,000, the dimension of the transition matrix becomes 20,001^2 (∼400 million), and this is the number of transition probabilities that needs to be calculated to fill in the matrix M. Furthermore, if the two samples were taken t generations apart, M has to be multiplied by itself t times to get the transition probabilities for t generations ahead. For large Ne it may not be feasible to compute every element in the matrix M and multiply a matrix of such a size, even with the advance of computing power. For the likelihood function itself, p0 and pt are nuisance (unobserved, latent state) parameters that need to be marginalized out by summing over all possible combinations of p0 and pt. For more than two samples, the likelihood function generalizes in the same way (Williamson and Slatkin 1999, equation 6), where p0, p1, ..., pt are the underlying true allele frequencies treated as nuisance parameters. To marginalize out the underlying allele frequencies, we need to sum over all possible values of each pi, and the number of summations equals the number of nuisance parameters.
Closed-form expressions of the summations may not exist, and they are calculated numerically in this case. Although the form of the likelihood function is explicit, it is computationally intensive to evaluate and maximize. While no software appears to be available for the full-likelihood model, the software package MLNE was created to implement the pseudolikelihood approach proposed by Wang (2001) and Wang and Whitlock (2003). The pseudolikelihood omits some of the insignificant transition probabilities in the matrix M and hence reduces computational effort. However, it is still computationally demanding, and the computation time increases rapidly with increasing Ne (Wang 2001). Currently, the upper bound for Ne that MLNE can handle is ∼38,000 on a 64-bit Windows machine with 16 GB of memory.
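The dimensionality problem can be seen directly by constructing M for a toy Ne (a sketch; real applications at Ne = 10,000 would need roughly 400 million entries, as noted above):

```python
from math import comb

def wf_transition_matrix(ne):
    """Wright-Fisher transition matrix for a biallelic locus:
    M[i][j] = P(j copies next generation | i copies now),
    the Binomial(2Ne, i/(2Ne)) probability mass at j."""
    n = 2 * ne
    rows = []
    for i in range(n + 1):
        p = i / n
        rows.append([comb(n, j) * p ** j * (1 - p) ** (n - j)
                     for j in range(n + 1)])
    return rows

M = wf_transition_matrix(5)   # tiny Ne = 5 for illustration
print(len(M), len(M[0]))      # (2Ne + 1) x (2Ne + 1) = 11 x 11
```

Each row is a probability distribution over next-generation counts, and the states i = 0 and i = 2Ne (loss and fixation) are absorbing, with all their mass on themselves.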

A continuous approximation

While the Wright–Fisher model assumes discrete allele frequencies, Fisher (1922) first applied differential equations to model the dynamics of allele frequencies over time. Kimura (1955) derived the complete solution of the differential equation, using the method of moments. The core assumption of the continuous approximation is that Ne is sufficiently large that the moments of the continuous distribution converge to the exact moments. To visualize the model, the process can be represented as a hidden Markov model (Figure 1) (a similar diagram appeared in Anderson et al. 2000). Here p0, p1, ..., pt are the underlying true allele frequencies according to the Wright–Fisher model, and x0, x1, ..., xt are observations from the system. We define the xi as allele counts; hence they are integers running from 0 to 2n (assuming the species is diploid). We aim to derive the joint relationship among all the observations and then infer the parameter Ne governing the process. We investigate the components in this likelihood and hence derive our estimator NB. As with the Wright–Fisher model, this model also assumes nonoverlapping generations, an isolated population, and constant effective population size Ne. Other genetic forces, including selection and mutation, are assumed to be insignificant relative to genetic drift (Waples 1989; Williamson and Slatkin 1999; Wang 2001).
Figure 1

Hidden Markov model representing the structure of the process. p0, p1, ..., pt is the sequence of true allele frequencies propagating according to the Wright–Fisher model, but they are unobserved. x0, x1, ..., xt are the realizations, or the sampled allele frequencies.


Two samples

In the two-sample model, we assume only two samples are obtained. In a later section the model is extended to handle multiple sampling events. The likelihood function is the joint density of our two observations x0 and xt:

L(Ne) = P(x0, xt | Ne).

This is the simplest form of the likelihood function for our parameter of interest Ne, given our observed values. We can see that x0 is the initial observed allele count and has no relationship with Ne. Therefore x0 does not play a role in maximizing the likelihood and can be treated as a constant. We can then rewrite the likelihood function as follows:

L(Ne) ∝ P(xt | x0, Ne).   (Equation 5)

By considering the unobserved nuisance parameters, the likelihood function becomes

P(xt | x0, Ne) = ∫∫ P(xt | pt) P(pt | p0, Ne) P(p0 | x0) dpt dp0.   (Equation 6)

Equation 6 is the continuous analogy of Equation 1, with summations being replaced by integrals. The terms of the likelihood function have the same meaning as in Equation 1: P(xt | pt) governs the sampled allele count at generation t, P(pt | p0, Ne) is the transition probability that plays the same role as the Wright–Fisher matrix in the full-likelihood model, and the last term, P(p0 | x0), is the distribution of the initial allele frequency conditioning on the initial observation. The integrals are to "sum over" all possible values of the underlying allele frequencies. In the following paragraphs we evaluate each part of the likelihood function and finally derive the general formula for the likelihood function. The starting allele frequency p0 is unknown in general. We may assume p0 is uniformly distributed [equivalent to Beta(1, 1)] before any observations are taken, because it brings no additional parameters to the system (Williamson and Slatkin 1999). If x0 is sampled binomially from p0 under Equation 2, then by Bayes' rule, the conditional distribution of p0 follows a beta distribution (e.g., chap. 7.2.3 in Casella and Berger 2002):

p0 | x0 ~ Beta(x0 + 1, 2n0 − x0 + 1).

In fact, P(p0 | x0) has the same role as the initial state distribution in the full-likelihood model in Equation 1. Next, for the transition probability P(pt | p0, Ne), a continuous distribution is used to model allele frequency instead of the discrete transition matrix in the full-likelihood model.
The probability density function of pt given p0 under genetic drift is modeled by a beta distribution with mean p0 and variance δp0(1 − p0), where δ is called the "drift parameter" that controls the amount of drift:

δ = 1 − (1 − 1/(2Ne))^t.

The drift parameter is a function of Ne and the sampling interval t. It is derived from the continuous model of genetic drift by Kimura (1955) for sufficiently large Ne and is a popular way to model the change in allele frequency due to genetic drift (Kitakado et al. 2006; Song et al.). For the special case of t = 1, δ reduces to 1/(2Ne). After obtaining the formulas for P(p0 | x0) and P(pt | p0, Ne), the integral with respect to p0 in the likelihood function (Equation 6) can be calculated in advance. Let us rewrite the likelihood function:

P(xt | x0, Ne) = ∫ P(xt | pt) [ ∫ P(pt | p0, Ne) P(p0 | x0) dp0 ] dpt.   (Equation 10)

The inner integral forms a hierarchical process: p0 is distributed as beta given the initial observation x0, and pt follows another beta distribution conditioning on p0. An exact solution may not exist for this type of integral. Here we propose to use another beta distribution to approximate the integral. The parameters in this new beta distribution can be obtained by matching the first two moments:

E(pt | x0) = E(p0 | x0),
Var(pt | x0) = (1 − δ)Var(p0 | x0) + δE(p0 | x0)[1 − E(p0 | x0)].   (Equation 11)

The goodness of fit of this approximation is examined in the Appendix. The final piece of the likelihood function is P(xt | pt), the distribution of the sampled allele count given the underlying true allele frequency pt. If the samples are taken with replacement, then it is binomially distributed as described in Equation 2. Now, putting all the elements together, the likelihood function becomes

P(xt | x0, Ne) = ∫ C(2nt, xt) pt^xt (1 − pt)^(2nt − xt) · pt^(α − 1)(1 − pt)^(β − 1) / B(α, β) dpt,   (Equation 12)

where B(·, ·) is a beta function. This integral has a closed-form solution, with pt following a beta distribution and xt being a binomial sample. The resultant probability mass function is a beta-binomial distribution with three parameters: 2nt, α, and β. We can see from Equations 10 and 12 that the integrals (which play the same role as the summations in the full-likelihood model) can be evaluated separately with either a closed-form expression or an approximate solution, yielding a much simplified likelihood. The relationship between the two samples x0 and xt is now established through a beta-binomial distribution.
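Putting the pieces together, a minimal two-sample likelihood can be sketched as follows. This is our reading of the construction above (uniform prior on p0, δ = 1 − (1 − 1/(2Ne))^t, moment-matched beta, beta-binomial for xt); the function names and the grid-search maximizer are illustrative and not the NB package's interface:

```python
from math import lgamma

def log_betabinom(x, n, a, b):
    """log P(X = x) for X ~ BetaBinomial(n, a, b)."""
    return (lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
            + lgamma(x + a) + lgamma(n - x + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

def two_sample_loglik(ne, x0, n0, xt, nt, t):
    """Log-likelihood of Ne given x0 of 2*n0 sampled gene copies at time 0
    and xt of 2*nt copies t generations later (single biallelic locus)."""
    a0, b0 = x0 + 1.0, 2 * n0 - x0 + 1.0          # p0 | x0 ~ Beta (uniform prior)
    mu = a0 / (a0 + b0)
    var0 = a0 * b0 / ((a0 + b0) ** 2 * (a0 + b0 + 1))
    delta = 1.0 - (1.0 - 1.0 / (2.0 * ne)) ** t   # drift parameter
    var = (1 - delta) * var0 + delta * mu * (1 - mu)  # moment matching
    s = mu * (1 - mu) / var - 1.0
    return log_betabinom(xt, 2 * nt, mu * s, (1 - mu) * s)

# Illustrative data: 100 diploids sampled at each time point, t = 8 generations.
x0, n0, xt, nt, t = 80, 100, 95, 100, 8
ne_grid = range(50, 5001, 10)
ne_hat = max(ne_grid, key=lambda ne: two_sample_loglik(ne, x0, n0, xt, nt, t))
print(ne_hat)
```

With a single locus the likelihood surface is nearly flat; in practice many unlinked loci are combined by summing their log-likelihoods before maximizing, as described in the text.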
We define NB as the value of Ne at which the likelihood function attains its maximum, conditioning on the observations. Hence NB is the maximum-likelihood estimator (MLE) of the parameter Ne. For many unlinked loci, the joint likelihood is calculated as the product of the individual likelihoods of the loci.

Three or more samples

The likelihood model can be extended to more than two sampling events, as shown in Figure 1. Here we assume samples are taken from successive generations, giving a sequence of observations x0, x1, ..., xt. Similar to Equation 4, the likelihood function is the joint density of the observations:

L(Ne) = P(x0, x1, ..., xt | Ne).

If we let Xi be all the observations up to time i, the joint density factorizes into a product of conditional distributions:

L(Ne) = P(x0) ∏ P(xi | Xi−1, Ne).   (Equation 14)

We prefer Equation 14 because it illustrates the time dependency among the observations. Again, x0 contains no information about Ne and can be neglected. By using the same argument as in the two-sample case, each P(xi | Xi−1, Ne) is a beta-binomial distribution. The parameters within each beta-binomial distribution are functions of δ and the preceding observations. The remaining question becomes how the parameters in each beta-binomial distribution are obtained. The calculation of the parameters can be generalized by four recurring equations (Equations 15 and 16), with initial values determined by the beta distribution of p0 conditional on x0 and with i running from 1 to t. As a result, each of the xi (given all previous observations) follows a beta-binomial distribution, with parameters given by the recursion. Moreover, the underlying allele frequency pi given all observations up to i follows a beta distribution. Since the sample sizes and time steps are known, the only parameter remaining in the system is Ne, the effective population size. The whole likelihood function is the product of multiple beta-binomial distributions. Therefore the MLE NB can be obtained by choosing a value of Ne that maximizes the likelihood function.
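The multi-sample likelihood can be sketched as a hidden-Markov forward pass. The exact recurrences (Equations 15 and 16) are not reproduced above, so the update below is our reconstruction under stated assumptions: a conjugate beta update at each observation and the same moment-matched drift propagation between sampling times. Names and data are illustrative:

```python
from math import lgamma

def log_betabinom(x, n, a, b):
    """log P(X = x) for X ~ BetaBinomial(n, a, b)."""
    return (lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
            + lgamma(x + a) + lgamma(n - x + b) - lgamma(n + a + b)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

def forward_loglik(ne, counts, sizes, gens):
    """Forward pass over samples counts[i] (out of 2*sizes[i] gene copies)
    taken at generations gens[i]; returns log P(x1, ..., xt | x0, Ne)
    for one biallelic locus."""
    a = counts[0] + 1.0                      # beta for p given data so far,
    b = 2 * sizes[0] - counts[0] + 1.0       # starting from a uniform prior
    ll = 0.0
    for i in range(1, len(counts)):
        t = gens[i] - gens[i - 1]
        delta = 1.0 - (1.0 - 1.0 / (2.0 * ne)) ** t
        mu = a / (a + b)
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        var = (1 - delta) * var + delta * mu * (1 - mu)   # drift propagation
        s = mu * (1 - mu) / var - 1.0
        a, b = mu * s, (1 - mu) * s                        # predicted beta for p_i
        n = 2 * sizes[i]
        ll += log_betabinom(counts[i], n, a, b)            # P(x_i | x_0..x_{i-1})
        a, b = a + counts[i], b + n - counts[i]            # conjugate update with x_i
    return ll

counts, sizes, gens = [80, 65, 50], [100, 100, 100], [0, 4, 8]
ne_hat = max(range(50, 5001, 10),
             key=lambda ne: forward_loglik(ne, counts, sizes, gens))
print(ne_hat)
```

Each factor in Equation 14 is the beta-binomial predictive distribution produced by this pass, so the whole likelihood is evaluated without ever forming a transition matrix.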

Computer Simulations

The first objective of the simulation study was to compare the performance of the proposed estimator NB with those of the existing methods. The MLNE routine (Wang and Whitlock 2003) and the Fc statistic (Nei and Tajima 1981; Waples 1989) were used as benchmarks. In each iteration, we first simulated the allele frequencies with known Ne across t generations according to the Wright–Fisher model. Multiple independent biallelic loci were run at a time, and samples were then taken with replacement with a sample size of n diploid individuals (a total of 2n alleles), as described in Equation 2. Initial allele frequencies were drawn from the uniform distribution. The three methods were then applied to produce three estimates. For NB, the likelihood function was formed using either Equation 5 or Equation 14, depending on the number of sampling events, and was maximized numerically. The lower and upper bounds for searching for the maxima were taken to be 50 and a sufficiently large value, respectively. For MLNE the upper bound for Ne was restricted to 38,000 because of computing limitations. Fc estimates were calculated within the MLNE package. The asymptotic 95% confidence intervals (C.I.) for MLNE and NB were also calculated by finding the range of Ne in which the log-likelihood dropped by 2 units from its maximum value. Simulations were repeated 1000 times for each parameter setting and were run in R (R Core Team 2013). Summary statistics for the three estimators are shown in Table 1. Ne was chosen to be 1000 or 5000. Sample sizes (per generation) were fixed at 10% of the true population size. Table 1 shows that all three methods slightly overestimated the underlying Ne, while NB had the smallest bias in all cases investigated. In the two-sample scenario there was little difference among the three methods; however, NB consistently had the smallest variance and bias.
For three samples, the differences among the three methods became more pronounced: the likelihood methods (MLNE and NB) outperformed their moment-based counterpart in terms of smaller standard deviation and bias. The standard deviation of Fc-based estimates was often twice that of the likelihood estimates. This result is consistent with the idea that the likelihood methods are better able to combine data from more than two samples. Within the likelihood family, the mean width of the 95% C.I. was also calculated. The C.I. using NB is slightly narrower than that of MLNE at the same significance level, with similar coverage. In short, all the examined scenarios suggested that NB was superior to the MLNE and Fc estimators.
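The 2-unit-drop interval used for the C.I.'s can be computed generically for any unimodal log-likelihood; the quadratic stand-in below is purely illustrative, not the Ne likelihood itself:

```python
import math

def drop2_interval(loglik, grid):
    """All grid values whose log-likelihood is within 2 units of the maximum,
    i.e., the asymptotic ~95% interval described in the text."""
    vals = [loglik(g) for g in grid]
    peak = max(vals)
    inside = [g for g, v in zip(grid, vals) if v >= peak - 2.0]
    return min(inside), max(inside)

# Stand-in log-likelihood, peaked at Ne = 1000 (quadratic in log Ne).
loglik = lambda ne: -50.0 * (math.log(ne) - math.log(1000.0)) ** 2

lo, hi = drop2_interval(loglik, range(500, 2001))
print(lo, hi)  # → 819 1221
```

Note the asymmetry of the interval around the maximum (819 to 1221 here): log-likelihood surfaces for Ne are typically skewed toward large values, which is why upper confidence limits are the first to hit software bounds.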
Table 1

Simulation results

True Ne  n    Method  Mean (SD)          2.5%     97.5%     Mean C.I. width  Coverage

Two samples (sample at t = 0, 8)
1000     100  Fc      1,059.7 (253.5)    699.8    1,657.8
              MLNE    1,080.7 (260.7)    711.3    1,695.4   1,283.3          960
              NB      1,033.2 (247.3)    684.1    1,604.8   1,195.5          956
5000     500  Fc      5,272.4 (1,164.5)  3,534.1  8,056.8
              MLNE    5,276.7 (1,166.7)  3,539.9  8,083.9   6,046.3          970
              NB      5,217.1 (1,149.6)  3,501.6  7,958.1   5,957.4          967

Three samples (sample at t = 0, 4, 8)
1000     100  Fc      1,107.8 (638.8)    661.8    2,050.7
              MLNE    1,076.6 (243.9)    734.9    1,704.6   1,134.2          957
              NB      1,030.9 (226.8)    709.4    1,605.4   1,054.0          960
5000     500  Fc      5,567.7 (2,038.2)  3,165.9  10,708.0
              MLNE    5,254.0 (1,153.4)  3,530.2  8,198.1   5,427.4          950
              NB      5,202.0 (1,138.5)  3,495.9  8,008.4   5,352.2          953

For each parameter setting, 1000 replicate populations were simulated and all three methods were used to estimate Ne. The true Ne, sample size per generation, and number of temporal samples are shown. A total of 500 unlinked loci were used in each run, and the initial allele frequencies were sampled from the uniform distribution. The mean, standard deviation, and 2.5 and 97.5 percentiles of the 1000 runs are reported. For MLNE and NB, the mean width of the 95% confidence interval (C.I.) is also computed. The last column shows the number of C.I.'s (of 1000 simulations) that cover the true value of Ne.

A second set of simulations examined the bias and consistency of the new estimator for a range of Ne values. As the central assumption of the method is that Ne is sufficiently large for a continuous approximation to be valid, it is interesting to investigate the performance of the estimator over a wide range of Ne, including small values. A plot of the bias against true Ne is found in Figure 2. For the smaller values of Ne, NB slightly underestimated the population size by <2%, while for larger Ne it was slightly biased upward by no more than 2%. This graph supports that NB is approximately unbiased throughout a wide range of true Ne, from 50 upward. Thus, the new estimator provides an inferential statistic that is not available through prior methods.
Figure 2

Plot of bias of the estimator NB against true Ne. The bias (solid line) is quantified as the percentage difference relative to the true Ne. Sample size was 10% of the true Ne, with 1000 loci. Two samples were taken 10 generations apart. The bias approaches 0 (red dotted line) if the estimator is unbiased.


Nonconstant Ne and Likelihood-Ratio Tests

Given three or more samples over time, we can consider the possibility that Ne is different in each sampling interval. This can be done by modifying Equation 15 to allow nonconstant δ. It is also possible to use the same approach to fit a dynamic model to the data. For example, Wang (2001) demonstrated fitting an exponential growth model with two parameters: initial Ne and growth rate. In general, a likelihood-ratio test (LRT) can be constructed to compare models and hypotheses. The test statistic is twice the difference in the log-likelihood values under the null and alternative hypotheses and is asymptotically chi-square distributed with degrees of freedom equal to the difference in the number of parameters between the two models. The following simulated example illustrates how a LRT is constructed. Consider a three-sample case with samples taken at t = 0, 4, and 8, in which we wish to test whether Ne is constant throughout the sampling period. This can be done by setting up the following hypotheses: H0: Ne is constant vs. H1: there are two distinct Ne's, one for the period between t = 0 and t = 4 and another between t = 4 and t = 8. We can fit two models representing the two hypotheses to the data, one with a single Ne and the other with two different Ne's. Under the null hypothesis (i.e., given H0 is true), the test statistic asymptotically follows a chi-square distribution with 1 d.f. This can be verified by simulating 5000 replicates, as shown in Figure 3.
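The LRT mechanics are generic and can be sketched without the Ne machinery: twice the log-likelihood gap, referred to a chi-square distribution with d.f. equal to the parameter difference (1 here). For 1 d.f. the chi-square survival function has a closed form via the complementary error function; the log-likelihood values below are made-up placeholders:

```python
from math import erfc, sqrt

def lrt_pvalue_1df(loglik_null, loglik_alt):
    """P-value of a likelihood-ratio test with 1 degree of freedom.
    Statistic: 2 * (loglik_alt - loglik_null) ~ chi-square(1) under H0."""
    stat = 2.0 * (loglik_alt - loglik_null)
    # For X ~ chi-square(1): P(X > x) = erfc(sqrt(x / 2)).
    return erfc(sqrt(max(stat, 0.0) / 2.0))

# Example: the two-Ne model improves the log-likelihood by 2.5 units.
print(round(lrt_pvalue_1df(-1234.0, -1231.5), 4))
```

A gap of just under 2 log-likelihood units (statistic 3.84) sits exactly at the conventional 5% threshold, which also motivates the 2-unit drop used for the confidence intervals.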
Figure 3

Histogram of the likelihood-ratio test statistic under H0 for 5000 simulations. Three temporal samples were drawn in each replicate. The red line represents the theoretical density of a chi-square distribution with 1 d.f.

The statistical power of the test can be exemplified by setting up a specific alternative hypothesis. For example, if the underlying population drops from 10,000 to 1000 during the sampling period, then the power of the test is the probability of rejecting the null hypothesis. There are several parameters controlling the power, one of which is the sample size, n (Figure 4). In the particular example shown, the sample size required to attain a power of 80% can be read from Figure 4.
Figure 4

Statistical power against sample size. A specific H1 was chosen as described in the text, with 1000 independent loci.


Computational Effort

With the use of the beta and binomial distributions in modeling genetic drift and sampling events, closed-form solutions for the integrals in Equations 11 and 12 are obtained. As a result, the likelihood function (Equation 14) is much simplified and no longer involves summations over all the nuisance parameters as in the full-likelihood model (Equation 1). A comparison of the computation time between MLNE and NB is shown in Figure 5. For the MLNE package, increases in Ne lead to increases in the number of elements in the transition matrix and therefore in the computing time (Williamson and Slatkin 1999; Wang 2001). For NB, a continuous approximation is used and the structure of the transition probabilities is approximately the same for all Ne. Hence the computing time remains low for any Ne. For both MLNE and NB, computing time increases with the number of loci in a similar fashion, but NB remains several thousand times faster than MLNE. The speed advantage of NB also becomes more distinct with increasing sampling interval, because NB does not require calculation of powers of the transition matrix. It should be noted that the two methods are not coded in the same programming language (Fortran for MLNE and R for NB), so these results should not be considered a direct comparison of the two algorithms. If anything, this likely underestimates the speed advantage of NB over MLNE, because R is a scripting language and typically slower than a compiled language like Fortran. Nevertheless, the new method speeds up estimation by a factor of 1000–10,000 for large Ne without sacrificing accuracy.
Figure 5

Comparison of computational effort (in seconds) between MLNE and NB. A shows the computational time against true Ne. An Ne of 50,000 was not run for MLNE because this exceeds the limits of the software. B shows the computational time against the number of loci used in each iteration. C plots the computing time against the sampling interval.


Real Example

A real data set from Cuveliers et al. (2011) was used to demonstrate the use of NB. Six temporal samples spanning >10 generations were collected between 1957 and 2007 to infer the effective population size of North Sea sole. The sample sizes were ∼135–220 individuals per generation, with 11 microsatellite markers being genotyped. The number of alleles at these loci ranges from 13 to 39. We used NB to estimate the overall Ne throughout the entire sampling horizon. The effective population size during the period was estimated to be 2512, with finite 95% confidence limits of 1661 and 4365. The published estimate using MLNE (Wang 2001) was 2169 (C.I. = 1221–5744), while the estimate from the F-statistic (Waples 1989) was 2247 (C.I. = 1127–8370). The complete results can be found in table 2 of Cuveliers et al. (2011, p. 3561). All three estimates largely overlap with one another, indicating consistency among the temporal estimators. The estimate from NB is slightly larger than those obtained by MLNE and the F-statistic, but it is the most precise, with the narrowest confidence interval. NB also showed a significant reduction in computing time; it is ∼600 times faster than MLNE in this particular example.

Discussion

The model

In theory, the full-likelihood model (Williamson and Slatkin 1999) for estimating Ne from temporal samples should be the most accurate, but it is not practical because of computational limitations. MLNE, as a derivative of the full-likelihood model, intentionally omits some of the smaller transition probabilities to enhance computational feasibility. The NB estimator is also an approximation to the full likelihood, but it makes use of the continuous approximation to simplify the calculations. Previous studies by Williamson and Slatkin (1999) and Wang (2001) showed that the maximum-likelihood methods are more accurate and precise than the F-statistics, and this article further confirms that NB is no exception. The comparison between MLNE and NB showed that NB is a better alternative to MLNE in moderately large Ne scenarios. In our examined cases, NB produces a smaller variance and narrower confidence interval than MLNE, yielding a more precise estimate of Ne. The bias of NB is also negligible, indicating that the approximations hold over a wide range of true Ne. Perhaps the most important feature of NB is in relaxing the upper bound on Ne. Since the dimension of the Wright–Fisher transition matrix is determined by Ne, MLNE stops the calculation when Ne exceeds a certain value. The current threshold on our workstation is ∼38,000, while the MLNE user manual suggests 50,000. This upper bound also applies to the calculation of the upper confidence limit, making the practical range of true Ne even smaller. NB relaxes this bound to several million without causing computational issues. As a result, precise estimation of contemporary Ne can be applied to more species. Another distinct advantage is the computing speed, which is increased by a factor of ≥1000 in most scenarios. Most calculations in NB are done within seconds.
Field biologists may not appreciate this improvement, as most of their time is spent on data collection; however, with the anticipated advances in DNA sequencing technology, large numbers of loci can be sequenced at a time at low cost. The ability of existing software to handle such data sets is questionable. Furthermore, with the increasing popularity of computer simulation in population genetics (such as ms by Hudson 2002), in which the computing time is multiplied by the number of repeated simulations, NB provides an efficient algorithm to help scientists evaluate their simulations rapidly and accurately.

Usage

As discussed above, NB is designed for moderately large populations, and this explains why our simulations focused on these scenarios. Although we showed that NB is nearly unbiased even for small values of Ne, we suggest using the full-likelihood method for the extremely small Ne problem (when Ne < 100). Sample size has to be viewed relative to the true Ne of the population. Our simulations show that sampling 10% of the individuals is able to estimate Ne accurately with the use of ∼500 independent loci. Interested readers can refer to Waples (1989) and Wang (2001) for more details about the effect of sampling effort on temporal methods. Excluding rare alleles is not unusual in population genetics studies. For instance, LDNE (Waples and Do 2008) imposes several cutoffs for rare alleles. Wang (2001) showed that the moment-based F-statistics incur bias with rare alleles, while the likelihood methods are less sensitive to small allele frequencies because they make use of the full distributional information of the Wright–Fisher model. We analyzed empirically the goodness of fit of the beta distribution in modeling allele frequency in the Appendix. We showed that the approximation is indistinguishable from the true continuous model when frequent alleles are used, and it still holds when the observed allele frequency is down to ∼0.05. As a result, we suggest that in most cases it is safe to include alleles with observed allele frequency >5%. In their review, Luikart et al. (2010) emphasized the desirability of developing new methods that are able to distinguish between moderate and large Ne and noted that future development of estimators should allow for the possibility of genotyping many loci. The methods developed here allow for expansion in both directions, both for estimating effective population sizes and for testing for significant differences (or trends) in population sizes from temporally spaced samples.

R package

An R package “NB” has been created to implement the method described in this article. The package allows additional flexibility, including unevenly spaced temporal samples and nonconstant sample sizes. As demonstrated in our worked example, multiallelic loci are accepted in the R package through the use of the Dirichlet-multinomial distribution. The package also contains a sample data set and a help document describing its usage. It is available for download at http://cran.r-project.org/web/packages/NB/.
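The Dirichlet-multinomial device for multiallelic loci can be sketched as follows. Note this is an illustrative Python sketch, not the NB package's R code, and the parameterization used here (concentration = precision × frequencies) is our assumption for illustration; the package's internal parameterization may differ:

```python
import numpy as np

def sample_locus(freqs, n_copies, precision, rng):
    """Observed counts at one multiallelic locus: perturb the population
    allele frequencies with a Dirichlet draw, then sample gene copies
    multinomially -- together, a Dirichlet-multinomial draw that allows
    extra variance beyond the plain multinomial."""
    p = rng.dirichlet(precision * np.asarray(freqs, dtype=float))
    return rng.multinomial(n_copies, p)

rng = np.random.default_rng(0)
counts = sample_locus([0.5, 0.3, 0.2], n_copies=100, precision=50.0, rng=rng)
print(counts)  # three allele counts summing to 100
```

Smaller precision values produce more overdispersed counts, which is why the Dirichlet-multinomial is a natural likelihood for multiallelic temporal samples.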
References (16 in total)

1.  Monte Carlo evaluation of the likelihood for N(e) from temporally spaced samples.

Authors:  E C Anderson; E G Williamson; E A Thompson
Journal:  Genetics       Date:  2000-12       Impact factor: 4.562

2.  A pseudo-likelihood method for estimating effective population size from temporally spaced samples.

Authors:  J Wang
Journal:  Genet Res       Date:  2001-12       Impact factor: 1.588

3.  Using maximum likelihood to estimate population size from temporal changes in allele frequencies.

Authors:  E G Williamson; M Slatkin
Journal:  Genetics       Date:  1999-06       Impact factor: 4.562

4.  Estimating effective population size and migration rates from genetic samples over space and time.

Authors:  Jinliang Wang; Michael C Whitlock
Journal:  Genetics       Date:  2003-01       Impact factor: 4.562

5.  Genetic drift and estimation of effective population size.

Authors:  M Nei; F Tajima
Journal:  Genetics       Date:  1981-07       Impact factor: 4.562

6.  A new method for estimating the effective population size from allele frequency changes.

Authors:  E Pollak
Journal:  Genetics       Date:  1983-07       Impact factor: 4.562

7.  Simultaneous estimation of mixing rates and genetic drift under successive sampling of genetic markers with application to the mud crab (Scylla paramamosain) in Japan.

Authors:  Toshihide Kitakado; Shuichi Kitada; Yasuhiro Obata; Hirohisa Kishino
Journal:  Genetics       Date:  2006-08       Impact factor: 4.562

8.  Temporal genetic stability and high effective population size despite fisheries-induced life-history trait evolution in the North Sea sole.

Authors:  E L Cuveliers; F A M Volckaert; A D Rijnsdorp; M H D Larmuseau; G E Maes
Journal:  Mol Ecol       Date:  2011-07-25       Impact factor: 6.185

9.  LDNE: a program for estimating effective population size from data on linkage disequilibrium.

Authors:  Robin S Waples; Chi Do
Journal:  Mol Ecol Resour       Date:  2008-07       Impact factor: 7.090

10.  A generalized approach for estimating effective population size from temporal changes in allele frequency.

Authors:  R S Waples
Journal:  Genetics       Date:  1989-02       Impact factor: 4.562
