Abstract
A new method for the estimation of migration rates and effective population sizes is described. It uses a maximum-likelihood framework based on coalescence theory. The parameters are estimated by Metropolis-Hastings importance sampling. In a two-population model this method estimates four parameters: the effective population size and the immigration rate for each population relative to the mutation rate. Summarizing over loci can be done by assuming either that the mutation rate is the same for all loci or that the mutation rates are gamma distributed among loci but the same for all sites of a locus. The estimates are as good as or better than those from an optimized F_{ST}-based measure. The program is available on the World Wide Web at http://evolution.genetics.washington.edu/lamarc.html/.
SEVERAL methods for the estimation of migration rates between subpopulations have been proposed. We can subdivide them into two very different approaches: (1) marking individuals, tracking their individual movements, and then extrapolating these movements to migration rates; or (2) surveying genetic markers in the populations of interest and calculating a migration rate from allele frequencies or sequence differences. Of course, one should be aware that these two approaches do not estimate the same quantity: approach 1 estimates the actual “instantaneous” migration rate of individuals, whereas approach 2 reflects an average over a time period whose length is determined by the rate of mutation per generation of the locus under study or by the time scale of genetic drift. The genetic approach tends to give a lower estimate than the individual-tracking approach because it detects only changes that become established in the subpopulation gene pool.
Current estimation methods for genetic data include those related to F_{ST} (e.g., Slatkin 1991; Slatkin and Hudson 1991), the rare-allele approach of Slatkin (1985), a maximum-likelihood method using gene frequency distributions (Rannala and Hartigan 1996; Tufto et al. 1996), and approaches based on coalescent theory (Kingman 1982a,b), such as the cladistic approach of Slatkin and Maddison (1989), the method outlined in Wakeley (1998), and maximum likelihood using coalescent theory (Nath and Griffiths 1993, 1996). We describe here a new method using a maximum-likelihood approach based on coalescent theory.
Our method integrates over all possible genealogies and migration events. This should be superior to pairwise estimators such as those using F_{ST} or related statistics (cf. Felsenstein 1992b) and also more powerful than the cladistic approach of Slatkin and Maddison (1989), which needs to know the true genealogy. Our approach estimates parameters similar to those of the program of Bahlo and Griffiths (http://www.maths.monash.edu.au/~mbahlo/mpg/gtree.html/), which is based on the work of Griffiths and Tavaré (1994) and Nath and Griffiths (1996). It differs in the way we search the genealogy space, and our method supports mutation models for different types of data: the infinite-allele model for electrophoretic markers, a one-step model for microsatellites, and a finite-sites model for nucleotide sequences. The finite-sites model is more useful than the infinite-sites model, which forces the researcher to discard data when there are multiple mutations at the same site.
MODEL
We propose a method to make a maximum-likelihood estimate of population parameters for geographically subdivided populations. The general outline of such estimates involves extending coalescence theory (Kingman 1982a,b) to include migration events. Migration models with coalescents were first developed by Takahata (1988) and Takahata and Slatkin (1990) for two gene copies, discussed more generally for n gene copies in two populations by Hudson (1990), and generalized to multiple populations and different models of migration by Notohara (1990). Nath and Griffiths (1993, 1996) used this migration-coalescent process for maximum-likelihood estimation of one parameter, the effective number of migrants γ = 4N_e m, where N_e is the effective population size and m is the migration rate per generation.
Figure 1.
—Two-population model with four parameters: m_1 is the migration rate per generation from population 2 to population 1, m_2 that from population 1 to population 2, and N_e^{(i)} is the effective population size of population i.
Our migration model consists of two populations. These have effective population sizes Ne(1) and Ne(2) , which need not be equal. The populations exchange migrants at rates m_{1} and m_{2} per generation (Figure 1).
Because we observe only genetic differences and do not know the mutation rate μ, we must absorb μ into the parameters so that we use as our parameters
Θ_i = 4N_e^{(i)} μ
and
M_i = m_i/μ,
and occasionally write
γ_i = Θ_i M_i = 4N_e^{(i)} m_i.
We make the following assumptions: that diploid individuals in each population reproduce according to any standard population genetic model that converges to the same diffusion equation as a Wright-Fisher model, that populations have a constant population size through time so that they neither grow nor shrink, and that the mutation rate μ is constant. These are also the assumptions used by Kingman (1982a) for coalescence theory. We use the convention with sequence data that μ is the rate of mutation per generation per site, whereas with microsatellite or enzyme electrophoretic data we take μ to be the rate of mutation per generation per locus. To emphasize this distinction between sites and loci, we use a capital Θ in the first case and a lowercase θ in the latter cases. The mutation rate μ can vary among loci according to a gamma distribution. In addition, we assume that the two populations exchange migrants at constant rates per generation.
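As a quick numeric illustration of this scaling (the values of N_e, m, and μ below are arbitrary, chosen only for the example):

```python
def scaled_params(Ne, m, mu):
    """Convert raw quantities into the scaled parameters used in the text:
    Theta = 4*Ne*mu, M = m/mu, gamma = Theta*M = 4*Ne*m.
    Ne, m, and mu here are hypothetical illustrative values, not estimates."""
    theta = 4 * Ne * mu
    M = m / mu
    gamma = theta * M
    return theta, M, gamma

# Theta = 4 * 10^4 * 10^-6 = 0.04, M = 100, gamma = 4 (up to rounding)
theta, M, gamma = scaled_params(Ne=10_000, m=1e-4, mu=1e-6)
```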
The likelihood formula for this family of methods (Felsenstein 1988, 1992a; Kuhner et al. 1995, 1998) is based on the product of the genealogy likelihood Prob(D|G) (e.g., Felsenstein 1973; Swofford et al. 1996) and the prior probability of the coalescent genealogy Prob(G|P) (Kingman 1982a,b):
L(P) = Σ_G Prob(D|G) Prob(G|P),
(1)
where the parameters P are
P = [Θ_1, Θ_2, M_1, M_2].
Takahata and Slatkin (1990) found that Kingman’s results could not be extended to cases with geographical structure: they were unable to obtain the distribution of times to coalescence in the presence of migration. We have avoided this difficulty by having the genealogy G specify not only the coalescences but also the times and places of migration events. With this information in G, its probability density for two populations becomes a product of terms for the intervals between events in the genealogy
Prob(G|P) = Π_{i=1}^{2} [ (2/Θ_i)^{v_i} M_i^{w_{.i}} ] Π_{j=1}^{T} p_j,
(2)
where
p_j = exp( −u_j Σ_{i=1}^{2} [ k_{ji} M_i + k_{ji}(k_{ji} − 1)/Θ_i ] ).
(3)
Prob(G|P) is obtained by computing for each interval the probability that nothing happens during the interval, times the probability of the particular event at the bottom (beginning) of the interval. The variable v_i is the total number of coalescences in subpopulation i, and w_{.i} is the sum of the numbers of migration events into subpopulation i over all T time intervals. Furthermore, p_j is the probability that no event occurs during time interval j, u_j is the length of time interval j, and k_{ji} is the number of lineages in subpopulation i during time interval j.
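The bookkeeping behind Equations 2 and 3 can be sketched as follows (a simplified illustration, not the migrate implementation; the interval encoding used here is an assumption of this sketch):

```python
import math

def log_genealogy_prior(intervals, theta, M):
    """Log of Prob(G|P) per Equations 2-3 for two populations.

    intervals: list of (u_j, (k_j1, k_j2), event), where u_j is the interval
    length, k_ji the lineage counts per population, and event is ('coal', i)
    or ('mig', i) for the event at the bottom of the interval
    (population index i in {0, 1}).
    theta, M: two-element sequences of Theta_i and M_i."""
    logp = 0.0
    for u, k, event in intervals:
        # probability that nothing happens during the interval ...
        total_rate = sum(k[i] * M[i] + k[i] * (k[i] - 1) / theta[i]
                         for i in range(2))
        logp += -u * total_rate
        # ... times the density of the event that ends the interval:
        # 2/Theta_i for a coalescence, M_i for a migration
        kind, i = event
        logp += math.log(2.0 / theta[i]) if kind == 'coal' else math.log(M[i])
    return logp
```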
If we extend the parameter estimation from a single locus to many unlinked loci, we face the additional problem that, although μ may be constant within a locus, it can vary among loci. Neglecting variation of μ, combining L loci in the likelihood framework is simple:
L(P) = Π_{l=1}^{L} L_l(P).
(4)
On the other hand, if we assume that μ follows a gamma distribution and therefore can vary among loci, we need to include an additional parameter, the shape parameter α of this distribution. The other parameter of this gamma distribution is a scale parameter that can conveniently be chosen as μ̄/α, where μ̄ is the mean mutation rate. With this parameterization, 1/α is the squared coefficient of variation of the mutation rate among loci. We cannot estimate the mean mutation rate μ̄ of this gamma distribution directly. As N_e is the same for all loci, we can use an arbitrary Θ (we have chosen the effective population size of the first population) and scale the other parameters relative to it. We replace μ in the parameters Θ_i and M_i with a new variable, the mutation rate x. Θ_ξ is the mutation rate x scaled by 4N_e^{(1)}. We integrate over all possible values of Θ_ξ instead of x. This does not change the shape of our likelihood surface, because N_e^{(1)} in Θ_ξ is constant, so that we have
L(P, α) = Π_l [ (1/(Γ(α) β^α)) ∫_0^∞ e^{−Θ_ξ/β} Θ_ξ^{α−1} L_l(P′) dΘ_ξ ],
(5)
where
β = Θ_1/α,   P′ = [Θ_ξ, Θ_2 Θ_ξ/Θ_1, M_1^{(ξ)}, M_2^{(ξ)}],   Θ_ξ = 4N_e^{(1)} x,
and
M_i^{(ξ)} = (γ_i/Θ_i)(Θ_1/Θ_ξ).
This integral over all mutation rate values is implemented using a discrete gamma approximation with 100 intervals (cf. Yang 1994).
Note that the gamma distribution described here is the distribution of μ across loci, not the more commonly used gamma distribution of rates across sites within a locus. If needed the latter can be approximated by Hidden Markov model methods in the case of DNA sequences.
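The integral in Equation 5 can be approximated by averaging per-rate likelihoods over equal-probability gamma categories. A minimal sketch (using category medians for simplicity, whereas Yang 1994 uses category means; the likelihood function passed in is a placeholder):

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def discrete_gamma_rates(alpha, ncat=100):
    """Representative rates for ncat equal-probability categories of a
    gamma distribution with shape alpha and mean 1. Category medians are
    used here; Yang (1994) uses category means, so this is a simplified
    variant of the discrete gamma approximation."""
    quantiles = (np.arange(ncat) + 0.5) / ncat
    return gamma_dist.ppf(quantiles, a=alpha, scale=1.0 / alpha)

def gamma_mixed_likelihood(single_rate_lik, alpha, ncat=100):
    """Approximate the gamma-mixture integral of Equation 5 by averaging
    the per-rate likelihood over the discrete categories."""
    rates = discrete_gamma_rates(alpha, ncat)
    return float(np.mean([single_rate_lik(r) for r in rates]))
```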
IMPLEMENTATION
To calculate the likelihood L(P) for the parameters P using (1) we would have to sum over all possible genealogies, including all topologies with all possible branch lengths and all possible migration scenarios. This is not possible, as even with two tips there is an infinite number of possible sequences of migration events with different branch lengths. We have to resort to an approximation of the likelihood function using a Metropolis-Hastings Markov chain Monte Carlo approach (Metropolis et al. 1953; Hammersley and Handscomb 1964; Hastings 1970; Kuhner et al. 1995). The derivation of the importance sampling function is shown in the appendix.
The parameter estimation is simple in principle: we have to find the maximum of the likelihood function, which has five or six dimensions, depending on whether μ is allowed to vary between loci. This is done by a modified damped Newton-Raphson method (Dahlquist and Björk 1974) using explicit equations for the first and second derivatives.
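The damping idea can be illustrated in one dimension (a generic sketch of damped Newton-Raphson maximization, not the multi-parameter optimizer used in the program; the quadratic objective below is purely illustrative):

```python
def damped_newton_max(f, df, d2f, x0, tol=1e-10, max_iter=100):
    """Damped Newton-Raphson maximization in one dimension: take the
    Newton step -f'(x)/f''(x) and halve it until the objective improves."""
    x = x0
    for _ in range(max_iter):
        step = -df(x) / d2f(x)
        lam = 1.0
        # damping: shrink the step until it actually increases f
        while f(x + lam * step) < f(x) and lam > 1e-12:
            lam *= 0.5
        x_new = x + lam * step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For a concave objective such as f(x) = −(x − 3)², the undamped Newton step already jumps to the maximum; the damping only engages when a full step would overshoot into a worse point.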
The genealogy sampler is of more interest. A large sample of genealogies G_{1}, G_{2},...,G_{m} is drawn from a distribution whose density is proportional to Prob(D|G) Prob(G∣P0) . This sampling scheme is biased toward genealogies that contribute more to the likelihood with a given parameter set P0 . We find the relative likelihood at other P values by dividing by the density from which we sampled the genealogies (see also appendix). If m is the number of sampled genealogies, we get
L(P)/L(P_0) ≅ (1/m) Σ_{i=1}^{m} [Prob(D|G_i) Prob(G_i|P)] / [Prob(D|G_i) Prob(G_i|P_0)] = (1/m) Σ_{i=1}^{m} Prob(G_i|P) / Prob(G_i|P_0).
(6)
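Equation 6 is typically evaluated in log space to avoid numerical underflow; a minimal sketch (the log-sum-exp device is a standard numerical trick, not something prescribed by the paper):

```python
import math

def relative_log_likelihood(logprior_P, logprior_P0):
    """log[L(P)/L(P0)] from Equation 6, given per-genealogy values of
    log Prob(G_i|P) and log Prob(G_i|P0) for the m sampled genealogies.
    Uses log-sum-exp for numerical stability."""
    m = len(logprior_P)
    ratios = [a - b for a, b in zip(logprior_P, logprior_P0)]
    biggest = max(ratios)
    return biggest + math.log(sum(math.exp(r - biggest) for r in ratios) / m)
```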
In the program Coalesce (Kuhner et al. 1995) a region of internal nodes in the genealogy is erased and rebuilt according to the coalescent model. This scheme is not applicable to migration models. We adopt a new scheme that is much more general and can be used in any extension of the coalescent model. The first genealogy for each locus is constructed with a UPGMA method (as implemented by M. K. Kuhner and J. Yamato in Felsenstein 1993), and the minimal necessary migration events are inserted using a Fitch parsimony algorithm (Fitch 1971). The times for coalescent and migration events on this first genealogy are calculated with (3), substituting a uniform random number for p_j and solving for u_j, with parameter values that are either guessed by the researcher or calculated using an F_{ST}-based method (see appendix).
The genealogies are sampled as follows (cf. Figure 2): A coalescent node or tip z on this genealogy is chosen at random, the lineage below it is dissolved, and the node is used as the starting point to simulate the ancestry using a migration-coalescent process through a series of time intervals (Figure 2). The other lineages do not change and are taken as given and form the partial genealogy G_{p}. For the further description of the rearrangement process we use “up” and “top” for being or moving closer to the tips of the genealogy and “down” and “bottom” for being or moving toward the root of the genealogy. Starting at the chosen node z (see Figure 2), we draw a new time interval by solving for u_{j} in
p_j′ = exp( −u_j [ M_i + 2k_{ji}′/Θ_i ] )
(7)
after substituting a uniform deviate for pj′ , where kji′ is the number of lineages of population i in the partial genealogy G_{p} during the time interval j. If this new time is further back in time than the bottom of the time interval j on the partial genealogy G_{p}, the active lineage advances down to that time and the simulation process starts again. When the newly drawn time is in the active time interval, an event, either a migration or coalescence, is drawn according to their relative probabilities of occurrence, e.g.,
Prob(migration event in population i) = M_i / ( M_i + 2k_{ji}′/Θ_i ).
(8)
If a migration event occurs, the process moves down to the time when the migration event happened. Below a migration event the active lineage is in the other population, and the migration-coalescent process continues (see also Figure 2C). After a coalescent event occurs, the process stops. This transformation of one genealogy into another allows any genealogy to change, over several steps, into any other, and therefore allows the process to search the whole genealogy space. On the other hand, the resimulation of a single lineage guarantees that the newly found genealogy is similar to the old genealogy.
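One step of this single-lineage resimulation can be sketched as follows (illustrative only; the bookkeeping for interval boundaries and for reaching the root is omitted):

```python
import math
import random

def propose_event(k_prime, theta, M, i, rng=random):
    """One step of the single-lineage resimulation (Equations 7-8).

    k_prime: number of partial-genealogy lineages of population i in the
    current time interval; i: population (0 or 1) of the active lineage;
    theta, M: two-element sequences of Theta_i and M_i.
    Returns (u, event): u is drawn by substituting a uniform deviate for
    p' in Equation 7 and solving for u; event is 'mig' or 'coal', chosen
    with the relative probabilities of Equation 8."""
    rate = M[i] + 2 * k_prime / theta[i]
    u = -math.log(rng.random()) / rate   # invert p' = exp(-u * rate)
    p_mig = M[i] / rate                  # Equation 8
    event = 'mig' if rng.random() < p_mig else 'coal'
    return u, event
```

If the drawn time falls below the bottom of the current interval on the partial genealogy, the caller would discard the event, advance to the next interval, and draw again, as described in the text.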
Figure 2.
—Transition from an old genealogy G_{o} to a new genealogy G_{n}. Dotted lines are the times of coalescences or migrations. (A) Genealogy G_{o} with migration. The black bar marks a migration from the white population to the black or vice versa; z is the node to be picked. (B) Partial genealogy G_{p} after drawing the coalescent node z at random and dissolving the branch to the next coalescent below. (C) Simulation of the coalescent with migration. One possible outcome with three consecutive steps is shown: (1) using Equation 7 a new time interval is drawn and a migration event from white to black is also drawn (Equation 8) and the lineage is extended down to that event; (2) a new time interval is drawn: it extends too far back, so the lineage advances down to the time j; (3) a new time interval is drawn with a coalescent event at its bottom end. The process stops at time k. (D) the final configuration G_{n}.
In cases where the lineages have not all coalesced by the time of the root of the partial genealogy, simulation on the active lineage and the bottom lineage of G_p continues until the lineages coalesce (Figure 3). When simulating on two lineages we draw new time intervals using (3) instead of (7). There is one exception to this rule at the bottom of the genealogy: if by chance the dissolved lineage is the bottommost lineage of one of the populations on G_o, we keep simulating with a single active lineage below the root of G_p until the former root of G_o is reached and then start to simulate on both of the two remaining lineages. The lineage between the root of G_p and the old root of G_o on the partial genealogy has a known history, so there is no need to simulate this history again.
Figure 3.
—Simulation on one or two lineages at the bottom of a genealogy. Striped lineages are active. On these lineages the migration-coalescent process is used to find new time intervals and events. (A) old genealogy G_{o}, with 1 indicating the branch to be cut to get genealogy B, and 2 indicating the branch to be cut to get C. (B) The root R is below the cut-point D, and so below the root we need to calculate the migrations and coalescences for two lineages, because there is no previous information present. (C) In this genealogy a single lineage simulation is needed, using the information present between R and D. Below D we need to simulate on both remaining lineages.
A change to the newly found genealogy G_{n} is accepted with probability
r = min(1, r_M r_H),
(9)
where r_{M} is the Metropolis term
r_M = [Prob(D|G_n) Prob(G_n|P)] / [Prob(D|G_o) Prob(G_o|P)]
(10)
and r_{H} is the Hastings term
r_H = Prob(G_o|G_n) / Prob(G_n|G_o).
(11)
This acceptance probability r is based on the work of Hastings (1970). In our specific implementation it can be simplified. The scheme for changing the genealogy creates a new genealogy from an old one by first randomly removing a branch with all its migration events, which creates a partial genealogy G_p. The probability of this particular change is governed only by the number of coalescent nodes in the genealogy, and this number is constant, so the probabilities Prob(G_p|G_n) and Prob(G_p|G_o) are equal. For a specific triplet of original, partial, and new genealogies G_o, G_p, and G_n
Prob(G_n|G_o) = Prob(G_p|G_o) Prob(G_n|G_p)
(12)
holds. The resimulation process (7)-(8) is driven only by the parameters P, so Prob(G|G_p) is proportional to Prob(G|P), and therefore
r_H = [Prob(G_p|G_n) Prob(G_o|G_p)] / [Prob(G_p|G_o) Prob(G_n|G_p)] = Prob(G_o|P) / Prob(G_n|P).
(13)
This results in cancellation of the Prob(G∣P) terms in r, so that we have
r = min( 1, Prob(D|G_n) / Prob(D|G_o) ).
(14)
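In log space, the acceptance rule (14) is a one-liner (a sketch of the standard Metropolis step, not the program's actual code):

```python
import math
import random

def accept_new_genealogy(loglik_new, loglik_old, rng=random):
    """Metropolis acceptance of Equation 14 in log space:
    accept with probability min(1, Prob(D|Gn)/Prob(D|Go))."""
    if loglik_new >= loglik_old:
        return True
    return rng.random() < math.exp(loglik_new - loglik_old)
```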
If the genealogy is accepted, it is the starting point for a new cycle; otherwise the previous one continues to be used. After having sampled a large number of genealogies we estimate the parameters by maximizing (6). In theory a single long Markov chain would be enough to find the genealogies that contribute most to the likelihood, but if we start from a very bad P_0 it can take an excessively long time to find this region. If the likelihood of a new genealogy is only slightly better than that of the old one, the search moves nearly randomly and can spend a long time in regions that do not contribute much to the final likelihood. The sampler will move more quickly if the likelihood surface is steep. If we start with bad parameters there is some chance that the likelihood surface is locally very flat. To overcome this, we restart a new Markov chain with the improved P and the last genealogy of the most recent chain. Our implementation uses a number of short chains, in which the parameter estimates are based on only a few genealogies, to find regions with good parameter values. It then uses an arbitrary number of long chains to get good estimates of the parameters. All but the last chain are discarded and not used for the final result. Kuhner et al. (1998) found no decrease in performance compared to a scheme that takes several chains into account for the final result. If we run this genealogy sampling process long enough and with enough chains, the sampler will find its way to the region with the biggest contribution to the likelihood and therefore approximate the maximum-likelihood estimate of our parameters P. The main problem with importance sampling schemes is that little is known about how to be sure that we have sampled enough genealogies from the region that contributes most to the likelihood function. It is common practice in our group to run 10 “short” chains and then 2 “long” chains.
Using random values for the first parameter set, we have found that for a given data set the results converge to a value that does not depend on these initial parameter values once the total number of sampled genealogies exceeds 30,000 (data not shown); this corresponds to 10 short chains with 1000 sampled genealogies each and 2 long chains with 10,000 each. This seems sufficient to deliver acceptable estimates for simulated data and averages of those, although we would recommend choosing a better starting parameter than a random guess and running the program at least twice as long. The number of genealogies needed for an accurate estimate certainly depends on many factors, such as the starting parameters, the first genealogy, the number of sampled individuals, the variability in the data set, and the number of migration events.
TABLE 1
Simulation of 100 single-locus datasets with 25 sampled individuals for each population and 500 or 1000 bp, respectively, for different known parameter combinations
TABLE 2
Influence of number of sites and number of loci on parameter estimates
SIMULATION RESULTS
We have simulated genealogies with known true parameters P_T using a coalescent with migration and evolved data according to our data models along the branches, starting from the root of the genealogy (Hudson 1983). The data that resulted at the tips were used as the input data for our method. The mutation rate was held constant for all simulations except one, in which we used a gamma-distributed mutation rate with shape parameter α = 1.0. In all simulations we used 10 short chains sampling 1000 genealogies (keeping 100 of these for the parameter estimation) and 3 long chains sampling 10,000 genealogies (using 1000 of them). The last chain delivered the parameters shown in the tables; all other chains were discarded.
For the comparison with an F_{ST}-based estimator (see appendix) we used a symmetrical parameter setup for the simulations, Θ_{1} = Θ_{2} and γ_{1} = γ_{2}. However, the parameters that were estimated were not constrained to be symmetric.
Our simulations generally recover the true Θ_{T} very well (Table 1). With 1000 bp the means are better than with 500 bp. The means for γ are not close to the true γ_{T}. With low Θ_{T} values γ is grossly overestimated. The upward bias in γ is huge with 500-bp data and shrinks when longer sequences are used (Tables 1 and 2).
Sequences in two populations can be similar to each other due to migration or due to low variation caused by a small Θ. The simulated datasets with low Θ sometimes show by chance no variation at all in short sequences. In these cases one would expect the estimate of Θ to be 0.0 and the estimate of M to be indeterminate. The likelihood surface for such data is very flat, and some of the runs (Table 1) failed to find a maximum. These cases were omitted from the tables. If the migration parameter γ is high, it is consistently underestimated when Θ is high. This is caused by our allowing only a limited number of migration events per genealogy in the Markov chain Monte Carlo runs (1000 migration events per genealogy), so that solutions with higher numbers of migration events were never proposed. Our method delivers similar estimates for Θ and similar or better estimates for γ than an F_{ST}-based estimator (Table 3).
The number of cases in which the F_{ST}-based estimator (Table 3) is usable is often dramatically lower than with the ML estimator (Table 1; see also appendix). For example, with Θ_T = 0.001 and γ_T = 10 the F_{ST}-based estimator could be used in only 45 out of 100 runs. This behavior does not seem to improve with longer sequences. Increasing the number of sites and the number of loci helps to reduce the variance of the ML estimates (Table 2 and Figure 4). Estimates of γ from datasets with very long sequences are biased downward, smaller than γ_T. This is the result of low acceptance rates during a run: the starting genealogy is so well defined by the data that the chains in our simulation runs were too short and too few different migration scenarios were tried. The estimation of nonsymmetrical true parameters works quite well (Table 4). We encounter biases similar to those with symmetrical parameters: with low Θ the γ estimates are biased upward, and with high Θ they are biased downward.
TABLE 3
F_{ST}-based estimator with three parameters (Θ ≡ Θ_{1} ≡ Θ_{2}, M_{1}, M_{2}, see also appendix)
The assumption that the mutation rate varies according to a gamma distribution adds more noise to the estimation. All biases are similar to the ones shown in the tables. The estimation of the shape parameter 1/α is certainly not very precise with only a few loci (Figure 5). One of the runs shown in Figure 5 obviously contains almost no information about 1/α, and its log-likelihood curve is nearly flat, so that almost any value of 1/α can be accepted for this dataset. The increase of the log-likelihood values at very small 1/α, far from the maximum, is due to imprecision in the calculation using the discrete gamma approximation. In evaluations using a more exact but much slower quadrature scheme (Sikorski et al. 1984), the peaks of the log-likelihood curves are at similar values, but the log-likelihoods for very low 1/α values decrease monotonically rather than, as with the discrete gamma approximation, increasing slightly.
DISCUSSION
The estimation of the migration parameter γ seems to be rather imprecise, and this is true for both migrate and F_{ST} (e.g., Tables 1 and 3). The standard deviations for γ are large for single-locus estimates. More sites per locus give better estimates but the improvement of the variance is small, and given the obstacles of sequencing long stretches of DNA for several individuals one might better invest in the investigation of an additional unlinked locus. Each new locus has its own genealogy that is not correlated with that of the other loci and so increases the power of estimating parameters more, as we can see in Table 2.
There is considerable bias in the estimates of γ when datasets either have no or little genetic variation or have very high variation. Datasets with almost no variation inflate M and produce an upward bias. The likelihood surfaces become very flat because of the lack of variation, and almost any migration parameter value is possible. These problems can be overcome by adding more variation, which for low true population size Θ_{T} means either adding more base pairs or adding more individuals. With high Θ_{T} and high true migration parameter γ_{T}, the estimates are biased downward because of our need to truncate the number of migration events in the reconstructed genealogies at some high arbitrary number. If we did not bias against high numbers of migration events, the program could add more and more migration events and would eventually crash the computer by running out of memory or would run too long.
In single runs we can also see a “fatal attraction” to 0.0 if the true parameters are very close to 0.0. The genealogy sampler uses the current parameter values to propose coalescences and migration events. If one or more of these parameters are very close to 0.0, it can be seen by inspecting (2) that our procedure will either almost always propose an immediate coalescent event, if one of the Θ is close to 0.0, or rarely propose an immigration event, with very low γ. If a chain never proposes any instances of an event, the rate of that event will be estimated as 0.0, and this situation may persist indefinitely: a fatal attraction. Even if the parameters are prevented from becoming exactly zero, it may take large amounts of simulation to escape these values.
Because long sequences define the genealogy very well, our Markov chain Monte Carlo sampler will often reject newly proposed genealogies and only a few different genealogies are sampled for the parameter estimation. Therefore, the program does not readily explore the whole migration-genealogy space and tends to stick to the good starting genealogy. This generates a bias in M because we have started with a genealogy that has a minimal number of migration events on it (Table 2). We may be able to overcome this mixing problem by adding a “heating” scheme to our importance sampling. Currently the only way to overcome this is to run the chains much longer than the defaults (for practical guidance see Beerli 1997).
Figure 4.
—Example of estimation of population parameters for two populations with 1, 3, and 10 loci. The graphs are cross-sections through the parameter space passing through the peak of the likelihood surface. The axes are on a logarithmic scale. The dashed lines give the true parameter values: Θ_1 = 0.05, Θ_2 = 0.005, γ_1 = 10.0, γ_2 = 1.0. The gray area is an approximate 95% confidence set based on a likelihood-ratio test criterion, defined as the range of parameter values with log-likelihood values equal to or higher than the maximum minus 0.5 × 9.4877.
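The constant 9.4877 in the caption is the 95% quantile of a χ² distribution with 4 degrees of freedom (presumably one per estimated parameter), which can be checked with scipy:

```python
from scipy.stats import chi2

# 95% quantile of chi-square with 4 degrees of freedom
threshold = chi2.ppf(0.95, df=4)
print(round(threshold, 4))  # 9.4877

def in_confidence_set(loglik, max_loglik, df=4):
    """Parameter values whose log-likelihood is within 0.5 * threshold of
    the maximum fall inside the approximate 95% confidence set."""
    return loglik >= max_loglik - 0.5 * chi2.ppf(0.95, df)
```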
Summing over loci assuming that the mutation rate is constant between loci delivers much better estimates of the parameters than single-locus estimation. This model is rather unnatural for real data, especially microsatellite data or electrophoretic data. Summing over loci using a gamma-distributed mutation rate and estimating the shape parameter 1/α increases the variation of the parameters but allows more realistic estimation of the parameters with several loci.
TABLE 4
Simulation with unequal known parameters of 100 two-locus datasets with 25 sampled individuals for each population and 500 bp per locus
Conclusion: Our method based on coalescents with migration delivers similar or better estimates than the F_{ST} method based on the expectations derived by Maynard Smith (1970), which also produces good results for certain data. Qualitative comparison with methods other than the F_{ST} measure, for example that of Rannala and Hartigan (1996) or the cladistic approach of Slatkin and Maddison (1989; see also Hudson et al. 1992, their Table 1), shows that all these methods have a tendency to overestimate γ when Θ is not very high, and all methods have quite large variances. The maximum-likelihood estimators have smaller variances than the other approaches. But the biggest drawback of the F_{ST} methods compared to ML methods seems to be their inability to estimate population size and migration rate for data where the homozygosity between populations happens to be equal to or less than the homozygosity within a population (see also Hudson et al. 1992).
Direct comparison of the different methods is currently difficult because each one estimates a different number of parameters or uses a different migration model. We are extending our model to accept more populations and different migration schemes, which should facilitate direct comparison. We believe that taking the history of mutation into account and summing over all possible genealogies gains the most information from the data, and that our approach and the approaches of Nath and Griffiths (1996) and Bahlo and Griffiths (http://www.maths.monash.edu.au/~mbahlo/mpg/gtree.html/) will produce better estimates than other approaches.
Figure 5.
—Log-likelihood curves of shape parameter 1/α. The true 1/α is 1.0. The curves were evaluated with Θ and γ parameters from the maximum-likelihood estimate. Left, 10 runs with 10 loci each. Right, sum of the log-likelihood curves over 10 runs with 10 loci each.
The methods described here are available in the program Migrate over the World Wide Web at http://evolution.genetics.washington.edu/lamarc.html/. We distribute it as C source code and also as binaries for several platforms including PowerMacintoshes and Windows 95/98 for Intel-compatible processors.
APPENDIX
Derivation of the importance sampling function: We want to calculate Equation 1, but we would need to sum over all possible genealogies. The function can be transformed into an importance sampling function by treating L(P) as an expectation and sampling from a distribution whose density is g instead of the correct density f,
L(P) = Prob(D|P) = Σ_G Prob(G|P) Prob(D|G)
(A1)
= E_f(h) = Σ_G h f
(A2)
= Σ_G h (f/g) g = E_g(h f/g).
(A3)
Suppose that
g = [Prob(G|P_0) Prob(D|G)] / [Σ_G Prob(G|P_0) Prob(D|G)],
(A4)
h=Prob(D∣G),
(A5)
and
f = Prob(G|P) / Σ_G Prob(G|P) = Prob(G|P),
(A6)
because
Σ_G Prob(G|P) = 1;
(A7)
then we have
E_f(h) = Σ_G [ Prob(G|P) / Σ_G Prob(G|P) ] Prob(D|G) = Σ_G Prob(G|P) Prob(D|G)
(A8)
= E_g((f/g) h) = E_g[ Prob(G|P) Prob(D|G) / ( Prob(G|P_0) Prob(D|G) / Σ_G Prob(G|P_0) Prob(D|G) ) ],
(A9)
so that
L(P) = L(P_0) E_g( [Prob(G|P) Prob(D|G)] / [Prob(G|P_0) Prob(D|G)] ),
(A10)
where
L(P_0) = Σ_G Prob(G|P_0) Prob(D|G).
The expectation can be estimated by its average over the simulation
L(P)/L(P_0) ≅ (1/m) Σ_{i=1}^{m} Prob(G_i|P) / Prob(G_i|P_0).
(A11)
In Markov chain Monte Carlo approaches, the goal is to sample from the posterior and concentrate the sampling on regions with higher probabilities (Hammersley and Handscomb 1964; Chib and Greenberg 1995; Kass et al. 1998). In our scheme we are approximating the target function Prob(G|P) Prob(D|G) with Prob(G|P_0) Prob(D|G); this may be a rather crude approximation when P_0 is very different from P, but after several updating chains the target distribution and the sampling distribution are very similar. When Θ and Θ_0 are very similar, our approach is nearly optimal if it is run long enough to avoid problems of autocorrelation. Sampling from the prior alone, Prob(G|P) for example, is very inefficient, as was shown by one of us in 1988 (J. Felsenstein, unpublished data).
Data models: Our migration estimation divides naturally into two parts: the calculation of Prob(G | P) and the calculation of Prob(D | G). This makes it easy to implement models for different kinds of data: the data model influences only the latter part, the genealogy likelihood calculation.
For sequences we use the model of change originated by one of us (J.F.), implemented in PHYLIP 3.2 in 1984, described in Kishino and Hasegawa (1989), and referred to as F84 by Swofford et al. (1996). It is a variant of the Kimura two-parameter model that allows for a transition/transversion bias and unequal base frequencies. In Migrate the model is the same as in PHYLIP 3.5 and also includes rate variation among sites (Felsenstein and Churchill 1996), although the inclusion of rate variation among sites was not tested.
For microsatellite data, we have implemented a one-step mutation model in which the probability of making a net change of i steps in time interval u is
$$\mathrm{Prob}(i \mid u) = \sum_{k=0}^{\infty} e^{-u}\, \frac{(u/2)^{i+2k}}{(i+k)!\,k!}.$$
(A12)
In addition to the stepwise mutation model a Brownian motion approximation is available; this fast approximation to the exact model is described elsewhere.
Our “infinite” allele model is approximated with a (k + 1)-allele model. The observed alleles are A = (a_1, a_2, …, a_k); all unobserved alleles are pooled into a_{k+1}.
$$\mathrm{Prob}(a_i \mid a_i, u) = e^{-u} + (1 - e^{-u})\, f_{a_i}, \qquad \mathrm{Prob}(a_j \mid a_i, u) = (1 - e^{-u})\, f_{a_j},$$
$$f_{a_z} = \begin{cases} \dfrac{1}{k+1} & \text{if } a_z \in A, \\[6pt] 1 - \dfrac{k}{k+1} & \text{if } a_z \notin A. \end{cases}$$
(A13)
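A sketch of the transition probability in (A13), under our reading that a lineage either does not mutate (probability e^{-u}) or mutates to an allele drawn from the frequencies f_{a_z}; the function and variable names are ours, not Migrate's.

```python
from math import exp

def allele_transition_prob(new, old, u, observed):
    """Equation A13 under the (k+1)-allele approximation: the k
    observed alleles each have frequency 1/(k+1), and all unobserved
    alleles are pooled into one class with the remaining frequency."""
    k = len(observed)
    f_new = 1.0 / (k + 1) if new in observed else 1.0 - k / (k + 1)
    p = (1.0 - exp(-u)) * f_new   # mutation, landing on allele `new`
    if new == old:
        p += exp(-u)              # no mutation during the interval u
    return p
```

Each row of the implied (k + 1) × (k + 1) transition matrix sums to 1, which is an easy check on the formulas.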
F_{ST} calculations: We use a calculation based on F_W^{(i)}, the homozygosity within population i, and F_B, the homozygosity between populations. For sequences, F_W and F_B can be calculated using the heterozygosity calculations described in Slatkin and Hudson (1991). The original recurrence equations for F_W and F_B were developed by Maynard Smith (1970) and Maruyama (1970); we use a version based on Nei and Feldman (1972). The exact equations are simplified by dropping quadratic terms such as μ², m², and μm, and terms divided by the number of individuals in a population (e.g., m/N). For two populations in equilibrium we get the equation system
$$\begin{aligned}
F_W^{(1)} &= \frac{1}{2N_1} + \left(1 - 2\mu - 2m_1 - \frac{1}{2N_1}\right) F_W^{(1)} + 2 m_1 F_B, \\
F_W^{(2)} &= \frac{1}{2N_2} + \left(1 - 2\mu - 2m_2 - \frac{1}{2N_2}\right) F_W^{(2)} + 2 m_2 F_B, \\
F_B &= F_B \left(1 - 2\mu - m_1 - m_2\right) + m_1 F_W^{(1)} + m_2 F_W^{(2)},
\end{aligned}$$
(A14)
where μ is the mutation rate, m_i is the immigration rate into population i, and N_i is the subpopulation size. To fit these calculations into our framework we replace 4N_iμ with Θ_i and m_i/μ with M_i. We cannot solve for all four parameters we would need for an accurate comparison with our likelihood method. We can solve the equation system (A14) by assuming either that the population sizes are the same in both subpopulations (SN) or that the migration rates are symmetric (SM). With the SN approach, solving for Θ ≡ Θ_1 ≡ Θ_2, M_1, and M_2, we get
$$\Theta = \frac{2 - F_W^{(1)} - F_W^{(2)}}{2F_B + F_W^{(1)} + F_W^{(2)}}, \qquad
M_1 = \frac{2 F_B F_W^{(1)} + F_W^{(1)} - F_W^{(2)} - 2F_B}{\left(F_W^{(1)} - F_B\right)\left(F_W^{(1)} + F_W^{(2)} - 2\right)}, \qquad
M_2 = \frac{2 F_B F_W^{(2)} + F_W^{(2)} - F_W^{(1)} - 2F_B}{\left(F_W^{(2)} - F_B\right)\left(F_W^{(1)} + F_W^{(2)} - 2\right)}.$$
(A15)
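As a check on the SN solution (A15), one can solve the linearized equilibrium system (A14) for chosen values of N, μ, m_1, and m_2 and verify that (A15) recovers Θ = 4Nμ and M_i = m_i/μ. The sketch below does this with a hand-rolled Cramer's rule; all function names are our own illustrations.

```python
def equilibrium_F(N, mu, m1, m2):
    """Solve the linearized equilibrium system (A14), with N1 = N2 = N,
    for (FW1, FW2, FB) by Cramer's rule."""
    # Rearranged (A14) at equilibrium:
    #   (2*mu + 2*m1 + 1/(2N)) FW1            - 2*m1 FB = 1/(2N)
    #   (2*mu + 2*m2 + 1/(2N)) FW2            - 2*m2 FB = 1/(2N)
    #   -m1 FW1 - m2 FW2 + (2*mu + m1 + m2) FB          = 0
    a = [[2*mu + 2*m1 + 1/(2*N), 0.0, -2*m1],
         [0.0, 2*mu + 2*m2 + 1/(2*N), -2*m2],
         [-m1, -m2, 2*mu + m1 + m2]]
    b = [1/(2*N), 1/(2*N), 0.0]

    def det3(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

    d = det3(a)
    sols = []
    for col in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][col] = b[r]
        sols.append(det3(m) / d)
    return tuple(sols)

def sn_estimates(fw1, fw2, fb):
    """Equation A15: Theta (= 4*N*mu) and M1, M2 (= m_i/mu)."""
    theta = (2 - fw1 - fw2) / (2*fb + fw1 + fw2)
    m1 = (2*fb*fw1 + fw1 - fw2 - 2*fb) / ((fw1 - fb) * (fw1 + fw2 - 2))
    m2 = (2*fb*fw2 + fw2 - fw1 - 2*fb) / ((fw2 - fb) * (fw1 + fw2 - 2))
    return theta, m1, m2
```

Because (A15) was derived exactly from the linearized system (A14), the round trip should agree to floating-point precision.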
The SM approach with Θ_{1}, Θ_{2}, and M ≡ M_{1} ≡ M_{2} results in
$$\Theta_1 = \frac{\left(1 - F_W^{(1)}\right)\left(F_W^{(1)} + F_W^{(2)} - 2F_B\right)}{\left(F_W^{(1)}\right)^2 + F_W^{(1)} F_W^{(2)} - 2F_B^2}, \qquad
\Theta_2 = \frac{\left(1 - F_W^{(2)}\right)\left(F_W^{(1)} + F_W^{(2)} - 2F_B\right)}{\left(F_W^{(2)}\right)^2 + F_W^{(1)} F_W^{(2)} - 2F_B^2}, \qquad
M = \frac{2F_B}{F_W^{(1)} + F_W^{(2)} - 2F_B}.$$
(A16)
Comparing (A15) and (A16), one can see that the estimate of M_i in (A15) breaks down when F_B ≥ F_W^{(i)}, and that M in (A16) can be reasonably estimated only if 2F_B ≤ F_W^{(1)} + F_W^{(2)}. For the comparison with our maximum-likelihood estimator (Table 3) we used SN because of our emphasis on the estimation of migration rates. There are differences between the two F_{ST} methods even when the true parameters are symmetrical (data not shown), but we do not have the impression that one F_{ST} method works better than the other over the range used for the comparison with Migrate (Table 5).
TABLE 5
F_{ST}-based parameter estimation
Acknowledgments
We thank Mary K. Kuhner and Jon Yamato for many discussions, much help in debugging, and comments on the manuscript. This work was funded by National Science Foundation grant no. BIR 9527687 and National Institutes of Health grant no. GM51929, both to J.F., and in part by a Swiss National Funds fellowship to P.B.
- Received January 5, 1998.
- Accepted February 23, 1999.
- Copyright © 1999 by the Genetics Society of America