A new method for the estimation of migration rates and effective population sizes is described. It uses a maximum-likelihood framework based on coalescence theory. The parameters are estimated by Metropolis-Hastings importance sampling. In a two-population model this method estimates four parameters: the effective population size and the immigration rate for each population relative to the mutation rate. Summarizing over loci can be done by assuming either that the mutation rate is the same for all loci or that the mutation rates are gamma distributed among loci but the same for all sites of a locus. The estimates are as good as or better than those from an optimized FST-based measure. The program is available on the World Wide Web at http://evolution.genetics.washington.edu/lamarc.html/.
SEVERAL methods for the estimation of migration rates between subpopulations have been proposed. They fall into two very different approaches: (1) marking individuals, tracking their individual movements, and extrapolating these movements to migration rates; or (2) surveying genetic markers in the populations of interest and calculating a migration rate from allele frequencies or sequence differences. Of course, one should be aware that these two approaches do not estimate the same quantity: approach 1 estimates the actual “instantaneous” migration rate of individuals, whereas approach 2 reflects an average over a time period whose length is determined by the rate of mutation per generation of the locus under study or by the time scale of genetic drift. The genetic approach tends to give a lower estimate than the individual-tracking approach because it registers only changes that become established in the subpopulation gene pool.
Current estimation methods for genetic data include those related to FST (e.g., Slatkin 1991; Slatkin and Hudson 1991), the rare-allele approach of Slatkin (1985), a maximum-likelihood method using gene frequency distributions (Rannala and Hartigan 1996; Tufto et al. 1996), and approaches based on coalescent theory (Kingman 1982a,b), such as the cladistic approach of Slatkin and Maddison (1989), the method outlined in Wakeley (1998), and maximum likelihood using coalescent theory (Nath and Griffiths 1993, 1996). We describe here a new maximum-likelihood method based on coalescent theory.
Our method integrates over all possible genealogies and migration events. This should be superior to pairwise estimators such as those using FST or related statistics (cf. Felsenstein 1992b) and also more powerful than the cladistic approach of Slatkin and Maddison (1989), which requires knowledge of the true genealogy. Our approach estimates parameters similar to those of the program of Bahlo and Griffiths (http://www.maths.monash.edu.au/~mbahlo/mpg/gtree.html/), which is based on the work of Griffiths and Tavaré (1994) and Nath and Griffiths (1996). It differs in the way we search the genealogy space, and our method supports mutation models for different types of data: an infinite-allele model for electrophoretic markers, a one-step model for microsatellites, and a finite-sites model for nucleotide sequences. The finite-sites model is more useful than the infinite-sites model, which forces the researcher to discard data when there are multiple mutations at the same site.
We propose a method to make a maximum-likelihood estimate of population parameters for geographically subdivided populations. The general outline of such estimates involves extending coalescence theory (Kingman 1982a,b) to include migration events. Migration models with coalescents were first developed by Takahata (1988) and Takahata and Slatkin (1990) for two gene copies and discussed more generally for n gene copies in two populations by Hudson (1990), with generalization to multiple populations and different models of migration by Notohara (1990). Nath and Griffiths (1993, 1996) used this migration-coalescent process for maximum-likelihood estimation of one parameter, the effective number of migrants 4Nem, where Ne is the effective population size and m is the migration rate per generation.
Our migration model consists of two populations with effective population sizes N1 and N2, which need not be equal. The populations exchange migrants at rates m1 and m2 per generation (Figure 1).
Because we observe only genetic differences and do not know the mutation rate μ, we must absorb μ into the parameters: we use as our parameters Θi = 4Niμ and γi = mi/μ, and occasionally write 𝒫 = (Θ1, Θ2, γ1, γ2). We make the following assumptions: diploid individuals in each population reproduce according to any standard population genetic model that converges to the same diffusion equation as a Wright-Fisher model; populations have a constant size through time, so that they neither grow nor shrink; and the mutation rate μ is constant. These are also the assumptions used by Kingman (1982a) for coalescence theory. We use the convention that with sequence data μ is the rate of mutation per generation per site, whereas with microsatellite or enzyme electrophoretic data μ is the rate of mutation per generation per locus. To emphasize this distinction between sites and loci, we use a capital Θ in the first case and a small θ in the latter. The mutation rate μ can vary among loci according to a gamma distribution. In addition, we need to assume that the two populations exchange migrants at constant rates per generation.
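To make the scaling concrete, here is a minimal Python sketch (a hypothetical helper, not part of migrate) that converts raw demographic quantities into mutation-scaled parameters, assuming the scalings Θi = 4Niμ and γi = mi/μ:

```python
def scaled_parameters(ne, m, mu):
    """Return (Theta, gamma) for one population: Theta = 4*Ne*mu absorbs the
    mutation rate into the effective population size, and gamma = m/mu
    expresses the immigration rate relative to the mutation rate."""
    theta = 4.0 * ne * mu
    gamma = m / mu
    return theta, gamma
```

For example, Ne = 10,000, m = 10^-4, and μ = 10^-5 per generation give Θ = 0.4 and γ = 10.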
The likelihood formula for this family of methods (Felsenstein 1988, 1992a; Kuhner et al. 1995, 1998) is based on the product of the genealogy likelihood Prob(D | G) (e.g., Felsenstein 1973; Swofford et al. 1996) and the prior probability Prob(G | 𝒫) of the coalescent genealogy (Kingman 1982a,b),

  L(𝒫) = Prob(D | 𝒫) = Σ_G Prob(D | G) Prob(G | 𝒫),   (1)

where the parameters are 𝒫 = (Θ1, Θ2, γ1, γ2).
Takahata and Slatkin (1990) found that Kingman’s results could not be extended to cases with geographical structure: they were unable to obtain the distribution of times to coalescence in the presence of migration. We have avoided this difficulty by having the genealogy G specify not only the coalescences but also the times and places of migration events. With this information in G, its probability density for two populations becomes a product of terms for the intervals between events in the genealogy,

  Prob(G | 𝒫) = ∏_{j=1}^{T} p_j,   (2)

where

  ∏_{j=1}^{T} p_j = ∏_{i=1}^{2} (2/Θ_i)^{v_i} γ_i^{w_i} ∏_{j=1}^{T} exp( -u_j Σ_i [ k_ji (k_ji - 1)/Θ_i + k_ji γ_i ] )   (3)

is obtained by computing for each interval the probability that nothing happens during the interval times the probability of the particular event at the bottom (beginning) of the interval. The variable v_i is the total number of coalescences in subpopulation i, and w_i is the sum of the numbers of migration events into subpopulation i over all T time intervals. Furthermore, p_j is the probability term for time interval j (that no event occurs during the interval, times the density of the event that ends it), u_j is the length of time interval j, and k_ji is the number of lineages in subpopulation i during time interval j.
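The interval-by-interval structure of (2) and (3) can be sketched in a few lines of Python; the event-list layout below is a hypothetical illustration, not migrate's internal representation:

```python
import math

def log_genealogy_prior(intervals, thetas, gammas):
    """log Prob(G | Theta, gamma) for two populations, per Equations (2)-(3).

    `intervals` is a list of (u, k, event): interval length u in mutation-scaled
    time, k = (k0, k1) lineage counts per population during the interval, and
    the event ending it, ('coal', i) or ('mig', i) for destination population i.
    """
    logp = 0.0
    for u, k, (etype, i) in intervals:
        # probability that nothing happens during the interval ...
        rate = sum(ki * (ki - 1) / thetas[p] + ki * gammas[p]
                   for p, ki in enumerate(k))
        logp += -u * rate
        # ... times the density of the event at the bottom of the interval
        if etype == 'coal':
            logp += math.log(2.0 / thetas[i])
        else:  # migration into population i
            logp += math.log(gammas[i])
    return logp
```

With a single interval of length 1 holding two lineages in population 1 (Θ1 = 1, no migration), this reduces to the familiar two-lineage coalescent density.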
If we extend the parameter estimation from a single locus to many unlinked loci, we face the additional problem that, although μ may be constant per locus, it can vary between loci. Neglecting variation of μ, combining loci in the likelihood framework is simple:

  L(𝒫) = ∏_{l=1}^{L} L_l(𝒫).   (4)

On the other hand, if we assume that μ follows a gamma distribution and therefore can vary between loci, we need to include an additional parameter, the shape parameter α of this distribution. The other parameter of the gamma distribution is a scale parameter, which can be chosen conveniently because, parameterized in this way, 1/α is the squared coefficient of variation of the mutation rate regardless of scale. We cannot estimate the mean mutation rate of this gamma distribution directly. As Ne is the same for all loci, we can use an arbitrary Θ (we have chosen the effective population size of the first population) and scale the other parameters relative to it. We replace μ in the parameters Θi and γi with a new variable, the mutation rate x, and define Θx as the mutation rate x scaled by the effective size of the first population. We integrate over all possible values of Θx instead of x; this does not change the shape of our likelihood surface, because the factor relating Θx to x is constant, so that we have

  L(𝒫, α) = ∏_{l=1}^{L} ∫_0^∞ g(Θx; α, Θ1) L_l(𝒫x) dΘx,   (5)

where g(Θx; α, Θ1) is the gamma density with shape parameter α and mean Θ1, and 𝒫x is the parameter set 𝒫 with Θ1 replaced by Θx and the other parameters rescaled accordingly. This integral over all mutation rate values is implemented using a discrete gamma approximation with 100 intervals (cf. Yang 1994).
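How unlinked loci combine under (4), and under a discrete approximation to the integral in (5), can be sketched as follows; the callable-per-locus interface is an illustration, and the (rate, weight) categories are assumed to come from a discrete gamma discretization (cf. Yang 1994), supplied externally:

```python
import math

def multilocus_loglike(loglike_per_locus, categories=None):
    """Combine unlinked loci.

    With a constant mutation rate (Equation 4), pass categories=None; each
    entry of `loglike_per_locus` is a callable evaluated at relative rate 1.
    With gamma-distributed rates (Equation 5), `categories` is a list of
    (rate, weight) pairs from a discrete gamma approximation; each locus
    likelihood is averaged over categories before multiplying across loci.
    """
    total = 0.0
    for logL in loglike_per_locus:
        if categories is None:
            total += logL(1.0)
        else:
            # log of the weighted average of per-category likelihoods
            terms = [math.log(w) + logL(r) for r, w in categories]
            m = max(terms)
            total += m + math.log(sum(math.exp(t - m) for t in terms))
    return total
```

The log-sum-exp step keeps the per-category averaging numerically stable when individual likelihoods are very small.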
Note that the gamma distribution described here is the distribution of μ across loci, not the more commonly used gamma distribution of rates across sites within a locus. If needed the latter can be approximated by Hidden Markov model methods in the case of DNA sequences.
To calculate the likelihood for the parameters using (1) we would have to sum over all possible genealogies, including all topologies with all possible branch lengths and all possible migration scenarios. This is not possible, as even with two tips there is an infinite number of possible sequences of migration events with different branch lengths. We have to resort to an approximation of the likelihood function using a Metropolis-Hastings Markov chain Monte Carlo approach (Metropolis et al. 1953; Hammersley and Handscomb 1964; Hastings 1970; Kuhner et al. 1995). The derivation of the importance sampling function is shown in the appendix.
The parameter estimation is simple in principle: we have to find the maximum of the likelihood function, which has five or six dimensions, depending on whether μ is allowed to vary between loci. This is done by a modified damped Newton-Raphson method (Dahlquist and Björck 1974) using explicit equations for the first and second derivatives.
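The maximization step can be illustrated with a one-dimensional damped Newton-Raphson sketch (the actual implementation is multivariate, over the five or six parameters, with analytic derivatives):

```python
def damped_newton_max(f, df, d2f, x0, tol=1e-8, max_iter=100):
    """Damped Newton-Raphson maximization in one dimension: take the Newton
    step -f'(x)/f''(x) and halve it until the objective actually improves."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        h = d2f(x)
        # Newton step toward a maximum; fall back to a plain gradient step
        # if the curvature has the wrong sign
        step = -df(x) / h if h < 0 else df(x)
        damp = 1.0
        while damp > 1e-10:
            xn = x + damp * step
            if f(xn) > fx:
                break
            damp *= 0.5
        else:
            return x  # no uphill step found: treat x as the maximum
        x, fx = xn, f(xn)
        if abs(damp * step) < tol:
            return x
    return x
```

The damping makes the iteration robust on flat or irregular likelihood surfaces, where an undamped Newton step can overshoot.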
The genealogy sampler is of more interest. A large sample of genealogies G1, G2, ..., Gm is drawn from a distribution whose density is proportional to Prob(D | G) Prob(G | 𝒫0), where 𝒫0 is the set of driving parameter values. This sampling scheme is biased toward genealogies that contribute more to the likelihood under 𝒫0. We find the relative likelihood at other parameter values 𝒫 by dividing by the density from which we sampled the genealogies (see also appendix). If m is the number of sampled genealogies, we get

  L(𝒫)/L(𝒫0) ≈ (1/m) Σ_{i=1}^{m} Prob(Gi | 𝒫) / Prob(Gi | 𝒫0).   (6)
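Because the data likelihood Prob(D | G) appears in both the sampling density and the numerator, it cancels from (6); only the genealogy priors under the two parameter sets are needed. A sketch in log space, with a log-sum-exp for numerical stability:

```python
import math

def relative_loglike(log_prior_new, log_prior_old):
    """Equation (6): ln[L(P)/L(P0)] estimated from genealogies sampled under
    the driving values P0. Inputs are, for each sampled genealogy, its log
    prior under the evaluation parameters and under the driving parameters."""
    m = len(log_prior_new)
    diffs = [a - b for a, b in zip(log_prior_new, log_prior_old)]
    mx = max(diffs)
    return mx + math.log(sum(math.exp(d - mx) for d in diffs) / m)
```

Evaluating this ratio over a grid (or inside the Newton-Raphson maximizer) gives the relative likelihood surface from a single set of sampled genealogies.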
In the program Coalesce (Kuhner et al. 1995) a region of internal nodes in the genealogy is erased and rebuilt according to the coalescent model. This scheme is not applicable to migration models. We adopt a new scheme that is much more general and can be used in any extension of the coalescent model. The first genealogy for each locus is constructed with a UPGMA method (as implemented by M. K. Kuhner and J. Yamato in Felsenstein 1993), and the minimal necessary migration events are inserted using a Fitch parsimony algorithm (Fitch 1971). The times for coalescent and migration events on this first genealogy are calculated from (3) by substituting a uniform random number for pj and solving for uj, with parameter values that are either guessed by the researcher or calculated using an FST-based method (see appendix).
The genealogies are sampled as follows (cf. Figure 2): a coalescent node or tip z on the genealogy is chosen at random, the lineage below it is dissolved, and the node is used as the starting point to simulate the ancestry using a migration-coalescent process through a series of time intervals. The other lineages do not change; they are taken as given and form the partial genealogy Gp. In describing the rearrangement process we use “up” and “top” for being or moving closer to the tips of the genealogy and “down” and “bottom” for being or moving toward the root. Starting at the chosen node z (see Figure 2), we draw a new time interval by solving for uj in

  U = exp( -u_j ( 2 k̃_ji / Θ_i + γ_i ) )   (7)

after substituting a uniform deviate for U, where k̃_ji is the number of lineages of population i in the partial genealogy Gp during time interval j and i is the population the active lineage is currently in. If this new time is further back in time than the bottom of time interval j on the partial genealogy Gp, the active lineage advances down to that time and the simulation process starts again. When the newly drawn time falls within the active time interval, an event, either a migration or a coalescence, is drawn according to the relative probabilities of occurrence, e.g., for a migration event,

  Prob(migration) = γ_i / ( 2 k̃_ji / Θ_i + γ_i ).   (8)

If a migration event occurs, the process moves down to the time when the migration event happened; below a migration event the active lineage is in the other population, and the migration-coalescent process continues (see also Figure 2C). After a coalescent event occurs, the process stops. This transformation can change any genealogy over several steps into any other and therefore allows the process to search the whole genealogy space. On the other hand, the resimulation of a single lineage guarantees that the newly found genealogy is similar to the old genealogy.
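The resimulation step defined by (7) and (8) amounts to an exponential waiting-time draw followed by a categorical choice between event types. A Python sketch with a hypothetical interface (migrate's internals differ):

```python
import math, random

def resimulate_step(k_partial, pop, thetas, gammas, rng=random):
    """One step of the single-lineage resimulation (Equations 7 and 8):
    invert exp(-u*(2*k/Theta + gamma)) at a uniform deviate to draw the
    waiting time u, then choose migration vs. coalescence by relative rates.
    `k_partial` is the lineage count of population `pop` in the partial
    genealogy during the current interval."""
    coal_rate = 2.0 * k_partial / thetas[pop]  # active lineage can join any of k lineages
    mig_rate = gammas[pop]
    total = coal_rate + mig_rate
    u = -math.log(rng.random()) / total                          # inverse CDF of (7)
    event = 'mig' if rng.random() < mig_rate / total else 'coal'  # choice per (8)
    return u, event
```

With γ = 0 the step always coalesces, and the waiting time reduces to an exponential draw with rate 2k/Θ.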
If not all lineages have coalesced by the time of the root of the partial genealogy, simulation on the active lineage and the bottom lineage of Gp continues until the lineages coalesce (Figure 3). When simulating on two lineages we draw new time intervals using (3) instead of (7). There is one exception to this rule at the bottom of the genealogy: if by chance the dissolved lineage is the bottommost lineage of one of the populations on Go, we keep simulating with a single active lineage below the root of Gp until the former root of Go is reached and then start to simulate on both of the two remaining lineages. The lineage between the root of Gp and the old root of Go has a known history, so there is no need to simulate this history again.
A change to the newly found genealogy Gn is accepted with probability

  r = min(1, r_M r_H),   (9)

where r_M is the Metropolis term

  r_M = [ Prob(D | Gn) Prob(Gn | 𝒫) ] / [ Prob(D | Go) Prob(Go | 𝒫) ]   (10)

and r_H is the Hastings term

  r_H = Prob(Go | Gn) / Prob(Gn | Go).   (11)

This acceptance probability r is based on the work of Hastings (1970). In our specific implementation it can be simplified. The scheme for changing the genealogy creates a new genealogy from an old one by first randomly removing a branch with all its migration events, which creates a partial genealogy Gp. The probability of a particular choice is governed only by the number of coalescent nodes in the genealogy, and this number is constant, so the probabilities Prob(Gp | Gn) and Prob(Gp | Go) are equal. For a specific triplet of original, partial, and new genealogies Go, Gp, and Gn,

  Prob(Gn | Go) = Prob(Gp | Go) Prob(Gn | Gp)  and  Prob(Go | Gn) = Prob(Gp | Gn) Prob(Go | Gp)   (12)

hold. The resimulation process (7)-(8) is driven only by the parameters 𝒫, so Prob(G | Gp) is proportional to Prob(G | 𝒫), so that

  r_H = Prob(Go | Gp) / Prob(Gn | Gp) = Prob(Go | 𝒫) / Prob(Gn | 𝒫).   (13)
This results in cancellation of the genealogy-prior terms in r, so that we have

  r = min( 1, Prob(D | Gn) / Prob(D | Go) ).   (14)
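The simplified rule (14) is then a plain comparison of data likelihoods; a minimal sketch in log form:

```python
import math, random

def accept(logL_new, logL_old, rng=random):
    """Equation (14): after the Metropolis and Hastings terms cancel, a
    proposed genealogy is accepted with probability
    min(1, Prob(D|Gn)/Prob(D|Go)), computed here from log-likelihoods."""
    if logL_new >= logL_old:
        return True
    return rng.random() < math.exp(logL_new - logL_old)
```

Uphill moves are always accepted; downhill moves are accepted with probability equal to the likelihood ratio, which is what lets the chain escape local optima.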
If the genealogy is accepted, it is the starting point for a new cycle; otherwise the previous one continues to be used. After having sampled a large number of genealogies, we estimate the parameters by maximizing (6). In theory a single long Markov chain would be enough to find the genealogies that contribute most to the likelihood, but if we start from a very bad 𝒫0 we may need an excessively long time to find this region. If the likelihood of a new genealogy is only slightly better than that of the old one, the search moves nearly randomly and can spend a long time in regions that do not contribute much to the final likelihood. The sampler moves more quickly when the likelihood surface is steep; if we start with bad parameters, there is some chance that the surface is locally very flat. To overcome this, we restart a new Markov chain with the improved 𝒫0 and the last genealogy of the most recent chain. Our implementation uses a number of short chains, in which the parameter estimates are based on only a few genealogies, to find regions with good parameter values; it then uses an arbitrary number of long chains to get good estimates of the parameters. All but the last chain are discarded and not used for the final result. Kuhner et al. (1998) found no decrease in performance compared to a scheme that takes several chains into account for the final result. If we run this genealogy sampling process long enough and with enough chains, the sampler will find its way to the region with the biggest contribution to the likelihood and therefore approximate the maximum-likelihood estimate of our parameters. The main problem with importance sampling schemes is that little is known about how to be sure that we have sampled enough genealogies from the region that contributes most to the likelihood function. It is common practice in our group to run 10 “short” chains and then 2 “long” chains.
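The chain schedule just described can be sketched as a driver loop; `update` and `estimate` are hypothetical stand-ins for the genealogy sampler and for the maximizer of (6), not migrate's actual functions:

```python
def run_sampler(params, update, estimate,
                short_chains=(10, 1000), long_chains=(2, 10000)):
    """Sketch of the chain strategy: several short chains move the driving
    parameters into a good region, then a few long chains refine the estimate.
    `update(params, n)` samples n genealogies under the driving values;
    `estimate(genealogies, params)` returns the parameters maximizing (6).
    Only the last chain's estimate is reported."""
    for n_chains, n_genealogies in (short_chains, long_chains):
        for _ in range(n_chains):
            sample = update(params, n_genealogies)
            params = estimate(sample, params)  # improved driving values
    return params
```

Restarting each chain with the improved driving values is what keeps the sampling density close to the target density as the run progresses.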
Using random values for the first parameter set, we have found that for a given data set the results converge to a value that does not depend on these initial parameter values once the total number of sampled genealogies exceeds 30,000 (data not shown); this corresponds to 10 short chains with 1000 sampled genealogies each and 2 long chains with 10,000 each. This run length seems sufficient to deliver acceptable estimates for simulated data and averages of those, although we would recommend choosing a better starting parameter than a random guess and running the program at least twice as long. The number of genealogies needed for an accurate estimate certainly depends on many factors, such as the starting parameters, the first genealogy, the number of sampled individuals, the variability in the data set, and the number of migration events.
We have simulated genealogies with known true parameters using a coalescent with migration and evolved data according to our data models along the branches starting from the root of the genealogy (Hudson 1983). The data that resulted at the tips were used as the input data for our method. The mutation rate was held constant for all simulations except one, in which we used a gamma-distributed mutation rate with shape parameter α = 1.0. For all other simulations we used 10 short chains sampling 1000 genealogies, keeping 100 of these genealogies for the parameter estimation and 3 long chains sampling 10,000 genealogies, using 1000 of them. The last chain delivered the parameters shown in the tables, and all other chains were discarded.
For the comparison with an FST-based estimator (see appendix) we used a symmetrical parameter setup for the simulations, Θ1 = Θ2 and γ1 = γ2. However, the parameters that were estimated were not constrained to be symmetric.
Our simulations generally recover the true ΘT very well (Table 1). With 1000 bp the means are better than with 500 bp. The means for γ are not close to the true γT. With low ΘT values γ is grossly overestimated. The upward bias in γ is huge with 500-bp data and shrinks when longer sequences are used (Tables 1 and 2).
Sequences in two populations can be similar to each other due to migration or due to low variation caused by a small Θ. The simulated datasets with low Θ sometimes show by chance no variation at all in short sequences. In these cases one would expect the estimate of Θ to be 0.0 and the estimate of γ to be indeterminate. The likelihood surface for such data is very flat, and some of the runs (Table 1) failed to find a maximum; these cases were omitted from the tables. If the migration parameter γ is high, it is consistently underestimated when Θ is also high. This is caused by our allowing only a limited number of migration events per genealogy (1000) in the Markov chain Monte Carlo runs, so that solutions with higher numbers of migration events were not proposed. Our method delivers similar estimates for Θ and similar or better estimates for γ than an FST-based estimator (Table 3).
The number of cases in which the FST-based estimator (Table 3) is usable is often dramatically lower than that with the ML estimator (Table 1; see also appendix). For example, with ΘT = 0.001 and γT = 10 the FST-based estimator could be used in only 45 out of 100 runs. This behavior seems not to improve with longer sequences. Increasing the number of sites and the number of loci helps to reduce the variance of the ML estimates (Table 2 and Figure 4). Estimates of γ from datasets with very long sequences are biased and are smaller than γT. This is the result of low acceptance rates during a run, when the starting genealogy is so well defined by the data that in our simulation runs the chains were too short and too few different migration scenarios were tried. The estimation of nonsymmetrical true parameters works quite well (Table 4). We encounter similar biases as with symmetrical parameters: with low Θ the γ estimates are biased upward and with high Θ they are biased downward.
The assumption that the mutation rate varies according to a gamma distribution adds more noise to the estimation. All biases are similar to the ones shown in the tables. The estimation of the shape parameter 1/α is certainly not very precise with only a few loci (Figure 5). One of the runs shown in Figure 5 obviously contains almost no information about 1/α: its log-likelihood curve is nearly flat, so that almost any value of 1/α can be accepted for this dataset. The increase of the log-likelihood values with very small 1/α, far from the maximum, is due to imprecision in the calculation using the discrete gamma approximation. In evaluations using a more exact but much slower quadrature scheme (Sikorski et al. 1984) the peaks of the log-likelihood curves are at similar values, but the log-likelihoods for very low 1/α values are monotonically decreasing and not, as with the discrete gamma approximation, slightly increasing.
The estimation of the migration parameter γ seems to be rather imprecise, and this is true for both migrate and FST (e.g., Tables 1 and 3). The standard deviations for γ are large for single-locus estimates. More sites per locus give better estimates, but the improvement in the variance is small; given the obstacles to sequencing long stretches of DNA for several individuals, one might do better to invest in an additional unlinked locus. Each new locus has its own genealogy, uncorrelated with those of the other loci, and so adds more power to the estimation of the parameters, as we can see in Table 2.
There is considerable bias in the estimates of γ when datasets have either little or no genetic variation or very high variation. Datasets with almost no variation produce an upward bias in γ: the likelihood surfaces become very flat because of the lack of variation, and almost any migration parameter value is possible. These problems can be overcome by adding more variation, which for a low true population size ΘT means adding either more base pairs or more individuals. With high ΘT and a high true migration parameter γT, the estimates are biased downward because of our need to truncate the number of migration events in the reconstructed genealogies at some high arbitrary number. If we did not bias against high numbers of migration events, the program could add more and more migration events and would eventually exhaust the computer's memory or run too long.
In single runs we can also see a “fatal attraction” to 0.0 if the true parameters are very close to 0.0. The genealogy sampler uses the current parameter values to propose coalescences and migration events. If one or more of these parameters are very close to 0.0, inspection of (2) shows that our procedure will either propose an immediate coalescent event most of the time, when one of the Θi is close to 0.0, or rarely propose an immigration event, when a γi is very low. If a chain never proposes any instances of an event, the rate of that event will be estimated to be 0.0, and this situation may persist indefinitely, a fatal attraction. Even if the parameters are prevented from becoming exactly zero, it may take large amounts of simulation to escape these values.
Because long sequences define the genealogy very well, our Markov chain Monte Carlo sampler will often reject newly proposed genealogies, and only a few different genealogies are sampled for the parameter estimation. Therefore, the program does not readily explore the whole migration-genealogy space and tends to stick to the good starting genealogy. This generates a downward bias in γ because we started with a genealogy that has a minimal number of migration events on it (Table 2). We may be able to overcome this mixing problem by adding a “heating” scheme to our importance sampling. Currently the only remedy is to run the chains much longer than the defaults (for practical guidance see Beerli 1997).
Summing over loci assuming that the mutation rate is constant between loci delivers much better estimates of the parameters than single-locus estimation. This model is rather unnatural for real data, especially microsatellite data or electrophoretic data. Summing over loci using a gamma-distributed mutation rate and estimating the shape parameter 1/α increases the variation of the parameters but allows more realistic estimation of the parameters with several loci.
Conclusion: Our method based on coalescents with migration delivers estimates similar to or better than those of the FST method based on the expectations derived by Maynard Smith (1970), which for certain data also produces fine results. Qualitative comparison with methods other than the FST measure, for example Rannala and Hartigan (1996) or the cladistic approach of Slatkin and Maddison (1989; see also Hudson et al. 1992, their Table 1), shows that all these methods have a tendency to overestimate γ when Θ is not very high, and all have quite large variances. The maximum-likelihood estimators have smaller variances than the other approaches. But the biggest drawback of the FST methods compared to ML methods seems to be their inability to estimate population size and migration rate for data where the homozygosity between populations happens to be equal to or less than the homozygosity within a population (see also Hudson et al. 1992).
Direct comparison of the different methods is currently difficult because each one estimates a different number of parameters or uses a different migration model. We are expanding our model to accept more populations and different migration schemes, which should facilitate direct comparison. We believe that taking the history of mutations into account and summing over all possible genealogies extracts the most information from the data, and that our approach and the approaches of Nath and Griffiths (1996) and Bahlo and Griffiths (http://www.maths.monash.edu.au/~mbahlo/mpg/gtree.html/) will produce better estimates than other approaches.
The methods described here are available in the program Migrate over the World Wide Web at http://evolution.genetics.washington.edu/lamarc.html/. We distribute it as C source code and also as binaries for several platforms, including Power Macintosh and Windows 95/98 on Intel-compatible processors.
Derivation of the importance sampling function: We want to calculate Equation 1 but would need to sum over all possible genealogies. The sum can be transformed into an importance sampling function by viewing it as an expectation and sampling from a distribution whose density is g instead of the correct density f:

  L(𝒫) = Σ_G Prob(D | G) Prob(G | 𝒫)   (A1)
       = Σ_G [ Prob(D | G) Prob(G | 𝒫) / g(G) ] g(G)   (A2)
       = E_g [ Prob(D | G) Prob(G | 𝒫) / g(G) ].   (A3)

Suppose that

  g(G) = Prob(D | G) Prob(G | 𝒫0) / C   (A4)

  C = Σ_G Prob(D | G) Prob(G | 𝒫0)   (A5)

and

  Σ_G g(G) = 1,   (A6)

because

  C = L(𝒫0);   (A7)

then we have

  L(𝒫) = E_g [ C Prob(G | 𝒫) / Prob(G | 𝒫0) ]   (A8)
       = L(𝒫0) E_g [ Prob(G | 𝒫) / Prob(G | 𝒫0) ],   (A9)

so that

  L(𝒫) / L(𝒫0) = E_g [ Prob(G | 𝒫) / Prob(G | 𝒫0) ],   (A10)

where the genealogies are sampled from g. The expectation can be estimated by its average over the simulation,

  L(𝒫) / L(𝒫0) ≈ (1/m) Σ_{i=1}^{m} Prob(Gi | 𝒫) / Prob(Gi | 𝒫0).   (A11)

In Markov chain Monte Carlo approaches, the goal is to sample from the posterior and concentrate the sampling on regions with higher probabilities (Hammersley and Handscomb 1964; Chib and Greenberg 1995; Kass et al. 1998). In our scheme we approximate the target density, proportional to Prob(D | G) Prob(G | 𝒫), by sampling from a density proportional to Prob(D | G) Prob(G | 𝒫0); this may be a rather crude approximation when 𝒫 is very different from 𝒫0, but after several updating chains the target and sampling distributions are very similar. When 𝒫 and 𝒫0 are very similar, our approach is nearly optimal if it is run long enough to avoid problems of autocorrelation. Sampling from the prior alone, for example, is very inefficient, as was shown in 1988 (J. Felsenstein, unpublished data).
Data models: Our migration estimation divides naturally into two parts: the calculation of Prob(G | 𝒫) and that of Prob(D | G). This makes it easy to implement models for different kinds of data: the data model influences only the latter calculation, the genealogy likelihood.
For sequences we use the model of change originated by one of us (J.F.), as implemented in PHYLIP 3.2 in 1984, described in Kishino and Hasegawa (1989) and described as F84 by Swofford et al. (1996). It is a variant of the Kimura two-parameter model that allows for different transition and transversion rates and for unequal base frequencies. In migrate the model is the same as in PHYLIP 3.5 and also includes rate variation among sites (Felsenstein and Churchill 1996), although the inclusion of rate variation among sites has not been tested here.
For microsatellite data, we have implemented a one-step mutation model in which the probability of making a net change of i steps in a time interval of length u is

  Prob(i | u) = e^(-u) Σ_{m=0}^{∞} (u/2)^(2m+|i|) / ( m! (m+|i|)! ).   (A12)

In addition to the stepwise mutation model a Brownian motion approximation is available; this fast approximation to the exact model is described elsewhere.
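A symmetric one-step model of this kind has the closed form e^(-u)·I_|i|(u), where I is the modified Bessel function of the first kind; the sketch below evaluates it by its power series (the paper's exact parameterization of (A12) may differ in detail):

```python
import math

def step_prob(i, u, terms=60):
    """Probability of a net change of i repeat units along a branch of
    (mutation-scaled) length u under a symmetric one-step model:
    exp(-u) * I_|i|(u), with the Bessel function computed by its series."""
    n = abs(i)
    total = 0.0
    for m in range(terms):
        total += (u / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
    return math.exp(-u) * total
```

The probabilities over all net changes i sum to 1 (via the Bessel identity Σ_i I_i(u) = e^u), which is a convenient sanity check on the truncation.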
Our “infinite” allele model is approximated with a (k + 1)-allele model: the observed alleles are a1, ..., ak, and all unobserved alleles are pooled into a class ak+1, with the resulting transition probabilities given in (A13).
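One concrete choice for such a (k + 1)-allele model is a symmetric, Jukes-Cantor-like parameterization in which every mutation picks one of the other k alleles uniformly; this is an illustrative assumption, not necessarily the exact form of (A13):

```python
import math

def kplus1_allele_prob(same, k, u):
    """Transition probability in a symmetric (k+1)-allele model: a lineage
    mutates at total rate u per unit time (here, per unit mutation-scaled
    branch length), and each mutation picks one of the other k alleles
    (the observed a_1..a_k plus the pooled class a_{k+1}) uniformly.
    `same` selects the stay probability vs. the probability of reaching
    one specific other allele."""
    e = math.exp(-u * (k + 1) / k)
    if same:
        return 1.0 / (k + 1) + k / (k + 1) * e
    return (1.0 - e) / (k + 1)
```

The stay probability plus k times the change probability sums to 1 for any u, and the stay probability decays from 1 toward the uniform equilibrium 1/(k + 1).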
FST calculations: We use a calculation based on FWi, the homozygosity within population i, and FB, the homozygosity between populations. For sequences, FW and FB can be calculated using the heterozygosity calculations described in Slatkin and Hudson (1991). The original recurrence equations for FW and FB were developed by Maynard Smith (1970) and Maruyama (1970). We use an equation based on Nei and Feldman (1972). The exact equations are simplified by removing quadratic terms like μ², m², and μm and divisions by the number of individuals in a population (e.g., m/N). For two populations in equilibrium we get the equation system (A14), where μ is the mutation rate, mi is the immigration rate into population i, and Ni is the subpopulation size. To fit these calculations into our framework we replace 4Niμ with Θi and mi/μ with γi. We cannot solve for all four parameters we would need for an accurate comparison with our likelihood method, but we can solve the equation system (A14) by assuming either that the population sizes are the same in both subpopulations (SN) or that the migration rates are symmetric (SM). With the SN approach, solving for Θ, γ1, and γ2, we get (A15). The SM approach, with Θ1, Θ2, and γ ≡ γ1 ≡ γ2, results in (A16). Comparing (A15) and (A16), one can recognize that each estimator can be reasonably applied only for part of the range of FW and FB values and breaks down outside it. For the comparison with our maximum-likelihood estimator (Table 3) we used SN because of our emphasis on the estimation of migration rates. There are differences between the two FST methods even when the true parameters are symmetrical (data not shown), but we do not have the impression that one FST method works better than the other over the range we are using for the comparison with our migrate method (Table 5).
We thank Mary K. Kuhner and Jon Yamato for many discussions, much help in debugging, and comments on the manuscript. This work was funded by National Science Foundation grant no. BIR 9527687 and National Institutes of Health grant no. GM51929, both to J.F., and in part by a Swiss National Funds fellowship to P.B.
Communicating editor: S. Tavaré
- Received January 5, 1998.
- Accepted February 23, 1999.
- Copyright © 1999 by the Genetics Society of America