Modeling Linkage Disequilibrium and Identifying Recombination Hotspots Using Single-Nucleotide Polymorphism Data
Na Li, Matthew Stephens

## Abstract

We introduce a new statistical model for patterns of linkage disequilibrium (LD) among multiple SNPs in a population sample. The model overcomes limitations of existing approaches to understanding, summarizing, and interpreting LD by (i) relating patterns of LD directly to the underlying recombination process; (ii) considering all loci simultaneously, rather than pairwise; (iii) avoiding the assumption that LD necessarily has a “block-like” structure; and (iv) being computationally tractable for huge genomic regions (up to complete chromosomes). We examine in detail one natural application of the model: estimation of underlying recombination rates from population data. Using simulation, we show that in the case where recombination is assumed constant across the region of interest, recombination rate estimates based on our model are competitive with the very best of current available methods. More importantly, we demonstrate, on real and simulated data, the potential of the model to help identify and quantify fine-scale variation in recombination rate from population data. We also outline how the model could be useful in other contexts, such as in the development of more efficient haplotype-based methods for LD mapping.

LINKAGE disequilibrium (LD) is the nonindependence, at a population level, of the alleles carried at different positions in the genome. The patterns of LD observed in natural populations are the result of a complex interplay between genetic factors and the population's demographic history. In particular, recombination plays a key role in shaping patterns of LD in a population. When a recombination occurs between two loci, it tends to reduce the dependence between the alleles carried at those loci and thus reduce LD. Although recombination events in a single meiosis are relatively rare over small regions, the large total number of meioses that occurs each generation in a population has a substantial cumulative effect on patterns of LD, and so molecular data from population samples contain valuable information on fine-scale variations in recombination rate.

Despite the undoubted importance of understanding patterns of LD across the genome, most obviously because of its potential impact on the design and analysis of studies to map disease genes in humans, most current methods for interpreting and analyzing patterns of LD suffer from at least one of the following limitations:

1. They are based on computing some measure of LD defined only for pairs of sites, rather than considering all sites simultaneously.

2. They assume a “block-like” structure for patterns of LD, which may not be appropriate at all loci.

3. They do not directly relate patterns of LD to biological mechanisms of interest, such as the underlying recombination rate.

As an example of the limitations of current methods, consider Figure 1, which graphically shows pairwise LD measures for six simulated data sets, simulated under various models for heterogeneity in the underlying recombination rate. The reader is invited to speculate on what the underlying models are in each case—the answer appears in the Figure 8 legend. In each of the six figures one can identify by eye, or by some quantitative criteria (e.g., Daly et al. 2001; Olivier et al. 2001; Wang et al. 2002), “blocks” of sites, such that LD tends to be high among markers within a block. In some cases there might also be little LD between markers in different blocks, which might be interpreted as evidence for variation in local recombination rates: low recombination rates within the blocks and higher rates between the blocks. Indeed, Jeffreys et al. (2001) have shown, using sperm typing, that in the class II region of the MHC, variations in local recombination rate are indeed responsible for block-like patterns of LD. However, without this type of experimental confirmation, which is currently technically challenging and time-consuming, it is difficult to distinguish between blocks that arise due to recombination rate heterogeneity and blocks that arise due to chance, perhaps through chance clustering of recombination events in the ancestry of the particular sample being considered (Wang et al. 2002). The ability to distinguish between these cases would of course be interesting from a basic science standpoint—for example, in helping to identify sequence characteristics associated with recombination hotspots. In addition, it would have important implications for the design and analysis of LD mapping studies.
For example, it would help in predicting patterns of variation at sites that have not been genotyped (perhaps sites influencing susceptibility to a disease), and it would provide some indication of whether block structures observed in one sample are likely to be replicated in other samples—a crucial requirement for being able to select representative “tag” single-nucleotide polymorphisms (SNPs; Johnson et al. 2001) on the basis of LD patterns observed in some reference sample.

Figure 1.

—Plots of the LD measure |D′| (bottom right diagonal) and P values for Fisher's exact test (top left diagonal) for every pair of sites with minor allele frequency >0.15 in data sets simulated under varying assumptions about variation in the local recombination rate. Details of the models used to simulate each data set appear in the Figure 8 legend, which is based on the same six data sets.

In this article we introduce a statistical model for LD that overcomes the limitations of existing approaches by relating genetic variation in a population sample to the underlying recombination rate. We examine in detail one natural application of the model: estimation of underlying recombination rates from population data. Using simulation, we show that in the case in which recombination is assumed constant across the region of interest, recombination rate estimates based on our model are competitive with the very best of current available methods. More importantly, we demonstrate, on real and simulated data, the potential of the model to help identify and quantify fine-scale variation in recombination rate (including “recombination hotspots”) from population data.

Although we focus here on estimating recombination rates, we view the model as being useful more broadly in interpreting and analyzing patterns of LD across multiple loci. In particular, as we outline in our discussion, the model could be helpful in the development of more efficient haplotype-based methods for LD mapping, along the lines of, for example, McPeek and Strahs (1999), Morris et al. (2000), and Liu et al. (2001).

## MODELS

Background: The most successful current approaches to constructing statistical models relating genetic variation to the underlying recombination rate (and to other genetic and demographic factors) are based on the coalescent (Kingman 1982) and its generalization to include recombination (Hudson 1983). Although these approaches are based on rather simplistic assumptions about the demographic history of the population from which individuals were sampled and about the evolutionary processes acting on the genetic region being studied, they have nonetheless proven useful in a variety of applications. In particular, they provide a helpful simulation tool (e.g., software described in Hudson 2002), allowing more realistic data to be generated under various assumptions about underlying biology and demography, and hence aid exploration of what patterns of LD might be expected under different scenarios (Kruglyak 1999; Pritchard and Przeworski 2001).

Despite the ease with which coalescent models can be simulated from, using these models for inference remains extremely challenging. For example, consider the problem of estimating the underlying recombination rate in a region, using data from a random population sample. It follows from coalescent theory that population samples contain information on the value of the product of the recombination rate c and the effective (diploid) population size N, but not on c and N separately. It has therefore become standard to attempt to estimate the compound parameter ρ = 4Nc, and several methods have been proposed. Some (e.g., Griffiths and Marjoram 1996; Kuhner et al. 2000; Nielsen 2000; Fearnhead and Donnelly 2001) try to make use of all the molecular data available. However, although such methods have been applied successfully to small regions and nonrecombining parts of the genome (Harding et al. 1997; Hammer et al. 1998; Kuhner et al. 2000; Nielsen 2000; Fearnhead and Donnelly 2001), for even moderate-sized autosomal regions (e.g., a few kilobases in humans) they become computationally impractical (Fearnhead and Donnelly 2001). Other methods, many of which are considered by Wall (2000), make use of only summaries of the data, substantially reducing computational requirements at the expense of some loss in efficiency.

More recently, Hudson (2001) and Fearnhead and Donnelly (2002) proposed “composite-likelihood” methods for estimating ρ over moderate to large genomic regions. Hudson's method is based on multiplying together likelihoods for every pair of sites genotyped, in which these pairwise likelihoods are computed via simulation, assuming an “infinite-sites” mutation model (i.e., no repeat mutation). This method has been modified by McVean et al. (2002) to allow for repeat mutation. Fearnhead and Donnelly's method is based on dividing data on a large region into smaller regions and multiplying likelihoods obtained for each smaller region. These methods, together with the best of the summary-statistic-based methods of Wall (2000), appear to be the most accurate of existing methods for estimating recombination rates from patterns of LD over moderate to large genomic regions. None of these methods, as currently implemented, allows explicitly for variation in recombination rate along the region under study.

A new model: Here we describe a new model for LD, which enjoys many of the advantages of coalescent-based methods (e.g., it directly relates LD patterns to the underlying recombination rate) while remaining computationally tractable for huge genomic regions, up to entire chromosomes. Our model relates the distribution of sampled haplotypes to the underlying recombination rate by exploiting the identity

$$\Pr(h_1, \ldots, h_n \mid \rho) = \Pr(h_1 \mid \rho) \Pr(h_2 \mid h_1; \rho) \cdots \Pr(h_n \mid h_1, \ldots, h_{n-1}; \rho), \tag{1}$$

where h_1,..., h_n denote the n sampled haplotypes, and ρ denotes the recombination parameter (which may be a vector of parameters if the recombination rate is allowed to vary along the region). This identity expresses the unknown probability distribution on the left as a product of conditional distributions on the right. For simplicity we often use the notation π to denote these conditional distributions. While the conditional distributions are not computationally tractable for models of interest, they are amenable to approximation, as we describe below. Our strategy is to substitute an approximation for these conditional distributions (π̂, say) into the right-hand side of (1), to obtain an approximation to the distribution of the haplotypes h given ρ:

$$\Pr(h_1, \ldots, h_n \mid \rho) \approx \hat{\pi}(h_1 \mid \rho) \hat{\pi}(h_2 \mid h_1; \rho) \cdots \hat{\pi}(h_n \mid h_1, \ldots, h_{n-1}; \rho). \tag{2}$$

We refer to this as a “product of approximate conditionals” (PAC) model and to the corresponding likelihood as a PAC likelihood, which we denote L_PAC. Explicitly,

$$L_{\text{PAC}}(\rho) = \hat{\pi}(h_1 \mid \rho) \hat{\pi}(h_2 \mid h_1; \rho) \cdots \hat{\pi}(h_n \mid h_1, \ldots, h_{n-1}; \rho). \tag{3}$$

Similarly, we refer to the value of ρ that maximizes L_PAC as a maximum PAC-likelihood estimate for ρ and denote it by ρ̂_PAC.
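The PAC construction in (1)–(3) is generic: any approximate conditional π̂ yields a likelihood by accumulating one conditional log-probability per haplotype. A minimal sketch, with a toy placeholder conditional that is an assumption for illustration only (it is not the π_A of appendix a):

```python
import math

def pac_log_likelihood(haplotypes, rho, pi_hat):
    """Log PAC likelihood, Equation 3: the sum of approximate
    conditional log-probabilities, the k-th haplotype being
    conditioned on the k-1 haplotypes already processed."""
    total = 0.0
    for k, h in enumerate(haplotypes):
        total += math.log(pi_hat(h, haplotypes[:k], rho))
    return total

def toy_pi(h, prev, rho):
    """Toy placeholder conditional (NOT pi_A): the next haplotype
    matches an existing one with probability roughly proportional
    to its count, with a small floor for novel types; rho is
    unused in this toy."""
    if not prev:
        return 1.0
    k = len(prev)
    matches = sum(1 for p in prev if p == h)
    return (matches + 0.25) / (k + 1.0)
```

Any of the conditionals discussed below (the Ewens form, π_A, π_B) can be dropped in for `pi_hat` without changing the outer product structure.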

The utility of the model (3) will naturally depend on the use of an appropriate approximation for the conditional distribution π. This approximation should be designed to answer the following question: if, at a particular locus, in a random sample of k chromosomes from a population, we observe genetic types h1,..., hk, what is the conditional distribution of the type of the next sampled chromosome, Pr(hk+1|h1,..., hk)? We are aware of three forms for π in the literature, each of which attempts to answer this question under different assumptions for the genetic model underlying the loci being studied. The first and best known comes from the Ewens sampling formula (Ewens 1972). This arises from considering a neutral locus in a randomly mating population, evolving with constant (diploid) size N and mutation rate μ per generation, and assuming an “infinite-alleles” mutation model, in which each mutation creates a novel (previously unseen) haplotype. Under these idealized conditions, if we let θ = 4Nμ, then with probability k/(k + θ) the (k + 1)st haplotype is an exact copy of one of the first k haplotypes chosen at random; otherwise it is a novel haplotype. Although the assumptions underlying this formula will never hold in practice, it does capture the following properties that we would expect to hold more generally:

1. The next haplotype is more likely to match a haplotype that has already been observed many times rather than one that has been observed less frequently.

2. The probability of seeing a novel haplotype decreases as k increases.

3. The probability of seeing a novel haplotype increases as θ increases.

However, for modern molecular data, and for sequence data and SNP data in particular, it fails to capture the following two properties:

4. If the next haplotype is not exactly the same as an existing (i.e., previously seen) haplotype, it will tend to differ by a small number of mutations from an existing haplotype, rather than to be completely different from all existing haplotypes.

5. Due to recombination, the next haplotype will tend to look somewhat similar to existing haplotypes over contiguous genomic regions, the average physical length of these regions being larger in areas of the genome where the local rate of recombination is low.
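Properties 1–3 follow directly from the Ewens copying mechanism: copy an existing haplotype with probability k/(k + θ), else produce a novel type. A small simulation sketch (the haplotype labels are hypothetical):

```python
import random

def ewens_next(haplotypes, theta, rng):
    """Draw the (k+1)st type under the Ewens sampling formula:
    with probability k/(k + theta) copy a uniformly chosen
    existing haplotype; otherwise return None, standing in for
    a novel (previously unseen) haplotype."""
    k = len(haplotypes)
    if rng.random() < k / (k + theta):
        return rng.choice(haplotypes)
    return None

rng = random.Random(0)
sample = ["h1", "h1", "h1", "h2"]  # h1 already observed three times
draws = [ewens_next(sample, theta=1.0, rng=rng) for _ in range(20000)]
# Novel types appear with probability theta/(k + theta) = 1/5, and
# the common type h1 is copied three times as often as h2
# (property 1); novelty becomes rarer as k grows (property 2) and
# commoner as theta grows (property 3).
```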

Figure 2.

—Illustration of how πA(hk+1|h1,..., hk) builds hk+1 as an imperfect mosaic of h1,..., hk. This illustrates the case k = 3 and shows two possible values (h4A and h4B) for h4, given h1,h2,h3. Each of the possible h4's can be thought of as having been created by “copying” (imperfectly) parts of h1,h2, and h3. The shading in each case shows which haplotype was copied at each position along the chromosome. Intuitively we think of h4 as having recent shared ancestry with the haplotype that it copied in each segment. We assume that the copying process is Markov along the chromosome, with jumps (i.e., changes in the shading) occurring at rate ρ/k per unit of physical distance. Thus the more frequent jumps in h4B suggest a higher value of ρ than do the less frequent jumps in h4A. Note that for very large values of ρ the loci become independent, as they should. Each column of circles represents a SNP locus, with black and white representing the two alleles. The imperfect nature of the copying process is exemplified at the third locus, where h4A and h4B have the black allele, although they copied h2, which has the white allele. In practice, of course, the shading is not observed, and so to compute the probability of observing a particular h4 we must sum over all possible shadings. The Markov assumption allows us to do this efficiently, using standard methods for hidden Markov models, as described in appendix a.
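The copying process of Figure 2 is a hidden Markov model, so the sum over shadings can be done with the forward algorithm. The sketch below follows that structure but substitutes a simple symmetric mutation probability `gamma` for the exact mutation term of appendix a (an assumption for illustration); the uniform-jump transitions make each SNP update cost O(k):

```python
import math

def emit(obs_allele, copied_allele, gamma):
    """Imperfect copying: reproduce the copied allele with
    probability 1 - gamma, otherwise show the other allele."""
    return 1.0 - gamma if obs_allele == copied_allele else gamma

def pi_hmm(new_hap, prev_haps, rho, dists, gamma=0.05):
    """Forward algorithm for the Figure 2 mosaic. The hidden state
    at each SNP is which of the k previous haplotypes is being
    copied; between adjacent SNPs at distance d the copied
    haplotype changes with probability 1 - exp(-rho*d/k), jumping
    to a uniformly chosen haplotype."""
    k = len(prev_haps)
    fwd = [emit(new_hap[0], prev_haps[j][0], gamma) / k for j in range(k)]
    for s in range(1, len(new_hap)):
        p_jump = 1.0 - math.exp(-rho * dists[s - 1] / k)
        tot = sum(fwd)  # uniform jump target: one O(k) pass per SNP
        fwd = [((1.0 - p_jump) * fwd[j] + p_jump * tot / k)
               * emit(new_hap[s], prev_haps[j][s], gamma)
               for j in range(k)]
    return sum(fwd)
```

Because the emission probabilities at each SNP sum to 1 over the two alleles, these conditional probabilities sum to 1 over all possible values of the next haplotype.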

Stephens and Donnelly (2000) suggested a form for π that captures properties 1–4 above. In their suggested form for π, the next haplotype differs by M mutations from a randomly chosen existing haplotype, where M has a geometric distribution with Pr(M = 0) = k/(k + θ) (so that it reproduces the Ewens sampling formula in the special case of the infinite-alleles mutation model). Thus the next haplotype is a (possibly imperfect) “copy” of a randomly chosen existing haplotype.

Fearnhead and Donnelly (2001; henceforth FD) extended this form for π to also capture property 5 above. In FD's approximation, the k + 1st haplotype is made up of an imperfect mosaic of the first k haplotypes, with the size of the mosaic fragments being smaller for higher values of the recombination rate.

Here we use two new forms for π that also capture properties 1–5 above. The first, described in detail in appendix a and illustrated in Figure 2, which we denote πA, is a simplification of FD's approximation that is easier to understand and slightly quicker to compute. [N. Patterson (personal communication) has independently suggested a similar simplification.] The second, which we describe in detail in appendix b and denote πB, is a slight modification of πA, developed using empirical results from Figure 3 to produce a likelihood L_PAC that gives more accurate estimates of ρ. Where necessary, we denote the PAC likelihoods and maximum PAC-likelihood estimates corresponding to πA (respectively, πB) by L_PAC-A and ρ̂_PAC-A (respectively, L_PAC-B and ρ̂_PAC-B).

A key property of both πA and πB is that they are easy and fast to compute. Unlike the Ewens sampling formula, but like the approximations of Stephens and Donnelly (2000) and FD, neither corresponds exactly to the actual conditional distribution under explicit assumptions about population demography and the evolutionary forces on the locus under consideration. Indeed, no closed-form expressions for π, based on such explicit assumptions, and capturing properties 4 or 5, are known. However, the suggested forms for π were motivated by considering both the Ewens sampling formula and the underlying genealogy (or, in the case with recombination, genealogies) relating a random sample of haplotypes from a neutrally evolving, constant-sized panmictic population. As such, it may be helpful to view them as approximations to the (unknown) true conditional distribution under these assumptions. In particular, there are certain aspects of many real populations (e.g., population expansion or population structure) and biological factors (e.g., gene conversion and selection) that these forms for π do not attempt to capture. For some applications this may not matter very much. For others it may be necessary to develop forms for π that do capture these aspects—a point we return to in the discussion.

An unwelcome feature of the PAC likelihoods corresponding to our choices of π—and indeed the forms for π from Stephens and Donnelly (2000) and FD—is that they depend on the order in which the haplotypes are considered. In other words, although these likelihoods each correspond to a valid probability distribution on the haplotypes, these probability distributions do not enjoy the property of exchangeability that we would expect to be satisfied by the true (unknown) distribution. Practical experience, and theory in Stephens and Donnelly (2000; their Proposition 1, part d), suggests that this problem cannot be rectified by making a simple modification to π. Although in principle the dependence on ordering could be removed by averaging the PAC likelihood over all possible orderings of the haplotypes, in practice this would require a sum over n! terms, which is infeasible even for rather small values of n. Instead, as a pragmatic alternative solution, we propose to average LPAC over several random orders of the haplotypes. Unless otherwise stated, all results reported here were obtained by averaging over 20 random orders. In our experience, the performance of the method is not especially sensitive to the number of random orders used—results based on 100 random orders gave qualitatively similar results, and results based on a single random order were often not much worse (data not shown). It is, however, important that when comparing likelihoods for different values of ρ, the same set of random orders should be used for each value of ρ.
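The averaging over random orders can be sketched as follows. The fixed seed is the important detail: it guarantees that the same set of orderings is reused at every value of ρ being compared, as the text requires. The `pac_loglik` argument stands for any function returning a log PAC likelihood for one ordering; the toy function here is hypothetical.

```python
import math
import random

def averaged_pac_loglik(haplotypes, rho, pac_loglik, n_orders=20, seed=1):
    """Log of the PAC likelihood (not the log-likelihood) averaged
    over n_orders random orderings of the haplotypes. The fixed
    seed means every call, for every rho, uses the SAME orderings."""
    rng = random.Random(seed)
    logs = []
    for _ in range(n_orders):
        order = haplotypes[:]
        rng.shuffle(order)
        logs.append(pac_loglik(order, rho))
    m = max(logs)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in logs) / len(logs))

# Toy, order-dependent log-likelihood for demonstration: it depends
# on where haplotype "a" lands in the ordering.
def toy_loglik(order, rho):
    return -0.1 * rho * order.index("a")

avg1 = averaged_pac_loglik(["a", "b", "c"], 2.0, toy_loglik)
avg2 = averaged_pac_loglik(["a", "b", "c"], 2.0, toy_loglik)
```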

Figure 3.

—Histograms of the error Err(ρ, ρ̂_PAC-A), based on 100 data sets simulated from the standard coalescent model with n = 50 haplotypes and S = 50 segregating sites. The values of ρ are (a) ρ = 5, (b) ρ = 25, and (c) ρ = 500. Superimposed curves are normal densities with the same mean and standard deviation as the 100 values making up the histogram. These results, as well as those in Figure 4 and Table 1, are based on averaging the likelihoods over 10 random orders of the haplotypes.

## ESTIMATING CONSTANT RECOMBINATION RATE

In this section we consider estimating the recombination rate when it is assumed to be constant across the region of interest. More precisely, we assume that crossovers in a single meiosis occur as a Poisson process of constant rate c per unit (physical) distance and consider estimating the scalar parameter ρ = 4Nc. We first use simulated data to examine the properties of the estimator ρ̂_PAC-A, corresponding to the conditional distribution πA described in appendix a, under what we call the “standard coalescent model”: a constant-sized, panmictic population with an infinite-sites mutation model. We show that, although quite accurate, ρ̂_PAC-A exhibits a systematic bias. We use the empirical results to develop a modified conditional distribution, πB (described in detail in appendix b), whose corresponding estimator, ρ̂_PAC-B, exhibits considerably less bias and is more accurate. We compare the performance of models based on both πA and πB with results from other methods.

Properties of the point estimate ρ̂_PAC: We used the program mksample (Hudson 2002) to simulate data sets consisting of samples of SNP haplotypes from the standard coalescent model for various values of:

1. The number n of haplotypes in the sample

2. The number S of markers typed

3. The value of ρ (we measure physical distance so that the total physical length of each simulated haplotype equals 1.0. Thus the value of ρ is also the total value of ρ across the region).

For each data set we found ρ̂_PAC-A by numerically maximizing the PAC likelihood (using a golden bisection search method; Press et al. 1992) and compared it with the true value of ρ used to generate the data.
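A one-dimensional golden-section search of this kind (after Press et al. 1992) can be sketched as below, applied here on the log10(ρ) scale; the concave toy objective and the bracket endpoints are assumptions for illustration, not the actual PAC likelihood.

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Maximize a unimodal function f on [a, b] by golden-section
    search: repeatedly shrink the bracket toward the interior
    point with the larger function value."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c              # maximum lies in [a, old d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # maximum lies in [old c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Toy concave "log-likelihood" in x = log10(rho), maximized over the
# (assumed) bracket [-2, 3], i.e. rho in [0.01, 1000].
log10_rho_hat = golden_section_max(lambda x: -(x - 1.5) ** 2, -2.0, 3.0)
```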

It seems natural to measure the error in estimates for ρ on a relative, rather than an absolute, scale. For example, Wall (2000) reported the frequency with which different methods for estimating ρ gave estimates within a factor of 2 of the true value, and both FD and Hudson (2001) examine the distribution of the ratio ρ̂/ρ for their estimates ρ̂ of ρ and the deviation of this ratio from the “optimal” value of 1. A problem with working with this ratio directly is that it tends to penalize overestimation more heavily than underestimation. For example, overestimating ρ by a factor of 10 gives a larger deviation from 1 than underestimating ρ by a factor of 10. To avoid this problem, we quantify the relative error of an estimate ρ̂ for ρ by

$$\mathrm{Err}(\rho, \hat{\rho}) = \log_{10}\left(\frac{\hat{\rho}}{\rho}\right).$$

This gives, for example, an error of 0 if ρ̂ = ρ, an error of 1 if ρ̂ overestimates ρ by a factor of 10, and an error of −1 if ρ̂ underestimates ρ by a factor of 10.
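On this scale over- and underestimation are penalized symmetrically, which the raw ratio does not do:

```python
import math

def err(rho_true, rho_hat):
    """The Err measure defined above: the base-10 log of the ratio
    of estimate to truth. 0 means exact; +1 and -1 mean a
    factor-of-10 over- and underestimate, respectively."""
    return math.log10(rho_hat / rho_true)

# A factor-of-10 overestimate and a factor-of-10 underestimate are
# equidistant from 0 (errors +1 and -1), unlike the raw ratios
# (10 vs. 0.1, which sit at very different distances from 1).
```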

We note that Err(ρ, ρ̂) can also be viewed as the error (on an absolute scale) in estimating log10(ρ) by log10(ρ̂). Thus, if the usual asymptotic theory for maximum-likelihood estimation applies for estimation of log10(ρ) in this setting (which, as discussed in FD, it may not), then for the actual maximum-likelihood estimate (MLE) ρ̂_MLE of ρ, Err(ρ, ρ̂_MLE) would be normally distributed asymptotically, centered on 0. Optimistically, we might therefore hope that for sufficiently large data sets (large in terms of the number of haplotypes, the number of markers, or both) Err(ρ, ρ̂_PAC-A) might be approximately normally distributed, centered on 0. In our simulations, we found that for some combinations of n, S, and ρ this did indeed appear to be the case (e.g., Figure 3b), but that for other combinations, although the distribution often appeared close to normal, it was centered around some nonzero value (e.g., Figure 3, a and c), indicating a systematic tendency for ρ̂_PAC-A to over- or underestimate ρ. We refer to the median of Err as the “bias” [of log(ρ̂_PAC-A) in estimating log(ρ)]. Although bias is usually defined as a mean error, this is not particularly helpful here since the mean is often heavily influenced by a small number of very large values and may even be infinite in some cases (see also FD). We therefore follow previous authors, including Hudson (2001) and FD, in concentrating on the behavior of the median, rather than the mean, of the error.

Despite the biases evident in Figure 3, a and c, ρ̂_PAC-A gives reasonably accurate estimates of ρ. For example, even in Figure 3c, which shows one of the most extreme biases that we observed in our simulations, the bias corresponds to underestimating ρ by approximately a factor of 2, and ρ̂_PAC-A is within a factor of 2 of the true value of ρ in 68% of cases. Although in many statistical applications estimates within a factor of 2 of the truth would not be considered particularly helpful or impressive, in this setting this kind of accuracy is often not easy to achieve (see, for example, Wall 2000).

We performed extensive simulations to better characterize the bias noted above and found that, although the bias depends on all three variables (n, S, and ρ), it is especially dependent on the average spacing between sites. More specifically, for fixed n and S we observed a striking linear relationship between the bias and the log of the average marker spacing (Figure 4). This linear relationship was also apparent for data simulated under an assumption of population expansion (data not shown). The slope of the linear relationship is negative in each case, indicating a tendency for ρ̂_PAC-A to overestimate ρ when the markers are very closely spaced and underestimate ρ when the markers are far apart. As the number of sampled haplotypes increases, both the slope and intercept of the line appear to get closer to 0 (Table 1). On the basis of these empirical results we can modify πA to reduce the bias of the point estimates (see appendix b for details). The improved performance of this modified conditional distribution, which we denote πB, is illustrated in the next section.
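The empirical relationship can be characterized by an ordinary least-squares fit of observed bias against log10 marker spacing, which is the kind of summary behind the correction in appendix b. A minimal sketch; the data values here are hypothetical, chosen only to mimic the negative slope seen in Figure 4:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept of ys on xs,
    e.g. median Err (the bias) regressed on log10(average marker
    spacing). Returns (intercept, slope)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical biases decreasing with log spacing: overestimation at
# tight spacing, underestimation at wide spacing.
intercept, slope = fit_line([-1.0, 0.0, 1.0], [0.2, 0.0, -0.2])
```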

Figure 4 also illustrates the effect of varying parameter values on the variability of point estimates. As might be expected, the variance of the error reduces with increased sample size and increased number of sites, with the latter providing the more substantial decrease. For example, doubling the number of sites from 50 to 100 roughly halved the variance of the error in most cases, while doubling the number of individuals from 50 to 100 resulted in much smaller decreases. For a fixed sample size and number of sites, the variance of the error decreases as the spacing between sites grows. This may be due to the fact that for larger spacings more recombination events occur, increasing the relative accuracy with which ρ can be estimated, although we would not expect this pattern to continue indefinitely as the marker spacing is increased beyond the range considered here.

Comparison of point estimates with other methods: Hudson (2001) introduced a composite-likelihood method for estimating ρ, based on multiplying together the likelihood computed for every pair of SNPs. He compared the performance of this method with others in the literature (Hudson and Kaplan 1985; Hudson 1987; Hey and Wakeley 1997; Wall 2000), under the standard coalescent model, and found it to be as good as, or better than, the best of these. We compared the results reported by Hudson (2001) for his maximum composite-likelihood estimate, ρ̂_CL, with the results for ρ̂_PAC-A and ρ̂_PAC-B on data sets simulated under the same conditions (Figure 5). For data sets with small numbers of SNPs (≤ ∼12) ρ̂_CL provides the most accurate estimates of ρ, although all three methods struggle to produce reliable estimates. For larger numbers of SNPs both ρ̂_PAC-A and ρ̂_PAC-B tend, desirably, to exhibit less variability than ρ̂_CL. Further, ρ̂_PAC-B exhibits little or none of the bias present in ρ̂_PAC-A and provides the most accurate estimates of ρ.

The superior performance of the pairwise composite-likelihood method for data sets with small numbers of SNPs is perhaps not surprising—indeed, for data sets with only two SNPs ρ^CL is precisely the maximum-likelihood estimate for ρ. However, we note that almost all of the improvement in accuracy comes from the increase in the 10th percentile of the estimator toward the true value, rather than from a decrease in the 90th percentile. One possible explanation for this is that ρ^CL uses a likelihood based on an infinite-sites mutation model (i.e., assumes no repeat mutation) and so is able essentially to rule out very small values for ρ if there is even one pair of sites at which all four gametes are present. (The effect of this may be compounded by the fact that ρ^CL was found by maximizing over a grid of possible values, which forces all nonzero estimates of ρ to be above some threshold.) Our estimator does not make the infinite-sites assumption and so will be more inclined to estimate very small values of ρ, possibly leading to occasional substantial underestimates. Since in real data it will typically be unclear whether or not the infinite-sites assumption holds, the advantage of ρ^CL for even small numbers of sites is perhaps less clear-cut than it appears in Figure 5.
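The infinite-sites reasoning here is the classical four-gamete test: with no repeat mutation, a pair of SNPs can exhibit all four two-locus haplotypes only if at least one recombination has occurred between them, which is why ρ̂_CL can rule out very small ρ. A sketch:

```python
def four_gamete_violation(haplotypes, i, j):
    """True if sites i and j exhibit all four possible two-locus
    haplotypes, which under an infinite-sites (no repeat mutation)
    model implies at least one historical recombination between
    the two sites."""
    return len({(h[i], h[j]) for h in haplotypes}) == 4

# All four gametes (00, 01, 10, 11) present at sites 0 and 1:
assert four_gamete_violation(["00", "01", "10", "11"], 0, 1)
```

A model allowing repeat mutation, like the PAC models here, cannot draw this hard inference, which is one explanation offered in the text for the occasional substantial underestimates.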

We used the same simulated data to examine the accuracy of estimates of ρ obtained by the methods described by Kuhner et al. (2000) and FD, both of which use computationally intensive Monte Carlo procedures to attempt to approximate the full coalescent likelihood. The computational complexity of these approaches increases with what might be called “the total value of ρ across the region,” or “per-locus ρ,” which we denote ρ̄ (more precisely, in our notation ρ̄ = ρL, where L is the physical length of the region). Results from FD suggest that even for small values of ρ̄ (<3, say), the approximate-likelihood curves obtained by these methods may be poor approximations to the actual likelihood curve, and so it seems unlikely that the curves will be accurate for larger ρ̄. However, point estimates based on these methods could still be accurate, if the maximum of the approximate-likelihood curve occurs in about the right place. To investigate this possibility, we applied both methods, using ∼1 day of CPU time per method per data set (compared with ∼30 sec per data set for ρ̂_PAC-B), to 10 of the data sets simulated with ρ̄ = 40. Computational considerations make a more comprehensive simulation study inconvenient. Each of the methods was run with θ fixed at the value used to simulate the data, giving them some advantage over how they could be used in practice. Nevertheless, neither method produced point estimates of ρ̄ as accurate as those from ρ̂_PAC-B (Table 2). Of the two full-likelihood schemes, the maximum of the likelihood curve obtained by infs was consistently closer to the true value of ρ̄ than was the maximum of the likelihood curve obtained by Recombine.
Indeed, the estimates obtained from Recombine were often close to an order of magnitude smaller than the true value of ρ̄, which raises a danger that when the method is applied to real data (for which the value of ρ̄ is of course not known) the user might be misled into thinking that the value of ρ̄ is small enough for the method to produce reliable results. Results from longer runs of infs, taking ∼5 days of CPU time each, produced improved results, competitive with ρ̂_PAC-B (data not shown).

TABLE 1

The intercepts and slopes of the linear relationship between the bias and log10(spacing)

Figure 4.

—Box plots showing the relationship of the bias to the average marker spacing. For each combination of parameters, 100 data sets were simulated under the standard coalescent model. The parameters involved are: the number of haplotypes in each sample, n = 20, 50, 100, 200; the number of segregating sites, S = 20, 50, 100; and the average marker spacing, ρ/S = 0.1, 0.5, 1.0, 5.0, and 10.0. In humans a marker spacing of ρ/S = 0.5 corresponds to ∼1 kb between markers.

Figure 5.

—Comparison of ρ^PAC-A and ρ^PAC-B with Hudson's pairwise composite-likelihood estimator (Hudson 2001) on data sets of n = 50 haplotypes simulated from the neutral infinite-sites model. The data sets were simulated with haplotypes of physical length L = 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, and 48 units, with ρ = 1/unit physical length and θ = ¼/unit physical length. (With these parameters the expected number of SNPs in each data set is approximately equal to the physical length of the haplotypes.) The results for Hudson's estimator come from Hudson (2001) and were kindly provided by R. Hudson. The results for ρ^PAC-A and ρ^PAC-B are based on 1000 data sets that we simulated for each set of parameters using the program mksample (Hudson 2002). (We discarded the few simulated data sets that had only one SNP.) The three panels, (a)-(c), show the three estimators in turn; in each panel, the solid line is the median of the estimates and the dashed lines are the 10 and 90% quantiles.

Properties of PAC likelihood curves: Construction of confidence intervals: We examined the coverage properties of confidence intervals (C.I.'s) constructed from the PAC-likelihood curve in two ways:

TABLE 2

Comparison of ρ^PAC-B with estimates of ρ̃ from Recombine and infs, for 10 data sets simulated with ρ̃ = 40, θ = 10, n = 50

1. Include all values of ρ for which loge(LPAC-B(ρ)) is within 2 of the maximum.

2. Include all values of loge(ρ) within ±1.96σ of loge(ρ^PAC-B), where σ is the square root of the inverse of minus the second derivative (found numerically) of the log of the PAC-B likelihood curve [as a function of loge(ρ)], evaluated at ρ = ρ^PAC-B.

The rationale for looking at such C.I.'s is that, under standard asymptotic theory for likelihood estimation, C.I.'s constructed in this way using the true likelihood curve would include the true value of ρ ∼95% of the time. (For method 1 this follows from the asymptotic χ² distribution of the log-likelihood-ratio statistic; for method 2 it follows from the asymptotic normal distribution of the MLE.)
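The two interval constructions can be sketched in a few lines. In the sketch below, a quadratic log-likelihood in loge(ρ) is a hypothetical stand-in for the PAC-B curve (in practice the curve would be computed from the data), and the grid values are illustrative:

```python
import math

# Hypothetical log-likelihood curve for log_e(rho): a quadratic (Gaussian-shaped
# likelihood) stands in for the PAC-B curve, which in practice comes from data.
MU, S = math.log(40.0), 0.5  # assumed location and scale of the curve

def loglik(x):  # x = log_e(rho)
    return -((x - MU) ** 2) / (2.0 * S ** 2)

grid = [MU + (i - 500) * 0.01 for i in range(1001)]  # grid of log_e(rho) values
ll = [loglik(x) for x in grid]
ll_max = max(ll)

# Method 1: keep every rho whose log-likelihood is within 2 of the maximum.
kept = [math.exp(x) for x, l in zip(grid, ll) if l >= ll_max - 2.0]
ci1 = (min(kept), max(kept))

# Method 2: sigma^2 = inverse of minus the (numerical) second derivative at the MLE.
h = 1e-4
x_hat = grid[ll.index(ll_max)]
d2 = (loglik(x_hat + h) - 2.0 * loglik(x_hat) + loglik(x_hat - h)) / h ** 2
sigma = math.sqrt(-1.0 / d2)
ci2 = (math.exp(x_hat - 1.96 * sigma), math.exp(x_hat + 1.96 * sigma))
```

For a quadratic log-likelihood the two constructions nearly coincide (a drop of 2 log units corresponds to ±2σ, vs. ±1.96σ for the normal approximation), which is why both are expected to behave similarly near the MLE.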

Figure 6 shows the coverage properties for C.I.'s produced using the two methods (i.e., the proportion of times that C.I.'s formed using each method contained the true value of ρ) for the data sets used to obtain Figure 5c. For moderate sequence lengths both methods produce C.I.'s that are slightly anticonservative, with coverage probabilities that approach ∼0.91, compared with the ∼0.95 expected under asymptotic theory. On the basis of these results we speculate that the curvature of the PAC-B-likelihood curve does not deviate grossly from that of the true-likelihood curve. We note that the coverage properties are also closer to asymptotic expectations than those reported by Fearnhead and Donnelly (2002) for their composite likelihood using the same methods of C.I. construction.

Comparison with other methods: We compared the LPAC-B-likelihood curves with likelihood curves obtained from three other methods: the full-data coalescent method of FD (implemented in the computer program infs) and the pairwise composite-likelihood methods of Hudson [2001; implemented by one of us (N.L.), using tables available from R. Hudson's website] and McVean et al. (2002) (implemented in the computer program LDhat). Figure 7 shows likelihood curves obtained using each method for the 20 data sets considered by Wall (2000), which were simulated under the standard coalescent model with ρ= θ= 3.0 across a region of physical length 1 and were kindly supplied by J. D. Wall. (These likelihood curves are plotted with ρ on the x-axis, rather than log(ρ), because infs and LDhat output likelihood curves for evenly spaced values of ρ.)

Figure 6.

—Empirical coverage properties of confidence intervals produced using two different methods described in the text. Each number is based on analysis of 1000 data sets and shows the proportion of cases in which the C.I. contained the true value of ρ used to generate the data. The data sets used are the same as those used to produce Figure 5c.

Interpreting the results of this comparison is slightly tricky. Unlike the other three methods we consider, the full-data coalescent method can, in principle, provide a fully accurate representation of the true-likelihood curve. As such it is tempting to treat this as a “gold standard” against which to compare the other methods. However, as mentioned previously, even for the rather small value of ρ= 3 used to generate these data, accurate approximation of the true-likelihood curve may be computationally impractical. Indeed, the estimated effective sample sizes (ESSs) obtained for these data sets, shown at the top of each part of Figure 7, suggest that we should not place much confidence in the accuracy of many of the curves. Our attempts to obtain more accurate likelihood curves by performing longer runs for some of the data sets (numbers 15 and 16) actually produced smaller estimated ESSs, suggesting that the effective sample sizes quoted for the other data sets are optimistic (see FD for further discussion of this problem). A further complication in comparing the methods is that both our method and that of McVean et al. (2002) allow (implicitly and explicitly, respectively) for the possibility of multiple mutations, and thus the likelihoods from these methods are in some sense not directly comparable with those from the other two methods. Finally, we note that the methods deal in different ways with the unknown mutation parameter θ: the likelihood curves shown from infs are profile likelihoods for ρ at the true value of θ; Hudson's method and our method avoid explicitly estimating θ; LDhat estimates θ using an analog of Watterson's estimate, but allowing for multiple mutations.

Notwithstanding these issues, we attempt to draw some general conclusions:

1. In general, the likelihood curves produced by the four methods seem to agree rather more closely than might have been expected. (Compare, for example, the variability here with the variability observed for different runs of a single method in FD.) However, the closeness of the agreement between the methods differs appreciably across data sets. Data set 12 consists of only four sites, three of which are singletons, and so the differences in the curves for this data set seem not to be particularly interesting. We were unable to discern a systematic reason for the larger differences among methods observed in some of the other data sets (e.g., 16).

2. The two pairwise composite-likelihood methods tend to produce likelihood curves that are slightly more peaked than those of the other two methods. This might be expected since, as pointed out by McVean et al. (2002), pairwise composite-likelihood curves are typically more peaked than the true-likelihood curve because they treat each pair of sites as independent, when in fact many pairs are highly dependent.

3. The method implemented in LDhat, which allows for multiple mutations, tends to achieve its maximum at larger values of ρ than does Hudson's method, which does not allow for multiple mutations. This is surprising; indeed, the opposite might have been expected, since multiple mutations could be used in place of recombination events to explain certain patterns of LD. One possible explanation is that the run lengths we used for computing the likelihood in LDhat might be insufficient (we used the default values).

4. Different orderings of the haplotypes can give PAC-likelihood curves that differ appreciably from one another. In addition, the maximum of the likelihood curve based on the average over several orderings tends to be toward the left end of the distribution of maxima obtained from different orderings. This is because, although not shown in Figure 7, the curves with maxima at smaller values of ρ tend to be larger (in absolute value) than those with maxima at larger values of ρ (presumably because they correspond to orderings of the haplotypes that, in some sense, require fewer recombination events to explain them) and thus contribute more to the average. Although this dependence on ordering is bothersome, in simulation studies (results not shown) we have found that the variability in the position of the maxima of the PAC likelihood over different orderings of the haplotypes is typically small compared with the uncertainty in estimation of ρ.

Figure 7.

—Comparison of the relative PAC-likelihood curves with coalescent-based and pairwise composite relative-likelihood curves, for the first 20 data sets in Wall (2000). In each case the relative likelihood is obtained by normalizing each likelihood curve to have a maximum of 1. The light gray lines show 20 PAC-likelihood curves, each from a different random order of the haplotypes, and the solid black line is based on the PAC likelihood averaged over the 20 random orders. The other lines correspond to likelihood curves computed using the methods of FD, implemented in the computer program infs (red dashed line); McVean et al. (2002), implemented in LDhat (blue dotted line); and Hudson (2001), using the table generated by the program eh written by Hudson (cyan dot-dashed line). The effective sample size (ESS) for infs at the MLE is given for each data set above the graph and is a measure of the confidence infs has in its estimated likelihood curve (the larger the better). Results for infs for all data sets except 15 and 16 were kindly provided by P. Fearnhead and were obtained using between 50,000 and 5,000,000 iterations. Results for data sets 15 and 16 were obtained by us using 10,000,000 iterations.

## VARIABLE RECOMBINATION RATE

Models for variation in recombination rate: One of our main motivations for developing this model is to explore fine-scale variation in recombination rates. A simple (no interference) model for variation in recombination rates is that crossovers in a single meiosis occur as an inhomogeneous Poisson process, of rate c(x) at position x. Here we consider two specific cases of this general model:

1. A simple single-hotspot model, where c(x) = λc¯ for a ≤ x ≤ b, and c(x) = c¯ otherwise. (4) Here c¯ represents the background rate of crossover, a and b represent the left and right ends of the hotspot region, and λ (>1) quantifies the magnitude of the recombination hotspot. The PAC likelihood for this model is a function of four parameters: a, b, λ, and ρ¯=4Nc¯.

2. A more general model, where if x is a position between markers j and j + 1, then c(x) = λjc¯. (5) Here c¯ represents the background rate of crossover, and λj is a multiplier controlling how the crossover rate between markers j and j + 1 deviates from the background rate. The PAC likelihood for this model is a function of the parameters λ1, ..., λS–1 (where S is the number of SNPs) and ρ¯=4Nc¯.
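Under the single-hotspot model, the expected number of crossovers per meiosis between two positions is simply the integral of c(x), which has a closed form. The helper below is a sketch (the function name is ours):

```python
def expected_crossovers(x1, x2, a, b, lam, c_bar):
    """Expected crossovers per meiosis between positions x1 < x2 under the
    single-hotspot model: rate c_bar outside [a, b] and lam * c_bar inside,
    i.e. the integral of c(x) over [x1, x2]."""
    overlap = max(0.0, min(x2, b) - max(x1, a))  # length of [x1,x2] inside hotspot
    return c_bar * (x2 - x1) + (lam - 1.0) * c_bar * overlap
```

With the values used later in the simulations (a = 0.4, b = 0.5, λ = 10), a region of physical length 1 has total map length 1.9c¯, nearly double its no-hotspot value; the population-scaled analog follows by replacing c¯ with ρ¯ = 4Nc¯.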

TABLE 3

Performance of the simple single-hotspot model, for data sets simulated under various demographic scenarios, with a hotspot of magnitude λ = 10 and with no hotspot (i.e., λ = 1)

For the simple single-hotspot model it is straightforward to obtain numerically the maximum PAC-likelihood estimates for all four parameters simultaneously, although in the examples that we consider we assume that a and b are known and maximize the PAC likelihood in terms of λ and ρ¯. The evidence for the presence of a hotspot can be summarized by the log-likelihood ratio (LLR) for the null hypothesis of no hotspot, H0: λ = 1, vs. the alternative H1: λ > 1. If ρ¯0 denotes the value of ρ¯ that maximizes LPAC-B under H0, and ρ¯1 and λ1 denote the values of ρ¯ and λ that maximize LPAC-B under H1, then LLR = loge[LPAC-B(ρ¯1, λ1)/LPAC-B(ρ¯0, λ = 1)], (6) and large values of LLR represent evidence for the existence of a hotspot. Under standard asymptotic theory, two times the LLR would have (asymptotically) a chi-square distribution on 1 d.f., and so rejecting H0 if LLR > 1.92 would give a hypothesis test with a type I error rate of 0.05. Although it seemed unlikely that standard asymptotic theory would apply here, we found that for data sets simulated under the null hypothesis, rejecting H0 for LLR > 1.92 gave empirical type I error rates close to 0.05 (Table 3), which provides some guidance as to what might be considered a "large" value of LLR.
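The test itself reduces to comparing two maximized log-likelihoods against the 1.92 cutoff. A minimal sketch, with hypothetical log-likelihood values standing in for maximized PAC-B likelihoods:

```python
def hotspot_llr(loglik_h1, loglik_h0, threshold=1.92):
    """LLR test for H1 (lambda > 1) vs. H0 (lambda = 1). Reject H0 when
    LLR > 1.92, i.e. when 2*LLR exceeds 3.84, the 95% quantile of the
    chi-square distribution on 1 d.f."""
    llr = loglik_h1 - loglik_h0  # log of the likelihood ratio in Equation 6
    return llr, llr > threshold

# Hypothetical maximized log-likelihoods under H1 and H0.
llr, reject = hotspot_llr(-100.0, -112.0)  # LLR = 12, as for the TAP2 data below
```

The paper's empirical finding is that this nominal cutoff gives type I error rates close to 0.05 even though the standard asymptotics are not obviously applicable here.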

For the second, more general model, obtaining maximum PAC-likelihood estimates for the parameters creates problems. First, the maximum-likelihood estimates are not unique (indeed there are infinitely many of them), because multiplying all the λj by any constant, and dividing ρ¯ by the same constant, gives exactly the same likelihood. (In technical terms, the parameters are said to be unidentifiable.) Second, even if the identifiability problem is solved (for example, by first obtaining an estimate for ρ¯ assuming that there is no hotspot and then fixing this when estimating the other parameters) there is the practical problem that the likelihood curve for some λj will often be very flat, resulting in estimates for many λj being very close to (or equal to) either 0 or infinity. This seems undesirable: if the likelihood for a particular λj is very flat, this indicates that there is little information about the recombination rate in that marker interval, in which case it seems sensible to estimate that the recombination rate is close to the background rate (i.e., λj ≈ 1), rather than (close to) infinitely bigger or smaller!

To solve both these problems, we assume a "prior" distribution for the λj's: specifically that the λj's are independent and identically distributed, with log10(λj) ∼ N(0, 0.5²). This prior was chosen to allow occasional deviations from the background rate of recombination by a factor of 10 or more (with probability ∼95%, λj lies in the range 0.1-10). This choice of prior could be motivated from a Bayesian viewpoint as reflecting our prior beliefs about the λj's, but it also has the more pragmatic justification that identifying variations of this kind of magnitude seems both interesting and, perhaps, attainable. We consider alternative prior specifications in the discussion.
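The stated 95% range can be checked directly: under log10(λj) ∼ N(0, 0.5²), the interval [0.1, 10] corresponds to ±2 standard deviations on the log10 scale, since log10(10) = 1 = 2 × 0.5. A quick verification using only the standard library:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(0.1 <= lambda_j <= 10) = P(-2 <= Z <= 2) for Z standard normal,
# because log10(0.1)/0.5 = -2 and log10(10)/0.5 = 2.
p_in_range = normal_cdf(2.0) - normal_cdf(-2.0)  # ~0.954
```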

In principle, given the prior distribution for the λj's described above, we could also place a prior distribution on ρ¯ and obtain an approximation to the posterior distribution of all parameters, using Markov chain Monte Carlo, for example. Although this would be our preferred approach, for simplicity we avoid this here and instead use the following ad hoc two-stage approach: first, obtain point estimates for ρ¯ and λ by jointly maximizing the product of LPAC(ρ¯,λ) and the prior density of λ; second, obtain a "posterior distribution" for each λj by fixing all other parameters at their estimated values, discretizing the prior on λj (truncated at log10(λj) = ±3), and computing the corresponding discretized posterior distribution as being proportional to the prior times the PAC likelihood. For data sets with a large number of sites the first stage (optimization over ρ¯, λ) can be very time consuming, requiring large numbers of evaluations of the likelihood function. Further, it seems unlikely that the simple optimization method we used will reliably find the global maximum of the likelihood surface. Both these problems could be alleviated by exploiting the fact that the derivatives of the PAC likelihood can be computed efficiently, but we do not pursue this here.
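The second stage, computing a discretized posterior proportional to prior times likelihood, can be sketched as follows. The grid and log-likelihood values are placeholders for quantities that would come from the PAC model, and the grid is assumed to be spaced in log10(λ):

```python
import math

def log10_normal_prior(lam, sd=0.5):
    """Unnormalized prior weight for lambda_j implied by
    log10(lambda_j) ~ N(0, sd^2)."""
    z = math.log10(lam) / sd
    return math.exp(-0.5 * z * z)

def discretized_posterior(lam_grid, logliks, sd=0.5):
    """Discretized posterior over candidate lambda_j values: proportional to
    prior times likelihood, with all other parameters held fixed."""
    m = max(logliks)  # subtract the max log-likelihood for numerical stability
    w = [log10_normal_prior(l, sd) * math.exp(ll - m)
         for l, ll in zip(lam_grid, logliks)]
    total = sum(w)
    return [wi / total for wi in w]
```

When the likelihood is flat over the grid, the posterior simply reproduces the prior, which is exactly the shrinkage-toward-λj = 1 behavior motivated above.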

Power to detect recombination hotspots and robustness: In this section we assume that there is a single recombination hotspot (Equation 4), whose putative position is known, and examine the power of our model to detect the hotspot under various assumptions about the population demography and SNP marker ascertainment. Although the assumptions made here (in particular, that there is a single hotspot with known putative position) are unrealistic, they provide a convenient framework within which to examine quantitatively the power of our approach and how it is affected by population demographic history and marker ascertainment schemes.

We consider the following scenarios:

1. Constant-size randomly mating population, all markers

2. Constant-size randomly mating population, only markers at frequency >0.1

3. Exponentially expanding population, with expansion starting t = 500 generations ago

4. Exponentially expanding population, with expansion starting t = 5000 generations ago

5. Haplotypes sampled from a structured population, consisting of two islands exchanging migrants at a rate of one per generation (scaled migration parameter 4Nm = 4)

6. Haplotypes sampled from only one of the islands in the structured population described above.

These last four models are the same as those considered in Pritchard and Przeworski (2001). In the two expanding-population scenarios, the population was assumed constant sized until t generations ago, when it started to expand exponentially, continuing until the present. The current population size N0 is set to be 10⁵ and the population growth rate is chosen, as a function of t, to match the expected diversity in a population of constant size 10⁴. (The necessary growth rates of α = 1960 and 350 for t = 500 and 5000, respectively, were kindly provided by M. Przeworski.)

For each scenario we simulated data sets under the simple single-hotspot model described above, using mksample and the postprocessing algorithm described in Appendix C. For the first two scenarios each data set was simulated to have ∼50 segregating sites and 60 chromosomes, with a = 0.4, b = 0.5, ρ¯=20 , and λ= 10 [these values were chosen to approximately match values for the TAP2 data from Jeffreys et al. (2000) considered in Figure 9]. For the expanding population scenarios we set ρ¯=4N0c=200 , and for the structured scenarios ρ¯=20 within each population. For each scenario we also simulated data sets under the same conditions, but with no hotspot (i.e., λ= 1).

We applied the likelihood-ratio test to each data set to test the null hypothesis H0: λ= 1 against the alternative H1: λ> 1 (Table 3). For the scenarios not involving population expansion, the test gave type I error rates of ∼0.05 when applied to data without a hotspot and a power of ∼0.90 when applied to data simulated with a hotspot, although the test based on just the common SNPs had a slightly reduced power. The two scenarios involving population expansion gave either a substantial reduction in power or an inflated type I error rate (which is in some sense equivalent to a reduction in power). This might be due to a reduction in the number of “common” SNPs under these scenarios, as common SNPs tend to be most informative for estimating recombination rates.

We also examined the robustness of estimates of λ under the various scenarios (Figure 3). As noted by FD, the recombination rate ρ¯=4Nc depends on how the effective population size N is defined. In contrast, the definition of the parameter λ does not depend on how the effective population size is defined, and so we might hope that estimation of λ will be robust to departures from the assumption of a constant-sized panmictic population. For the levels of population structure we used in our simulations this does indeed appear to be the case—in both cases estimates were more accurate than those for the sample from a single random-mating population, perhaps because population structure makes recombinants easier to “spot.” As might be expected, estimates based only on common SNPs were less accurate than those based on all SNPs. A drop in accuracy is also evident for the scenarios simulated under population expansion, probably again due to a reduction in the number of “common” SNPs under these scenarios. Some of the scenarios also resulted in an upward bias for estimates of λ, notably one of the expansion scenarios in which the median of the estimates was almost 2.5 times the true value.

Estimating recombination rates along a region: Simulated data: We fitted the more general varying recombination rate model to the simulated data used to produce the pairwise LD plots in Figure 1; the results are shown in Figure 8. From the latter figure we might conclude, correctly, that the data sets corresponding to the top left and bottom middle parts had recombination hotspots somewhere in the region 0.4–0.6. We might also conclude that the other data sets had no hotspots, which would be correct except in the case of the bottom left figure, which was actually generated from data with a hotspot between 0.4 and 0.5. One possible reason that the hotspot shows up less well in this case is that there are fewer sites at high frequency (>0.15) in this data set. Despite the fact that we might have been misled in one case out of six, we view Figure 8 as considerably more informative than Figure 1, from which we find it difficult to draw any conclusions.

TAP2 data: Jeffreys et al. (2000) used patterns of LD (measured by haplotype diversity) in a population sample to refine the location of a putative recombination hotspot in the human TAP2 gene and provided a more detailed characterization of its properties through sperm typing. The population sample consists of 30 individuals from the United Kingdom typed at 47 polymorphisms (45 SNPs, two insertion/deletions) across 9.7 kb, with haplotypes determined by allele-specific PCR.

Figure 8.

—Estimates of variation from background recombination rate within each marker interval for the same simulated data sets that were used to produce Figure 1. Top left, bottom left, and bottom middle correspond to data sets simulated with a single hotspot of magnitude λ= 10 between positions 0.4 and 0.5. The other parts correspond to data simulated with constant recombination rate across the region.

Through analysis of sperm crossover events Jeffreys et al. (2000) identified a region of increased crossover intensity, located approximately in the interval from 4 to 5.2 kb.

We fitted the simple single-hotspot model to the haplotype data (kindly provided in convenient electronic format by A. Jeffreys), assuming a hotspot between 4 and 5.2 kb, and obtained estimates of ρ¯ = 1.3/kb and λ = 12 (95% C.I. [6,21]), with a LLR of 12, indicating strong evidence for the presence of the hotspot. Our estimate for the average magnitude of the hotspot, λ = 12 times the background rate, agrees well with the sperm-typing results from Jeffreys et al. (2000). In particular, Jeffreys et al. (2000; their Figure 4) observed 128 crossovers within the interval 4–5.2 kb in 2.4 million progenitor molecules, giving an average rate of 4.4 cM/Mb, which is 11 times the approximate background rate (for males) of 0.4 cM/Mb that they quote (although they warn that this estimate of the background rate should be "treated with caution"). Since our estimate is based on a population sample, it is actually an estimate of the magnitude of the hotspot in the sex-averaged crossover rates. Jeffreys et al. (2000) point out that the crossover rate in this region appears to be substantially higher in females than in males, and so the sex-averaged crossover rates are likely to be dominated by the female crossover process. Our results therefore suggest that the crossover rate within the 1.2-kb hotspot in female meioses is roughly an order of magnitude higher than the (female) background rate.
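The sperm-typing arithmetic quoted above can be reproduced directly (the 0.4 cM/Mb background is the figure quoted from Jeffreys et al. 2000):

```python
# Worked check of the sperm-typing numbers: 128 crossovers among 2.4 million
# progenitor molecules across a 1.2-kb hotspot.
crossovers, molecules, hotspot_kb = 128, 2.4e6, 1.2
rate = crossovers / molecules                       # crossover probability per meiosis
cM_per_Mb = 100.0 * rate / (hotspot_kb / 1000.0)    # convert to cM/Mb: ~4.4
fold_over_background = cM_per_Mb / 0.4              # ~11x the quoted background
```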

Jeffreys et al. (2000) found that if one assumes that the sex-averaged background recombination rate is equal to the male background recombination rate of ≈0.4 cM/Mb (a number that we again emphasize they suggest should be treated with caution), the observed patterns of LD in the population sample appear consistent with an effective population size of N = 100,000, which contrasts with the more commonly quoted number for humans of N = 10,000. Our estimate of ρ¯ = 1.3/kb supports their analysis, as it corresponds to N ≈ 84,000 if the background sex-averaged recombination rate is assumed to be 0.4 cM/Mb. As pointed out by Jeffreys et al. (2000), one possible explanation for this is differences between male and female recombination rates—in particular, a sex-averaged background rate across the 9.7-kb region of 3.4 cM/Mb would give N ≈ 10,000. An alternative (or additional) explanation, suggested to us by M. Przeworski (personal communication), is that gene conversion events not detected by the sperm-typing experiments could partially account for the unusually large estimated effective population size.
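The conversion from ρ¯ to an effective population size follows from ρ¯ = 4Nc¯, with the background rate converted from cM/Mb to a per-kb crossover probability. The helper below is a sketch (the function name is ours); it reproduces the order of magnitude of both figures quoted above:

```python
def effective_size(rho_bar_per_kb, background_cM_per_Mb):
    """Solve rho_bar = 4*N*c for N, with c converted from cM/Mb to the
    crossover probability per kb per meiosis (1 cM/Mb = 1e-5 per kb)."""
    c_per_kb = background_cM_per_Mb * 1e-5
    return rho_bar_per_kb / (4.0 * c_per_kb)

n_low_background = effective_size(1.3, 0.4)   # ~8e4 (the paper quotes ~84,000)
n_high_background = effective_size(1.3, 3.4)  # ~1e4
```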

Figure 9.

—TAP2 data: MLE with posterior quantiles. Results of fitting the more general model for varying recombination rate to the TAP2 data from Jeffreys et al. (2000). The plot shows the estimated value and 25, 10, and 5% posterior quantiles for λj in each marker interval along the chromosome. The vertical lines indicate the approximate boundaries of the hotspot identified by Jeffreys et al. (2000).

Figure 9 shows the estimates of λj and posterior quantiles obtained by fitting the more general model for recombination rate variation to the TAP2 haplotype data. The hotspot in the region 4–5.2 kb is fairly clear, with some suggestion that it may extend slightly farther to the right than 5.2 kb. The peak of the recombination hotspot is estimated as being ∼14 times the background rate. In the interval corresponding to this peak the posterior probability that λj > 7 is 75% (compared with a prior probability of <5%). However, the large number of parameters estimated within this more general model results in generally poor precision for the estimate of each λj. In particular, confidence in the estimates is probably not sufficient to conclude that the three subpeaks present in Figure 9, within the hotspot, correspond to actual variation in the hotspot intensity.

Lipoprotein lipase data: The lipoprotein lipase (LPL) data (Clark et al. 1998; Nickerson et al. 1998) consist of 9.7 kb of genomic DNA sequence from the human lipoprotein lipase gene from 71 individuals from Jackson, Mississippi (n = 24), Rochester, Minnesota (n = 23), and North Karelia, Finland (n = 24). In the published data, the haplotypic phase for 69 sites was either determined experimentally or estimated by Clark's (1990) algorithm. Although the use of a statistical method to infer some of the phases means that not all the published haplotypes are necessarily correct, the majority seem likely to be accurate, and in this analysis we assume them to be known without error. On the basis of patterns of LD and on the results of phylogenetic-based methods that attempt to infer ancestral recombination events, Templeton et al. (2000) suggested the existence of a putative recombination hotspot in the interval [2987, 4872].

Table 4 shows the results of fitting the simple single-hotspot model to the whole data set and to the data from each subpopulation individually, assuming a hotspot from 3 to 5 kb. Figure 10 shows the results of fitting the more general model for recombination rate variation. Overall, these results seem to support the existence of the putative hotspot, although there is considerable variation in the strength of the evidence (as measured by the LLR), and in the estimated magnitude of the hotspot, across the subpopulations. We note that the apparent magnitude of the hotspot in the Finnish population is smaller in Figure 10 than in Table 4, due to the effect of the prior. There is also tentative evidence, mostly from the Jackson sample, for a smaller-magnitude hotspot between 8 and 9 kb. Although no interval in that region produces a very large estimate for λj, the clustering together of three intervals with moderate λj provides stronger evidence than any one of these estimates taken separately.

TABLE 4

Results of fitting the simple single-hotspot model to the LPL data, to each subpopulation sample individually, and to the combined sample

Figure 10.

—Results of fitting the more general model for varying recombination rate to the LPL haplotype data from Clark et al. (1998). The plot shows the estimated value and 25, 10, and 5% posterior quantiles for λj in each marker interval along the chromosome. The vertical dashed lines indicate the boundaries of the putative recombination hotspot identified by Templeton et al. (2000).

Our results are consistent with those from Fearnhead and Donnelly (2002), who found evidence for the [2987,4872] hotspot in the samples from Rochester and Finland, but not in those from Jackson. In addition, both we and Fearnhead and Donnelly (2002) found that the Rochester and Finland samples give much smaller estimates for ρ than the Jackson sample gives, probably reflecting smaller effective population sizes as a result of a recent bottleneck. One general advantage of the approach we take here, over the method of considering segments of the chromosome separately as do Fearnhead and Donnelly (2002), is that it uses the patterns of LD not only between markers within the hotspot, but also between markers either side of the hotspot, to estimate the magnitude of the hotspot. This may explain why we detected a signal (albeit a modest one) for a [2987,4872] hotspot in the Jackson sample, where Fearnhead and Donnelly (2002) did not.

The large differences among estimates for λ from the three separate population samples are surprising. To examine whether this might be simply due to poor precision for these estimates in one or more of the populations (due, for example, to too much or too little diversity), we constructed ∼95% confidence intervals for λ using an analog of method 1 in the Construction of confidence intervals section (column C.I. in Table 4). Although the coverage probabilities for these C.I.'s are unlikely to be 95%, they give some indication of the curvature of the likelihood surface, and the fact that not one of the three intervals overlaps with either of the other two suggests that the hotspot intensity may indeed vary among the three populations. Additional simulation results (not shown) suggest that the larger effective population size of the Jackson population should actually increase power to detect the hotspot compared with the Finnish population, and so differences in effective population sizes do not appear to explain our results. An alternative explanation is the bias we observed for our estimates of λ under certain expansion scenarios (Figure 3), which might partially explain the large estimate of λ in the Finnish population, for example, although it is unclear whether this is enough to account for the fact that the estimated λ is almost 40 times greater than that in the Jackson population. Biological mechanisms that could lead to different patterns of recombination rate heterogeneity in different populations are known to exist (e.g., Jeffreys and Neumann 2002), and the kinds of methods we introduce here should be helpful in determining how often this occurs in practice.

## DISCUSSION

In this article we have introduced a new statistical model relating patterns of LD at multiple loci to the underlying recombination rate and examined its effectiveness for inferring the underlying rate of recombination. Another potential application of our model is to methods for LD (association) mapping in "case-control" studies, where chromosomes have been collected and typed for both case and control individuals. Several authors, including McPeek and Strahs (1999), Morris et al. (2000), and Liu et al. (2001), have developed methods to use genetic types at multiple loci to perform association mapping for case-control studies. These methods aim to improve on other common methods—which typically test small groups of markers, one group at a time, for association with a trait—by considering data at many SNP markers simultaneously. Although the methods differ in details, broadly speaking they all pursue a strategy of assuming that (subsets of) the case chromosomes share some region identical by descent about a causal mutation and as a result will be more similar than would be expected by chance. The challenge then is to identify regions where (subsets of) the case chromosomes are more similar than would be expected by chance. Models of LD play a key role here, because what would be expected "by chance" depends critically on the amount of LD among loci. In particular, correlations among loci will cause chromosomes to tend to be more similar by chance than if the loci were independent. McPeek and Strahs (1999) use a first-order Markov chain to model LD, so that the probability of observing types (x1,..., xL) at L loci along a chromosome is Pr(x1) Pr(x2|x1) Pr(x3|x2)... Pr(xL|xL–1), where the conditional probabilities Pr(xr|xr–1) are estimated using the control chromosomes. This model was also adopted by Morris et al. (2000).
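The first-order Markov likelihood just described is straightforward to compute. The sketch below uses hypothetical initial and transition probabilities for a two-allele example (in the cited methods these would be estimated from the control chromosomes):

```python
import math

def markov_chain_loglik(haplotype, init_probs, trans_probs):
    """Log-probability Pr(x1) * prod_r Pr(x_r | x_{r-1}) of one haplotype
    under a first-order Markov model of LD (as in McPeek and Strahs 1999)."""
    ll = math.log(init_probs[haplotype[0]])
    for prev, cur in zip(haplotype, haplotype[1:]):
        ll += math.log(trans_probs[(prev, cur)])
    return ll

# Hypothetical two-allele example with strong LD between adjacent markers.
init = {0: 0.5, 1: 0.5}
trans = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.9}
```

Note how the model's limitation shows up directly: the probability of an allele depends only on its immediate neighbor, so a marker in weak LD with its neighbor but strong LD with a more distant marker cannot be represented.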
While the first-order Markov assumption is better than assuming that the loci are independent and may suffice if there is little LD among markers, it seems not to be a good model for LD in general. In particular, it fails to capture the fact that markers may be in weak LD with neighboring markers, but in strong LD with more distant markers. Although McPeek and Strahs (1999) mention that higher-order Markov models might better model LD, such models seem unlikely to be helpful in practice because of the difficulty of estimating all the necessary parameters. The model we have introduced here provides a parsimonious method for modeling LD: even the more general model for varying recombination rates has fewer parameters than the first-order Markov model used previously. Further, in these kinds of applications, where estimation of underlying recombination rates may be of only indirect interest, the usefulness of our model will depend only on whether Pr(h1,..., hn|ρ) is a sensible distribution for h1,..., hn for some value of the parameters ρ, even if this ρ does not correspond precisely to the background recombination rate scaled by the effective population size. Under these circumstances our two approximations πA and πB should perform almost identically, and so πA might be preferred on the grounds that it is simpler to understand and implement and is more amenable to theoretical study.

Another model for LD across multiple sites, introduced by Daly et al. (2001), is based on the empirical observation that in some regions of the genome LD exhibits a “block-like” structure. Daly et al. (2001) model each observed haplotype as a mosaic of “ancestral haplotypes,” with the transition rates among these ancestral states (representing the “historical recombination frequency” between each pair of consecutive markers) being estimated by maximum likelihood. The ancestral haplotypes are identified by an initial scan for regions of low haplotype diversity, although in principle they could instead be treated as parameters in the model. Daly et al. (2001) used this model to produce a summary of patterns of LD that illustrates the haplotype structure in their data more clearly, and in more detail, than would plots of pairwise LD measures. However, it is currently unclear to what extent this model might be helpful for applications involving statistical inference, or prediction, particularly in regions where patterns of LD are less block-like.

Several challenges, which we have ignored here, might arise in applying our method to real data. In particular, we have assumed in our examples that haplotypes are known and that there are no missing genotypes or genotyping errors. A new version of the software package PHASE (Stephens et al. 2001) is under development, and it will deal with these problems by incorporating the PAC likelihood into a Markov chain Monte Carlo algorithm to jointly estimate the recombination rate parameters, haplotypes, missing genotypes, and potential locations of genotyping errors. This algorithm also produces a method for estimating haplotypes that takes account of the decay of LD along chromosomes. Preliminary results for simulated data suggest that these ideas result in slightly more accurate haplotype estimates than does the method described in Stephens et al. (2001).

There are also biological aspects of real data that we have not accounted for here, including, for example, gene conversion, whose effect on patterns of LD in humans has been the subject of considerable recent interest (see Frisse et al. 2001, for example). The effect that the presence of gene conversion will have on our method will vary, depending on how the tract length—about which little is known in humans—compares with the marker density. Gene conversion events with very small tract lengths compared to the marker density will only rarely involve a typed marker and so will tend to have a small effect on our method unless such events are extremely common. Conversely, gene conversion events with longer tract lengths—comparable to the typical distance between markers—will often affect one or more markers and will tend to look like double-crossover events to our method. The presence of gene conversion with this kind of tract length will thus elevate our estimates of recombination rate, perhaps substantially, and regions with elevated rates of such gene conversion may appear as recombination hotspots in our method. In principle the PAC model could be extended to account explicitly for gene conversion by suitable modification of the conditional distribution π. A concrete suggestion for how to achieve this would be to augment the space of the hidden Markov model for the mosaic process (described in detail in appendix a) to include both the current and previous “copied” chromosome and then to modify the Markov jump process to make jumps back to the previously copied chromosome more likely than jumps to other chromosomes. However, this would greatly increase the computational expense of the model, making it unappealing in practice.
A more attractive possibility would be to settle for modeling only those gene conversion events that affect a single marker (which, depending on tract length and marker density, may be the vast majority of gene conversion events affecting patterns of LD). This would require only a simple modification of the conditional distribution (it could be handled similarly to the way that mutations are currently handled), with essentially no increase in the computation required.

Another aspect of real data that we have not accounted for explicitly is population structure. Our simulation results in Figure 3 suggest that for the purposes of identifying recombination hotspots our method is robust to a certain amount of population structure. Nevertheless, modeling population structure explicitly might prove helpful in some settings. For example, it could be used to extend methods for detecting population structure from unlinked markers (e.g., Pritchard et al. 2000) to allow them to be applied to sets of tightly linked markers. Again, a natural approach is to modify the conditional distribution π to account explicitly for population structure. One suggestion is to modify the copying process in the (k + 1)st chromosome (see appendix a) so that, rather than being equally likely to copy all k existing chromosomes, it is more likely to copy chromosomes from the same population than chromosomes from a different population. This would effectively model population structure by making chromosomes more likely to resemble other chromosomes from the same population than chromosomes from different populations. We are currently investigating the effectiveness of a similar idea for LD mapping in case-control studies: treating cases and controls as separate populations and examining whether there appears to be evidence in some regions for the case chromosomes to be more similar to other case chromosomes than to control chromosomes.

While we have concentrated here on models for biallelic loci, the ideas we have introduced could also be used to model LD among multiallelic loci such as microsatellites. There is a natural analog of πA for loci with K alleles (see also the conditional distribution for K-allele loci suggested in FD), and this could form a starting point for further investigation.

To deal with the problem that the PAC likelihood depends on the order in which the haplotypes are considered, we have chosen to average the likelihood over several random orders. One possible alternative would have been to use the pseudo-likelihood (Besag 1974) based on our conditional distribution,
$$L_{\text{pseudo}}(\rho) = \prod_{k=1}^{n} \pi(h_k \mid H_{-k}), \quad (7)$$
where $H_{-k}$ denotes the set of all haplotypes excluding hk. The pseudo-likelihood, by definition, does not depend on the ordering of the haplotypes. This idea is more along the lines of the way that these conditional distributions are used in Stephens et al. (2001). However, in preliminary studies we found that this pseudo-likelihood performed poorly for estimating ρ. Our intuitive explanation for this is that the pseudo-likelihood in effect contains only information on the recombination that is occurring in the tips of the trees and not on the structure of the tree as a whole. (Interestingly, under our approximation the first two haplotypes contain no information on ρ, so in some sense the information on ρ comes from intermediate haplotypes.) Nonetheless, it is possible that the pseudo-likelihood may prove useful in settings where estimating ρ is not of direct interest.

We have introduced here two models for variation in recombination rate: a simple single-hotspot model and a more general model that allows recombination rates to vary along the chromosome. Each of these models has weaknesses. The simple single-hotspot model makes some unrealistic assumptions: the background recombination rate is unlikely to be constant, and neither is the rate within the hotspot; furthermore, there could be more than one hotspot. The more general model makes few assumptions and allows more flexible investigation of patterns of recombination rate variation along a region. However, this extra flexibility comes at the expense of extra parameters, which can reduce the precision with which the parameters can be estimated. When using the model as a general model for LD, rather than for parameter estimation as we have concentrated on here, the precision of parameter estimates may be unimportant, and the few assumptions made by the more general model make it particularly attractive in this situation. When estimation of recombination rates is the main goal, the more general model may be viewed as most suited to exploratory data analysis, identifying plausible positions for hotspots, whose magnitudes might then be estimated under a more parsimonious model. In this situation it might prove fruitful to modify the more general model by putting a more informative prior on the λj's. In particular, a prior in which the λj's are correlated along the chromosome (e.g., an autoregressive prior) would reduce the variance of parameter estimates, at the expense of assuming that recombination rates change more or less smoothly along the chromosome (which may or may not be the case).

In assessing our model as a method for estimating recombination rates from sequence data over moderate genomic regions, perhaps the most natural comparisons to make are with the composite-likelihood methods of Hudson (2001) and Fearnhead and Donnelly (2002). (While some other methods based on summaries of the data might be competitive with these approaches when the recombination rate is assumed constant, they seem likely to suffer from loss of information when fitting models with more parameters, such as either of our models for recombination rate variation.) Of the two composite-likelihood methods, although both are tractable for estimating ρ over large genomic regions, only Hudson's method is comparable with our own in terms of computational expense: our method and Hudson's method typically take seconds, or less, to compute a likelihood, while Fearnhead and Donnelly's method can take hours per likelihood computation. Although the time and effort expended in collecting these kinds of data make it not unreasonable to wait hours or days for the results of analysis, the extra computational burden may make Fearnhead and Donnelly's method difficult to extend to more general settings involving missing genotype data, genotyping error, and/or unknown haplotypic phase, for example. The approach of splitting the sequence data into contiguous segments also has the disadvantage, noted earlier, of estimating recombination rates in a region only on the basis of sites within the region and not sites either side of the region, resulting in potential loss of information. Our limited comparisons with Hudson's method suggest that it performs similarly to our method for estimating the recombination rate when it is assumed constant across the region. In principle, Hudson's method could also be applied to fit models of varying recombination rate along the sequence, and the existence of more than one method to fit such models would be welcome. 
Both approaches seem to offer considerable advantages over other available methods for modeling LD and inferring patterns of recombination rate heterogeneity.

## Acknowledgments

We thank Peter Donnelly, Paul Fearnhead, Gil McVean, Jonathan Pritchard, and Molly Przeworski for helpful conversations and Paul Fearnhead, Richard Hudson, and Mary Kuhner for helpful advice and/or results from software. Two anonymous referees gave helpful comments on the submitted manuscript. This work was supported by a grant from the University of Washington and National Institutes of Health grant no. 1RO1HG/LM02585-01.

## APPENDIX A: THE CONDITIONAL DISTRIBUTION πA

Here we give a formal description of πA. We also provide some additional motivation for the form of this approximation and describe briefly some of the variations on this form with which we have also experimented.

Formal description of πA: Let h1,..., hn denote the n sampled haplotypes typed at S biallelic loci (SNPs). Typically h1,..., hn would come from a sample of n haploid individuals or n/2 diploid individuals. We assume that the distribution of the first haplotype is independent of ρ [e.g., all 2^S possible haplotypes are equally likely, so πA(h1) = 1/2^S]. Consider now the conditional distribution of hk+1, given h1,..., hk, for k ≥ 1. Recall (Figure 2) that hk+1 is an imperfect mosaic of h1,..., hk. That is, for k ≥ 1, at each SNP, hk+1 is a (possibly imperfect) copy of one of h1,..., hk at that position. Let Xj denote which haplotype hk+1 copies at site j (so Xj ∈ {1, 2,..., k}). For example, for haplotype h4A in Figure 2, (X1, X2, X3, X4, X5) = (3, 3, 2, 2, 2). To mimic the effects of recombination, we model the Xj as a Markov chain on {1,..., k}, with Pr(X1 = x) = 1/k (x ∈ {1,..., k}), and
$$\Pr(X_{j+1} = x' \mid X_j = x) = \begin{cases} \exp(-\rho_j d_j / k) + \left(1 - \exp(-\rho_j d_j / k)\right)\dfrac{1}{k} & \text{if } x' = x; \\[4pt] \left(1 - \exp(-\rho_j d_j / k)\right)\dfrac{1}{k} & \text{otherwise}, \end{cases} \quad \text{(A1)}$$
where dj is the physical distance between markers j and j + 1 (assumed known); and ρj = 4Ncj, where N is the effective (diploid) population size, and cj is the average rate of crossover per unit physical distance, per meiosis, between sites j and j + 1 (so that cjdj is the genetic distance between sites j and j + 1). This transition matrix captures the idea that, if sites j and j + 1 are a small genetic distance apart (i.e., cjdj is small), then they are highly likely to copy the same chromosome (i.e., Xj+1 = Xj). We note the following special cases used in this article:

1. Constant recombination rate (see estimating constant recombination rate): $c_j = \bar{c}$ for all j.

2. Simple single-hotspot model (see Power to detect recombination hotspots and robustness): $c_j = \bar{c}$ if markers j and j + 1 are both outside the hotspot, and $c_j = \lambda\bar{c}$ if markers j and j + 1 are both inside the hotspot. (For brevity we omit details of the more tedious, though straightforward, case in which one marker is in the hotspot and the other outside the hotspot.)

3. General variable recombination rate model (see Estimating recombination rates along a region): $c_j = \lambda_j$.
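The copying-process transition probabilities in (A1) are simple enough to sketch directly. The following minimal Python sketch (function and argument names are our own) returns Pr(X_{j+1} = x′ | X_j = x) given the appropriate ρ_j for whichever of the special cases above is in use:

```python
import math


def copy_transition_prob(x_next, x_curr, rho_j, d_j, k):
    """Pr(X_{j+1} = x_next | X_j = x_curr) from Equation (A1).

    rho_j : scaled recombination rate between sites j and j+1 (rho_j = 4*N*c_j)
    d_j   : physical distance between sites j and j+1
    k     : number of previously sampled haplotypes
    """
    # Probability that the copying process does not jump between sites j and j+1.
    p_j = math.exp(-rho_j * d_j / k)
    # A jump lands uniformly on the k chromosomes (possibly back on x_curr).
    return p_j + (1 - p_j) / k if x_next == x_curr else (1 - p_j) / k
```

Each row of the implied transition matrix sums to 1, since p_j + k · (1 − p_j)/k = 1, which is a convenient sanity check.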

To mimic the effects of mutation, the copying process may be imperfect: with probability k/(k + θ) the copy is exact, while with probability θ/(k + θ) a “mutation” will be applied to the copied haplotype. Specifically, if hi,j denotes the allele (0 or 1) at site j in haplotype i, then, given the copying process X1,..., XS, the alleles hk+1,1, hk+1,2,..., hk+1,S are independent, with
$$\Pr(h_{k+1,j} = a \mid X_j = x, h_1, \ldots, h_k) = \begin{cases} \dfrac{k}{k+\theta} + \dfrac{1}{2}\cdot\dfrac{\theta}{k+\theta}, & h_{x,j} = a \\[4pt] \dfrac{1}{2}\cdot\dfrac{\theta}{k+\theta}, & h_{x,j} \ne a. \end{cases} \quad \text{(A2)}$$
[The factor of (1/2) appears in both cases, so that as θ → ∞ both alleles become equally likely.]

We fix the value of θ to be
$$\theta = \left( \sum_{m=1}^{n-1} \frac{1}{m} \right)^{-1}, \quad \text{(A3)}$$
where n is the total number of sampled haplotypes. (See Motivation and variations below for more discussion.)

Computation: Computing πA(hk+1|h1,..., hk) requires a sum over all possible values of the Xj, which can be done efficiently using the forward part of the forward-backward algorithm for hidden Markov models (e.g., Rabiner 1989). Specifically, let hk+1,≤j denote the types of the first j sites of haplotype hk+1, and let αj(x) = Pr(hk+1,≤j, Xj = x). Then α1(x) can be computed directly for x = 1,..., k, and α2(x),..., αS(x) can be computed recursively using
$$\alpha_{j+1}(x) = \gamma_{j+1}(x) \sum_{x'=1}^{k} \alpha_j(x') \Pr(X_{j+1} = x \mid X_j = x') \quad \text{(A4)}$$
$$= \gamma_{j+1}(x) \left( p_j \alpha_j(x) + (1 - p_j) \frac{1}{k} \sum_{x'=1}^{k} \alpha_j(x') \right), \quad \text{(A5)}$$
where γj+1(x) = Pr(hk+1,j+1|Xj+1 = x, h1,..., hk) is given in (A2), and pj = exp(–ρjdj/k).

The value of πA(hk+1|h1,..., hk) can then be computed as
$$\pi_A(h_{k+1} \mid h_1, \ldots, h_k) = \sum_{x=1}^{k} \alpha_S(x). \quad \text{(A6)}$$

The second term in the parentheses of (A5) does not depend on x and needs to be computed only once for each j (as noted in FD). Thus the computational complexity of πA(hk+1|h1,..., hk) increases linearly in the number of SNPs and linearly in k. As a result, the computation of LPAC-A is linear in the number of SNPs and quadratic in the number of chromosomes in the sample.
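The recursion (A4)–(A6), together with the choice of θ in (A3), can be sketched in a few lines of Python. This is an illustrative implementation under the assumptions stated in the comments, not the authors' software; function and argument names are our own:

```python
import numpy as np


def watterson_theta(n):
    """Theta from Equation (A3): the inverse of the harmonic number H_{n-1}."""
    return 1.0 / sum(1.0 / m for m in range(1, n))


def pi_A(h_new, prev, rho, d, n):
    """Forward algorithm (A4)-(A6) for pi_A(h_new | prev).

    h_new : length-S 0/1 array, the haplotype whose conditional is wanted
    prev  : (k, S) 0/1 array of the k previously considered haplotypes
    rho   : length-(S-1) array of scaled rates rho_j between adjacent sites
    d     : length-(S-1) array of physical distances d_j
    n     : total sample size, used to fix theta as in (A3)
    """
    prev = np.asarray(prev)
    h_new = np.asarray(h_new)
    k, S = prev.shape
    theta = watterson_theta(n)
    p_match = k / (k + theta) + 0.5 * theta / (k + theta)  # (A2), h_{x,j} = a
    p_miss = 0.5 * theta / (k + theta)                     # (A2), h_{x,j} != a

    def gamma(j):
        # Emission probabilities gamma_j(x) for all k hidden states at site j.
        return np.where(prev[:, j] == h_new[j], p_match, p_miss)

    alpha = gamma(0) / k                     # alpha_1(x): uniform Pr(X_1 = x) = 1/k
    for j in range(S - 1):
        p_j = np.exp(-rho[j] * d[j] / k)     # no-jump probability
        alpha = gamma(j + 1) * (p_j * alpha + (1 - p_j) * alpha.sum() / k)  # (A5)
    return alpha.sum()                       # (A6)
```

Because the emission probabilities in (A2) sum to 1 over the two alleles, π_A defines a proper distribution: summing `pi_A` over all 2^S possible haplotypes returns 1, which makes a convenient unit check. A PAC likelihood would then multiply these conditionals along an ordering of the haplotypes.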

Motivation and variations: Although it seems intuitively sensible that the transition matrix in (A1) should have the property that the rate at which jumps occur in the copying process should increase with ρ and decrease with the number of previously sampled chromosomes k, it is perhaps not so obvious why we chose the rate ρ/k. Indeed, the empirical results in Figure 4 suggest that this rate is not quite ideal, and appendix b describes a corrected rate, based on these empirical results. However, we can get some idea of why ρ/k is a sensible starting point for the rate parameter from the following informal argument. Assume that h1,..., hk+1 are a random sample of haplotypes from a neutrally evolving, randomly mating, constant-sized population, and consider the unknown genealogical tree relating h1,..., hk+1 at a single site. It follows from the Ewens sampling formula that in this tree, the probability that hk+1 is separated by at least one mutation from each of h1,..., hk (unconditional on the actual values of h1,..., hk) is θ/(k + θ), where θ = 4Nμ, and μ is the probability of mutation per meiosis at that site. Similarly, if we consider marking on the tree recombination events that occur between this site and the next site, the probability that there will be at least one such event separating hk+1 from each of h1,..., hk is ρ/(k + ρ), where ρ = 4Nc and c is the probability of recombination between the two adjacent sites per meiosis. Since ρ is small, ρ/(k + ρ) ≈ ρ/k, giving the rate that we used. (We emphasize that this is not intended to be a formal argument and in particular that it is unclear how our mosaic process relates formally to the genealogical tree relating the haplotypes. It is merely intended to provide additional motivation for the use of this rate and perhaps to stimulate research into a more formal connection.)

The reason for our choice of θ is that $\theta \sum_{m=1}^{n-1} \frac{1}{m}$ is the expected number of mutation events at a single site on the genealogical tree relating a random sample of n chromosomes, so (A3) gives a priori an expected number of mutation events at each site of 1 (although it does not force the number of mutations to be exactly 1, and so our method should be somewhat robust to the presence of multiple mutations at some sites).

We performed simulation experiments along the lines of those used to produce Figure 4 to see whether variations on the conditional distribution πA described above might eliminate the bias we observed for πA. In particular, we tried using values for θ that were up to four times bigger or smaller than that in (A3); estimating θ from the data; replacing the transition probability in (A1) with a transition probability of $\rho_j d_j/(k + \rho_j d_j)$, as in FD; and making use of the more complex mutation mechanism (involving Gaussian quadrature) used in Stephens and Donnelly (2000) and in FD. Although these different variations gave different quantitative results, they all produced similar qualitative patterns, and in particular the bias we observed for πA remained in every variation that we tried. We therefore resorted to the empirical correction described in appendix b below.

## APPENDIX B: πB, A BIAS-CORRECTED VERSION OF πA

To correct the bias observed in the results for $\hat{\rho}_{\text{PAC-A}}$, we modified the transition matrix in (A1) by replacing ρj with δjρj, where δj = exp(a + b log10 ρj). The intercept a and slope b are interpolated on the basis of the number of haplotypes n and segregating sites S in the data (Figure 1), using tensor product interpolation with natural cubic splines, first in the direction of varying n and then in the direction of varying S (Ueberhuber 1997).
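The correction itself is a one-line transformation; the sketch below applies it given already-interpolated coefficients (the function name is our own, and any particular values of a and b are placeholders, not the interpolated values from the paper):

```python
import math


def corrected_rho(rho_j, a, b):
    """Return delta_j * rho_j, where delta_j = exp(a + b * log10(rho_j)).

    a, b : intercept and slope; in practice these would be interpolated from
           tabulated values for the observed (n, S), but they are treated as
           given here.
    """
    delta_j = math.exp(a + b * math.log10(rho_j))
    return delta_j * rho_j
```

With a = b = 0 the correction is the identity (δ_j = 1), which gives a quick sanity check.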

## APPENDIX C: SIMULATING DATA WITH A RECOMBINATION HOTSPOT

We use the following algorithm to postprocess the output from mksample (Hudson 2002) to simulate data under the simple single-hotspot model for recombination variation. Suppose we would like to simulate a sample with approximately S segregating sites. The background recombination rate is ρ. A hotspot of width w = (b – a) lies between positions a and b, with recombination rate λρ, where λ > 1. We follow these steps:

1. Simulate samples with S′= (1 + w(λ–1))S segregating sites and constant recombination rate ρ′ = (1 + w(λ–1))ρ.

2. Multiply the position of each site by a factor of 1 + w(λ–1) so that the total length of the haplotypes is 1 + w(λ–1) instead of 1 (and the background recombination rate is ρ).

3. For sites between a and a + wλ, delete each with probability 1 – 1/λ.

4. For the remaining sites between a and a + wλ, shrink the distances between adjacent sites by a factor of λ.

5. Shift the positions of the sites to the right of the hotspot [subtract w(λ–1)] so that the total length is again 1.

Shrinking the distance between sites in the hotspot produces the effect of elevated recombination rate. Deleting some sites keeps the mutation rate constant over the region.
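Steps 2–5 above amount to a deterministic rescaling plus a random thinning of site positions. A minimal Python sketch follows (the function name is our own; step 1, the constant-rate coalescent simulation itself, is assumed already done):

```python
import numpy as np


def insert_hotspot(positions, a, w, lam, seed=None):
    """Post-process site positions from a constant-rate simulation so that
    [a, a + w] becomes a hotspot with rate lam times the background.

    positions : site positions on [0, 1] from the constant-rate run (step 1)
    a, w, lam : hotspot start, width, and intensity (lam > 1)
    """
    rng = np.random.default_rng(seed)
    stretch = 1 + w * (lam - 1)
    pos = np.asarray(positions, dtype=float) * stretch   # step 2: stretch to length 1 + w(lam-1)
    hi = a + w * lam                                     # right edge of the stretched hotspot
    in_hot = (pos >= a) & (pos < hi)
    keep = ~in_hot | (rng.random(pos.size) < 1.0 / lam)  # step 3: thin hotspot sites
    pos = pos[keep]
    in_hot = (pos >= a) & (pos < hi)
    pos[in_hot] = a + (pos[in_hot] - a) / lam            # step 4: shrink hotspot distances
    pos[pos >= hi] -= w * (lam - 1)                      # step 5: shift the right flank back
    return np.sort(pos)
```

After the transformation all positions again lie in [0, 1], distances inside [a, a + w] have been compressed by a factor of λ (the elevated rate), and the thinning keeps the density of segregating sites roughly constant across the region.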

## Footnotes

• Communicating editor: S. Tavaré