Human immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes chronic infections, so genetic diversity within a single infection can be very high. High-throughput “deep” sequencing can now measure this diversity in unprecedented detail, particularly since it can be performed at different time points during an infection, and this offers a potentially powerful way to infer the evolutionary dynamics of the intrahost viral population. However, population genomic inference from HIV sequence data is challenging because of high rates of mutation and recombination, rapid demographic changes, and ongoing selective pressures. In this article we develop a new method for inference using HIV deep sequencing data, using an approach based on importance sampling of ancestral recombination graphs under a multilocus coalescent model. The approach further extends recent progress in the approximation of so-called conditional sampling distributions, a quantity of key interest when approximating coalescent likelihoods. The chief novelties of our method are that it is able to infer rates of recombination and mutation, as well as the effective population size, while handling sampling over different time points and missing data without extra computational difficulty. We apply our method to a data set of HIV-1, in which several hundred sequences were obtained from an infected individual at seven time points over 2 years. We find mutation rate and effective population size estimates to be comparable to those produced by the software BEAST. Additionally, our method is able to produce local recombination rate estimates. The software underlying our method, Coalescenator, is freely available.
HUMAN immunodeficiency virus (HIV) is a rapidly evolving pathogen that causes a chronic infection for an individual’s lifetime. As a consequence, the genetic diversity within a single infection can be very high. Important clinical variables, such as the rate of progression to AIDS and the set point viral load, are related to the diversity and evolution of the within-patient viral population (Shankarappa et al. 1999; Ross and Rodrigo 2002; Williamson 2003; Edwards et al. 2006; Lemey et al. 2007; Pybus and Rambaut 2009), and so genetic data from these populations are of medical relevance in addition to providing insight into molecular evolutionary processes. However, population genomic inference from HIV sequence data can be challenging as a result of high rates of mutation and recombination within a small RNA genome of ∼10 kb. Furthermore, natural selection is expected to play an important role in shaping within-host HIV genetic diversity (Rouzine and Coffin 1999; Neher and Leitner 2010; Batorsky et al. 2011; Pennings et al. 2014). For example, phylogenies constructed from serially sampled intrahost population sequences are typically characterized by a ladder-like topology (Shankarappa et al. 1999), indicating a rapid and continual turnover of genetic lineages as a result of continual host immune selection. Although the relationship between census viral population size (i.e., viral loads) over the course of infection and virus kinetics is well understood for HIV (Nowak and May 2000), relatively little is known about the effective population size during a single infection.
High-coverage “deep” sequencing is now being used to address these problems and to investigate the genomic diversity of HIV, with several data sets already under study (Henn et al. 2012; Poon et al. 2012; Gall et al. 2013). For the study of within-host evolution of HIV patients, deep sequencing serves as a potentially powerful way to infer the evolutionary and ecological dynamics of the viral population in unprecedented detail, particularly since sequencing can be performed at different time points during infection (Drummond et al. 2003). This is especially important for studying fast-evolving RNA viruses, where the substitution rates and effective population size may change through time. By utilizing the temporal structure of the sampled sequences, the precision of population demographic and evolutionary estimates can be greatly improved (Pybus et al. 2000; Drummond et al. 2003). However, from the perspective of population genomic inference, deep sequencing data are unusual: high-throughput sequencing generates large numbers of short sequence reads, with each read representing 400–700 nt of a viral genome. Typically no virus particle is sampled twice, so that only a small fraction of the viral genome is sequenced for any given virion in the population. Genealogies can be estimated, for example, for each ∼500-nt subgenomic partition, but each genealogy represents a different random sample of individuals from the same population. This presents particular challenges for genomic inference.
To infer parameters of an underlying evolutionary model from deep sequencing data, such as effective population size, mutation rate, and recombination rate, new theoretical and statistical approaches are therefore needed. In this article we work within a coalescent framework and in particular its extensions to allow for serially sampled, or heterochronous, sequences (Rodrigo and Felsenstein 1999). (This is in contrast to the usual situation of isochronous sampling at a single fixed time.) The coalescent is a powerful and flexible framework for modeling the genealogy of a large, panmictic population, with many further extensions that incorporate changing population size, recombination, and recurrent mutations (see Hein et al. 2005, for a textbook introduction). It is a crucial component in the inference of the evolutionary dynamics of fast-evolving RNA viruses, which can be combined with epidemiological data in an approach known as phylodynamics (Grenfell et al. 2004). A potentially powerful method of inference under complicated coalescent evolutionary models is to proceed by computationally intensive Monte Carlo simulation. The quantity of interest, such as the likelihood for the data, is estimated by averaging over a large number of the possible unobserved gene genealogies that could have given rise to the observed sequences. Genealogies of high posterior probability can be targeted by Markov chain Monte Carlo (MCMC) or importance sampling (IS). There is now a sizeable literature on these and related approaches, focusing on various complications of the basic coalescent, depending on the species under study. For example, there are methods to account for recombination [MCMC (Kuhner et al. 2000; Wang and Rannala 2008; Rasmussen et al. 2014) and IS (Griffiths and Marjoram 1996; Fearnhead and Donnelly 2001; McVean et al. 2002; Griffiths et al. 2008; Jenkins and Griffiths 2011)], changing population size [MCMC (Beaumont 1999; Drummond et al. 2002, 2005; Wilson et al. 
2003; Minin et al. 2008) and IS (Griffiths and Tavaré 1994a; Beaumont 2003; Leblois et al. 2014)], and heterochronous sequence data [MCMC (Drummond et al. 2002, 2005; Minin et al. 2008) and IS (Beaumont 2003; Anderson 2005; Fearnhead 2008)].
However, for accurate inference using deep sequencing data from within-host virus populations we need to account for several of these complications simultaneously, and no existing methods are available for this task. In particular, current inference methods in this context have tended to omit the process of recombination (Pybus and Rambaut 2009), limiting our understanding of the extent of the association between recombination and viral adaptation. However, the effective recombination rate is comparable in magnitude to the mutation rate (for example, Shriner et al. 2004 found the recombination rate to be 5.5-fold greater than the mutation rate in HIV); furthermore, recombination may play an important role in the evolution of drug resistance (Kellam and Larder 1995; Neher and Leitner 2010). There is also growing evidence that recombination rates are not constant along the genome (Fan et al. 2007; Archer et al. 2008). Thus, our goal in this article is to develop a coalescent method of inference that can handle all of the following: (i) recombination, (ii) high mutation rates, (iii) heterochronous sequences, (iv) missing data, and (v) changing effective population size.
Here we take a novel importance sampling approach to tackle this problem. It is based on recent developments in the efficient computation of conditional sampling distributions (CSDs) (Paul and Song 2010; Paul et al. 2011; see also Sheehan et al. 2013): the probability distribution of an additionally sampled haplotype, conditioned on the sampled haplotypes we have seen already. These are crucial in the design of efficient IS algorithms: Stephens and Donnelly (2000) noted an equivalence between designing an IS proposal distribution and approximating the (unknown) CSD. This work was later formalized by De Iorio and Griffiths (2004a), which subsequently allowed Griffiths et al. (2008) and Paul and Song (2010) to derive an accurate approximate CSD in the presence of crossover recombination. Finally, work by Paul et al. (2011) resulted in an efficient approximation of the CSD for multiple loci via a hidden Markov model (HMM).
Our work is most closely related to that of Griffiths et al. (2008), whose focus was a model able to account for recombination, but was not designed with heterochronous deep sequencing data in mind and thus exhibited only properties i and ii above. Extending this method to include properties iii, iv, and v raises numerous methodological challenges that we address in further detail below. Briefly, we replace the CSD of Griffiths et al. (2008) with that of the sequentially Markov model of Paul et al. (2011). The latter is more efficient to compute and easily extended to blocks of completely linked sites. To allow for samples taken at different times we introduce an explicit time variable, which in turn determines when samples are inserted into the reconstruction of a genealogy backward in time. Further, and in contrast to the imputation approaches of Fearnhead and Donnelly (2001) and Griffiths et al. (2008), our model allows for haplotypes to be only partially specified, assigning allelic states only at a subset of loci where possible. This dramatically reduces the state space of the model, and it also provides a convenient way of handling missing data.
Despite all of these contributions, application to large high-throughput data sets remains challenging, particularly for full-likelihood methods. To retain tractability, recent research has turned to approximations of the model, or of the likelihood, or both. The approach we take here is a principled method to find an accurate full-likelihood solution by first restricting attention to a two-locus model. Our two-locus model is then extended to multilocus data by analyzing selected pairs of loci separately and then aggregating these pairwise inferences, either by taking the median of the inferred estimates or by combining their likelihood surfaces via a pseudolikelihood [in particular, via a pairwise composite likelihood (McVean et al. 2002; Larribe and Fearnhead 2011)]. As we describe below, we performed a simulation study to demonstrate that our method can recover model parameters accurately.
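The aggregation step can be sketched in a few lines; the following Python fragment (hypothetical names and data layout, not the Coalescenator code) combines per-pair log-likelihood surfaces evaluated on a shared parameter grid, or takes the median of the per-pair estimates:

```python
import numpy as np

def composite_loglik(pair_logliks):
    """Pairwise composite log-likelihood: sum the log-likelihood surfaces
    computed independently for each selected pair of loci.  Each array in
    pair_logliks is assumed to be evaluated on the same parameter grid."""
    return np.sum(np.stack(pair_logliks), axis=0)

def median_of_mles(pair_logliks, grid):
    """Alternative aggregation: the median of the per-pair
    maximum-likelihood estimates over the grid."""
    return float(np.median([grid[int(np.argmax(ll))] for ll in pair_logliks]))
```

Summing log-likelihoods treats the pairs as if independent, which is what makes the result a composite (pseudo)likelihood rather than a true likelihood.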
The inferential process involves simulating ancestral trees backward in time, and, in agreement with Griffiths and Tavaré (1994b), Nielsen (1997), and Jasra et al. (2011), as we get closer to the most recent common ancestor (MRCA), coalescence times increase greatly. This adds undesirably high variance to the inference and requires extensive central processing unit (CPU) time. To circumvent this, we further employ the Time Machine strategy developed by Jasra et al. (2011): stopping the simulation before the MRCA is reached while controlling for the bias, achieving sizable computational savings and reduced variance.
To investigate the performance of our method on empirical data, we analyzed HIV-1 RNA samples that had been extracted and sequenced at seven time points over a period of 2 years, from a patient enrolled in the control arm of the Short Pulse Antiretroviral Therapy at Seroconversion (SPARTAC) study. This patient received no antiviral drugs. Nine regions from the whole HIV-1 genome alignments of these heterochronous data were then analyzed using our model and, for comparison, by the Bayesian MCMC coalescent approach implemented in BEAST (Drummond and Rambaut 2007; Drummond et al. 2012). The two analyses show strong agreement on the mutation rate, effective population size, and time to the most recent common ancestor. In addition, however, our approach also estimates recombination rates.
A C++ implementation of the algorithm is available from https://github.com/OSSCB2013 under the name Coalescenator. The program can process serially sampled, deep sequencing data from viral populations, to infer mutation rates, recombination rates, the effective population size, and times to recent ancestors.
Materials and Methods
Model and notation
In this section we describe our notation and genealogical model. We begin by formulating a two-locus model; that is, given a pair of loci, call them A and B, recombination may occur at any position along the sequence separating them. We regard a locus as a fixed stretch of nucleotides, which may contain several polymorphic sites. Recombination within a locus is ignored.
Suppose we have data collected at $K$ time points $t_1, t_2, \ldots, t_K$, with $t_1$ being the most recent collection time and later indexes extending into the past. We assume the effective population size ($N$) to be constant between consecutive collection times, but it may change at these times. This piecewise-constant model is similar to that of Pybus et al. (2000). Except through their effects on $N$, we otherwise ignore the effects of natural selection and population substructure.
To reduce the parameter space, viral load estimates at the sampling times can be used as a guide for constraining the relative magnitudes of the effective population sizes. In the simplest scenario (e.g., when viral load estimates are unavailable or unreliable or we decide not to use them), a single, constant effective population size can be fitted, and in this article we focus on the inference of a single effective population size parameter. In Results, we investigate the robustness of our inference to this assumption.
Let $\ell_A$ and $\ell_B$ denote the lengths (in nucleotides) of loci A and B, respectively, and set $\ell = \ell_A + \ell_B$ to be their total length. We assume that all nucleotides conform to a diallelic model, with alleles labeled arbitrarily as $\{0, 1\}$. Each nucleotide mutates independently at the same rate and with symmetric transitions, such that the mutation transition matrix for each nucleotide is $P = \left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$. (In other words, $P_{ij}$ denotes the probability that, if the $i$th allele undergoes a mutation event, it changes to the $j$th allele.) Let $\mu$ be the mutation rate per generation time per site, which we assume to be constant across sites, and let $\theta$ be the population-scaled mutation rate per generation per site when the effective population size is $N$. Then $\theta_{AB} = \ell\theta$ is the population-scaled mutation rate per generation time across both loci, with individual locus mutation rates given by $\theta_A = \ell_A\theta$ and $\theta_B = \ell_B\theta$. The overall mutation transition matrix for a locus of length $m$ is
$$Q = \frac{1}{m} \sum_{k=1}^{m} I \otimes \cdots \otimes I \otimes P \otimes I \otimes \cdots \otimes I, \qquad (1)$$
where $I$ is the identity matrix and, for each $k$th summand, $P$ appears in the $k$th position in the direct product (Griffiths and Tavaré 1994b). In other words, when a mutation occurs at a locus, the resulting haplotype is chosen uniformly from among those differing from the parental haplotype at precisely one of the sites. Because of the possibility of a high rate of mutation, this model allows for a site to have undergone more than one mutation event in its genealogical history. In practice we simulate mutation events only at polymorphic sites.
Suppose loci A and B are separated by a region of length $d$ nucleotides, and assume a uniform recombination rate across this region. Let $r$ be the recombination rate per generation time per site, which we assume to be constant across sites. Therefore, if $\rho$ denotes the population-scaled recombination rate per generation time per site, then $\rho_{AB} = d\rho$ is the population-scaled recombination rate per generation time between loci A and B. We assume only crossover-type recombination, which implies an approximately linear relationship between short physical distances and recombination rate.
We sample $n$ haplotypes at a given collection time; by haplotype we mean a sequence of nucleotides that we will later subdivide into a sequence of contiguous blocks of loci. Due to missing data, we observe $a$ haplotypes specified only at locus A, $b$ haplotypes specified only at locus B, and $c$ haplotypes specified at both loci, so that $n = a + b + c$. Reads only partially overlapping with a locus are treated as missing at this locus. Each distinct haplotype $h$ is seen with some multiplicity $n_h$, so that $\sum_h n_h = n$. Haplotypes with data missing at locus A or B are respectively denoted $(\ast, h_B)$ and $(h_A, \ast)$, where $\ast$ stands for an unspecified allelic type. The complete data set is thus written compactly as a configuration $\mathbf{n}$ of haplotype multiplicities. As we reconstruct a genealogy for the sample backward in time, recombination events create lineages each ancestral only at one of the two loci. The notation we have introduced to allow for missing data also allows us to leave unspecified the allelic types of these nonancestral recombinant loci: backward in time, a recombination event replaces a type $(h_A, h_B)$ with $(h_A, \ast)$ and $(\ast, h_B)$.
Parameter estimation in population genetic models requires computation of the likelihood $L(\Phi)$ of the observed data as a function of the model parameters $\Phi$. We start by describing a method for estimating the likelihood for heterochronous data at two loci, with $\Phi$ comprising the mutation rate, the recombination rate, and the effective population size(s). Our method also provides a weighted approximation to the posterior distribution of genealogical histories given these data, so it is straightforward to address questions of ancestral inference, such as the time to the most recent common ancestor (TMRCA) of the data. Note that under the standard (isochronous) coalescent model, it is not possible to identify mutation and recombination rates separately, as only their respective products with the effective population size can be identified. Serial sampling from a rapidly evolving population (relative to the spacing between samples) allows for the separate estimation of these three parameters, because a separate estimate of $N$ is then available. [To understand this, observe that with heterochronous data the times between sample collections appear in the likelihood expression $L(\Phi)$. The timescale of the coalescent is governed by $N$, which therefore now appears in $L(\Phi)$—and so can be estimated—separately from its appearance inside the composite parameters $\theta$ and $\rho$. Further details are given by Rodrigo and Felsenstein 1999, pp. 259–262.]
As it is not possible to write down $L(\Phi)$ analytically, a latent genealogy variable $G$ is introduced. The likelihood is then calculated by marginalizing over $G$:
$$L(\Phi) = \int p(\mathbf{n} \mid G, \Phi)\, p(G \mid \Phi)\, dG. \qquad (2)$$
A naive Monte Carlo estimator for the integral in (2) is given by
$$\hat{L}(\Phi) = \frac{1}{M} \sum_{j=1}^{M} p\big(\mathbf{n} \mid G^{(j)}, \Phi\big), \qquad (3)$$
where $G^{(1)}, \ldots, G^{(M)}$ represent independent samples from the coalescent prior $p(G \mid \Phi)$. This estimator has poor properties because $p(\mathbf{n} \mid G^{(j)}, \Phi) = 0$ with high probability, resulting in $\hat{L}(\Phi)$ having very high variance (Stephens and Donnelly 2000). Estimation of the integral (2) is therefore generally conducted using either MCMC or IS. The latter approach, which we follow here, was pioneered by Griffiths and Tavaré (1994b). An IS estimator is obtained by introducing an artificial proposal distribution $\mathbb{Q}$ and reweighting the samples; the estimator
$$\hat{L}_{\mathrm{IS}}(\Phi) = \frac{1}{M} \sum_{j=1}^{M} p\big(\mathbf{n} \mid G^{(j)}, \Phi\big) \frac{p\big(G^{(j)} \mid \Phi\big)}{\mathbb{Q}\big(G^{(j)}\big)}, \qquad (4)$$
where $G^{(1)}, \ldots, G^{(M)}$ now represent independent samples from $\mathbb{Q}$, has lower variance than (3) provided $\mathbb{Q}$ is chosen carefully. It can be shown that the optimal proposal distribution is the posterior $p(G \mid \mathbf{n}, \Phi)$, in which case $\hat{L}_{\mathrm{IS}}(\Phi)$ has variance 0 (Stephens and Donnelly 2000). This distribution is unknown in general, but it can guide our intuition on how to choose $\mathbb{Q}$.
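The reweighting in (4) is the standard importance-sampling identity, which a self-contained toy example can illustrate (a normal-distribution integral, not the coalescent model; all names here are illustrative):

```python
import numpy as np

def norm_logpdf(x, mu, sd):
    """Log density of a normal distribution with mean mu and s.d. sd."""
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2.0 * np.pi))

def is_estimate(f, m=200_000, seed=0):
    """Importance-sampling estimate of E_p[f(X)] with target p = N(0, 1),
    using proposal Q = N(0, 2) and weights p(x)/Q(x), mirroring (4)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, size=m)                       # draws from Q
    log_w = norm_logpdf(x, 0.0, 1.0) - norm_logpdf(x, 0.0, 2.0)
    return float(np.mean(f(x) * np.exp(log_w)))
```

Here the true value of $E_p[X^2]$ is 1, and the reweighted average recovers it even though no sample is drawn from $p$ itself; a proposal concentrated where the integrand matters is what keeps the weights well behaved.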
Griffiths and Tavaré (1994b) noted that we can focus solely on genealogies compatible with the observed data by simulating them sequentially and by a reversal of time. To illustrate the idea we introduce some further notation similar to that of Stephens and Donnelly (2000): first, regard the coalescent model as a Markov process $(H_{-m}, \ldots, H_{-1}, H_0)$ forward in time on unordered sets of haplotypes. The process starts at $H_{-m}$, the MRCA of the sample, and ends at $H_0$, an unordered set of $n$ observed haplotypes, corresponding to the most recently observed data. The process visits a sequence of states as the genealogy is constructed forward in time by mutations, coalescences, and recombinations; the known rates of these transitions are prescribed by the coalescent model. We refer to $\mathcal{H} = (H_{-m}, \ldots, H_0)$ as the history of the sample. The importance sampling estimator can then be decomposed as
$$\hat{L}_{\mathrm{IS}}(\Phi) = \frac{1}{M} \sum_{j=1}^{M} p\big(H_{-m}^{(j)} \mid \Phi\big) \prod_{i=-m+1}^{0} \frac{p\big(H_i^{(j)} \mid H_{i-1}^{(j)}, \Phi\big)}{\mathbb{Q}\big(H_{i-1}^{(j)} \mid H_i^{(j)}\big)}.$$
The numerators are known from the coalescent model, and the denominators are specified by the proposal distribution. The key point is that this reformulation of the importance sampling procedure can be viewed as exploring the state space of latent histories backward from the sample $H_0$ to the MRCA $H_{-m}$. One advantage of this is immediate: when the only data collection time is $t_1$, then $H_0 = \mathbf{n}$, and by choosing $\mathbb{Q}(H_{i-1} \mid H_i)$ to place positive probability only on states reachable backward in time from $H_i$, we can ensure that $p(H_i \mid H_{i-1}, \Phi) > 0$ for each $i$, so that no simulation is wasted. Moreover, the Markov specification of the proposal distribution helps to simplify our task of actually designing it.
A key observation of Stephens and Donnelly (2000) was that the posterior, conditional distribution of genealogies can be expressed in terms of the CSD $\pi(h \mid \mathbf{n})$, which is defined as the probability that a haplotype sampled from the population is of type $h$, given that we have already sampled the haplotypes with configuration $\mathbf{n}$. Although this distribution is as intractable as the likelihood itself, it provides a sensible starting point in the design of an efficient proposal distribution, since we have reduced the problem to one of finding a good approximation of $\pi$. We emphasize that even if we use a proposal distribution based on an approximation to $\pi$, the reweighting in (4) still provides us with an unbiased and consistent estimator.
New proposal distribution
Our aim then is to design a sequential proposal distribution $\mathbb{Q}$ defining a Markov chain backward in time that approximates the posterior distribution of histories. Following the discussion of the previous section, to achieve this we use two steps: first, express the true backward transition rates in terms of the CSDs $\pi$; second, substitute for the CSD a well-motivated approximation $\hat{\pi}$. We address each of these steps in turn.
Backward transition rates:
For now, fix $\Phi$ and assume a single sample collection time $t_1$. Griffiths et al. (2008) obtained the backward transition rates for a two-locus, finite-alleles model with isochronous samples taken from a population at equilibrium. Their formulation is efficient because it corresponds to ancestral recombination graphs (ARGs) in which only lineages carrying genetic material ancestral to the sample are simulated; entirely nonancestral lineages are integrated out. This dramatically reduces the state space and improves efficiency. However, the simulation continues to assign allelic types to the remaining nonancestral loci. For example, if a lineage in an ARG is ancestral to a member of the sample at locus A but not at locus B, then it is still necessary to assign an allelic status at locus B. In our application these loci represent tens or hundreds of nucleotides, and assigning a haplotypic status to such loci throughout the simulation is cumbersome. We therefore marginalize over these nonancestral loci too. The resulting backward transition rates are given in Table A1; their entries are derived in Appendix A.
Thus far we have ignored changes in population size and heterochronous sampling. To account for these factors we introduce an explicit time variable $\mathbf{t}$, which contains the times between events in the history $\mathcal{H}$ (we now also include the collection of additional samples at times $t_2, \ldots, t_K$ as valid events). To calculate the likelihood, we now need to marginalize over both histories $\mathcal{H}$ and interevent times $\mathbf{t}$, with IS estimator
$$\hat{L}_{\mathrm{IS}}(\Phi) = \frac{1}{M} \sum_{j=1}^{M} \frac{p\big(\mathbf{n} \mid \mathcal{H}^{(j)}, \Phi\big)\, p\big(\mathcal{H}^{(j)}, \mathbf{t}^{(j)} \mid \Phi\big)}{\mathbb{Q}\big(\mathcal{H}^{(j)}, \mathbf{t}^{(j)}\big)}, \qquad (5)$$
where the parameters of interest are $\Phi = (\mu, r, N_1, \ldots, N_K)$ and $(\mathcal{H}^{(j)}, \mathbf{t}^{(j)})$ are independent samples from $\mathbb{Q}$.
Our sequential proposal is described below. Suppose our genealogical reconstruction has proceeded backward to a time $s$ past the sample collection time $t_k$. We first sample an event time backward according to an exponential distribution with rate depending on the current sample size and the current effective population size $N_k$. In anticipation of the appearance of new samples at given times in the past, we measure time in chronological units rather than in generation units (Rodrigo and Felsenstein 1999). Then the waiting time $T$ (in days) for the next event is exponentially distributed with density
$$f(T) = \frac{\lambda_k}{\tau} \exp\!\left(-\frac{\lambda_k T}{\tau}\right), \qquad (6)$$
where $\lambda_k$ is (twice) the total prior rate of events and $\tau$ is the generation time in days. (The parameters $\theta$ and $\rho$ are redefined each time the effective population size changes.) We sample a time $T$ from (6): if the corresponding event time is more recent than the next (pastward) sample collection time, i.e., if $s + T < t_{k+1}$, then we accept this time and define our proposal by the backward transition probabilities of Table A1—only we replace each instance of $\pi$ with an approximation $\hat{\pi}$ developed below. Otherwise, with probability
$$\Pr(T > t_{k+1} - s) = \exp\!\left(-\frac{\lambda_k (t_{k+1} - s)}{\tau}\right),$$
the next event is the insertion of additional samples at the next collection time, and we set the next waiting time to be $t_{k+1} - s$. At collection time $t_{k+1}$ we insert the samples collected at that time into our current configuration. We then update the effective population size parameter to $N_{k+1}$ and continue this iterative procedure until we have both passed the oldest collection time $t_K$ and reached an MRCA for each locus. The overall procedure is illustrated in Figure 1.
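One step of this competing-times scheme can be sketched as follows (a simplified illustration with hypothetical names; the event rates of Table A1 are abstracted into a single total rate):

```python
import numpy as np

def propose_next_event(t_now, total_rate, tau, t_next_sample, rng):
    """Draw an exponential waiting time in days (total coalescent-scale
    event rate total_rate, generation time tau days) and compare it with
    the next pastward sample-collection time.  If the drawn event falls
    beyond that time, the next event is instead the insertion of the
    samples collected then, and the waiting time is truncated."""
    w = rng.exponential(tau / total_rate)        # waiting time in days
    if t_now + w < t_next_sample:
        return t_now + w, "genealogical_event"   # mutation/coalescence/recombination
    return t_next_sample, "insert_samples"
```

The truncation is exactly the exponential race described above: the probability that sample insertion wins is the survival probability of the exponential waiting time at the gap to the next collection.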
Conditional sampling distribution:
Our proposal distribution is obtained by substituting an approximate CSD $\hat{\pi}$ for $\pi$ in Table A1. We use the HMM approach devised by Paul et al. (2011), yielding an algorithm linear in both the number of loci and the number of haplotypes (i.e., linear in sequence length and sample size, respectively). It is a relatively accurate approximation of the true CSD and practical to compute. Briefly, the new haplotype is sampled given an existing configuration $\mathbf{n}$ by assuming that the genealogy for the latter is a simple, improper “trunk” ancestry: one extending infinitely far back into the past with no mutations, recombinations, or coalescences (see Figure 2 and Paul and Song 2010). The alleles of the new haplotype are determined by allowing its ancestral lineage to undergo mutation and recombination and to coalesce into this trunk ancestry at appropriate rates. This framework can be cast as an HMM across loci, whose emissions are the observed alleles of the newly sampled haplotype and whose hidden state at a locus is the pair $(t_\ell, h_\ell)$, where $t_\ell$ is the absorption time of this locus into the trunk and $h_\ell$ is the type of the lineage into which absorption occurs (Figure 2); the length of this locus (in nucleotides) enters through the emission probabilities.
Although this framework is obviously a strong simplification of the true underlying genealogical process, the resulting CSD has many sensible properties (Paul et al. 2011). However, this approach does not lend itself immediately to handling unspecified alleles. We therefore make several further modifications to the CSD of Paul et al. (2011) to construct a practicable IS algorithm, as we now describe.
For our model, where all nucleotides are diallelic and mutate independently, we can simplify the emission probabilities used in the HMM calculation—increasing the computational efficiency greatly. Given a hidden state $(t, h)$ of the Markov chain at a locus of length $\ell$, the emission probability mass function for an observed type $j$ is given by
$$\xi(j \mid t, h) = \sum_{m=0}^{\infty} e^{-\theta_\ell t} \frac{(\theta_\ell t)^m}{m!} \left[Q^m\right]_{hj}, \qquad (7)$$
where $Q$ is the mutation transition matrix (1) and $\theta_\ell = \ell\theta$ is the locus mutation rate. Because sites within a locus mutate independently in our model, we can reformulate this as a product over sites,
$$\xi(j \mid t, h) = \prod_{s=1}^{\ell} \Pr(h_s \to j_s \mid t),$$
where we recall that each site mutates at rate $\theta$ and flips its allele at each mutation event. A site therefore differs from the absorbing lineage precisely when it has undergone an odd number of mutation events, and we can simplify this even further as
$$\xi(j \mid t, h) = \left(\frac{1 - e^{-2\theta t}}{2}\right)^{d_H(h, j)} \left(\frac{1 + e^{-2\theta t}}{2}\right)^{\ell - d_H(h, j)}, \qquad (8)$$
where $d_H(h, j)$ denotes the number of nucleotide differences between $h$ and $j$. Thus, we have successfully eliminated the infinite sum in the emission distribution (7). Because the hidden absorption time is continuous, we follow Paul et al. (2011) and employ Gaussian quadrature to construct a discrete HMM. We chose Laguerre–Gauss quadrature (Abramowitz and Stegun 1972, table 25.9) with 16 points. Applying the forward algorithm to this HMM yields the required CSD. The expression for the emission distribution in this discretized model can also be reduced to a closed-form formula (Appendix B).
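A transcription of this per-site factorization into code may be helpful; the sketch below assumes the convention that each site flips its allele at population-scaled rate θ per unit time (so a site mismatches the absorbing lineage with probability (1 − e^(−2θt))/2), which may differ from the exact time scaling used here:

```python
import math

def emission_prob(locus_len, hamming, theta_site, t):
    """Closed-form emission probability in the spirit of (8): sites mutate
    independently, so the probability factorizes into mismatching and
    matching sites.  The rate convention (flip rate theta_site) is an
    illustrative assumption."""
    e = math.exp(-2.0 * theta_site * t)
    p_diff = 0.5 * (1.0 - e)   # odd number of mutation events at a site
    p_same = 0.5 * (1.0 + e)   # even number (including zero)
    return (p_diff ** hamming) * (p_same ** (locus_len - hamming))
```

Summing over all haplotypes at Hamming distance $d$ from the absorbing type (there are $\binom{\ell}{d}$ of them) recovers 1, as an emission distribution must.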
Emission probability when the absorbing hidden state is unspecified:
The method of Paul et al. (2011) does not deal with the case where the locus of interest at the absorbing hidden state is unspecified, i.e., calculating emissions of the form $\xi(j \mid t, (h_A, \ast))$. In a two-locus model this occurs, for example, when the absorbing state is a haplotype $(h_A, \ast)$ and we are interested in the emission distribution at locus B. In this case, we condition on choosing an absorbing haplotype with the allele at this locus specified (Figure 3B). When there are no haplotypes in the trunk ancestry for which the locus of interest has a specified allele, we sample the allele from the stationary distribution of $Q$ (Equation 1); that is, we pick uniformly from all possible haplotypes at this locus,
$$\xi(j \mid t, (h_A, \ast)) = \left(\frac{1}{2}\right)^{\ell_B}, \qquad (9)$$
where the trunk ancestry has configuration $\mathbf{n}$.
Emission probability for partially observed haplotypes:
If the observed haplotype has an unspecified locus (i.e., we seek to calculate $\hat{\pi}((h_A, \ast) \mid \mathbf{n})$ or $\hat{\pi}((\ast, h_B) \mid \mathbf{n})$), then we transform this to a one-locus problem with a degenerate HMM summing over hidden states at a single locus (Figure 3A). If the absorbing hidden state is unspecified (Figure 3B), then we choose uniformly from the informative trunk lineages as described above.
CSD for recombination event:
To model recombination, we require the CSD for two partially observed haplotypes, $\pi((h_A, \ast), (\ast, h_B) \mid \mathbf{n})$ (see Table A1). This quantity satisfies the following decompositions and symmetries:
$$\pi((h_A, \ast), (\ast, h_B) \mid \mathbf{n}) = \sum_{h_B'} \pi((h_A, h_B') \mid \mathbf{n})\, \pi((\ast, h_B) \mid \mathbf{n} + (h_A, h_B')) \qquad (10a)$$
$$\pi((h_A, \ast), (\ast, h_B) \mid \mathbf{n}) = \sum_{h_A'} \pi((h_A', h_B) \mid \mathbf{n})\, \pi((h_A, \ast) \mid \mathbf{n} + (h_A', h_B)) \qquad (10b)$$
and
$$\pi((h_A, \ast), (\ast, h_B) \mid \mathbf{n}) = \pi((h_A, \ast) \mid \mathbf{n})\, \pi((\ast, h_B) \mid \mathbf{n} + (h_A, \ast)) \qquad (11a)$$
$$\pi((h_A, \ast), (\ast, h_B) \mid \mathbf{n}) = \pi((\ast, h_B) \mid \mathbf{n})\, \pi((h_A, \ast) \mid \mathbf{n} + (\ast, h_B)). \qquad (11b)$$
To estimate this quantity, Griffiths et al. (2008) use relationship (10), substituting approximate CSDs for each fully observed haplotype. In addition, they noted that the approximations may not satisfy the symmetry between (10a) and (10b)—and so they take the average of the two expressions. However, this strategy of averaging over all the alleles at the unspecified loci is very computationally intensive. Here, we instead use relationship (11) and substitute the approximate CSDs $\hat{\pi}((h_A, \ast) \mid \cdot)$ and $\hat{\pi}((\ast, h_B) \mid \cdot)$ from the previous section (Emission probability for partially observed haplotypes). We still take the average of the two expressions (11a) and (11b) to account for asymmetries.
To summarize, solving the forward equation associated with the HMM described above allows us to compute an approximate CSD $\hat{\pi}(h \mid \mathbf{n})$ for each fully specified haplotype $h = (h_A, h_B)$, as well as for the special cases $(h_A, \ast)$ and $(\ast, h_B)$. Plugging this into Table A1 defines an IS proposal distribution for sampling genealogical histories associated with the two loci. Equation 5 then provides an unbiased estimator of the likelihood. We have not yet specified how to compute the numerator in (5); to do this we also need the prior probabilities $p(\mathcal{H}, \mathbf{t} \mid \Phi)$. These can be computed easily and sequentially, using the rightmost column of Table A1 (with minor modifications to account for the simulation of the interevent times). Table A1 covers events involving coalescence, mutation, and recombination. In addition, we also need to calculate the probability of subsampling the observed haplotypes from among the lineages in the reconstructed genealogy, for each collection time $t_k$. Thus, there is a contribution to the numerator of (5) given by the hypergeometric distribution (Beaumont 2003): if $\mathbf{n}'$ is the configuration at the collection time $t_k$ (including the additional samples) and $\mathbf{m}$ is the configuration just before (more recently than) the addition of these samples, then
$$p(\mathbf{m} \mid \mathbf{n}') = \frac{\prod_h \binom{n_h'}{m_h}}{\binom{n'}{m}}, \qquad (12)$$
where $n' = \sum_h n_h'$ and $m = \sum_h m_h$. After decomposing the numerator in this manner, it remains to compute the final term, the probability of the type of the MRCA. Under our uniform mutation model, the stationary distribution of the mutation process, and thus the type of the MRCA, is uniform over the possible types. Finally, the probability of the data given a history, $p(\mathbf{n} \mid \mathcal{H}, \Phi)$, is simply an indicator for whether the configurations at the leaves of the simulated history coincide with the observed data. By construction this is identically equal to 1.
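The hypergeometric contribution at a collection time is a multivariate hypergeometric mass function; this sketch (hypothetical names) computes the probability of drawing a given multiset of per-type counts without replacement from an enlarged configuration:

```python
from math import comb

def subsample_prob(config_counts, drawn_counts):
    """Multivariate hypergeometric probability in the spirit of (12):
    config_counts[h] lineages of each haplotype type are present, and
    drawn_counts[h] of each are selected without replacement."""
    n_total = sum(config_counts)
    n_drawn = sum(drawn_counts)
    numerator = 1
    for tot, drawn in zip(config_counts, drawn_counts):
        numerator *= comb(tot, drawn)
    return numerator / comb(n_total, n_drawn)
```

The per-type binomial coefficients count the ways of choosing the drawn lineages within each type; the denominator counts all ways of choosing the same number of lineages overall.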
Further algorithmic improvements
In this section, we present two modifications to our model for reducing the computational overhead, as well as two strategies for extending the two-locus model to process multilocus data.
The work of Stephens and Donnelly (2000), De Iorio and Griffiths (2004a), Griffiths et al. (2008), Paul and Song (2010), and Paul et al. (2011) suggests that our approach to IS will have attractive statistical efficiency. However, generating proposals according to Table A1 requires the evaluation of all possible events for all haplotypes in the current configuration, which may be too costly for complex samples. We therefore make a simple modification, similar to the one suggested by Fearnhead and Donnelly (2001), which further approximates the proposal suggested by Table A1: consider the waiting time to the next event as the minimum of independent competing exponential times for the events involving each of the haplotypes. Now sample a haplotype to be involved in the next event according to the prior probability of its involvement. In this calculation we exclude the possibility of a coalescence between dissimilar haplotypes, resulting in the probabilities given in Table A2. Next, given the chosen haplotype, we choose the event it is involved in with probabilities proportional to the relevant rows in Table A1; now only those rows need to be computed.
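This two-stage proposal can be sketched as follows. The sketch uses generic per-haplotype event rates rather than the actual Table A1/A2 probabilities, and the function name is hypothetical:

```python
import random

def sample_next_event(hap_event_rates):
    """Two-stage proposal sketch.  hap_event_rates maps a haplotype to a
    dict of {event: rate}.  The minimum of independent competing
    exponentials is equivalent to one exponential at the total rate,
    followed by a categorical draw proportional to each component rate."""
    hap_totals = {h: sum(ev.values()) for h, ev in hap_event_rates.items()}
    total = sum(hap_totals.values())
    t = random.expovariate(total)  # waiting time to the next event
    # Stage 1: choose the haplotype involved, proportional to its total rate.
    h = random.choices(list(hap_totals), weights=list(hap_totals.values()))[0]
    # Stage 2: choose the event for that haplotype only, so only its
    # rows of the event table need to be computed.
    events = hap_event_rates[h]
    ev = random.choices(list(events), weights=list(events.values()))[0]
    return t, h, ev
```

The computational saving comes from stage 2: the full table of event probabilities is evaluated only for the single chosen haplotype.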
As reported by Griffiths and Tavaré (1994b), Nielsen (1997), and Jasra et al. (2011), as we simulate the tree backward and closer to the MRCA, the simulation times increase greatly. These long runs result in undesirably high variance of the likelihood estimate and extensive CPU time, a particular drawback of an IS approach. To circumvent this, Jasra et al. (2011) considered stopping the simulation before the MRCA is reached, in an approach termed the Time Machine, and characterized the resulting bias. First, the bias depends on the underlying mixing of the evolutionary process: the closer the process gets to the root node of the tree, the better it is able to “forget” its initial condition. Second, it depends on the specific distribution of the process at the stopping (exit) time. Using the IS algorithm of Stephens and Donnelly (2000), they investigated the bias-variance trade-off of stopping simulations at the first time back that the number of lineages had decreased to 1%, 2%, 5%, 10%, 25%, and 50% of the original sample size n. They concluded that stopping at 5% strikes the right balance between computational efficiency and numerical accuracy.
We conservatively adopt their approach, stopping at the first time the backward simulation reaches five ancestors. At this exit time, the alleles of the remaining lineages are drawn from a uniform distribution. Indeed, we experience sizable computational savings and reduced variance, without noticeable effect on the quality of the estimates.
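The effect of early stopping on simulation depth can be illustrated with a toy backward walk of the lineage count under the plain coalescent (mutation and recombination ignored); the function name and the combined stopping rule max(5, 5% of n) are illustrative assumptions:

```python
import random

def lineage_count_walk(n, stop_frac=0.05, stop_min=5):
    """Toy backward-in-time walk of the number of ancestral lineages,
    stopped early in the spirit of the Time Machine: halt once the count
    falls to max(stop_min, stop_frac * n) rather than continuing to the
    MRCA, avoiding the long waiting times near the root."""
    stop_at = max(stop_min, int(stop_frac * n))
    k, t = n, 0.0
    while k > stop_at:
        # With k lineages, the coalescence rate is k(k-1)/2
        # (in coalescent time units), so waits grow near the root.
        t += random.expovariate(k * (k - 1) / 2.0)
        k -= 1
    return k, t  # lineages remaining and the (early) exit time
```

In the actual method, the alleles of the lineages remaining at the exit time are then drawn uniformly, as described above.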
Estimation of local and global parameters:
The method described above estimates the likelihood for heterochronous deep sequencing data at a pair of loci. Recall that each locus is a stretch of nucleotides within which recombination is ignored, but between which recombination is explicitly modeled. By searching for the Φ that maximizes this likelihood, we obtain the pairwise maximum-likelihood estimate (pairwise MLE) for the mean mutation rate and effective population size across the pair and the mean recombination rate between the pair.
This two-locus IS algorithm can be used to analyze multilocus data. Given a region of length κ (typically ∼400–500 nt), we partition it into loci of length δ (typically ∼50 nt), defining a collection of loci. We run the IS algorithm on pairs of nonadjacent loci. Adjacent pairs are excluded because recombination between loci separated by a single nucleotide is expected to be negligible. The resulting pairwise MLEs are then indicative of how the local population parameters are distributed within the region.
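The partition and the enumeration of nonadjacent pairs can be sketched as follows (the function name is illustrative; for κ = 500 and δ = 50 this yields the 36 nonadjacent pairs used in the analyses below):

```python
def partition_and_pair(kappa, delta):
    """Partition a region of length kappa into loci of length delta and
    enumerate the index pairs (i, j) of nonadjacent loci on which the
    two-locus IS algorithm is run (adjacent pairs are skipped)."""
    num_loci = kappa // delta
    # 1-based nucleotide spans of each locus
    loci = [(i * delta + 1, (i + 1) * delta) for i in range(num_loci)]
    # all pairs with j >= i + 2, i.e., skipping the adjacent pair j = i + 1
    pairs = [(i, j) for i in range(num_loci) for j in range(i + 2, num_loci)]
    return loci, pairs
```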
In practice, the size of the partitioned loci, δ, would be chosen based on the read length distribution of the data, so that a large proportion of the sample at any particular locus is fully specified and can thus be included in our analysis. (Recall that if a sequence contains even a few missing nucleotides at a given locus, the rest of the information at this locus is discarded and we treat it as a block of missing data.) A balance should be struck: a smaller locus length allows inference of finer recombination and mutation rate variation across the region, at the expense of higher computational complexity.
It is also of interest to have a single, global estimate for the population parameters that is representative of the whole region. An ideal situation is a short region (typically ∼500 nt) where the evolutionary behavior is relatively homogeneous. In such cases, one can take the median or mean of the pairwise MLEs as a simple global estimator; we found the median to be more robust. An alternative, more sophisticated global estimator uses a pairwise composite likelihood (reviewed in Larribe and Fearnhead 2011), which we include for comparison with the median of pairwise MLEs. In this strategy, the two-locus IS algorithm is run on each pair of loci whose separation is within some threshold distance Δ. The global parameters are inferred by maximizing the pairwise composite likelihood (13), in which each factor is based on the marginal data restricted to the pair of loci i and j. Equation 13 is similar to the composite likelihood of McVean et al. (2002), using pairs of multinucleotide loci rather than pairs of single-nucleotide polymorphisms (SNPs), as suggested by Jenkins and Griffiths (2011).
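On a log scale, the pairwise composite likelihood is simply a thresholded sum over locus pairs. A minimal sketch, assuming each two-locus likelihood has already been estimated by the IS algorithm (the function name and data layout are assumptions):

```python
import math

def composite_loglik(pair_liks, starts, delta_max):
    """Pairwise composite log-likelihood in the spirit of Equation 13:
    sum the log two-locus likelihoods over locus pairs whose separation
    is at most delta_max.  pair_liks[(i, j)] holds the estimated
    two-locus likelihood; starts[i] is the starting position of locus i."""
    return sum(math.log(lik)
               for (i, j), lik in pair_liks.items()
               if abs(starts[j] - starts[i]) <= delta_max)
```

In practice this would be evaluated on a grid of parameter values and the maximizing combination reported as the global estimate.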
Introducing a threshold Δ is attractive for both statistical and computational reasons (Larribe and Fearnhead 2011), and there are several additional reasons why Δ should be small in our application. First, the short-read nature of the data means that if two loci are far apart, then many samples will be missing at one or both of them. Second, since our loci represent blocks of nucleotides rather than single SNPs, parameter variation along the sequence implies we should concentrate on proximate pairs of loci. Third, our modeling of complete blocks of nucleotides instead of isolated SNPs prohibits the precomputation of an exhaustive list of pairwise likelihoods, as is possible in McVean et al. (2004), for example. For these reasons we focus on the composite likelihood (13) with a small value of Δ.
These procedures are illustrated in Figure 4.
Simulated heterochronous data sets were generated using the software package NetRecodon (Arenas and Posada 2010), using a Jukes–Cantor substitution model (Jukes and Cantor 1969) and with each nucleotide frequency value equal to 0.25. Population parameters (the mutation and recombination rates per site per generation and the generation time in days) were chosen to be comparable to those previously described for HIV (e.g., Shriner et al. 2004; Neher and Leitner 2010; Batorsky et al. 2011). Each simulation was set to produce sequences of length 500 nt at four collection times separated by 252 days.
Under the above conditions, four data sets were generated under slightly different scenarios. The first pair of data sets, called Const-120 and Const-600, were generated under a constant population size. Const-120 had reads sampled at each collection time, totaling reads. Const-600 had a fivefold increased sample size to reads at each collection time, totaling reads. The second pair of data sets, Dynamic-120 and Dynamic-600, had the same population parameters and sample sizes as Const-120 and Const-600, respectively. However, they were generated under a fluctuating viral population such that corresponded to 1000, 2000, 4000, and 500 at the four collection times (from the most recent to the earliest). This models a population undergoing both an expansion and a bottleneck through time.
The subject studied here was enrolled in the control (no therapy) arm of the SPARTAC study. EDTA plasma samples were obtained at the start of the study (at an estimated 12 days from HIV-1 seroconversion) and at 28, 120, 176, 373, 429, and 695 days after the start of the study. Viral RNA was extracted, whole HIV-1 genomes were sequenced in a pool of 21 samples on a 1/2 PicoTiterPlate using a Genome Sequencer FLX Titanium XL instrument (Roche/454 Life Sciences), and a consensus sequence for each time point was derived as previously described (Gall et al. 2012). The seven consensus sequences were aligned using MAFFT v6.857 (Katoh et al. 2002), and a consensus sequence of this alignment was used as a reference sequence for mapping of all reads, with suffixes representing the time points, using Burrows-Wheeler Aligner v0.5.9 (Li and Durbin 2010). The resulting SAM file was converted into a FASTA file, using a custom Java script. The distributions of read lengths and of positions across the genome are given in Supporting Information, Table S1 and Table S2, respectively.
HIV genome analysis:
Nine regions were selected from the whole HIV genome alignment of mapped reads, each region ∼600 nt in length and comprising samples from all time points. Furthermore, we ensured that these regions were in frame and nonoverlapping, spanning the four main open-reading frames (i.e., gag, pol, env, and nef). Reads covering at least 95% of a region were retained in our analysis. The average number of sequences per time point was 34 (range 20–65). Further details about the sequence data set for each region can be found in Table S3.
To evaluate the performance of Coalescenator in estimating population parameters of interest, primarily the effective population size and mutation rate, we reanalyzed the sequence alignments for the nine regions described above, using the Bayesian MCMC coalescent approach implemented in BEAST (v1.8.0) (Drummond and Rambaut 2007; Drummond et al. 2012). Specifically, a codon-structured nucleotide substitution model (Shapiro et al. 2006), a constant-size population demographic model, and a strict molecular clock prior were employed. Two independent MCMC chains of 50 million steps were run to assess convergence and adequate mixing.
Accession numbers for sequencing data:
The Roche/454 Life Sciences sequencing data obtained in this study are available from the EMBL/NCBI/DDBJ Sequence Read Archive sample accession nos. ERS661087–ERS661093.
To evaluate the validity of our model implementation, Coalescenator, we simulated heterochronous data sets under known population parameters. Using these data sets, we assessed Coalescenator’s performance with respect to the number of Monte Carlo iterations, the sample sizes of the data, and the data’s underlying population size dynamics. For each study, we searched through a three-dimensional parameter space of 11 mutation rates, 11 recombination rates, and 6 effective population sizes to find the combination that best explains the observed diallelic sites in the data (see Table S4 for the summary of the sites classification and the numbers of sites used). We assessed whether parameter combinations with the highest likelihoods corresponded to the true parameters that generated the data. A total of 726 parameter combinations of the following were used:
Effective population sizes: 250, 750, 1000, 1250, 1750, 3000.
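The exhaustive search over the parameter grid can be sketched as below; `loglik` stands in for the Monte Carlo likelihood estimate at a fixed pair of loci and is an assumption of this sketch, as is the function name:

```python
from itertools import product

def grid_mle(loglik, thetas, rhos, pop_sizes):
    """Exhaustive search over the three-dimensional parameter grid:
    evaluate the (Monte Carlo) log-likelihood at every combination of
    mutation rate, recombination rate, and effective population size,
    and return the maximizing triple."""
    return max(product(thetas, rhos, pop_sizes),
               key=lambda prm: loglik(*prm))
```

With 11 mutation rates, 11 recombination rates, and 6 population sizes, the grid holds 11 × 11 × 6 = 726 combinations, each requiring its own IS run.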
Performance for data from constant population size:
The behavior of Coalescenator when analyzing data generated under constant population size was assessed using data sets Const-120 and Const-600 (see Materials and Methods). Coalescenator was run with a constant population size model.
First, the performance of local inference using pairwise MLEs was assessed by analyzing a single pair of loci of length 50 nt each, at positions (1, 50) and (101, 150). Figure 5 shows the likelihood surfaces for Const-120, with 100 and 500 Monte Carlo iterations, respectively; a further increase to 1000 and 5000 iterations is shown in Figure S1. Here, across various parameter choices there is a clear clustering of higher likelihoods near the true parameter values. With only 100 iterations, the pairwise MLE recovers the correct mutation rate (among the 11 parameter values considered) and yields an estimate of the recombination rate, while the MLE for the effective population size is farther off target. By increasing the number of iterations to 500 (Figure 5B), there was a marked improvement in the likelihood surface; it is now much more peaked, and the recombination rate estimate improves to within a factor of 2 of its true value (although for this experiment the effective population size is still overestimated). Overall, parameter estimates are accurate for 500 iterations and certainly to within an order of magnitude. For this data set, we also investigated a more ambitious increase to 1000 and 5000 Monte Carlo iterations (Figure S1); the recombination rate estimate sees modest further improvement, but at a cost of infeasible computing requirements should we hope to scale this to real data sets. Indeed, it is worth emphasizing that these experiments use relatively few Monte Carlo iterations; typical IS approaches to coalescent inference might use several orders of magnitude more. Our aim here is to stretch our algorithm when only a few iterations are available, so that it can feasibly be scaled up to complex multilocus data sets. The results presented here suggest that the sophistication of our algorithm adequately compensates for the dearth of CPU time, and comparisons with longer runs show that even as few as 100 iterations give decent estimates.
The nature of the data, with their high mutation rates and heterochronicity, may also assist in this exploration of tree space.
Figure 6 shows the corresponding likelihood surfaces for Const-600. The likelihood hypersurface concentrates in a similar manner with respect to the true parameter values, but this time there is a shift toward the correct effective population size. This suggests that Coalescenator performs favorably with increasing sample size.
It is also worth commenting on the general shape of the likelihood hypersurfaces in Figure 5 and Figure 6. One might expect to see a ridge corresponding to a negative correlation in the estimation of mutation and recombination rates, as alternative ways of resolving any incompatibilities in the data. In fact the main ridge, evident from several panels in Figure 5 and Figure 6, is parallel to the recombination axis. This reflects the simple fact that recombination is inherently more difficult than mutation to estimate to a given accuracy. A second, weaker ridge is evident in the contours along lines of constant as might be expected. That no other “ridges” seem to be discernible from these likelihood hypersurfaces is encouraging, as it is consistent with the fact that the data contain sufficient information for mutation and recombination parameters to be estimated separately (and that our method can reconstruct the likelihood surface to sufficient accuracy).
The above analysis shows that by using the information from a single pair of loci of 50 nt each, Coalescenator is able to arrive at a sensible indication of the population parameters for the whole 500-nt region. This raises two questions:
For effective inference, how far apart should the two loci be?
A pairwise MLE provides only partial information since it utilizes a subset of the whole region. How, then, should one come up with the global estimate for the whole region?
To answer the first question, inference quality was assessed with respect to the separation d of a pair of loci. The 500-nt sequence region was divided into 10 neighboring loci of 50 nt, and for each of the 36 pairs of nonadjacent loci, pairwise MLEs for the population parameters were calculated. Figure S2 shows how the MLEs vary with d. We observed that the mutation rate estimate is stable across separation distance. However, this is not the case with the recombination estimates. Specifically, (i) variance in MLEs for fixed d is greater than for mutation, and (ii) with increasing d, recombination rate estimates (per site) seem to be lower.
These two observations could have been anticipated to some extent; the signal for recombination in genetic data is generally weaker than that for mutation. Furthermore, with increasing separation between two loci the signal for recombination can become “saturated,” so that recombination events go undetected and recombination rates are underestimated. More precisely, the likelihood curve for ρ is flatter when ρ is greater (Chan et al. 2012). Ultimately, this provides the basis for deciding the appropriate constraint on locus pair separation for reliable inference. This is ∼50–100 nt, and from here onward we proceeded by computing pairwise MLEs for pairs of loci separated by 50 nt.
To address the second challenge of obtaining global parameter estimates, we assessed both approaches discussed in the section Estimation of local and global parameters. The first approach is to take the median of the pairwise MLEs. Figure 7 shows pairwise MLEs and the corresponding median across the data sets. We see that the median can effectively capture the true parameter values, albeit with a slight overestimation. There is also greater variance in the estimates of the recombination rates, which in turn reflects the confidence we should have in these estimates.
The second approach is via a pairwise composite likelihood (Equation 13), in which likelihood surfaces for pairs of loci separated by 50 nt are multiplicatively combined. Figure S3 shows the composite-likelihood surfaces for data set Const-120. The MLEs using 100 and 500 Monte Carlo iterations are both of high accuracy and differ only slightly in the recombination rate estimates; this further illustrates the fast convergence of Coalescenator. With 500 iterations, the true mutation and recombination rates are captured, although the effective population size is off target. Figure S4 shows the result for Const-600, where the larger sample size also provides accurate estimates of the effective population size. For example, with 500 iterations the joint MLE is only two cells away from the true values (on this grid). These estimates are also included in Figure 7 for direct visual comparison with the first approach.
Performance for data from dynamic population sizes:
The behavior of Coalescenator for analyzing data generated under dynamic population sizes was assessed using data sets Dynamic-120 and Dynamic-600. Figure 8 shows the pairwise MLEs and the global estimates based on the median and pairwise composite likelihoods, for both data sets. Each estimate was based on 100 Monte Carlo iterations.
With the incorrect assumption of a constant population size, Coalescenator slightly overestimates the mutation rates. The pairwise MLEs for recombination rates increase in accuracy for the larger sample size, but there is greater variability compared to the analysis in the previous section on Const-120 and Const-600. Effective population size estimates are still of very reasonable accuracy and stability. This suggests that Coalescenator is reasonably robust to unmodeled changes in population size and can be reliably used in this way. Of course, it is straightforward to extend our algorithm to impose population size changes between sample collection times, perhaps guided by viral load information, and even to infer changes in effective population size. However, in the latter case we found that our chosen ranges of sample size and Monte Carlo iterations were insufficient to compensate for the significant increase in model complexity.
As all the cases here and in the previous section illustrate (Figure 7 and Figure 8), the two methods for providing global parameter estimates (taking the median of pairwise MLEs and constructing a composite-likelihood estimate) have generally shown good agreement, corroborating each other’s accuracy. The former seems to give similar or better global measures than the latter, which appears slightly more susceptible to outlier estimates, so from here onward we focus on the median of pairwise MLEs for our global summaries. Clearly, strategies more sophisticated than a simple median might aggregate pairwise estimates more efficiently, and we leave this for future work.
Variance of the importance weights:
For each of the four simulated data sets, we assessed the stability of Coalescenator’s likelihood estimates by computing a coefficient of variation (CV) of the importance weights [i.e., the summands in (5)]. We used 100 Monte Carlo iterations for each particular parameter combination, analyzing each pair of loci separated by 50 nt. There are 726 parameter combinations and eight pairs of loci, resulting in 5808 CVs being computed for each data set.
Table 1 summarizes the distribution of the coefficient of variation for the four simulated data sets. The variance appears to be well constrained, with the majority of CVs <0.001. The larger CVs originate from parameter combinations that deviate farthest from the ground truth parameters. While a small CV (equivalently, a large effective sample size) is not a guarantee of good performance, it is consistent with the idea that the proposal distribution is not too skewed in its exploration of the space of ARGs.
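The CV and the associated effective sample size (ESS) of a set of importance weights are simple summaries; a minimal sketch follows, with the function name and the use of the population (rather than sample) variance as assumptions of the sketch:

```python
import math

def weight_diagnostics(weights):
    """Coefficient of variation of a set of importance weights (the
    summands in the likelihood estimator), together with the standard
    effective-sample-size approximation n / (1 + CV^2)."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / n  # population variance
    cv = math.sqrt(var) / mean
    return cv, n / (1.0 + cv ** 2)
```

Equal weights give CV = 0 and ESS = n; highly skewed weights drive the ESS toward 1, signaling an unreliable estimate.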
Coalescenator scales approximately linearly with respect to the sample size and the number of Monte Carlo iterations. For data set Const-120, a likelihood calculation for a single parameter combination using 100 iterations takes on average 10 sec when run in parallel on a 16-core computer with a 2.0-GHz processor. For data set Const-600, the corresponding calculation increases fivefold to about 50 sec.
Real data analysis
Coalescenator was run on nine HIV regions from HIV genome alignments collected at seven time points from a patient infected with HIV over a period of 2 years (see Materials and Methods). The sequence data are summarized in Table S3.
Each gene region is 600 nt long, but some reads had missing data, chiefly in the first 30 nt and the last 30 nt of the region. Although Coalescenator can handle missing data, for the purpose of this analysis we concentrate on the central positions (51, 550) of each gene region. Sites containing ambiguous nucleotides or indels, as well as nondiallelic sites, were filtered out. Table S5 summarizes the sites classification, as well as the total number and breakdown of the sites used for each gene region.
The parameter combinations extended those used in the simulation study. Additional grid points for the recombination rate were considered, to account for the observation in a preliminary run that several MLEs for recombination rates across the regions fell between the original grid points. Following the simulation study, each gene region was partitioned into ten 50-nt loci, and likelihood surfaces were generated for pairs of loci separated by 50 nt, using 100 iterations.
Figure 9 shows the evolutionary parameter estimates across each region. The median of the pairwise MLEs and the pairwise composite-likelihood estimates agree quite closely for the mutation rate and effective population size estimates, and the mutation rate estimates for each region all lie within a narrow range. There was greater variability across the genome for effective population size estimates: for example, all median-based estimates were in the range 875–1000 in regions encompassing env, while the median-based estimate for the region pol 2836–3436 differed. There was some disagreement between the two estimates of recombination rate. The pairwise composite-likelihood estimates seem to be less stable; the estimate in the region env 6415–7015 is 2–10 times lower than in any other region and 1/10th of the estimate obtained using the median of pairwise MLEs. (As discussed above, this behavior was also evident in the simulation study.) We therefore prefer to use the median of the pairwise MLEs as our regional summaries, with the uncertainty expressed by the variability in individual MLEs.
It is worth noting that some pairwise MLEs are suggestive of a higher mutation rate in parts of the env region (described previously by Alizon and Fraser 2013). There is also some variability in the median-based recombination rate estimates across regions, but this is consistent with Monte Carlo sampling error. A single pairwise likelihood surface between loci (1, 50) and (101, 150) and the pairwise composite-likelihood surface for the env 6415–7015 region are shown in Figure 10. This serves as a sanity check that we indeed have convergence to a plausible likelihood surface, since each grid point is simulated independently.
Finally, we compared our estimates against the output of the corresponding analysis using BEAST (Drummond and Rambaut 2007; Drummond et al. 2012). (We do not compare recombination rates; recall that BEAST is unable to infer these. Our recombination rate estimates are summarized separately in Table 2.) The comparison is illustrated in Figure 11. The raw values for all the estimates are reported in detail in Table S6. The mutation and recombination rate estimates from Coalescenator were converted into the number of substitutions and recombinations per site per year, respectively; the time to the most recent common ancestor, or the time until we reach five ancestors, is given in years. Estimates of mutation rates and effective population sizes show very good agreement, although our method seems to be more cautious in estimating between-region variation. Across the mutation rate estimates, the only notable differences are our higher estimates for the second gag region and the first pol region. This discrepancy could reflect that Coalescenator is prone to slight overestimation of mutation rates, as suggested by the simulation results on Dynamic-120 and Dynamic-600.
We have presented a method capable of handling many of the challenges associated with modeling evolution of rapidly evolving viral populations measured using high-throughput sequencing. It is based on recent developments in the approximation of CSDs (Stephens and Donnelly 2000; De Iorio and Griffiths 2004a; Griffiths et al. 2008; Paul and Song 2010; Paul et al. 2011), and the adoption of this work into a practical IS algorithm should be a useful contribution in itself. The model we derive handles recombination by allowing for partially specified haplotypes and thus avoids the inflation of the model state space caused by alternative imputation procedures. It also provides a natural way of dealing with missing data. One limitation is that data are lost when we simply consider an entire locus as unobserved if any nucleotides are missing; another possibility for future research would be to impute short stretches of missing nucleotides within a locus.
High-throughput sequence data from viral genomes present many challenges. We have focused in particular on handling high mutation rates, recombination, heterochronous sampling, and missing data. Each of these presents its own hurdles; dealing with them simultaneously raises numerous further complications. In this article we have therefore also explored several statistical and algorithmic techniques for reducing the large computational overhead. In particular, employing the Time Machine strategy developed by Jasra et al. (2011) proves to be effective in saving computational time, while controlling the bias and variance. Nevertheless, the overhead remains large, and it is likely that further implementation strategies will have to be explored in the future as sample sizes continue to grow. For example, one advantage of an IS approach is that it is especially suited to parallelization on rapidly advancing graphics processing units (GPUs) (Lee et al. 2010). Lee et al. (2010) focused on MCMC and IS as case studies and concluded that a significant performance boost can be achieved for IS on GPUs, whereas BEAST’s MCMC is inherently less suited to exploiting GPUs’ massive parallelism.
In spite of these challenges, it is encouraging that our new method performs comparably to BEAST in the analysis of a deep sequencing data set generated from a serially sampled acute HIV infection. Like previous studies based on coalescent inference (Brown 1997), BEAST and Coalescenator indicate that the effective population size is of the order of 10^3. While there has been great interest in measuring the effective population size of within-host HIV infections, especially in regard to addressing whether evolution is largely stochastic or deterministic (Rouzine et al. 2014), it is important to note that estimates of effective population size from coalescent-based methods do not typically consider natural selection or other factors that may lead to variation in offspring distribution. Consequently, in populations where variation in reproductive success is likely to be present, such as within-host HIV infections, interpretation of the effective population size is confounded. A more accurate interpretation is that it represents a measure of population turnover or relative genetic diversity. We further note that in this article we have focused our analyses on inference of a single effective population size parameter; a priority for future work will be to make inference of temporal changes in this parameter computationally tractable.
The estimates of recombination rate from Coalescenator strongly agree with non-coalescent-based estimates obtained by Neher and Leitner (2010) and Batorsky et al. (2011). This is encouraging because the sequence data analyzed here were sampled from an early phase of infection, in contrast to previous studies that used data from patients with much longer follow-up (Neher and Leitner 2010; Batorsky et al. 2011). Consequently, this demonstrates the applicability of our method even despite the limited accumulation of diversity in the viral population.
Despite the complicated evolutionary model considered in this article, we have omitted other mechanisms that are likely to be important to within-host HIV evolution. Our treatment of population size changes is restrictive given the phases of exponential expansion and contraction that HIV and other viral populations are known to go through (Nowak and May 2000). A natural solution to this problem would be to fit exponential functions between consecutive time points. More importantly, however, selective responses to the host immune system are known to be an important evolutionary factor (Edwards et al. 2006; Lemey et al. 2006), having an effect on intrahost genealogies that is not fully captured merely by adjusting the effective population size. Other commonly used methods such as BEAST (Drummond et al. 2002, 2005, 2012; Drummond and Rambaut 2007; Minin et al. 2008) also ignore the effects of selection on genealogies (as well as ignoring recombination); thus, developing methods that can robustly account for both selection and recombination (and indeed other factors) remains a challenging task. Another important consideration is population substructure: HIV is known to undergo compartmentalization, forming distinct subpopulations at different anatomical sites and in different cell types (Ewing et al. 2004). Incorporating substructure and migration into IS algorithms is in principle straightforward and has been achieved in other applications (Bahlo and Griffiths 2000; De Iorio and Griffiths 2004b; Griffiths et al. 2008), although it obviously introduces yet more computational burden.
Exploring more complicated mutation models would also be interesting, although the diallelic model we consider here admits a simple analytical solution, which in turn allows for more efficient computation. However, it should be straightforward to allow for time-varying mutation rates as a proxy for the effects of selection. Inclusion of indels in the mutation model would also be desirable; inspection of the HIV data (cf. Table S5) shows that these are highly prevalent. The model also considers only crossover recombination; however, McVean et al. (2002) suggested that a gene-conversion model may be appropriate for bacteria and viruses. Incorporating this type of recombination would be a worthwhile extension.
Finally, we remark that in our analysis we tuned our algorithm to analyze data generated under the 454 platform. However, it should be straightforward to adapt our work for other platforms such as Illumina paired-end sequencing, in which a haplotype would now comprise a pair of reads. Since the number of reads generated by this method is typically severalfold larger, this raises further computational and statistical challenges that form the basis of ongoing work.
This research was initiated during the 2013 Oxford Summer School in Computational Biology. We gratefully acknowledge Sarah Fidler, Steve Kaye, Jonathan Weber, Myra McClure, and the SPARTAC Trial participants and investigators. We thank James Anderson, Farah Colchester, Pierre Haas, Thomas Leitner, István Miklós, and Yee Whye Teh for helpful comments on this work. We acknowledge the use of the University of Oxford Advanced Research Computing facility in carrying out this work (http://dx.doi.org/10.5281/zenodo.22558). This work was supported by the Wellcome Trust and the National Institute for Health Research (NIHR) Biomedical Research Centre funding scheme at Imperial College Healthcare National Health Service (NHS) Trust and University College London Hospitals NHS Foundation Trust. We also acknowledge funding from the Novo Nordisk Foundation (to J.A.S. and L.M.) and the Engineering and Physical Sciences Research Council (EPSRC) (to P.A.J., grant EP/L018497/1). J.R. is supported by the Oxford Martin School. O.G.P. received funding from the European Research Council (ERC) under the European Union’s Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 614725-PATHPHYLODYN.
Appendix A: Backward Transition Rates in the ARG
The distribution of the most recent event back in time in a two-locus ARG, conditioned on observing the sample configuration $n$, is given in Table A1. This distribution is expressed in terms of the (unknown) CSD, and transitions are marginalized over nonancestral genetic material; thus, for example, a recombination event changes a haplotype $(i, j)$ into an $(i, *)$ and a $(*, j)$, with "∗" denoting an unspecified allele. (Such a marginalization approach was proposed by Jenkins and Griffiths 2011, but was based instead on an infinite-sites model.) The corresponding forward transition probabilities are also given in the rightmost column of Table A1. Here, $e_i$ denotes a unit vector with $i$th entry 1 and the rest 0, and $e_{ij}$ denotes a unit matrix whose only nonzero entry is a 1 at position $(i, j)$. The notation $c_{i\cdot}$ is shorthand for $\sum_j c_{ij}$, and so on, and we write $c_{i\cdot}$ and $c_{\cdot j}$ for the marginal counts of fully specified haplotypes. To avoid computing each row of Table A1 at each step, a simplified proposal distribution first picks a haplotype according to Table A2, as described in the main text.
To derive the entries in Table A1 we use sampling exchangeability and Bayes' theorem (see Stephens and Donnelly 2000; De Iorio and Griffiths 2004a). For example, to obtain the first row of Table A1, note that sampling exchangeability implies (A1), since both sides are equal to the probability that we sample the configuration $n$ and then subsample a type: on the left, by selecting the fully specified haplotype we sampled most recently, and on the right, by subsampling from $c$. [Our convention for "unordered" sampling probabilities is defined with respect to any particular ordering of the sample configuration $n$. In this convention, whether or not a haplotype is fully specified is not considered to be "random"; that is, the missingness or otherwise of either allele need not be assigned a probability. Instead, we can think of this as having conditioned on the missingness status of each locus.]
Now, by Bayes' theorem, (A2), with (A3) known from the coalescent prior. Probabilities such as (A3) can be obtained, for example, from the recursive system of equations studied by Ethier and Griffiths (1990) and Jenkins and Song (2009) [for example, multiply equation 5 of Jenkins and Song (2009) by a suitable factor, rearrange to express the system in the required form, and then read off the coefficients of the resulting system].
The entry in the first row of Table A1 follows by combining (A1), (A2), and (A3); the remaining entries, including the notation defined in the bottom row, follow similarly.
Appendix B: Emission Probabilities for Gaussian Quadrature
After applying Gaussian quadrature it is necessary to compute discretized emission probabilities of the form (B1) (Paul et al. 2011, equation 17). In light of (8), we are also able to reduce this to a closed-form formula as follows. Substituting (8) into (B1), we find
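When a closed form is unavailable, integrals of this type can also be evaluated numerically. As a generic sketch (not the paper's equation (B1), whose exact form depends on the CSD), the snippet below approximates an integral over an interval by Gauss-Legendre quadrature, as one would when discretizing a continuous emission density:

```python
import numpy as np

def gauss_legendre_integral(f, a, b, order=10):
    """Approximate the integral of f over [a, b] by Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    # Map nodes from the reference interval [-1, 1] to [a, b].
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * float(np.sum(weights * f(x)))
```

An order-$m$ rule is exact for polynomial integrands of degree up to $2m - 1$, which is why low orders already suffice for smooth emission densities.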
Communicating editor: J. D. Wall
Supporting Information is available online at www.genetics.org/lookup/suppl/doi:10.1534/genetics.115.177931/-/DC1.
- Received June 5, 2015.
- Accepted January 31, 2016.
- Copyright © 2016 by the Genetics Society of America