## Abstract

Methods that bypass analytical evaluations of the likelihood function have become an indispensable tool for statistical inference in many fields of science. These so-called likelihood-free methods rely on accepting and rejecting simulations based on summary statistics, which limits them to low-dimensional models for which the value of the likelihood is large enough to result in manageable acceptance rates. To get around these issues, we introduce a novel, likelihood-free Markov chain Monte Carlo (MCMC) method combining two key innovations: updating only one parameter per iteration and accepting or rejecting this update based on subsets of statistics approximately sufficient for this parameter. This increases acceptance rates dramatically, rendering this approach suitable even for models of very high dimensionality. We further derive that for linear models, a one-dimensional combination of statistics per parameter is sufficient and can be found empirically with simulations. Finally, we demonstrate that our method readily scales to models of very high dimensionality, using toy models as well as by jointly inferring the effective population size, the distribution of fitness effects (DFE) of segregating mutations, and selection coefficients for each locus from data of a recent experiment on the evolution of drug resistance in influenza.

- approximate Bayesian computation
- distribution of fitness effects
- hierarchical models
- high dimensions
- Markov chain Monte Carlo

THE past decade has seen a rise in the application of Bayesian inference algorithms that bypass likelihood calculations with simulations. Indeed, these generally termed likelihood-free or approximate Bayesian computation (ABC) (Beaumont *et al.* 2002) methods have been applied in a wide range of scientific disciplines, including cosmology (Schafer and Freeman 2012), ecology (Jabot and Chave 2009), protein-network evolution (Ratmann *et al.* 2007), phylogenetics (Fan and Kubatko 2011), and population genetics (Cornuet *et al.* 2008). Arguably ABC has had its greatest success in population genetics because inferences in this field are frequently conducted under complex models for which likelihood calculations are intractable, thus necessitating inference through simulations.

Let us consider a model *ℳ* that depends on *n* parameters $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_n)$, creates data *D*, and has the posterior distribution
$$\pi(\boldsymbol{\theta} \mid D) \propto \pi(\boldsymbol{\theta})\,\mathbb{P}(D \mid \boldsymbol{\theta}),$$
where $\pi(\boldsymbol{\theta})$ is the prior and $\mathbb{P}(D \mid \boldsymbol{\theta})$ is the likelihood function. ABC methods bypass the evaluation of $\mathbb{P}(D \mid \boldsymbol{\theta})$ by performing simulations with parameter values sampled from $\pi(\boldsymbol{\theta})$ that generate *D*, which in turn is summarized by a set of *m*-dimensional statistics $\mathbf{s} = \mathbf{s}(D)$. The posterior distribution is then evaluated by accepting such simulations that reproduce the statistics calculated from the observed data $D_{\mathrm{obs}}$ ($\mathbf{s}_{\mathrm{obs}} = \mathbf{s}(D_{\mathrm{obs}})$):
$$\pi(\boldsymbol{\theta} \mid D_{\mathrm{obs}}) \approx \pi(\boldsymbol{\theta} \mid \mathbf{s}_{\mathrm{obs}}) \propto \pi(\boldsymbol{\theta})\,\mathbb{P}(\mathbf{s} = \mathbf{s}_{\mathrm{obs}} \mid \boldsymbol{\theta}).$$
However, for models with $m \gg 1$, the condition $\mathbf{s} = \mathbf{s}_{\mathrm{obs}}$ might be too restrictive and require a prohibitively large simulation effort. Therefore, an approximation step can be employed by relaxing the condition to $\lVert \mathbf{s} - \mathbf{s}_{\mathrm{obs}} \rVert \le \delta$, where $\lVert x - y \rVert$ is a distance metric of choice between *x* and *y* and *δ* is a chosen distance (tolerance) below which simulations are accepted. The posterior is thus approximated by
$$\pi(\boldsymbol{\theta} \mid D_{\mathrm{obs}}) \approx \pi(\boldsymbol{\theta})\,\mathbb{P}(\lVert \mathbf{s} - \mathbf{s}_{\mathrm{obs}} \rVert \le \delta \mid \boldsymbol{\theta}).$$
An important advance in ABC inference was the development of methods coupling ABC with Markov chain Monte Carlo (MCMC) (Marjoram *et al.* 2003). These methods allow efficient sampling of the parameter space in regions of high likelihood, thus requiring fewer simulations to obtain posterior estimates (Wegmann *et al.* 2009). The original ABC-MCMC algorithm proposed by Marjoram *et al.* (2003) is as follows:

1. If now at $\boldsymbol{\theta}$, propose a move to $\boldsymbol{\theta}'$ according to a transition kernel $q(\boldsymbol{\theta} \to \boldsymbol{\theta}')$.
2. Simulate *D* using model *ℳ* with $\boldsymbol{\theta}'$ and calculate summary statistics $\mathbf{s}' = \mathbf{s}(D)$.
3. If $\lVert \mathbf{s}' - \mathbf{s}_{\mathrm{obs}} \rVert \le \delta$, go to step 4; otherwise go to step 1.
4. Calculate the Metropolis–Hastings ratio
$$h = \min\left(1,\; \frac{\pi(\boldsymbol{\theta}')\, q(\boldsymbol{\theta}' \to \boldsymbol{\theta})}{\pi(\boldsymbol{\theta})\, q(\boldsymbol{\theta} \to \boldsymbol{\theta}')}\right).$$
5. Accept $\boldsymbol{\theta}'$ with probability *h*; otherwise stay at $\boldsymbol{\theta}$. Go to step 1.
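The steps above can be sketched in a few lines of code. The sketch below is illustrative only: it uses a hypothetical toy model (estimating the mean of a normal distribution from the sample mean), a symmetric Gaussian proposal (so the kernel ratio cancels in the Metropolis–Hastings step), and arbitrary tuning values.

```python
import numpy as np

def abc_mcmc(s_obs, simulate, prior_logpdf, theta0, delta,
             proposal_sd, n_iter, rng):
    # Likelihood-free MCMC in the style of Marjoram et al. (2003):
    # a proposal is first screened by the distance between simulated
    # and observed statistics, then by a Metropolis-Hastings step.
    theta, chain = theta0, []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, proposal_sd)   # symmetric kernel
        s = simulate(prop, rng)
        if np.linalg.norm(s - s_obs) <= delta:        # ABC screening (step 3)
            h = min(1.0, np.exp(prior_logpdf(prop) - prior_logpdf(theta)))
            if rng.uniform() < h:                     # MH acceptance (steps 4-5)
                theta = prop
        chain.append(theta)
    return np.array(chain)

# Toy usage: estimate the mean of a normal from the sample mean alone.
rng = np.random.default_rng(0)
simulate = lambda mu, rng: np.array([rng.normal(mu, 1.0, 50).mean()])
flat_prior = lambda mu: 0.0 if -10.0 < mu < 10.0 else -np.inf
chain = abc_mcmc(np.array([2.0]), simulate, flat_prior,
                 theta0=2.0, delta=0.3, proposal_sd=0.5,
                 n_iter=5000, rng=rng)
```

Note that with a flat prior and a symmetric kernel, the Metropolis–Hastings ratio is always 1 inside the prior support, so the chain is driven entirely by the ABC screening step.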

The sampling success of ABC algorithms is determined by the likelihood values, which are often very low even for relatively large tolerance values *δ*. In such situations, the condition $\lVert \mathbf{s} - \mathbf{s}_{\mathrm{obs}} \rVert \le \delta$ will impose a quite rough approximation to the posterior. As a result, the utility of the ABC approaches described above is limited to models of relatively low dimensionality, typically up to 10 parameters (Blum 2010; Fearnhead and Prangle 2012). The same limitation applies to the more recently developed sequential Monte Carlo sampling methods (Sisson *et al.* 2007; Beaumont *et al.* 2009). Despite these limitations, ABC has been useful in addressing population genetics problems of low to moderate dimensionality, such as the inference of demographic histories (*e.g.*, Wegmann and Excoffier 2010; Brown *et al.* 2011; Adrion *et al.* 2014) or selection coefficients of a single locus (*e.g.*, Jensen *et al.* 2008). However, as more genomic data become available, there is increasing interest in applying ABC to models of higher dimensionality, such as to estimate genome-wide and locus-specific effects jointly.

To our knowledge, to date, three approaches have been suggested to tackle high dimensionality with ABC. The first approach proposes an expectation propagation approximation to factorize the data space (Barthelmé and Chopin 2014), which is an efficient solution for situations with high-dimensional data, but does not directly address the issue of high-dimensional parameter spaces. The second approach consists of first inferring marginal posterior distributions on low-dimensional subsets of the parameter space [either one (Nott *et al.* 2012) or two dimensions (Li *et al.* 2015)] and then reconstructing the joint posterior distribution from those. This approach benefits from the lower dimensionality of the statistics space when considering subsets of the parameters individually and hence renders the acceptance criterion meaningful. The third approach achieves the same benefit by formulating the problem using hierarchical models, proposing to estimate the hyperparameters first, and then fixing them when inferring parameters of lower hierarchies individually (Bazin *et al.* 2010).

Among these, the approach by Bazin *et al.* (2010) is the most relevant for population genetics problems, since those are frequently specified in a hierarchical fashion by modeling genome-wide effects as hyperparameters and locus-specific effects at lower hierarchies. In this way, Bazin *et al.* (2010) estimated locus-specific selection coefficients and deme-specific migration rates of an island model from microsatellite data. Furthermore, this approach has inspired the development of similar methods for estimating more complex migration patterns (Aeschbacher *et al.* 2013) and locus-specific selection from time-series data (Foll *et al.* 2015). However, this approach and its derivatives will not recover the true joint distribution if parameters are correlated, which is a common feature of such complex models.

Here, we introduce a new ABC algorithm that exploits the reduction of dimensionality of the summary statistics when focusing on subsets of parameters, but couples the parameter updates in an MCMC framework. As we prove below, this coupling ensures that our algorithm converges to the true joint posterior distribution even for models of very high dimensions. We then demonstrate its usefulness by inferring the effective population size jointly with locus-specific selection coefficients and the hierarchical parameters of the distribution of fitness effects (DFE) from allele frequency time-series data.

## Theory

Let us define the random variable $\mathbf{T}_i = \mathbf{t}_i(\mathbf{s})$ as an $m_i$-dimensional function of $\mathbf{s}$. We call $\mathbf{T}_i$ *sufficient* for the parameter $\theta_i$ if the conditional distribution of $\mathbf{s}$ given $\mathbf{T}_i$ does not depend on $\theta_i$. More precisely, let $\boldsymbol{\theta}_{-i}$ be $\boldsymbol{\theta}$ with the *i*th component omitted. Then
$$\mathbb{P}(\mathbf{s} \mid \mathbf{T}_i, \theta_i, \boldsymbol{\theta}_{-i}) = \mathbb{P}(\mathbf{s} \mid \mathbf{T}_i, \boldsymbol{\theta}_{-i}). \tag{1}$$

It is not hard to find examples of parameter-wise sufficient statistics. Most common distributions are members of the exponential family, and for these, the density of $\mathbf{s}$ has the form
$$f(\mathbf{s} \mid \boldsymbol{\theta}) = h(\mathbf{s})\, g(\boldsymbol{\theta}) \exp\left(\sum_j \eta_j(\boldsymbol{\theta})\, T_j(\mathbf{s})\right).$$
For a given parameter $\theta_i$, the vector consisting of only those $T_j(\mathbf{s})$ for which the respective natural parameter function $\eta_j(\boldsymbol{\theta})$ depends on $\theta_i$ is a sufficient statistic for $\theta_i$ in the sense of our definition. Some concrete examples of this type are studied below.
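As a concrete instance of this factorization, consider a sample $x_1, \ldots, x_n$ from a normal distribution $N(\mu, \sigma^2)$, whose joint density can be written in exponential-family form as
$$f(x_1, \ldots, x_n \mid \mu, \sigma^2) \propto \exp\left(\frac{\mu}{\sigma^2} \sum_{i} x_i \;-\; \frac{1}{2\sigma^2} \sum_{i} x_i^2 \;-\; \frac{n\mu^2}{2\sigma^2}\right),$$
with natural parameters $\eta_1 = \mu/\sigma^2$ attached to $T_1 = \sum_i x_i$ and $\eta_2 = -1/(2\sigma^2)$ attached to $T_2 = \sum_i x_i^2$. Only $\eta_1$ depends on *μ*, so $T_1$ (equivalently, the sample mean) is sufficient for *μ*, whereas both natural parameters depend on $\sigma^2$, so the pair $(T_1, T_2)$ is required for $\sigma^2$.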

If sufficient statistics $\mathbf{T}_i$ can be found for each parameter $\theta_i$ and their dimension $m_i$ is substantially smaller than the dimension *m* of $\mathbf{s}$, then the ABC-MCMC algorithm can be greatly improved with the following algorithm, which we denote ABC with **pa**rameter-**s**pecific **s**tatistics (ABC-PaSS) henceforth.

The algorithm starts at time $t = 0$ and at some initial parameter value $\boldsymbol{\theta}^{(0)}$.

1. Choose an index $i \in \{1, \ldots, n\}$ according to a probability distribution $(w_1, \ldots, w_n)$ with $\sum_i w_i = 1$ and all $w_i > 0$.
2. At $\boldsymbol{\theta} = \boldsymbol{\theta}^{(t)}$, propose $\boldsymbol{\theta}'$ according to the transition kernel $q_i(\theta_i \to \theta_i')$, where $\boldsymbol{\theta}'$ differs from $\boldsymbol{\theta}$ only in the *i*th component: $\boldsymbol{\theta}' = (\theta_1, \ldots, \theta_{i-1}, \theta_i', \theta_{i+1}, \ldots, \theta_n)$.
3. Simulate *D* using model *ℳ* with $\boldsymbol{\theta}'$ and calculate summary statistics $\mathbf{s}' = \mathbf{s}(D)$.
4. Calculate $\mathbf{T}_i' = \mathbf{t}_i(\mathbf{s}')$ and let $\delta_i$ be the tolerance for parameter $\theta_i$. If $\lVert \mathbf{T}_i' - \mathbf{T}_i^{\mathrm{obs}} \rVert \le \delta_i$, go to step 5; otherwise go to step 1.
5. Calculate the Metropolis–Hastings ratio
$$h = \min\left(1,\; \frac{\pi(\boldsymbol{\theta}')\, q_i(\theta_i' \to \theta_i)}{\pi(\boldsymbol{\theta})\, q_i(\theta_i \to \theta_i')}\right).$$
6. Accept $\boldsymbol{\theta}'$ with probability *h*; otherwise stay at $\boldsymbol{\theta}$. Increase *t* by one, save the new parameter value $\boldsymbol{\theta}^{(t)}$, and continue at step 1.
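The per-parameter update loop above can be sketched as follows. This is an illustrative toy implementation, not the ABCtoolbox code: it uses uniform index weights, a symmetric Gaussian proposal kernel (so the kernel ratio cancels), and the normal-distribution toy model in which the sample mean screens updates of *μ* while the full statistics pair screens updates of $\sigma^2$.

```python
import numpy as np

def abc_pass(s_obs, simulate, stats_for, prior_logpdf, theta0,
             deltas, proposal_sds, n_iter, rng):
    # One-parameter-at-a-time likelihood-free MCMC (ABC-PaSS sketch).
    # stats_for[i] maps the full statistics vector s to the low-dimensional
    # statistic T_i used when accepting updates of parameter i.
    theta, chain = np.asarray(theta0, float), []
    n = len(theta)
    for _ in range(n_iter):
        i = rng.integers(n)                            # step 1: pick one parameter
        prop = theta.copy()
        prop[i] += rng.normal(0.0, proposal_sds[i])    # step 2: symmetric kernel
        if np.isfinite(prior_logpdf(prop)):            # skip proposals outside prior
            s = simulate(prop, rng)                    # step 3: simulate
            d = np.linalg.norm(stats_for[i](s) - stats_for[i](s_obs))
            if d <= deltas[i]:                         # step 4: parameter-specific test
                h = min(1.0, np.exp(prior_logpdf(prop) - prior_logpdf(theta)))
                if rng.uniform() < h:                  # steps 5-6: MH acceptance
                    theta = prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy usage: normal model; x-bar screens mu, (x-bar, s^2) screens sigma^2.
rng = np.random.default_rng(1)
def simulate(theta, rng):
    x = rng.normal(theta[0], np.sqrt(theta[1]), 50)
    return np.array([x.mean(), x.var(ddof=1)])
prior = lambda th: 0.0 if (-10 < th[0] < 10 and 0.05 < th[1] < 10) else -np.inf
chain = abc_pass(np.array([2.0, 1.0]), simulate,
                 [lambda s: s[:1], lambda s: s], prior,
                 theta0=[2.0, 1.0], deltas=[0.3, 0.6],
                 proposal_sds=[0.5, 0.5], n_iter=8000, rng=rng)
```

The key difference from plain ABC-MCMC is the distance in step 4: it is computed on the low-dimensional statistic specific to the parameter being updated, which keeps acceptance rates high regardless of the total number of statistics.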

Convergence of the MCMC chain is guaranteed by the following:

**Theorem 1.** *If $\delta_i = 0$ and $\mathbf{T}_i$ is sufficient for parameter $\theta_i$ for all $i = 1, \ldots, n$, then the stationary distribution of the Markov chain is $\pi(\boldsymbol{\theta} \mid \mathbf{s} = \mathbf{s}_{\mathrm{obs}})$.*

The *Proof for Theorem 1* is provided in the *Appendix*.

It is important to note that the same algorithm can also be applied to groups of parameters, which may be particularly relevant in the case of very high correlations between parameters that may render their individual MCMC updates inefficient. Also, the efficiency of ABC-PaSS can be improved with all previously proposed extensions for ABC-MCMC. To increase acceptance rates and render ABC-PaSS applicable to models with continuous sampling distributions, for instance, the assumption $\delta_i = 0$ must be relaxed to $\delta_i > 0$ in practice. This is commonly done in ABC applications and will lead to an approximation of the posterior distribution. Because of the continuity of the summary statistics $\mathbf{s}$ and the sufficient statistics $\mathbf{T}_i$, we theoretically recover the true posterior distribution in the limit $\delta_i \to 0$. We can also perform an initial calibration ABC step to find an optimal starting position and tolerance and to adjust the proposal kernel for each parameter (Wegmann *et al.* 2009).

## Materials and Methods

### Implementation

We implemented the proposed ABC-PaSS framework into a new version of the software package ABCtoolbox (Wegmann *et al.* 2010), which will be made available at the authors’ website and will be described elsewhere.

### Toy model 1: Normal distribution

We performed simulations to assess the performance of ABC-MCMC and ABC-PaSS in estimating *μ* and $\sigma^2$ for a univariate normal distribution. We used the sample mean $\bar{x}$ and sample variance $s^2$ of samples of size *n* as statistics. Recall that for noninformative priors, the posterior distribution for *μ* is a scaled and shifted *t*-distribution and the posterior distribution for $\sigma^2$ is a scaled inverse-$\chi^2$ distribution with $n - 1$ d.f., so the true posterior density can be computed analytically. In our simulations, the sample size *n* and the true values of *μ* and $\sigma^2$ were fixed. We performed 50 MCMC chains per simulation and chose effectively noninformative priors for *μ* and $\sigma^2$. Our simulations were performed for a wide range of tolerances (from 0.01 to 41) and proposal ranges (from 0.05 to 1.5). We did this exhaustive search to identify the combination of these tuning parameters that allows ABC-MCMC and ABC-PaSS to perform best in estimating *μ* and $\sigma^2$. We then recorded the minimum total variation distance (TVD) between the true and estimated posteriors over these sets of tolerances and ranges and compared it between ABC-MCMC and ABC-PaSS.

### Toy model 2: General linear model

As a second toy model to compare the performance of ABC-MCMC and ABC-PaSS, we considered general linear models (GLMs) with *m* statistics $\mathbf{s}$ being a linear function of the *n* parameters $\boldsymbol{\theta}$,
$$\mathbf{s} = \mathbf{C}\boldsymbol{\theta} + \boldsymbol{\epsilon},$$
where $\mathbf{C}$ is a square design matrix and the vector of errors $\boldsymbol{\epsilon}$ is multivariate normal. Under noninformative priors for the parameters, their posterior distribution is multivariate normal. We set up the design matrices $\mathbf{C}$ in a cyclic manner to allow all statistics to have information on all parameters but their contributions to differ for each parameter. The normalization factor in the definition of $\mathbf{C}$ was chosen such that the determinant of the posterior variance is constant and thus the widths of the marginal posteriors are comparable independently of the dimensionality *n*. We used all *m* statistics for ABC-MCMC and calculated a single linear combination of statistics per parameter for ABC-PaSS according to *Theorem 2*, using ordinary least squares. For the estimation, we assumed effectively noninformative uniform priors for all parameters. We started the MCMC chains at a normal deviate around the true values of $\boldsymbol{\theta}$. To ensure fair comparisons between methods, we performed simulations of 50 chains for a variety of tolerances (from 0.01 to 256) and proposal ranges (from 0.1 to 8) to choose the combination of these tuning parameters at which each method performed best. We ran all our MCMC chains for a fixed number of iterations per model parameter to account for model complexity.

### Estimating selection and demography

#### Model:

Consider a vector $\boldsymbol{\xi}$ of observed allele trajectories (sample allele frequencies) over *L* loci, as is commonly obtained in studies of experimental evolution. We assume these trajectories to be the result of both random drift and selection, parameterized by the effective population size $N_e$ and locus-specific selection coefficients $s_l$, respectively, under the classic Wright–Fisher model with allelic fitnesses 1 and $1 + s_l$. We further assume the locus-specific selection coefficients to follow a DFE parameterized as a generalized Pareto distribution (GPD) with shape *χ* and scale *σ*. Our goal is thus to estimate the joint posterior distribution $\pi(N_e, \chi, \sigma, s_1, \ldots, s_L \mid \boldsymbol{\xi})$. To apply our ABC-PaSS framework to this problem, we approximate the likelihood term numerically with simulations, while updating the hyperparameters *χ* and *σ* analytically.
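A minimal haploid sketch of the Wright–Fisher simulation step underlying this model is given below; the study's actual C++ implementation in ABCtoolbox may differ in details such as ploidy and the sampling scheme, so all values shown here are illustrative.

```python
import numpy as np

def wf_trajectory(N_e, s, x0, n_gen, rng):
    # Wright-Fisher trajectory with selection: allelic fitnesses 1 and 1 + s.
    # Haploid sketch: deterministic selection shifts the expected frequency,
    # then binomial drift among N_e gene copies each generation.
    x, traj = x0, [x0]
    for _ in range(n_gen):
        x_sel = x * (1.0 + s) / (1.0 + s * x)   # frequency after selection
        x = rng.binomial(N_e, x_sel) / N_e      # drift
        traj.append(x)
    return np.array(traj)

rng = np.random.default_rng(0)
traj = wf_trajectory(N_e=350, s=0.1, x0=0.05, n_gen=13, rng=rng)
```

Repeating this between every pair of sampled time points, and binomially downsampling the frequencies to the sample size, yields one simulated trajectory per locus.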

#### Summary statistics:

To summarize the data, we used statistics originally proposed by Foll *et al.* (2015). Specifically, we first calculated for each locus individually a measure of the difference in allele frequency between consecutive time points,
$$F_s' = \frac{1}{t} \cdot \frac{F_s\left(1 - \frac{1}{2\tilde{n}}\right) - \frac{2}{\tilde{n}}}{\left(1 + \frac{F_s}{4}\right)\left(1 - \frac{1}{n_y}\right)}, \qquad F_s = \frac{(x - y)^2}{z(1 - z)}, \quad z = \frac{x + y}{2},$$
where *x* and *y* are the minor allele frequencies separated by *t* generations, and $\tilde{n}$ is the harmonic mean of the sample sizes $n_x$ and $n_y$. We then summed the $F_s'$ values of all pairs of consecutive time points with increasing and decreasing allele frequencies into $F_{si}$ and $F_{sd}$, respectively (Foll *et al.* 2015). Finally, we followed Aeschbacher *et al.* (2012) and calculated boosted variants of the two statistics to take more complex relationships between parameters and statistics into account. The full set of statistics used per locus, $\mathbf{F}$, consisted of $F_{si}$ and $F_{sd}$ together with their boosted variants.
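A sketch of this per-locus calculation follows. The sampling correction inside `fs_prime` is written under the assumption of a Jorde–Ryman-style form as used by WF-ABC; the exact expression should be checked against Foll *et al.* (2015), and the trajectory and sample sizes below are hypothetical.

```python
import numpy as np

def fs_prime(x, y, t, n_x, n_y):
    # Allele-frequency change statistic between two time points (assumed
    # Jorde & Ryman-style sampling correction; verify against Foll et al. 2015).
    z = (x + y) / 2.0
    fs = (x - y) ** 2 / (z * (1.0 - z))
    n_harm = 2.0 / (1.0 / n_x + 1.0 / n_y)    # harmonic mean of sample sizes
    return (fs * (1.0 - 1.0 / (2.0 * n_harm)) - 2.0 / n_harm) / (
        (1.0 + fs / 4.0) * (1.0 - 1.0 / n_y)) / t

# Sum over consecutive pairs, split by the direction of the frequency change.
freqs = np.array([0.05, 0.10, 0.08, 0.20])    # hypothetical minor allele freqs
fsi = fsd = 0.0
for x, y in zip(freqs[:-1], freqs[1:]):
    f = fs_prime(x, y, t=13, n_x=1000, n_y=1000)
    if y > x:
        fsi += f                              # increasing frequencies
    else:
        fsd += f                              # decreasing frequencies
```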

We next calculated parameter-specific linear combinations for $N_e$ and the locus-specific *s*, following the procedure developed above. To do so, we simulated allele trajectories of a single locus for different values of $N_e$ and *s* sampled from their priors. We then calculated $\mathbf{F}$ for each simulation and performed a Box–Cox transformation to linearize the relationships between statistics and parameters (Box and Cox 1964; Wegmann *et al.* 2009). We then fitted a linear model as outlined in Equation A3 to estimate the coefficients of an approximately sufficient linear combination of $\mathbf{F}$ for each of the parameters $N_e$ and *s*. This resulted in the parameter-specific statistics $T_{N_e}$ and $T_s$. To combine information across loci when updating $N_e$, we then averaged the locus-specific combinations,
$$T_{N_e} = \frac{1}{L} \sum_{l=1}^{L} T_{N_e}^{(l)},$$
where $T_{N_e}^{(l)}$ denotes the linear combination calculated from the statistics of locus *l*. In summary, we used this set of parameter-specific statistics as the ABC approximation of the full statistics space.

### Simulations and application

We applied our framework to allele frequency data for the whole influenza H1N1 genome obtained in a recently published evolutionary experiment (Foll *et al.* 2014). In this experiment, influenza A/Brisbane/59/2007 (H1N1) was serially amplified on Madin–Darby canine kidney (MDCK) cells for 12 passages of 72 hr each, corresponding to ∼13 generations (doublings). After the three initial passages, samples were passed either in the absence of drug or in the presence of increasing concentrations of the antiviral drug oseltamivir. At the end of each passage, samples were collected for whole-genome high-throughput population sequencing. We obtained the raw data from http://bib.umassmed.edu/influenza/ and, following the original study (Foll *et al.* 2014), we downsampled it to 1000 haplotypes per time point and filtered it to contain only loci for which sufficient data were available to calculate the statistics. Specifically, we included all loci for which the minor allele frequency was sufficiently high at a sufficient number of time points. There were 86 and 42 such loci for the control and drug-treated experiments, respectively. Further, we restricted our analysis of the data of the drug-treated experiment to the last nine time points, during which drug was administered.

We performed all our Wright–Fisher simulations with in-house C++ code implemented as a module of ABCtoolbox. We simulated 13 generations between time points and a sample of size 1000 per time point. We set the prior for $N_e$ uniform on the $\log_{10}$ scale and chose uniform priors for the shape and scale parameters of the GPD. For the simulations where no DFE was assumed, we instead set a uniform prior on *s* directly.

As above, we ran all our ABC-PaSS chains for a fixed number of iterations per model parameter to account for model complexity. To ensure fast convergence, the ABC-PaSS implementation benefited from an initial calibration step we originally developed for ABC-MCMC and implemented in ABCtoolbox (Wegmann *et al.* 2009). Specifically, we first generated 10,000 simulations with values drawn randomly from the prior. For each parameter, we then selected the 1% subset of these simulations with the smallest distances to the observed data based on the linear combination specific for that parameter. These accepted simulations were used to calibrate three important metrics prior to the MCMC run: First, we set the parameter-specific tolerances to the largest distance among the accepted simulations. Second, we set the width of the parameter-specific proposal kernel to half of the standard deviation of the accepted parameter values. Third, we chose the starting value of the chain for each parameter as the accepted simulation with smallest distance. Each chain was then run for 1000 iterations, and new starting values were chosen randomly among the accepted calibration simulations for those parameters for which no update was accepted. This was repeated until all parameters were updated at least once.
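For a single parameter, this calibration step can be sketched as follows. The retention rate and derived quantities follow the text; the toy model, names, and all numbers in the usage example are hypothetical, and the ABCtoolbox implementation is more general.

```python
import numpy as np

def calibrate(sim_params, sim_stats, obs_stat, keep=0.01):
    # Calibration for one parameter (sketch): retain the simulations closest
    # to the observation and derive tolerance, proposal width, and a start value.
    d = np.abs(sim_stats - obs_stat)
    k = max(1, int(keep * len(d)))
    idx = np.argsort(d)[:k]                    # closest 1% of simulations
    tolerance = d[idx].max()                   # largest retained distance
    proposal_sd = sim_params[idx].std() / 2.0  # half the SD of retained values
    start = sim_params[idx[0]]                 # best-matching simulation
    return tolerance, proposal_sd, start

# Hypothetical usage with a one-parameter toy model.
rng = np.random.default_rng(4)
sim_params = rng.uniform(0.0, 1.0, 10_000)               # draws from the prior
sim_stats = sim_params + rng.normal(0.0, 0.05, 10_000)   # parameter-specific statistic
tol, prop_sd, start = calibrate(sim_params, sim_stats, obs_stat=0.5)
```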

### Data availability

The authors state that all data necessary for confirming the conclusions presented in the article are represented fully within the article.

## Results

### Toy model 1: Normal distribution

We first compared the performance of ABC-PaSS and ABC-MCMC under a simple model: the normal distribution with parameters mean (*μ*) and variance ($\sigma^2$). Given a sample of size *n*, the sample mean ($\bar{x}$) is a sufficient statistic for *μ*, while both $\bar{x}$ and the sample variance ($s^2$) are sufficient for $\sigma^2$ (Casella and Berger 2002). For ABC-MCMC, we used both $\bar{x}$ and $s^2$ as statistics. For ABC-PaSS, we used only $\bar{x}$ when updating *μ* and both $\bar{x}$ and $s^2$ when updating $\sigma^2$.

We then compared the accuracy of the two algorithms by calculating the total variation distance (TVD) between the inferred and the true posteriors, with the inferred posterior obtained by kernel smoothing of 10,000 MCMC samples. We computed the TVD under a wide range of tolerances to find the tolerance for which each algorithm had the best performance (*i.e.*, minimum TVD). As shown in Figure 1, A and C, ABC-PaSS produced a more accurate estimate for *μ* than ABC-MCMC. The two algorithms had similar performance when estimating $\sigma^2$ (Figure 1, B and D).
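The total variation distance between an estimated and a true posterior density can be approximated on a grid; a minimal sketch follows, where the grid, bandwidth choices, and the standard-normal "true" density are purely illustrative.

```python
import numpy as np

def tvd(p, q, dx):
    # Total variation distance between two densities on a common grid.
    return 0.5 * np.sum(np.abs(p - q)) * dx

# Hypothetical usage: compare a density estimated from 10,000 samples
# with a known true density (here a standard normal).
rng = np.random.default_rng(2)
grid = np.linspace(-4.0, 4.0, 401)
dx = grid[1] - grid[0]
true_pdf = np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)
samples = rng.normal(size=10_000)
edges = np.append(grid - dx / 2.0, grid[-1] + dx / 2.0)
est_pdf, _ = np.histogram(samples, bins=edges, density=True)
distance = tvd(est_pdf, true_pdf, dx)
```

A TVD of 0 indicates identical densities and a TVD of 1 indicates non-overlapping densities, which is what makes it a convenient single-number accuracy measure across tolerances.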

The normal distribution toy model, although simple, is quite illustrative of the nature of the improvement in performance by using ABC-PaSS over ABC-MCMC. Indeed, our results demonstrate that the slight reduction of the summary statistics space by ignoring a single uninformative statistic when updating *μ* already results in a noticeable improvement in estimation accuracy. This improvement would not be possible to attain with classic dimension reduction techniques, such as partial least squares (PLS), since the information contained in $\bar{x}$ and $s^2$ is irreducible under ABC-MCMC.

### Toy model 2: GLM

We expect our approach to be particularly powerful for models of the exponential family, for which a small number of summary statistics per parameter are sufficient, regardless of sample size. To illustrate this, we next compared the performance of ABC-MCMC and ABC-PaSS under GLM models of increasing dimensionality *n*. For all models, we constructed the design matrix $\mathbf{C}$ such that all statistics are informative for all parameters, while retaining the total information on the individual parameters regardless of dimensionality (see *Materials and Methods*). For a GLM, a single linear function is a sufficient statistic for each associated parameter, and this function can easily be learned from a set of simulations, using standard regression approaches (see *Theorem 2* in the *Appendix*). Therefore, for ABC-MCMC, we used all *m* statistics, while for ABC-PaSS, we employed *Theorem 2* and used a single linear combination of statistics per parameter. As above, we assessed the performance of ABC-MCMC and ABC-PaSS by calculating the total variation distance (TVD) between the inferred and the true posterior distribution. We calculated the TVD for several tolerances to find the tolerance at which it was minimal for each algorithm (see Figure 2A for examples). Since in ABC-MCMC distances are calculated in the multidimensional statistics space, the optimal tolerance increased with higher dimensionality. This is not the case for ABC-PaSS, because distances are always calculated in one dimension only (Figure 2A).

We found that ABC-MCMC performance was good for low *n*, but worsened rapidly with increasing number of parameters, as expected from the corresponding increase in the dimensionality of statistics space (Figure 2B). For a GLM with 32 parameters, approximate posteriors obtained with ABC-MCMC differed only little from the prior (Figure 2B). In contrast, performance of ABC-PaSS was unaffected by dimensionality and was better than that of ABC-MCMC even in low dimensions (Figure 2B). These results support that by considering low-dimensional parameter-specific summary statistics under our framework, ABC inference remains feasible even under models of very high dimensionality, for which current ABC algorithms are not capable of producing meaningful estimates.

### Application: Inference of natural selection and demography

One of the major research problems in modern population genetics is the inference of natural selection and demographic history, ideally jointly (Crisci *et al.* 2012; Bank *et al.* 2014). One way to gain insight into these processes is by investigating how they affect allele frequency trajectories through time in populations, for instance under experimental evolution. Several methods have thus been developed to analyze allele trajectory data to infer both locus-specific selection coefficients (*s*) and the effective population size ($N_e$). The modeling framework of these methods assumes Wright–Fisher (WF) population dynamics in a hidden Markov setting to evaluate the likelihood of the parameters $N_e$ and *s* given the observed allele trajectories (Bollback *et al.* 2008; Malaspinas *et al.* 2012). In this setting, likelihood calculations are feasible, but very time-consuming, especially when considering many loci at the genome-wide scale (Foll *et al.* 2015).

To speed up calculations, Foll *et al.* (2015) developed an ABC method (WF-ABC), adopting the hierarchical ABC framework of Bazin *et al.* (2010). Specifically, WF-ABC first estimates $N_e$ based on statistics that are functions of all loci and then infers *s* for each locus individually under the inferred value of $N_e$. While WF-ABC easily scales to genome-wide data, it suffers from the unrealistic assumption of complete neutrality when inferring $N_e$, which potentially leads to biases in the inference.

Here we show that by employing ABC-PaSS, $N_e$ and locus-specific selection coefficients can be inferred jointly, which is not possible with ABC-MCMC due to the high dimensionality of the summary statistics, which is a direct function of the number of loci considered.

#### Finding sufficient statistics:

All ABC algorithms, including the ABC-PaSS algorithm introduced here, require statistics that are sufficient for estimating the parameters of a given model. As mentioned above, parameter-wise sufficient statistics as required by ABC-PaSS are trivial to find for distributions of the exponential family. Since many population genetics models do not follow such distributions, sufficient statistics are known only for the simplest models. The number of haplotypes segregating in a sample, for example, is a sufficient statistic for estimating the population-scaled mutation rate under Wright–Fisher equilibrium assumptions (Durrett 2008).

For more realistic models involving multiple populations or population size changes, only approximately sufficient statistics can be found. Choosing such statistics is not trivial, however, as too few statistics are insufficient to summarize the data, while too many statistics create an excessively large statistics space that worsens the approximation of the posterior (Beaumont *et al.* 2002; Wegmann *et al.* 2009; Csilléry *et al.* 2010). Often, such statistics are thus found empirically by applying dimensionality reduction techniques to a larger set of initially calculated statistics (Blum *et al.* 2013).

Fearnhead and Prangle (2012) suggested a method in which an initial set of simulations is used to fit a linear model, using ordinary least squares, that expresses each parameter as a function of the summary statistics $\mathbf{s}$. These functions are then used as statistics in subsequent ABC analysis. Thus, Fearnhead and Prangle's approach reduces the dimensionality of the statistics space to a single combination of statistics per parameter. However, the Pitman–Koopman–Darmois theorem states that for models that do not belong to the exponential family, the dimensionality of sufficient statistics must grow with increasing sample size, suggesting that multiple summary statistics are likely required in this case, as every locus carries independent information about the parameter $N_e$. A method similar in spirit but not limited to a single summary statistic per parameter is a partial least-squares (PLS) transformation (Wegmann *et al.* 2009), which has been used successfully in many ABC applications (*e.g.*, Veeramah *et al.* 2011; Chu *et al.* 2013; Dussex *et al.* 2014).

Here we chose to calculate the per-locus statistics proposed by Foll *et al.* (2015) and to then apply and empirically compare both methods of dimensionality reduction for this particular model. Before dimension reduction, however, we applied a multivariate Box–Cox transformation (Box and Cox 1964) to increase linearity between statistics and parameters, as suggested by Wegmann *et al.* (2009). To decide on the required number of PLS components, we performed a leave-one-out analysis implemented in the R package “PLS” (Mevik and Wehrens 2007). In line with the Pitman–Koopman–Darmois theorem, a small number (two) of PLS components was sufficient for *s*, but many more components contained information about $N_e$, for which many independent observations are available (Supplemental Material, Figure S1). However, the first PLS component alone already explained two-thirds of the total variance that can be explained with up to 100 components, suggesting that additional components add, besides information, also substantial noise. We thus chose to evaluate the accuracy of our inference with three different sets of summary statistics: (1) a single linear combination of summary statistics for each *s* and for $N_e$, chosen using ordinary least squares, as suggested by Fearnhead and Prangle (2012) (LC 1/1); (2) two PLS components for *s* and five PLS components for $N_e$, as suggested by the leave-one-out analysis (PLS 5/2); and (3) an intermediate set of one PLS component for *s* and three PLS components for $N_e$ (PLS 3/1).
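The Box–Cox step can be sketched as follows, with the exponent λ chosen by profile likelihood on a grid. This univariate sketch is a simplified stand-in for the multivariate procedure used in the study, and the log-normal input is a hypothetical example of a skewed summary statistic.

```python
import numpy as np

def boxcox(x, lambdas=np.linspace(-2.0, 2.0, 81)):
    # Box-Cox transform of a positive-valued statistic, with lambda chosen
    # by the profile log-likelihood of a normal model on the transformed scale.
    logx = np.log(x)
    best_ll, best = -np.inf, None
    for lam in lambdas:
        # lam near 0 corresponds to the log transform
        y = logx if abs(lam) < 1e-8 else (x**lam - 1.0) / lam
        ll = -0.5 * len(x) * np.log(y.var()) + (lam - 1.0) * logx.sum()
        if ll > best_ll:
            best_ll, best = ll, (y, lam)
    return best

# Hypothetical usage: a log-normally distributed summary statistic is
# mapped to an approximately normal (hence more linear) scale.
rng = np.random.default_rng(3)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=2000)
transformed, lam = boxcox(raw)
```

For a log-normal statistic, the selected λ should lie close to 0, recovering the log transform.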

#### Performance of ABC-PaSS in inferring selection and demography:

To examine the performance of ABC-PaSS under the WF model, we inferred $N_e$ and *s* on sets of 100 loci simulated with varying selection coefficients. We evaluated the estimation accuracy by comparing the estimated *vs.* the true values of the parameters over 25 replicate simulations, first using a single linear combination of summary statistics per parameter found using ordinary least squares (LC 1/1). As shown in Figure 3A, $N_e$ was estimated well over the whole range of values tested. Estimates for *s* were on average unbiased, and accuracy was, as expected, higher for larger $N_e$ (Figure 3B). Note that since the prior on *s* was uniform, these results imply that our approach estimates $N_e$ with high accuracy even when a majority of the simulated loci are under strong selection. Hence, our method allows us to relax the assumption of neutrality at most of the loci, which was necessary in previous studies (Foll *et al.* 2015).

We next introduced hyperparameters for the distribution of selection coefficients (the so-called DFE). Such hyperparameters are computationally cheap to estimate under our framework, as their updates can be done analytically and do not require simulations. Following previous work (Beisel *et al.* 2007; Martin and Lenormand 2008), we assumed that the distribution of the locus-specific *s* is realistically described by a truncated GPD with fixed location and parameters shape *χ* and scale *σ* (Figure S2).

We first evaluated the accuracy of estimating *χ* and *σ* when fixing the value of the other parameter and found that both parameters are well estimated under these conditions (Figure 3, C and D, respectively). Since truncated GPDs with different combinations of *χ* and *σ* can be very similar, these parameters are not always identifiable. This renders the accurate joint estimation of both parameters difficult (Figure S3, B and C). However, despite the reduced accuracy on the individual parameters, we found the overall shape of the GPD to be well recovered (Figure S3, D–F). Also, $N_e$ was estimated with high accuracy for all combinations of *χ* and *σ* (Figure S3A).

We then checked whether the accuracy of these estimates can be improved by using summary statistics of higher dimensionality. Specifically, we repeated these analyses with a high-dimensional set (PLS 5/2) consisting of the first five and the first two PLS components for $N_e$ and each *s*, respectively, as well as a set of intermediate dimensionality (PLS 3/1) consisting of the first three PLS components for $N_e$ and only the first PLS component for each *s*. Overall, all sets of summary statistics compared here resulted in very similar performance, as assessed both visually (compare Figure 3, Figure S4, and Figure S5 for LC 1/1, PLS 5/2, and PLS 3/1, respectively) and by calculating both the root mean square error (RMSE) and Pearson's correlation coefficient between true and inferred values (Table S1). Interestingly, the intermediate set (PLS 3/1) performed worst in all comparisons, while the differences between LC 1/1 and PLS 5/2 were very subtle, particularly when uniform priors were used on all *s* (simulation set 1; Table S1). However, in the presence of hyperparameters on *s*, results were more variable (simulation sets 2–4; Table S1), and we found the effective population size to be consistently overestimated when using high-dimensional summaries such as PLS 5/2 (simulation sets 2–4; Table S1). These results suggest that while our analysis is generally rather robust to the choice of summary statistics, the benefit of extra information added by additional summary statistics is offset by the increased noise in higher dimensions. We expect that the robustness of results to the choice of summary statistics will be model dependent and recommend that the performance of multiple dimension-reduction techniques be evaluated in future applications of ABC-PaSS, as we did here.

#### Analysis of influenza data:

We applied our approach to data from a previous study (Foll *et al.* 2014) in which cultured canine kidney cells infected with the influenza virus were subjected to serial transfers for several generations. In one experiment, the cells were treated with the drug Oseltamivir, and in a control experiment they were not treated with the drug. To obtain allele frequency trajectories of all sites of the influenza virus genome (13.5 kbp), samples were taken and sequenced every 13 generations with pooled population sequencing. The aim of our application was to identify which viral mutations rose in frequency during the experiment due to natural selection and which due to genetic drift, and to investigate the shape of the DFE for the control and drug-treated viral populations.

Following Foll *et al.* (2014), we filtered the raw data to contain loci for which sufficient data were available to calculate the summary statistics considered here (see *Materials and Methods*). There were 86 and 42 such loci for the control and drug-treated experiments, respectively (Figure S6).

We then employed ABC-PaSS to estimate *s* per locus and the parameters of the DFE, first using a single summary statistic per parameter (LC 1/1). We obtained a low estimate for the effective population size (posterior medians 350 for drug-treated and 250 for control influenza; Figure 4A), which is expected given the bottleneck that the cells were subjected to in each transfer. While we obtained similar estimates for the *χ* parameter for the drug-treated and for the control influenza (posterior medians 0.44 and 0.56, respectively), the *σ* parameter was estimated to be much higher for the drug-treated than for the control influenza (posterior medians 0.047 and 0.0071, respectively; Figure 4B). The resulting DFE was thus very different for the two conditions: the DFE for the drug-treated influenza had a much heavier tail than that for the control (Figure 4C). Posterior estimates of *s* per locus also indicated that the drug-treated influenza had more loci under strong positive selection than the control (19% *vs.* 3.5% of loci, respectively; Figure 4D and Figure S6). Almost identical results were obtained when using higher-dimensional summary statistics based on PLS components (Figure S7). These results indicate that the drug treatment placed the influenza population away from a fitness optimum, thus increasing the number of positively selected mutations with large effect sizes. Presumably these mutations confer resistance to the drug, thus helping influenza to reach a new fitness optimum.

Our results for influenza were qualitatively similar to those obtained by Foll *et al.* (2014). We obtained slightly larger estimates for the effective population size (350 *vs.* 226 for drug-treated and 250 *vs.* 176 for control influenza). Our estimates for the parameters of the GPD were substantially different from those of Foll *et al.* (2014) but resulted in qualitatively similar overall shapes of the DFE for both the drug-treated and the control experiments. These results underline the applicability of our method to a high-dimensional problem. In contrast to Foll *et al.* (2014), who performed estimations in a three-step approach combining a moment-based estimator for the effective population size, ABC for *s*, and a maximum-likelihood approach for the GPD, our Bayesian framework allowed us to perform joint estimation and to obtain posterior distributions for all parameters in a single step.

## Discussion

Due to the difficulty of finding analytically tractable likelihood solutions, statistical inference is often limited to models that make substantial approximations of reality. To address this problem, so-called likelihood-free approaches have been introduced that bypass the analytical evaluation of the likelihood function with computer simulations. While full-likelihood solutions generally have more power, likelihood-free methods have been used in many fields of science to overcome undesired model assumptions.

Here we developed and implemented a novel likelihood-free MCMC inference framework that scales naturally to high dimensions. This framework takes advantage of the observation that the information about one model parameter is often contained in a subset of the data, by integrating two key innovations: First, only a single parameter is updated at a time. Second, each update is accepted or rejected based on a subset of summary statistics sufficient for this parameter. We proved that this MCMC variant converges to the true joint posterior distribution under the standard assumptions.
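To make these two innovations concrete, here is a minimal toy sketch of a one-parameter-at-a-time likelihood-free MCMC in Python. It is not the implementation used in this work: it assumes a uniform prior, a symmetric Gaussian proposal, and a simple tolerance-based acceptance rule, and the toy model (statistic *i* is the mean of 20 draws from a normal centered on parameter *i*) is illustrative only.

```python
import random
import statistics

def abc_pass(s_obs, simulate_stat, n_iter=20000,
             prior=(-10.0, 10.0), step=0.5, eps=0.2):
    """Toy one-parameter-at-a-time likelihood-free MCMC.

    s_obs[i]            -- observed statistic (approximately) sufficient for parameter i
    simulate_stat(i, t) -- simulates that statistic under the parameter vector t
    """
    lo, hi = prior
    theta = list(s_obs)            # start near the observed statistics
    chain = []
    for it in range(n_iter):
        i = it % len(theta)        # innovation 1: update a single parameter per iteration
        prop = theta[i] + random.gauss(0.0, step)
        if lo <= prop <= hi:       # uniform prior + symmetric proposal => MH ratio is 1
            cand = theta[:]
            cand[i] = prop
            # innovation 2: accept iff the simulated statistic for parameter i
            # falls within eps of its observed (low-dimensional) counterpart
            if abs(simulate_stat(i, cand) - s_obs[i]) < eps:
                theta = cand
        chain.append(theta[:])
    return chain

# Toy model: statistic i is the mean of 20 draws from N(theta_i, 1).
def sim(i, t):
    return statistics.fmean(random.gauss(t[i], 1.0) for _ in range(20))

random.seed(42)
chain = abc_pass([0.0, 3.0], sim)
# Posterior means from the second half of the chain:
est = [statistics.fmean(x[i] for x in chain[len(chain) // 2:]) for i in range(2)]
```

Because each acceptance decision compares only one low-dimensional statistic, the acceptance rate does not collapse as more parameters are added, which is the key property exploited by the framework.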

Since simulations are accepted based on summary statistics of lower dimensionality, the algorithm proposed here will have a higher acceptance rate than other ABC approaches for the same accuracy and hence requires fewer simulations. This is particularly relevant for cases in which the simulation step is computationally challenging, such as for population genetic models that are spatially explicit (Ray *et al.* 2010) or require forward-in-time simulations (as opposed to coalescent simulations) (Hernandez 2008; Messer 2013).

We demonstrated the power of our framework through applications to multiple problems. First, our framework led to more accurate inference of the mean and standard deviation of a normal distribution than standard likelihood-free MCMC, suggesting that our framework is already competitive in models of low dimensionality. In high dimensions, the benefit was even more apparent. When applied to the problem of inferring parameters of a GLM, for instance, we found our framework to be insensitive to the dimensionality, resulting in a performance similar to that of analytical solutions both in low and in high dimensions. Finally, we used our framework to address the difficult and high-dimensional problem of inferring demography and selection jointly from genetic data. Specifically, through simulations and an application to experimental data, we showed that our framework enables the accurate joint estimation of the effective population size, the distribution of fitness effects of segregating mutations, and locus-specific selection coefficients from allele frequency time-series data.

More generally, we envision that any hierarchical model with genome-wide and locus-specific parameters would be well suited for application of ABC-PaSS. Such models may include hyperparameters like genome-wide mutation and recombination rates or parameters regarding the demographic history, along with locus-specific parameters that allow for between-locus variation, for instance in the intensity of selection, mutation, recombination, or migration rates. Among these, the prospect of jointly inferring selection and demographic history even from data of a single time point is particularly relevant, since it allows for the relaxation of a frequently used yet unrealistic assumption that neutral loci can be identified *a priori*. In addition, such a joint estimation allows for hierarchical parameters to aggregate information across individual loci to increase estimation power, for instance for the inference of locus-specific selection coefficients by also jointly inferring parameters of the DFE, as we did here.

## Acknowledgments

We thank Pablo Duchen and the Laurent Excoffier group for comments and discussion on this work. We also thank two anonymous reviewers whose constructive criticism and comments helped to improve this work significantly. This study was supported by Swiss National Foundation grant 31003A_149920 (to D.W.).

## Appendix

*Proof for Theorem 1*. The transition kernel $P(\boldsymbol{\theta}, \boldsymbol{\theta}')$ associated with the Markov chain is zero if $\boldsymbol{\theta}$ and $\boldsymbol{\theta}'$ differ in more than one component. If $\theta_j = \theta_j'$ for all $j \neq i$ for some index $i$, then we have

$$P(\boldsymbol{\theta}, \boldsymbol{\theta}') = q_i(\theta_i, \theta_i')\, h_i(\boldsymbol{\theta}, \boldsymbol{\theta}') + \delta_{\boldsymbol{\theta}}(\boldsymbol{\theta}') \left(1 - \int q_i(\theta_i, u)\, h_i(\boldsymbol{\theta}, \boldsymbol{\theta}_u)\, du \right), \tag{A1}$$

where $\delta_{\boldsymbol{\theta}}$ is the Dirac mass in $\boldsymbol{\theta}$, $\boldsymbol{\theta}_u$ denotes $\boldsymbol{\theta}$ with its $i$th component replaced by $u$, and

$$h_i(\boldsymbol{\theta}, \boldsymbol{\theta}') = \min\left\{1,\ \frac{\pi(\boldsymbol{\theta}')\, q_i(\theta_i', \theta_i)\, \mathbb{P}(\mathbf{s}_i \mid \boldsymbol{\theta}')}{\pi(\boldsymbol{\theta})\, q_i(\theta_i, \theta_i')\, \mathbb{P}(\mathbf{s}_i \mid \boldsymbol{\theta})}\right\}$$

is the acceptance probability of the proposed update. We may assume without loss of generality that

$$\pi(\boldsymbol{\theta}')\, q_i(\theta_i', \theta_i)\, \mathbb{P}(\mathbf{s}_i \mid \boldsymbol{\theta}') \leq \pi(\boldsymbol{\theta})\, q_i(\theta_i, \theta_i')\, \mathbb{P}(\mathbf{s}_i \mid \boldsymbol{\theta}),$$

so that $h_i(\boldsymbol{\theta}', \boldsymbol{\theta}) = 1$. From (1) we conclude that, since $\mathbf{s}_i$ is sufficient for $\theta_i$ and $\boldsymbol{\theta}$ and $\boldsymbol{\theta}'$ agree in all components but the $i$th,

$$\frac{\mathbb{P}(\mathbf{s} \mid \boldsymbol{\theta}')}{\mathbb{P}(\mathbf{s} \mid \boldsymbol{\theta})} = \frac{\mathbb{P}(\mathbf{s}_i \mid \boldsymbol{\theta}')}{\mathbb{P}(\mathbf{s}_i \mid \boldsymbol{\theta})}.$$

Setting $h_i(\boldsymbol{\theta}', \boldsymbol{\theta}) = 1$ and keeping in mind that $\pi(\boldsymbol{\theta} \mid \mathbf{s}) \propto \pi(\boldsymbol{\theta})\, \mathbb{P}(\mathbf{s} \mid \boldsymbol{\theta})$, we get

$$\pi(\boldsymbol{\theta} \mid \mathbf{s})\, q_i(\theta_i, \theta_i')\, h_i(\boldsymbol{\theta}, \boldsymbol{\theta}') = \pi(\boldsymbol{\theta}' \mid \mathbf{s})\, q_i(\theta_i', \theta_i)\, h_i(\boldsymbol{\theta}', \boldsymbol{\theta}).$$

From this and Equation A1 it follows readily that the transition kernel satisfies the detailed balance equation

$$\pi(\boldsymbol{\theta} \mid \mathbf{s})\, P(\boldsymbol{\theta}, \boldsymbol{\theta}') = \pi(\boldsymbol{\theta}' \mid \mathbf{s})\, P(\boldsymbol{\theta}', \boldsymbol{\theta})$$

of the Metropolis–Hastings chain. □

Suppose that, given the parameters $\boldsymbol{\theta} = (\theta_1, \dots, \theta_n)^T$, the distribution of the statistics vector **s** is multivariate normal according to the GLM

$$\mathbf{s} = \mathbf{C}\boldsymbol{\theta} + \mathbf{c}_0 + \boldsymbol{\epsilon}, \qquad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}),$$

where $\mathbf{C}$ is the design matrix, $\mathbf{c}_0$ a constant vector, and $\boldsymbol{\Sigma}$ the covariance matrix of the error term. If the prior distribution of the parameter vector is $\boldsymbol{\theta} \sim \mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma}_\theta)$, then the posterior distribution of $\boldsymbol{\theta}$ given $\mathbf{s}$ is

$$\boldsymbol{\theta} \mid \mathbf{s} \sim \mathcal{N}(\mathbf{m}_{\mathrm{post}}, \boldsymbol{\Sigma}_{\mathrm{post}}), \tag{A2}$$

with $\boldsymbol{\Sigma}_{\mathrm{post}} = \left(\boldsymbol{\Sigma}_\theta^{-1} + \mathbf{C}^T \boldsymbol{\Sigma}^{-1} \mathbf{C}\right)^{-1}$ and $\mathbf{m}_{\mathrm{post}} = \boldsymbol{\Sigma}_{\mathrm{post}}\left(\boldsymbol{\Sigma}_\theta^{-1}\mathbf{m} + \mathbf{C}^T\boldsymbol{\Sigma}^{-1}(\mathbf{s} - \mathbf{c}_0)\right)$ (see, *e.g.*, Leuenberger and Wegmann 2010). We have the following:

**Theorem 2.** *Let $\mathbf{c}_i$ be the ith column of* **C** *and $\boldsymbol{\nu}_i = \boldsymbol{\Sigma}^{-1}\mathbf{c}_i$. Moreover, let $\tau_i = \boldsymbol{\nu}_i^T \mathbf{s}$. Then $\tau_i$ is sufficient for the parameter $\theta_i$, and the collection of statistics $\boldsymbol{\tau} = (\tau_1, \dots, \tau_n)^T = \mathbf{C}^T \boldsymbol{\Sigma}^{-1} \mathbf{s}$ yields the same posterior* (A2) *as* $\mathbf{s}$.

In practice, the design matrix **C** is unknown. We can perform an initial set of simulations from which we can infer the linear relationship between parameters and statistics. A reasonable estimator for the sufficient statistic $\tau_i$ is then

$$\hat{\tau}_i = \hat{\boldsymbol{\nu}}_i^T \mathbf{s} \quad \text{with} \quad \hat{\boldsymbol{\nu}}_i = \hat{\boldsymbol{\Sigma}}^{-1}\hat{\mathbf{c}}_i, \tag{A3}$$

where $\hat{\mathbf{c}}_i$ for $i = 1, \dots, n$ and $\hat{\boldsymbol{\Sigma}}$ are the regression coefficients and the residual covariances estimated, for instance, using ordinary least squares.
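As a sketch of this procedure under stated assumptions (a made-up toy linear model, `numpy` for the linear algebra; dimensions and names are ours), one could estimate **C**, $\mathbf{c}_0$, and $\boldsymbol{\Sigma}$ from pilot simulations by ordinary least squares and then form the statistics of Equation A3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pilot simulations: draw parameters from the prior and simulate statistics.
n_sim, n_par, n_stat = 2000, 2, 5
theta = rng.uniform(-1.0, 1.0, size=(n_sim, n_par))

# Hypothetical linear model s = C theta + c0 + eps, used only to generate toy data.
C_true = rng.normal(size=(n_stat, n_par))
c0 = rng.normal(size=n_stat)
s = theta @ C_true.T + c0 + rng.normal(scale=0.1, size=(n_sim, n_stat))

# Estimate C and c0 by ordinary least squares: regress each statistic on theta.
X = np.column_stack([np.ones(n_sim), theta])     # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, s, rcond=None)     # shape (1 + n_par, n_stat)
c0_hat, C_hat = beta[0], beta[1:].T              # C_hat has shape (n_stat, n_par)

# Residual covariance of the statistics around the fitted linear model.
resid = s - X @ beta
Sigma_hat = np.cov(resid, rowvar=False)

# Estimated sufficient statistic for parameter i: tau_i = c_i^T Sigma^{-1} s.
Sigma_inv = np.linalg.inv(Sigma_hat)
tau = s @ Sigma_inv @ C_hat                      # column i holds tau_i for each simulation
```

Each column of `tau` is the one-dimensional linear combination of statistics used to accept or reject updates of the corresponding parameter.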

*Proof for Theorem 2*. It is easy to check that the mean of $\tau_i$ is $\mathbf{c}_i^T \boldsymbol{\Sigma}^{-1}(\mathbf{C}\boldsymbol{\theta} + \mathbf{c}_0)$ and its variance is $\mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i$. The covariance between **s** and $\tau_i$ is given by

$$\mathrm{Cov}(\mathbf{s}, \tau_i) = \boldsymbol{\Sigma}\,\boldsymbol{\Sigma}^{-1}\mathbf{c}_i = \mathbf{c}_i.$$

Consider the conditional multinormal distribution of $\mathbf{s}$ given $\tau_i$. Using the well-known formula for the variance and the mean of a conditional multivariate normal (see, *e.g.*, Bilodeau and Brenner 2008), we get that the covariance of $\mathbf{s} \mid \tau_i$ is given by

$$\boldsymbol{\Sigma} - \mathbf{c}_i \left(\mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i\right)^{-1} \mathbf{c}_i^T$$

and thus is independent of $\boldsymbol{\theta}$. The mean of $\mathbf{s} \mid \tau_i$ is

$$\mathbf{C}\boldsymbol{\theta} + \mathbf{c}_0 + \mathbf{c}_i \left(\mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i\right)^{-1} \left(\tau_i - \mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} (\mathbf{C}\boldsymbol{\theta} + \mathbf{c}_0)\right).$$

The part of this expression depending on $\theta_i$ is

$$\mathbf{c}_i \theta_i - \mathbf{c}_i \left(\mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i\right)^{-1} \mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i \theta_i.$$

Inserting $\left(\mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i\right)^{-1} \mathbf{c}_i^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_i = 1$, we obtain zero. Thus the distribution of $\mathbf{s}$ given $\tau_i$ is independent of $\theta_i$, and hence $\tau_i$ is sufficient for $\theta_i$.

To prove the second part of *Theorem 2*, we observe that $\boldsymbol{\tau} = \mathbf{C}^T \boldsymbol{\Sigma}^{-1} \mathbf{s}$ is given by the linear model

$$\boldsymbol{\tau} = \mathbf{D}\boldsymbol{\theta} + \mathbf{d}_0 + \boldsymbol{\eta}, \qquad \boldsymbol{\eta} \sim \mathcal{N}(\mathbf{0}, \mathbf{D}),$$

with $\mathbf{D} = \mathbf{C}^T \boldsymbol{\Sigma}^{-1} \mathbf{C}$ and $\mathbf{d}_0 = \mathbf{C}^T \boldsymbol{\Sigma}^{-1} \mathbf{c}_0$. Using (A2) for this model, we get for the posterior variance

$$\left(\boldsymbol{\Sigma}_\theta^{-1} + \mathbf{D}^T \mathbf{D}^{-1} \mathbf{D}\right)^{-1} = \left(\boldsymbol{\Sigma}_\theta^{-1} + \mathbf{C}^T \boldsymbol{\Sigma}^{-1} \mathbf{C}\right)^{-1} = \boldsymbol{\Sigma}_{\mathrm{post}}.$$

Similarly we see that the posterior mean is $\boldsymbol{\Sigma}_{\mathrm{post}}\left(\boldsymbol{\Sigma}_\theta^{-1}\mathbf{m} + \boldsymbol{\tau} - \mathbf{d}_0\right) = \mathbf{m}_{\mathrm{post}}$. □
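The equality of the two posteriors asserted by Theorem 2 can also be checked numerically. The following sketch (arbitrary made-up matrices, `numpy` assumed) computes the posterior moments of the GLM once from the full statistics vector $\mathbf{s}$ and once from the reduced statistics $\boldsymbol{\tau} = \mathbf{C}^T\boldsymbol{\Sigma}^{-1}\mathbf{s}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stat, n_par = 6, 3

# A hypothetical GLM s = C theta + c0 + eps, eps ~ N(0, Sigma), with
# normal prior theta ~ N(m, Sigma_theta); all quantities are made up.
C = rng.normal(size=(n_stat, n_par))
c0 = rng.normal(size=n_stat)
A = rng.normal(size=(n_stat, n_stat))
Sigma = A @ A.T + n_stat * np.eye(n_stat)        # positive definite error covariance
m = np.zeros(n_par)
Sigma_theta = np.eye(n_par)
s = rng.normal(size=n_stat)                      # an arbitrary "observed" s

Si = np.linalg.inv(Sigma)
St_i = np.linalg.inv(Sigma_theta)

# Posterior based on the full statistics vector s.
Sigma_post = np.linalg.inv(St_i + C.T @ Si @ C)
m_post = Sigma_post @ (St_i @ m + C.T @ Si @ (s - c0))

# Posterior based on tau = C^T Sigma^{-1} s, which follows the linear
# model tau = D theta + d0 + eta with D = C^T Sigma^{-1} C, eta ~ N(0, D).
D = C.T @ Si @ C
d0 = C.T @ Si @ c0
tau = C.T @ Si @ s
Di = np.linalg.inv(D)
Sigma_post_tau = np.linalg.inv(St_i + D.T @ Di @ D)
m_post_tau = Sigma_post_tau @ (St_i @ m + D.T @ Di @ (tau - d0))
```

Up to floating-point rounding, the two posterior covariance matrices and the two posterior means coincide, as the proof requires.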

## Footnotes

*Communicating editor: M. A. Beaumont*

Supplemental material is available online at www.genetics.org/lookup/suppl/doi:10.1534/genetics.116.187567/-/DC1.

- Received January 26, 2016.
- Accepted April 4, 2016.

- Copyright © 2016 by the Genetics Society of America