## Abstract

Genetic pleiotropy occurs when a single gene influences more than one trait. Detecting pleiotropy and understanding its causes can improve the biological understanding of a gene in multiple ways, yet current multivariate methods to evaluate pleiotropy test the null hypothesis that none of the traits are associated with a variant; departures from the null could be driven by just one associated trait. A formal test of pleiotropy should assume a null hypothesis that one or no traits are associated with a genetic variant. For the special case of two traits, one can construct this null hypothesis based on the intersection-union (IU) test, which rejects the null hypothesis only if the null hypotheses of no association for both traits are rejected. To allow for more than two traits, we developed a new likelihood-ratio test for pleiotropy. We then extended the testing framework to a sequential approach to test the null hypothesis that $k+1$ traits are associated, given that the null of $k$ associated traits was rejected. This provides a formal testing framework to determine the number of traits associated with a genetic variant, while accounting for correlations among the traits. By simulations, we illustrate the type I error rate and power of our new methods; describe how they are influenced by sample size, the number of traits, and the trait correlations; and apply the new methods to multivariate immune phenotypes in response to smallpox vaccination. Our new approach provides a quantitative assessment of pleiotropy, enhancing current analytic practice.

- constrained model
- likelihood-ratio test
- multivariate analysis
- seemingly unrelated regression
- sequential testing

GENETIC pleiotropy occurs when a single gene influences more than one trait. Detecting pleiotropy, and understanding its causes, can improve the biological understanding of a gene in multiple ways: (1) There is potential to expand understanding of the medical impact of a gene, such as in phenome-wide association studies (Denny *et al.* 2013); (2) the pharmacologic genetic target could affect multiple traits or diseases, allowing a drug developed for a disease to be repurposed for other diseases or suggesting that a toxicity should be monitored for multiple traits; and (3) joint analysis of multiple traits can increase accuracy of phenotype prediction (Maier *et al.* 2015). Yet, understanding pleiotropy can be challenging. A gene can be associated with more than one trait for many reasons, such as when a single genetic variant directly influences multiple traits or when different variants within a gene influence different traits. Alternatively, the association of a gene with some of the traits can be indirect, such as when a gene directly influences a trait, and that trait directly influences a second trait; the gene and the second trait are indirectly associated. The association of a gene with multiple traits can also result from spurious associations. One cause of spurious association is when subjects with more than one disease symptom are more likely ascertained for a study than if they had only one symptom—called Berkson's bias (Berkson 1946). A second cause is misclassification between two similar traits, a common problem for some psychiatric conditions. A third cause is when a genetic marker is in linkage disequilibrium with each of two causal loci (Gianola *et al.* 2015). These types of biases, and a thorough review of pleiotropy with numerous examples, are nicely summarized elsewhere (Solovieff *et al.* 2013). Despite the great deal of attention given to pleiotropy, most statistical tests do not formally test pleiotropy.
Rather, they test the null hypothesis that no trait is associated with a variant; rejecting this null could be due to just one associated trait, not a situation of pleiotropy. The aim of this report is to provide a formal statistical method to assess pleiotropy to infer the number of traits associated with a variant.

Statistical methods to evaluate pleiotropy have been developed from different angles, ranging from comparison of univariate marginal associations of a genetic variant with multiple traits, to multivariate analyses with simultaneous regression of all traits on a genetic variant, to reversed regression of a genetic variant on all traits. A brief survey of statistical methods for pleiotropy is provided here, with more details provided elsewhere (Schriner 2012; Yang and Wang 2012; Solovieff *et al.* 2013; Zhang *et al.* 2014). Univariate analyses are often based on comparison of variant-specific *P*-values across multiple traits. Although simple and feasible for meta-analyses, this approach ignores correlation among the traits and is based on *post hoc* analyses. More formal meta-analysis methods aggregate *P*-values to test whether any traits are associated with a variant, yet a significant association could be driven by just one trait. A slightly more sophisticated approach, also based on summary *P*-values, tests whether the distribution of *P*-values differs from the null distribution of no associations beyond those already detected (Cotsapas *et al.* 2011). Descriptions of additional univariate methods are given elsewhere (Solovieff *et al.* 2013).

Multivariate methods have been popular for quantitative traits. Although different statistical methods have been proposed, some of them result in the same statistical tests. The following three approaches to analyze quantitative traits result in the same *F*-statistic to test whether any of the traits are associated with a genetic variant: (1) simultaneous regression of all traits on a single variant [for example, using the statistical software R function lm(**Y** ∼ **g**), where **Y** is a matrix of traits and **g** a vector for a single genetic variant coded as 0, 1, 2 for the dose of the minor allele], (2) regression of the minor allele dose on all traits (lm(**g** ∼ **Y**)), and (3) canonical correlation of **Y** with **g** [using either plink.multivariate (Ferreira and Purcell 2009) or R code given in *Appendix A*]. The regression of the dose of the minor allele on all traits is a convenient approach, particularly if some of the traits are binary. A slightly different approach is to account for the categorical nature of the dose of the minor allele: Instead of using linear regression, use ordinal logistic regression of the dose on the traits [R MultiPhen package (O'Reilly *et al.* 2012)]. An advantage of this approach is that it allows for binary traits, unlike most methods that assume traits are quantitative with a multivariate normal distribution. Likewise, score tests for generalized linear models, based on estimating equations, have been developed as a way to simultaneously test multiple traits, some of which could be binary (Xu and Pan 2015). An approach somewhat between univariate and multivariate is based on reducing the dimension of the multiple traits by principal components (PC) and using a reduced set of PCs as either the dependent or the independent variables in regression.
A comparison of univariate and multivariate approaches found that multivariate methods based on multivariate normality {*e.g.*, canonical correlation, linear regression of traits on minor allele dose, reverse regression, MultiPhen, and Bayes methods [BIMBAM (Stephens 2013) and SNPTEST (Marchini *et al.* 2007)]} all had similar power and were generally more powerful than univariate methods (Galesloot *et al.* 2014).
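As a concrete check of the equivalence noted above, the following sketch (simulated data, numpy only; all values are hypothetical) verifies that the squared canonical correlation between a trait matrix **Y** and a single SNP dose **g** equals the *R*² from the reverse regression of **g** on **Y**:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
g = rng.binomial(2, 0.2, size=n).astype(float)   # SNP dose coded 0, 1, 2
Y = rng.normal(size=(n, p)) + 0.2 * g[:, None]   # traits weakly associated with g

# Center everything so intercepts can be ignored
gc = g - g.mean()
Yc = Y - Y.mean(axis=0)

# (1) R^2 from the reverse regression lm(g ~ Y)
beta, *_ = np.linalg.lstsq(Yc, gc, rcond=None)
fitted = Yc @ beta
r2_reverse = fitted.var() / gc.var()

# (2) Canonical correlation between Y and g; with a single variable on
# one side, this is the multiple correlation of g on Y
qy, _ = np.linalg.qr(Yc)
qg, _ = np.linalg.qr(gc[:, None])
can_cor = np.linalg.svd(qy.T @ qg, compute_uv=False)[0]

assert np.isclose(can_cor**2, r2_reverse)
```

With a single genetic predictor, Wilks' lambda from lm(**Y** ∼ **g**) is a monotone function of this same squared correlation, which is why the three approaches yield identical *F*-tests.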

The power advantage of multivariate over univariate methods occurs when the direction of the residual correlation is opposite from that of the genetic correlation induced by the causal variant (Liu *et al.* 2009; Galesloot *et al.* 2014). In addition to the methods discussed above, a few new approaches have been proposed, but have not yet been compared with others. An interesting approach is to scale the different traits by their standard deviation and then assume that the effect of a single-nucleotide polymorphism (SNP) is constant across all traits to construct a test of association with 1 d.f.—so-called "scaled marginal models" (Roy *et al.* 2003; Schifano *et al.* 2013). Finally, an approach based on kernel machine regression extended the sequence kernel association test (Wu *et al.* 2010) to multiple traits, providing a simultaneous test of multiple traits with multiple genetic variants in a genomic region (Maity *et al.* 2012).

A limitation of all current approaches is that they test whether any traits are associated with a genetic variant, and small *P*-values could be driven by the association of the genetic variant with a single trait. Hence, *post hoc* analyses are required to interpret the possibility of pleiotropy. This can be quite challenging when scaling up to a large number of genetic variants. Another significant challenge is to distinguish direct from indirect associations. When there is evidence that a secondary trait is associated with a genetic marker, and one wishes to distinguish whether the same genetic marker has a direct effect on a primary trait *vs.* an indirect effect, with the secondary trait acting as a mediator between the genetic marker and the primary trait, ideas from causal modeling have proved useful. For example, disentangling direct from indirect effects can be achieved by regressing the primary trait on the secondary trait, the genetic marker, and all other covariates shared between the primary and secondary traits. Results from this regression can be used to construct an adjusted primary trait that can then be used in subsequent analyses (Vansteelandt *et al.* 2009). Another approach is based on Bayesian methods to partition associations into unassociated, indirect, and direct associations. However, it is difficult to accurately classify the type of causal association, particularly when residual correlations are large (*e.g.*, it is difficult to discriminate between direct and indirect effects) (Stephens 2013).

The above methods are used to test whether a single genetic variant is associated with multiple traits. When scaling up to genome-wide data, it has been useful to use all the genetic markers to estimate the marker-predicted heritability of a trait. This has recently been extended to multiple traits to estimate pleiotropy as the genetic correlation of multiple traits. Mixed models are used to partition the phenotype correlations into genetic correlation (*i.e.*, correlation of polygenic total genetic values) and environmental correlation (Korte *et al.* 2012; Lee *et al.* 2012; Zhou and Stephens 2014; Furlotte and Eskin 2015). Although this approach does not evaluate whether particular SNPs or particular genomic regions are the cause for phenotype correlations, it has the potential to guide design of studies that focus on pleiotropy. For example, the correlation of two phenotypes can be partitioned as $\rho_P = h_1 h_2 \rho_G + e_1 e_2 \rho_E$, where $h_i^2$ is the heritability of trait *i*, $\rho_G$ is the genetic correlation, $e_i^2 = 1 - h_i^2$, and $\rho_E$ is the environmental correlation (Falconer and Mackay 1996, p. 314). Heritability in the narrow sense is the percentage of the variance of the trait explained by additive genetic factors. This illustrates that if both traits have low heritability, the phenotype correlation is primarily due to environmental correlation (and nonadditive genetic effects that are missed by $h^2$), implying that large sample sizes would be needed to test pleiotropy when there are small genetic effects.
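A numerical illustration of this partition (hypothetical values, not from any study): with low heritabilities of 0.1 for both traits, the environmental term dominates the phenotype correlation.

```python
import math

# Hypothetical values for two traits (illustration only)
h2_1, h2_2 = 0.1, 0.1      # narrow-sense heritabilities h_i^2
rho_g, rho_e = 0.5, 0.2    # genetic and environmental correlations

h1, h2 = math.sqrt(h2_1), math.sqrt(h2_2)
e1, e2 = math.sqrt(1 - h2_1), math.sqrt(1 - h2_2)

# Falconer & Mackay partition of the phenotype correlation
rho_p = h1 * h2 * rho_g + e1 * e2 * rho_e
print(round(rho_p, 3))  # 0.23: genetic term contributes only 0.05
```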

We have emphasized that current methods to evaluate pleiotropy do not perform a formal test of the null hypothesis of no pleiotropy. For the special case of two traits, one can construct a null hypothesis of no pleiotropy based on the intersection-union (IU) test (Silvapulle and Sen 2004). Consider the regression equation $y_j = \beta_{0j} + g\beta_j + \epsilon_j$, where $y_j$ is the vector of values for the *j*th trait, $\beta_{0j}$ is the intercept, $\beta_j$ is the slope association parameter of interest, $g$ is the vector of doses for the minor allele, and $\epsilon_j$ is a vector of residuals. The union null hypothesis is $H_0\colon \beta_1 = 0$ or $\beta_2 = 0$, and the intersection alternative hypothesis is $H_A\colon \beta_1 \neq 0$ and $\beta_2 \neq 0$. Testing each $\beta_j$ at a desired type I error rate, say $\alpha$, the null is rejected only if both tests reject. There is no need to correct for multiple testing, because the type I error rate is not inflated by this procedure. But this approach can be conservative, particularly if the two tests are uncorrelated. The IU test can be extended to $p$ traits, but rejection of the null would occur only when all $p$ tests are significant at the specified $\alpha$. For our situation, we wish to reject the null if at least two of the $p$ tests reject. One approach would be to apply the IU test to each pair of traits and reject the null if at least one of the IU tests rejects. But this would entail many pairs of tests, and for this situation one would need to correct for testing multiple pairs. Bonferroni correction would lead to an overly conservative test.
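A minimal sketch of the two-trait IU test, assuming quantitative traits and simple linear regression (simulated data; the function name and setup are illustrative, not taken from any package):

```python
import numpy as np
from scipy import stats

def iu_test_two_traits(y1, y2, g, alpha=0.05):
    """Intersection-union test of pleiotropy for two quantitative traits.

    Rejects the union null (beta1 = 0 or beta2 = 0) only if the
    per-trait tests of association with g both reject at level alpha.
    """
    pvals = []
    for y in (y1, y2):
        slope, intercept, r, p, se = stats.linregress(g, y)
        pvals.append(p)
    # No multiplicity correction needed: the IU test is level-alpha by construction
    return max(pvals) < alpha, pvals

rng = np.random.default_rng(1)
n = 1000
g = rng.binomial(2, 0.2, n).astype(float)
y1 = 0.3 * g + rng.normal(size=n)   # associated trait
y2 = rng.normal(size=n)             # unassociated trait
reject, pvals = iu_test_two_traits(y1, y2, g)
print(reject, pvals)
```

Because only one trait is associated in this simulated example, the IU test should usually fail to reject, illustrating the intended null behavior.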

Because of current limitations, we developed a likelihood-ratio test for testing the null hypothesis of no pleiotropy—the null hypothesis that one or no traits are associated with a genetic variant *vs.* the alternative hypothesis that two or more traits are associated. We then extended the testing framework to test the null hypothesis that $k$ or fewer traits are associated *vs.* the alternative hypothesis that more than $k$ traits are associated ($k = 1, \ldots, p - 1$). By this generalization, we propose sequential testing of the null hypothesis that $k+1$ traits are associated, given that the null hypothesis of $k$ associated traits was rejected. This sequential approach provides a refined way to evaluate how many traits, and which traits, are associated with a genetic variant, accounting for correlation among the traits and possibly adjusting for covariates that could differ across the traits.

## Methods

### Likelihood-ratio test of pleiotropy: null of one or fewer traits

Suppose that *p* traits are measured on each of *n* subjects. Let $y_j$ denote the vector of measures on the *j*th trait for the *n* subjects. Assume that each trait is modeled by linear regression, denoted

$$y_j = g\beta_j + \epsilon_j,$$

where $g$ is the vector of doses of the minor allele for the *n* subjects. Also assume that all $y_j$ and $g$ are centered, so intercepts can be ignored. For simplicity of presentation, we ignore adjusting covariates, but our methods are general and allow for trait-specific covariates. By stacking vectors, we can express the model as

$$Y = X\beta + \epsilon,$$

where $Y = (y_1', \ldots, y_p')'$, $X = I_p \otimes g$, and $\beta = (\beta_1, \ldots, \beta_p)'$. The error term $\epsilon \sim N(0, \Sigma \otimes I)$, where $I$ is an identity matrix, $\otimes$ is the Kronecker product, and the $p \times p$ matrix $\Sigma$ is the covariance matrix for the within-subject covariances of the errors. Under this model, the log-likelihood function of $\beta$ is given by

$$l(\beta) = -\tfrac{1}{2}\,(Y - X\beta)'(\Sigma^{-1} \otimes I)(Y - X\beta) + \text{constant}.$$

Suppose that the covariance $\Sigma$ is known; otherwise, we can obtain a consistent estimate by maximum-likelihood estimation. For example, we can estimate $\Sigma$ by using methods from seemingly unrelated regression, an approach called feasible generalized least squares. Separate ordinary linear regression for each trait can be used to obtain residuals to estimate $\hat\Sigma$, and then this is used in the generalized least-squares (GLS) solution,

$$\hat\beta = \left[X'(\hat\Sigma^{-1} \otimes I)X\right]^{-1} X'(\hat\Sigma^{-1} \otimes I)Y.$$

Note that feasible generalized least squares is asymptotically equivalent to maximum-likelihood estimation (MLE). There are two special cases when separate ordinary regressions and GLS result in the same solution: (1) when $\Sigma$ is a diagonal matrix and (2) when the regressors in $X$ are the same for all traits. Hence, for the case where each trait is regressed on the same $g$, without additional adjusting covariates, separate ordinary least-squares regression and GLS give the same results. The covariance matrix of the residuals then provides a consistent estimate of $\Sigma$. The Cholesky decomposition of $\Sigma$ is $\Sigma = LL'$, where $L$ is a lower-triangular matrix. We then decorrelate the data by $Y^* = (L^{-1} \otimes I)Y$ and $X^* = (L^{-1} \otimes I)X$ to transform the model to

$$Y^* = X^*\beta + \epsilon^*,$$

where $\epsilon^* \sim N(0, I)$, which has log-likelihood

$$l(\beta) = -\tfrac{1}{2}\,(Y^* - X^*\beta)'(Y^* - X^*\beta) + \text{constant}.$$

Based on this log-likelihood, we derived the likelihood-ratio test (LRT) to test the null hypothesis of no pleiotropy: one or no traits are associated with a genetic variant. Below we outline how to compute the LRT and provide details of the derivations in *Appendix B*.

The null hypothesis of no pleiotropy can be expressed as the union of the following hypotheses,

$$H_{0k}\colon \beta_j = 0 \text{ for all } j \neq k, \qquad k = 0, 1, \ldots, p.$$

Note that $H_{00}$ represents all $\beta_j = 0$, while for $k \geq 1$, $H_{0k}$ allows $\beta_k \neq 0$ while all other $\beta_j = 0$. To represent these $p + 1$ hypotheses, we use constraint matrices. Let $C_k$ be a matrix such that $C_k\beta = 0$ tests whether all $\beta_j = 0$ for $j \neq k$. For $k = 0$, this is the usual multivariate test; in this case, $C_0$ is the identity matrix of dimension $p$. To construct $C_k$ for $k \geq 1$, create an identity matrix of dimension $p$ and then remove the *k*th row. This results in a $(p-1) \times p$ matrix. Then, the null hypothesis is equivalent to

$$H_0\colon C_k\beta = 0 \text{ for at least one } k \in \{0, 1, \ldots, p\}.$$

To construct the LRT, center $Y$ and $g$ about their means, use ordinary least squares to estimate $\hat\beta$, use the residuals to estimate $\hat\Sigma$, and then use $\hat\Sigma$ to decorrelate $Y$ and $X$ according to $Y^* = (\hat{L}^{-1} \otimes I)Y$ and $X^* = (\hat{L}^{-1} \otimes I)X$, where $\hat\Sigma = \hat{L}\hat{L}'$. Then, for each $k$, compute

$$\mathrm{lrt}_k = \min_{\beta\colon C_k\beta = 0}\,(Y^* - X^*\beta)'(Y^* - X^*\beta) \;-\; (Y^* - X^*\hat\beta)'(Y^* - X^*\hat\beta).$$

An alternative way to express $\mathrm{lrt}_k$ is the squared norm between the fitted values based on the ordinary least-squares estimates, $X^*\hat\beta$, and the fitted values based on the constrained estimates, $X^*\tilde\beta_k$ (see *Appendix B*):

$$\mathrm{lrt}_k = \lVert X^*\hat\beta - X^*\tilde\beta_k \rVert^2.$$
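The constraint matrices described above are straightforward to build; a small numpy helper following that construction (the indexing convention, with $k = 0$ for the all-zero hypothesis, is ours):

```python
import numpy as np

def constraint_matrix(p, k):
    """Return C_k encoding H0k: beta_j = 0 for all j != k.

    k = 0 returns the p x p identity (tests all betas zero); for
    k >= 1, the identity with the kth row removed, so C_k @ beta = 0
    leaves beta_k unconstrained.
    """
    if k == 0:
        return np.eye(p)
    return np.delete(np.eye(p), k - 1, axis=0)

C2 = constraint_matrix(4, 2)   # leaves beta_2 free
print(C2.shape)                # (3, 4)
beta = np.array([0.0, 0.7, 0.0, 0.0])   # only beta_2 nonzero
print(np.allclose(C2 @ beta, 0))        # True: H02 is satisfied
```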

As shown in *Appendix B*, the LRT is

$$\mathrm{lrt} = \min_{k}\, \mathrm{lrt}_k.$$

Because $\mathrm{lrt}_k$ is based on the sum of squared differences of the fitted values between the unconstrained and constrained models, for a correctly specified constrained model with $k \geq 1$, $\mathrm{lrt}_k$ has a $\chi^2_{p-1}$ distribution. But the distribution of $\mathrm{lrt}$ is more complicated. The statistic has two different asymptotic distributions, depending on whether $\beta = 0$ or not. When $\beta = 0$, the asymptotic distribution of each $\mathrm{lrt}_k$ is a $\chi^2_{p-1}$ distribution, yet the distribution of the minimum of them, $\min_{1 \leq k \leq p} \mathrm{lrt}_k$, is unknown. Alternatively, when $\beta = 0$ we can use the commonly used $\chi^2_p$ test for the null hypothesis that all $\beta_j = 0$. This motivates us to perform the test in two stages. The first stage is to just test $H_{00}\colon \beta = 0$, using $\mathrm{lrt}_0$ as the test statistic, so we reject $H_{00}$ if $\mathrm{lrt}_0 \geq \chi^2_{p,1-\alpha}$, where $\chi^2_{p,1-\alpha}$ is the $(1-\alpha)$ quantile of the $\chi^2$ distribution with $p$ d.f. If $H_{00}$ cannot be rejected, then the null hypothesis of no pleiotropy cannot be rejected. If $H_{00}$ is rejected, we turn to the second stage to test the null hypothesis that one $H_{0k}$ holds for $k = 1, \ldots, p$. For this we ignore $\mathrm{lrt}_0$ and use the test statistic

$$t = \min_{1 \leq k \leq p} \mathrm{lrt}_k.$$

Since $\mathrm{lrt}_k \sim \chi^2_{p-1}$ for a correctly specified constrained model, we reject the null hypothesis that only one $H_{0k}$ holds for $k = 1, \ldots, p$ if $t \geq \chi^2_{p-1,1-\alpha}$. Then, the null hypothesis of no pleiotropy is rejected only if both $H_{00}$ is rejected and the null hypothesis that only one $H_{0k}$ holds is rejected.
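Putting the pieces together, here is an illustrative numpy/scipy sketch of the two-stage test under the simplifications used above (a single centered SNP dose, no adjusting covariates, residual covariance estimated from per-trait OLS). It is a sketch of the procedure, not the pleio package implementation:

```python
import numpy as np
from scipy import stats

def pleiotropy_lrt(Y, g, alpha=0.05):
    """Two-stage likelihood-ratio test of pleiotropy (illustrative sketch).

    Stage 1 tests H00 (all betas zero) against chi^2_p; if rejected,
    stage 2 tests the composite null that at most one beta is nonzero
    using min_k lrt_k against chi^2_{p-1}.
    """
    n, p = Y.shape
    g = g - g.mean()
    Y = Y - Y.mean(axis=0)

    # Per-trait OLS (same as GLS here, since the regressor is shared)
    bhat = (g @ Y) / (g @ g)
    resid = Y - np.outer(g, bhat)
    Sigma = resid.T @ resid / n              # residual covariance estimate

    # Decorrelate: Y* = Y (L^{-1})', with Sigma = L L'
    L = np.linalg.cholesky(Sigma)
    Linv = np.linalg.inv(L)
    Ystar = Y @ Linv.T
    ystack = Ystar.T.reshape(-1)             # stack traits: (y1*', ..., yp*')'

    # Unconstrained fit: project each decorrelated trait on g
    gamma = (g @ Ystar) / (g @ g)
    full_fit = np.outer(g, gamma).T.reshape(-1)

    # Stage 1: lrt_0 tests all betas zero (chi^2 with p d.f.)
    lrt0 = full_fit @ full_fit
    if lrt0 < stats.chi2.ppf(1 - alpha, p):
        return False, lrt0, None

    # Stage 2: under H0k, only beta_k is free; its (decorrelated, stacked)
    # design vector is kron(column k of L^{-1}, g)
    lrts = []
    for k in range(p):
        xk = np.kron(Linv[:, k], g)
        fit_k = xk * (xk @ ystack) / (xk @ xk)
        diff = full_fit - fit_k              # squared norm gives lrt_k
        lrts.append(diff @ diff)
    t = min(lrts)
    reject = t >= stats.chi2.ppf(1 - alpha, p - 1)
    return reject, lrt0, t
```

The function returns the pleiotropy decision along with the stage-1 and stage-2 statistics, so the two thresholds can be inspected separately.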

To provide intuition why $t$ has a large-sample $\chi^2$ distribution with $(p - 1)$ d.f. when only one $\beta_j$ differs from zero, while all others equal zero, we present an example in Figure 1. In this example, a single $\beta_k$ is nonzero and all other $\beta_j$ equal zero. As shown in *Appendix B* (*Corollary 1*), the distribution of $\mathrm{lrt}_k$ for a correctly specified model is $\chi^2_{p-1}$. In contrast, the incorrect models result in arbitrarily large values of $\mathrm{lrt}_k$ (see *Corollary 2* in *Appendix B*). This means that the $\mathrm{lrt}_k$ for the correctly specified model will be the minimum, and hence $t \sim \chi^2_{p-1}$.

### General likelihood-ratio sequential testing: null of *K* associated traits

The above sequential approach is based on testing the null hypothesis $H_{00}$ and then, if this rejects, turning to the second stage to test the null hypothesis that only one $H_{0k}$ holds for $k = 1, \ldots, p$. The advantage of this approach is that if $H_{00}$ is rejected and the null hypothesis that only one $H_{0k}$ holds is accepted, we can conclude that there is only one nonzero $\beta_j$. But if the null hypothesis that only one $H_{0k}$ holds is rejected, we cannot make a firm conclusion about the number of traits associated with a genetic variant. To provide a more rigorous testing framework, we extended our approach to sequentially test the null hypothesis that a specified number of $\beta_j$'s are nonzero. So, if the null hypothesis that $k$ $\beta_j$'s are nonzero is rejected, but the null hypothesis that $k+1$ $\beta_j$'s are nonzero is accepted, we can conclude that there are $k+1$ traits associated with a genetic variant. Furthermore, because the sequential testing is based on a likelihood-ratio framework, evaluating all possible combinations of nonzero $\beta_j$'s, the combination that fails to reject the null hypothesis provides evidence of which traits are associated with the genetic variant. The details of the statistical procedures of this general sequential testing method are provided in *Appendix B*, as well as a proof that the type I error is controlled. In summary, this general sequential procedure provides a formal way to determine not only the number of traits associated with a genetic variant, but also which traits are associated.

### Simulations

To evaluate the adequacy of the $\chi^2$ distribution for the LRT, we performed simulations. For the pleiotropy null, we performed two sets of simulations. The first one assumed that all $\beta_j = 0$, the usual null for multivariate data. The second one fixed one $\beta_j$ at a nonzero value and set all others to zero. The nonzero value of $\beta_j$ was chosen because the power for detecting this marginal effect size was very large for our setup. We assumed three different sample sizes and two different numbers of traits ($p$ = 4 and 10). The smallest sample size was used to evaluate the adequacy of our asymptotic derivations for small samples. The variance of the errors was assumed to be 1, and the covariance was assumed to be either a constant for all pairs of traits (*i.e.*, exchangeable correlation structure) or a range of covariances. For the range of covariances, we randomly chose covariances from a specified range, assuming a uniform distribution of the covariances. With a specified covariance structure, we simulated the random errors from either a multivariate normal distribution or a multivariate *t* distribution with 3 d.f., to evaluate the impact of heavy-tailed distributions. For all simulations, a single SNP was simulated, assuming a minor allele frequency of 0.2.

To evaluate the power of our proposed LRT for pleiotropy, we simulated 10 traits from a multivariate normal distribution with variances of 1 and equal covariances among the traits, set at a common value, for a total of *n* = 500 subjects. The number of traits associated with the SNP ranged over two, three, or five. The marginal effect of a trait was set at a value that explains 2% of the variation of a trait, giving 90% power to detect a marginal effect of this size at the nominal significance level. We also set the marginal effect to a smaller value, corresponding to 1.2% of explained variation of a trait, with 70% power to detect a marginal effect of this size. All simulations were repeated 1000 times.
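A sketch of one simulation replicate under this design (the correlation value and the number of associated traits are placeholders; the effect size is solved so the SNP explains about 2% of an associated trait's variance):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, rho, maf = 500, 10, 0.3, 0.2   # rho is a hypothetical common correlation

# SNP dose under Hardy-Weinberg: Binomial(2, maf), Var = 2*maf*(1-maf)
g = rng.binomial(2, maf, n).astype(float)
var_g = 2 * maf * (1 - maf)

# Effect size such that beta^2*var_g / (beta^2*var_g + 1) = 0.02,
# i.e., the SNP explains 2% of each associated trait's variance
beta = np.sqrt(0.02 / (0.98 * var_g))

# Exchangeable residual correlation with unit variances
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
E = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

n_assoc = 3                          # number of truly associated traits
Y = E.copy()
Y[:, :n_assoc] += beta * g[:, None]
```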

### Data application

Our newly developed LRT for pleiotropy was applied to a data set that has 10 immunologic phenotypes measured in response to primary smallpox vaccination. These phenotypes included measures of humoral immunity (neutralizing antibody titer) and cellular immunity [two separate IFNγ ELISPOT assays and cytokine secretion upon viral stimulation as measured by ELISA (IL-1β, IL-2, IL-6, IL-12p40, IFNα, IFNγ, TNFα)]. All 645 subjects included in the presented analyses were of Caucasian ancestry. All subjects provided informed consent for use of their samples and this study was approved by the Mayo Clinic Institutional Review Board. A genome-wide association of the 10 phenotypes was performed, with each phenotype adjusted for relevant covariates (*i.e.*, *P*-value <0.10 for association of a covariate with the phenotype, including eigenvectors to adjust for potential population stratification). Details of the study can be found in prior published reports (Kennedy *et al.* 2012a,b; Ovsyannikova *et al.* 2012a,b, 2013, 2014).

### Data availability

Software implementing the proposed tests for pleiotropy for quantitative traits is available as an R package called “pleio” in the Comprehensive R Archive Network (https://cran.r-project.org/web/packages/pleio/index.html).

## Results

### Simulation results

The type I error rates based on simulations are presented in Table 1, Table 2, Table 3, Table 4, Table 5, and Table 6. For all simulations, we show results from the two-stage test (using $\mathrm{lrt}_0$ for stage 1 and $t$ for stage 2), but in all cases, the results from the two-stage test were identical to those from the compound pleiotropy test. The results for when only one $\beta_j$ differs from zero (Table 1, Table 2, and Table 5) illustrate that the LRT can have inflated type I error rates for small sample sizes, with more extreme inflation as *p* increased from 4 to 10. In contrast, for moderate to large sample sizes the type I error rates were close to the nominal level, with only an occasional slight inflation. The inflated type I error rate for small sample sizes seems to be caused by the need to estimate the covariance matrix of the residuals. When we simulated errors that were independent and used the identity matrix for the residual correlations, the simulated type I error rates were very close to the nominal rates for all sample sizes. In contrast, when all $\beta_j$ were zero (Table 3, Table 4, and Table 6), the LRT has conservative type I error rates. This, however, is not of concern, because controlling the type I error rate when only one $\beta_j$ differs from zero is the major error that should be controlled when testing pleiotropy. These results were consistent for different amounts and patterns of residual correlations and for multivariate normal and multivariate *t* distributions.

To further evaluate the adequacy of our asymptotic approximations for large samples, we performed 10,000 simulations for 1000 subjects and four traits that had a common correlation structure. All $\beta_j$'s but one were zero; the nonzero $\beta_j$ was chosen such that there was either 90% or 30% power to detect its marginal effect. This scenario reflects modern large-scale genomic studies that use more stringent significance thresholds. The quantile–quantile plots in Figure 2 show that the asymptotic $\chi^2$ distribution to test pleiotropy provides adequate *P*-values over the entire range of *P*-values, both when the marginal effect of the one nonzero $\beta_j$ is small (power of 30%) or large (power of 90%) and when the correlation of the traits is small or large.

The simulation-based power is illustrated in Table 7 and Table 8. The general patterns show that the power to detect two or more associated traits increases with the number of truly associated traits, the effect size of each trait, and larger residual correlations among the traits.

To provide insights into the properties of our proposed sequential testing of multiple traits, we simulated six traits with a common correlation structure such that three of the traits were associated with a genetic variant (*i.e.*, three true nonzero $\beta_j$'s). The effect sizes of the associated traits were chosen to have marginal power of 0.3, 0.7, or 0.9 for a sample size of 1000 subjects. These marginal effect sizes correspond to 0.2%, 0.6%, and 1.0% explained variation of the marginal trait. A total of 1000 simulations were performed. The results are presented in Table 9. The frequency of accepting the null hypothesis that all $\beta_j$'s = 0 (*e.g.*, no $\beta_j$'s selected to be associated with the genetic variant) ranged from 0.646 when power was 0.3 to 0.015 when power was 0.9; not surprisingly, greater power resulted in greater frequency of selecting at least one $\beta_j$ to be nonzero. Table 9 also presents the frequency with which the three true nonzero $\beta_j$'s were selected, conditional on at least one of the six $\beta_j$'s being selected. For weak marginal power (*e.g.*, power of 0.3), the frequency of selecting all three nonzero $\beta_j$'s was small (0.034–0.213, depending on the trait correlation). Yet the frequency of selecting at least one of the three nonzero $\beta_j$'s was reasonable (0.747–0.862). The frequency of correctly selecting all three nonzero $\beta_j$'s increased as either marginal power increased or trait correlation increased. For example, for marginal power of 0.7, the frequency of selecting all three nonzero $\beta_j$'s was 0.179 for weak correlation ($\rho$ = 0.2) and 0.851 for strong correlation ($\rho$ = 0.8). For marginal power of 0.9, the frequency of selecting all three nonzero $\beta_j$'s was 0.472 for weak correlation ($\rho$ = 0.2) and 0.956 for strong correlation ($\rho$ = 0.8). In contrast to selecting true nonzero $\beta_j$'s, we also present in Table 9 the frequency of wrongly selecting $\beta_j$'s that are truly zero.
Not surprisingly, when marginal power is weak (power of 0.3), if at least one $\beta_j$ is selected, there is a significant chance of wrongly selecting a true-zero $\beta_j$ (*e.g.*, frequency of 0.209 when $\rho$ = 0.2). This type of error decreased as the marginal power for traits increased and the trait correlation increased. For example, when power was 0.9 and trait correlation was $\rho$ = 0.8, the frequency of selecting one true-zero $\beta_j$ was 0.034, approaching the nominal type I error rate of 0.05. Table 9 illustrates that although there is a chance of wrongly selecting one true-zero $\beta_j$, the frequency of selecting more than one true-zero $\beta_j$ was small.

### Data application results

Based on the traditional multivariate regression of all 10 traits on each SNP, we found a strong association of at least one of the traits with SNPs in a region on chromosome 5 (see Figure 3). Figure 4 illustrates the traditional multivariate test in a small region of chromosome 5 (left panel) and the LRT of pleiotropy in the same region (right panel). The test for pleiotropy provides strong evidence that the signal of association was driven by a single phenotype. This is confirmed qualitatively in Figure 5, which shows the individual marginal trait associations for the chromosome 5 region. Although the individual marginal associations in Figure 5 give the visual impression that only one trait is strongly associated with the chromosome 5 SNPs, the LRT of pleiotropy provides a formal statistical test that accounts for the correlations among the traits.

## Discussion

Genetic pleiotropy has been of scientific interest since the time of Gregor Mendel, as he described different traits in peas controlled by genes, such as pea coat color and texture, color of flowers, and whether there were axial spots. In current research, understanding pleiotropy can aid the understanding of complex biological mechanisms of genes (as shown in our vaccine response data), as well as aid the development of pharmacologic and vaccine targets. Yet the statistical methods to assess pleiotropy have resorted to *ad hoc* comparison of univariate statistical tests or multivariate methods that test the null hypothesis of no trait associations. Because a formal statistical test of pleiotropy was lacking, we developed a novel LRT statistic. The statistic is easy to compute, based on well-known linear regression methods for quantitative traits. Our simulations show that the LRT closely follows a $\chi^2$ distribution when only one trait is associated with a genetic variant and that the LRT tends to be conservative when no traits are associated. We proposed a sequential testing procedure, where the null hypothesis of no associated traits could be tested first (using standard multivariate regression methods) and, if significant, be followed by a test of whether only one trait is associated. If the test of only one associated trait rejects, we proposed sequentially testing the null of *j* associated traits (*j* = 2, …, *p* − 1), until the sequential test fails to reject the null hypothesis. This approach provides a way to assess the number of traits associated with a genetic variant, accounting for the correlations among the traits. A limitation of our approach, and most other methods for associations of genetic variants with multiple traits, is that it has limited power when an allele is rare.
An alternative approach is to compare the similarity of multiple traits with the similarity of rare-variant genotypes across a genetic region, for pairs of subjects (Broadaway *et al.* 2016). The benefit of this approach is balanced with the limitation of not knowing which genotypes are associated with which traits. Our proposed sequential testing might provide a worthy follow-up procedure if some variants are not too rare.

Although our proposed methods assumed the subjects are independent, it is straightforward to extend our approach to pedigree data. To do so, the variance matrix of residuals for independent subjects, $\Sigma \otimes I_n$, would be replaced with $\Sigma \otimes K$. The matrix $K$ contains diagonal elements $1 + f_i$, where $f_i$ is the inbreeding coefficient for subject *i*, and off-diagonal elements $2\varphi_{ij}$. The parameter $\varphi_{ij}$ is the kinship coefficient between individuals *i* and *j*, the probability that a randomly chosen allele at a given locus from individual *i* is identical by descent to a randomly chosen allele from individual *j*, conditional on their ancestral relationship. For subjects from different pedigrees, $\varphi_{ij} = 0$, so $K$ can be structured as a block-diagonal matrix, with diagonal block $K_i$ for the *i*th pedigree. With this adjustment, our methods can be used for pedigree data or for data with population structure, where matrix $K$ is an estimate of genetic relationships (Schaid *et al.* 2013).

Application of our new approach to a study of immune phenotypes in response to smallpox vaccination strongly suggests that only 1 of 10 correlated traits is statistically associated with SNPs in a region on chromosome 5. The benefit of this type of analysis is that it provides strong guidance on follow-up functional studies for genome-wide association studies with multiple traits. In our case, it allowed investigators to focus on the single immunologic trait truly associated with the chromosome 5 SNPs, rather than conducting labor-intensive, expensive, and time-consuming experiments on unrelated immune response traits.

We recognize that our proposed LRT depends on the assumption that the residuals have a multivariate normal distribution. Our simulations with a multivariate *t* distribution (3 d.f.) suggest that the LRT is robust to heavy-tailed distributions. To ensure robustness with traditional multivariate regression, it is common practice to transform the data to have at least normally distributed marginal distributions, such as by a normal quantile transformation. This is a reasonable approach for our proposed LRT as well.
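A rank-based normal quantile transformation of this kind can be sketched as follows (an illustrative Python snippet, not part of our analysis code; the 0.5 rank offset is one common convention among several):

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_quantile_transform(y):
    """Map a quantitative trait to normal scores via ranks
    (the 0.5 offset is one common convention)."""
    ranks = rankdata(y)            # average ranks for ties
    return norm.ppf((ranks - 0.5) / len(y))

y = np.array([0.2, 5.0, 1.3, 0.9, 40.0])   # skewed, heavy-tailed values
z = normal_quantile_transform(y)
```

The transformation preserves the ordering of the observations while forcing the marginal distribution toward normality.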

A limitation of our method is that each of the traits is assumed to be quantitative. If all traits are binary, or if there is a mixture of quantitative and binary traits, then the dependence of the LRT on an assumed likelihood would need to be reconsidered. One approach is to consider a general multivariate exponential family of models (Prentice and Zhao 1991; Zhao *et al.* 1992; Sammel *et al.* 1997). Another approach would be to consider the reverse regression of an SNP dose on all traits, like the ordinal logistic MultiPhen approach of O’Reilly *et al.* (2012), yet develop an LRT for pleiotropy whereby one of the regression coefficients is allowed to be unconstrained under the null. An alternative approach that we are developing is based on generalized linear models and generalized estimating equations. The theoretical underpinnings of these alternative approaches, and their computational challenges, are topics of future research.

## Acknowledgments

This research was supported by (1) the U.S. Public Health Service, National Institutes of Health (NIH), grant GM065450 (to D.J.S.); (2) federal funds from the National Institute of Allergies and Infectious Diseases, NIH, Department of Health and Human Services, under contract HHSN266200400025C (N01AI40065) (to G.A.P.); and (3) the National Natural Science Foundation of China (grant 11371062), Beijing Center for Mathematics and Information Interdisciplinary Sciences, China Zhongdian Project (grant 11131002) (to X.T.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. G.A.P. is the chair of a Safety Evaluation Committee for novel investigational vaccine trials being conducted by Merck Research Laboratories. G.A.P. offers consultative advice on vaccine development to Merck & Co. Inc., CSL Biotherapies, Avianax, Dynavax, Novartis Vaccines and Therapeutics, Emergent Biosolutions, Adjuvance, and Microdermis. G.A.P. holds two patents related to vaccinia and measles peptide research. R.B.K. has grant funding from Merck Research Laboratories to study immune responses to mumps vaccine. These activities have been reviewed by the Mayo Clinic Conflict of Interest Review Board and are conducted in compliance with Mayo Clinic Conflict of Interest policies. This research has been reviewed by the Mayo Clinic Conflict of Interest Review Board and was conducted in compliance with Mayo Clinic Conflict of Interest policies.

## Appendices

## Appendix A

R code to compute *F*-statistic for canonical correlation of matrix **Y** with vector **x**.

```r
library(CCA)

# Y is the n x p matrix of traits; x is the genotype vector of length n
cc.fit <- cc(Y, x)

cc.fstat <- function(cc.fit){
  rho    <- cc.fit$cor
  lambda <- 1 - rho^2
  dimx   <- max(dim(cc.fit$xcoef))
  dimy   <- max(dim(cc.fit$ycoef))
  k      <- max(c(dimx, dimy))
  n      <- nrow(Y)   # Y is taken from the calling environment
  fstat  <- ((1 - lambda) / lambda) * ((n - k - 1) / k)
  pval   <- 1 - pf(fstat, k, n - k - 1, ncp = 0)
  return(list(fstat = fstat, pval = pval))
}
```

## Appendix B: Hypothesis Tests for Linear Model

#### Notation and model

Based on the regression model described in the main text, suppose that *p* traits are measured on each of *n* subjects, with $y_j$ the vector of measures on the *j*th trait for the *n* subjects, and stack the vectors as $y = (y_1', \ldots, y_p')'$. Let $X = I_p \otimes x$, where $x$ is a vector of length *n*. We assume that $x$ and $y$ are centered on their means. We can express the model as $y = X\beta + \epsilon$, where $\beta = (\beta_1, \ldots, \beta_p)'$. The error term $\epsilon \sim N(0, \Omega)$, where $\Omega = \Sigma \otimes I_n$, and the matrix $\Sigma$ is the covariance matrix for the within-subject covariances of the errors. Then, the Cholesky decomposition of $\Sigma$ is $\Sigma = LL'$, where $L$ is lower triangular and nonsingular. Using $L^{-1}$, the model can be transformed to have independent standard normal errors, where $\tilde{y} = (L^{-1} \otimes I_n)\,y$ and $\tilde{X} = (L^{-1} \otimes I_n)\,X$, with log likelihood
$$\ell(\beta) = \mathrm{const} - \tfrac{1}{2}\,\lVert \tilde{y} - \tilde{X}\beta \rVert^2.$$
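The stacked design and whitening transform can be checked numerically (an illustrative Python sketch with small, invented dimensions and covariance values; it is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 2

x = rng.normal(size=n)
x = x - x.mean()                       # centered genotype vector

Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])         # within-subject error covariance
L = np.linalg.cholesky(Sigma)          # Sigma = L L'

X = np.kron(np.eye(p), x.reshape(-1, 1))   # stacked design, (n*p) x p

# Whitening transform: multiply by L^{-1} kron I_n
W = np.kron(np.linalg.inv(L), np.eye(n))
Xt = W @ X                             # equals L^{-1} kron x
```

A useful identity follows directly: the whitened cross-product satisfies $\tilde{X}'\tilde{X} = \Sigma^{-1}(x'x)$, which is what makes the constrained fits below cheap to compute.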

**Theorem 1.** *Let* $V$ *be a* $(p-1) \times p$ *matrix of rank* $p-1$. *Then the minimizer of* $\lVert \tilde{y} - \tilde{X}\beta \rVert^2$ *under the constraint* $V\beta = 0$ *is*
$$\tilde{\beta} = \hat{\beta} - (\tilde{X}'\tilde{X})^{-1} V' \left[ V (\tilde{X}'\tilde{X})^{-1} V' \right]^{-1} V \hat{\beta},$$
*where* $\hat{\beta} = (\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{y}$ *is the ordinary least-squares* (*OLS*) *estimate*.

*Furthermore*,
$$\lVert \tilde{y} - \tilde{X}\tilde{\beta} \rVert^2 = \lVert \tilde{y} - \tilde{X}\hat{\beta} \rVert^2 + \lVert \tilde{X}\hat{\beta} - \tilde{X}\tilde{\beta} \rVert^2. \tag{B1}$$

*Proof*. Denote $Q(\beta) = \lVert \tilde{y} - \tilde{X}\beta \rVert^2$. Note that
$$Q(\beta) = \lVert \tilde{y} - \tilde{X}\hat{\beta} \rVert^2 + \lVert \tilde{X}\hat{\beta} - \tilde{X}\beta \rVert^2 + 2(\tilde{y} - \tilde{X}\hat{\beta})'\tilde{X}(\hat{\beta} - \beta) = \lVert \tilde{y} - \tilde{X}\hat{\beta} \rVert^2 + \lVert \tilde{X}\hat{\beta} - \tilde{X}\beta \rVert^2.$$

The last step above results from $(\tilde{y} - \tilde{X}\hat{\beta})'\tilde{X} = 0$, because $\hat{\beta}$ satisfies the OLS normal equations.

Under the constraint $V\beta = 0$, applying the Lagrange multiplier method, we minimize
$$\lVert \tilde{X}\hat{\beta} - \tilde{X}\beta \rVert^2 + 2\lambda' V\beta.$$
By taking the derivative of this with respect to $\beta$ and $\lambda$, we obtain the solution $\hat{\lambda} = \left[ V(\tilde{X}'\tilde{X})^{-1}V' \right]^{-1} V\hat{\beta}$, and the estimate of $\beta$ is
$$\tilde{\beta} = \hat{\beta} - (\tilde{X}'\tilde{X})^{-1} V' \left[ V(\tilde{X}'\tilde{X})^{-1}V' \right]^{-1} V\hat{\beta}.$$

Therefore, we have
$$\lVert \tilde{y} - \tilde{X}\tilde{\beta} \rVert^2 = \lVert \tilde{y} - \tilde{X}\hat{\beta} \rVert^2 + \lVert \tilde{X}\hat{\beta} - \tilde{X}\tilde{\beta} \rVert^2.$$
This completes the *Proof*.

**Remark.** Equation B1 illustrates that the residual sums of squares (ssq) for the constrained model are partitioned into two parts: (1) the ssq for the OLS fit and (2) the sum of squared differences of the fitted values for the OLS model and the constrained model.
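The constrained estimator and the partition in Equation B1 can be verified numerically (an illustrative Python sketch; the random matrices simply stand in for the whitened design $\tilde{X}$ and response $\tilde{y}$, and the constraint matrix follows the theorem):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3

# Stand-ins for the whitened design X-tilde and response y-tilde
Xt = rng.normal(size=(n * p, p))
yt = rng.normal(size=n * p)

XtX = Xt.T @ Xt
bhat = np.linalg.solve(XtX, Xt.T @ yt)      # unconstrained OLS estimate

# Constraint V beta = 0, with V the identity matrix minus row k (k = 0 here),
# i.e. every beta except beta_0 is forced to zero.
V = np.delete(np.eye(p), 0, axis=0)

# Constrained estimate from Theorem 1:
A = np.linalg.inv(XtX)
btil = bhat - A @ V.T @ np.linalg.solve(V @ A @ V.T, V @ bhat)

rss_ols = float(np.sum((yt - Xt @ bhat) ** 2))
rss_con = float(np.sum((yt - Xt @ btil) ** 2))
t_k = float(np.sum((Xt @ (bhat - btil)) ** 2))   # distance between fitted values
```

By construction `V @ btil` is zero and `rss_con` equals `rss_ols + t_k`, which is exactly the partition in Equation B1.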

**Corollary 1.** *Under the null hypothesis* $V\beta = 0$,
$$t = \lVert \tilde{X}\hat{\beta} - \tilde{X}\tilde{\beta} \rVert^2 \sim \chi^2_{p-1}.$$

*Proof.* From *Theorem 1*,
$$\tilde{X}\hat{\beta} - \tilde{X}\tilde{\beta} = \tilde{X}(\tilde{X}'\tilde{X})^{-1} V' \left[ V(\tilde{X}'\tilde{X})^{-1}V' \right]^{-1} V\hat{\beta}.$$
The substitution of $V\hat{\beta}$ with $V(\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{\epsilon}$ in the last step of the above *Proof* can be made because, by the assumed linear model, we find that
$$V\hat{\beta} = V\beta + V(\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{\epsilon} = V(\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{\epsilon}.$$
The last step above results because $V\beta = 0$ under the null hypothesis.

It is easy to verify that the matrix $P = \tilde{X}(\tilde{X}'\tilde{X})^{-1}V'\left[ V(\tilde{X}'\tilde{X})^{-1}V' \right]^{-1}V(\tilde{X}'\tilde{X})^{-1}\tilde{X}'$ is idempotent and is of rank $p-1$ if the rank of $V$ is $p-1$. Because $t = \tilde{\epsilon}'P\tilde{\epsilon}$, with $\tilde{\epsilon} \sim N(0, I)$ and $P$ idempotent, $t \sim \chi^2_{p-1}$, completing the *Proof*.

**Corollary 2.** *If* $V\beta \neq 0$, *then* $t \to \infty$ *in probability as* $n \to \infty$.

*Proof*. It follows from the *Proof* of *Corollary 1* that $t = (V\hat{\beta})'\left[ V(\tilde{X}'\tilde{X})^{-1}V' \right]^{-1}(V\hat{\beta})$. By the linear model, we have
$$V\hat{\beta} = V\beta + V(\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{\epsilon}.$$
It is clear that $\tilde{X}'\tilde{X}/n$ converges to a positive definite matrix and that $\left[ V(\tilde{X}'\tilde{X})^{-1}V' \right]^{-1}$ grows at rate $n$. In addition, $V\hat{\beta} \to V\beta \neq 0$, since $V(\tilde{X}'\tilde{X})^{-1}\tilde{X}'\tilde{\epsilon} \to 0$ in probability. Combining all three facts yields *Corollary 2*.

#### Hypothesis tests

Now we consider the null hypothesis of no pleiotropy:
$$H_0: \text{at most one of } \beta_1, \ldots, \beta_p \text{ is nonzero}.$$
The null hypothesis is equivalent to testing whether one of the following hypotheses holds:
$$H_{0k}: \beta_j = 0 \ \text{for all } j \neq k,$$
for $k = 0, 1, \ldots, p$. Note that $H_{00}$ represents all $\beta_j = 0$, while $H_{0k}$ for $k \geq 1$ allows $\beta_k \neq 0$ while all other $\beta_j = 0$.

To represent these hypotheses, we use constraint matrices $V_k$. Let $V_0$ be a matrix such that $V_0\beta = 0$ tests whether all $\beta_j = 0$. In this case, $V_0$ is the identity matrix of dimension $p$. To construct $V_k$ for $k = 1, \ldots, p$, create an identity matrix of dimension $p$ and then remove the *k*th row. This results in a $(p-1) \times p$ matrix $V_k$. Then, the null hypothesis is equivalent to
$$H_0: \bigcup_{k=0}^{p} \{ V_k \beta = 0 \}.$$
For $k = 0, 1, \ldots, p$, let $\tilde{\beta}_k$ denote the least-squares estimate under the constraint $V_k\beta = 0$. Then it follows from *Theorem 1* that
$$\lVert \tilde{y} - \tilde{X}\tilde{\beta}_k \rVert^2 = \lVert \tilde{y} - \tilde{X}\hat{\beta} \rVert^2 + t_k,$$
where
$$t_k = \lVert \tilde{X}\hat{\beta} - \tilde{X}\tilde{\beta}_k \rVert^2.$$
Then we have the following corollary.

**Corollary 3.** *The LRT*, 2 × *log of ratio of likelihoods*, *is given by*
$$t = \min_{1 \leq k \leq p} t_k.$$

If $H_{00}$ holds ($\beta = 0$), then $t_k \xrightarrow{d} \chi^2_{p-1}$ for $k = 1, \ldots, p$.

If only one $H_{0k}$ ($k \geq 1$) holds, then $t \xrightarrow{d} \chi^2_{p-1}$. From *Corollary 3*, we can see that the test statistic $t$ has two different asymptotic distributions, depending on whether $\beta = 0$ or not. When $\beta = 0$, the asymptotic distribution of $t$ is unknown. Alternatively, when $\beta = 0$ we can use the commonly used $\chi^2_p$ test for the null hypothesis that all $\beta_j = 0$. This motivates us to perform the test in two stages. The first stage tests $H_{00}$, using $t_0 = \lVert \tilde{X}\hat{\beta} \rVert^2$ as the test statistic, so we reject $H_{00}$ if $t_0 > \chi^2_{1-\alpha,\, p}$, where $\chi^2_{1-\alpha,\, p}$ is the $(1-\alpha)$ quantile of a $\chi^2$ distribution with *p* d.f. If $H_{00}$ cannot be rejected, then $H_0$ cannot be rejected. If $H_{00}$ is rejected, we turn to the second stage to test the null hypothesis that one $H_{0k}$ holds for $k \in \{1, \ldots, p\}$. Then we can use the test statistic $t = \min_{1 \leq k \leq p} t_k$. Since $t_k \xrightarrow{d} \chi^2_{p-1}$ when $H_{0k}$ holds, we reject the null hypothesis that one $H_{0k}$ holds if $t > \chi^2_{1-\alpha,\, p-1}$. Then, the null hypothesis $H_0$ is rejected only if both $H_{00}$ is rejected and the null hypothesis that one $H_{0k}$ holds is rejected. Since both tests are conducted at a type I error rate of $\alpha$, and this is based on the principle of the IU test (Silvapulle and Sen 2004), the type I error rate for rejecting $H_0$ is no more than $\alpha$.

**Remark.** If *p* is large, it might be beneficial to skip the first-stage test and directly use $t$ to construct the rejection region.
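The two-stage procedure can be sketched as follows (an illustrative Python translation of the statistics above, not the authors' implementation; it assumes the whitened response `yt` and design `Xt` are already available):

```python
import numpy as np
from scipy.stats import chi2

def pleiotropy_two_stage(yt, Xt, alpha=0.05):
    """Two-stage IU test sketch.

    Stage 1 tests H00: beta = 0 (chi-square, p d.f.).
    Stage 2 tests that at most one beta is nonzero, using the
    LRT t = min_k t_k (chi-square, p - 1 d.f.).
    """
    p = Xt.shape[1]
    A = np.linalg.inv(Xt.T @ Xt)
    bhat = A @ Xt.T @ yt

    # Stage 1: t0 = ||X-tilde bhat||^2, constraint matrix V0 = I_p
    t0 = float(np.sum((Xt @ bhat) ** 2))
    if t0 <= chi2.ppf(1 - alpha, df=p):
        return {"reject": False, "t0": t0}

    # Stage 2: t = min_k t_k, where V_k is the identity with row k removed
    t_k = []
    for k in range(p):
        V = np.delete(np.eye(p), k, axis=0)
        btil = bhat - A @ V.T @ np.linalg.solve(V @ A @ V.T, V @ bhat)
        t_k.append(float(np.sum((Xt @ (bhat - btil)) ** 2)))
    t = min(t_k)
    return {"reject": t > chi2.ppf(1 - alpha, df=p - 1),
            "t0": t0, "t": t, "t_k": t_k}
```

The index attaining the minimum $t_k$ points to the single trait most consistent with driving the association.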

#### Sequential test of nonzero betas

The above solutions can be easily extended to test the following null hypothesis:
$$H_0^{(s)}: \text{at most } s \text{ of } \beta_1, \ldots, \beta_p \text{ are nonzero}.$$
With appropriately defined $V$ matrices (there are $\binom{p}{s}$ such matrices), the LRT reduces to
$$t^{(s)} = \min_{k_1 < \cdots < k_s} t_{k_1 \cdots k_s},$$
where
$$t_{k_1 \cdots k_s} = \lVert \tilde{X}\hat{\beta} - \tilde{X}\tilde{\beta}_{k_1 \cdots k_s} \rVert^2.$$

In the above, $V$ is a $(p-s) \times p$ matrix. For example, if for indexes $k_1, \ldots, k_s$ we test
$$H_0: \beta_j = 0 \ \text{for all } j \notin \{k_1, \ldots, k_s\},$$
then we can constitute the corresponding $V$ matrix as follows: (i) constitute a $p \times p$ identity matrix and (ii) delete the rows for indexes $k_1, \ldots, k_s$.

Then we can use the following multistage test:

i. First test $H_{00}: \beta = 0$. Reject if $t_0 > \chi^2_{1-\alpha,\, p}$. If reject, go to the next stage; otherwise stop and conclude $H_{00}$ is true.

ii. For $s = 1, \ldots, p-1$, test the null hypothesis that there are only $s$ components of $\beta \neq 0$. Reject if $t^{(s)} > \chi^2_{1-\alpha,\, p-s}$, where $t^{(s)} = \min t_{k_1 \cdots k_s}$, the indexes ranging over the $\binom{p}{s}$ choices. If reject, continue testing by incrementing *s* by 1. If fail to reject, stop testing and conclude there are *s* traits associated with *x*.
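The multistage procedure can be sketched end to end (an illustrative Python translation under the same assumptions as the constrained fit in Theorem 1; `yt` and `Xt` stand for the whitened response and design):

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def constrained_t(yt, Xt, keep):
    """t statistic for the constraint beta_j = 0 for all j not in `keep`."""
    p = Xt.shape[1]
    A = np.linalg.inv(Xt.T @ Xt)
    bhat = A @ Xt.T @ yt
    # V: identity matrix with the rows for the kept indexes deleted
    V = np.eye(p)[[j for j in range(p) if j not in keep]]
    btil = bhat - A @ V.T @ np.linalg.solve(V @ A @ V.T, V @ bhat)
    return float(np.sum((Xt @ (bhat - btil)) ** 2))

def sequential_trait_count(yt, Xt, alpha=0.05):
    """Concluded number of traits associated with x, by the multistage test."""
    p = Xt.shape[1]
    # Stage 0: H00 (beta = 0), V0 = I_p, chi-square with p d.f.
    if constrained_t(yt, Xt, keep=()) <= chi2.ppf(1 - alpha, df=p):
        return 0
    # Stages s = 1, ..., p-1: at most s nonzero betas, chi-square with p - s d.f.
    for s in range(1, p):
        t_s = min(constrained_t(yt, Xt, keep=idx)
                  for idx in combinations(range(p), s))
        if t_s <= chi2.ppf(1 - alpha, df=p - s):
            return s
    return p
```

Testing stops at the first stage that fails to reject, so the returned count is the smallest number of associated traits consistent with the data.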

The type I error rate of this sequential testing is no greater than the nominal level $\alpha$. To understand this, suppose there are $K$ nonzero $\beta$'s, and define the type I error as concluding there are more than $K$ nonzero $\beta$'s. Note that the test statistic at each stage is based on the minimum of statistics, where each statistic is based on a measure of distance between fitted values based on the unconstrained OLS model and the constrained model determined by $V$. If one of the constrained models is correct, then by *Corollaries 1* and *2*, the statistic asymptotically follows a $\chi^2$ distribution. If, however, none of the constrained models is correct, the statistic diverges to infinity in probability. This means that at testing stage $j < K$, the probability of rejecting the null hypothesis at stage *j* depends on the power to detect the misspecified models, which approaches 1 as *n* increases. With this background, we can formally evaluate the type I error rate. Define $R_j$ as an indicator of whether the null hypothesis is rejected at stage *j*. The probability of a type I error is the probability of rejecting the sequential stage testing up to and including stage $K$. This joint probability can be expressed as
$$P(R_0 = 1, R_1 = 1, \ldots, R_K = 1) = P(R_0 = 1) \times P(R_1 = 1 \mid R_0 = 1) \times \cdots \times P(R_K = 1 \mid R_0 = 1, \ldots, R_{K-1} = 1). \tag{B2}$$
The last term in expression (B2) represents the probability of rejecting the null hypothesis when the null hypothesis is true, so one of the constrained models is correct. The test statistic at stage $K$ follows a $\chi^2_{p-K}$ distribution, so this term is no greater than $\alpha$. All other terms in (B2) approach 1 as *n* increases, because all earlier stages represent misspecified models. This proves that the type I error rate is asymptotically no greater than the specified level.

## Footnotes

*Communicating editor: G. A. Churchill*

- Received March 16, 2016.
- Accepted August 11, 2016.

- Copyright © 2016 by the Genetics Society of America