Abstract
In the comparison of DNA and protein sequences between species or between paralogues or among individuals within a species or population, there is often some indication that different regions of the sequence are divergent or polymorphic to different degrees, indicating differential constraint or diversifying selection operating in different regions of the sequence. The problem is to test statistically whether the observed regional differences in the density of variant sites represent real differences and then to estimate as accurately as possible the location of the differential regions. A method is given for testing and locating regions of differential variation. The method consists of calculating G(x_{k}) = k/n – x_{k}/N, where x_{k} is the position of the kth variant site along the sequence, n is the total number of variant sites, and N is the total sequence length. The estimated region is the longest stretch of adjacent sequence for which G(x_{k}) is monotonically increasing (a hot spot) or decreasing (a cold spot). Critical values of this length for tests of significance are given, a sequential method is developed for locating multiple differential regions, and the power of the method against various alternatives is explored. The method locates the endpoints of hot spots and cold spots of variation with high accuracy.
A common question that arises in the comparison of related DNA or protein sequences is whether the differences between them are concentrated in some regions of the sequence and are relatively sparse in others. This problem arises commonly in three contexts: polymorphism of a gene within a species, divergence of a gene between two species, and divergence of paralogous sequences that arose originally through duplication. In all of these cases there are some clearly definable regions that we expect, a priori, to be more or less variable or divergent than others, as, for example, introns compared to exons. The differences between such a priori regions can be detected by standard statistical tests of heterogeneity. A much more difficult statistical problem arises, however, when there are no such clearly defined a priori regions, but we are looking for evidence of heterogeneity of variation within, say, an intron or an exon. A solution to the problem of detecting such regions was given in an article by Goss and Lewontin (1996), in which two fairly powerful tests for heterogeneity were developed. This earlier study, however, only provided statistical tests to detect heterogeneity; it did not offer any method for locating those parts of the sequence that differ from other parts in their variation. In this article we develop an estimation procedure for locating the position along the sequence of regions of differential probability of substitution. The method we describe, using empirical cumulative distribution function (ECDF) statistics, has the property that it also provides another test of the null hypothesis, with about the same power as the Goss and Lewontin variance and extremal run length tests. The statistical question, then, is whether the lengths and positions of runs of variant and invariant sites are what we might expect if all positions along the sequence have equal probabilities of substitution.
Because the estimation method arises in the context of a test of heterogeneity, we begin our exposition with a discussion of the test, turning later to the estimation problem. The performance (power) of the test is evaluated under several alternative hypotheses. The second part of this article describes how the same algorithm is used to estimate the regions of differential variability. We assess the estimate through several measures and discuss its potential problems. The article concludes with two numerical examples using real molecular data.
A METHOD OF HYPOTHESIS TESTING
The structure of the data is fairly simple. Two or more sequences are aligned and a new resultant sequence is produced with a 0 at each position at which all the sequences are identical, and a 1 at any position where there is at least one variant among the sequences. Where only two sequences are compared, as, for example, between two species or between two paralogous sequences, the 1's mark the sites of divergence. Where multiple sequences are compared, typically in a polymorphism study, the 1's mark sites that are polymorphic without reference to how many distinguishably different allelic forms are seen at the site. The result is a single sequence made up of runs of 0's separated by 1's. Beginning at one end of the sequence, we can describe the data as a series of “events” marked by the 1's, separated by runs of “no event” denoted by the 0's. It is the lengths and arrangements of these runs of 0's between “events” that provide the basis for statistical tests and for locating regions of high or low probability of an event. In most applications the degree of polymorphism or sequence divergence is small compared to the total sequence length, so there will be many more 0's than 1's, and the data will appear as runs of 0's punctuated by single 1's. But we are not restricted to such cases. Where there are many divergent sites between sequences, there will often be uninterrupted stretches of multiple 1's, but two adjacent 1's are simply counted as being separated by a run of 0's of length 0, with no loss of generality. When the proportion of divergent or polymorphic sites is actually >50% we can simply reverse the definition of events. In this article we consider cases where the proportion of events is ≤45% (see the last two columns of Table 1).
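As an illustration, the reduction of an alignment to this 0/1 event sequence can be sketched in a few lines of Python (our own sketch, not part of the original analysis; the function name and the choice to return the 1-based positions of the 1's are ours):

```python
def variant_positions(seqs):
    """Collapse aligned, equal-length sequences into the positions of
    variant sites: a site is a 1 if at least one sequence differs there,
    and a 0 otherwise.  Returns the 1-based positions of the 1's and
    the total sequence length N."""
    N = len(seqs[0])
    assert all(len(s) == N for s in seqs), "sequences must be aligned"
    sites = [k + 1 for k in range(N) if len({s[k] for s in seqs}) > 1]
    return sites, N

# three short aligned sequences: sites 3 and 8 are variant
sites, N = variant_positions(["ACGTACGT", "ACCTACGA", "ACGTACGA"])
```

Only the positions of the 1's and the total length N are needed in what follows, since the runs of 0's are recoverable as the spacings between consecutive event positions.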
It is important to note that the procedures we derive do not assume that there are only two alternatives at a site for a multiple sequence comparison. The methods are equally valid whether a site is marked as “variable” because a single sequence differs from all the others or because every sequence differs uniquely from every other at the site. What is lost by considering only two classes, variant and invariant, is the potential information contained in the frequency distribution and enumeration of all the alternative forms. Ultimately, all other things being equal, this represents a loss of statistical power to detect some kinds of heterogeneity. For example, the distribution of interevent distances might conform to the null hypothesis of no clumping, but all the events that appear in one region of the sequence might be the result of only a single divergent sequence at any variant site, while in another region every sequence might differ from every other one at every variant site. But to make statistical use of such information is a great deal more complex than it may appear. First, it is not clear what the null hypothesis is. The simplest would be that for all sites there is a common number of different equiprobable states, but this number cannot even be estimated from the data because of ascertainment bias, a bias that depends on the true number of alternatives, the number of sequences in the sample, and the level of polymorphism. Other ad hoc null hypotheses suggested a posteriori from the observed patterns of variation suffer from the dangers of all a posteriori tests, while a priori tests contain various numbers of undetermined parameters.
Second, the increase in power obtainable from more detailed classification could only be achieved by increases in sample size, because the change from an underlying binomial hypothesis to a multinomial one reduces the number of observations falling in any distinguishable observed class, thus spreading the deviations among more degrees of freedom. Thus, the two-state classification, variant and invariant, seems the only practical basis for a general search for heterogeneity.
The detection of heterogeneity is essentially a test of goodness-of-fit to a uniform null distribution. Several nonparametric methods have been developed, which may be grouped into two broad categories. One group includes the runs tests, which are based on the interval lengths between two events. For a comprehensive review and comparison of various methods in this group, readers are referred to Goss and Lewontin (1996). The other group, which includes the method discussed in this study, uses the ECDF statistics (Stephens 1986b).
The ECDF statistics: The ECDF statistics use the difference between the observed cumulative distribution of events, the ECDF, and the theoretical cumulative distribution function (CDF) under some null hypothesis (Stephens 1986a). In our context the meaning of the ECDF, F_{n}(x), is as follows. We have a total of N positions along the sequence labeled sequentially from one end by x (1 ≤ x ≤ N). On this sequence there are n events (marked by 1's). Beginning at one end of the sequence and progressing to the other end, we record how many of the events have occurred up to and including position x. In a sample of n events (1's), F_{n}(x) is a stepwise function, calculated from the observations:

F_{n}(x) = (number of events at positions ≤ x)/n.
The CDF, F(x), is calculated from the null hypothesis. In our case, the null distribution is uniform and

F(x) = x/N, for 1 ≤ x ≤ N.
The motivation for our test is as follows. If the true substitution rate at each site is the same, no region should contain unusually large or unusually small numbers of events. We expect the length of most spacings (length between two consecutive events) to be more or less close to the average, n/N, rather than being either extremely short or extremely long. Further, the longer spacings and the shorter ones should occur in a nonsystematic fashion; that is, we expect shorter spacings interspersed among longer ones, and vice versa. Hence F_{n}(x) – F(x) moves up and down. On the other hand, if a region in the sequence has a very high substitution rate, we will observe more events occurring in that region, and thus a cluster of shorter spacings. Meanwhile, because n is fixed, there must be too few events elsewhere. When this happens (F_{n}(x) – F(x)) decreases sharply and consistently in a region of lower rate while increasing sharply in a region of higher rate.
Method: Denote the positions of the events by X_{1}, X_{2},..., X_{n}, where X_{1} < X_{2} <... < X_{n}, and calculate

G(X_{k}) = F_{n}(X_{k}) – F(X_{k}) = k/n – X_{k}/N, for k = 1, 2,..., n.
In a region bounded by two events, i and j, let

ΔG = G(X_{j}) – G(X_{i}).

The test statistic T is the value of ΔG of greatest absolute magnitude among all intervals over which G changes monotonically.
We reject the hypothesis of uniformity if the magnitude of the test statistic, |T|, exceeds some critical value, T*, a function of n, N, and α (the probability of type I error). To find T*, we performed Monte Carlo simulations, using programs written in C with the drand48 random number generator, producing 100,000 samples of n events by sampling sites without replacement along a sequence of 5000 sites. Figure 2 shows the results of the simulation for the null model. The distribution of T under the null hypothesis is symmetric with respect to 0. Therefore, we choose T* to be the 100(1 – α/2) percentile of T among the 100,000 replicates, so that a two-tailed test has type I error of α. The critical values of T* are given in Table 1 for various sequence lengths, N, numbers of events, n, and rejection levels, α.
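The statistic and its Monte Carlo critical value can be sketched as follows (a Python sketch of our own, not the authors' C programs; the `delta` argument anticipates the smoothing tolerance described below, with delta = 0 giving strict monotonicity):

```python
import random

def g_curve(x, n, N):
    """G(x_k) = k/n - x_k/N at each ordered event position x_k."""
    return [(k + 1) / n - xk / N for k, xk in enumerate(x)]

def T_statistic(x, N, delta=0.0):
    """Signed Delta-G of largest magnitude over stretches in which G
    changes (almost) monotonically; opposite moves smaller than `delta`
    are tolerated, and delta = 0 means strict monotonicity."""
    n = len(x)
    g = g_curve(x, n, N)
    best = 0.0
    for sign in (+1, -1):              # +1 scans rises (hot), -1 falls (cold)
        start = 0
        for k in range(1, n):
            if sign * (g[k] - g[k - 1]) < -delta:
                start = k              # monotone run broken beyond tolerance
            d = g[k] - g[start]
            if sign * d > 0 and abs(d) > abs(best):
                best = d
    return best

def critical_value(n, N, alpha=0.05, reps=100000, seed=1, delta=0.0):
    """T* by Monte Carlo: the (1 - alpha) quantile of |T| under the
    uniform null, with n event sites drawn without replacement from
    1..N; by the symmetry of T this gives a level-alpha two-tailed
    test that rejects when |T| > T*."""
    rng = random.Random(seed)
    ts = sorted(abs(T_statistic(sorted(rng.sample(range(1, N + 1), n)),
                                N, delta))
                for _ in range(reps))
    return ts[int((1 - alpha) * reps) - 1]
```

Because the null distribution of T depends only on n and N, the critical value can be simulated once per sample configuration and tabulated, as in Table 1.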
In the above model, ΔG is calculated over intervals where G increases (or decreases) monotonically. Essentially, this method looks for the longest stretch in the sample in which every spacing is shorter (or, for a cold spot, longer) than the expected length. But it is conceivable that one or more spacings may be slightly longer in a true hot spot; similarly, a spacing in a real cold spot may be slightly shorter than average. Such an atypical random spacing produces noise on the G curve (Figure 1), which we treat by smoothing. A great deal of literature on smoothing procedures is available (see Simonoff 1996). In this study, for ease of computation, we adopt a very simple smoothing scheme: we say that G is almost monotonically increasing (or almost monotonically decreasing) in any interval in which any opposite change is <0.005. This relaxation of the definition of monotonicity amounts to a slight smoothing of the G curve. The value of 0.005 is chosen quite arbitrarily, and it is discussed in more detail in a later section. But as long as we use a consistent method when simulating under the null distribution, the probability of type I error will not be increased.
Alternative hypotheses: Under the null hypothesis, the probability of substitution at each site is the same. The alternative distribution can be very complex, as its parameter space is multidimensional (Goss and Lewontin 1996). Because the primary goal is to detect regions of heterogeneous substitution rate, we only consider alternative hypotheses in which each true distribution is composed of a few regions. We begin with two simple alternative distributions in which the true probability density function partitions the entire sequence into three regions. The central region is considered as the differential region. Three parameters that completely characterize an alternative hypothesis are the width (measured as a fraction of the length of the entire sequence), depth, and location of this differential region.
The differential region in alternative hypothesis A is a hot spot, and that in alternative hypothesis B is a cold spot. It should be emphasized, however, that these terms are only relative, not absolute. A cold spot only has lower probability of substitution relative to regions flanking it. Thus, distribution B can be equivalently considered as composed of two hot spots.
We first examine the performance of the test when the width, depth, and position of the differential region vary. We then briefly discuss the change in the power when the alternative distribution contains more regions. To evaluate the power of the test we simulate samples with n events under different alternative distributions, recording the proportion of replicates in which the test statistic T, simulated under this alternative hypothesis, is more extreme than the critical value T*. In most cases, we perform a two-tailed test and reject the null hypothesis when |T| > T*. Of course, if we have prior knowledge of the direction of the deviation, the test can be one-tailed, with the critical values derived in a similar fashion.
Power of the test: Figure 3, a and b, shows the power of the test when the underlying distribution contains one hot spot or one cold spot of varying width. The ratio, r, of probability of a substitution between the differential and the constant regions is 5:1 in the case of a hot spot and 1:5 for a cold spot. The x-axis represents the width, w, of the differential region, and the y-axis represents the power (estimated from 10,000 trials) at rejection level α = 0.05. The critical values of T* are given in Table 1 for various numbers of events, n. For a hot spot and with n = 30, the test has reasonably good power (>0.5) when the differential region spans 10–50% of the entire sequence. In the case of a cold spot, the differential region needs to span 30–80% of the entire sequence to have good power.
The power will, in general, be a function of the number of distinct sites of variation (n), the ratio of substitution rate between the differential and constant region (r), the width of the differential region (w), and the position of the differential region.
Figure 3, a and b, shows that for a given alternative hypothesis the power of the test increases as n, the number of distinct sites of variation, increases. This is expected because an increased sample size provides additional information. Holding r at the 5:1 level and w at 10% of the entire sequence, the power is only 0.27 for n = 10, but increases to 0.67 for n = 30, and reaches 0.99 when n = 100.
By the depth of the differential region we mean the ratio, r, of substitution rate between the differential and background regions. An increase in the deviation in substitution rate in the differential region raises the power. For 60 events and holding w at 10% of the total length, the power is 0.17 when the probability in the differential region is twice as high as that of the constant region, but it increases to 0.94 when r is 5:1, ceteris paribus.
Figure 3, a and b, shows that for both alternative hypotheses A and B, the power of the test peaks when the differential region is of moderate width. Note that when the differential region is narrow, a hot spot is easier to detect than a cold spot. But when the differential region becomes wide, a cold spot is easier to detect. This is not surprising. If several events take place in a relatively narrow hot spot, the spacings will be significantly shorter. This will produce a highly positive ΔG in that region, and hence high power. But if the narrow differential region is a cold spot, the best that can happen is that no event takes place in this region. Thus, the cold region is completely covered by one spacing, and all the events occur in the constant region with equal probability. If this one spacing covering the cold spot is not long enough to produce a significant ΔG, the test is most likely to be insignificant.
In comparison with Goss and Lewontin's variance test, Figure 4 indicates that the two methods have comparable power, and neither dominates the other over the range of the width of the differential region.
In the above simulations, we always centered the differential region to make the two flanking regions equal in length. The power is not affected significantly if this differential region is slightly off center. But when it moves to the extremities of the sequence, the power is affected due to an edge effect, which arises because the first and the last spacings in an observed sequence are necessarily shorter than the expected length, n/N. Consider a situation in which one of the flanking regions in alternative hypothesis B is degenerate. The cold spot is located at one end of the sequence, but the first spacing at this end is still likely to be shorter than expected due to the edge effect. The cold spot is then less likely to be detected, and the power of the test decreases. The same reasoning argues that the power should increase if the head/tail region is in fact a hot spot. A remedy for the edge effect is to circularize the two ends of the sequence and then cut it at the first event. In other words, we can cut off the first spacing and append it at the end. The new sites of the events are X_{k}′ = X_{k+1} – X_{1}, for k = 1,..., n – 1. Note that we have one less spacing because the first and the last spacings in the original sequence have merged into one. The power of the test against all alternative hypotheses then becomes independent of the location of the alternative region (data not shown). Finally, even without circularizing the sequence, the performance of the test is not affected much provided the flanking regions on both sides reach at least 5% of the total length of the sequence.
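The circularization remedy amounts to a one-line transformation of the event positions (an illustrative sketch; the function name is ours):

```python
def circularize(x):
    """Join the ends of the sequence and cut at the first event:
    X'_k = X_{k+1} - X_1 for k = 1, ..., n - 1.  One spacing is lost,
    because the first and last spacings of the original sequence merge."""
    return [xk - x[0] for xk in x[1:]]

# events at 3, 10, 40 become 7 and 37 after cutting at the first event
```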
Changing the number of differential regions: Intuitively, we expect that one hot spot is easier to detect than two hot spots, each half its size. It is also generally true that one hot spot is easier to detect than two isolated hot spots of the same size, unless the two hot spots are extremely close to each other. In that case they behave like one hot spot and, as seen in Figure 4, the power is not proportional to the width of the differential region. We show these two properties by two experiments.
First, we examine the change in power when one differential region is split into two or more smaller regions. We start with alternative hypothesis A, with one hot spot of width 0.3. We then divide this region into two or more smaller regions, keeping the total width of all these regions at 0.3. Between each pair of consecutive smaller hot spots is a segment of constant region of width 0.1. Each of these alternative distributions is arranged symmetrically with respect to the center of the sequence. Figure 5 shows the change in the power to reject the null hypothesis when the original region is split into 2–6 regions. Note that the power decreases less sharply for larger sample size. Nonetheless, it decreases in each case. For this reason, our method of hypothesis testing should not be used to detect very fine-grained rate variation. It will not, for example, detect a higher substitution rate of the third position in each codon.
In the second experiment, we compare the change in power when there is more than one differential region of the same size. We begin with alternative hypothesis A, with a hot spot of width 0.1. We then add the second, third, fourth, and fifth hot spots, all with length 0.1. Again, each pair of neighboring hot spots is separated by a constant region of width 0.1, and each of these alternative hypotheses is arranged symmetrically with respect to the center of the sequence. The results are almost identical to the picture given in Figure 5. As expected, a decrease in power is observed for all sample sizes.
Although we have only presented results when the alternative distribution contains multiple hot spots, the same argument applies to the case of multiple cold spots. Denote by P(m, j) the power of rejecting an alternative hypothesis with m similar differential regions, each of length j. Using the results in Figure 5, we conclude that in most cases

P(1, mj) > P(m, j) and P(1, j) > P(m, j).
ESTIMATION OF THE DIFFERENTIAL REGION
We now turn to the problem of estimating the differential region. We restrict ourselves to the simple cases where the underlying distributions are in the shapes of alternative hypotheses A or B. Our goal is to estimate the location of the central region. As the distribution becomes more complex, it is not at all clear what should be identified as the “differential” region.
Sequential estimation method: Using the hypothesis testing method with smoothing presented in the last section, the estimation procedure and the testing procedure go hand in hand:

1. Test for heterogeneity in the sequence. If the test is not significant we conclude that the substitution rate is uniform and there is no region to be estimated.

2. If |T| > T*(α, n) we estimate the differential region as the interval over which G changes almost monotonically and the absolute magnitude of ΔG is maximized.

3. Remove the estimated region from the sequence, and repeat steps 1 and 2. Continue this iterated process until no significant heterogeneity is found.
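The three steps can be sketched as follows (an illustrative Python sketch, not the authors' implementation; for simplicity it reuses a single critical value `t_star`, whereas strictly T* should be re-simulated for the reduced n and N after each excision):

```python
def best_interval(x, N, delta=0.005):
    """Endpoints (as indices i, j into x) of the almost-monotone
    stretch of G with the largest |Delta G|, plus that signed Delta G."""
    n = len(x)
    g = [(k + 1) / n - xk / N for k, xk in enumerate(x)]
    best = (0, 0, 0.0)
    for sign in (+1, -1):              # +1 finds hot spots, -1 cold spots
        start = 0
        for k in range(1, n):
            if sign * (g[k] - g[k - 1]) < -delta:
                start = k              # run broken beyond the tolerance
            d = g[k] - g[start]
            if sign * d > 0 and abs(d) > abs(best[2]):
                best = (start, k, d)
    return best

def sequential_regions(x, N, t_star, delta=0.005):
    """Steps 1-3: test, estimate, excise, and repeat until no
    significant heterogeneity remains.  Returns a list of
    (left, right, Delta G) triples for the excised regions."""
    regions = []
    x = list(x)
    while len(x) > 1:
        i, j, t = best_interval(x, N, delta)
        if abs(t) <= t_star:           # step 1: no longer significant
            break
        left, right = x[i], x[j]       # step 2: estimated region
        regions.append((left, right, t))
        width = right - left           # step 3: excise and anneal flanks
        x = [xk for xk in x if xk < left] + \
            [xk - width for xk in x if xk > right]
        N -= width
    return regions
```

For a sequence of 1000 sites with events clustered between sites 450 and 500, a single pass of this procedure excises that cluster and the remaining events show no significant heterogeneity.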
In such a recursive procedure the most deviant observations are successively removed while n and N are progressively reduced. The probability of a type I error in tests after the first will therefore be smaller than the nominal value of α. Such a recursive procedure will then be both conservative and of lower power for each successive test.
Unlike point estimates, there is no established measure of accuracy for an interval estimate. Intuitively, we want our estimate to coincide with the true central region as closely as possible. To this end we look at the distributions of two proportions: the proportion (P) of the estimated region that falls in the true differential region, and the proportion (Q) of the true differential region that is covered by the estimated region. Of these two proportions, the former is more important than the latter. If we simply estimated the region by including the entire sequence, we would be bound to cover the true differential region completely every time (if one is present). But such an estimate is meaningless. At the same time, we hope that our estimated region includes as much of the true differential region as possible. An estimated region of 1 bp long is not very informative. Alternatively, we can treat the endpoints of the estimated interval as two point estimates and examine their marginal and joint distributions.
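For two intervals given as inclusive (left, right) site pairs, P and Q are simple overlap ratios; a minimal sketch (function name ours):

```python
def overlap_proportions(est, true):
    """P: fraction of the estimated interval lying inside the true
    differential region; Q: fraction of the true region covered by the
    estimate.  Intervals are inclusive (left, right) site pairs."""
    (a, b), (c, d) = est, true
    inter = max(0, min(b, d) - max(a, c) + 1)
    return inter / (b - a + 1), inter / (d - c + 1)
```

For example, an estimate of (2200, 2800) against a true region of (2250, 2750) contains the true region entirely (Q = 1) while about 83% of the estimate falls inside it.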
Once the boundary of the central region is identified, we can further estimate the relative rate of substitution by comparing the ratio of the number of substitutions in each region to the width of that region.
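A sketch of this rate-ratio estimate (the function name is hypothetical; it is simply events per site inside the estimated region divided by events per site outside):

```python
def rate_ratio(x, left, right, N):
    """Estimated relative substitution rate of the region
    [left, right]: events per site inside the region divided by
    events per site in the rest of the sequence of length N."""
    inside = sum(left <= xk <= right for xk in x)
    outside = len(x) - inside
    w_in = right - left + 1
    return (inside / w_in) / (outside / (N - w_in))
```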
Results: Figure 6, a and b, shows single simulated examples of plots of G when the sequence contains a hot spot or a cold spot, respectively. As seen in Figure 3b from the previous section, the power of detecting a cold spot is low when the true region is very narrow. Therefore, we use examples of cold spots of width 0.2 in this section.
Figure 7 shows the joint distribution of P and Q under alternative hypothesis A and n = 60. The mean length of the estimated region is 554 sites, while the true central region spans 500 sites. The distribution shows that in most cases the proportion, P, of the estimated region that overlaps the true differential region is >60%. Figure 7 also shows the distribution of Q, the proportion of the truly differential region that is included within the estimated region under the alternative hypothesis. In almost all cases, at least 70% of the differential region is covered. Of course, the estimates do not coincide exactly with the true differential region. The plot of the joint distribution of P and Q in Figure 7 shows that the mode occurs where 99% of the estimated region overlaps with the true region and 94–96% of the true region is identified. The mean proportions of P and Q under alternative hypotheses A and B with different numbers of events are tabulated in Table 2. Counterintuitively, the mean of Q for a cold spot decreases as n increases. Such an apparent trend, however, should not be interpreted to mean that a larger sample size somehow lowers the accuracy of the estimation. The length of the true differential region is constant for all three cases (w = 0.2). When n is small, the estimated regions are much larger (low average of P) than the true one, so they cover most of the true region; as n increases, the estimated regions are narrower, and therefore are less likely to cover the entire true differential region. Although Q decreases slightly as we accumulate more events, data with larger n still give a more informative estimate.
As an extension, we may look at the quantity R = PQ, the product of the two proportions, which is close to 1 only when the estimated and true regions nearly coincide.
Alternatively, we can look at the distribution of each endpoint. The distributions of both endpoints are shown in Figure 8, a and b. The sharpness of the peaks in these distributions is another indication of the accuracy of the estimates.
Table 3 lists the means of the estimated endpoints. In each case, the true left and right endpoints are 2250 and 2750, respectively. It seems that the estimated left endpoint tends to be too small, and that of the right endpoint too large. But the bias decreases as the sample size increases. One potential source for the bias is that each estimate is always a site where a substitution actually occurs. For an estimate to fall exactly on the true endpoint, a necessary condition is that an event occurs there. The probability of such an occurrence is low, especially when n is small. If there is no event taking place at the true change point, the spacing that covers the true change point is likely to be shorter (if the differential region is a hot spot) than the expected length of spacing. So the left estimate is often the site of substitution right before the differential region starts. By the same token, the right estimates are often the site of substitution immediately after the differential region ends. If this is true we may correct the bias by taking some inner portion of the current estimate.
The smoothing parameter: As noted earlier, we have somewhat arbitrarily neglected any opposite excursion of G smaller than the smoothing parameter Δ = 0.005 as “noise.” For a sequence 5000 bp long, this allows a spacing in a hot spot to be as much as 25 bp longer than average even if it should be shorter than the average length under the null hypothesis. Is this relaxation enough, or is it too much? This depends on the resolution of the study. It should be clear that as long as the same smoothing scheme is used to simulate the critical values under the null hypothesis, the type I error will not be affected. Table 4 compares the 0.05-level critical value, the power of the test, and the median and standard deviation of the left and right estimates of the differential region for varying Δ. The alternative region is a centered hot spot with ratio 5:1. It shows that the power fluctuates as Δ varies, while the estimated regions grow wider as Δ increases. In the limiting cases, when Δ = 0 there is no smoothing, and when Δ → ∞ the statistic is the same as the V statistic, suggested by Kuiper (1960), which uses the difference between the maximum and minimum of G over the entire sequence. For this particular alternative hypothesis, a smoothing parameter between 0.005 and 0.01 optimizes the estimates in the sense that the medians are closest to the true endpoints and the spread of the estimates, measured by the standard deviations, is smallest. Nonetheless, Table 4 indicates that the power and the estimates are reasonably stable for a wide range of Δ; hence the method we present here is quite robust even if the “optimal” smoothing parameter is not used.
HOT SPOTS vs. COLD SPOTS
One shortcoming in many previously suggested tests is that one cannot distinguish a hot spot from a cold spot. For example, the shortest and longest interval tests require prior knowledge of the probability of the differential region (Goss and Lewontin 1996). After all, there is a shortest interval and a longest interval in every sequence, and testing with both methods runs into the problem of multiple comparisons and an increased probability of type I error. The variance test provides a unified test for both cold and hot spots, but the test statistic is always positive, and it does not provide insight on the nature of the differential region (Goss and Lewontin 1996). The method presented here requires no prior assumption of either a cold or a hot spot. The test is two-tailed when we have no knowledge about the alternative distribution. An extremely low (negative) value of T suggests the existence of a cold spot, while a highly positive T value indicates the presence of a hot spot.
A careful look at Figure 7 reveals that there are cases in which the estimated region completely misses the true differential region (P, Q, and R are all 0). In most of these cases the T statistic is highly negative. What happens in these cases is that, instead of identifying the central region as a hot spot, we have estimated the location of one of the flanking regions as a cold spot. This reveals a weakness of our definition of differential region. In this study we have arbitrarily defined the central region as the differential region. As discussed above, because of the lack of a baseline probability of substitution, alternative hypothesis A can be considered as a hot spot or, equivalently, as two cold spots. When the central region becomes very wide, the test is more likely to indicate one of the cold spots rather than the hot spot. If we have prior knowledge that the differential region is a hot spot, we can perform a onetailed test and only look for intervals where ΔG is positive. If we could detect both cold spots and leave only the central region, the estimate would be just as useful.
NUMERICAL EXAMPLES
As a first example, consider the Hin region of the dpp gene on chromosome 2 of the D. melanogaster genome. Each sequence consists of 5208 bp. Among the 18 genomes sequenced, 99 sites are found to be polymorphic (Richter et al. 1997). Positions of polymorphism in the sequence are given in the legend of Figure 9, which shows the plot of G at each position of polymorphism. The test statistic T is 0.1688. The null hypothesis is rejected with a P value of 0.00054. Further, Figure 9 shows that the differential region contains a hot spot between sites 1509 and 1960. This corresponds to the apparent cluster at the 5′ end of intron 2. If we cut off this segment and anneal the remaining two pieces, the resulting sequence is 4757 bp long, with 73 sites of polymorphism. Applying the test to this second sequence, the null hypothesis is again rejected at the 0.05 level (T = 0.1442 and P = 0.01338). This time, another hot spot is identified between base pairs 3027 and 3253. This corresponds to the cluster at the 3′ end of intron 2. If we delete this hot spot, anneal the remaining two pieces, and test for heterogeneity in substitution again, the test is not significant. Finally, if we anneal all of the exons into one sequence and examine the distribution of substitutions, there is no evidence of heterogeneity. This result is consistent with the Goss and Lewontin variance test (Richter et al. 1997).
The second example concerns the Drosophila Adh protein, which consists of 245 amino acids (sequences in GenBank; URL: http://www.ncbi.nlm.nih.gov/). The data were analyzed as a numerical example by Goss and Lewontin (1996). Figure 10 lists the 17 sites of fixed divergence among eight species of the melanogaster subgroup. The test statistic T is –0.386, giving a P value of 0.0058. This result enables us to reject the null hypothesis, and is consistent with that in Goss and Lewontin (1996). Figure 10 indicates a cold spot between amino acids 98 and 211.
Acknowledgments
We gratefully acknowledge Herman Chernoff, Peter Goss, and Steven Scott for their helpful discussions and references. A wholehearted thank you goes out to Susan Holmes for her 2 years of moral support of the senior author. Finally, we express our gratitude to Steven Scott, Daniel Larson, Richard Lung, and all those not already mentioned who have read the manuscript.
Footnotes

Communicating editor: A. G. Clark
 Received April 25, 1998.
 Accepted May 17, 1999.
 Copyright © 1999 by the Genetics Society of America