Abstract

Although standard statistical tests (such as contingency chi-square or G tests) are not well suited to the analysis of temporal changes in allele frequencies, they continue to be used routinely in this context. Because the null hypothesis stipulated by the test is violated if samples are temporally spaced, the true probability of a significant test statistic will not equal the nominal α level, and conclusions drawn on the basis of such tests can be misleading. A generalized method, applicable to a wide variety of organisms and sampling schemes, is developed here to estimate the probability of a significant test statistic if the only forces acting on allele frequencies are stochastic ones (i.e., sampling error and genetic drift). Results from analyses and simulations indicate that the rate at which this probability increases with time is determined primarily by the ratio of sample size to effective population size. Because this ratio differs considerably among species, the seriousness of the error in using the standard test will also differ. Bias is particularly strong in cases in which a high percentage of the total population can be sampled (for example, endangered species). The model used here is also applicable to the analysis of parent-offspring data and to comparisons of replicate samples from the same generation. A generalized test of the hypothesis that observed changes in allele frequency can be satisfactorily explained by drift follows directly from the model, and simulation results indicate that the true α level of this adjusted test is close to the nominal one under most conditions.
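The following is an illustrative sketch, not code from the paper: a Monte Carlo check of the abstract's central claim that a standard contingency chi-square test rejects at more than its nominal α level when two samples are separated by generations of genetic drift, and that the inflation is driven by the ratio of sample size to effective population size. All parameter values (Ne, S, t, p0) are hypothetical choices for demonstration.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

def rejection_rate(Ne, S, t, p0=0.5, alpha=0.05, reps=2000):
    """Fraction of replicates in which a 2x2 contingency chi-square test
    is significant when the only forces acting between the two samples
    are genetic drift and sampling error."""
    hits = 0
    for _ in range(reps):
        # Sample 2S gene copies at generation 0 (diploid sample of size S).
        x0 = rng.binomial(2 * S, p0)
        # Wright-Fisher drift for t generations among 2*Ne gene copies.
        p = p0
        for _ in range(t):
            p = rng.binomial(2 * Ne, p) / (2 * Ne)
        # Sample 2S gene copies at generation t.
        xt = rng.binomial(2 * S, p)
        table = np.array([[x0, 2 * S - x0],
                          [xt, 2 * S - xt]])
        if table.min() == 0:  # skip degenerate tables (allele fixed or lost)
            continue
        chi2, pval, dof, expected = chi2_contingency(table, correction=False)
        hits += pval < alpha
    return hits / reps

# Inflation grows with S/Ne: compare a modest sample from a large population
# with the same sample from a small population (e.g., an endangered species,
# where a high percentage of the total population can be sampled).
for Ne, S in [(1000, 50), (50, 50)]:
    print(f"Ne={Ne}, S={S}, t=5: rejection rate ~ {rejection_rate(Ne, S, 5):.3f}")
```

Under the null hypothesis the rejection rate would sit near 0.05; with temporally spaced samples it rises above that, and markedly so when S approaches Ne, consistent with the pattern the abstract describes.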
