Detection of linkage to genes for quantitative traits remains a challenging task. Recently, variance-components (VC) techniques have emerged as among the more powerful of the available methods. As often implemented, such techniques require assumptions about the phenotypic distribution; usually, multivariate normality is assumed. However, several factors may lead to markedly nonnormal phenotypic data, including (a) the presence of a major gene (not necessarily linked to the markers under study), (b) some types of gene × environment interaction, (c) use of a dichotomous phenotype (i.e., affected vs. unaffected), (d) nonnormality of the population within-genotype (residual) distribution, and (e) selective (extreme) sampling. Using simulation, we have investigated, for sib-pair studies, the robustness of the likelihood-ratio test for a VC quantitative-trait-locus (QTL) detection procedure to violations of normality that are due to these factors. Results showed (a) that some types of nonnormality, such as leptokurtosis, produced type I error rates in excess of the nominal, or alpha, levels, whereas others did not; and (b) that the degree of type I error-rate inflation appears to be directly related to the residual sibling correlation. Potential solutions to this problem are discussed. Investigators contemplating use of this VC procedure are encouraged to provide evidence that their trait data are normally distributed, to employ a procedure that allows for nonnormal data, or to consider implementation of permutation tests.
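To make the setting concrete, the sketch below illustrates the kind of analysis the abstract describes, under the standard sib-pair VC formulation in which the phenotypic covariance of a pair is modeled as pi_hat * sigma2_q + 0.5 * sigma2_a, with pi_hat the estimated proportion of marker alleles shared identical by descent, and in which the likelihood-ratio statistic is referred to a 50:50 mixture of chi-square(0) and chi-square(1). This is not the authors' code; the simulated leptokurtic residual distribution, the residual sibling correlation, and all function and parameter names are illustrative assumptions. It contrasts the asymptotic p-value with a permutation p-value obtained by shuffling the IBD-sharing values across pairs, the kind of permutation test the abstract recommends.

```python
"""Illustrative sketch: sib-pair VC QTL likelihood-ratio test vs. a permutation test."""
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(1)


def sib_pair_nll(params, y, pi_hat, with_qtl):
    """Negative log-likelihood of centered sib-pair phenotypes under a
    bivariate-normal VC model with QTL, additive polygenic, and residual variances."""
    if with_qtl:
        s2q, s2a, s2e = params
    else:
        s2q, (s2a, s2e) = 0.0, params
    v = s2q + s2a + s2e                       # within-person variance
    c = pi_hat * s2q + 0.5 * s2a              # sib covariance, per pair
    det = v * v - c * c
    if v <= 0 or np.any(det <= 0):
        return 1e10                           # penalty for non-positive-definite covariance
    d1, d2 = y[:, 0], y[:, 1]
    quad = (v * d1**2 - 2 * c * d1 * d2 + v * d2**2) / det
    return 0.5 * np.sum(np.log(det) + quad) + y.shape[0] * np.log(2 * np.pi)


def lrt_statistic(y, pi_hat):
    """Likelihood-ratio statistic for H0: sigma2_q = 0 vs H1: sigma2_q >= 0."""
    yc = (y - y.mean()) / y.std()             # crude standardization
    f1 = minimize(sib_pair_nll, [0.2, 0.4, 0.4], args=(yc, pi_hat, True),
                  bounds=[(0, None)] * 3, method="L-BFGS-B")
    f0 = minimize(sib_pair_nll, [0.4, 0.6], args=(yc, pi_hat, False),
                  bounds=[(0, None)] * 2, method="L-BFGS-B")
    return max(0.0, 2.0 * (f0.fun - f1.fun))


# Simulate sib pairs under the NULL (no linked QTL) with a skewed,
# leptokurtic trait and a nonzero residual sibling correlation.
n_pairs, rho = 500, 0.35
shared = rng.chisquare(3, n_pairs)            # component shared within a pair
unique = rng.chisquare(3, (n_pairs, 2))       # person-specific component
y = np.sqrt(rho) * shared[:, None] + np.sqrt(1 - rho) * unique
pi_hat = rng.choice([0.0, 0.5, 1.0], n_pairs, p=[0.25, 0.5, 0.25])

lrt_obs = lrt_statistic(y, pi_hat)
# Asymptotic p-value from the usual 50:50 mixture of chi2(0) and chi2(1).
p_asym = 0.5 * chi2.sf(lrt_obs, df=1) if lrt_obs > 0 else 1.0

# Permutation p-value: shuffling IBD sharing across pairs breaks any
# marker-phenotype association while preserving the trait distribution.
n_perm = 200
perm_stats = [lrt_statistic(y, rng.permutation(pi_hat)) for _ in range(n_perm)]
p_perm = (1 + sum(s >= lrt_obs for s in perm_stats)) / (n_perm + 1)

print(f"LRT = {lrt_obs:.3f}, asymptotic p = {p_asym:.3f}, permutation p = {p_perm:.3f}")
```

Repeating the whole script over many replicates and tabulating how often each p-value falls below the nominal alpha would reproduce the kind of type I error comparison the abstract reports: under leptokurtic residuals with appreciable sibling correlation, the asymptotic test can reject too often, whereas the permutation p-value remains valid because it conditions on the observed trait values.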