Abstract

Background/Aims: Genetic single-nucleotide polymorphism (SNP) data are often analyzed using trend tests that rely on a specific assumption about the way that disease frequency varies across genotypes, but the validity of this assumption is not typically known. We explore the relative efficiency of trend tests in which the assumed model may or may not correspond to the true genetic model. Methods: We derive formulae for the asymptotic relative efficiencies (AREs) comparing tests that assume different genetic models. We consider both unstratified and stratified tests, using both case-control and cohort data. We illustrate these formulae using realistic parameters and compare the calculated AREs to simulated relative efficiencies in finite samples. Results: The AREs are identical for unstratified tests using case-control and cohort data, but differ for stratified tests. Loss of efficiency can be substantial, given specific combinations of high-risk allele frequencies, disease frequencies, and assumed versus actual genetic models. Given reasonably large sample sizes, asymptotic calculations align well with finite sample simulations of relative efficiency. Conclusions: ARE is a useful estimate of the relative efficiency of statistics using different underlying genetic models. ARE calculations reveal that tests based on additive gene-dose scores, the most commonly used choice, can incur large losses in power in some settings.
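The comparison described in the abstract can be sketched as follows, assuming a Cochran-Armitage-style trend test written as N times the squared Pearson correlation between the genotype score and case status (the score vectors and function name are illustrative, not taken from the paper):

```python
def trend_statistic(case_counts, control_counts, scores):
    """Trend statistic for genotype counts (e.g. AA, Aa, aa) in cases and
    controls, given a score vector encoding the assumed genetic model:
    additive (0, 1, 2), dominant (0, 1, 1), or recessive (0, 0, 1).
    Computed as N * rho^2, where rho is the correlation between the
    genotype score and the case indicator."""
    totals = [r + s for r, s in zip(case_counts, control_counts)]
    N = sum(totals)
    R = sum(case_counts)
    mean_t = sum(t * n for t, n in zip(scores, totals)) / N
    var_t = sum(t * t * n for t, n in zip(scores, totals)) / N - mean_t ** 2
    mean_y = R / N                      # overall case fraction
    var_y = mean_y * (1 - mean_y)
    cov = sum(t * r for t, r in zip(scores, case_counts)) / N - mean_t * mean_y
    return N * cov ** 2 / (var_t * var_y)

# Example: genotype counts with a linear trend in case proportion.
cases, controls = [10, 20, 30], [30, 20, 10]
additive = trend_statistic(cases, controls, [0, 1, 2])   # -> 20.0
dominant = trend_statistic(cases, controls, [0, 1, 1])   # -> 15.0
```

In this toy example the dominant-score test captures only part of the additive trend (statistic 15 versus 20), illustrating the kind of efficiency loss from a misspecified genetic model that the ARE formulae quantify.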
