Abstract

Public data repositories have enabled researchers to compare results across multiple genomic studies in order to replicate findings. A common approach is to first rank genes according to a hypothesis of interest within each study. Lists of the top-ranked genes from each study are then compared across studies, and genes recaptured as highly ranked (usually above some threshold) in multiple studies are considered significant. However, this comparison strategy often remains informal, in that type I error and false discovery rate (FDR) are usually uncontrolled. In this paper, we formalize an inferential strategy for this kind of list-intersection discovery test. We show how to compute a $p$-value associated with a "recaptured" set of genes, using a closed-form Poisson approximation to the distribution of the size of the recaptured set. We investigate the operating characteristics of the test as a function of the total number of studies considered, the rank threshold within each study, and the number of studies within which a gene must be recaptured to be declared significant. We investigate the trade-off between FDR control and expected sensitivity (the expected proportion of true-positive genes identified as significant). We give practical guidance on how to design a bioinformatic list-intersection study with maximal expected sensitivity and prespecified control of type I error (at the set level) and false discovery rate (at the gene level). We show how the optimal choice of parameters may depend on the particular alternative hypothesis that holds. We illustrate our methods using prostate cancer gene-expression datasets from the curated Oncomine database, and we discuss the effects of dependence between genes on the test.
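
The abstract leaves the calculation implicit, so the following is a minimal sketch of the kind of computation described, assuming a null in which each study's ranking is independent and uniformly random and genes behave independently (the paper itself flags dependence between genes as a complication). The function and parameter names (recapture_pvalue, k_top, r_min, p_capture) are illustrative rather than the paper's notation, and the alternative model in expected_sensitivity is a deliberately simple stand-in.

    from scipy.stats import binom, poisson

    def recapture_pvalue(n_recaptured, n_genes, n_studies, k_top, r_min):
        """Set-level p-value for observing >= n_recaptured genes that rank in
        the top k_top of at least r_min of n_studies studies, under a null of
        independent uniform rankings, via a Poisson approximation to the
        distribution of the recaptured-set size."""
        p_top = k_top / n_genes                    # null chance a gene makes one top-k list
        q = binom.sf(r_min - 1, n_studies, p_top)  # null chance of recapture in >= r_min lists
        lam = n_genes * q                          # approximate Poisson mean of the set size
        return poisson.sf(n_recaptured - 1, lam)   # P(recaptured-set size >= n_recaptured)

    def expected_sensitivity(n_studies, r_min, p_capture):
        """Expected proportion of true-positive genes declared significant,
        assuming (hypothetically) that each true positive enters a study's
        top-k list independently with probability p_capture."""
        return binom.sf(r_min - 1, n_studies, p_capture)

    # Example: 20,000 genes, 5 studies, top-100 lists, recapture threshold r = 3.
    # Observing 2 recaptured genes gives a set-level p-value of roughly 3e-4.
    pval = recapture_pvalue(n_recaptured=2, n_genes=20000, n_studies=5,
                            k_top=100, r_min=3)

Under these assumptions, the trade-off described in the abstract appears directly: raising r_min or lowering k_top shrinks the null Poisson mean (tightening type I error and FDR control) while also lowering expected_sensitivity, so the design question becomes which (k_top, r_min) pair maximizes sensitivity subject to the prespecified error levels.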
