Abstract
False Discovery Rate (FDR) and the Bayes risk are two different statistical measures that can be used to evaluate and compare multiple testing procedures. Recent results show that under sparsity, FDR-controlling procedures such as the popular Benjamini-Hochberg (BH) procedure also perform very well in terms of the Bayes risk. In particular, asymptotic Bayes optimality under sparsity (ABOS) of BH was shown previously for location and scale models based on log-concave densities. This article extends previous work to a substantially larger set of distributions of effect sizes under the alternative, where the alternative distribution of true signals does not change with the number of tests $m$, while the sample size $n$ slowly increases. ABOS of BH and of the corresponding step-down procedure is proved for FDR levels proportional to $n^{-1/2}$. A simulation study shows that these asymptotic results are relevant already for relatively small values of $m$ and $n$. Apart from showing asymptotic optimality of BH, our results on the optimal FDR level provide a natural extension of the well-known results on the significance levels of Bayesian tests.
Summary
Driven by a vast number of applications, multiple hypothesis testing with sparse alternatives has over the last few years become a topic of intensive research (see [1, 9, 12, 13, 24] or [31]). Considering point null hypotheses allows nontrivial asymptotic inference (that is, positive asymptotic power) while keeping the distribution of true effects under the alternative fixed as the number of tests increases. This assumption, natural in many practical applications, substitutes the assumptions of [6] and [33], where the magnitude of true effects increases with the number of tests. In contrast to BH, the Bonferroni rule does not adapt well to an unknown level of sparsity. Apart from these theoretical findings, we report the results of an extensive simulation study comparing the performance of BH, the Bonferroni correction, and a multiple testing procedure based on the empirical Bayes estimate proposed in [26]. Most of the technical proofs are deferred to the Appendix, which includes a discussion of the relationship between rules controlling the Bayesian False Discovery Rate (BFDR) and the Bayes classifier.
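For readers unfamiliar with the procedure under study, the following is a minimal sketch of the BH step-up rule: with ordered p-values $p_{(1)} \le \dots \le p_{(m)}$, reject the hypotheses corresponding to the $k$ smallest p-values, where $k$ is the largest index with $p_{(k)} \le k\alpha/m$. The choice of proportionality constant for the FDR level $\alpha \propto n^{-1/2}$ below is purely illustrative, not a value taken from the article.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha):
    """Benjamini-Hochberg step-up procedure.

    Rejects the hypotheses with the k smallest p-values, where k is the
    largest index such that p_(k) <= k * alpha / m. Returns a boolean
    rejection mask in the original order of `pvals`.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    thresholds = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:
        k = below[-1] + 1          # largest k with p_(k) <= k * alpha / m
        reject[order[:k]] = True   # reject the k smallest p-values
    return reject

# FDR level shrinking at the rate alpha ∝ n^{-1/2} discussed in the
# article; the constant 0.5 is a hypothetical choice for illustration.
n = 100
alpha = 0.5 / np.sqrt(n)
rejections = benjamini_hochberg([0.001, 0.02, 0.04, 0.3, 0.8], alpha)
```

At FDR level $\alpha$, the step-down variant analyzed in the article instead rejects up to the first index at which the inequality $p_{(k)} \le k\alpha/m$ fails; both procedures are shown to be ABOS under the stated conditions.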