Abstract

We introduce a new class of methods for finite-sample false discovery rate (FDR) control in multiple testing problems with dependent test statistics where the dependence is known. Our approach separately calibrates a data-dependent p-value rejection threshold for each hypothesis, relaxing or tightening the threshold as appropriate to target exact FDR control. In addition to our general framework, we propose a concrete algorithm, the dependence-adjusted Benjamini–Hochberg (dBH) procedure, which thresholds the BH-adjusted p-value for each hypothesis. Under positive regression dependence, the dBH procedure uniformly dominates the standard BH procedure, and in general it uniformly dominates the Benjamini–Yekutieli (BY) procedure (also known as BH with log correction), which makes a conservative adjustment for worst-case dependence. Simulations and real data examples show substantial power gains over the BY procedure, and competitive performance with knockoffs in settings where both methods are applicable. When the BH procedure empirically controls FDR (as it typically does in practice), the dBH procedure performs comparably.
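The abstract compares dBH against the standard BH step-up procedure and its BY (log-corrected) variant. The dBH calibration step itself depends on the known dependence structure and is not reproduced here; as a point of reference, below is a minimal Python sketch of the two baseline procedures only. The function name `bh_rejections` and the usage snippet are illustrative assumptions, not code from the paper.

```python
import numpy as np

def bh_rejections(pvals, alpha=0.05, log_correct=False):
    """Benjamini-Hochberg step-up procedure.

    With log_correct=True, the target level is divided by the harmonic sum
    1 + 1/2 + ... + 1/m, giving the Benjamini-Yekutieli variant that is
    valid under arbitrary dependence (the "BH with log correction" above).
    Returns a boolean rejection mask of the same length as pvals.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    level = alpha / np.sum(1.0 / np.arange(1, m + 1)) if log_correct else alpha

    # Sort p-values and find the largest k with p_(k) <= k * level / m.
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    below = sorted_p <= level * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0]) + 1
        reject[order[:k]] = True  # reject the k smallest p-values
    return reject

# Illustrative usage (synthetic data, not from the paper):
rng = np.random.default_rng(0)
p = rng.uniform(size=100)
p[:10] /= 50  # a handful of strongly significant hypotheses
print(bh_rejections(p, alpha=0.1).sum())                    # BH rejections
print(bh_rejections(p, alpha=0.1, log_correct=True).sum())  # BY rejections
```

As the abstract describes, dBH departs from both baselines by replacing the single rejection level with a separately calibrated, data-dependent threshold on each hypothesis's BH-adjusted p-value, using the known dependence among the test statistics.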
