Abstract

A setup of simple binary hypothesis testing is examined, in which independent and identically distributed statistics are accumulated sequentially until either the accumulated sum exceeds a decision threshold (deciding the alternative hypothesis) or the time horizon passes a deadline (deciding the null hypothesis). Unlike other adaptive procedures such as the sequential probability ratio test, the adaptive decision rule (ADR) considered here has an upper bound on its stopping time, namely the deadline, and only one boundary, namely the decision threshold. The ADR may therefore be a suitable solution for scenarios in which the statistician wishes to identify the alternative hypothesis quickly while requiring a hard deadline on the decision-making process. For nonnegative statistics, the ADR is shown to yield the same detection performance as the fixed-size decision rule with sample size equal to the deadline, while having a mean stopping time that is only a fraction of the deadline. For log-likelihood ratio statistics, the performance of the ADR is compared with that of the fixed-size Neyman-Pearson optimal decision rule, and it is shown that the same detection performance can be assured while the ratio between the mean stopping time and the deadline becomes vanishingly small in the large-deadline asymptotic regime.
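The stopping rule described above is simple to state operationally. The following is a minimal sketch (not from the paper) of such a one-boundary, deadline-limited test, assuming Gaussian mean-shift log-likelihood ratio statistics; the threshold, deadline, and distribution parameters are illustrative values chosen only for demonstration.

```python
import numpy as np

def adaptive_decision_rule(samples, threshold):
    """Sketch of the one-boundary, deadline-limited test: accumulate the
    statistics until the running sum reaches the decision threshold
    (decide the alternative hypothesis H1) or the deadline, i.e. the
    number of available samples, is reached (decide the null H0).
    Returns (decision, stopping_time)."""
    total = 0.0
    for n, x in enumerate(samples, start=1):
        total += x
        if total >= threshold:
            return 1, n              # threshold crossed: decide H1, stop early
    return 0, len(samples)           # deadline reached: decide H0

# Illustrative use with Gaussian mean-shift log-likelihood ratios
# (H0: N(0, sigma^2) vs H1: N(mu, sigma^2)); all numbers are made up.
rng = np.random.default_rng(0)
mu, sigma, deadline, threshold = 0.5, 1.0, 200, 10.0
x = rng.normal(mu, sigma, deadline)                  # samples drawn under H1
llr = (mu / sigma**2) * (x - mu / 2)                 # per-sample log-likelihood ratio
decision, tau = adaptive_decision_rule(llr, threshold)
print(f"decision = H{decision}, stopping time = {tau} (deadline = {deadline})")
```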
