Abstract

We propose a universal classifier for binary Neyman–Pearson classification where the null distribution is known, while only a training sequence is available for the alternative distribution. The proposed classifier interpolates between Hoeffding's classifier and the likelihood ratio test and attains the same error probability prefactor as the likelihood ratio test, i.e., the same prefactor as if both distributions were known. In addition, like Hoeffding's universal hypothesis test, the proposed classifier is shown to attain the optimal error exponent tradeoff achieved by the likelihood ratio test whenever the ratio of training to observation samples exceeds a certain value. We propose a lower bound and an upper bound on the optimal training-to-observation ratio. Finally, we propose a sequential classifier that attains the optimal error exponent tradeoff.
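The two classical tests between which the proposed classifier interpolates can be sketched as follows. This is a minimal illustrative sketch for finite alphabets, not the paper's construction: the distributions, sample sizes, and thresholds below are assumptions chosen for the example. Hoeffding's test uses only the known null distribution $P_0$ and thresholds the KL divergence of the empirical type of the observation; the likelihood ratio test additionally requires the alternative $P_1$.

```python
import numpy as np

def kl(p, q):
    # KL divergence D(p || q) between discrete distributions (natural log)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def empirical_type(samples, alphabet_size):
    # Empirical distribution (type) of an integer-valued sample sequence
    counts = np.bincount(samples, minlength=alphabet_size)
    return counts / counts.sum()

def hoeffding_test(obs, p0, threshold):
    # Universal test: decide H1 iff D(T_obs || P0) >= threshold.
    # Needs only the null distribution P0.
    return kl(empirical_type(obs, len(p0)), p0) >= threshold

def likelihood_ratio_test(obs, p0, p1, threshold):
    # Decide H1 iff the normalized log-likelihood ratio exceeds the threshold.
    # Needs both P0 and P1.
    return float(np.mean(np.log(p1[obs] / p0[obs]))) >= threshold

# Illustrative distributions and data (assumed, for the sketch only)
p0 = np.array([0.5, 0.5])
p1 = np.array([0.9, 0.1])
rng = np.random.default_rng(0)
obs_h0 = rng.choice(2, size=500, p=p0)  # observation drawn under the null
obs_h1 = rng.choice(2, size=500, p=p1)  # observation drawn under the alternative
```

In the universal setting the paper studies, $P_1$ is unknown and is replaced by an estimate from the training sequence; the interpolation governs how much to trust that estimate relative to Hoeffding's distribution-free statistic.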
