Abstract

In this paper, we investigate the adversarial robustness of classification problems. In the considered model, after a sample is generated, it is modified by an adversary before being observed by the classifier. From the adversarially modified data, the classifier must decide which underlying hypothesis generated the sample. We formulate this problem as a minimax hypothesis testing problem, in which the adversary designs an attack strategy to maximize the error probability while the decision maker designs a decision rule to minimize it. We solve this minimax problem and characterize the corresponding optimal strategies.
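As an illustrative sketch only (the abstract fixes no notation), the minimax formulation might be written as follows, assuming binary hypotheses H_0 and H_1 with priors \pi_0, \pi_1, a sample X distributed as P_i under H_i, an attack channel q that maps X to the observed Y, and a decision rule \delta; all of these symbols are assumptions, not taken from the paper:

    % Error probability of rule \delta against attack channel q
    P_e(\delta, q) = \sum_{i \in \{0,1\}} \pi_i \, \Pr\{\delta(Y) \neq i \mid H_i\},
    \qquad X \sim P_i \text{ under } H_i, \quad Y \sim q(\cdot \mid X)

    % Decision maker minimizes; adversary maximizes
    \min_{\delta} \max_{q} \; P_e(\delta, q)

Under this reading, the paper's contribution is to identify a saddle point of this game, i.e., an optimal pair (\delta^*, q^*).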
