Abstract

In generating adversarial examples, a trade-off is always present: the attacker aims to deceive the classifier in the subtlest way possible so as not to be uncovered. Trade-offs among different objectives can be tackled in many ways; one of the most popular is to consider several objectives and then select among the optimal solutions according to some criterion. In this paper, a game-theoretic approach to setting the magnitude of adversarial examples for a fast gradient sign model for image classification is proposed. The model controls the size of the pixel change and the number of randomly chosen pixels to be modified. A solution of the game indicates optimal trade-offs between these values. Numerical examples illustrate the approach.
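
The two quantities the abstract refers to can be made concrete with a minimal sketch of a sign-gradient perturbation applied to a random subset of pixels. This is only an illustration under stated assumptions, not the authors' implementation: the loss gradient with respect to the input image is assumed to be supplied externally (e.g. by any autodiff framework), and the function name sparse_fgsm and its parameters eps (size of the pixel change) and k (number of randomly chosen pixels) are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): perturb k randomly chosen
# pixels of an image by +-eps in the direction of the sign of the loss gradient.
import numpy as np

def sparse_fgsm(image, loss_grad, eps, k, rng=None):
    """Apply a sign-gradient step of magnitude eps to k randomly chosen pixels.

    image     : array of pixel values in [0, 1]
    loss_grad : gradient of the classifier's loss w.r.t. the image
                (assumed precomputed; a placeholder here)
    eps       : size of the pixel change
    k         : number of randomly chosen pixels to modify
    """
    rng = np.random.default_rng() if rng is None else rng
    # Choose k pixel positions uniformly at random, without replacement.
    flat_idx = rng.choice(image.size, size=k, replace=False)
    mask = np.zeros(image.size, dtype=bool)
    mask[flat_idx] = True
    mask = mask.reshape(image.shape)
    # Sign-gradient step restricted to the chosen pixels, then clip to valid range.
    adv = image + eps * np.sign(loss_grad) * mask
    return np.clip(adv, 0.0, 1.0)

# Usage with dummy data: a 28x28 grayscale image and a random stand-in gradient.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
grad = rng.standard_normal((28, 28))
adv_img = sparse_fgsm(img, grad, eps=0.1, k=50, rng=rng)
```

In the game described by the abstract, eps and k play the role of the strategic variables whose optimal trade-off the solution of the game identifies; their specific values above are arbitrary.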
