Abstract

Recent studies show that deep neural networks (DNNs) are vulnerable to adversarial examples: an attacker can mislead the output of a DNN by adding subtle perturbations to a benign input image. Building on this, researchers have proposed a new generation of techniques to produce robust adversarial examples. Robust adversarial examples consistently fool DNN models under a predefined hyperparameter space, which allows them to break through some defenses against adversarial examples or even to serve as physical adversarial examples against real-world applications. Behind these achievements, the expectation over transformation (EOT) algorithm serves as the backbone framework for generating robust adversarial examples. Although the EOT framework is powerful, little is known about why it can generate robust adversarial examples. To address this issue, we present the first work to explain the principle behind robust adversarial examples. Based on these findings, we point out that the traditional EOT framework has a performance problem and propose an adaptive sampling algorithm to overcome it. By modeling the sampling process as the classic Coupon Collector's Problem, we prove that our new framework reduces the cost from O(n log n) to O(n), where n denotes the number of sampling points; from the viewpoint of computational complexity, this is optimal for the problem. Experimental results show that our algorithm saves up to 23% of the overhead on average. This is significant for black-box attacks, where the cost is determined by the number of queries.
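
As context for the two technical ingredients named above, the following is a brief sketch using standard formulations from the literature rather than the paper's own notation: the EOT objective (as introduced by Athalye et al.) and the Coupon Collector bound that offers one plausible reading of the O(n log n) cost of naive uniform sampling. The symbols x, delta, T, n, T_n, and H_n are illustrative and do not come from the abstract.

% Standard EOT objective: maximize the expected (targeted) log-likelihood
% over a distribution T of transformations, subject to a perturbation bound.
\[
\delta^{*} \;=\; \arg\max_{\|\delta\| \le \epsilon} \;
\mathbb{E}_{t \sim T}\!\left[ \log P\!\left(y_{\mathrm{target}} \mid t(x + \delta)\right) \right]
\]

% Coupon Collector's Problem: expected number of uniform random draws
% needed before all n sampling points have been observed at least once.
\[
\mathbb{E}[T_n] \;=\; n \sum_{k=1}^{n} \frac{1}{k} \;=\; n\,H_n \;\approx\; n \ln n \;=\; O(n \log n)
\]

Under this reading, uniform random sampling needs Theta(n log n) draws in expectation before every one of the n sampling points is covered, whereas an adaptive scheme that tracks which points are already covered needs only O(n) draws, matching the trivial lower bound of visiting each point once.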
