Abstract

Fooling an image classifier based on deep neural networks is challenging under the black-box setting, where the target model can only be queried. Among existing black-box attacks, transfer-based methods tend to overfit to the substitute model's parameter settings, while decision-based methods suffer from low query efficiency due to fixed sampling and greedy search strategies. To alleviate these problems, we present a new framework for query-efficient black-box adversarial attack that bridges transfer-based and decision-based attacks. We reveal the relationship between the current noise and the sampling variance, the monotonicity of noise compression, and the influence of the state transition function on decision-based attacks. Guided by this framework, we propose a black-box adversarial attack named Customized Iteration and Sampling Attack (CISA). CISA estimates the distance to the nearby decision boundary to set the step size, and uses a dual-direction iterative trajectory to find an intermediate adversarial example. Starting from this intermediate adversarial example, CISA conducts customized sampling according to the noise sensitivity of each pixel to further compress the noise, and relaxes the state transition function to achieve higher query efficiency. Extensive experiments demonstrate CISA's advantage in the query efficiency of black-box adversarial attacks.
