Abstract

Deep neural networks are widely applied in computer vision and have achieved significant success in fundamental tasks such as image classification. However, their robustness faces severe challenges from adversarial attacks. In real-world scenarios, hard-label attacks often require tens of thousands of queries. To address this challenge, the Black-Box Boundary Attack leveraging Gradient Optimization (GOBA) has been introduced. This method employs a binary search strategy to obtain an initial adversarial example with large perturbation, then uses a Monte Carlo algorithm to estimate the gradient of the sample, enabling iterative movement along the estimated gradient toward the malicious label. Moreover, query vectors positively correlated with the gradient are extracted to construct a sampling space with an optimal scale, thereby improving the efficiency of the Monte Carlo estimation. Experimental evaluations were conducted against the HSJA, QEBA, and NLBA attack methods on the ImageNet, CelebA, and MNIST datasets, respectively. The results indicate that, under a budget of 3k queries, GOBA reduces perturbation (L2 distance) by 55.74% on average compared to the other methods, while increasing the attack success rate by an average of 13.78%.
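The Monte Carlo gradient estimation mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`estimate_gradient`, `is_adversarial`), the sampling scheme, and all parameters are illustrative assumptions for a generic hard-label setting, where only the predicted label of each query is observable:

```python
import numpy as np

def estimate_gradient(is_adversarial, x, n_queries=100, delta=0.1, rng=None):
    """Monte Carlo gradient estimate at a point x near the decision boundary.

    `is_adversarial(z) -> bool` is the hard-label oracle: it reveals only
    whether the model assigns z the attacker's target (malicious) label.
    Random unit perturbations are queried; directions that remain
    adversarial vote positively, the rest vote negatively.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_queries):
        # Draw a random direction on the unit sphere.
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        # +1 if stepping along u keeps the sample adversarial, else -1.
        sign = 1.0 if is_adversarial(x + delta * u) else -1.0
        grad += sign * u
    grad /= n_queries
    norm = np.linalg.norm(grad)
    return grad / norm if norm > 0 else grad
```

Each query costs one call to the target model, which is why the abstract's query-budget comparison (3k queries) is the relevant efficiency metric; GOBA's reuse of gradient-correlated query vectors to shape the sampling space aims to reduce the number of such oracle calls needed per estimate.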
