Deep neural networks (DNNs) are vulnerable to adversarial examples. Although existing momentum-based adversarial example generation methods can achieve white-box attack success rates close to 100%, they remain far less effective against other models, and their black-box attack success rates are low. To address this, an adversarial example attack method based on loss smoothing is proposed to improve the transferability of adversarial examples. At each iteration, instead of using the current gradient directly, the locally averaged gradient is used to accumulate momentum. This suppresses local oscillations on the loss surface, stabilizes the update direction, and helps the attack escape local extreme points. Extensive experiments on the ImageNet dataset show that, compared with existing momentum-based methods, the proposed method improves the average black-box attack success rate by 38.07% and 27.77% in single-model attacks, and by 32.50% and 28.63% in ensemble-model attacks.
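The abstract only describes the idea at a high level, so the following is a minimal PyTorch sketch of one way such a loss-smoothed momentum attack could look, assuming an MI-FGSM-style update in which the raw gradient is replaced by an average of gradients taken at uniformly sampled points in a small neighborhood of the current iterate. All names, hyperparameters (`n_samples`, `radius`, `mu`), and the choice of uniform neighborhood sampling are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def loss_smoothed_momentum_attack(model, loss_fn, x, y, eps=16/255, steps=10,
                                  mu=1.0, n_samples=20, radius=0.05):
    """Hypothetical sketch: momentum attack that accumulates a locally
    averaged gradient instead of the raw gradient at each step."""
    alpha = eps / steps                      # per-step size
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # momentum accumulator

    for _ in range(steps):
        grad_sum = torch.zeros_like(x)
        # Average gradients over a small neighborhood of the current iterate
        # to smooth local oscillations of the loss surface (assumed sampling scheme).
        for _ in range(n_samples):
            noise = torch.empty_like(x).uniform_(-radius, radius)
            x_nb = (x_adv + noise).detach().requires_grad_(True)
            loss = loss_fn(model(x_nb), y)
            grad_sum += torch.autograd.grad(loss, x_nb)[0]
        avg_grad = grad_sum / n_samples

        # MI-FGSM-style momentum accumulation with an L1-normalized gradient.
        g = mu * g + avg_grad / avg_grad.abs().mean()
        x_adv = x_adv + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()

    return x_adv
```

The averaging over neighboring points is what distinguishes this sketch from plain momentum iterative attacks: the accumulated direction reflects the local shape of the loss surface rather than a single, possibly noisy, gradient evaluation.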