Abstract

Mainstream transferable adversarial attacks tend to introduce noticeable artifacts into the generated adversarial examples, which impairs the invisibility of the adversarial perturbation and makes these attacks less practical in real-world scenarios. To address this problem, we propose a novel black-box adversarial attack method that significantly improves the invisibility of adversarial examples. We analyze the sensitivity of a deep neural network in the frequency domain and take the characteristics of the human visual system into account to quantify the contribution of each frequency component to the adversarial perturbation. We then apply K-means clustering to collect a set of candidate frequency components to which the human visual system is insensitive, and propose a joint loss function that constrains the frequency distribution of the perturbation during adversarial example generation. Experimental results show that the proposed method significantly outperforms existing transferable black-box adversarial attack methods in terms of invisibility, verifying the superiority, applicability, and potential of this work.
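The sketch below illustrates the two ideas summarized above under stated assumptions; it is not the authors' exact formulation. It uses K-means to select a set of frequency components with low human-visual-system sensitivity, then adds a penalty on perturbation energy outside that set to the attack loss. The FFT-based decomposition (the paper may use a DCT or another transform), the precomputed `sensitivity` and `hvs_weight` maps, the surrogate `model`, and the weight `lam` are all illustrative assumptions.

```python
# Minimal sketch of the frequency-constrained attack idea (assumptions noted above).
import torch
import torch.nn.functional as F
import numpy as np
from sklearn.cluster import KMeans

def build_frequency_mask(sensitivity, hvs_weight, n_clusters=3):
    """Cluster frequency components by (model sensitivity, HVS weight) and
    keep the cluster the human visual system is least sensitive to."""
    H, W = sensitivity.shape
    feats = np.stack([sensitivity.ravel(), hvs_weight.ravel()], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    # Choose the cluster with the lowest mean HVS weight (least visible).
    best = min(range(n_clusters),
               key=lambda c: hvs_weight.ravel()[labels == c].mean())
    return torch.from_numpy((labels == best).reshape(H, W).astype(np.float32))

def joint_loss(model, x, delta, y, mask, lam=10.0):
    """Untargeted adversarial loss plus a penalty on perturbation energy
    that falls outside the candidate (HVS-insensitive) frequency set."""
    adv = -F.cross_entropy(model(x + delta), y)        # push prediction away from y
    spec = torch.fft.fft2(delta, norm="ortho")          # per-channel spectrum of the perturbation
    leak = ((1.0 - mask) * spec.abs() ** 2).mean()      # energy leaking outside the allowed set
    return adv + lam * leak
```

In an iterative attack, `joint_loss` would be minimized with respect to `delta` (e.g., by signed gradient steps with a norm constraint), so the perturbation is driven toward frequencies the human visual system tolerates while still fooling the surrogate model.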
