Abstract
With the advance of Deep Neural Networks (DNNs), the accuracy of various machine learning tasks has dramatically improved, and image classification is one of the most typical such tasks. However, various papers have pointed out the vulnerability of DNNs: it is known that small changes to an image can easily make a DNN model misclassify it. Images with such small changes are called adversarial examples. This vulnerability of DNNs is a major problem in practical image recognition. There has been research both on methods to generate adversarial examples and on methods to defend DNN models against being fooled by them. In addition, the transferability of adversarial examples makes it easy to attack a model even in a black-box setting. Many existing attack methods add perturbations to images in the spatial domain. In contrast, we focus on the spatial frequency domain and propose a new attack method. Since the low-frequency component is responsible for the overall tendency of the color distribution in an image, changes to it are easy to see. On the other hand, the high-frequency component of an image holds less information than the low-frequency component, so even if it is changed, the change is less apparent in the appearance of the image. Therefore, an attack on the high-frequency component is difficult to perceive at a glance, which makes such an attack easier. Thus, by adding perturbations to the high-frequency components of an image, we can expect to generate adversarial examples that appear similar to the original image to the human eye. R. Duan et al. applied the discrete cosine transform (DCT) to images when focusing on the spatial frequency domain; their method uses quantization, which drops information that DNN models would otherwise have extracted. However, this method has the disadvantage that block-like noise appears in the resulting image, because the target image is divided into 8 × 8 blocks to apply the DCT. To avoid this disadvantage, we propose a method that applies the wavelet transform to the target image. Reducing the information in the high-frequency component perturbs the image in a way that is not noticeable, resulting in a smaller change to the image than in previous studies. In our experiments, the peak signal-to-noise ratio (PSNR) was used to quantify how much the image was degraded relative to the original. We compared the results of our method, using different learning rates to generate the perturbations, with the previous study, and found that the maximum PSNR of our method was about 43, compared to about 32 in the previous study. Unlike previous studies, the attack success rate was also improved without using quantization: our method improved the attack success rate by about 9% compared to the previous work.
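To make the idea concrete, the following is a minimal sketch in Python (using the PyWavelets library) of perturbing only the high-frequency sub-bands of a single-level 2-D wavelet transform and then reconstructing the image. The wavelet choice ("haar"), the function name, the parameter eps, and the use of random noise are illustrative assumptions, not the paper's algorithm; in an actual attack, the perturbation on the detail coefficients would be optimized against the target model (e.g., by gradient steps with a learning rate) rather than drawn at random.

import numpy as np
import pywt

def perturb_high_frequency(image, eps=0.02, wavelet="haar"):
    # image: 2-D float array scaled to [0, 1].
    # Single-level 2-D wavelet decomposition:
    #   cA is the low-frequency (approximation) band;
    #   cH, cV, cD are the high-frequency (detail) bands.
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    rng = np.random.default_rng(0)
    # Perturb only the detail bands; the approximation band is left untouched,
    # so the overall color/intensity distribution of the image is preserved.
    cH = cH + eps * rng.standard_normal(cH.shape)
    cV = cV + eps * rng.standard_normal(cV.shape)
    cD = cD + eps * rng.standard_normal(cD.shape)
    # Inverse transform back to the spatial domain.
    perturbed = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    return np.clip(perturbed, 0.0, 1.0)

Because the perturbation never touches the approximation band, the reconstructed image keeps its coarse structure, which is the intuition behind attacking only high-frequency components.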
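The PSNR used to quantify image degradation is PSNR = 10 log10(MAX^2 / MSE), where MAX is the maximum possible pixel value and MSE is the mean squared error between the original and perturbed images; higher values (in dB) mean the perturbed image is closer to the original. A minimal sketch, assuming floating-point images in [0, 1]:

import numpy as np

def psnr(original, perturbed, max_val=1.0):
    # Mean squared error between the two images.
    mse = np.mean((np.asarray(original, dtype=np.float64)
                   - np.asarray(perturbed, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)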