Abstract

At present, deep neural networks are widely used in many research fields. However, with deeper research into artificial intelligence, it has been found that artificial intelligence technology based on deep neural networks brings convenience along with potential security risks. For example, an attacker may mislead an image classification model into outputting a wrong result with high confidence by adding slight perturbations to a clean image via the adversarial example method. In contrast to previous methods that generate adversarial examples by adding extra information to images, Ranjie Duan et al. proposed the AdvDrop algorithm, which generates adversarial examples by deleting existing information from the images; this is realized by adjusting the quantization step. However, during quantization, the AdvDrop algorithm does not consider that different gradient values of the quantization table have different effects on the adversarial result. To address this, AdvDrop+ is proposed: in each iteration, the quantization tables are updated according to the gradient values scaled by a factor. To find a proper scaling factor, we take the gradient value with the highest frequency in the gradient histogram and compute its logarithm; the result is the scaling factor. Experiments show that AdvDrop+ achieves better attack performance than AdvDrop in the targeted attack setting with nearly the same image distortion. At the same time, AdvDrop+ retains AdvDrop's characteristic of being able to drop information.
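To make the described update concrete, below is a minimal Python/NumPy sketch of the scaling-factor computation and quantization-table step as summarized in the abstract. All names (estimate_scaling_factor, update_qtable, grad_qtable, q_table), the histogram bin count, the learning rate, and the clipping of the table to positive integer steps are illustrative assumptions, not the authors' implementation; the abstract does not specify how the sign or range of the logarithm is handled.

```python
import numpy as np

def estimate_scaling_factor(grad_qtable, num_bins=100):
    """Scaling factor = logarithm of the most frequent gradient magnitude
    in a histogram of the quantization-table gradients (per the abstract)."""
    mags = np.abs(grad_qtable).ravel()
    counts, edges = np.histogram(mags, bins=num_bins)
    mode_bin = np.argmax(counts)
    mode_value = 0.5 * (edges[mode_bin] + edges[mode_bin + 1])
    # Guard against log(0); how a negative logarithm is treated is an assumption here.
    return np.log(max(mode_value, 1e-12))

def update_qtable(q_table, grad_qtable, lr=1.0):
    """One AdvDrop+-style iteration: instead of using only the gradient sign,
    scale the raw gradient by the estimated factor so that larger gradient
    entries move their quantization steps proportionally more."""
    s = estimate_scaling_factor(grad_qtable)
    q_table = q_table + lr * s * grad_qtable
    # Keep quantization steps as positive integers, as in JPEG-style tables (assumption).
    return np.clip(np.rint(q_table), 1, None)

# Usage with stand-in values for an 8x8 DCT quantization table:
q_table = np.full((8, 8), 16.0)
grad_qtable = np.random.randn(8, 8)
q_table = update_qtable(q_table, grad_qtable)
```

In this sketch, the per-entry gradient magnitudes directly influence how far each quantization step moves, which is the difference from the sign-only style of update that the abstract attributes to the original AdvDrop.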

