Abstract

The development of modern artificial intelligence has produced many text, speech, and image recognition systems. During their development and deployment, it has been found that such systems can be induced to misclassify when a small perturbation is added to the objects they recognize. Objects carrying such small perturbations are called adversarial examples, and the process of generating them is called an adversarial attack. Existing adversarial attack algorithms for images, such as FGSM, I-FGSM, and MI-FGSM, apply a global perturbation across the entire image. Although they achieve strong attack performance, in real-world settings adversarial examples often need to defeat an AI system using a smaller, more localized perturbation. We therefore study the effect of small-scale perturbations on AI systems. This paper proposes LC-MIFGSM, an adversarial attack algorithm based on saliency detection, which preserves the attack's effectiveness while compressing the perturbed region, improving the attack's concealment and making the generated adversarial examples better suited to real-world scenarios.
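The abstract names the FGSM family but does not spell out the update rule. For context, below is a minimal PyTorch sketch of the MI-FGSM iteration (the momentum method of Dong et al., 2018), with an optional binary mask argument to illustrate how a saliency map could confine the perturbation to a local region. The `mi_fgsm` function, the `mask` argument, the toy classifier, and all parameter values are illustrative assumptions for this sketch, not the paper's LC-MIFGSM implementation.

```python
import torch
import torch.nn as nn

def mi_fgsm(model, x, y, epsilon, steps=10, mu=1.0, mask=None):
    """Sketch of the MI-FGSM iteration.

    mask is a hypothetical {0, 1} saliency map; multiplying each update by it
    restricts the perturbation to salient regions, in the spirit of (but not
    necessarily identical to) the saliency-based localization the paper proposes.
    """
    alpha = epsilon / steps                  # per-step perturbation budget
    g = torch.zeros_like(x)                  # accumulated momentum
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient by its per-sample L1 norm, then accumulate momentum.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        step = alpha * g.sign()
        if mask is not None:
            step = step * mask               # localize the update to the mask
        x_adv = x_adv.detach() + step
        # Project back into the epsilon-ball around x, then into valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv.detach()

# Hypothetical usage with a toy classifier and a random stand-in for a saliency map.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(2, 3, 32, 32), torch.tensor([1, 7])
mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
x_adv = mi_fgsm(model, x, y, epsilon=8 / 255, mask=mask)
```

Setting `steps=1`, `mu=0.0`, and `mask=None` reduces this update to plain FGSM, and `mu=0.0` alone to I-FGSM, which is why the three attacks are usually presented as one family.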
