Abstract
In recent years, deep neural networks have advanced rapidly in computer vision, natural language processing, and other fields. However, the existence of adversarial examples poses risks to these tasks and remains a major obstacle to deploying deep learning applications in the real world. To address this problem and improve the robustness of neural networks, a novel defense network based on generative adversarial networks (GANs) and saliency information is proposed. First, the generator is applied to both adversarial and clean samples to eliminate perturbations, while a loss function simultaneously minimizes the distance between the two resulting distributions. Then, a salient feature extraction model extracts saliency maps of both clean and adversarial examples, and reducing the difference between these maps further improves the denoising effect of the generator. The proposed method guides the generation network to accurately remove imperceptible perturbations and restore adversarial examples to clean samples, which not only raises classification accuracy but also achieves the intended defense effect. Extensive experiments compare the defensive performance of the proposed method with other defense methods against various attacks. The results show that our method provides strong defenses against these attack methods.

Keywords: Adversarial example · Defense · Deep neural networks · Generative adversarial networks · Multi-scale discriminator
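As a rough illustration of the objective described above, the following sketch combines a reconstruction term (pulling the purified sample toward the clean sample) with a saliency-consistency term (aligning the saliency maps of the purified and clean samples). This is a minimal sketch only: the generator G, the saliency extractor S, the L1 distance, and the weighting coefficients lambda_rec and lambda_sal are assumptions for illustration and are not specified in the abstract; the paper's actual architectures, losses, and multi-scale discriminator term are not reproduced here.

    # Hypothetical PyTorch-style sketch of the combined purification loss.
    # G, S, and the lambda weights are illustrative stand-ins, not the
    # paper's actual components.
    import torch
    import torch.nn.functional as F

    def defense_loss(G, S, x_adv, x_clean, lambda_rec=1.0, lambda_sal=0.5):
        """Reconstruction term plus saliency-consistency term."""
        x_purified = G(x_adv)                            # generator removes the perturbation
        rec_loss = F.l1_loss(x_purified, x_clean)        # pull purified samples toward clean ones
        sal_loss = F.l1_loss(S(x_purified), S(x_clean))  # reduce the gap between saliency maps
        return lambda_rec * rec_loss + lambda_sal * sal_loss

In practice this term would be optimized jointly with a GAN objective from the discriminator, so that the generator both matches the clean distribution and preserves salient features.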