Abstract

In recent years, deep neural networks have achieved great success in various fields, especially in computer vision. However, recent investigations have shown that current state-of-the-art classification models are highly vulnerable to adversarial perturbations contained in the input examples. We therefore propose a defense methodology against such adversarial perturbations. Before an input reaches the targeted network, adversarial perturbations are erased or mitigated by a deep residual generative network (RGN). With VGG-19 as an auxiliary network, the RGN is trained by optimizing a joint loss comprising a low-level pixel loss, a middle-level texture loss, and a high-level task loss, so that the restored examples are highly consistent with the original legitimate examples. We call the proposed RGN-based defense RGN-Defense. It is an independent defense module that can be flexibly combined with other defense strategies, such as adversarial training, to construct a more powerful defense system. In our experiments, we evaluate the approach on ImageNet, and comprehensive results demonstrate the robustness of RGN-Defense against current representative attacks.
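As a rough illustration of the joint loss described above, the sketch below combines a pixel-level MSE term, a VGG-19 feature-based texture term, and a classification task term in PyTorch. The module names (`rgn`, `classifier`), the chosen VGG-19 layer, and the loss weights are assumptions for illustration only and are not the authors' exact configuration; in particular, the texture term is realized here as feature matching, whereas Gram-matrix style losses are a common alternative.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class JointLoss(nn.Module):
    """Hypothetical joint loss: pixel + texture (VGG-19 features) + task terms."""

    def __init__(self, classifier, feature_layer=35,
                 w_pixel=1.0, w_texture=1.0, w_task=1.0):
        super().__init__()
        # Frozen VGG-19 features serve as the auxiliary network that
        # supplies the middle-level texture loss.
        vgg = vgg19(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.classifier = classifier  # frozen target classifier for the task loss
        self.mse = nn.MSELoss()
        self.ce = nn.CrossEntropyLoss()
        self.w_pixel, self.w_texture, self.w_task = w_pixel, w_texture, w_task

    def forward(self, restored, clean, labels):
        # Low-level pixel loss: restored image should match the clean image.
        pixel = self.mse(restored, clean)
        # Middle-level texture loss: match VGG-19 feature maps of both images.
        texture = self.mse(self.vgg(restored), self.vgg(clean))
        # High-level task loss: classifier should still predict the true label.
        task = self.ce(self.classifier(restored), labels)
        return self.w_pixel * pixel + self.w_texture * texture + self.w_task * task

# Usage sketch (names are placeholders):
#   restored = rgn(adversarial_images)
#   loss = JointLoss(classifier)(restored, clean_images, labels)
#   loss.backward()  # updates only the RGN parameters
```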


