Abstract

Deep Neural Networks (DNNs) have been proven vulnerable to adversarial perturbations, which limits their application in safety-critical scenarios such as video surveillance and autonomous driving. To counter this threat, a recent line of adversarial defense methods increases the uncertainty of DNNs by injecting random noises during both training and testing. However, existing defense methods usually inject noises into DNNs uniformly. We argue that the magnitude of the noises should be highly correlated with the response of the corresponding features, and that randomness at important feature spots can further weaken adversarial attacks. As such, we propose a new method, namely AdaNI, which increases feature randomness via Adaptive Noise Injection to improve adversarial robustness. Compared to existing methods, our method creates non-uniform random noises guided by features and injects them into DNNs adaptively. Extensive experiments are conducted on several datasets (e.g., CIFAR10, CIFAR100, Mini-ImageNet) with comparisons to state-of-the-art defense methods, corroborating the efficacy of our method against a variety of powerful white-box attacks (e.g., FGSM, PGD, C&W, AutoAttack) and black-box attacks (e.g., transfer-based, ZOO, Square Attack). Moreover, our method is adapted to improve the robustness of DeepFake detection, demonstrating its broader applicability.
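
Based only on the abstract's description, below is a minimal PyTorch sketch of what feature-guided noise injection could look like, assuming the noise scale is made proportional to the magnitude of each feature response and that noise is kept active at both training and test time. The class name `FeatureGuidedNoise`, the strength factor `alpha`, and the proportional scaling rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FeatureGuidedNoise(nn.Module):
    """Illustrative sketch: inject Gaussian noise whose per-element
    magnitude scales with the feature response (assumed rule), so that
    stronger activations receive proportionally larger perturbations."""

    def __init__(self, alpha: float = 0.1):
        super().__init__()
        self.alpha = alpha  # hypothetical global noise-strength factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-element noise scale derived from the feature response itself;
        # detach() keeps the scale out of the gradient computation.
        scale = self.alpha * x.abs().detach()
        noise = torch.randn_like(x) * scale
        # Noise is applied in both training and evaluation modes, matching
        # the randomized-defense setting described in the abstract.
        return x + noise

# Example usage: place the module between layers of a backbone network.
layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                      nn.ReLU(),
                      FeatureGuidedNoise(alpha=0.1))
out = layer(torch.randn(8, 3, 32, 32))
```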
