Abstract

Adversarial training is currently one of the most promising ways to achieve adversarial robustness in deep models. However, even the most sophisticated training methods are far from satisfactory, as further improvement in robustness requires either heuristic strategies or more annotated data, which can be problematic in real-world applications. To alleviate these issues, we propose an effective training scheme that avoids the prohibitively high cost of additional labeled data by adapting the self-training scheme to adversarial training. In particular, we first use the confident prediction for a randomly augmented image as the pseudo-label for self-training. We then enforce consistency regularization by pushing the adversarially perturbed version of the same image toward the pseudo-label, which implicitly suppresses distortion of the representation in latent space. Despite its simplicity, extensive experiments show that our regularization brings significant gains in the adversarial robustness of a wide range of adversarial training methods and helps the model generalize its robustness to larger perturbations and even to unseen adversaries.
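To make the two-step scheme concrete, here is a minimal PyTorch sketch of the regularizer as described above, assuming an image classifier `model` with inputs in [0, 1]. The helper `random_augment`, the confidence threshold, and the PGD attack settings (`eps`, `alpha`, `steps`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def random_augment(x):
    # Hypothetical stand-in for the paper's random augmentation:
    # random horizontal flip plus a small random shift.
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[3])
    padded = F.pad(x, (4, 4, 4, 4), mode='reflect')
    i, j = torch.randint(0, 9, (2,)).tolist()
    return padded[:, :, i:i + x.size(2), j:j + x.size(3)]

def consistency_loss(model, x, threshold=0.95, eps=8/255, alpha=2/255, steps=10):
    # Step 1: pseudo-label = confident prediction on a randomly augmented view.
    with torch.no_grad():
        probs = F.softmax(model(random_augment(x)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf.ge(threshold).float()  # keep only confident predictions

    # Step 2: craft an adversarial view of the *same* image with PGD.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), pseudo)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    # Step 3: consistency term -- the adversarial view must match the pseudo-label.
    ce = F.cross_entropy(model(x_adv), pseudo, reduction='none')
    return (mask * ce).mean()
```

In a full training loop, a term like this would presumably be added to the base adversarial training objective with a weighting coefficient; the masking restricts the regularizer to samples whose augmented-view prediction is confident enough to serve as a pseudo-label.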
