Abstract

Adversarial distillation (AD), which combines adversarial training with knowledge distillation, has become a powerful procedure for mitigating the effect of adversarial examples on deep neural networks; it aims to distill a robust student network from a pre-trained robust teacher network. In AD, the teacher's reliability is a crucial issue, and recent work incorporates a self-distillation loss on the student into the AD framework, encouraging the student to partially trust the teacher and gradually trust itself more. However, the single factor that controls this trust level is not adaptive enough across different training samples, so it can guide the student to trust itself inappropriately; this motivates us to pursue a more reliable form of supervision. In this paper, we revisit the performance variation of all training samples from teacher to student, showing that previous work is not always adaptive over the course of distillation and can be further refined. Accordingly, we propose a more effective and better-justified supervision, namely Curricular Adversarial Distillation (CAD), to help boost the self-distillation process. In CAD, the KL divergences between smoothed labels and both the clean and adversarial outputs of the teacher are computed as the supervision signal, addressing the teacher's unreliability in self-distillation. Moreover, the smoothed labels follow a curriculum-style schedule, being smoothed to different degrees at different training stages, which helps the teacher remain adaptive to the student's evolving adversarial examples. Extensive experiments demonstrate the superiority of our strategy in distilling a robust student network against various attacks.
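To make the described supervision concrete, the following is a minimal sketch of how the KL divergences between the teacher's clean/adversarial outputs and curriculum-smoothed labels could be computed. The abstract does not give the exact formulation, so the function names, the linear smoothing schedule, and the per-sample reliability interpretation are all assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def smoothed_labels(targets, num_classes, smoothing):
    """One-hot labels smoothed by `smoothing` (uniform mass on non-target classes)."""
    off_value = smoothing / (num_classes - 1)
    labels = torch.full((targets.size(0), num_classes), off_value, device=targets.device)
    labels.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    return labels

def curriculum_smoothing(epoch, total_epochs, max_smoothing=0.3):
    # Hypothetical curriculum-style schedule: the smoothing degree changes
    # linearly with the training stage (exact schedule is an assumption).
    return max_smoothing * epoch / total_epochs

def teacher_supervision_kl(teacher_clean_logits, teacher_adv_logits, targets,
                           num_classes, smoothing):
    """Per-sample KL divergences between the curriculum-smoothed labels and the
    teacher's clean and adversarial predictions, which could serve as a signal
    of the teacher's reliability on each sample."""
    labels = smoothed_labels(targets, num_classes, smoothing)
    kl_clean = F.kl_div(F.log_softmax(teacher_clean_logits, dim=1), labels,
                        reduction="none").sum(dim=1)
    kl_adv = F.kl_div(F.log_softmax(teacher_adv_logits, dim=1), labels,
                      reduction="none").sum(dim=1)
    return kl_clean, kl_adv
```

In this sketch, a smaller KL divergence would indicate that the teacher's output agrees more closely with the (smoothed) label for that sample, and could therefore be used to modulate how much the student trusts the teacher versus its own self-distillation target.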
