Knowledge distillation is an effective approach to transferring knowledge across models. Existing distillation methods for image classification primarily focus on transferring knowledge for recognizing natural images while ignoring the models' robustness to adversarial examples. To benchmark knowledge distillation methods on transferring adversarial robustness, we conduct an empirical study of eight popular distillation methods with adversarially robust teacher models, showing that student models can hardly inherit adversarial robustness from teacher models through existing methods. To alleviate this limitation, we propose a novel Guided Adversarial Contrastive Distillation (GACD) method that transfers adversarial robustness from the teacher to students through latent representations. Specifically, given a robust teacher model, student models are trained adversarially to extract representations that align with the teacher's. We adopt a re-weighting strategy during distillation so that student models learn selectively from the teacher. To the best of our knowledge, GACD is the first attempt to simultaneously transfer knowledge and adversarial robustness from teacher to student models through latent representations. Through extensive experiments evaluating student models on popular datasets such as CIFAR-10 and CIFAR-100, we demonstrate that GACD effectively transfers robustness across different models and achieves comparable or better results than existing methods. We also fine-tune the models on different tasks and observe encouraging results, demonstrating the transferability of the learned representations. Lastly, we visualize the latent representations of different student models for qualitative analysis.