Abstract

Knowledge distillation is a method for training a lightweight student network by transferring class-probability knowledge from a cumbersome teacher network. However, transferring only class-probability knowledge limits distillation performance, and several approaches have therefore been proposed to transfer the teacher's knowledge at the feature-map level. In this paper, we revisit feature distillation and find that the larger the teacher's architecture/capacity, the more difficult it is for the student to imitate its features; as a result, feature distillation cannot reach its full potential. To address this, a novel end-to-end distillation framework, termed Customizing a Teacher for Feature Distillation (CTFD), is proposed to train a teacher that is more compatible with its student. In addition, we apply the customized teacher to three feature distillation methods, and data augmentation is employed when training the student to further improve its generalization. Extensive empirical experiments and analyses are conducted on three computer vision tasks, including image classification, transfer learning, and object detection, to substantiate the effectiveness of the proposed method.
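To make the distinction drawn in the abstract concrete, below is a minimal sketch of the two kinds of transfer it contrasts: classic logit-level distillation (softened class probabilities) and a generic feature-map matching loss. This is a PyTorch-style illustration of the standard formulations only, not the CTFD method itself; the names `teacher_logits`, `student_feat`, `regressor`, `T`, and `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic knowledge distillation: match the teacher's softened class probabilities."""
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to account for the temperature
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

def feature_distillation_loss(student_feat, teacher_feat, regressor):
    """Feature-level distillation: match intermediate feature maps.

    `regressor` is a small learned projection (e.g. a 1x1 conv) that maps the
    student's feature dimensionality to the teacher's; the teacher is frozen.
    """
    return F.mse_loss(regressor(student_feat), teacher_feat.detach())
```

In a typical setup the total training loss is a weighted sum of the two terms above; the abstract's observation is that the feature term becomes harder for the student to minimize as the teacher grows, which motivates customizing the teacher rather than only the loss.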
