Abstract

Knowledge distillation has become a popular technique in modern deep learning: it transfers knowledge from a cumbersome neural network, commonly called the “teacher model”, to a much smaller network called the “student model”. In the traditional knowledge distillation process, the student model is trained against two objectives, a hard target and a soft target, and it can be difficult to find a good trade-off between them. We unify the two objectives into one, making knowledge distillation easier to perform, and propose a novel distillation method called “Unified Distillation” that supervises the student to make fewer mistakes. The model corrects wrong predictions according to the hard target while retaining the advantages of knowledge distillation. Although our method can be used in almost any field suited to knowledge distillation, we choose neural machine translation as the object of study because of its complexity. We conducted experiments on three neural machine translation tasks, using a fine-tuned BERT language model as the teacher and a Transformer base model as the student. The experimental results indicate that our method outperforms the traditional knowledge distillation method.
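
For context, the two-objective setup that the abstract contrasts with can be sketched as follows. This is a minimal PyTorch illustration of conventional knowledge distillation in the style of Hinton et al., not the paper’s Unified Distillation; the `alpha` weight, `temperature`, and the function name `traditional_kd_loss` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def traditional_kd_loss(student_logits, teacher_logits, hard_labels,
                        alpha=0.5, temperature=2.0):
    """Conventional two-objective KD loss: a weighted sum of the
    hard-target cross-entropy and the soft-target KL term.
    `alpha` and `temperature` are illustrative hyperparameters,
    not values taken from the paper."""
    # Hard target: cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)

    # Soft target: KL divergence between temperature-softened
    # student and teacher distributions, scaled by T^2 as in the
    # original KD formulation.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # The trade-off the abstract refers to: alpha balances the two
    # objectives, and a good value can be hard to find in practice.
    return alpha * hard_loss + (1 - alpha) * soft_loss
```

The paper’s contribution is to replace this weighted sum with a single unified objective, removing the need to tune the balance between the two terms.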
