Abstract

As neural networks grow deeper, their parameter counts and overall model complexity continue to increase. Such models are difficult to deploy on embedded devices, so lightweight deep neural networks have become particularly urgent. Knowledge distillation emerged to address this problem: through distillation, a large model can be compressed into a simpler one. This paper discusses knowledge distillation, a relatively flexible and efficient model compression method. It reviews the development of deep neural networks and of knowledge distillation technology, and then introduces the method and principle of knowledge distillation in detail. We conducted experiments on the CIFAR-10 dataset and obtained distillation results at different temperatures, demonstrating the effectiveness of knowledge distillation. Finally, we discuss the future development of this technique.
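
To illustrate the role of the distillation temperature mentioned above, the following minimal PyTorch sketch shows the standard temperature-scaled distillation loss commonly used in knowledge distillation (softened teacher/student softmax plus hard-label cross-entropy). The function name and the default values of T and alpha are illustrative assumptions, not details taken from the paper.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        # Soften teacher and student outputs with temperature T, then take the
        # KL divergence; the T*T factor keeps gradient magnitudes comparable
        # across temperatures (standard practice in distillation).
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Ordinary cross-entropy against the hard ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        # Weighted combination of the softened and hard losses.
        return alpha * soft + (1.0 - alpha) * hard

Raising T flattens the teacher's output distribution, exposing more of the inter-class similarity structure to the student, which is why results are typically compared across several temperatures.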
