Abstract

As neural networks have grown deeper, their parameter counts and overall model complexity have continued to increase. Such models are difficult to deploy on embedded devices, which makes lightweight deep neural networks particularly urgent. Knowledge distillation emerged to address this problem: it allows a large model to be compressed into a simpler one. This paper discusses knowledge distillation, a relatively flexible and efficient model compression method. It reviews the development of deep neural networks and of knowledge distillation technology, and then introduces the method and principle of knowledge distillation in detail. We conducted experiments on the CIFAR-10 dataset and obtained distillation results at different temperatures, demonstrating the effectiveness of knowledge distillation. Finally, we discuss prospects for the future development of this technique.
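
The distillation-at-different-temperatures experiments mentioned above follow the standard soft-target formulation of knowledge distillation. The sketch below (assuming a PyTorch setup; the temperature T, weight alpha, and function name are illustrative and not specified in the abstract) shows how a softened teacher distribution and the hard labels are typically combined into a single student training loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target knowledge distillation loss (sketch; T and alpha are illustrative)."""
    # Soft targets: KL divergence between the teacher and student distributions,
    # both softened by temperature T; scaled by T^2 so gradient magnitudes stay
    # comparable across different temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Weighted combination of the two terms.
    return alpha * soft + (1.0 - alpha) * hard
```

Raising T flattens both distributions, so the student learns more from the relative similarities the teacher assigns to wrong classes; lowering T makes the soft targets approach the hard labels.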
