Abstract

Object detection is one of the fundamental tasks of computer vision. Although deep neural networks have greatly advanced object detection, training and deploying them demands substantial computing resources, especially for more complex detection tasks: as detection tasks grow harder, models become more complex, and their demand for computing resources rises accordingly. This is undesirable for platforms with limited computing resources. Running such models efficiently on a resource-constrained platform without sacrificing too much detection accuracy has therefore become a major challenge. To address this problem, we introduce the idea of knowledge distillation in this work. The knowledge learned by a complex (teacher) model serves as prior knowledge that is transferred to a small-scale (student) neural network, so that the student can approximate the function of the large-scale network. Building on this idea, we propose a knowledge-distillation-based object detection framework and conduct experiments on PASCAL VOC [1] and KITTI [2]. The experimental results show that the proposed framework enables the smaller student network to achieve performance similar to that of the larger teacher network. This makes it feasible for deep learning systems to run efficiently on resource-constrained platforms.
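To make the teacher-to-student transfer concrete, the following is a minimal sketch of a standard soft-target distillation loss in the style of Hinton et al., assuming a PyTorch setting. It illustrates the general knowledge-distillation idea described above, not the exact loss used in this work; the function name `distillation_loss` and the parameters `T` (temperature) and `alpha` (mixing weight) are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Illustrative distillation objective: hard-label cross-entropy
    blended with KL divergence against the teacher's softened outputs."""
    # Soft targets: KL divergence between temperature-scaled distributions.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a detection setting, a loss of this form would typically be applied to the classification head of the student detector, with additional terms for bounding-box regression; the sketch shows only the classification component.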
