Abstract

Knowledge distillation is an effective method for model lightweighting. However, previous distillation methods for object detection rely on manually tuned hyper-parameters to handle the extreme imbalance between positive and negative examples, which limits their generalization, and they ignore the relational information between different detection instances. We therefore propose a new distillation algorithm for this task. The algorithm automatically selects the Top-k detection instances that most need distillation based on the network output, and combines attention-based feature distillation with instance-relation distillation based on Euclidean distance. Our results show that, across various object detection frameworks, the student model achieves significant performance gains with a lighter structure.
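
To make the three ingredients named above concrete, the following is a minimal sketch of what Top-k instance selection, attention-weighted feature distillation, and Euclidean-distance instance-relation distillation could look like in PyTorch. All tensor shapes, function names, and the way the attention weights are derived from the teacher features are illustrative assumptions, not the authors' implementation.

    # Sketch only: shapes, names, and the attention formulation are assumptions.
    import torch
    import torch.nn.functional as F

    def select_top_k(student_scores: torch.Tensor, k: int) -> torch.Tensor:
        # Pick the k detection instances with the highest classification
        # confidence from the network output (assumed shape [N, num_classes]).
        confidence, _ = student_scores.max(dim=1)
        return confidence.topk(min(k, confidence.numel())).indices

    def attention_feature_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        # Attention-based feature distillation: weight the per-instance feature
        # mismatch by an attention weight derived from the teacher features
        # (assumed shape [k, channels]).
        attn = F.softmax(f_t.abs().mean(dim=1), dim=0)
        return (attn * (f_s - f_t).pow(2).mean(dim=1)).sum()

    def relation_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        # Instance-relation distillation: match the pairwise Euclidean distance
        # matrices of the selected student and teacher instances.
        return F.mse_loss(torch.cdist(f_s, f_s), torch.cdist(f_t, f_t))

    # Toy usage with random tensors standing in for real detector outputs.
    N, num_classes, channels, k = 100, 80, 256, 16
    scores = torch.rand(N, num_classes)
    feat_student = torch.randn(N, channels)
    feat_teacher = torch.randn(N, channels)

    idx = select_top_k(scores, k)
    loss = attention_feature_loss(feat_student[idx], feat_teacher[idx]) \
         + relation_loss(feat_student[idx], feat_teacher[idx])

In this reading, the Top-k selection removes the need for hand-tuned positive/negative sampling thresholds, while the relation term transfers how the teacher arranges instances relative to one another rather than only matching features instance by instance.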
