Abstract

Low-resolution images are ubiquitous in real-world applications such as surveillance and mobile photography. However, existing fine-grained approaches usually suffer catastrophic failures on low-resolution inputs because their learning strategies inherently depend on the semantic structure of the pre-trained model, resulting in poor robustness and generalization. To mitigate this limitation, we propose a dynamic semantic structure distillation learning framework. Our method first facilitates the distillation of diverse semantic structures by perturbing the composition of semantic components, and then uses a decoupled distillation objective to prevent the loss of knowledge about the relations among primary semantic parts. We evaluate the proposed approach in two knowledge distillation settings: high-resolution-to-low-resolution and large-model-to-small-model. The experimental results show that our approach significantly outperforms existing methods on low-resolution fine-grained image classification, indicating that it effectively distills knowledge from high-resolution teacher models to low-resolution student models. We further demonstrate its effectiveness on general image classification and standard knowledge distillation tasks.
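
To make the high-to-low resolution setting concrete, the sketch below pairs a frozen high-resolution teacher with a student that only sees downsampled inputs, combining cross-entropy with a decoupled distillation term that separates target-class from non-target-class knowledge. This is a minimal PyTorch illustration, not the authors' implementation: the model choices (ResNet-50 teacher, ResNet-18 student), the resolutions (224 to 56), and the hyperparameters (alpha, beta, T) are assumptions, and the paper's semantic-component perturbation is not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, resnet50


def decoupled_kd_loss(logits_s, logits_t, target, alpha=1.0, beta=8.0, T=4.0):
    """Generic decoupled KD: split the soft-label loss into a target-class
    term and a non-target-class term (hyperparameters are illustrative)."""
    num_classes = logits_s.size(1)
    gt = F.one_hot(target, num_classes).float()  # (B, C) one-hot target mask

    p_s = F.softmax(logits_s / T, dim=1)
    p_t = F.softmax(logits_t / T, dim=1)

    # Target-class KD: compare the binary (target vs. rest) distributions.
    b_s = torch.stack([(p_s * gt).sum(1), (p_s * (1 - gt)).sum(1)], dim=1)
    b_t = torch.stack([(p_t * gt).sum(1), (p_t * (1 - gt)).sum(1)], dim=1)
    tckd = F.kl_div(b_s.clamp_min(1e-8).log(), b_t, reduction="batchmean") * T * T

    # Non-target-class KD: renormalize over non-target classes only, by
    # pushing the target logit to a large negative value before the softmax.
    log_ps_nt = F.log_softmax(logits_s / T - 1000.0 * gt, dim=1)
    pt_nt = F.softmax(logits_t / T - 1000.0 * gt, dim=1)
    nckd = F.kl_div(log_ps_nt, pt_nt, reduction="batchmean") * T * T

    return alpha * tckd + beta * nckd


# High-resolution teacher (frozen) distills into a low-resolution student.
teacher = resnet50(num_classes=200).eval()  # e.g. fine-grained, CUB-style head
student = resnet18(num_classes=200)

hi_res = torch.randn(8, 3, 224, 224)        # high-resolution batch (dummy data)
labels = torch.randint(0, 200, (8,))
lo_res = F.interpolate(hi_res, size=(56, 56), mode="bilinear", align_corners=False)

with torch.no_grad():
    logits_t = teacher(hi_res)              # teacher sees high resolution
logits_s = student(lo_res)                  # student sees low resolution

loss = F.cross_entropy(logits_s, labels) + decoupled_kd_loss(logits_s, logits_t, labels)
loss.backward()
```

Decoupling the two terms lets the non-target-class weight (beta) be tuned independently, which is the usual motivation for this family of objectives; how the paper balances the two terms is not specified in the abstract.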
