Abstract
As multimedia data grows exponentially, searching for and retrieving relevant images is becoming a challenge for researchers. Hashing is widely adopted because deep neural networks with multiple convolutional layers achieve high performance in image retrieval. Even so, most hashing methods ignore computational cost and memory consumption: a large deep hashing model responds more slowly than a small one. To address these issues, this paper proposes a novel optimized deep supervised hashing method based on a teacher-student approach for fast and accurate image retrieval. In this work, a small student model is trained using knowledge distillation from a large teacher model together with information from the one-hot labels, and a weight-allocation loss function combining the teacher's and student's outputs is defined. Meanwhile, model pruning is applied to shrink the student model further and improve response time, so knowledge distillation is performed on the pruned model. After that, the remaining weights are quantized to reduce the model size even further. Extensive experimental results on two widely used datasets demonstrate the efficiency of the proposed method.
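To make the weight-allocation idea concrete, the sketch below shows one common way to combine a teacher's soft targets with the hard one-hot labels in a distillation loss. The blending weight `alpha` and temperature `T` are hypothetical hyperparameters for illustration; the paper's exact loss formulation may differ.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=0.5, T=4.0):
    """Weighted sum of a soft (teacher) term and a hard (one-hot) term.

    alpha and T are illustrative; the soft term is scaled by T^2, a
    standard choice so its gradient magnitude matches the hard term.
    """
    # Soft term: cross-entropy between softened teacher and student outputs.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft = -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=1))
    soft *= T * T

    # Hard term: ordinary cross-entropy against the one-hot labels.
    p = softmax(student_logits)
    hard = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

    return alpha * soft + (1 - alpha) * hard
```

With `alpha=0` the loss reduces to plain cross-entropy on the labels; with `alpha=1` the student learns only from the teacher's softened predictions.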