Abstract
Approximate Nearest Neighbor (ANN) search is an important research topic in the multimedia and computer vision fields. In this article, we propose a new deep supervised quantization method based on the Self-Organizing Map (SOM) to address this problem. Our method integrates Convolutional Neural Networks and the Self-Organizing Map into a unified deep architecture. The overall training objective optimizes a supervised quantization loss together with a classification loss. With the supervised quantization objective, we minimize the differences on the maps between similar image pairs and maximize the differences on the maps between dissimilar image pairs. Through this optimization, the deep architecture simultaneously extracts deep features and quantizes them onto suitable nodes of the self-organizing map. To make the proposed deep supervised quantization method scalable to large datasets, instead of constructing one large self-organizing map, we propose to divide the input space into several subspaces and construct a self-organizing map in each subspace. The self-organizing maps over all subspaces implicitly form a large self-organizing map, which costs less memory and training time than directly constructing a single map of equal size. Experiments on several public benchmark datasets demonstrate the superiority of our approach over existing ANN search methods. In addition, as a by-product, our deep architecture can be applied directly to visualization with little modification, and promising performance is demonstrated in the experiments.
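To make the subspace decomposition concrete, below is a minimal sketch of how per-subspace SOM quantization can reduce memory: M small maps of K nodes each implicitly define K^M composite codewords while storing only M small node tables. This is an illustrative NumPy sketch under assumed settings (the class name SubspaceSOM, map size, and nearest-node assignment rule are not taken from the paper and omit the deep feature extraction and supervised losses).

```python
# Illustrative sketch of subspace SOM quantization (assumed design, not the paper's exact method).
import numpy as np

class SubspaceSOM:
    """Quantize feature vectors by splitting them into M subspaces and
    assigning each sub-vector to its nearest node in a small per-subspace map."""

    def __init__(self, dim, num_subspaces, nodes_per_map, seed=0):
        assert dim % num_subspaces == 0
        self.M = num_subspaces
        self.sub_dim = dim // num_subspaces
        rng = np.random.default_rng(seed)
        # One (K, sub_dim) table of node weights per subspace.
        self.maps = [rng.standard_normal((nodes_per_map, self.sub_dim))
                     for _ in range(self.M)]

    def encode(self, x):
        """Return, for each sample, the index of the winning node in each subspace."""
        x = np.atleast_2d(x)
        codes = np.empty((x.shape[0], self.M), dtype=np.int64)
        for m, nodes in enumerate(self.maps):
            sub = x[:, m * self.sub_dim:(m + 1) * self.sub_dim]
            # Squared Euclidean distance to every node; winner = argmin.
            d = ((sub[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
            codes[:, m] = d.argmin(axis=1)
        return codes

    def decode(self, codes):
        """Reconstruct an approximate feature by concatenating winning node weights."""
        return np.concatenate(
            [self.maps[m][codes[:, m]] for m in range(self.M)], axis=1)

# Example: 128-D features, 4 subspaces of 32-D, 256 nodes per map
# -> 256**4 implicit codewords from only 4 * 256 * 32 stored weights.
if __name__ == "__main__":
    som = SubspaceSOM(dim=128, num_subspaces=4, nodes_per_map=256)
    feats = np.random.randn(10, 128).astype(np.float32)
    codes = som.encode(feats)   # shape (10, 4): one node index per subspace
    approx = som.decode(codes)  # shape (10, 128): approximate reconstruction
    print(codes.shape, approx.shape)
```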