Abstract
Deep hashing is widely used in large-scale image retrieval to speed up the retrieval process. Current deep hashing methods are mainly based on Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs): they use only local or only global features for low-dimensional mapping, and they rely solely on a similarity loss to optimize the correlation between pairwise or triplet images, which limits their effectiveness. In this paper, we propose a dual-stream correlation-enhanced deep hashing framework (DSCEH) that uses both local and global image features for low-dimensional mapping and optimizes image correlation at the level of the model architecture. DSCEH consists of two main steps: model training and deep-hash-based retrieval. In the training stage, a dual-network structure comprising a CNN and a ViT is employed for feature extraction. The two feature streams are fused by concatenation, and a similarity evaluation based on the class token obtained from the ViT establishes edge relationships among images. A Graph Convolutional Network then strengthens the correlation optimization between images, yielding high-quality hash codes and an optimized hash model for retrieval. In the retrieval stage, all database images and the query images are first mapped to hash codes with this model, and the retrieval results are determined by the Hamming distance between the codes. We conduct experiments on three datasets: CIFAR-10, MSCOCO, and NUSWIDE. The results show the superior performance of DSCEH, which enables fast and accurate image retrieval.
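To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the workflow described above: dual-stream feature extraction, concatenation fusion, a similarity graph built from the global (class-token-like) features, one graph-convolution step producing relaxed hash codes, and Hamming-distance ranking at retrieval time. The encoder stand-ins, feature dimensions, and the 0.5 similarity threshold for edge creation are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the DSCEH pipeline (assumptions noted in comments).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamHashModel(nn.Module):
    def __init__(self, local_dim=512, global_dim=256, hash_bits=64):
        super().__init__()
        # Stand-in for the CNN stream (local features); a real model would use e.g. a ResNet backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, local_dim))
        # Stand-in for the ViT stream (global features / class token).
        self.vit = nn.Sequential(
            nn.AdaptiveAvgPool2d(16), nn.Flatten(), nn.Linear(3 * 16 * 16, global_dim))
        # Graph-convolution layer applied to the fused features over the similarity graph.
        self.gcn = nn.Linear(local_dim + global_dim, hash_bits)

    def forward(self, images):
        local_feat = self.cnn(images)    # CNN stream: local features
        global_feat = self.vit(images)   # ViT stream: global features
        fused = torch.cat([local_feat, global_feat], dim=1)  # concatenation fusion
        # Build edges from pairwise similarity of the global features (0.5 threshold is assumed).
        sim = F.cosine_similarity(global_feat.unsqueeze(1), global_feat.unsqueeze(0), dim=-1)
        adj = (sim > 0.5).float()
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalize adjacency
        # One graph-convolution step; tanh gives relaxed (continuous) hash codes.
        return torch.tanh(self.gcn(adj @ fused))

def hamming_rank(query_code, db_codes):
    """Rank database entries by Hamming distance to a single query's hash code."""
    q = torch.sign(query_code).reshape(1, -1)   # (1, bits) binarized query code
    db = torch.sign(db_codes)                   # (N, bits) binarized database codes
    dist = (q.shape[1] - q @ db.t()) / 2        # Hamming distance from +/-1 codes
    return torch.argsort(dist, dim=1).squeeze(0)
```

In this sketch the binarization (sign) happens only at retrieval time, while training would optimize the relaxed codes with a similarity-aware loss; the actual DSCEH training objective is described in the paper itself.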