Abstract

To address the problems that supervised information is not used effectively and that correlations between image features are lacking in image retrieval, this paper proposes a joint learning method based on the Transformer (JLMT). The method combines a Transformer with an asymmetric learning strategy to extract image features and model the correlations between them, and it further incorporates a classification loss to make full use of the supervised information. For training images, the Transformer generates hash codes, and a classification loss together with a semantic similarity loss is used to learn the hash function, so that the generated hash codes approach the true hash codes. For database (retrieval) images, the asymmetric learning strategy learns their hash codes directly from the hash codes of the training images. Finally, query images are mapped to hash codes by the learned hash function, and similar images are retrieved from the database according to Hamming distance. In addition, a new loss function for classification on multi-label datasets is proposed. Experimental results show that JLMT achieves state-of-the-art performance on the public CIFAR-10, NUS-WIDE and MS-COCO datasets.
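The final retrieval step described in the abstract, ranking database images by the Hamming distance between their hash codes and the query's hash code, can be sketched as below. This is a minimal illustration, not the paper's implementation; the function name, bit convention (0/1 bits), and toy codes are assumptions.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query hash code.

    query_code: (K,) array of 0/1 bits produced by the hash function
    db_codes:   (N, K) array of database hash codes in the same convention
    Returns (order, dists): indices sorted by ascending distance, and the distances.
    """
    # Hamming distance = number of bit positions where the codes differ
    dists = np.sum(query_code[None, :] != db_codes, axis=1)
    order = np.argsort(dists, kind="stable")
    return order, dists

# Toy example with 4-bit codes (illustrative only)
db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 1],
               [0, 0, 0, 0]])
q = np.array([0, 1, 1, 1])
order, dists = hamming_rank(q, db)
print(order, dists)  # closest database codes come first
```

In practice the codes would be the binary outputs of the trained Transformer hash function, and the XOR/popcount form of this distance makes the ranking very cheap even for large databases.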
