Abstract
Cross-modal hashing has been widely used in multimedia retrieval tasks due to its fast retrieval speed and low storage cost. Recently, many deep unsupervised cross-modal hashing methods have been proposed to handle unlabeled datasets. These methods usually construct an instance similarity matrix by fusing the image and text modality-specific similarity matrices, and use it as the guiding information to train the hashing networks. However, most of them directly use cosine similarities between the bag-of-words (BoW) vectors of text datapoints to define the text modality-specific similarity matrix, which fails to mine the semantic similarity information contained in the text datapoints and leads to a poor-quality instance similarity matrix. To tackle this problem, in this paper we propose a novel Unsupervised Cross-modal Hashing method via Semantic Text Mining, called UCHSTM. Specifically, UCHSTM first mines the correlations between the words of text datapoints. Then, UCHSTM constructs the text modality-specific similarity matrix for the training instances based on the mined correlations between their words. Next, UCHSTM fuses the image and text modality-specific similarity matrices into the final instance similarity matrix to guide the training of the hashing model. Furthermore, during training of the hashing networks, a novel self-redefined-similarity loss is proposed to correct some wrongly defined similarities in the constructed instance similarity matrix, thereby further enhancing the retrieval performance. Extensive experiments on two widely used datasets show that the proposed UCHSTM outperforms state-of-the-art baselines on cross-modal retrieval tasks. We provide our source code at: <uri xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">https://github.com/rongchengtu1/UCHTIM.</uri>
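The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple weighted combination with a hypothetical trade-off parameter `eta`, and it uses raw cosine similarities for both modalities, whereas UCHSTM replaces the text side with a similarity derived from mined word correlations.

```python
import numpy as np

def cosine_similarity_matrix(feats):
    """Pairwise cosine similarities: row-normalize, then S = F F^T."""
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    f = feats / np.clip(norms, 1e-12, None)
    return f @ f.T

def fuse_similarities(img_feats, txt_feats, eta=0.5):
    """Hypothetical fusion of modality-specific similarity matrices:
    S = eta * S_img + (1 - eta) * S_txt, used as training guidance."""
    s_img = cosine_similarity_matrix(img_feats)
    s_txt = cosine_similarity_matrix(txt_feats)
    return eta * s_img + (1.0 - eta) * s_txt
```

The fused matrix then serves as the supervision signal: pairs with high fused similarity are pushed to have similar hash codes across modalities.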