Abstract
Multi-view features are often used in video hashing for near-duplicate video retrieval because of their mutual assistance and complementarity. However, most methods consider only the locally available information in multiple features, such as individual or pairwise structural relations, and thus fail to fully exploit the dependencies among multiple features. We therefore propose a global-view hashing (GVH) framework that addresses this issue by harnessing the global relations among samples characterized by multiple features. In the proposed framework, the multiple features of all videos are jointly used to learn a common Hamming space, where the hash functions are obtained by comprehensively exploiting both intra-view and inter-view relations among objects. In addition, the hash functions obtained from GVH can learn multi-bit hash codes in a single iteration. Compared with existing video hashing schemes, GVH not only considers the relations globally, enabling more precise retrieval with short hash codes, but also achieves multi-bit learning in a single iteration. We conduct extensive experiments on the CC_WEB_VIDEO and UQ_VIDEO datasets, and the results show that the proposed method outperforms state-of-the-art methods. As a side contribution, we will release the code to facilitate further research.
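To make the retrieval setting concrete, the sketch below shows how near-duplicate search over binary hash codes in a Hamming space typically works: each video is represented by a short binary code, and candidates are ranked by Hamming distance to the query code. This is a minimal illustration of Hamming-space retrieval in general, not the authors' GVH hash-function learning; the codes, the `retrieve` helper, and the toy data are all hypothetical.

```python
import numpy as np

def hamming_distance(a, b):
    """Number of differing bits between two binary code vectors."""
    return int(np.count_nonzero(a != b))

def retrieve(query_code, db_codes, top_k=3):
    """Rank database videos by Hamming distance to the query code."""
    dists = [hamming_distance(query_code, c) for c in db_codes]
    return np.argsort(dists, kind="stable")[:top_k]

# Toy database: three videos, each with an 8-bit hash code.
db = np.array([
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
])

# Query differs from video 0 by a single bit, i.e. a near-duplicate.
query = np.array([0, 0, 0, 1, 1, 1, 1, 1])

print(retrieve(query, db))  # video 0 ranks first
```

Because Hamming distance reduces to XOR and popcount, retrieval with short codes is fast and memory-efficient, which is why shorter codes at the same precision (as GVH targets) directly improve the practicality of near-duplicate video search.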