Abstract

In this work, we propose a deep video hashing (DVH) method for scalable video search. Unlike most existing video hashing methods, which extract features from each frame individually and then apply conventional image hashing techniques, our DVH learns binary codes for the entire video within a deep learning framework so that both temporal and discriminative information can be well exploited. Specifically, we fuse temporal information across the frames of each video to learn the feature representation under two criteria: the distance between a feature pair obtained at the top layer should be small if the pair comes from the same class and large if it comes from different classes; and the quantization loss between the real-valued features and the binary codes should be minimized. We explore different deep architectures that exploit spatio-temporal information in different ways, and compare them with single-frame-based deep models and state-of-the-art image hashing methods. Experimental results demonstrate the effectiveness of our proposed method.
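
As a concrete illustration of the two training criteria described above, the following is a minimal PyTorch-style sketch of a loss that combines a contrastive pairwise term with a quantization term. The function name dvh_loss, the specific contrastive formulation, and the hyperparameters margin and lam are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def dvh_loss(feat_a, feat_b, same_class, margin=2.0, lam=0.1):
    """Pairwise loss on top-layer video features plus a quantization term.

    feat_a, feat_b: (batch, code_len) real-valued features for a video pair.
    same_class:     (batch,) float tensor, 1.0 if the pair shares a class, else 0.0.
    margin, lam:    hypothetical hyperparameters, not taken from the paper.
    """
    # Criterion 1: small distance for same-class pairs, large otherwise
    # (a standard contrastive formulation; the paper's exact form may differ).
    dist = F.pairwise_distance(feat_a, feat_b)
    pair_loss = same_class * dist.pow(2) \
        + (1.0 - same_class) * F.relu(margin - dist).pow(2)

    # Criterion 2: quantization loss between the real-valued features and
    # their binarized codes b = sign(f).
    quant_loss = (feat_a - torch.sign(feat_a)).pow(2).sum(dim=1) \
        + (feat_b - torch.sign(feat_b)).pow(2).sum(dim=1)

    return (pair_loss + lam * quant_loss).mean()
```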
