Abstract

The growth of multimedia repositories has created an ever-increasing demand for intelligent video retrieval. This paper presents an efficient video retrieval framework that employs singular value decomposition (SVD) and computationally inexpensive ordered-dither block truncation coding (ODBTC) to extract a simple, compact, and highly discriminative Color Co-occurrence Feature (CCF). Specifically, the occurrence probability of a video frame pixel within its neighborhood is used to formulate this distinctive feature. Moreover, we apply a new adaptive low-rank thresholding, based on the energy-concentricity, transposition-invariance, and replacement-invariance properties of the SVD, to formulate a unified, fast shot boundary detection approach that addresses the prominent bottleneck of detecting both cut and gradual transitions in real time and, in turn, supports effective keyframe extraction. The extracted keyframes are therefore distinct and discriminative enough to represent the entire video content. For effective indexing and retrieval, the similarity score must capture the encapsulated contextual video information with strong temporal consistency, minimal computation, and little post-processing. We therefore introduce graph-based pattern matching for video retrieval, aiming to preserve temporal consistency while improving accuracy and reducing time overhead. Experimental results show that, compared with recent state-of-the-art methods on the UCF11 and HMDB51 benchmark video datasets, the proposed method on average achieves 7.40% and 17.91% better retrieval accuracy and is 23.21% and 20.44% faster, respectively.
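To make the low-rank idea concrete, the sketch below shows one plausible way an SVD-based frame signature could drive shot boundary scoring: each frame is reduced to the smallest set of singular values capturing a fixed fraction of its energy, and consecutive frames are compared by the distance between these signatures. This is an illustrative assumption, not the paper's actual algorithm; the `energy` threshold, the signature distance, and the function names are all hypothetical, and the ODBTC/CCF and adaptive thresholding details are omitted.

```python
import numpy as np

def low_rank_signature(frame, energy=0.9):
    """Truncated-SVD signature of a grayscale frame: keep the smallest
    rank whose singular values capture `energy` of the total energy.
    (Hypothetical sketch; not the paper's adaptive thresholding rule.)"""
    _, s, _ = np.linalg.svd(frame.astype(float), full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(cum, energy)) + 1
    return s[:k]

def boundary_score(f1, f2, energy=0.9):
    """Normalized distance between the low-rank signatures of two
    consecutive frames; a large score suggests a shot boundary."""
    s1 = low_rank_signature(f1, energy)
    s2 = low_rank_signature(f2, energy)
    n = max(len(s1), len(s2))          # pad to a common rank
    s1 = np.pad(s1, (0, n - len(s1)))
    s2 = np.pad(s2, (0, n - len(s2)))
    return np.linalg.norm(s1 - s2) / (np.linalg.norm(s1) + 1e-9)
```

In such a scheme, identical frames score near zero while visually dissimilar frames score higher, so a cut could be declared wherever the score exceeds an adaptively chosen threshold.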
