Abstract

The information-oriented society produces a wide variety of video data, and detecting the video clips a user wants quickly and accurately within this massive volume has attracted growing research interest. Existing near-duplicate video detection algorithms extract global or local features directly at the key-frame level, which is very time-consuming. This paper therefore introduces a cascaded near-duplicate video detection approach that first uses a temporal-consistency feature at the shot level to preliminarily filter out dissimilar videos before feature extraction, and then combines global and local features step by step to obtain the final set of videos that duplicate the query video. We verified the approach on the CC_WEB_VIDEO dataset and compared its performance with a method based on a global color-histogram signature. The results show that the proposed method achieves better detection accuracy, especially for videos with complex motion scenes and large frame changes.
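The cascade idea described above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: all function names, thresholds, and data shapes are assumptions. Stage one compares a cheap shot-level temporal signature (here, normalized shot durations) to discard clearly dissimilar videos; stage two compares a global color histogram only for the survivors.

```python
# Hypothetical sketch of a cascaded near-duplicate filter.
# Stage 1: cheap shot-level temporal-consistency check.
# Stage 2: global color-histogram comparison on survivors only.
# All names and thresholds are illustrative, not from the paper.

def shot_signature_distance(shots_a, shots_b):
    """L1 distance between normalized shot-duration sequences (zero-padded)."""
    n = max(len(shots_a), len(shots_b))
    a = list(shots_a) + [0.0] * (n - len(shots_a))
    b = list(shots_b) + [0.0] * (n - len(shots_b))
    total_a, total_b = (sum(a) or 1.0), (sum(b) or 1.0)
    return sum(abs(x / total_a - y / total_b) for x, y in zip(a, b))

def histogram_distance(h1, h2):
    """Histogram-intersection distance between two normalized histograms."""
    return 1.0 - sum(min(x, y) for x, y in zip(h1, h2))

def cascaded_filter(query, candidates, shot_thresh=0.3, hist_thresh=0.2):
    """Return ids of candidates that pass both cascade stages."""
    survivors = []
    for vid, (shots, hist) in candidates.items():
        # Stage 1: filter on the shot-level temporal signature.
        if shot_signature_distance(query["shots"], shots) > shot_thresh:
            continue
        # Stage 2: only survivors pay for the histogram comparison.
        if histogram_distance(query["hist"], hist) <= hist_thresh:
            survivors.append(vid)
    return survivors

# Toy example: one near-duplicate, one clearly different video.
query = {"shots": [2.0, 3.0, 5.0], "hist": [0.5, 0.3, 0.2]}
candidates = {
    "dup":  ([2.1, 3.0, 4.9], [0.48, 0.32, 0.20]),
    "diff": ([10.0, 1.0],     [0.10, 0.10, 0.80]),
}
print(cascaded_filter(query, candidates))  # → ['dup']
```

The point of the cascade is that the stage-one signature is far cheaper to compute and compare than frame-level features, so most dissimilar candidates never reach the expensive stage.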
