Abstract

The ever-increasing volume of video content demands the development of efficient and effective video summarization (VS) techniques to manage video data. Recent developments in sparse representation have demonstrated promising results for VS. In this paper, in consideration of the visual similarity of adjacent frames, we formulate the video summarization problem with a temporal collaborative representation (TCR) model, in which adjacent frames rather than individual frames are taken into consideration to avoid selecting transitional frames. In addition, a greedy iterative algorithm is designed for model optimization. Experimental results on a benchmark dataset with various types of videos demonstrate that the proposed algorithm not only outperforms the state of the art, but also reduces the probability of selecting transitional frames.
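To make the idea concrete, the following is a minimal sketch, not the authors' exact TCR formulation: frame features are first smoothed over a temporal window (so that each candidate is scored jointly with its neighbors, which penalizes transitional frames), and keyframes are then chosen by a greedy loop that at each step adds the frame whose inclusion in an l2-regularized (collaborative) reconstruction dictionary most reduces the residual over all frames. The window size, regularization weight, and feature representation are all assumptions for illustration.

```python
import numpy as np

def temporal_features(frames, window=3):
    """Average each frame's feature vector with its temporal neighbors.

    A stand-in for scoring adjacent frames jointly; the window size (3)
    is an assumed illustrative value, not from the paper.
    """
    n = len(frames)
    out = np.empty_like(frames)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = frames[lo:hi].mean(axis=0)
    return out

def greedy_summarize(frames, k, lam=0.1):
    """Greedily select k keyframe indices.

    At each step, try every unselected frame as a new dictionary atom,
    solve the ridge-regularized (collaborative representation) coding of
    all frames over the candidate dictionary, and keep the frame that
    yields the smallest reconstruction residual.
    """
    X = temporal_features(np.asarray(frames, dtype=float))  # n x d
    n, _ = X.shape
    selected = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(n):
            if j in selected:
                continue
            D = X[selected + [j]].T  # d x m candidate dictionary
            # Collaborative (l2-regularized) coefficients:
            # A = (D^T D + lam I)^{-1} D^T X^T
            m = D.shape[1]
            A = np.linalg.solve(D.T @ D + lam * np.eye(m), D.T @ X.T)
            err = np.linalg.norm(X.T - D @ A)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
    return sorted(selected)
```

On a toy sequence with two visually distinct shots, the greedy loop picks one representative per shot, while the temporal smoothing makes the blended boundary frames poor dictionary atoms, so they tend not to be selected.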
