Abstract

A large number of videos are uploaded to video-sharing websites (such as Youku and YouTube) every day, and these sites play an increasingly important role in daily life. While convenient, this flood of video data makes it harder to summarize a video so that users can browse it easily. Although many video summarization approaches exist, the key frames they select often fail to integrate the broader video context, and the quality of the summarized results is difficult to evaluate because ground truth is lacking. Inspired by previous key-frame extraction methods, we propose a deep recurrent neural network model that learns to extract category-driven key frames. First, we sequentially extract a fixed number of key frames using time-dependent location networks. Second, we use a recurrent neural network to integrate the information from the key frames and classify the category of the video. The quality of the extracted key frames can therefore be evaluated by categorization accuracy. Experiments on a 500-video dataset show that the proposed scheme extracts reasonable key frames and outperforms other methods in quantitative evaluation.
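The two-step pipeline described above (a location network that sequentially picks key frames, followed by a recurrent network that integrates them for classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: all dimensions, weight shapes, and the tanh-based location mapping are hypothetical assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 100 frames, 32-d frame
# features, 16-d hidden state, 5 categories, 4 key frames to extract.
T, D, H, C, K = 100, 32, 16, 5, 4

video = rng.standard_normal((T, D))      # per-frame feature vectors
W_loc = rng.standard_normal(H) * 0.1     # location network: hidden -> position
W_in = rng.standard_normal((D, H)) * 0.1 # input-to-hidden weights
W_hh = rng.standard_normal((H, H)) * 0.1 # hidden-to-hidden weights
W_out = rng.standard_normal((H, C)) * 0.1  # classifier weights

h = np.zeros(H)
key_frames = []
for step in range(K):
    # Location network: map the current hidden state to a frame index
    # in [0, T-1]; the time-dependence comes from h evolving each step.
    pos = int(np.clip((np.tanh(h @ W_loc) + 1) / 2 * (T - 1), 0, T - 1))
    key_frames.append(pos)
    # Recurrent update: integrate the selected key frame's features.
    h = np.tanh(video[pos] @ W_in + h @ W_hh)

# Categorize the video from the final hidden state (softmax over classes);
# categorization accuracy then serves as a proxy for key-frame quality.
logits = h @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_category = int(np.argmax(probs))
```

In training, the location network would be optimized (e.g. with policy gradients, since frame selection is non-differentiable) so that the chosen key frames maximize classification accuracy; here the weights are random purely to show the data flow.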
