Abstract

The video highlight detection task aims to localize key elements (moments of major or special interest to the user) in a video. Most existing highlight detection approaches extract features from a video segment as a whole, without accounting for spatial differences among local features. In the spatial extent, not all regions are worth watching: some contain only the background of the environment, with no humans or other moving objects, especially when the background is heavily cluttered. To address this issue, we propose a novel region-based model that automatically localizes the key elements in a video without any extra supervised annotations. Specifically, the proposed model produces position-sensitive score maps over local regions in the spatial dimension of a video segment and then aggregates the position-wise scores with a position-pooling operation. Regions with higher response values are extracted as key elements, yielding more effective features of the video segment for predicting the highlight score. The proposed position-sensitive scheme can be easily integrated into an end-to-end fully convolutional network whose parameters are updated via stochastic gradient descent during backpropagation, improving the robustness of the model. Extensive experiments on the YouTube and SumMe datasets demonstrate that the proposed approach achieves significant improvements over state-of-the-art methods.
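To make the position-sensitive scheme concrete, here is a minimal NumPy sketch of position-pooling over a stack of k×k score maps and of ranking candidate regions by their pooled response. All names, the (k·k, H, W) map layout, the average-pooling choice, and the region format are illustrative assumptions; the paper's actual network head and training details are not specified in the abstract.

```python
import numpy as np

def position_sensitive_pool(score_maps, k=3):
    """Aggregate k*k position-sensitive score maps into one region score.

    score_maps: array of shape (k*k, H, W), one map per spatial bin
    (hypothetical layout, in the spirit of position-sensitive heads).
    Each bin (i, j) average-pools only its own map over its own
    sub-region of the crop; the k*k bin scores are then averaged
    (the "position-pooling" aggregation step).
    """
    kk, H, W = score_maps.shape
    assert kk == k * k, "expected one score map per spatial bin"
    h_edges = np.linspace(0, H, k + 1).astype(int)
    w_edges = np.linspace(0, W, k + 1).astype(int)
    bin_scores = []
    for i in range(k):
        for j in range(k):
            m = score_maps[i * k + j]
            patch = m[h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            bin_scores.append(patch.mean())
    return float(np.mean(bin_scores))

def rank_regions(score_maps, regions, k=3):
    """Score candidate regions and sort them by response, high to low.

    score_maps: (k*k, H, W) maps for the whole frame.
    regions: list of (y0, y1, x0, x1) crops (hypothetical candidates);
    the top-ranked regions play the role of the "key elements".
    """
    scored = [(position_sensitive_pool(score_maps[:, y0:y1, x0:x1], k),
               (y0, y1, x0, x1))
              for (y0, y1, x0, x1) in regions]
    return sorted(scored, key=lambda t: t[0], reverse=True)
```

With synthetic maps where one crop carries higher activations (e.g. a moving person against clutter), `rank_regions` places that crop first, mirroring how higher-response regions would be extracted as key elements before highlight scoring.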
