Abstract

Previous video summarization methods often neglected inter-frame variations during the preprocessing stage. Sampling repeated frames leads to information redundancy, while missing key frames causes deviations in semantic comprehension and inaccuracies in the generated summaries. This work proposes a dynamic sampling module that leverages frame-level motion information to alleviate these issues. The module samples at a higher frequency during intervals with significant change, capturing details more finely. Combined with a hierarchical multi-modal structure, it integrates shot-level visual and textual information to enhance the semantic understanding of video clips and improve the accuracy of the summarized content. Extensive experiments on the benchmark datasets SumMe and TVSum demonstrate the effectiveness of the proposed method.
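The core idea of motion-driven dynamic sampling can be sketched as follows. This is a minimal illustration, not the paper's actual module: it uses mean absolute pixel difference as a motion proxy and two hypothetical stride parameters (`base_stride`, `dense_stride`), whereas the proposed method may rely on richer frame-level motion features.

```python
def frame_motion(prev, curr):
    """Motion proxy: mean absolute pixel difference between two frames.

    Frames are flat lists of pixel intensities. This stands in for the
    frame-level motion information used by the paper's sampling module.
    """
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def dynamic_sample(frames, base_stride=8, dense_stride=2, motion_thresh=10.0):
    """Select frame indices, sampling densely where inter-frame motion is high.

    Hypothetical parameters: base_stride (coarse step for static intervals),
    dense_stride (fine step for high-motion intervals), motion_thresh
    (decision boundary on the motion proxy).
    """
    # motion[t] measures change from frame t-1 to frame t; the first
    # frame has no predecessor, so its motion is defined as zero.
    motion = [0.0] + [
        frame_motion(frames[t - 1], frames[t]) for t in range(1, len(frames))
    ]

    indices, t = [], 0
    while t < len(frames):
        indices.append(t)
        # High motion -> fine stride (capture details);
        # low motion -> coarse stride (avoid redundant frames).
        t += dense_stride if motion[t] > motion_thresh else base_stride
    return indices
```

On a synthetic clip that is static except for a burst of change in the middle, the sampler steps coarsely through the static segments and densely through the changing one, which is the behavior the abstract attributes to the dynamic sampling module.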

