Abstract

With the prevalence of video sharing, there are increasing demands for automatic video digestion such as highlight detection. Recently, platforms with crowdsourced time-sync video comments have emerged worldwide, providing a good opportunity for highlight detection. However, this task is non-trivial: (1) time-sync comments often lag behind their corresponding shot; (2) time-sync comments are semantically sparse and noisy; (3) determining which shots are highlights is highly subjective. The present paper aims to tackle these challenges by proposing a framework that (1) uses concept-mapped lexical chains for lag calibration; (2) models video highlights based on comment intensity and a combination of the emotion and concept concentration of each shot; (3) summarizes each detected highlight using an improved SumBasic with emotion and concept mapping. Experiments on large real-world datasets show that both our highlight detection method and our summarization method outperform other benchmarks by considerable margins.
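The abstract does not give the exact scoring formula, but the idea of combining comment intensity with emotion and concept concentration can be sketched as follows. This is a hypothetical illustration: the entropy-based `concentration` measure, the weight `w`, and the multiplicative combination are assumptions, not the paper's published formulation.

```python
from collections import Counter
import math

def concentration(tokens):
    """Normalized inverse entropy of a token distribution:
    1.0 when all comments focus on one emotion/concept, 0.0 when uniform."""
    counts = Counter(tokens)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    if len(counts) == 1:
        return 1.0
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return 1.0 - entropy / math.log(len(counts))

def highlight_score(comment_count, emotion_tokens, concept_tokens, w=0.5):
    """Score a shot: comment intensity weighted by how concentrated the
    emotions and concepts of its (lag-calibrated) comments are."""
    focus = w * concentration(emotion_tokens) + (1 - w) * concentration(concept_tokens)
    return comment_count * focus
```

Under this sketch, a shot with many comments that all express the same emotion about the same concept scores far higher than an equally busy shot whose comments scatter across unrelated topics.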

Highlights

  • Every day, people watch billions of hours of videos on YouTube, with half of the views on mobile devices

  • The present study proposes the following: (1) word-to-concept and word-to-emotion mapping based on global word embedding, from which lexical chains are constructed for bullet-comment lag calibration; (2) highlight detection based on the emotional and conceptual concentration and intensity of lag-calibrated bullet-comments; (3) highlight summarization with a modified SumBasic algorithm that treats emotions and concepts as the basic units of a bullet-comment

  • The main contributions of the present paper are as follows: (1) We propose an entirely unsupervised framework for video highlight detection and summarization based on time-sync comments; (2) We develop a lag-calibration technique based on concept-mapped lexical chains; (3) We construct large datasets for bullet-comment word embedding, a bullet-comment emotion lexicon, and ground truth for highlight-detection and labeling evaluation based on bullet-comments
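The third pipeline step above, summarization with a modified SumBasic that treats emotions and concepts as basic units, can be sketched as below. The word-to-unit mapping dictionary and the greedy probability-squaring loop follow the standard SumBasic recipe; treating mapped units rather than raw words as the frequency items is the paper's stated modification, while all names and the identity fallback for unmapped words are illustrative assumptions.

```python
from collections import Counter

def sumbasic_summary(comments, word_to_unit, k=2):
    """SumBasic variant: map each comment's words to emotion/concept
    units (falling back to the word itself), then greedily pick the
    comment with the highest average unit probability; squaring the
    probabilities of picked units discourages redundant picks."""
    mapped = [[word_to_unit.get(w, w) for w in c.split()] for c in comments]
    counts = Counter(u for units in mapped for u in units)
    total = sum(counts.values())
    prob = {u: c / total for u, c in counts.items()}
    summary, remaining = [], list(range(len(comments)))
    while remaining and len(summary) < k:
        best = max(remaining,
                   key=lambda i: sum(prob[u] for u in mapped[i]) / max(len(mapped[i]), 1))
        summary.append(comments[best])
        remaining.remove(best)
        for u in mapped[best]:
            prob[u] **= 2  # down-weight units already covered
    return summary
```

For example, with `word_to_unit = {"amazing": "great"}`, the comments "great goal amazing goal" and "great shot" share the unit "great", so once the first is selected the second becomes less attractive than a comment covering fresh units.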


Summary

Introduction

People watch billions of hours of videos on YouTube, with half of the views on mobile devices (https://www.youtube.com/yt/press/statistics.html). With the prevalence of video sharing, there is increasing demand for fast video digestion. Imagine a scenario where a user wants to quickly grasp a long video without repeatedly dragging the progress bar to skip shots that do not appeal to them. With automatically generated highlights, users could digest the entire video in minutes before deciding whether to watch the full video later. Automatic video highlight detection and summarization could benefit video indexing, video search, and video recommendation.

Finding highlights in a video is not a trivial task. What is considered a “highlight” can be very subjective, and the lack of abstract semantic information has become a bottleneck for highlight detection in traditional video processing.

