Abstract

Videos on video-sharing platforms are increasingly leveraged for learning, yet because they are usually created by ordinary users and rarely designed for instruction, they are mostly not structured enough to support learning purposes. Most existing studies attempt to structure such videos using video summarization techniques. However, these methods focus on extracting information from within the video itself and on supporting consumption of the video. In this article, we design and implement BNoteHelper, a note-based video outline prototype that generates outline titles from user-generated notes on Bilibili, using a BART model fine-tuned on a purpose-built dataset. As a browser plugin, BNoteHelper provides users with a video overview, navigation, and a note-taking template via two main features: an outline table and navigation markers. The model and prototype are evaluated through automatic and human evaluations. The automatic evaluation reveals that, both before and after fine-tuning, the BART model outperforms T5-Pegasus on BLEU and Perplexity metrics. User feedback further reveals that outlines generated from notes are preferred over those generated from video captions because they are more concise, clear, and accurate, although they are sometimes too general, with less detail and diversity. The two outline features are also found to have complementary advantages, particularly in holistic and fine-grained aspects, respectively. Based on these results, we offer insights into designing video summaries from the perspective of user-generated content, customizing them by video type, and strengthening the advantages of their different visual styles on video-sharing platforms.
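The abstract does not specify the implementation details of the fine-tuning pipeline. The sketch below illustrates how notes-to-title generation of this kind could be set up with the Hugging Face transformers library; the checkpoint name (fnlp/bart-base-chinese), file names, and field names ("note", "title") are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumptions noted below) of fine-tuning a BART-style seq2seq
# model to generate outline titles from user-generated notes.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "fnlp/bart-base-chinese"  # assumed Chinese BART checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Assumed JSONL files with {"note": ..., "title": ...} pairs built from
# Bilibili notes and the section titles they describe.
dataset = load_dataset("json", data_files={"train": "notes_train.jsonl",
                                           "dev": "notes_dev.jsonl"})

def preprocess(batch):
    # Notes are the encoder input; outline titles are the decoder target.
    model_inputs = tokenizer(batch["note"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["title"], max_length=32, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-note-titles",
    per_device_train_batch_size=8,
    num_train_epochs=5,
    learning_rate=3e-5,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["dev"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Generate an outline title for a new block of notes.
note = "示例笔记内容"  # placeholder: user notes for one video segment
inputs = tokenizer(note, return_tensors="pt", truncation=True, max_length=512)
title_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```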
