Abstract

With the increasing popularity of open educational resources over the past few decades, more and more users watch online videos to gain knowledge. However, most educational videos offer only rudimentary navigation tools and lack explanatory annotations, which makes locating content of interest time-consuming. To address this limitation, in this article we propose a slide-based video navigation tool that extracts the hierarchical structure and semantic relationships of visual entities in videos by integrating multichannel information. Features of visual entities are first extracted from the presentation slides by a novel deep learning framework. We then propose a clustering approach to extract hierarchical relationships between visual entities (e.g., formulas, text, or graphs appearing in educational slides). We use this information to associate each visual entity with its corresponding audio speech text by evaluating their semantic relationship. We present two cases in which the structured data produced by this tool is used to generate a multilevel table of contents and notes that provide additional navigation materials for learning. Evaluation experiments demonstrate the effectiveness of our proposed solutions for visual entity extraction, hierarchical relationship extraction, and corresponding speech-text matching. A user study also shows that the auto-generated table of contents and notes are promising aids for learning.
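To make the two structuring steps in the abstract concrete, here is a minimal sketch, not the authors' implementation: it groups hypothetical slide entities hierarchically by layout position and matches each one to the most semantically similar transcript segment. The entity texts, positions, and speech segments are illustrative placeholders, and TF-IDF cosine similarity stands in for the learned semantic relationship the paper evaluates.

```python
# Sketch of (1) hierarchical grouping of visual entities and
# (2) semantic matching of entities to speech text. All data below
# is hypothetical; the paper's system uses richer multichannel features.

import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical visual entities from one slide: OCR text plus a
# normalized 2-D layout position (x, y of the bounding-box corner).
entities = [
    {"text": "Gradient descent update rule", "pos": (0.10, 0.15)},
    {"text": "w <- w - lr * grad",           "pos": (0.15, 0.25)},
    {"text": "Convergence analysis",         "pos": (0.10, 0.55)},
    {"text": "Step size must shrink",        "pos": (0.15, 0.65)},
]

# (1) Hierarchical relationship extraction: cluster entities by layout
# proximity; position alone is a stand-in for the paper's features.
positions = np.array([e["pos"] for e in entities])
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(positions)

# (2) Speech-text matching: align each entity with the transcript
# segment whose wording is most similar under TF-IDF cosine similarity.
speech_segments = [
    "here we update the weights by moving against the gradient",
    "next we show the method converges when the step size decreases",
]
vec = TfidfVectorizer().fit([e["text"] for e in entities] + speech_segments)
sim = cosine_similarity(vec.transform([e["text"] for e in entities]),
                        vec.transform(speech_segments))

for e, group, scores in zip(entities, labels, sim):
    print(f"group {group} | {e['text']!r} -> segment {int(scores.argmax())}")
```

Under this toy input, the two clusters recover the slide's two topic blocks, and each entity maps to the lecture segment discussing it; the paper's pipeline additionally uses these associations to build the multilevel table of contents and notes.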
