Abstract

With the rapid development of open educational resources (OER) over the past few decades, a considerable number of educational videos have emerged on online platforms such as Coursera and YouTube. Nevertheless, most educational videos on the internet are lengthy and lack detailed annotations, which makes it difficult for learners to explore and locate content of interest efficiently. To address this, we present an automatic note-generation method that establishes correspondences between visual entities in a slide-based lecture video and their descriptive speech texts by evaluating their semantic relationship. Firstly, visual entities are extracted and recognised from the presentation slides. Then, each visual entity is associated with its corresponding descriptive speech text. Finally, a placement optimisation scheme is put forward to pack the visual entities and speech texts into a compact, note-like layout, which helps learners improve their learning efficiency. Experimental results show that both visual entity extraction and correspondence matching perform effectively. A user study is also conducted to investigate how well Lecture2Note facilitates learning. Compared with peer methods, the notes automatically generated by our method achieve a higher level of user satisfaction regarding a properly structured layout as well as efficient content navigation and exploration.
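As a minimal illustration of the correspondence step described above (not the paper's actual algorithm), the sketch below associates each visual entity's recognised slide text with the most semantically similar transcript segment using a simple term-frequency cosine similarity; all names and the example data are hypothetical.

```python
from collections import Counter
from math import sqrt


def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts over term-frequency vectors."""
    ta, tb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta)
    norm = sqrt(sum(v * v for v in ta.values())) * sqrt(sum(v * v for v in tb.values()))
    return dot / norm if norm else 0.0


def match_entities_to_speech(entities: list[str], segments: list[str]) -> dict[int, int]:
    """Map each visual entity (recognised slide text) to the index of the
    transcript segment that is most similar to it."""
    matches = {}
    for i, entity_text in enumerate(entities):
        scores = [cosine_similarity(entity_text, seg) for seg in segments]
        matches[i] = max(range(len(segments)), key=scores.__getitem__)
    return matches


# Hypothetical example: one slide bullet and two transcript segments.
entities = ["gradient descent update rule"]
segments = [
    "Next we look at how gradient descent updates the parameters at each step.",
    "Let us move on to the course logistics and assignment deadlines.",
]
print(match_entities_to_speech(entities, segments))  # {0: 0}
```

In practice a richer semantic model (e.g. sentence embeddings) would replace the bag-of-words similarity, but the matching structure stays the same: score every entity-segment pair and keep the best-scoring segment per entity.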
