Abstract

The growing number of videos in educational content repositories makes searching difficult, and recommendation systems have been used to help students and teachers find content of interest. Speech is an important carrier of information in video lectures and is exploited by content-based video recommendation systems. Although automatic speech recognition (ASR) transcripts are used in modern video recommendation systems, it is not clear how well semantic annotation techniques cope with such noisy text. This article analyzes a set of semantic annotation techniques applied to text extracted from video lecture speech and their impact on two tasks: annotation and similarity analysis. Experiments show that topic models perform well in this scenario. In addition, a new benchmark for this task has been created, which researchers can use to evaluate new techniques.
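As an illustration of the similarity-analysis task mentioned above, once a topic model has assigned each lecture transcript a distribution over topics, two lectures can be compared by measuring how close their distributions are. The sketch below uses a Jensen-Shannon-based similarity; the lecture vectors are hypothetical topic distributions, not data from the paper.

```python
import math

def js_similarity(p, q):
    """Similarity in [0, 1] between two topic distributions,
    computed as 1 minus the Jensen-Shannon divergence (log base 2)."""
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability topics
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    m = [(ai + bi) / 2 for ai, bi in zip(p, q)]  # midpoint distribution
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return 1.0 - jsd

# Hypothetical 3-topic distributions for three lecture transcripts
lecture_a = [0.70, 0.20, 0.10]
lecture_b = [0.60, 0.25, 0.15]  # topically close to lecture_a
lecture_c = [0.05, 0.15, 0.80]  # topically distant from lecture_a

print(js_similarity(lecture_a, lecture_b))  # higher: similar lectures
print(js_similarity(lecture_a, lecture_c))  # lower: dissimilar lectures
```

A content-based recommender would rank candidate lectures by this score; Jensen-Shannon is a common choice for topic distributions because it is symmetric and bounded, unlike raw KL divergence.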
