Abstract

Many universities offer distance learning by recording classroom lectures and making them accessible to remote students over the Internet. A university's repository typically contains hundreds of such lecture videos, each about an hour long and often monolithic. It is cumbersome for students to search through an entire video, or across many videos, to find the portions of immediate interest. It is therefore desirable to have a system that takes user-supplied keywords as a query and returns links not only to the relevant lecture videos but also to the matching sections within them. To enable this, lecture videos are sometimes tagged with metadata that identifies their different sections; however, such tagging is usually done manually and is time-consuming. In this paper, we propose a technique to generate tags for lecture videos automatically. It is based on producing speech transcripts automatically with a speech recognition engine and then indexing and searching those transcripts. We also describe our system for easily browsing a lecture video repository. The system takes keywords from users as a query and returns a list of matching videos; in each retrieved video, the portion that matches the query is highlighted so that users can navigate directly to that location. By following this approach and using the open source tools mentioned in the paper, a lecture video repository can let users easily access the content they need. We used open source libraries for the speech recognition and text search components. In experiments evaluating retrieval performance, our system achieved a recall of 0.72 and an average precision of 0.84.
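The pipeline the abstract outlines — timed transcript segments from a speech recognizer, an index over their words, and keyword queries that resolve to a video plus a seek offset — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the segment data, function names, and the simple inverted index are all invented here for clarity, whereas the paper uses full open source speech recognition and text search libraries.

```python
# Sketch of the abstract's idea: index timed transcript segments so a
# keyword query returns (video_id, start_seconds) pairs that a player
# can seek to. The transcript data below is invented for illustration.
from collections import defaultdict


def build_index(transcripts):
    """transcripts: {video_id: [(start_seconds, segment_text), ...]}.

    Returns an inverted index mapping each lowercased word to the set of
    (video_id, start_seconds) locations where it was spoken.
    """
    index = defaultdict(set)
    for video_id, segments in transcripts.items():
        for start, text in segments:
            for word in text.lower().split():
                index[word].add((video_id, start))
    return index


def search(index, keyword):
    """Return sorted (video_id, start_seconds) hits for one keyword."""
    return sorted(index.get(keyword.lower(), set()))


# Hypothetical transcripts for two one-hour lectures (times in seconds).
transcripts = {
    "lecture01": [(0, "introduction to sorting"),
                  (930, "quick sort partition step")],
    "lecture02": [(0, "graph traversal"),
                  (1260, "sorting a graph topologically")],
}
index = build_index(transcripts)
print(search(index, "sorting"))  # hits in both lectures, with seek offsets
```

A real system would replace the toy tokenizer with a proper text search engine (stemming, stop-word removal, ranking) and feed it transcripts produced by a speech recognition engine, but the query-to-timestamp mapping shown here is the core of section-level retrieval.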
