Abstract

Automatic summarization of video subtitles not only tackles the problem of content overload but can also improve video retrieval performance, allowing viewers to efficiently access and understand the main content of a video. However, subtitle summarization is a challenging task because subtitle documents consist of incomplete sentences, meaningless phrases, and informal language. In this paper, we introduce a novel multiple-attention mechanism for subtitle summarization to address these issues. We take advantage of both Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory (Bi-LSTM) networks to capture the critical information in each sentence, which is then used to score the sentence's importance. Based on these salience scores, we introduce a summary generation method that produces a summary of the video. The experiments are conducted on both subtitle documents from educational videos and standard text documents. To the best of our knowledge, no previous study has applied a multiple-attention mechanism to summarizing educational videos. In addition, we experiment on two well-known text document datasets, DUC2002 and CNN/Daily Mail, to test the performance of our model. We use ROUGE measures to evaluate the generated summaries at 95% confidence intervals. The experimental results demonstrate that our model outperforms the baseline and state-of-the-art models on ROUGE-1, ROUGE-2, and ROUGE-L scores.
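As context for the evaluation protocol mentioned above, ROUGE-N scores a generated summary by its n-gram overlap with a human reference. The following is a minimal sketch of ROUGE-N precision, recall, and F1 in plain Python; the function name and the whitespace tokenization are illustrative assumptions, not the authors' implementation (which reports scores with 95% confidence intervals over a full dataset).

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1):
    """Illustrative ROUGE-N: n-gram overlap between candidate and reference.

    Uses simple lowercased whitespace tokenization (an assumption; real
    ROUGE toolkits apply their own tokenization and stemming options).
    """
    def ngrams(tokens, n):
        # Multiset of n-grams, so repeated n-grams are counted correctly.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram match count
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 5 of 6 unigrams match in each direction.
p, r, f = rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1)
```

ROUGE-2 follows by passing `n=2`; ROUGE-L, also reported in the paper, instead uses the longest common subsequence and is not shown here.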
