Abstract

Video summarization is a signal-processing task concerned with removing redundant information from a sequence of frames. Designing an efficient summarization model requires analyzing frame features and evaluating their similarity, so that frames with similar feature sets can be aggregated and redundancy reduced. Efficient feature extraction and analysis are therefore central to summarization performance. However, models that aim to improve this efficiency tend to increase the computational complexity of summarization, limiting their applicability. To address this drawback, the proposed event-based model for long-video summarization combines an LSTM-based CNN with a feature-variance method for keyframe estimation. The model works in two phases: in the first phase, events are recognized in the video using the LSTM-based CNN model, while in the second phase these events are individually summarized using a variance-based threshold engine. The proposed event-classification LSTM-CNN model with variance-based summarization outperforms other models, achieving an 8% improvement in compression ratio.
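The second phase described above — variance-based keyframe selection within an event — can be sketched roughly as follows. This is a minimal illustrative version, not the paper's implementation: the function name, the per-frame feature vectors, and the threshold value are all assumptions, and the variance test here simply compares each frame's features against the last retained keyframe.

```python
import numpy as np

def summarize_event(frames, threshold=0.5):
    """Illustrative variance-based keyframe selection for one event segment.

    frames: array of shape (n_frames, feature_dim) holding per-frame
    feature vectors (e.g. CNN embeddings). A frame is kept as a keyframe
    when the variance of its feature difference from the last kept
    keyframe exceeds `threshold`. All names and parameters here are
    assumptions, not the paper's actual method.
    """
    frames = np.asarray(frames, dtype=float)
    keyframes = [0]  # always keep the first frame of the event
    for i in range(1, len(frames)):
        diff = frames[i] - frames[keyframes[-1]]
        if np.var(diff) > threshold:  # enough change -> new keyframe
            keyframes.append(i)
    return keyframes

# Toy usage: three near-duplicate frames followed by a very different one.
features = np.array([
    [0.0, 0.0, 0.0],
    [0.01, 0.0, 0.01],   # redundant: near-identical features
    [0.0, 0.02, 0.0],    # redundant
    [3.0, -2.0, 1.0],    # new content: high feature variance
])
print(summarize_event(features, threshold=0.5))  # → [0, 3]
```

Keeping 2 of 4 frames in this toy case corresponds to a 50% compression ratio; in practice the threshold would be tuned per event to balance compression against information loss.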
