Abstract
Video carries more information than isolated images, so processing, analyzing, and understanding video content are becoming increasingly important. Consumer videos of events are generally captured by amateurs using handheld cameras and contain considerable camera motion, occlusion, cluttered backgrounds, and large intraclass variation within the same event type, making their visual cues highly variable and weakly discriminative. Visual event recognition is therefore an extremely challenging task in computer vision. This work frames a visual event recognition framework for consumer videos by leveraging a large amount of loosely labeled web videos. The videos are divided manually into training and testing sets. A simple method, Aligned Space-Time Pyramid Matching, is proposed to effectively measure the distance between two video clips from different domains: each video is divided into space-time volumes over multiple levels. A new transfer learning method, referred to as Adaptive Multiple Kernel Learning, fuses information from multiple pyramid levels and features, and copes with the considerable variation in feature distributions between videos from the two domains (the web video domain and the consumer video domain). With the help of MATLAB Simulink, videos are divided and compared with web-domain videos. The inputs are taken from the Kodak data set, and the results are given in the form of MATLAB simulations.
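The pyramid-matching step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the per-cell intensity histogram, the chi-square-style cell distance, and the fixed (unaligned) cell correspondence are simplifying assumptions chosen for illustration; the actual method also aligns volumes across domains before matching.

```python
import numpy as np

def spacetime_volumes(video, level):
    """Split a video array of shape (T, H, W) into 2**level cells along
    each axis, giving 8**level space-time volumes at this pyramid level.
    (Hypothetical partitioning scheme, assumed for illustration.)"""
    n = 2 ** level
    T, H, W = video.shape
    cells = []
    for t in range(n):
        for y in range(n):
            for x in range(n):
                cells.append(video[t * T // n:(t + 1) * T // n,
                                   y * H // n:(y + 1) * H // n,
                                   x * W // n:(x + 1) * W // n])
    return cells

def cell_histogram(cell, bins=16):
    """Toy per-cell feature: an L1-normalized intensity histogram
    (stand-in for the real visual descriptors)."""
    h, _ = np.histogram(cell, bins=bins, range=(0.0, 1.0))
    s = h.sum()
    return h / s if s > 0 else h.astype(float)

def pyramid_distance(video_a, video_b, levels=2):
    """Sum chi-square-style distances between corresponding cells over
    all pyramid levels. Cells are matched by position here; the paper's
    method additionally aligns cells between the two domains."""
    total = 0.0
    for level in range(levels):
        ha = [cell_histogram(c) for c in spacetime_volumes(video_a, level)]
        hb = [cell_histogram(c) for c in spacetime_volumes(video_b, level)]
        for a, b in zip(ha, hb):
            total += 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-9))
    return total
```

With synthetic inputs, two identical clips yield a distance of zero, and the distance grows as the clips' space-time intensity statistics diverge. In the full framework, these per-level distances would be turned into kernels and combined by the Adaptive Multiple Kernel Learning stage.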