Abstract
Video streams account for the greater part of internet traffic, driven by platforms such as YouTube and, in China, Youku. This paper presents a novel genre-adaptive solution based on an interdisciplinary approach that combines image and sound processing features from Pattern Recognition with high-level concepts from Mass Communications. We show that videos can be analyzed automatically for bundles of syntactic features that represent high-level semantic concepts typical of certain genres. In this way, semantic concepts called "Key Visuals" [1], as well as generic semantic concepts with obvious relevance to certain genres, can be identified and classified in videos. Once identified, the video shots assigned to these semantic concepts can be combined by a dramaturgical synthesis into video abstracts that serve as trailers, informing prospective viewers about the content of a video. Another possible application is semantic video retrieval based on the identified semantic features. We describe the concepts of a system capable of automatically generating quite convincing video trailers for some important genres by using low-level audio and video processing algorithms, context-based knowledge, and a rule-based system.
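To make the pipeline described above more concrete, the following sketch illustrates one possible rule-based mapping from shot-level low-level features to semantic concepts, followed by a simple dramaturgical ordering of selected shots. The feature names, thresholds, rule set, and selection heuristic are hypothetical stand-ins and are not taken from the paper.

```python
# Illustrative sketch only: feature names, thresholds, and rules below are
# assumptions, not the authors' published implementation.
from dataclasses import dataclass, field

@dataclass
class Shot:
    index: int
    motion: float         # assumed precomputed motion activity (0..1)
    audio_energy: float   # assumed normalized loudness (0..1)
    face_count: int       # assumed face-detector output

    concepts: list = field(default_factory=list)

# Hypothetical rules mapping low-level features to semantic concepts
# (generic concepts or genre-specific "Key Visuals").
RULES = {
    "dialogue":     lambda s: s.face_count >= 2 and s.motion < 0.3,
    "action":       lambda s: s.motion > 0.6 and s.audio_energy > 0.5,
    "establishing": lambda s: s.face_count == 0 and s.motion < 0.2,
}

def classify(shots):
    """Assign semantic concepts to each shot via the rule set."""
    for s in shots:
        s.concepts = [c for c, rule in RULES.items() if rule(s)]
    return shots

def assemble_trailer(shots, order=("establishing", "dialogue", "action")):
    """Pick one representative shot per concept in a fixed dramaturgical order."""
    trailer = []
    for concept in order:
        candidates = [s for s in shots if concept in s.concepts]
        if candidates:
            # simple stand-in heuristic: prefer the loudest candidate shot
            trailer.append(max(candidates, key=lambda s: s.audio_energy))
    return trailer

if __name__ == "__main__":
    shots = classify([
        Shot(0, motion=0.1, audio_energy=0.2, face_count=0),
        Shot(1, motion=0.2, audio_energy=0.4, face_count=2),
        Shot(2, motion=0.8, audio_energy=0.9, face_count=1),
    ])
    print([s.index for s in assemble_trailer(shots)])  # e.g. [0, 1, 2]
```

In a real system the rules would be derived from genre-specific domain knowledge and the features from audio/video analysis; the point here is only the overall shape of a rule-based shot classification and trailer assembly stage.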