Abstract
This paper presents a novel framework for effective video semantic analysis. The framework has two major components, namely the optical flow tensor (OFT) and hidden Markov models (HMMs). OFT and HMMs are employed because: (1) motion is one of the fundamental characteristics reflecting the semantic information in video, so an OFT-based feature extraction method is developed to make full use of the motion information. Thereafter, to preserve the structural and discriminative information presented by the OFT, general tensor discriminant analysis (GTDA) is used for dimensionality reduction. Finally, linear discriminant analysis (LDA) is utilized to further reduce the feature dimension for discriminative motion representation; and (2) video is a type of information-intensive sequential media characterized by its context-sensitive nature, so video sequences can be analyzed more effectively with temporal modeling tools. In this framework, HMMs are used to model different levels of semantic units (SUs), e.g., shots and events. Experimental results demonstrate the advantages of the proposed framework for semantic analysis of basketball video sequences, and cross-validation illustrates its feasibility and effectiveness.
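As a rough illustration of the pipeline summarized above, the following Python sketch computes per-frame motion descriptors from dense optical flow, reduces them with LDA, and scores sequences against per-event HMMs. It is a minimal sketch under stated assumptions: Farneback optical flow (OpenCV) stands in for the paper's OFT extraction, simple grid pooling replaces the GTDA step, and hmmlearn's GaussianHMM plays the role of the per-SU models. All function names and parameters are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: optical-flow motion features -> dimensionality
# reduction -> per-event HMM scoring. Grid pooling is a stand-in for the
# paper's OFT + GTDA reduction; it does not reproduce the actual method.
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from hmmlearn.hmm import GaussianHMM

def flow_features(frames, grid=(4, 4)):
    """Per-frame motion descriptor: mean flow magnitude over a coarse grid.

    `frames` is a list of grayscale images from one shot.
    """
    feats = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Dense optical flow between consecutive frames (Farneback method).
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
        h, w = mag.shape
        gh, gw = h // grid[0], w // grid[1]
        pooled = [mag[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].mean()
                  for i in range(grid[0]) for j in range(grid[1])]
        feats.append(pooled)
    return np.array(feats)

def train_event_models(sequences, labels, n_states=3):
    """Fit LDA on all frames, then one HMM per semantic unit (event type)."""
    X = np.vstack(sequences)
    y = np.concatenate([[lab] * len(seq) for seq, lab in zip(sequences, labels)])
    lda = LinearDiscriminantAnalysis().fit(X, y)
    models = {}
    for lab in set(labels):
        segs = [lda.transform(s) for s, l in zip(sequences, labels) if l == lab]
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        hmm.fit(np.vstack(segs), lengths=[len(s) for s in segs])
        models[lab] = hmm
    return lda, models

def classify(seq, lda, models):
    """Assign the event label whose HMM yields the highest log-likelihood."""
    z = lda.transform(seq)
    return max(models, key=lambda lab: models[lab].score(z))
```

In this sketch, each labeled training sequence contributes frames to a shared LDA projection, and classification picks the event-level HMM with the largest log-likelihood for a test sequence, mirroring the shot/event SU modeling described in the abstract.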