Abstract
Vision-based action recognition is already widely used in human–machine interfaces. However, recognizing human actions from different viewpoints remains a challenging problem. To address this issue, a novel multi-view space hidden Markov models (HMMs) algorithm for view-invariant action recognition is proposed. First, a view-insensitive feature representation, which combines an interest-point bag-of-words with an optical-flow amplitude histogram, is used to describe the human action sequences. The combined features not only bridge the gap between the traditional interest-point bag-of-words method and HMMs, but also greatly reduce redundancy in the video. Second, the view space is partitioned into multiple sub-view spaces according to the camera rotation viewpoint, and a human action model is trained with the HMM algorithm in each sub-view space. During recognition, the probabilities of the test sequence (i.e., the observation sequence) under the given multi-view space HMMs are computed, and the similarity between each sub-view space and the viewpoint of the test sequence is analyzed. Finally, the action with an unknown viewpoint is recognized via a probability-weighted combination. Experimental results on the multi-view action dataset IXMAS demonstrate that the proposed approach is highly efficient and effective for view-invariant action recognition.
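The recognition step the abstract describes (per-sub-view HMMs scored against the test sequence, then combined by probability weighting) can be sketched as follows. This is a minimal illustration only: it assumes hmmlearn's GaussianHMM as the HMM implementation, and the function names and the specific soft view-weighting scheme are hypothetical stand-ins, not the authors' exact formulation.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; any HMM library would do


def train_models(train_seqs, n_states=5):
    """train_seqs[a][v] is a list of feature sequences (T_i x D arrays)
    for action `a` observed in sub-view space `v`. Returns one HMM per
    (action, sub-view) pair, as described in the abstract."""
    models = {}
    for a, by_view in train_seqs.items():
        models[a] = {}
        for v, seqs in by_view.items():
            X = np.vstack(seqs)              # concatenate sequences along time
            lengths = [len(s) for s in seqs]  # per-sequence lengths for hmmlearn
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            m.fit(X, lengths)
            models[a][v] = m
    return models


def recognize(models, obs):
    """Classify one observation sequence `obs` (T x D) with unknown viewpoint."""
    actions = list(models)
    views = list(next(iter(models.values())))
    # Log-likelihood of the sequence under every (action, sub-view) HMM.
    loglik = np.array([[models[a][v].score(obs) for v in views] for a in actions])
    # Soft view weights: how well each sub-view space explains the sequence
    # (an illustrative assumption for the similarity analysis in the abstract).
    view_w = np.exp(loglik.max(axis=0) - loglik.max())
    view_w /= view_w.sum()
    # Probability-weighted combination of action scores across sub-view spaces.
    score = (np.exp(loglik - loglik.max()) * view_w).sum(axis=1)
    return actions[int(np.argmax(score))]
```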