Abstract

Vision-based human action recognition provides an advanced human-computer interface, and research in this field has been carried out actively. However, because actions in our 3D living space can be observed from any position and in any direction, a dynamic-viewpoint environment must be considered. To overcome this viewpoint dependency, we propose a Volume Motion Template (VMT) and a Projected Motion Template (PMT). The proposed VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto a 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which the action can be described in the greatest detail in 2D space. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. The experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.
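
The abstract does not give the paper's exact equations, but since the VMT is described as an extension of the MHI to 3D, a minimal sketch of the idea might look as follows. The function names update_vmt and project_vmt, the parameters tau and delta, and the boolean moving_voxels grid are illustrative assumptions based on the classic MHI recurrence (set moving pixels to tau, decay the rest), lifted from 2D pixels to 3D voxels; they are not taken from the paper.

```python
import numpy as np

def update_vmt(vmt: np.ndarray, moving_voxels: np.ndarray,
               tau: float = 255.0, delta: float = 1.0) -> np.ndarray:
    """One VMT update step: a 3D analogue of the MHI rule.

    Voxels where motion is detected are set to tau; all other voxels
    decay by delta, mirroring H(x, y, t) = tau if moving,
    else max(0, H(x, y, t-1) - delta), applied per voxel.
    `moving_voxels` is a boolean 3D grid marking motion at this frame.
    """
    return np.where(moving_voxels, tau, np.maximum(vmt - delta, 0.0))

def project_vmt(vmt: np.ndarray, view_axis: int = 2) -> np.ndarray:
    """PMT sketch: orthographic projection of the VMT onto the 2D plane
    orthogonal to `view_axis`. Taking the maximum along each ray keeps
    the most recent motion visible, as a 2D MHI would.
    """
    return vmt.max(axis=view_axis)

if __name__ == "__main__":
    grid = np.zeros((64, 64, 64))
    rng = np.random.default_rng(0)
    for _ in range(10):                    # ten synthetic frames
        moving = rng.random(grid.shape) > 0.999
        grid = update_vmt(grid, moving, tau=255.0, delta=25.0)
    pmt = project_vmt(grid, view_axis=2)   # 64x64 projected template
    print(pmt.shape, pmt.max())
```

For an arbitrary virtual viewpoint, the voxel grid would first be rotated so that the viewing direction aligns with a grid axis (for example with scipy.ndimage.rotate), after which the same axis-aligned projection applies; the paper's selection of the optimal virtual viewpoint is not reproduced here.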
