Abstract
In this paper we present a robust motion recognition framework for both motion capture and RGB-D sensor data. We extract four different types of features and apply a temporal difference operation to form the final feature vector for each frame in the motion sequences. The frames are classified with the extreme learning machine, and the final class of an action is obtained by majority voting. We test our framework with both motion capture and Kinect data and compare the results of different features. The experiments show that our approach can accurately classify actions with both sources of data. For 40 actions of motion capture data, we achieve 92.7% classification accuracy with real-time performance.
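The pipeline the abstract describes can be sketched in miniature: a temporal difference operation over per-frame features, a per-frame classifier (the paper uses an extreme learning machine; a stand-in is used here), and a majority vote over frame labels. All names and the toy classifier below are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the recognition pipeline from the abstract:
# temporal-difference features -> per-frame classification -> majority vote.
# The per-frame classifier is a stand-in, not the paper's ELM.
from collections import Counter


def temporal_difference(features, step=1):
    """Frame-to-frame difference of a feature sequence (list of vectors)."""
    return [
        [b - a for a, b in zip(features[i], features[i + step])]
        for i in range(len(features) - step)
    ]


def classify_action(frame_features, classify_frame):
    """classify_frame maps one feature vector to a class label;
    the action label is the majority vote over all frames."""
    votes = Counter(classify_frame(f) for f in frame_features)
    return votes.most_common(1)[0][0]


# Toy usage with a stand-in per-frame classifier:
frames = [[0.0, 1.0], [0.5, 1.5], [2.0, 3.0], [2.5, 3.5]]
diffs = temporal_difference(frames)
label = classify_action(diffs, lambda f: "walk" if f[0] < 1.0 else "run")
print(label)  # -> walk (two of three difference frames vote "walk")
```

Majority voting makes the final label robust to occasional per-frame misclassifications, which is the rationale the abstract gives for aggregating frame-level decisions.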