Abstract

This paper presents a vision-based human action recognition system to support human interaction with companion robots. The system is divided into three parts: motion map construction, feature extraction, and human action classification. First, the Kinect 2.0 captures depth images and color images simultaneously using its depth sensor and RGB camera. Second, the information in the depth images and the color images is used to construct three depth motion maps and a color motion map, respectively. These maps are then combined into a single image, from which the corresponding histogram of oriented gradients (HOG) features are calculated. Finally, a support vector machine (SVM) classifies the HOG features into human actions. The proposed system recognizes eight kinds of human actions: waving the left hand, waving the right hand, holding the left hand, holding the right hand, hugging, bowing, walking, and punching. Three databases were used to test the proposed system: Database1 contains videos of adult actions, Database2 contains videos of child actions, and Database3 contains human action videos captured with a moving camera. The recognition accuracy rates on the three databases were 88.7%, 74.37%, and 51.25%, respectively. The experimental results show that the proposed system is efficient and robust.
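
The pipeline described above can be summarized as a motion-map, HOG, SVM chain. Below is a minimal Python sketch of that chain under stated assumptions; the projection scheme, image sizes, thresholds, HOG parameters, and SVM kernel are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch of the described pipeline: depth/color motion maps ->
# HOG features -> SVM classifier. All concrete parameters (projection
# binning, tile size, thresholds, HOG settings, SVM kernel) are
# assumptions for illustration, not the paper's configuration.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def project_views(depth, nbins=64):
    """Project one depth frame onto front, side, and top planes.
    Side/top views scatter depth values into occupancy bins, a rough
    stand-in for the paper's orthogonal projection step."""
    h, w = depth.shape
    d = np.clip(depth.astype(float) / max(depth.max(), 1) * (nbins - 1),
                0, nbins - 1).astype(int)
    front = depth.astype(float)
    side = np.zeros((h, nbins))
    top = np.zeros((nbins, w))
    rows, cols = np.nonzero(depth)
    side[rows, d[rows, cols]] = 1.0
    top[d[rows, cols], cols] = 1.0
    return front, side, top

def motion_map(frames, threshold=0.0):
    """Accumulate thresholded absolute differences between consecutive
    frames; the threshold (assumed) suppresses sensor noise."""
    acc = np.zeros_like(frames[0], dtype=float)
    for prev, curr in zip(frames[:-1], frames[1:]):
        diff = np.abs(curr.astype(float) - prev.astype(float))
        acc += np.where(diff > threshold, diff, 0.0)
    return acc

def dmm_hog_feature(depth_frames, color_frames):
    """One feature vector per clip: three depth motion maps plus one
    color motion map, tiled into a single image and described by HOG."""
    fronts, sides, tops = zip(*(project_views(f) for f in depth_frames))
    gray = [f.mean(axis=2) for f in color_frames]  # color -> intensity
    maps = [motion_map(fronts, threshold=10.0),    # three depth views
            motion_map(sides), motion_map(tops),
            motion_map(gray, threshold=10.0)]      # color view
    tile = np.hstack([resize(m, (64, 64)) for m in maps])
    return hog(tile, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Toy usage with synthetic clips standing in for Kinect recordings;
# a real run would train on features from the databases above.
rng = np.random.default_rng(0)
def fake_clip(n=8, h=120, w=160):
    depth = [rng.integers(0, 4096, (h, w)).astype(np.uint16) for _ in range(n)]
    color = [rng.integers(0, 256, (h, w, 3)).astype(np.uint8) for _ in range(n)]
    return depth, color

X = np.stack([dmm_hog_feature(*fake_clip()) for _ in range(4)])
y = np.array([0, 1, 0, 1])         # placeholder labels (8 actions in the paper)
clf = SVC(kernel="rbf").fit(X, y)  # kernel choice is an assumption
print(clf.predict(X[:1]))
```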
