Abstract

Gesture is one of the fundamental modes of natural human-machine interaction. To understand gestures, a system must be able to interpret a person's movements in 3D. This paper presents a computer-vision-based, real-time 3D gesture recognition system that uses depth images to track the 3D joint positions of the head, neck, shoulders, arms, hands, and legs. Tracking is performed with a Kinect motion sensor through the OpenNI API, and 3D motion gestures are recognized from the movement trajectories of those joints. The user-to-sensor distance is adapted using a proposed center-of-gravity (COG) correction method, and the 3D joint positions are normalized using a proposed joint-position normalization method. For gesture learning and recognition, data-mining classification algorithms, namely Naive Bayes and a neural network, are used. The system is trained to recognize the 12 signals used by umpires in a cricket match, using about 2000 training instances collected from 15 persons across the 12 gestures. Evaluated with 5-fold cross-validation, the system achieves 98.11% accuracy with the neural network and 88.84% accuracy with Naive Bayes.
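The paper's exact COG-correction and normalization formulas are not given in the abstract, but the general idea of making tracked joint coordinates invariant to the user's distance from the sensor and body size can be sketched as follows. This is a minimal illustration under assumed conventions (an (N, 3) array of joint coordinates, centroid used as a COG proxy, mean joint distance used as the scale), not the authors' actual method.

```python
import numpy as np

def normalize_joints(joints):
    """Make 3D joint positions invariant to sensor distance and body size.

    `joints` is an (N, 3) array of (x, y, z) joint coordinates as a
    skeleton tracker might return them. This sketch subtracts the
    centroid (a simple center-of-gravity proxy, standing in for the
    paper's COG correction) and divides by the mean joint distance
    from that centroid (standing in for the paper's joint-position
    normalization).
    """
    joints = np.asarray(joints, dtype=float)
    cog = joints.mean(axis=0)              # center-of-gravity proxy
    centered = joints - cog                # removes dependence on position
    scale = np.linalg.norm(centered, axis=1).mean()
    return centered / scale                # removes dependence on body size

# A pose scaled by 2 (farther away / larger user) normalizes to the
# same feature vector as the original pose.
pose = np.array([[0.0, 1.7, 2.0],   # head
                 [0.0, 1.5, 2.0],   # neck
                 [-0.2, 1.5, 2.0],  # left shoulder
                 [0.2, 1.5, 2.0]])  # right shoulder
print(np.allclose(normalize_joints(pose), normalize_joints(2.0 * pose)))  # True
```

Features normalized this way could then be fed to a classifier such as Naive Bayes or a neural network, as the abstract describes.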
