Abstract

Interacting with computer applications through actions that end users design themselves, rather than predefined ones, offers advantages such as better memorability in some Human-Computer Interaction (HCI) scenarios. In this paper we propose a method that allows users to issue self-defined mid-air hand gestures as HCI commands after providing a few training samples for each gesture in front of a depth image sensor. The gesture detection and recognition algorithm is based mainly on pattern matching over three separate sets of features, which carry both finger-action and hand-motion information. We conducted an experiment, all in one sitting, in which each subject designed their own set of 8 gestures, provided about 5 samples for each, and then used them to play a game. During the experiment a recognition rate of 66.7% was achieved with a false positive ratio of 22.2%. Further analysis of the collected dataset shows that a recognition rate of up to about 85% can be achieved if more false detections are tolerated.
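The abstract does not spell out the matching procedure, so the following is only a rough illustrative sketch of few-shot template matching over three feature streams. The stream names (`finger`, `motion`, `shape`), the use of dynamic time warping, and the rejection threshold are all assumptions for illustration, not the paper's actual method; the threshold-based rejection merely mirrors the recognition-rate vs. false-positive trade-off the abstract reports.

```python
# Illustrative sketch only: few-shot gesture matching over three feature
# streams. Stream names and the DTW choice are assumptions, not taken
# from the paper.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Length-normalized DTW distance between two (T, d) feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)

# Hypothetical names for the 3 feature sets carrying finger-action and
# hand-motion information.
STREAMS = ("finger", "motion", "shape")

def classify(query, templates, threshold):
    """Match a query gesture against stored user-provided templates.

    `templates` maps gesture name -> list of ~5 samples; each sample is a
    dict mapping a stream name to a (T, d) array. Returns the best-matching
    gesture, or None when the combined distance exceeds `threshold`
    (rejecting uncertain matches trades recognition rate for fewer
    false positives).
    """
    best_name, best_score = None, np.inf
    for name, samples in templates.items():
        for tmpl in samples:
            score = sum(dtw_distance(query[s], tmpl[s]) for s in STREAMS)
            if score < best_score:
                best_name, best_score = name, score
    return best_name if best_score <= threshold else None
```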
