Abstract

Recognition of human gestures is an active area of research integral to the development of intuitive human-machine interfaces for ubiquitous computing and assistive robotics. In particular, such systems are key to effective environmental designs that facilitate aging in place. Typically, gesture recognition takes the form of template matching, in which the human participant is expected to emulate a choreographed motion prescribed by the researchers; the robotic response is then a one-to-one mapping from the template classification to a library of distinct responses. In this paper, we explore a recognition scheme based on the Growing Neural Gas (GNG) algorithm that places no initial constraints on how the user performs gestures. Skeletal depth data collected with the Microsoft Kinect sensor are clustered by GNG and used to refine the robotic response associated with the selected GNG reference node. We envision a supervised learning paradigm, similar to the training of a service animal, in which the robot's response converges on the user's desired response by taking user feedback into account. This paper presents initial results showing that GNG effectively differentiates between gestured commands and that, using automated (policy-based) feedback, the system improves its responses over time.
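The core GNG adaptation loop the abstract refers to can be sketched briefly. The Python below is a minimal, illustrative implementation of Fritzke's GNG update (winner selection, node adaptation, edge ageing, and periodic node insertion); the `GrowingNeuralGas` class, all parameter values, and the synthetic data are assumptions for illustration, not the authors' implementation, and deletion of isolated nodes is omitted for brevity.

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal sketch of incremental GNG clustering (Fritzke, 1995)."""

    def __init__(self, dim, eps_b=0.05, eps_n=0.006, age_max=50,
                 insert_every=100, alpha=0.5, decay=0.995, seed=0):
        rng = np.random.default_rng(seed)
        self.w = [rng.standard_normal(dim), rng.standard_normal(dim)]  # reference nodes
        self.err = [0.0, 0.0]                  # accumulated error per node
        self.edges = {(0, 1): 0}               # (i, j) with i < j -> edge age
        self.eps_b, self.eps_n = eps_b, eps_n  # winner / neighbor learning rates
        self.age_max, self.insert_every = age_max, insert_every
        self.alpha, self.decay = alpha, decay
        self.step = 0

    def _neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def adapt(self, x):
        """One GNG update for sample x; returns the winning node index."""
        self.step += 1
        d = [float(np.sum((x - w) ** 2)) for w in self.w]
        s1, s2 = (int(k) for k in np.argsort(d)[:2])
        self.err[s1] += d[s1]
        # Pull the winner and its topological neighbors toward the sample.
        self.w[s1] += self.eps_b * (x - self.w[s1])
        for n in self._neighbors(s1):
            self.w[n] += self.eps_n * (x - self.w[n])
        # Age the winner's edges, refresh the s1-s2 edge, prune stale edges.
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
        # (Removal of nodes left isolated by pruning is omitted here.)
        if self.step % self.insert_every == 0:
            self._insert()
        self.err = [e * self.decay for e in self.err]  # global error decay
        return s1

    def _insert(self):
        # Grow a node halfway between the worst node and its worst neighbor.
        q = int(np.argmax(self.err))
        nbrs = self._neighbors(q)
        if not nbrs:
            return
        f = max(nbrs, key=lambda n: self.err[n])
        r = len(self.w)
        self.w.append(0.5 * (self.w[q] + self.w[f]))
        self.edges.pop(tuple(sorted((q, f))), None)
        self.edges[(q, r)] = 0
        self.edges[(f, r)] = 0
        self.err[q] *= self.alpha
        self.err[f] *= self.alpha
        self.err.append(self.err[q])


# Illustrative use on synthetic 3-D points standing in for skeletal-joint features.
gng = GrowingNeuralGas(dim=3)
for x in np.random.default_rng(1).standard_normal((2000, 3)):
    winner = gng.adapt(x)  # the winner index would key the robot's response library
```

In the scheme the abstract describes, each reference node would be associated with a robot response that is then refined from user (or policy-based) feedback; the index returned by `adapt` stands in for that association here.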
