Abstract
Gesture interfaces have long been pursued in the context of portable computing and immersive environments. However, such interfaces have been difficult to build, in part due to a lack of frameworks for their design and implementation. This paper presents a framework for automatically producing a gesture interface from a simple interface description. Rather than defining hand poses in a low-level, high-dimensional joint-angle space, we describe and recognize gestures in a “lexical” space, in which each hand pose is decomposed into elements of a finger-pose alphabet. The alphabet and its underlying rules are defined as a gesture notation system called GeLex. With a generic hand-pose recognition algorithm and a mechanism that adapts it to a specific application based on an interface description, developing a gesture interface becomes straightforward.
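The idea of a “lexical” space can be made concrete with a small sketch: each finger is assigned a symbol from a finger-pose alphabet, so a whole hand pose becomes a short “word” that a recognizer can match against gesture patterns. The alphabet symbols, type names, and wildcard convention below are illustrative assumptions, not the paper's actual GeLex notation.

```python
from enum import Enum

class FingerPose(Enum):
    # Hypothetical three-letter finger-pose alphabet (not GeLex's real one).
    EXTENDED = "E"
    BENT = "B"
    CLOSED = "C"

def encode(pose):
    """Encode a hand pose (thumb..pinky) as a lexical word, e.g. 'CECCC'."""
    return "".join(f.value for f in pose)

def matches(pose, pattern):
    """Match a pose against a gesture pattern; '*' leaves a finger unconstrained."""
    return all(p == "*" or p == c for p, c in zip(pattern, encode(pose)))

# A "point" gesture: index finger extended, other fingers closed,
# thumb unconstrained in the pattern.
point = (FingerPose.CLOSED, FingerPose.EXTENDED, FingerPose.CLOSED,
         FingerPose.CLOSED, FingerPose.CLOSED)
print(encode(point))            # -> CECCC
print(matches(point, "*ECCC"))  # -> True
```

Matching in this discrete symbol space, rather than thresholding raw joint angles, is what lets an interface description enumerate gestures compactly.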