Abstract

To let users customize multi-touch gestures for different applications and to facilitate multi-touch gesture recognition, an application-oriented, shape-feature-based multi-touch gesture description and recognition method is proposed. In this method, multi-touch gestures are classified into two categories, atomic gestures and combined gestures, where a combined gesture is a composition of atomic gestures through temporal, spatial, and logical relationships. For description, users' motions are mapped to gestures, and the semantic constraints of an application are extracted to build the accessible relationships between gestures and entity states. For recognition, the trajectories of a gesture are projected onto an image, and the shape feature of each trajectory, together with the relationships among trajectories, is extracted and matched against gesture templates. Experiments show that the method is independent of the multi-touch platform, robust to differences in how users perform gestures, and scalable and reusable across users and applications.
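To make the recognition step concrete, the following is a minimal, heavily simplified sketch of the idea of projecting a trajectory onto an image and matching shape features against templates. All names, the grid size, and the descriptor choice (a coarse row/column occupancy histogram) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical illustration of trajectory-to-image matching (not the paper's
# actual algorithm): rasterize a touch trajectory onto a small binary grid,
# compute a crude shape descriptor, and pick the nearest stored template.

def rasterize(points, size=8):
    """Project a trajectory (list of (x, y) with coords in [0, 1]) onto a size x size binary grid."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        col = min(int(x * size), size - 1)
        row = min(int(y * size), size - 1)
        grid[row][col] = 1
    return grid

def shape_feature(grid):
    """A simple shape descriptor: per-row and per-column occupancy counts."""
    rows = [sum(r) for r in grid]
    cols = [sum(r[c] for r in grid) for c in range(len(grid))]
    return rows + cols

def match(trajectory, templates):
    """Return the label of the template whose feature vector is closest (L1 distance)."""
    feat = shape_feature(rasterize(trajectory))
    def dist(label):
        tf = shape_feature(rasterize(templates[label]))
        return sum(abs(a - b) for a, b in zip(feat, tf))
    return min(templates, key=dist)

# Toy single-trajectory templates: a horizontal swipe and a vertical swipe.
templates = {
    "swipe_right": [(i / 10, 0.5) for i in range(11)],
    "swipe_down":  [(0.5, i / 10) for i in range(11)],
}
print(match([(i / 10, 0.45) for i in range(11)], templates))  # a near-horizontal stroke
```

A full implementation along the paper's lines would also encode the relationships among the multiple trajectories of a multi-touch gesture, which this single-trajectory sketch omits.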
