Abstract

Vision-based gesture recognition technology employs image processing for the detection, segmentation, tracking, and recognition of hand gestures. Gesture recognition falls into two types: static and dynamic. In this paper, we present a hand gesture interface consisting of three main layers (Hand Gesture Interface, Gesture-to-Action Mapping, and Graphical User Interface), built with our approach, named IGMA (Improving Graphical User Interface Based on Gesture Recognition Modeling Approach). IGMA is organized in two stages: a modeling stage and a code generation stage. The model is a Domain-Specific Model that allows both action and gesture modeling. At the code generation stage, a specific code generator transforms all models into final source code using a framework designed for a given target platform. Finally, to simplify the implementation of this method, we propose a process for improving user interface control based on gesture recognition. We validate our approach through experiments on a gastronomy application. The result is an adaptation framework that can guide software engineers in developing graphical user interfaces based on gesture recognition.

