Abstract

Technological advances in touch-based devices now allow users to interact with information systems in new ways, with gesture-based interaction being a popular newcomer. Many daily tasks can be performed on mobile devices and desktop computers by means of multi-stroke gestures. Scaling this type of interaction up to larger information systems and software tools is difficult: gesture definitions are platform-specific, and the interaction is often hard-coded in the source code, which hinders its analysis, validation and reuse. To address this problem, we propose gestUI, a model-driven approach to the development of multi-stroke gesture-based user interfaces. The approach supports modelling gestures, automatically generating gesture catalogues for different gesture-recognition platforms, and user-testing the gestures. A model transformation automatically generates the user interface components that support this type of interaction in desktop applications (further transformations are under development). We applied our proposal to two cases: a form-based information system and a CASE tool. We include details of the underlying software technology to pave the way for other research endeavours in this area.
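
To make the idea of a platform-independent gesture model and a model-to-text transformation concrete, the sketch below shows one possible shape such an approach could take. It is not the authors' gestUI metamodel or tooling: the `Gesture`/`Stroke` structures, the textual catalogue format, and the `to_target_catalogue` transformation are all hypothetical, chosen only to illustrate how a multi-stroke gesture defined once could be serialised into a catalogue for a target recogniser.

```python
# Minimal sketch (hypothetical, not the gestUI metamodel): a platform-independent
# multi-stroke gesture model plus a toy model-to-text transformation that emits a
# catalogue entry a platform-specific recogniser could load.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Stroke:
    points: List[Point]        # sampled (x, y) coordinates of one stroke

@dataclass
class Gesture:
    name: str                  # logical name, e.g. "delete"
    strokes: List[Stroke]      # one or more strokes drawn by the user

def to_target_catalogue(gestures: List[Gesture]) -> str:
    """Toy transformation: serialise the gesture model as a text catalogue."""
    lines = []
    for g in gestures:
        lines.append(f"gesture {g.name} strokes={len(g.strokes)}")
        for i, s in enumerate(g.strokes):
            coords = " ".join(f"{x:.1f},{y:.1f}" for x, y in s.points)
            lines.append(f"  stroke {i}: {coords}")
    return "\n".join(lines)

if __name__ == "__main__":
    # An "X" drawn with two strokes, a common mapping for a delete command.
    x_gesture = Gesture("delete", [
        Stroke([(0.0, 0.0), (10.0, 10.0)]),   # first diagonal
        Stroke([(10.0, 0.0), (0.0, 10.0)]),   # second diagonal
    ])
    print(to_target_catalogue([x_gesture]))
```

Keeping the gesture definitions in a model like this, rather than hard-coded in application source, is what would allow several transformations (one per recognition platform) to be generated from a single catalogue, in the spirit of the approach described above.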
