Abstract

Browsing multimedia objects, such as photos, videos, documents, and maps, is a frequent activity in contexts where an end-user interacts with a large vertical display in the presence of bystanders, such as a meeting in a corporate environment or a family display at home. In these contexts, mid-air gesture interaction suits a large variety of end-users, provided that gestures are consistently mapped to similar functions across media types. We present Lui (Large User Interface), a ready-to-deploy, ready-to-use application for browsing multimedia objects through consistent mid-air gesture interaction on a large display; the application can be customized by mapping new gesture classes to functions in real time. The method followed to design the gesture interaction and to develop the application consists of four stages: (1) a contextual gesture elicitation study (23 participants × 18 referents = 414 proposed gestures) conducted with the various media types to determine a consensus set satisfying consistency; (2) the continuous integration of this consensus set with gesture recognizers into a pipeline software architecture; (3) a comparative test of these recognizers on the consensus set to configure the pipeline with the most efficient ones; and (4) an evaluation of the interface regarding both its global quality and the implemented gestures specifically.
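The second stage describes a pipeline architecture in which gesture recognizers are integrated and new gesture classes can be mapped to functions at run time. The following is a minimal sketch of what such a runtime binding could look like; the GesturePipeline class, its method names, and the stub recognizer are hypothetical illustrations, not the authors' implementation:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]  # a sampled (x, y) position from the sensor

class GesturePipeline:
    """Hypothetical sketch: a recognizer classifies a point sequence into a
    gesture class, and a runtime-editable registry maps classes to functions."""

    def __init__(self, recognizer: Callable[[List[Point]], str]):
        # `recognizer` maps a captured point sequence to a gesture-class label.
        self.recognizer = recognizer
        self.bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture_class: str, action: Callable[[], None]) -> None:
        # Customization step: map (or remap) a gesture class to a function
        # at run time, so the same class stays consistent across media types.
        self.bindings[gesture_class] = action

    def dispatch(self, points: List[Point]) -> None:
        # Recognize the gesture and invoke the bound function, if any.
        label = self.recognizer(points)
        action = self.bindings.get(label)
        if action is not None:
            action()

# Usage: the same "swipe-left" class triggers "next item" for photos,
# videos, documents, and maps alike (consistency across media types).
pipeline = GesturePipeline(recognizer=lambda pts: "swipe-left")  # stub recognizer
pipeline.bind("swipe-left", lambda: print("next item"))
pipeline.dispatch([(0.0, 0.0), (-1.0, 0.0)])
```

Under this reading, swapping the `recognizer` callable is also how stage (3) would configure the pipeline with the most efficient recognizer found in the comparative test.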
