Abstract

This paper presents a model-based approach for designing Polymodal Menus, a new type of multimodal adaptive menu for small-screen graphical user interfaces in which item selection and adaptivity are responsive to more than one interaction modality: a menu item can be selected graphically, tactilely, vocally, gesturally, or by any combination of these. The prediction window containing the most predicted menu items, presented by assignment, equivalence, or redundancy, is made equally adaptive. For this purpose, an adaptive menu model maintains the most predictable menu items according to various prediction methods. This model is exploited throughout the steps defined on a new Adaptivity Design Space based on a Perception-Decision-Action cycle from cognitive psychology. A user experiment compares four conditions of Polymodal Menus (graphical, vocal, gestural, and mixed) in terms of menu selection time, error rate, subjective user satisfaction, and user preference, when item prediction has a low or high level of accuracy. Polymodal Menus offer alternative input/output modalities for selecting menu items in various contexts of use, especially when the graphical modality is constrained.
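
To give a concrete sense of the adaptive menu model sketched above, the following is a rough, hypothetical illustration (not the paper's implementation): the class name AdaptiveMenuModel, the window size, and the frequency/recency scoring are all illustrative assumptions; the paper itself evaluates several prediction methods, and the prediction window would be rendered redundantly across modalities by the UI layer.

```python
from collections import Counter


class AdaptiveMenuModel:
    """Sketch of a model that keeps the most predictable items
    for an adaptive prediction window (assumed scoring scheme)."""

    def __init__(self, items, window_size=3, recency_weight=0.5):
        self.items = list(items)          # full, stable menu
        self.window_size = window_size    # size of the prediction window
        self.recency_weight = recency_weight
        self.frequency = Counter()        # selection counts per item
        self.history = []                 # selections, most recent last

    def record_selection(self, item):
        """Update the model after an item is selected via any modality."""
        self.frequency[item] += 1
        self.history.append(item)

    def score(self, item):
        """Blend selection frequency and recency into one prediction score."""
        freq = self.frequency[item] / max(1, len(self.history))
        try:
            last = max(i for i, h in enumerate(self.history) if h == item)
            recency = 1.0 / (1.0 + (len(self.history) - 1 - last))
        except ValueError:                # item never selected
            recency = 0.0
        return (1 - self.recency_weight) * freq + self.recency_weight * recency

    def prediction_window(self):
        """Return the most predictable items for the adaptive window."""
        return sorted(self.items, key=self.score, reverse=True)[:self.window_size]


# Usage: simulate a few selections and inspect the prediction window.
model = AdaptiveMenuModel(["Open", "Save", "Print", "Close", "Share"])
for choice in ["Save", "Print", "Save", "Share", "Save"]:
    model.record_selection(choice)
print(model.prediction_window())  # ['Save', 'Share', 'Print']
```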

