Abstract
Although most of us communicate using multiple sensory modalities in our daily lives, and many of our computers are similarly capable of multi-modal interaction, most human-computer interaction remains predominantly visual. This paper describes a toolkit of widgets that can present themselves in multiple modalities and, further, can adapt their presentation to suit the contexts and environments in which they are used. This is of increasing importance as the use of mobile devices becomes ubiquitous.

Keywords: audio, multi-modal, resource-sensitive, sonically enhanced widgets, toolkit