Abstract

Interest in multimodal research and interfaces has surged, due at least in part to an exponential increase in the amount and variety of information that can be presented to a user. When a great deal of information is presented via a single sensory modality, it can exceed the operator's capacity to manage the information efficiently, producing cognitive overload. As a consequence, the user's performance becomes susceptible to slower response times, loss of situational awareness, faulty decision making, and execution errors. Researchers and designers have responded to these issues by developing and applying multimodal information displays. The cross-disciplinary character of multimodal applications, however, presents a challenge to the accumulation, evaluation, and dissemination of relevant research. We describe the development of a taxonomy for the evaluation and comparison of multimodal display research studies, and its implementation as a database: the Multimodal Query System (MQueS).
