Abstract

In this paper, we review three experiments with a mobile application that combines graphical touch-screen input with a speech interface, and we develop a model of input modality choice in multimodal interaction. The model aims to enable the simulation of multimodal human–computer interaction for automatic usability evaluation. The experimental results indicate that modality efficiency and input performance are important moderators of modality choice. Accordingly, we establish a utility-driven model that estimates the probability of modality usage from the parameters of modality efficiency and input performance. Four variants of the model, differing in their training data, are fitted by means of Sequential Least Squares Programming. The analysis reveals a considerable fit to averaged modality usage; when the model is applied to individual modality usage profiles, accuracy decreases significantly. An application example shows how the modality choice mechanism can be deployed to simulate interaction for automatic usability evaluation. Results and possible limitations are discussed.
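To illustrate the kind of fitting procedure the abstract describes, the sketch below fits a simple utility-driven choice model with SciPy's SLSQP optimizer. The linear utility form, the softmax choice rule, and all data values are illustrative assumptions, not the paper's actual model or results.

```python
# Hypothetical sketch of a utility-driven modality-choice model.
# Utilities are a weighted sum of modality efficiency and input performance;
# a softmax over utilities yields the probability of choosing each modality.
# The weights are fitted to observed usage rates via SLSQP, the method
# named in the abstract. All numbers here are synthetic.
import numpy as np
from scipy.optimize import minimize

# Synthetic per-condition parameters for two modalities (touch, speech).
efficiency = np.array([[0.9, 0.4], [0.6, 0.7], [0.3, 0.8]])
performance = np.array([[0.8, 0.5], [0.5, 0.6], [0.4, 0.9]])
# Observed fraction of trials in which touch was chosen, per condition.
observed_touch = np.array([0.85, 0.45, 0.20])

def predict(w):
    """P(choose touch) per condition, from a softmax over linear utilities."""
    u = w[0] * efficiency + w[1] * performance        # (conditions, modalities)
    e = np.exp(u - u.max(axis=1, keepdims=True))      # numerically stable softmax
    p = e / e.sum(axis=1, keepdims=True)
    return p[:, 0]

def loss(w):
    """Sum of squared errors between predicted and observed touch usage."""
    return float(np.sum((predict(w) - observed_touch) ** 2))

# SLSQP handles the bound constraints on the utility weights.
res = minimize(loss, x0=np.array([1.0, 1.0]), method="SLSQP",
               bounds=[(0.0, 10.0), (0.0, 10.0)])
print("fitted weights:", res.x, "loss:", loss(res.x))
```

In this toy setup, the fitted weights trade off how strongly efficiency versus performance drives modality choice; fitting the same model per participant, rather than to averaged usage, corresponds to the individual-profile case where the abstract reports lower accuracy.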
