Abstract

The use of multimodal displays is becoming increasingly prevalent in Human Factors and Human-Computer Interaction. Existing information processing models and theories predict the benefits of multimodality in user interfaces. While these models have been refined with respect to vision, more granularity is still required for audition. The existing models account mainly for verbal processing in terms of representation, encoding, and retrieval, but they do not sufficiently explain nonverbal processing. In the present paper, I identify research gaps in how representative models handle nonverbal information processing at the levels of working memory and attention. I then propose a preliminary conceptual model supported by neural- and behavioral-level evidence, and discuss evaluations of the model and directions for future work.
