Abstract

Much of the research on modeling human performance in visual perception is formulated either as schematic models based on neural mechanisms or within cognitive architectures. However, both modeling paradigms are limited in multiple-monitor environments. Schematic models based on neural mechanisms can represent the human visual system in multiple-monitor environments by providing a detailed account of eye and head movements, but they cannot easily be applied to complex cognitive interactions. Cognitive architectures, on the other hand, can model the interaction of multiple aspects of cognition, but they have not focused on modeling the visual orienting behavior of eye and head movements. In this study, a specific cognitive architecture, ACT-R, is therefore extended with an existing schematic model of the human visual system based on neural mechanisms in order to model human performance in multiple-monitor environments more accurately, and a method of modeling human performance using the extended ACT-R is proposed. The proposed method is validated by an experiment, confirming that it predicts human performance in multiple-monitor environments more accurately.

Relevance to industry

Predicting human performance with a computational model can serve as an alternative to iterative user testing when developing a system interface. Because the computational model in this study predicts human performance in multiple-monitor environments, it can be applied early in the design phase to evaluate system interfaces intended for such environments.
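
To make the idea of combining a cognitive architecture with a model of eye and head movements concrete, the sketch below shows a toy gaze-shift time estimate in which large gaze shifts (e.g., to a second monitor) incur an additional head-movement cost. This is a minimal illustration only; the threshold and timing constants are assumptions for the example and are not the parameters or formulation used in the paper's extended ACT-R model.

```python
# Illustrative sketch: toy gaze-shift time combining eye and head movements.
# All constants below are assumed values for demonstration purposes.

EYE_ONLY_LIMIT_DEG = 20.0    # assumed eccentricity beyond which the head also moves
EYE_PREP_S = 0.135           # assumed fixed eye-movement preparation time (s)
EYE_EXEC_S_PER_DEG = 0.002   # assumed eye-movement execution time per degree (s/deg)
HEAD_EXEC_S_PER_DEG = 0.007  # assumed head-movement execution time per degree (s/deg)


def gaze_shift_time(eccentricity_deg: float) -> float:
    """Estimate time to relocate gaze to a target at the given eccentricity.

    Small shifts are handled by the eyes alone; larger shifts, such as those
    across monitors, add a head-movement component.
    """
    t = EYE_PREP_S + EYE_EXEC_S_PER_DEG * eccentricity_deg
    if eccentricity_deg > EYE_ONLY_LIMIT_DEG:
        head_deg = eccentricity_deg - EYE_ONLY_LIMIT_DEG
        t += HEAD_EXEC_S_PER_DEG * head_deg
    return t


if __name__ == "__main__":
    # Compare within-monitor targets (small eccentricity) with cross-monitor ones.
    for ecc in (5, 15, 35, 60):
        print(f"{ecc:>3} deg -> {gaze_shift_time(ecc):.3f} s")
```

In a cognitive-architecture setting, an estimate of this kind would replace a visual module's fixed shift-of-attention time, so that task completion time predictions become sensitive to monitor layout.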

