Abstract

The ACT-R cognitive architecture models interaction with only a single display, which limits its ability to describe real environments that involve more than one display. This paper therefore proposes a method for describing human performance in a multi-display environment by developing a head module, because searching for an object beyond the preferred visual angle of ±15° cannot be modeled with ACT-R's visual module alone. The results show that an ACT-R model with the head module is necessary for tasks performed in a multi-display environment. In addition, a separate ACT-R model was developed for cases involving a different head-movement pattern, such as the use of peripheral vision.
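The abstract's core claim is that targets beyond the preferred visual angle of ±15° cannot be handled by the visual module alone and require a head movement. A minimal sketch of that decision rule, as a hypothetical illustration (the function name and structure are assumptions, not ACT-R's actual API):

```python
# Hypothetical illustration of the decision the proposed head module makes;
# the only grounded value is the +/-15 degree preferred visual angle from
# the abstract. This is not ACT-R code.

PREFERRED_VISUAL_ANGLE_DEG = 15.0  # preferred visual angle from the abstract

def requires_head_movement(target_eccentricity_deg: float) -> bool:
    """Return True if the target lies outside the preferred visual angle,
    so that an eye movement alone cannot bring it into view."""
    return abs(target_eccentricity_deg) > PREFERRED_VISUAL_ANGLE_DEG

# A target on the current display vs. one on a second display off to the side:
print(requires_head_movement(10.0))  # False: within the preferred angle
print(requires_head_movement(40.0))  # True: head movement needed
```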
