Abstract

Attention can be biased by previous learning and experience. We present an algorithmic-level model of this selection history bias in visual attention that quantitatively predicts how stimulus-driven processes, goal-driven control, and selection history compete to control attention. In the model, the output of saliency maps, providing stimulus-driven guidance, interacts with a history map that encodes learning effects and with goal-driven task control to prioritize visual features. The model operates on coded features rather than on image pixels, as is common in many traditional saliency models. We test the model on reaction time (RT) data from a psychophysical experiment. The model accurately predicts parameters of RT distributions from an integrated priority map that is an optimal, weighted combination of the separate maps. Analysis of the weights confirms selection history effects on attention guidance. The model captures individual differences between participants' RTs and response probabilities per group. Moreover, we demonstrate that a model with a reduced set of maps performs worse, indicating that integrating history, saliency, and task information is required for a quantitative description of human attention. Finally, we show that adding an intertrial effect (as another lingering bias) to the model improves its predictive performance.
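As a rough illustration of the map integration described above, the following is a minimal Python sketch of a weighted combination of feature-coded maps into a single priority map. The function name `integrate_priority`, the three-feature toy maps, and the weight values are hypothetical assumptions for demonstration, not the paper's actual parameterization.

```python
import numpy as np

def integrate_priority(saliency, history, task, weights):
    """Combine feature-coded maps into one priority map (illustrative sketch).

    saliency, history, task: 1-D arrays over coded features (not pixels).
    weights: dict with keys 'saliency', 'history', 'task'.
    """
    priority = (weights['saliency'] * saliency
                + weights['history'] * history
                + weights['task'] * task)
    # Normalize so values can be read as relative attentional priority.
    return priority / priority.sum()

# Toy example with three coded features (values are made up):
saliency = np.array([0.5, 0.3, 0.2])   # stimulus-driven guidance
history  = np.array([0.1, 0.7, 0.2])   # learned selection-history bias
task     = np.array([0.2, 0.2, 0.6])   # goal-driven task control
weights  = {'saliency': 0.4, 'history': 0.35, 'task': 0.25}

priority = integrate_priority(saliency, history, task, weights)
print(priority)  # higher value -> feature more likely to win attention
```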