Abstract

When a human perceives videos composed of the same images in various orders, such as normal order, reverse order, and random order, the human visual attention system perceives them as different visual inputs. This means that temporal change in the image sequence exerts a considerable influence on the human visual system. However, most state-of-the-art computational visual attention models have not considered temporal cues adequately. Motivated by this deficiency, we propose a novel temporally irreversible visual attention model that considers the following three aspects. First, the central bias of human dynamic vision is incorporated into the model to reflect the tendency of observers to fixate near the frame center. Second, depth and directional motion-sensitive neurons are fused to discern different motion patterns. Third, a rarity factor is integrated into the model to mimic the attention shift that occurs when a human observer perceives newly emerging motion cues. The proposed model demonstrates its competitiveness in selecting attentive events in our experiments, on both laboratory settings and real driving video clips. When compared with recent visual attention models, the proposed model achieves the highest similarity score with human dynamic vision. The proposed model could serve as one of the fundamental building blocks for visual attention systems coping with dynamic scenes.
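The central-bias component mentioned above is commonly realized as a center-weighted prior multiplied into a raw saliency map. The sketch below illustrates this general idea only; the function name `center_bias_map` and the parameter `sigma_frac` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def center_bias_map(height, width, sigma_frac=0.25):
    """Isotropic Gaussian prior centered on the frame.

    sigma_frac (hypothetical parameter): standard deviation expressed
    as a fraction of the smaller frame dimension.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma = sigma_frac * min(height, width)
    bias = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return bias / bias.max()  # normalize so the peak weight is 1

# Weight a raw saliency map by the central bias
# (random array stands in for a model's raw output).
saliency = np.random.rand(120, 160)
biased = saliency * center_bias_map(120, 160)
```

The multiplicative combination keeps peripheral responses but attenuates them, mirroring the observed tendency of gaze to cluster near the center of dynamic scenes.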
