Abstract
When a human perceives videos composed of the same images in different orders, such as normal order, reverse order, and random order, the human visual attention system treats them as different visual inputs. This means that the temporal change in the image sequence exerts a considerable influence on the human visual system. However, most state-of-the-art computational visual attention models do not consider temporal cues adequately. Motivated by this deficiency, we propose a novel temporally irreversible visual attention model built on the following three aspects. First, the central bias of human dynamic vision is incorporated into the model to reflect this tendency. Second, depth and directional motion-sensitive neurons are fused to discern different motion patterns. Third, a rarity factor is integrated into the model to mimic the attention shift that occurs when a human observer perceives newly emerging motion cues. In our experiments, on both laboratory sequences and real driving video clips, the proposed model is competitive at selecting attentive events, and compared with recent visual attention models it achieves the highest similarity score with human dynamic vision. The proposed model could serve as a fundamental building block for visual attention systems coping with dynamic scenes.
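The three ingredients named above (central bias, motion energy, and rarity) can be illustrated with a minimal saliency sketch. This is not the authors' implementation: the Gaussian center prior, the frame-difference stand-in for motion-sensitive responses, and the self-information rarity weighting are all simplifying assumptions chosen for illustration.

```python
import numpy as np

def center_bias(h, w, sigma=0.3):
    # Gaussian prior favouring the frame centre; sigma is a fraction
    # of the frame size (an assumed parameterisation).
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / (sigma * h)) ** 2 + ((xs - cx) / (sigma * w)) ** 2
    return np.exp(-0.5 * d2)

def rarity(motion, bins=16):
    # Weight each pixel by the self-information of its motion magnitude,
    # so rare (newly emerging) motion patterns receive higher weight.
    hist, edges = np.histogram(motion, bins=bins,
                               range=(motion.min(), motion.max() + 1e-6))
    p = hist / hist.sum()
    idx = np.clip(np.digitize(motion, edges[1:-1]), 0, bins - 1)
    return -np.log(p[idx] + 1e-9)

def saliency(prev_frame, frame):
    # Toy temporal saliency: absolute frame difference as a stand-in for
    # motion-sensitive neuron responses, modulated by rarity and center bias.
    motion = np.abs(frame.astype(float) - prev_frame.astype(float))
    s = motion * rarity(motion) * center_bias(*frame.shape)
    return s / (s.max() + 1e-9)
```

In this sketch a small moving patch near the center scores highest, because all three factors (motion, rarity, central bias) reinforce each other there; the model described in the abstract additionally fuses depth cues and directional motion channels.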