Abstract

Although a computer can track thousands of moving objects simultaneously, it often fails to grasp the priority and meaning of their dynamics. Human vision, by contrast, easily tracks multiple objects through saccadic motion: a single-threaded sequence of eye movements lets people shift attention from one object to another, extracting visual intelligence from complex scenes. In this paper, we present a motion-context attention shift (MCAS) model that simulates attention shifts among multiple moving objects in surveillance videos. The MCAS model comprises two modules: a robust motion detector module and a motion-saliency module. Experimental results show that the MCAS model successfully simulates attention shifts when tracking multiple objects in surveillance videos.