Abstract

We propose a framework for multitarget tracking with feedback that accounts for scene contextual information. We demonstrate the framework on two types of context-dependent events, namely target births (i.e., objects entering the scene or reappearing after occlusion) and spatially persistent clutter. The spatial distributions of birth and clutter events are incrementally learned as mixtures of Gaussians. The resulting models are used by a probability hypothesis density (PHD) filter, which spatially modulates the strength of its birth and clutter terms according to the learned contextual information. Experimental results on a large video surveillance dataset, using a standard evaluation protocol, show that the feedback improves tracking accuracy by 9% to 14% by reducing the number of false detections and false trajectories. This improvement is achieved without increasing the computational complexity of the tracker.
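For context, the two learned models enter the standard PHD recursion (Mahler, 2003) at distinct points: the birth intensity \(\gamma_k(x)\) in the prediction step and the clutter intensity \(\kappa_k(z)\) in the update step. The notation below is the standard one for the PHD filter and is not taken from the paper itself:

\[
D_{k|k-1}(x) = \gamma_k(x) + \int p_S \, f_{k|k-1}(x \mid \zeta)\, D_{k-1}(\zeta)\, \mathrm{d}\zeta
\]
\[
D_k(x) = \bigl[1 - p_D(x)\bigr]\, D_{k|k-1}(x) + \sum_{z \in Z_k} \frac{p_D(x)\, g_k(z \mid x)\, D_{k|k-1}(x)}{\kappa_k(z) + \int p_D(\xi)\, g_k(z \mid \xi)\, D_{k|k-1}(\xi)\, \mathrm{d}\xi}
\]

The following is a minimal Python sketch of the birth side of this feedback loop, not the authors' implementation: it assumes scikit-learn's GaussianMixture and a periodic batch refit in place of the paper's incremental learning, and the class name, component count, image size, and birth rate are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class BirthIntensityModel:
    """Learns a spatial model of where targets enter the scene and
    exposes it as a PHD birth intensity gamma(x). Illustrative only:
    the paper updates its Gaussian mixtures incrementally, while this
    sketch simply refits on all births observed so far."""

    def __init__(self, n_components=4, births_per_frame=0.2,
                 frame_area=640 * 480):
        self.n_components = n_components
        self.births_per_frame = births_per_frame  # assumed expected rate
        self.frame_area = frame_area              # assumed image size
        self.locations = []                       # (x, y) of confirmed births
        self.gmm = None

    def add_birth(self, xy):
        """Record the image position of a confirmed target birth and
        refit the mixture once enough samples have accumulated."""
        self.locations.append(xy)
        if len(self.locations) >= 10 * self.n_components:
            self.gmm = GaussianMixture(self.n_components).fit(
                np.asarray(self.locations))

    def gamma(self, xy):
        """Birth intensity at xy: expected births per frame times the
        learned spatial density (uniform until the model is trained)."""
        if self.gmm is None:
            return self.births_per_frame / self.frame_area
        log_density = self.gmm.score_samples(np.asarray([xy]))[0]
        return self.births_per_frame * float(np.exp(log_density))
```

A clutter model for \(\kappa_k(z)\) can be built symmetrically from the locations of measurements rejected as false alarms. Because both intensities are small wherever the learned mixtures assign little mass, the filter spawns fewer spurious tracks and discounts measurements in persistently cluttered regions, which is consistent with the reported reduction in false detections and false trajectories.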
