Abstract
Tracking people across multiple cameras with non-overlapping views is challenging because observations of the same person are separated in time and space and their appearances may vary significantly. This paper proposes a Bayesian model to solve the consistent labeling problem across multiple non-overlapping camera views. Unlike related approaches, our model assumes neither that people are well segmented nor that their trajectories across camera views have been estimated. We formulate a spatial-temporal probabilistic model over a hypothesis space consisting of the potentially matched objects between the exit field of view (FOV) of one camera and the entry FOV of another. We also propose a competitive major color spectrum histogram representation (CMCSHR) for appearance matching between two objects. The spatial-temporal and appearance models are unified in a maximum-a-posteriori (MAP) Bayesian framework. When a newly detected object corresponds to a group hypothesis (more than one object), we further develop an online method that updates the correspondences using an optimal graph matching (OGM) algorithm. Experimental results on three different real scenarios validate the proposed Bayesian model and the CMCSHR method. The results also show that the proposed approach addresses the occlusion/group problem, i.e., finding the corresponding individuals in another camera view for a group of people who walk together into the entry FOV of a camera.
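The MAP decision described above can be sketched as follows. This is a minimal illustration under assumed simplifications, not the paper's implementation: a plain histogram intersection stands in for the proposed CMCSHR appearance model, and a scalar per-hypothesis prior stands in for the full spatial-temporal model.

```python
def histogram_intersection(h1, h2):
    """Appearance similarity: sum of bin-wise minima of two normalized histograms.

    A simple stand-in for the paper's CMCSHR appearance matching.
    """
    return sum(min(a, b) for a, b in zip(h1, h2))


def map_match(observation_hist, hypotheses):
    """Return the index of the hypothesis maximizing prior * appearance likelihood.

    hypotheses: list of (spatial_temporal_prior, candidate_hist) pairs, where
    the prior is a hypothetical scalar summarizing travel-time plausibility
    between the exit FOV of one camera and the entry FOV of another.
    """
    scores = [
        prior * histogram_intersection(observation_hist, cand_hist)
        for prior, cand_hist in hypotheses
    ]
    return max(range(len(scores)), key=scores.__getitem__)


# Example: two candidate matches for a newly observed person; the second has
# both a more plausible transit time (higher prior) and a closer histogram.
obs = [0.5, 0.3, 0.2]
candidates = [(0.2, [0.1, 0.1, 0.8]), (0.7, [0.5, 0.25, 0.25])]
best = map_match(obs, candidates)  # index of the MAP hypothesis
```

For a group hypothesis, the same posterior scores would feed a one-to-one assignment between group members and candidates rather than a single argmax, which is where the OGM step in the abstract would apply.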