Abstract

Spatiotemporal data association and fusion are known to be NP-hard even for a small number of cameras and frames. Despite this intractability, solving them is pivotal for tracking in a multicamera network. Most approaches model association without adapting to the properties and contents of the video; they therefore produce suboptimal associations, and the resulting association errors propagate over time and adversely affect fusion. In this paper, we present an online multicamera multitarget tracking framework that performs adaptive tracklet correspondence by analyzing and understanding the contents and properties of the video. Unlike other methods that work only on synchronized videos, our approach uses dynamic time warping to establish correspondence even when the videos have a linear or nonlinear temporal asynchrony. Association is a two-stage process over geometric and appearance descriptor spaces, ranked by their inter- and intra-camera consistency and discriminability. Fusion is reinforced by weighting the associated tracklets with a confidence score computed from the reliability of the individual camera tracklets. Our robust ranking-and-election learning algorithm dynamically selects appropriate features for any given video. Our method establishes that, given the right ensemble of features, even a computationally efficient optimization yields better tracking accuracy over time and converges fast enough for real-time applications. For evaluation on RGB, we benchmark on multiple sequences from PETS 2009 and achieve performance on par with the state of the art. For evaluation on RGB-D, we built a new data set.
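The dynamic time warping mentioned above is a standard technique; a minimal sketch follows, assuming hypothetical 1-D feature tracks (e.g., a target's image-plane coordinate sampled over time in two cameras) rather than the paper's actual descriptors:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences.

    D[i, j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three admissible warping moves
    (match, insertion, deletion). Illustrative only -- not the paper's method.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # skip a frame in b
                                 D[i, j - 1],      # skip a frame in a
                                 D[i - 1, j - 1])  # match the two frames
    return D[n, m]
```

Because the warping path may stretch or compress either sequence locally, two tracks related by a nonlinear time shift can still align with low cost, which is what makes DTW suitable for asynchronous camera feeds.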
