Abstract

We present a distributed system for wide-area multi-object tracking across disjoint camera views. Each camera in the system performs multi-object tracking and maintains its own trackers and trajectories. Feature data are exchanged between adjacent cameras for object matching. We employ a probabilistic Petri net-based approach to account for the uncertainties of the vision algorithms (such as unreliable background subtraction and tracking failure) and to incorporate the available domain knowledge. We combine appearance features of objects with travel-time evidence for target matching and consistent labeling across disjoint camera views. A 3D color histogram, a histogram of oriented gradients, local binary patterns, object size, and aspect ratio are used as the appearance features. The distribution of the travel time is modeled by a Gaussian mixture model. The features are combined with weights assigned according to their reliability. Certain transitions in the probabilistic Petri net are fired by incorporating domain knowledge about the camera configurations and information about the packets received from other cameras. The system is trained to learn the parameters of the matching process, which are then updated online. We first present wide-area tracking of vehicles using three non-overlapping cameras, where the first and third cameras are approximately 150 m apart with two intersections in the blind region. We also present an example of applying our method to a people-tracking scenario. The results demonstrate the effectiveness of the proposed method, and a comparison with related work is presented.
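As a minimal sketch of the matching step described above, the snippet below combines reliability-weighted appearance similarities with a Gaussian-mixture travel-time likelihood. The function names, mixture parameters, and multiplicative fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
import math

def gaussian_pdf(x, mu, sigma):
    # Univariate Gaussian density, used as one mixture component.
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def travel_time_likelihood(t, components):
    # Gaussian mixture model over travel time between two cameras;
    # components is a list of (mixing_weight, mean, std) tuples.
    return sum(w * gaussian_pdf(t, mu, s) for w, mu, s in components)

def match_score(feature_sims, reliabilities, travel_time, gmm):
    # Combine per-feature appearance similarities (e.g. color histogram,
    # HOG, LBP, size, aspect ratio) with weights based on feature
    # reliability, then fuse with the travel-time evidence.
    total = sum(reliabilities)
    appearance = sum(w * s for w, s in zip(reliabilities, feature_sims)) / total
    return appearance * travel_time_likelihood(travel_time, gmm)

# Hypothetical example: two travel-time modes (e.g. two routes through
# the blind region) and three appearance features.
gmm = [(0.6, 10.0, 2.0), (0.4, 20.0, 3.0)]
score = match_score([0.9, 0.8, 0.7], [0.5, 0.3, 0.2], 11.0, gmm)
```

In a real system the mixture parameters would be learned from training data and updated online, and the reliability weights would be adjusted as each feature's discriminative power is observed.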
