Abstract

Recent advances in visual tracking methods allow following a given object or individual in the presence of significant clutter or partial occlusions, in a single camera view or in a set of overlapping views. The question of when person detections in different views, or at different time instants, can be linked to the same individual is of fundamental importance to video analysis in large-scale camera networks. This is the person reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual, as opposed to passive biometrics such as face and gait. Methods that effectively address the challenges posed by changes in illumination, pose, and clothing appearance are discussed. More specifically, the paper reviews the development of a set of models that capture the overall appearance of an individual and can be used effectively for information retrieval. Some of them provide a holistic description of a person, while others require an intermediate step in which specific body parts are identified. Some are designed to extract appearance features over time, while others operate reliably on single images as well. The paper also discusses algorithms for speeding up the computation of signatures; in particular, it describes very fast procedures for computing co-occurrence matrices by leveraging a generalization of the integral representation of images. The algorithms are deployed and tested in a camera network comprising three cameras with non-overlapping fields of view, where a multi-camera multi-target tracker links tracks across cameras by reidentifying the same people appearing in different views.
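
To illustrate the kind of speed-up an integral representation can provide, the sketch below is a minimal Python example, not the paper's actual procedure: all names, the label quantization, and the choice of a single fixed displacement are assumptions. The idea is to precompute one integral image per pair of quantized labels, after which the co-occurrence matrix of any rectangular region follows from four corner lookups per entry.

```python
import numpy as np

def integral_image(indicator):
    # Cumulative sums along both axes, padded with a zero first row and
    # column so that any rectangle sum needs only four lookups.
    ii = np.zeros((indicator.shape[0] + 1, indicator.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = indicator.cumsum(axis=0).cumsum(axis=1)
    return ii

def cooccurrence_integrals(labels, num_levels, dy, dx):
    # For every label pair (a, b), build the integral image of the
    # indicator "pixel has label a AND its (dy, dx) neighbor has label b".
    # This front-loads the per-pixel work so that the co-occurrence
    # matrix of ANY rectangle can later be read off in O(num_levels^2).
    H, W = labels.shape
    shifted = np.full_like(labels, -1)          # -1 marks out-of-bounds neighbors
    shifted[:H - dy, :W - dx] = labels[dy:, dx:]
    integrals = {}
    for a in range(num_levels):
        for b in range(num_levels):
            indicator = ((labels == a) & (shifted == b)).astype(np.int64)
            integrals[(a, b)] = integral_image(indicator)
    return integrals

def region_cooccurrence(integrals, num_levels, top, left, bottom, right):
    # Co-occurrence matrix of the rectangle [top:bottom, left:right),
    # counting pairs whose anchor pixel lies inside the rectangle.
    C = np.zeros((num_levels, num_levels), dtype=np.int64)
    for (a, b), ii in integrals.items():
        C[a, b] = (ii[bottom, right] - ii[top, right]
                   - ii[bottom, left] + ii[top, left])
    return C

# Example: quantized labels for a small image, displacement (1, 1).
labels = np.random.randint(0, 8, size=(128, 64))
tables = cooccurrence_integrals(labels, num_levels=8, dy=1, dx=1)
C = region_cooccurrence(tables, num_levels=8, top=10, left=5, bottom=100, right=60)
```

The precomputation cost grows with the square of the number of quantization levels, but once the tables exist, co-occurrence signatures for many candidate regions (for example, sliding windows or body-part boxes) come essentially for free, which is the practical motivation for this class of methods.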
