Abstract

In a crowded public space, people often walk in groups, either with people they know or with strangers. Associating a group of people over space and time can assist in understanding an individual's behaviours, as it provides vital visual context for matching individuals within the group. This may seem an 'easier' task than person re-identification, given the availability of more and richer visual content when associating a group; however, the problem turns out to be rather challenging because a group of people can be highly non-rigid, with changing relative positions of people within the group and severe self-occlusions. In this work, the problem of matching/associating groups of people over large space and time gaps, captured in multiple non-overlapping camera views, is addressed. Specifically, a novel people-group representation and a group matching algorithm are proposed. The former addresses changes in the relative positions of people within a group, and the latter uses the proposed group descriptors to measure the similarity between two candidate images. Building on group matching, we further formulate a method for matching an individual person using the group description as visual context. These methods are validated on the 2008 i-LIDS Multiple-Camera Tracking Scenario (MCTS) dataset, using multiple camera views from a busy airport arrival hall.
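The abstract does not specify the group representation or the matching algorithm, so the following is only an illustrative sketch of the general idea: a group signature built from colour histograms over concentric rectangular rings around the group's centre, so that the descriptor depends on how far appearance lies from the centre rather than on exactly where each person stands, with groups compared by histogram intersection. All function names, the ring construction, and the parameter choices below are hypothetical stand-ins, not the descriptors proposed in the paper.

import numpy as np

def ring_masks(h, w, n_rings=3):
    # Concentric rectangular-ring masks around the image centre.
    # Binning pixels by normalised distance from the centre makes the
    # descriptor insensitive to people swapping positions within a ring.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Chebyshev-style distance, normalised to [0, 1), gives rectangular rings.
    d = np.maximum(np.abs(ys - cy) / (h / 2.0), np.abs(xs - cx) / (w / 2.0))
    bins = np.minimum((d * n_rings).astype(int), n_rings - 1)
    return [bins == r for r in range(n_rings)]

def group_descriptor(img, n_rings=3, n_bins=8):
    # Per-ring colour histograms over a cropped group image (H x W x 3,
    # uint8), L1-normalised and concatenated into one vector.
    h, w, _ = img.shape
    quant = (img.astype(int) * n_bins) // 256          # quantise each channel
    codes = quant[..., 0] * n_bins * n_bins + quant[..., 1] * n_bins + quant[..., 2]
    desc = []
    for mask in ring_masks(h, w, n_rings):
        hist = np.bincount(codes[mask], minlength=n_bins ** 3).astype(float)
        desc.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(desc)

def group_similarity(d1, d2):
    # Histogram intersection: higher means more similar groups.
    return np.minimum(d1, d2).sum()

# Example: sim = group_similarity(group_descriptor(img_a), group_descriptor(img_b))

Under a scheme of this kind, matching an individual could combine their own appearance score with the similarity of the surrounding group region, which is the role the abstract assigns to the group description as visual context.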
