Abstract

Group re-identification aims to match groups of people across disjoint cameras. In this task, contextual information from neighboring individuals can be exploited to re-identify each individual within the group as well as the group as a whole. However, compared with single-person re-identification, group re-identification brings new challenges, including changes in group layout and group membership. Motivated by the observation that individuals who are close together are more likely to remain in the same group across different cameras than those who are far apart, we propose to model each group as a spatial K-nearest neighbor graph (SKNNG) and design a group context graph neural network (GCGNN) for graph representation learning. Specifically, for each node in the graph, the proposed GCGNN learns an embedding that aggregates the contextual information from neighboring nodes. We design multiple weighting kernels for neighborhood aggregation based on graph properties, including node in-degrees and spatial relationship attributes. We compute the similarity scores between node embeddings of two graphs for group member association, and obtain the matching score between the two graphs by summing the similarity scores of all linked node pairs. Experimental results on three public datasets show that our approach performs favorably against state-of-the-art methods and achieves high efficiency.
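The pipeline described above can be illustrated with a minimal sketch: build a spatial K-nearest-neighbor graph from member positions, aggregate each node's features with those of its spatial neighbors, and score two groups by summing similarities over associated node pairs. This is an illustrative simplification, not the paper's GCGNN: the learned weighting kernels are replaced by a plain (optionally weighted) neighbor mean, and member association uses a simple greedy link-up; all function names here are hypothetical.

```python
import numpy as np

def sknn_graph(coords, k=2):
    """Build a spatial K-nearest-neighbor graph from 2-D member positions.
    coords: (N, 2) array; each node is linked to its k nearest other nodes."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a node is not its own neighbor
    k = min(k, n - 1)
    return [np.argsort(d[i])[:k] for i in range(n)]

def aggregate(features, neighbors, weights=None):
    """One round of neighborhood aggregation: each node's embedding is its
    own feature plus a (optionally weighted) mean of its neighbors' features.
    Stands in for the paper's learned weighting kernels."""
    out = np.empty_like(features)
    for i, nbrs in enumerate(neighbors):
        w = np.ones(len(nbrs)) if weights is None else np.asarray(weights[i])
        out[i] = features[i] + (w[:, None] * features[nbrs]).sum(0) / w.sum()
    return out

def group_match_score(emb_a, emb_b):
    """Greedy member association: cosine similarity between node embeddings
    of the two graphs; the group matching score is the sum of similarities
    over the greedily linked node pairs."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T
    score, used = 0.0, set()
    for i in range(len(a)):
        j = max((c for c in range(len(b)) if c not in used),
                key=lambda c: sim[i, c], default=None)
        if j is not None:
            score += sim[i, j]
            used.add(j)
    return score
```

A one-to-one assignment (e.g. Hungarian matching) would replace the greedy loop in a more faithful implementation; the greedy version keeps the sketch short.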
