Abstract

Person re-identification (Re-ID) is an important capability for artificial intelligence and human–computer interaction. The main challenge of person Re-ID lies in the limited data available to precisely capture the wide range of appearance variations across multiple viewpoints. Furthermore, compared with single-person Re-ID, person groups contain information about the relationships between pedestrians that can potentially help identify certain identities. Re-ID that combines groups and individuals remains a promising yet rarely studied task. In this paper, we propose a harmonious attention network for person re-identification that jointly considers the complementarity between person groups and individuals. Concretely, we first propose a two-stream attentive network (TSAN) to learn information from person groups and individuals, respectively. TSAN consists of a spatial–temporal fusion network for group Re-ID and a deep network for traditional individual person Re-ID. To jointly exploit the contributions of groups and individuals, we then propose a novel re-ranking algorithm (GIRK) that associates the group and individual information based on the learned features. We also introduce a new group Re-ID dataset, DukeGroupVid, to evaluate the performance of our approach. Comprehensive experimental results on the proposed dataset and other Re-ID datasets demonstrate the effectiveness of our model.
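
The abstract does not specify how GIRK combines the two sources of evidence, so the following is only a rough sketch of the general idea of fusing individual and group distances before ranking gallery candidates. The function name fuse_and_rank, the weight lambda_group, and the plain weighted-sum fusion are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: fuse individual-feature and group-feature distance
# matrices with a weighted sum, then rank gallery samples per query.
# This is NOT the GIRK re-ranking procedure, whose details are not
# given in the abstract; lambda_group and the fusion rule are assumed.
import numpy as np

def fuse_and_rank(dist_individual, dist_group, lambda_group=0.3):
    """Return gallery indices sorted by a fused query-gallery distance.

    dist_individual, dist_group: (num_query, num_gallery) distance
    matrices computed from individual and group features, respectively.
    """
    fused = (1.0 - lambda_group) * dist_individual + lambda_group * dist_group
    return np.argsort(fused, axis=1)  # per-query ranking, closest first

# Toy usage with random distances for 2 queries and 5 gallery samples.
rng = np.random.default_rng(0)
d_ind = rng.random((2, 5))
d_grp = rng.random((2, 5))
print(fuse_and_rank(d_ind, d_grp))
```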
