Abstract

Person Re‐identification (Re‐ID) is the task of matching target pedestrians under cross‐camera surveillance. Learning discriminative feature representations is the central challenge in person Re‐ID. A few recent methods introduce text descriptions as auxiliary information to enhance feature representations, as they offer richer semantic information and perspective consistency. However, these works usually process text and images separately, which leads to an absence of cross‐modal interaction. In this article, a Dual‐modal Graph Attention Interaction Network (Dual‐GAIN) is proposed to integrate visual and textual features into a heterogeneous graph and simultaneously model the relationships between them. The proposed Dual‐GAIN consists of two main components: a dual‐stream feature extractor and a Graph Attention Interaction Network (GAIN). Specifically, the dual‐stream feature extractor is utilised to extract visual features and textual features respectively. Visual local features and textual features are then treated as nodes to construct a multi‐modal graph. GAIN introduces cosine‐similarity‐constrained attention weights and performs cross‐modal interaction and feature fusion on this heterogeneous multi‐modal graph. Experiments on public large‐scale datasets, that is, Market‐1501, CUHK03 labelled, and CUHK03 detected, demonstrate that our method achieves state‐of‐the‐art performance.
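
To make the described attention mechanism concrete, below is a minimal PyTorch sketch of a graph‐attention layer whose pairwise weights are constrained by cosine similarity, operating on a node set that mixes visual local features and textual features. The class name `CosineConstrainedGraphAttention`, the additive way the cosine term enters the attention logits, and the single‐layer, single‐head design are assumptions made for illustration; the abstract does not specify the exact formulation used in Dual‐GAIN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineConstrainedGraphAttention(nn.Module):
    """One attention layer over a heterogeneous graph whose nodes mix
    visual local features and textual features (all projected to dim d).
    Hypothetical sketch; not the authors' released implementation."""

    def __init__(self, d: int):
        super().__init__()
        self.proj = nn.Linear(d, d)       # shared projection for all nodes
        self.attn = nn.Linear(2 * d, 1)   # scores a concatenated node pair

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (N, d) stack of visual-local and textual node features
        h = self.proj(nodes)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)   # (N, N, d) "query" copies
        hj = h.unsqueeze(0).expand(n, n, -1)   # (N, N, d) "key" copies
        # Raw pairwise attention logits, GAT-style.
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        # Constrain the logits with pairwise cosine similarity -- one
        # plausible reading of "cosine similarity constrained attention
        # weights"; the abstract does not give the exact formulation.
        cos = F.cosine_similarity(hi, hj, dim=-1)   # (N, N), in [-1, 1]
        alpha = torch.softmax(e + cos, dim=-1)      # row-normalised weights
        return F.elu(alpha @ h)                     # fused node features

# Usage: e.g. 8 visual local features plus 1 textual embedding, 256-dim each.
layer = CosineConstrainedGraphAttention(d=256)
fused = layer(torch.randn(9, 256))   # -> (9, 256) cross-modally fused nodes
```

Biasing the logits with the cosine term encourages the softmax to assign larger weights to semantically similar cross‐modal node pairs; other readings (e.g. masking low‐similarity edges) would be equally consistent with the abstract.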
