Abstract
Although person re-identification (person re-id) has advanced substantially in recent years, most methods assume that a person's clothing remains unchanged across camera views. This assumption may not hold in practice, for example when criminals deliberately change clothes to evade identification. In this work, we address person re-id under moderate clothing change. Since the human body shape is relatively invariant to moderate clothing changes, we propose to learn a reliable shape-aware feature representation by mutually learning from both color images and contour images. Rather than extracting shape features directly from contour images, we use contour feature learning as a regularizer and mine more effective shape-aware representations from color images. We propose a multi-scale appearance and contour deep infomax (MAC-DIM) to maximize the mutual information between color appearance features and contour shape features; in this way, the extracted appearance features are constrained to be shape-aware in terms of both low-level visual properties and high-level semantics. To model long-range human body shape and explicitly capture relations among contour segments, we further introduce hierarchical graph modeling as aggregation heads, propagating structural context through graph convolutional networks (GCNs). Extensive results on clothing-change benchmarks demonstrate the effectiveness of our shape-aware feature learning scheme.
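To make the mutual-information objective concrete, below is a minimal sketch (not the authors' released implementation) of a Deep InfoMax-style loss between appearance and contour features. It assumes both encoders emit global feature vectors of equal dimension and uses the Jensen-Shannon lower bound with in-batch shuffling for negative pairs; the discriminator architecture, feature dimension, and function names (`MIDiscriminator`, `jsd_mi_loss`) are illustrative assumptions, and the paper's multi-scale matching across intermediate feature maps is simplified away here.

```python
# Minimal sketch of a Deep InfoMax-style mutual-information loss between
# appearance features (from color images) and shape features (from contour
# images). Names and architecture are hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MIDiscriminator(nn.Module):
    """Scores (appearance, contour) feature pairs; high for matched pairs."""

    def __init__(self, feat_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, appearance: torch.Tensor, contour: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([appearance, contour], dim=1))


def jsd_mi_loss(disc: MIDiscriminator,
                appearance: torch.Tensor,
                contour: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon MI lower bound: matched pairs are positives,
    pairs shuffled within the batch serve as negatives."""
    pos = disc(appearance, contour)
    neg = disc(appearance, contour[torch.randperm(contour.size(0))])
    # softplus form of -E[log sigmoid(pos)] - E[log(1 - sigmoid(neg))]
    return F.softplus(-pos).mean() + F.softplus(neg).mean()


if __name__ == "__main__":
    disc = MIDiscriminator(feat_dim=256)
    app = torch.randn(32, 256)  # appearance features from color images
    con = torch.randn(32, 256)  # shape features from contour images
    loss = jsd_mi_loss(disc, app, con)
    loss.backward()
    print(loss.item())
```

Minimizing this loss pushes the discriminator to separate matched from mismatched pairs, which in turn (when backpropagated into the appearance encoder) encourages appearance features to carry information predictive of the corresponding contour, i.e. to be shape-aware.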