Abstract

Domain generalization (DG) person re-identification (Re-ID), which aims for good generalization to unseen domains, has received widespread attention. Existing DG person Re-ID methods usually adopt multi-source domain training strategies to improve the generalization ability of their models. However, the style of unseen domains usually changes with the application scenario, which requires DG person Re-ID models to adapt to variable real-world conditions. In view of this, we propose a meta separation–fusion network (MSF-Net) for DG Re-ID based on a multi-source domain training strategy and a meta-learning framework. Specifically, we first design a feature separation module to separate the identity-related and background features of samples. Then, we design a feature fusion module to diversify the features of meta-test samples. Through this feature separation and fusion, we obtain meta-test sample features that contain multiple image styles. More importantly, the separation and fusion steps cooperate to form a meta separation–fusion strategy, by which our model can learn more image styles from a limited number of source-domain samples and thus gains a stronger generalization ability on unseen domains. Extensive experimental results show that our MSF-Net achieves reliable performance.
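The abstract does not detail the module architectures, but a minimal sketch of the separation–fusion idea might look as follows, assuming simple linear projection heads for separation and concatenation-based fusion; all module names, dimensions, and the toy data here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn


class FeatureSeparation(nn.Module):
    """Split a backbone feature into identity-related and background parts.
    (Illustrative: two linear projection heads; the paper's actual design may differ.)"""
    def __init__(self, dim):
        super().__init__()
        self.id_head = nn.Linear(dim, dim)
        self.bg_head = nn.Linear(dim, dim)

    def forward(self, feat):
        return self.id_head(feat), self.bg_head(feat)


class FeatureFusion(nn.Module):
    """Fuse the identity part of a meta-test sample with a background part drawn
    from another source domain, synthesizing a feature with a new 'style'."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, id_feat, bg_feat_other):
        return self.proj(torch.cat([id_feat, bg_feat_other], dim=-1))


# Toy usage: diversify meta-test features with background features taken from
# a different (meta-train) source domain before computing the meta-test loss.
dim = 256
sep, fuse = FeatureSeparation(dim), FeatureFusion(dim)
meta_test_feat = torch.randn(8, dim)   # backbone features of meta-test samples
meta_train_feat = torch.randn(8, dim)  # backbone features from another source domain
id_f, _ = sep(meta_test_feat)
_, bg_other = sep(meta_train_feat)
diversified = fuse(id_f, bg_other)     # meta-test features carrying a new style
```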
