Abstract

Domain generalization (DG) for person re-identification (ReID) has recently attracted increasing attention. Existing DG person ReID methods train on mixed datasets that pool all source domains. However, such mixed datasets exhibit large inter-domain differences because the data distributions of the source domains vary. These differences hinder models from learning domain-invariant representations and degrade generalization to unseen domains. To address this issue, we propose a progressive de-preference task-specific processing network (PDTP-Net) for DG person ReID. First, we design a progressive de-preference domain segmentation strategy that mitigates inter-domain differences by dividing the multiple source domains into different phases, each comprising several training tasks. Second, we design a global and task-specific processing module that enhances the extraction of domain-invariant features by integrating statistical information from the other source domains. Finally, we design a multi-granularity attention module and a group-aware batch normalization strategy to make the features more discriminative and better suited to person ReID. The proposed model is validated under three DG person ReID experimental protocols: Protocol-1, Protocol-2, and leave-one-out experiments. Under Protocol-1, the model improves mean average precision (mAP) and Rank-1 accuracy across all datasets by an average of 0.7% and 0.3%, respectively. Under Protocol-2, it improves mAP and Rank-1 accuracy across all datasets by an average of 2.525% and 2.725%, respectively. In the leave-one-out experiments, it improves mAP and Rank-1 accuracy across all tasks by an average of 0.65% and 0.18%, respectively. Results on several popular datasets suggest that the model achieves state-of-the-art performance in DG person ReID.
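To make the group-aware batch normalization idea concrete, the sketch below shows one common way such a strategy can be realized: each source-domain group keeps its own running mean and variance, so normalization statistics are never mixed across domains with different distributions. This is an illustrative NumPy toy, not the paper's exact formulation; the class name, the shared affine parameters, and the per-group running-statistics layout are assumptions for illustration.

```python
import numpy as np

class GroupAwareBatchNorm:
    """Toy sketch of group-aware batch normalization (illustrative only;
    the paper's exact formulation may differ). Each source-domain group
    maintains its own running mean/variance, while the affine parameters
    gamma/beta are shared across groups."""

    def __init__(self, num_features, num_groups, momentum=0.1, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        # Per-group running statistics: one row per source-domain group.
        self.running_mean = np.zeros((num_groups, num_features))
        self.running_var = np.ones((num_groups, num_features))
        # Affine parameters shared by all groups (an assumed design choice).
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)

    def __call__(self, x, group, training=True):
        """Normalize a batch x of shape (batch, features) drawn from the
        source-domain group with index `group`."""
        if training:
            # Use the current batch statistics and update only this
            # group's running estimates.
            mean, var = x.mean(axis=0), x.var(axis=0)
            self.running_mean[group] = (
                (1 - self.momentum) * self.running_mean[group]
                + self.momentum * mean
            )
            self.running_var[group] = (
                (1 - self.momentum) * self.running_var[group]
                + self.momentum * var
            )
        else:
            # At inference, fall back to the group's running statistics.
            mean, var = self.running_mean[group], self.running_var[group]
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta
```

Keeping statistics separate per group prevents a domain with a shifted feature distribution from contaminating the normalization of the others, which is the usual motivation for domain- or group-specific normalization in DG settings.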
