Abstract

Person re-identification (Re-ID) is an essential task in computer vision that aims to match a person of interest across multiple non-overlapping camera views. It is fundamentally challenging because of the conflict between the large variation among samples and the limited scale of training sets. Data augmentation based on generative adversarial networks (GANs) is an efficient way to relieve this dilemma. However, existing methods do not consider how to preserve identity information and filter the noise of the generated auxiliary samples during Re-ID training. In this paper, we propose an object quality guided feature fusion network for person re-identification, which consists of a self-supervised object quality estimation module and a feature fusion module. Specifically, the former evaluates the quality of the auxiliary data to filter out noise and disturbing features, while the latter accomplishes feature fusion based on the object quality estimates in a collection-to-collection recognition manner, making full use of the auxiliary data. Extensive performance analysis and experiments on two benchmark datasets (Market-1501 and DukeMTMC-reID) show that our proposed approach outperforms, or is comparable to, the best-performing existing methods.
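The core idea of quality-guided fusion can be sketched as a weighted pooling of per-image descriptors, where each image's contribution is scaled by its estimated quality score. This is a minimal illustration, not the paper's actual network: the `quality_weighted_fusion` function, the softmax weighting, and the `temperature` parameter are all assumptions made for the sketch.

```python
import numpy as np

def quality_weighted_fusion(features, quality, temperature=1.0):
    """Fuse a collection of per-image features into one descriptor.

    Each image is weighted by its estimated quality score, so noisy
    generated (auxiliary) samples contribute less to the fused feature.
    features: (N, D) array of image descriptors.
    quality:  (N,) array of quality scores (higher = more reliable).
    """
    # Softmax over quality scores -> normalized fusion weights
    # (a hypothetical weighting choice for this sketch).
    w = np.exp(quality / temperature)
    w = w / w.sum()
    # Weighted average of the collection's features.
    return (w[:, None] * features).sum(axis=0)

# Toy collection: three 2-D descriptors with differing quality.
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
qual = np.array([2.0, 0.5, 1.0])  # first sample judged most reliable
fused = quality_weighted_fusion(feats, qual)
```

In a collection-to-collection setting, one fused descriptor per identity collection would then be compared across camera views, rather than matching single images directly.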
