Abstract

Person re-identification has gained increasing attention in both academic research communities and industrial labs in recent years. It remains a challenging problem because reliable and distinctive features must be extracted and matched under different camera views across a wide spatial and temporal scope. To address these issues, we propose a resolution-adaptive method that extracts and fuses global and local features within a unified framework. Specifically, global and local features are extracted separately at two image scales constructed in a cascaded way. After extracting HS and HOG features and ranking at the low scale, we choose the top k percent as a candidate subset; meanwhile, we obtain another person subset by LPQ (Local Phase Quantization) face detection. The union of these two candidate subsets is used for high-scale processing, in which the wHSV (weighted Hue Saturation Value), LSCF (Local Spatial Constrain Feature) and MSCR (Maximally Stable Colour Regions) local features are adopted. Afterwards, the extracted global and local features are fused with an unsupervised query-adaptive method, based on which person re-identification is conducted with high accuracy. Experiments are conducted on real-world datasets: ETHZ 1, 2 and 3, and our own dataset captured by high-resolution cameras in real road and campus scenes. Experimental results demonstrate that the proposed method outperforms conventional methods in terms of both accuracy and efficiency.
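The cascaded candidate-selection step described above (rank gallery identities by low-scale similarity, keep the top k percent, then union with the face-detection subset before high-scale matching) can be sketched as follows. This is a minimal illustration under stated assumptions: the function names, score values, and identity labels are hypothetical placeholders, and the actual HS/HOG, LPQ, wHSV, LSCF and MSCR feature extractors from the paper are not implemented here.

```python
def low_scale_candidates(scores, k_percent):
    """Rank gallery IDs by their low-scale (HS + HOG) similarity score
    and keep the top k percent as the candidate subset.

    `scores` maps gallery identity -> similarity score (hypothetical values).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_keep = max(1, round(len(ranked) * k_percent / 100))
    return set(ranked[:n_keep])


def candidate_union(low_scale_scores, face_detected_ids, k_percent):
    """Union of the top-k low-scale subset with the subset obtained
    by LPQ face detection; this union is passed to high-scale processing."""
    return low_scale_candidates(low_scale_scores, k_percent) | set(face_detected_ids)


# Hypothetical example: five gallery identities with placeholder scores.
scores = {"p1": 0.9, "p2": 0.4, "p3": 0.7, "p4": 0.2, "p5": 0.6}
cands = candidate_union(scores, face_detected_ids={"p4"}, k_percent=40)
# "p1" and "p3" survive the low-scale ranking (top 40% of 5 = 2 identities);
# "p4" enters the candidate set via face detection despite its low score.
```

The design point the sketch illustrates is that the two subsets are complementary: the face-detection branch can recover identities that the low-scale appearance ranking would otherwise discard.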
