Abstract

This paper introduces an approach that employs deep features for person re-identification. In contrast to existing works, we focus on using pre-trained deep models and their concept-based outputs to enhance attribute representations of person images. There are two main contributions. First, we investigate recent state-of-the-art deep learning models for the task and provide a comprehensive evaluation. Second, we present an approach to improve the identification accuracy of a standard attribute-based person re-identification method. By using pre-trained models, we avoid training new deep models, which typically requires a large amount of training data and high computational cost. The idea is to utilize the correlation between generic concepts learned by the deep models and specific pre-defined attributes commonly used for person re-identification. We employ the deeply learned features of generic concepts to represent person images. These images and their manually annotated attributes are then used to train attribute classifiers. Given the classifiers, attributes of the probe and gallery images can be automatically extracted to compose attribute-based feature vectors. Re-identification is then performed by matching these vectors. Experiments conducted on several benchmark datasets demonstrate the effectiveness of the proposed approach.
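The pipeline described above (pre-trained feature extraction, attribute classifier training, attribute vector composition, and matching) can be illustrated with a minimal sketch. The choices below are assumptions for illustration only, not the paper's exact implementation: a torchvision ResNet-50 stands in for the pre-trained generic-concept model, scikit-learn linear SVMs for the attribute classifiers, and Euclidean distance for matching attribute vectors.

```python
# Sketch of the attribute-based re-identification pipeline (illustrative assumptions:
# ResNet-50 backbone, LinearSVC attribute classifiers, Euclidean-distance matching).
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

# Pre-trained model used only as a fixed feature extractor (no re-training).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(image_paths):
    """Extract deep 'generic concept' features for a list of person images."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

def train_attribute_classifiers(train_feats, train_attrs):
    """Train one binary classifier per manually annotated attribute (columns of train_attrs)."""
    return [LinearSVC().fit(train_feats, train_attrs[:, a])
            for a in range(train_attrs.shape[1])]

def attribute_vectors(classifiers, feats):
    """Compose attribute-based feature vectors from the classifiers' scores."""
    return np.stack([clf.decision_function(feats) for clf in classifiers], axis=1)

def rank_gallery(probe_vec, gallery_vecs):
    """Re-identification: rank gallery images by distance to the probe's attribute vector."""
    dists = np.linalg.norm(gallery_vecs - probe_vec, axis=1)
    return np.argsort(dists)
```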
