Abstract

Person re-identification (person re-id) has attracted rapidly increasing attention in the computer vision and pattern recognition research community in recent years. With the goal of producing a ranked list of gallery images matching each query person image, person re-id has been widely explored and a large number of methods have been developed. Because these algorithms rely on different prior assumptions, image features, distance matching functions, etc., each has its own strengths and weaknesses. Motivated by this observation, this paper proposes a novel person re-id method that infers superior fusion results from a variety of existing base person re-id algorithms built on different methodologies or features. To this end, we propose a framework consisting of two steps: 1) a number of existing person re-id methods are implemented, and their ranking results on the test datasets are collected; and 2) a robust fusion strategy is applied to obtain better re-ranked matching results by simultaneously considering the recognition ability of each base re-id method and the difficulty of correctly recognizing each gallery person image, under the generative model of labels, abilities, and difficulties framework. Comprehensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art results on recent popular person re-id datasets.
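To make the two-step framework concrete, the following is a minimal sketch of an ability/difficulty-weighted rank fusion for a single query, in the spirit of the generative model of labels, abilities, and difficulties (known as GLAD in the crowdsourcing literature). The function name `fuse_rankings`, the rank-to-score mapping, the agreement-based ability update, and the difficulty damping heuristic are all illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def fuse_rankings(ranks, n_iters=10):
    """Illustrative ability/difficulty-weighted fusion for one query.

    ranks: (n_methods, n_gallery) array; ranks[j, i] is the rank that
    base re-id method j assigns to gallery image i (0 = best match).
    Returns fused scores (higher = stronger match), plus the estimated
    per-method abilities and per-image difficulties.
    """
    ranks = np.asarray(ranks, dtype=float)
    n_methods, n_gallery = ranks.shape
    scores = 1.0 / (1.0 + ranks)              # map ranks to similarities in (0, 1]
    ability = np.full(n_methods, 1.0 / n_methods)
    for _ in range(n_iters):
        # Consensus score per gallery image, weighted by current abilities.
        consensus = ability @ scores
        # A method's ability grows as its scores track the consensus.
        err = ((scores - consensus) ** 2).mean(axis=1)
        ability = 1.0 / (err + 1e-6)
        ability /= ability.sum()
    consensus = ability @ scores
    # A gallery image is "difficult" when the base methods disagree on it.
    difficulty = ((scores - consensus) ** 2).mean(axis=0)
    # Heuristic: damp the consensus for hard images so that matches the
    # base methods agree on rise in the fused ranking.
    fused = consensus / (1.0 + difficulty)
    return fused, ability, difficulty

# Usage: three base methods ranking four gallery images for one query.
ranks = [[0, 1, 2, 3],
         [0, 2, 1, 3],
         [3, 0, 1, 2]]
fused, ability, difficulty = fuse_rankings(ranks)
print(np.argsort(-fused))  # re-ranked gallery indices, best match first
```

In this sketch the two latent factors play the roles the abstract describes: abilities down-weight unreliable base methods, and difficulties flag gallery images on which the base rankings conflict.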
