Abstract
This paper focuses on improving the performance of image-to-video person re-identification through feature fusion. In this study, image-to-video person re-identification is formulated as classification-based information retrieval: a pedestrian appearance model is learned in the training phase, and the identity of a person of interest is determined from the probability that his/her probe image belongs to the model. Four state-of-the-art features from two categories, hand-designed features and learned features, are investigated for person image representation: Kernel Descriptor, Gaussian of Gaussian, and features extracted from two well-known convolutional neural networks (GoogleNet and ResNet). Furthermore, three fusion schemes are proposed: early fusion, product-rule late fusion, and query-adaptive late fusion. To evaluate the performance of the chosen features for person appearance representation, as well as their combinations under the three proposed fusion schemes, 114 experiments were conducted on two public benchmark datasets (CAVIAR4REID and RAiD). The experiments confirm the robustness and effectiveness of the proposed fusion schemes, which obtain rank-1 improvements of +7.16%, +5.42%, and +6.30% over the single-feature results in case A of CAVIAR4REID, case B of CAVIAR4REID, and RAiD, respectively.
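The following is a minimal sketch of how the three fusion schemes named above could operate at the feature and score level. It assumes each feature (e.g. Kernel Descriptor, Gaussian of Gaussian, GoogleNet, ResNet) yields either a descriptor vector or a per-identity probability distribution from its own classifier; the function names and the per-query weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def early_fusion(descriptors):
    """Early fusion (assumed form): concatenate the per-image feature
    vectors into one representation before training a single classifier."""
    return np.concatenate(descriptors, axis=-1)

def product_rule_late_fusion(score_lists):
    """Product-rule late fusion: multiply per-identity probability scores
    from the feature-specific classifiers element-wise, then renormalize."""
    fused = np.ones_like(score_lists[0], dtype=float)
    for scores in score_lists:
        fused *= scores
    return fused / fused.sum()

def query_adaptive_late_fusion(score_lists, query_weights):
    """Query-adaptive late fusion (illustrative): weight each feature's
    score distribution by a per-query confidence before summing. The exact
    weighting rule is paper-specific; here the weights are supplied directly."""
    w = np.asarray(query_weights, dtype=float)
    w = w / w.sum()
    return sum(wi * s for wi, s in zip(w, score_lists))

# Toy example: three features scoring four candidate identities.
scores = [np.array([0.6, 0.2, 0.1, 0.1]),
          np.array([0.5, 0.3, 0.1, 0.1]),
          np.array([0.4, 0.3, 0.2, 0.1])]
print(np.argmax(product_rule_late_fusion(scores)))            # fused top-1 identity
print(np.argmax(query_adaptive_late_fusion(scores, [2, 1, 1])))
```

In all three cases the fused output is used only for ranking candidate identities, so any monotonic renormalization (such as the one in the product rule above) leaves the rank-1 result unchanged.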