Abstract

Person re-identification aims to find the correct match for a person of interest across different camera views among a large number of impostors. While approaches based on the RGB modality have been widely studied, other modalities, such as depth and skeleton data, can be exploited as additional information sources. In this paper, we perform multi-modal and single-modal person re-identification using the RGB, depth and skeleton modalities obtained from RGB-D sensors. First, the depth and RGB images are divided into three regions: head, torso and legs. Each region is then described by histograms of local vector patterns (LVP); the LVPs are extracted from the depth values of pixels in the depth modality and from the gray levels of pixels in the RGB modality. Skeleton features are obtained by computing various Euclidean distances between the joint points of skeleton images. Finally, the features extracted from the different modalities are combined in double and triple combinations using score-level fusion. Experiments are evaluated on two databases, RGBD-ID and KinectREID, and the results illustrate the acceptable performance of the proposed method.
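
The sketch below illustrates two of the steps summarized above: skeleton features as pairwise Euclidean distances between joint points, and score-level fusion of per-modality matching scores. It is a minimal illustration only; the function names, the weighted-sum fusion rule and the example scores are assumptions, not the paper's exact formulation.

```python
import numpy as np

def skeleton_features(joints):
    """Pairwise Euclidean distances between skeleton joint coordinates.

    joints: (J, 3) array of 3-D joint positions from an RGB-D sensor.
    Returns a flat vector of all J*(J-1)/2 pairwise distances.
    """
    diffs = joints[:, None, :] - joints[None, :, :]   # (J, J, 3) coordinate differences
    dists = np.linalg.norm(diffs, axis=-1)            # (J, J) Euclidean distance matrix
    iu = np.triu_indices(len(joints), k=1)            # upper triangle, no diagonal
    return dists[iu]

def fuse_scores(score_dicts, weights=None):
    """Score-level fusion: weighted sum of per-modality matching scores.

    score_dicts: mapping modality name -> array of gallery matching scores.
    weights: optional mapping modality name -> weight (defaults to equal weights).
    """
    modalities = list(score_dicts)
    if weights is None:
        weights = {m: 1.0 / len(modalities) for m in modalities}
    return sum(weights[m] * np.asarray(score_dicts[m]) for m in modalities)

# Hypothetical example: fuse RGB, depth and skeleton scores for one probe
rgb_scores = np.array([0.7, 0.2, 0.5])
depth_scores = np.array([0.6, 0.3, 0.4])
skel_scores = np.array([0.8, 0.1, 0.6])
fused = fuse_scores({"rgb": rgb_scores, "depth": depth_scores, "skeleton": skel_scores})
best_match = int(np.argmax(fused))   # index of the best-matching gallery identity
```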
