Abstract

Person re-identification (ReID) is an important area of pedestrian analysis with practical applications in visual surveillance. In person ReID, robust feature representation is a key issue because of the inconsistent visual appearance of a person across views. In addition, an exhaustive gallery search is required to match each probe image against the gallery. To address these issues, this manuscript presents a framework combining features-based clustering and deep features for person ReID. The proposed framework first extracts three types of handcrafted features from the input images, covering shape, color, and texture, for feature representation. To obtain optimal features, a feature fusion and selection technique is applied to these handcrafted features. Next, to optimize the gallery search, features-based clustering splits the whole gallery into $k$ consensus clusters. A radial basis function kernel is employed to learn the relationship between gallery features and the labels of the chosen clusters. Images are then selected cluster-wise and passed to a deep convolutional neural network model to obtain deep features, and a cluster-wise feature vector is formed by fusing the deep and handcrafted features. In the feature-matching stage, a multi-class support vector machine selects the relevant cluster. Finally, to find the correct matching pair within the classified cluster(s) instead of searching the whole gallery, a cross-bin histogram-based distance similarity measure is used. The rank-1 recognition rates attained are 46.82%, 48.12%, and 40.67% on the VIPeR, CUHK01, and iLIDS-VID datasets, respectively, confirming that the proposed framework outperforms existing ReID approaches.
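The matching pipeline described above (cluster the gallery, learn cluster labels with an RBF-kernel multi-class SVM, then search only the predicted cluster with a histogram distance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: standard k-means stands in for the paper's consensus clustering, randomly generated vectors stand in for the fused handcrafted and deep features, and a chi-square distance stands in for the paper's cross-bin histogram measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for fused handcrafted + deep feature vectors.
gallery_feats = rng.random((60, 16))   # 60 gallery images, 16-D features
probe_feat = rng.random(16)            # one probe image

# Step 1: split the whole gallery into k clusters
# (plain k-means as a stand-in for consensus clustering).
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(gallery_feats)

# Step 2: learn the relationship between gallery features and cluster
# labels with an RBF-kernel multi-class SVM, as the abstract describes.
svm = SVC(kernel="rbf").fit(gallery_feats, km.labels_)

# Step 3: classify the probe into a cluster, then match only within that
# cluster instead of the whole gallery.
cluster = int(svm.predict(probe_feat[None, :])[0])
members = np.where(km.labels_ == cluster)[0]

def chi2(a, b, eps=1e-9):
    # Simple bin-wise chi-square distance; a placeholder for the paper's
    # cross-bin histogram-based similarity measure.
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

best = min(members, key=lambda i: chi2(gallery_feats[i], probe_feat))
print(f"probe assigned to cluster {cluster}, best match: gallery image {best}")
```

Restricting the distance computation to `members` is what gives the claimed speed-up over an exhaustive gallery search: only one cluster's images are compared against the probe.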
