Abstract

Person re-identification has become an active research topic due to its importance in surveillance and forensics applications. The goal is to recognize the same person across disjoint camera views at different times and locations. Most existing approaches try to re-identify a target by modeling its appearance with a single modality, i.e., either a low-level or a mid-level feature representation. In this paper, we propose a novel Multi-Level Semantic Appearance Representation that combines these two complementary characteristics for appearance modeling. The low-level representation relies on the Multi-Channel Co-occurrence Matrix descriptor, extracted from salient body parts, to capture fine appearance characteristics in terms of color and texture. The mid-level representation relies on the Semantic Body Traits descriptor to capture a semantic description of the appearance in terms of clothing and accessory patterns. A weighted-sum score fusion scheme combines the two modalities, with weights reflecting the contribution of each one. Experimental results demonstrate the effectiveness of the proposed approach in terms of both performance and computational complexity through comparisons with state-of-the-art approaches on the VIPeR benchmark dataset.
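The abstract does not give implementation details of the fusion step; as a rough illustration only, a minimal Python sketch of a weighted-sum score fusion could look like the following. The function name, the weight parameter w_low, and the assumption that per-modality matching scores are already normalized to [0, 1] are hypothetical, not taken from the paper.

# Minimal sketch of weighted-sum score fusion, assuming per-modality
# matching scores have already been computed and normalized to [0, 1].
def fuse_scores(low_level_score, mid_level_score, w_low=0.6):
    """Combine a low-level (e.g., MCCM-based) score with a mid-level
    (e.g., semantic-traits-based) score.

    w_low is a hypothetical weight reflecting the impact of the
    low-level modality; (1 - w_low) weights the mid-level modality.
    """
    return w_low * low_level_score + (1.0 - w_low) * mid_level_score

# Example: rank gallery candidates for one probe by fused score.
gallery_scores = {
    "person_A": fuse_scores(0.82, 0.70),
    "person_B": fuse_scores(0.55, 0.91),
}
ranking = sorted(gallery_scores, key=gallery_scores.get, reverse=True)

In practice the weight would be chosen to reflect the relative reliability of each modality, e.g., tuned on a validation split.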
