Abstract

Person re-identification (ReID), which aims to match individuals across non-overlapping cameras, has attracted much attention in computer vision due to its research significance and potential applications. Triplet loss-based CNN models have been highly successful for person ReID; the triplet loss optimizes the feature embedding space so that distances between samples of the same identity are much shorter than distances between samples of different identities. Researchers have found that mining hard triplets is crucial to the success of the triplet loss. In this paper, motivated by the focal loss designed for classification models, we propose the triplet focal loss for person ReID. The triplet focal loss adaptively up-weights hard triplet training samples and relatively down-weights easy ones by simply projecting the original Euclidean distances into an exponential kernel space. We conduct experiments on three of the largest benchmark datasets currently available for person ReID, namely Market-1501, DukeMTMC-ReID, and CUHK03, and the experimental results verify that the proposed triplet focal loss greatly outperforms the traditional triplet loss and achieves performance competitive with representative state-of-the-art methods.
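The kernel-projection idea described in the abstract can be sketched as follows. Note this is a minimal illustration, not the authors' implementation: the abstract does not give the exact formula, so the hinge-style margin and the temperature parameter `sigma` are assumptions.

```python
import numpy as np

def triplet_focal_loss(d_ap, d_an, margin=1.0, sigma=0.3):
    """Hinge triplet loss computed after projecting Euclidean distances
    into an exponential kernel space, d -> exp(d / sigma).

    The exponential mapping amplifies large anchor-positive distances and
    small anchor-negative distances, so hard triplets contribute much more
    to the loss (and gradient) than easy ones, which the mapping relatively
    down-weights. The margin and sigma values here are illustrative.
    """
    # Project anchor-positive and anchor-negative distances into kernel space.
    k_ap = np.exp(d_ap / sigma)
    k_an = np.exp(d_an / sigma)
    # Standard hinge-style triplet loss on the projected distances.
    return np.maximum(0.0, k_ap - k_an + margin)

# Easy triplet (positive already much closer than negative): zero loss.
easy = triplet_focal_loss(d_ap=0.2, d_an=1.0)
# Hard triplet (positive farther than negative): sharply amplified loss.
hard = triplet_focal_loss(d_ap=1.0, d_an=0.9)
```

With a plain triplet loss the two cases above would differ by roughly a factor of five; after the exponential projection the easy triplet is driven to zero loss while the hard one dominates, which is the adaptive re-weighting effect the abstract describes.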
