Abstract

Metric learning with triplet loss is among the most effective approaches to face verification: it aims to minimize the distance between positive pairs while maximizing the distance between negative pairs in the feature embedding space. Previous methods suffer from two limitations: arduous hard-triplet mining and insufficient inter-class and intra-class variation. In this paper, we propose an improved triplet loss based on a deep neural network for end-to-end metric learning, which effectively reduces the number of possible triplets and increases the proportion of hard triplets. It considers not only the relative distance between positive and negative pairs that share the same probe image, but also the absolute distance between positive and negative pairs with different probe images; by adding a new constraint that pushes negative pairs away from positive pairs, it yields smaller intra-class variation and larger inter-class variation. In particular, we propose a dynamic margin based on the distribution of positive and negative pairs within a batch, which avoids the under-sampling and over-sampling problems. Our method is evaluated on the LFW and YTF datasets, the most widely used face-verification benchmarks. The experimental results show that the proposed method greatly outperforms the standard triplet loss and achieves state-of-the-art performance in less time.
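To make the described objective concrete, the following is a minimal NumPy sketch of a triplet-style loss combining the standard per-probe relative constraint with an additional absolute constraint across probes and a batch-dependent margin. The function name, the exact form of the dynamic margin, and the way the cross-probe constraint is averaged are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np

def improved_triplet_loss(anchors, positives, negatives, base_margin=0.2):
    """Sketch of a triplet loss with an extra cross-probe constraint and a
    dynamic margin. The dynamic-margin formula below is an assumption.

    anchors, positives, negatives: (B, D) arrays of embeddings, where row i
    gives a probe, a same-identity image, and a different-identity image.
    """
    # Squared Euclidean distances of positive and negative pairs.
    d_pos = np.sum((anchors - positives) ** 2, axis=1)  # same identity
    d_neg = np.sum((anchors - negatives) ** 2, axis=1)  # different identity

    # Dynamic margin from the batch's pair-distance distribution (assumed
    # form): widen the margin when positive and negative distances overlap.
    margin = base_margin + max(0.0, d_pos.mean() - d_neg.mean())

    # Relative constraint: for each probe, d(a, p) + margin < d(a, n).
    relative = np.maximum(0.0, d_pos - d_neg + margin)

    # Absolute constraint: every negative-pair distance in the batch should
    # exceed every positive-pair distance, even across different probes.
    absolute = np.maximum(0.0, d_pos[:, None] - d_neg[None, :] + margin)

    return relative.mean() + absolute.mean()
```

With well-separated embeddings both hinge terms vanish and the loss is zero; when negatives sit as close to the probe as positives, both constraints are violated by the margin.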
