Abstract

Compared with cross-entropy loss in deep learning, triplet loss is less affected by biased label information and is widely used in fine-grained visual tasks. In person re-identification (re-ID) in particular, triplet loss is commonly improved with batch-hard sampling, which selects only the hardest samples during training to reduce the number of invalid triplets involved in the loss computation. Computing the loss on the hardest samples yields stronger gradients than using raw samples. However, batch-hard triplet loss discards many samples that carry important information, which can hurt feature learning. Moreover, the hardest samples frequently cause the loss to become stuck during training. In this work, we propose a balanced triplet loss for comprehensive feature learning and stable model convergence. The balanced triplet loss mines only the hardest negative sample of each category within a mini-batch. Compared with batch-hard triplet loss, it preserves the features of all negative categories rather than only the single category containing the hardest negative sample, striking a balance between triplet selection and information loss. Experiments show that our method produces competitive results on re-ID tasks. In addition, we analyze the correlation between the intensity of sample mining and the granularity of feature learning, and further adapt the balanced triplet loss to general fine-grained image classification. Experiments show that the adapted balanced triplet loss also outperforms cross-entropy on multiple datasets of different scales.
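As a rough illustration of the idea described in the abstract, the following PyTorch sketch computes, for each anchor, a hinge term against the hardest (closest) negative of every negative class in the mini-batch, rather than against the single hardest negative as in batch-hard mining, and averages those terms. The function name, margin value, and Euclidean distance choice are assumptions for illustration only, not the authors' reference implementation.

    import torch

    def balanced_triplet_loss(embeddings: torch.Tensor,
                              labels: torch.Tensor,
                              margin: float = 0.3) -> torch.Tensor:
        """Sketch of a balanced triplet loss: for each anchor, pair the
        hardest positive with the hardest negative of EVERY negative
        class present in the mini-batch, then average the hinge terms."""
        # Pairwise Euclidean distance matrix, shape (B, B).
        dist = torch.cdist(embeddings, embeddings, p=2)
        losses = []
        for i in range(labels.size(0)):
            pos_mask = labels == labels[i]
            pos_mask[i] = False                        # exclude the anchor itself
            neg_classes = labels[labels != labels[i]].unique()
            if not pos_mask.any() or neg_classes.numel() == 0:
                continue                               # anchor lacks a valid triplet
            hardest_pos = dist[i][pos_mask].max()      # farthest same-class sample
            # One hardest negative per negative class (the "balanced" part).
            terms = [torch.relu(hardest_pos - dist[i][labels == c].min() + margin)
                     for c in neg_classes]
            losses.append(torch.stack(terms).mean())
        if not losses:
            return embeddings.new_zeros(())            # no valid triplets in the batch
        return torch.stack(losses).mean()

    # Example usage: 32 embeddings of dimension 128 drawn from 8 identities.
    emb = torch.randn(32, 128, requires_grad=True)
    lbl = torch.randint(0, 8, (32,))
    balanced_triplet_loss(emb, lbl).backward()

Averaging one hardest negative per class keeps a gradient signal from every negative category in each update, which is the balance between triplet selection and information loss that the abstract describes.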
