Abstract

Contrastive learning techniques continue to attract considerable attention in self-supervised learning. In particular, the learned representations capture the distance between latent features in the embedding space and can improve the performance of both supervised and unsupervised downstream tasks. However, most contrastive learning efforts focus on the geometric distance between latent features, while the underlying probability distribution is usually ignored. To address this limitation, we investigate the hidden relationship between the contrastive loss and the Bregman divergence and propose a novel generalized contrastive loss for self-supervised learning based on the Bregman divergence. Our method employs a hybrid divergence that combines a Euclidean-based distance with a probabilistic divergence, which improves the quality of the self-supervised feature representations. Beyond the theoretical analysis, extensive experimental results demonstrate the effectiveness of our method compared with other state-of-the-art self-supervised methods.
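
The abstract does not give the exact formulation, but a minimal sketch of the idea might look as follows: an InfoNCE-style contrastive loss in which the similarity is the negative of a hybrid Bregman divergence, taken here as a convex combination of the squared Euclidean distance (Bregman divergence of the squared norm) and a KL term over softmax-normalized embeddings (Bregman divergence of negative entropy). The mixing weight `alpha`, the temperature `tau`, and the choice of generator functions are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation) of a contrastive loss
# built on a hybrid Bregman divergence. `alpha` and `tau` are assumptions.
import numpy as np


def softmax(z):
    """Row-wise softmax with max subtraction for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def squared_euclidean(x, y):
    """Bregman divergence generated by phi(v) = ||v||^2: squared L2 distance."""
    return np.sum((x - y) ** 2, axis=-1)


def kl_divergence(x, y, eps=1e-8):
    """Bregman divergence generated by negative entropy: KL(p || q) of softmaxed embeddings."""
    p, q = softmax(x), softmax(y)
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)


def hybrid_divergence(x, y, alpha=0.5):
    """Convex combination of a geometric and a probabilistic Bregman divergence."""
    return alpha * squared_euclidean(x, y) + (1.0 - alpha) * kl_divergence(x, y)


def contrastive_loss(z1, z2, alpha=0.5, tau=0.1):
    """InfoNCE-style loss where similarity is the negative hybrid divergence.

    z1, z2: (N, d) embeddings of two augmented views; row i of z1 and row i of z2
    form the positive pair, and all other rows act as negatives.
    """
    n = z1.shape[0]
    # Pairwise divergences between every z1[i] and every z2[j].
    div = hybrid_divergence(z1[:, None, :], z2[None, :, :], alpha)  # (N, N)
    logits = -div / tau
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy with the positive pair on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(n), np.arange(n)])


# Toy usage: two slightly perturbed views of a small batch of embeddings.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.1 * rng.normal(size=(8, 16))
print(contrastive_loss(z1, z2, alpha=0.5, tau=0.1))
```

Setting `alpha=1.0` recovers a purely geometric (Euclidean) contrastive loss, while `alpha=0.0` uses only the probabilistic divergence; intermediate values correspond to the hybrid setting described in the abstract.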
