Contrastive unsupervised representation learning (CURL) seeks to learn feature representations from unlabeled data and has found widespread, successful application in unsupervised feature learning, with learning driven by the construction of positive (similar) and negative (dissimilar) pairs of data samples. Despite its empirical success in recent years, the pair-generation process still leaves room for improvement, including how samples are combined and re-filtered and how transformations are applied to positive/negative pairs; we refer to this as the sample selection process. In this article, we introduce an optimized pair-sample selection method for CURL that efficiently ensures the two types of sampled pairs (similar and dissimilar) do not come from the same class. We provide a theoretical analysis of the method's error probability to show why it enhances learning performance, and we extend this analysis to a PAC-Bayes generalization bound, showing that our method tightens the bounds given in previous literature. Our numerical experiments on text and image datasets show that our method achieves competitive accuracy with good generalization bounds.
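To make the sample selection idea concrete, the sketch below shows one minimal way to draw a collision-free triplet: the positive pair shares a class while the negative is drawn only from other classes. This is an illustrative assumption, not the paper's actual algorithm; in particular, it assumes class information (or a pseudo-label proxy) is available at sampling time, and all names are hypothetical.

```python
import random
from collections import defaultdict

def sample_collision_free_triplet(examples, labels, rng=random):
    """Draw (anchor, positive, negative) such that the negative never
    shares the anchor's class -- a sketch of collision-free selection.

    Assumes `labels` are true classes or pseudo-labels, and that the
    anchor's class contains at least two examples.
    """
    by_class = defaultdict(list)
    for x, y in zip(examples, labels):
        by_class[y].append(x)

    # Anchor and positive come from the same class (similar pair).
    anchor_class = rng.choice(list(by_class))
    anchor, positive = rng.sample(by_class[anchor_class], 2)

    # Negative is restricted to the remaining classes, so the
    # dissimilar pair cannot collide with the anchor's class.
    other_classes = [c for c in by_class if c != anchor_class]
    negative = rng.choice(by_class[rng.choice(other_classes)])
    return anchor, positive, negative
```

Restricting negatives in this way removes the same-class "collisions" that ordinary random negative sampling can produce, which is the failure mode the error-probability analysis quantifies.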