Abstract

Contrastive Learning has recently received interest due to its success in self-supervised representation learning in the computer vision domain. However, the origins of Contrastive Learning date as far back as the 1990s, and its development has spanned many fields and domains, including Metric Learning and natural language processing. In this paper we provide a comprehensive literature review and propose a general Contrastive Representation Learning framework that simplifies and unifies many different contrastive learning methods. We also provide a taxonomy for each of the components of contrastive learning in order to summarise it and distinguish it from other forms of machine learning. We then discuss the inductive biases present in any contrastive learning system and analyse our framework from the views of various sub-fields of Machine Learning. Examples of how contrastive learning has been applied in computer vision, natural language processing, audio processing, Reinforcement Learning, and other domains are also presented. Finally, we discuss the challenges and some of the most promising future research directions ahead.

Highlights

  • The performance of a machine learning system is directly determined by the choice and quality of the data representation, or features, used to train it

  • EXAMPLE: INSTANCE DISCRIMINATION Along the lines of an exemplar-based classification task [26], which treats each image as its own class, Instance Discrimination [110] is a popular self-supervised method for learning visual representations, and has succeeded in learning useful representations that achieve state-of-the-art results in transfer learning for some downstream computer vision tasks [43], [69]

  • WHAT KIND OF REPRESENTATIONS ARE LEARNED BY CONTRASTIVE METHODS? Recent successes in transfer learning by instance discrimination contrastive pre-training [16], [43], [69] have raised the question of ‘‘what representation is learned from contrastive methods and why is it better than supervised pre-training’’ [101], [119]? From the view of the Contrastive Representation Learning framework, the invariant and covariant features learned from the instance discrimination task are entirely decided by the augmentation techniques that create the positive pairs
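The instance discrimination setup described in the highlights above can be viewed as a non-parametric softmax classification in which each image index is its own class. The following is a minimal NumPy sketch of that idea, not the paper's implementation; the "memory bank" name, dimensions, and temperature value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical "memory bank": one L2-normalised embedding per training image,
# so each image index effectively acts as its own class label.
num_images, dim = 8, 16
bank = l2_normalize(rng.normal(size=(num_images, dim)))

def instance_probs(query, bank, tau=0.07):
    """Non-parametric softmax over instances: P(image i | query embedding)."""
    logits = bank @ l2_normalize(query) / tau  # cosine similarity / temperature
    logits -= logits.max()                     # for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# A query embedding close to image 3 (e.g. an augmented view of it)
# should assign image 3 the highest probability.
query = bank[3] + 0.05 * rng.normal(size=dim)
p = instance_probs(query, bank)
print(p.argmax())  # expected: 3
```

Because the positive for each instance is an augmented view of the same image, the features the model becomes invariant to are exactly those varied by the augmentation pipeline, which is the point made in the highlight above.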


Summary

INTRODUCTION

The performance of a machine learning system is directly determined by the choice and quality of the data representation, or features, used to train it.

SUPERVISED AND UNSUPERVISED LEARNING

Until recently, the most successful applications of deep learning belonged to the class of supervised learning methods, where a representation is learned directly by mapping from the input to a human-generated label, i.e. from training data pairs (x, y), to optimise an objective function. Some newer works under the term ‘‘self-supervised’’ learning aim to learn useful representations without labels using discriminative modelling approaches. These methods have shown great success when used for transfer learning, surpassing supervised pre-trained models on multiple downstream tasks in both computer vision and natural language processing applications. In the self-supervised setting, instead of deriving a pseudo-label from the pretext task, contrastive learning methods learn a discriminative model on multiple input pairs, according to some notion of similarity.
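The idea of learning from input pairs under a notion of similarity can be sketched with a minimal InfoNCE-style loss, a common form of contrastive objective. This is an illustrative sketch, not the paper's formulation; cosine similarity, the temperature, and all variable names are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive loss for one anchor: pull the positive pair together,
    push the negative pairs apart, under cosine similarity."""
    a = l2_normalize(anchor)
    cands = l2_normalize(np.vstack([positive, negatives]))  # row 0 = positive
    logits = cands @ a / tau
    logits -= logits.max()                                  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]  # negative log-likelihood of the positive pair

dim = 32
anchor = rng.normal(size=dim)
positive = anchor + 0.1 * rng.normal(size=dim)  # e.g. an augmented view of the anchor
negatives = rng.normal(size=(10, dim))          # unrelated samples

loss_good = info_nce(anchor, positive, negatives)
loss_bad = info_nce(anchor, rng.normal(size=dim), negatives)
assert loss_good < loss_bad  # a genuine positive pair yields a lower loss
```

The loss is minimised when the anchor is similar to its positive and dissimilar to the negatives, which is the discriminative pairwise training signal the paragraph above describes.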

EXAMPLE
DEVELOPMENT OF CONTRASTIVE LEARNING
APPLICATIONS
Findings
DISCUSSION AND OUTLOOK
CONCLUSION
