Contrastive learning is a self-supervised representation learning paradigm in which a model is trained to distinguish between similar and dissimilar samples. It has proven effective and has attracted significant attention across a variety of computer vision and natural language processing tasks. In this paper, we comprehensively and systematically survey the main ideas, recent developments, and application areas of contrastive learning. Specifically, we first provide an overview of research activity in contrastive learning in recent years. Second, we describe its basic principles and summarize a universal framework for contrastive learning. Third, we introduce and discuss the latest advances in each functional component in detail, including data augmentation, positive/negative sampling, network structure, and loss function. Finally, we conclude the survey and discuss the challenges, future research trends, and development directions in the area of contrastive learning.
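To make the core idea concrete, the sketch below shows a minimal InfoNCE-style contrastive loss in NumPy: an anchor embedding is pulled toward a "positive" (a similar sample, e.g. an augmented view) and pushed away from "negatives" (dissimilar samples). This is an illustrative toy implementation under our own simplifying assumptions (single anchor, pre-normalized embeddings, a hypothetical `temperature` value), not a reproduction of any specific method surveyed here.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for a single anchor (illustrative).

    anchor, positive: 1-D embedding vectors; negatives: 2-D array with one
    negative embedding per row. All embeddings are assumed L2-normalized,
    so dot products are cosine similarities.
    """
    pos_sim = anchor @ positive / temperature        # similarity to the positive
    neg_sims = negatives @ anchor / temperature      # similarities to negatives
    logits = np.concatenate([[pos_sim], neg_sims])
    # Softmax cross-entropy treating the positive as the "correct class":
    # loss = -log( exp(pos) / sum_j exp(logit_j) )
    return -pos_sim + np.log(np.sum(np.exp(logits)))

rng = np.random.default_rng(0)
def norm(v):
    return v / np.linalg.norm(v)

a = norm(rng.normal(size=8))
p = norm(a + 0.05 * rng.normal(size=8))              # slightly perturbed "view" of a
n = np.stack([norm(rng.normal(size=8)) for _ in range(16)])
print(info_nce(a, p, n))   # near zero: the positive dominates the softmax
```

Minimizing this loss over many anchors drives embeddings of similar samples together and embeddings of dissimilar samples apart, which is the mechanism the functional components discussed in this survey (augmentation, sampling, network structure, loss design) all serve.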