Abstract

Contrastive learning aims to embed positive samples close to each other while pushing features of negative samples apart. This paper analyzes different contrastive learning architectures based on the memory bank network. Existing memory-bank-based models can only store global features across a few data batches due to the limited memory bank size, and updating these features can cause the feature drift problem. To address these issues, this paper proposes a network for contrastive learning of visual representations. First, the model combines a memory bank with a memory-feature clustering mechanism. Second, a new feature clustering method is proposed for the memory bank network that finds and stores cross-epoch global feature centers over the training epochs on top of the memory bank architecture. Third, the centers in the memory bank are treated as class features to construct positive and negative samples with the current batch, and contrastive learning is applied to optimize a feature encoder to learn a better feature representation. Finally, this paper designs a training pipeline that updates the memory bank and the encoder separately to circumvent the feature drift problem. To evaluate the proposed memory bank clustering method on unsupervised image classification, the experiments use a self-supervised online evaluator with an extra non-linear layer. The experimental results show that the proposed model achieves good performance on image classification tasks.
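To make the mechanism concrete, the sketch below shows one way the center-based contrastive step and the decoupled memory-bank update could look in PyTorch. All names (center_contrastive_loss, refresh_centers), the temperature, and the momentum update rule are illustrative assumptions; the abstract specifies neither the loss form nor the center-update rule at this level of detail, so this is a minimal sketch of the idea rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def center_contrastive_loss(features, centers, assignments, tau=0.1):
    """InfoNCE-style loss against memory-bank class centers (assumed form).

    features:    (B, D) L2-normalized batch embeddings from the encoder
    centers:     (K, D) L2-normalized cluster centers stored in the memory bank
    assignments: (B,)   index of each sample's positive center
    tau:         temperature (hypothetical value, not given in the paper)
    """
    logits = features @ centers.t() / tau  # (B, K) similarity to every center
    # The assigned center acts as the positive; all other centers are negatives.
    return F.cross_entropy(logits, assignments)

@torch.no_grad()
def refresh_centers(centers, features, assignments, momentum=0.9):
    """Decoupled memory-bank update (assumed rule): each center drifts toward
    the mean of its assigned batch features while the encoder is frozen."""
    for k in assignments.unique():
        mean_k = features[assignments == k].mean(dim=0)
        centers[k] = F.normalize(momentum * centers[k] + (1 - momentum) * mean_k, dim=0)
    return centers
```

Running refresh_centers under torch.no_grad() on a separate schedule from the encoder's gradient steps mirrors the abstract's point that the memory bank and the encoder are updated individually, so the stored centers never chase a mid-update encoder and the feature drift problem is avoided.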
