Abstract
Few-shot classification is challenging because data and labels are scarce. Existing algorithms usually address this problem by pre-training models on a large amount of annotated data that shares knowledge with the target domain. Nevertheless, large quantities of homogeneous data samples are not always available. To tackle this obstacle, we develop a few-shot learning framework that prepares data automatically and still produces well-behaved models. The framework is implemented by conducting contrastive learning on unlabeled web images: instead of requiring manually annotated data, it trains models by constructing pseudo labels. Additionally, since online data is virtually limitless and continually generated, the model can constantly acquire up-to-date knowledge from the Internet. Furthermore, we observe that the generalization ability of the learned representation is crucial for self-supervised learning. To demonstrate its importance, we propose a simple yet effective normalization strategy that significantly boosts the accuracy of the trained models. We demonstrate the superiority of the proposed framework through experiments on miniImageNet, tieredImageNet, and Omniglot. The results indicate that our method surpasses previous unsupervised counterparts by a large margin and achieves performance comparable to some supervised ones.
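The abstract describes contrastive learning on unlabeled images with pseudo labels and a feature normalization step. The sketch below is a minimal, hedged illustration of that general idea, not the paper's exact method: each image in a batch receives its own pseudo label, two augmented views are encoded, embeddings are L2-normalized, and a similarity-based cross-entropy loss is minimized. The encoder, temperature, and input shapes are illustrative assumptions.

```python
# Minimal sketch of contrastive learning with pseudo labels (an assumption,
# not the paper's exact training procedure).
import torch
import torch.nn.functional as F

def contrastive_step(encoder, view_a, view_b, temperature=0.1):
    """One training step on two augmented views of the same unlabeled batch."""
    z_a = F.normalize(encoder(view_a), dim=1)   # L2-normalize embeddings
    z_b = F.normalize(encoder(view_b), dim=1)
    logits = z_a @ z_b.t() / temperature        # cosine similarities between views
    # Pseudo labels: view i of image k should match the other view of image k,
    # so the "class" of each sample is simply its index within the batch.
    pseudo_labels = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, pseudo_labels)

# Usage with a stand-in encoder and random tensors in place of augmented web images.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
view_a = torch.randn(8, 3, 32, 32)
view_b = torch.randn(8, 3, 32, 32)
loss = contrastive_step(encoder, view_a, view_b)
loss.backward()
```

The L2 normalization before the similarity computation stands in for the normalization strategy the abstract alludes to; the paper's actual strategy may differ.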