Abstract

Few-shot image classification aims to learn a model from previous experience that can be rapidly adapted to classify images of new classes given only a few labeled examples. The learned model is prone to overfitting because the distributions of new classes, estimated from a small number of samples, are severely biased. Recently, Distribution Calibration (DC) has tackled this problem by transferring the Gaussian statistics of seen classes with sufficient samples to calibrate the distributions of new classes. In this paper, we first take a closer look at the calibration mechanism in DC, which maps source class distributions to new class distributions, and propose a simplified version that uses the averaged mean and covariance of all base classes as the source statistics for every new class. We further extend the simplified DC to the transductive setting: we extract the Gaussian statistics of unlabeled query samples to calibrate the distributions of new classes. We then augment the labeled samples by sampling from the calibrated distributions to train a more accurate task-specific classifier. Our method can be readily applied on top of any existing pre-trained feature extractor and classifier without extra learnable parameters. Extensive experiments on several few-shot learning benchmarks demonstrate the effectiveness of our method. We provide visualizations showing that new classes are better separated under our calibrated distributions.
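The simplified calibration described above can be illustrated with a minimal sketch. This is not the paper's implementation; the toy statistics, the blending of the support mean with the averaged base mean, and the ridge term `alpha` are illustrative assumptions standing in for features from a real pre-trained extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy base-class statistics (hypothetical; in practice these come from
# features of base classes with many labeled samples).
n_base, dim = 10, 16
base_means = rng.normal(size=(n_base, dim))
base_covs = np.stack([np.eye(dim) for _ in range(n_base)])

# Simplified DC: one shared source statistic, the average over all
# base classes, reused for every new class.
mu_src = base_means.mean(axis=0)
cov_src = base_covs.mean(axis=0)

def calibrate(support_feats, alpha=0.2):
    """Calibrate a new class distribution: blend the support mean with
    the averaged base mean, and use the averaged base covariance plus a
    small ridge term alpha (an assumed hyperparameter) as the covariance."""
    mu = (support_feats.mean(axis=0) + mu_src) / 2.0
    cov = cov_src + alpha * np.eye(dim)
    return mu, cov

# A 1-shot novel class: a single labeled feature vector.
support = rng.normal(loc=1.0, size=(1, dim))
mu_c, cov_c = calibrate(support)

# Augment the support set by sampling from the calibrated Gaussian;
# the augmented features would then train a task-specific classifier.
augmented = rng.multivariate_normal(mu_c, cov_c, size=100)
print(augmented.shape)
```

In the transductive extension, the statistics of the unlabeled query samples would additionally be folded into `mu_src` and `cov_src` before calibration.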
