Abstract

Few-shot image classification aims to learn a model from previous experiences that can be rapidly adapted to classify images of new classes from only a few labeled examples. The learned model is prone to overfitting because the distributions of new classes, formed from a small number of samples, are severely biased. Recently, Distribution Calibration (DC) tackled this problem by transferring the Gaussian statistics of seen classes with sufficient samples to calibrate the distributions of new classes. In this paper, we first take a closer look at the calibration mechanism in DC, from the source class distribution to the new class distribution, and propose a simplified version that uses the averaged mean and covariance of all base classes as the source statistics for all new classes. We further extend the simplified DC to the transductive setting: we extract the Gaussian statistics of unlabeled query samples to calibrate the distributions of new classes. We augment the labeled samples by sampling from the calibrated distributions to train a more accurate task-specific classifier. Our method can be readily applied on top of any existing pre-trained feature extractor and classifier without extra learnable parameters. Extensive experiments on several few-shot learning benchmarks demonstrate the effectiveness of our method. We provide visualizations showing that new classes are better separated under our calibrated distributions.
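The pipeline described above can be sketched in a few lines: average the base-class Gaussian statistics into a single shared source distribution, blend them with a new class's few-shot statistics, sample extra features from the calibrated Gaussian, and fit a simple task-specific classifier. This is a minimal illustration on synthetic features, not the paper's implementation; the blending rule, the weight `alpha`, and all names (`calibrate_and_sample`, `source_mean`, etc.) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 16

# Stand-ins for base-class statistics that would come from a pre-trained
# feature extractor; in practice these are estimated from the base dataset.
n_base = 10
base_means = rng.normal(size=(n_base, dim))
base_covs = np.stack([np.eye(dim)] * n_base)

# Simplified DC: one averaged mean/covariance over ALL base classes,
# shared as the source statistics for every new class.
source_mean = base_means.mean(axis=0)
source_cov = base_covs.mean(axis=0)

def calibrate_and_sample(support, n_samples=100, alpha=0.5):
    """Blend few-shot statistics with the shared source statistics
    (an assumed calibration rule), then draw augmented features."""
    mean = alpha * support.mean(axis=0) + (1 - alpha) * source_mean
    cov = alpha * np.cov(support, rowvar=False) + (1 - alpha) * source_cov
    return rng.multivariate_normal(mean, cov, size=n_samples)

# A toy 2-way 5-shot task: two new classes, 5 labeled support samples each.
mu_a, mu_b = np.zeros(dim), np.zeros(dim)
mu_a[0], mu_b[0] = 3.0, -3.0
support_a = rng.multivariate_normal(mu_a, np.eye(dim), size=5)
support_b = rng.multivariate_normal(mu_b, np.eye(dim), size=5)

# Augment each class from its calibrated distribution and train
# a task-specific classifier on support + sampled features.
aug_a = calibrate_and_sample(support_a)
aug_b = calibrate_and_sample(support_b)
X = np.vstack([support_a, aug_a, support_b, aug_b])
y = np.array([0] * (5 + 100) + [1] * (5 + 100))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Evaluate on held-out query samples from the true class distributions.
query = np.vstack([rng.multivariate_normal(mu_a, np.eye(dim), size=50),
                   rng.multivariate_normal(mu_b, np.eye(dim), size=50)])
query_y = np.array([0] * 50 + [1] * 50)
accuracy = clf.score(query, query_y)
```

Note that no new parameters are learned for the calibration itself, matching the abstract's claim: only the downstream classifier is trained on the augmented set.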
