Abstract

Few-shot learning is challenging because only a few labeled samples are available for training a model. To alleviate this data limitation, several works generate samples or features by learning a generative model or a class distribution, but complex models and biased estimates of the class distribution hamper their interpretability and generalization ability, respectively. In this work, we propose a generation-based Transductive Distribution Optimization (TDO) method that introduces neither extra parameters nor complex models. We use the few labeled samples together with some high-confidence unlabeled samples from the target set to estimate the distributions of the few-shot classes, and then generate sufficient samples from these distributions to augment the labeled inputs. Our method works with most pre-trained feature extractors and outperforms state-of-the-art methods with a simple linear classifier. Visualization of the generated samples shows that our method captures an accurate distribution even when the few labeled samples deviate from the ground-truth distribution.
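The core idea — estimating a class distribution from the few labeled features plus high-confidence unlabeled features, then sampling from it to augment the support set — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian class model, the distance-to-mean confidence criterion, and all names (`augment_class`, `n_confident`, `n_generate`) are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_class(support, unlabeled_pool, n_confident=4, n_generate=50):
    """Generate synthetic features for one few-shot class.

    Hedged sketch: model the class as a Gaussian estimated from the
    labeled support features plus the most "confident" unlabeled ones.
    Confidence here is a hypothetical proxy (distance to the support
    mean); the actual selection rule in TDO may differ.
    """
    support_mean = support.mean(axis=0)
    # Pick the unlabeled features closest to the support mean.
    dists = np.linalg.norm(unlabeled_pool - support_mean, axis=1)
    confident = unlabeled_pool[np.argsort(dists)[:n_confident]]
    # Re-estimate the class distribution from the enlarged pool.
    pool = np.vstack([support, confident])
    mean = pool.mean(axis=0)
    cov = np.cov(pool, rowvar=False) + 1e-6 * np.eye(pool.shape[1])
    # Sample synthetic features to augment the labeled inputs.
    return rng.multivariate_normal(mean, cov, size=n_generate)

# Toy 2-D features for one class centred at (2, 2).
support = rng.normal(loc=2.0, scale=0.5, size=(5, 2))
unlabeled = rng.normal(loc=2.0, scale=0.5, size=(30, 2))
generated = augment_class(support, unlabeled)
```

The generated features could then be fed, together with the original support set, to any simple classifier (e.g. a linear one, as in the abstract).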
