Abstract

Few-shot learning is a challenging task in computer vision that has attracted increasing research attention in recent years. However, most recent methods do not fully exploit the task information, and the small number of seen samples leads to large intraclass differences within each class. In this paper, we propose a novel task encoding with distribution calibration (TEDC) model for few-shot learning, which uses the relationships among the feature distributions to reduce intraclass differences. In the TEDC model, an integrated feature extraction module (IFEM) is proposed, which extracts multi-angle visual features of an image and fuses them to obtain more representative features. To effectively utilize the task information, a novel task encoding module (TEM) is proposed, which obtains the task features by fusing the information of all the seen samples and uses them to adjust every sample's features, yielding more generalizable task-specific representations. We also propose a distribution calibration module (DCM) to reduce the bias between the distributions of the support features and the query features within the same class. Extensive experiments show that our proposed TEDC model achieves excellent performance and outperforms state-of-the-art methods on three widely used few-shot classification benchmarks, namely miniImageNet, tieredImageNet and CUB-200-2011.
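
The following is a minimal conceptual sketch, not the authors' implementation, of the two ideas the abstract highlights: fusing the information of all seen samples into a task feature that adjusts every sample's representation (the TEM idea), and shifting support statistics toward query statistics to reduce the distribution bias within a class (the DCM idea). It assumes PyTorch-style tensors; the function names `task_encode` and `calibrate` and the simple additive fusion are illustrative assumptions only.

```python
import torch

def task_encode(support, query):
    """Fuse all seen samples into a single task feature and use it to
    adjust every sample's representation (cf. the TEM idea)."""
    # support: (n_support, d), query: (n_query, d)
    task_feat = torch.cat([support, query], dim=0).mean(dim=0, keepdim=True)  # (1, d)
    # Adjust each sample toward a task-specific representation
    # (additive fusion is an illustrative simplification).
    return support + task_feat, query + task_feat

def calibrate(support, query):
    """Shift the support features toward the query-feature statistics to
    reduce the bias between the two distributions (cf. the DCM idea)."""
    shift = query.mean(dim=0, keepdim=True) - support.mean(dim=0, keepdim=True)
    return support + shift, query

# Toy usage with random vectors standing in for extracted image features.
support = torch.randn(5, 64)    # e.g. 5-way 1-shot support features
query = torch.randn(75, 64)     # query features
s_adj, q_adj = task_encode(support, query)
s_cal, q_cal = calibrate(s_adj, q_adj)
```

In this sketch the task feature is a simple mean over all samples and the calibration is a mean shift; the actual TEDC modules are more elaborate, but the sketch shows where task-level fusion and support/query calibration enter the pipeline.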
