Abstract

Learning from limited supervision is a challenging problem that has recently attracted wide attention in the machine learning community. With only scarce annotated samples available in the target categories, so-called few-shot image recognition aims to transfer basic knowledge from a large-scale image set to recognize unseen classes. Many existing approaches learn a general source-data representation and apply it to the few-shot task by building a target classifier on scarce support features, which performs favorably only if the source and target data distributions are similar. We argue that ignoring the distribution gap and directly leveraging frozen representations leads to a sub-optimal solution. Taking domain shift into consideration, we explore an efficient task adaptation strategy that jointly achieves task and domain transfer. Accordingly, we propose a simple yet effective method, called proxy-based domain adaptation (PDA), that optimizes the pre-trained representation and a target classifier simultaneously. PDA can be characterized as: (1) a source-data-independent approach that leverages only the few support samples from the target domain; (2) a non-parametric adaptation method that performs model adaptation by minimizing a designed loss, without introducing any additional parametric modules. We conduct extensive experiments on multiple few-shot image recognition benchmarks, highlighting the superiority of PDA over many state-of-the-art methods. Moreover, careful ablation studies verify the effectiveness of each component of our method and demonstrate the significance of domain adaptation in few-shot image recognition.
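To make the abstract's high-level description concrete, the following is a minimal, hedged sketch of proxy-based adaptation on a single few-shot episode. All names, shapes, and the exact loss are illustrative assumptions (the paper's actual loss and joint backbone update are not specified here): class proxies are initialized from support features, and only the proxies are adapted by gradient descent on a cross-entropy loss over proxy logits, keeping the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-way 1-shot episode; "support" stands in for features
# produced by a frozen pre-trained backbone (shapes are assumptions).
n_way, dim = 5, 16
support = rng.normal(size=(n_way, dim))   # one support feature per class
labels = np.arange(n_way)

# Non-parametric classifier: proxies initialized from support features
# (for 1-shot, the class means are the features themselves).
proxies = support.copy()

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss_and_grad(feats, y, W):
    """Cross-entropy over proxy logits; analytic gradient w.r.t. proxies."""
    logits = feats @ W.T
    p = softmax(logits)
    onehot = np.eye(W.shape[0])[y]
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    grad = (p - onehot).T @ feats / len(y)
    return loss, grad

# The paper adapts the representation and classifier jointly; this sketch
# updates only the proxies to stay self-contained.
lr = 0.5
losses = []
for _ in range(20):
    loss, grad = loss_and_grad(support, labels, proxies)
    proxies -= lr * grad
    losses.append(loss)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In a full implementation, the same loss would be backpropagated through the backbone as well, so that the representation itself adapts to the target domain rather than remaining frozen.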
