Abstract

Few-shot learning strategies aim to train a reliable model on only a limited amount of data, but few-shot learning tasks are prone to overfitting and to a task-level inductive bias. In contrast to conventional few-shot learning techniques built on the meta-learning framework, recent studies instead derive a reliable feature extractor via a self-supervised learning mechanism to alleviate this problem. In this paper, we propose a task-aware few-shot visual classification framework that combines meta-learning, traditional supervised classification, and self-supervised learning. The proposed mechanism learns to transform an initial feature embedding into a more general and representative space so that classification performance is boosted. Extensive experiments show that the proposed method alleviates overfitting and outperforms previous state-of-the-art few-shot learning methods.
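The abstract describes the method only at a high level. The sketch below is a minimal illustration of the general recipe it names, not the authors' implementation: a learned feature transformation on top of a backbone, trained with a prototypical-network episodic loss (the meta-learning branch) plus a rotation-prediction auxiliary loss (the self-supervised branch). All names (FeatureTransform, rot_head, embed), dimensions, and the loss weight lambda_ssl are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureTransform(nn.Module):
    """Maps backbone embeddings into a more general, task-aware space."""
    def __init__(self, in_dim=640, out_dim=640):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def proto_loss(support, support_y, query, query_y, n_way):
    """Meta-learning branch: prototypical-network loss on transformed features."""
    # One prototype per class: the mean of that class's support embeddings.
    protos = torch.stack([support[support_y == c].mean(dim=0) for c in range(n_way)])
    # Negative squared Euclidean distance to each prototype as class logits.
    logits = -torch.cdist(query, protos) ** 2
    return F.cross_entropy(logits, query_y)

def rotation_ssl_loss(embed, rot_head, images):
    """Self-supervised branch: predict which of 4 rotations was applied."""
    # Concatenate the batch rotated by 0/90/180/270 degrees (NCHW input).
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return F.cross_entropy(rot_head(embed(rotated)), labels)

# Example episode step (hypothetical names and weight):
#   backbone: a pretrained CNN returning flat features of size in_dim
#   transform = FeatureTransform(); rot_head = nn.Linear(640, 4)
#   embed = lambda x: transform(backbone(x))
#   loss = proto_loss(embed(s_imgs), s_y, embed(q_imgs), q_y, n_way) \
#        + lambda_ssl * rotation_ssl_loss(embed, rot_head, q_imgs)
```

A joint objective of this shape is one plausible reading of "articulating" the three training schemes; the paper's actual losses and weighting may differ.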
