Abstract

Although machine learning methods achieve impressive performance in many real-world applications, they require a huge amount of training examples to obtain an effective model. Considering the effort of collecting labeled training data, few-shot learning, i.e., learning with a budgeted training set, is necessary and useful. The model prior, e.g., the feature embedding, initialization, and configuration, is the key to few-shot learning. This study meta-learns such a prior from seen classes and applies the learned prior to few-shot tasks on unseen classes. Furthermore, based on the first-order optimality condition of the objective, the model composition prior (MCP) is proposed to decompose the model prior and estimate each component. The composition strategy improves explainability while identifying the shared and task-specific parts among few-shot tasks. We verify the ability of our approach to recover task relationships on a synthetic dataset, and our MCP method achieves better results on two benchmark datasets (MiniImageNet and CUB).
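To make the composition idea concrete, the sketch below shows one minimal way a task model could be assembled from a shared, meta-learned component plus a task-specific component that is estimated from the few labeled support examples. This is an illustrative assumption of the general decomposition described in the abstract, not the paper's actual implementation; the names `ComposedLinearHead` and `fit_task_component` are hypothetical.

```python
import torch
import torch.nn as nn

class ComposedLinearHead(nn.Module):
    """Linear classifier whose weight is composed of a shared prior plus a
    task-specific residual (illustrative sketch, not the paper's API)."""

    def __init__(self, feat_dim: int, n_way: int, shared_weight: torch.Tensor):
        super().__init__()
        # Shared component: meta-learned on seen classes, kept fixed per task.
        self.register_buffer("shared_weight", shared_weight)  # (n_way, feat_dim)
        # Task-specific component: estimated from the support set of each task.
        self.task_weight = nn.Parameter(torch.zeros(n_way, feat_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Model composition: the effective weight is the sum of both parts.
        weight = self.shared_weight + self.task_weight
        return features @ weight.t()


def fit_task_component(head, support_feats, support_labels, steps=50, lr=0.1):
    """Adapt only the task-specific component on the few-shot support set."""
    opt = torch.optim.SGD([head.task_weight], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        logits = head(support_feats)
        loss = loss_fn(logits, support_labels)
        loss.backward()
        opt.step()
    return head
```

Under this sketch, only the task-specific part is fit per few-shot task, while the shared part carries knowledge transferred from the seen classes; the additive split is one simple instance of the kind of decomposition the abstract refers to.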
