Abstract

Learning from only a few samples is a challenging problem, and meta-learning is an effective approach to it. A meta-learning model aims to learn by training on a large number of samples from other tasks; when it encounters a target task, it can adapt quickly and achieve good performance with only a few labeled samples. However, general meta-learning provides only a universal model with a certain degree of generalization to all unknown tasks, which limits its effect on a specific target task. In this paper, we propose a Few-shot Directed Meta-learning (FSDML) model that specializes to and solves the target task by using a few labeled samples of the target task to direct the meta-learning process. FSDML divides the model parameters into shared parameters and target adaptation parameters, which store prior knowledge and determine the update direction, respectively. These two groups of parameters are updated in different stages of training. We conduct image classification experiments on miniImageNet and Omniglot, and the results show that FSDML achieves better performance.
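To make the parameter split concrete, the sketch below (not the authors' code) shows one way a model's parameters could be partitioned into a "shared" group and a "target adaptation" group that are optimized in separate stages; the module names and optimizer choices are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a backbone/head split stands in for the
# shared vs. target adaptation parameter groups described in the abstract.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(64, 32)   # treated here as the "shared" parameters
        self.head = nn.Linear(32, 5)        # treated here as the "target adaptation" parameters

    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

model = SmallNet()
shared_params = list(model.backbone.parameters())
adapt_params = list(model.head.parameters())

# Stage 1: meta-training on many source-task samples updates only the shared parameters.
meta_opt = torch.optim.Adam(shared_params, lr=1e-3)
# Stage 2: the few labeled target samples direct the update of the adaptation parameters.
adapt_opt = torch.optim.SGD(adapt_params, lr=1e-2)

# One stage-1 style step on a dummy batch (random data, for illustration only).
x, y = torch.randn(8, 64), torch.randint(0, 5, (8,))
loss = nn.functional.cross_entropy(model(x), y)
meta_opt.zero_grad()
loss.backward()
meta_opt.step()   # shared parameters change; adaptation parameters are left for stage 2
```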
