Abstract

Meta-learning, also known as learning to learn, has become an important research branch of machine learning. Unlike traditional deep learning, meta-learning can address one-to-many problems and performs better in few-shot learning, where only a few samples are available per class. In these tasks, meta-learning is designed to quickly build a reasonably reliable model from very limited samples. In this paper, we propose a modified LSTM-based meta-learning model that initializes and updates the parameters of the classifier (learner) using both short-term knowledge from a single task and long-term knowledge accumulated across multiple tasks. We reconstruct a compound loss function that compensates for the shortcomings of the separate loss in the original model, yielding a quicker start and better stability without expensive additional operations. Our modification enables the meta-learner to perform better when only a few update steps are allowed. Experiments on Mini-ImageNet demonstrate the improved accuracies.
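
As background for the model described above, the following is a minimal sketch, not the authors' implementation, of the LSTM-style parameter update used by LSTM-based meta-learners (in the spirit of Ravi and Larochelle's optimization-as-a-model approach), in which a learned forget gate and input gate play the roles of weight decay and a per-parameter step size. All names here (`lstm_meta_update`, `W_f`, `W_i`, `b_f`, `b_i`) are hypothetical placeholders rather than the paper's notation, and the compound loss proposed in this paper is not reproduced.

```python
import torch

def lstm_meta_update(theta_prev, grad, loss, W_f, b_f, W_i, b_i):
    """One LSTM-style update of the learner's parameters (illustrative sketch).

    The forget gate f_t decides how much of theta_{t-1} to keep, and the
    input gate i_t acts as a learned per-parameter step size on the gradient.
    """
    # Gate inputs: current gradient, current loss, and previous parameters.
    x = torch.stack([grad, loss.expand_as(grad), theta_prev], dim=-1)
    f_t = torch.sigmoid(x @ W_f + b_f).squeeze(-1)  # forget gate
    i_t = torch.sigmoid(x @ W_i + b_i).squeeze(-1)  # input gate / step size
    # Cell-state-style update: theta_t = f_t * theta_{t-1} - i_t * grad
    return f_t * theta_prev - i_t * grad

# Toy usage with hypothetical shapes: a learner with 10 parameters.
theta = torch.randn(10)
grad = torch.randn(10)
loss = torch.tensor(1.5)
W_f, b_f = torch.randn(3, 1), torch.zeros(1)
W_i, b_i = torch.randn(3, 1), torch.zeros(1)
theta_new = lstm_meta_update(theta, grad, loss, W_f, b_f, W_i, b_i)
```

In this formulation, setting f_t close to 1 and i_t to a fixed constant recovers plain gradient descent, which is why the gated update can be read as a learned generalization of the usual optimizer step.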
