Abstract

Meta-learning addresses the challenging few-shot learning setting by using prior knowledge and experience to guide the learning of new tasks. Model-agnostic meta-learning (MAML) is one of the most popular meta-learning algorithms, and many variants of it have appeared in recent years. However, its few-shot classification performance lags behind that of several competing algorithms, so its generalization ability needs further exploration and improvement. Regarding generalization, we observe that MAML shares a single initialization across all tasks during parameter updates, ignoring the bias between different tasks and thereby limiting generalization performance. Moreover, meta-learning models are typically trained on samples of low diversity and with shallow networks, which makes it difficult to obtain good performance from deep neural network backbones. To address these problems, we propose Mix-MAML, a hybrid-optimization meta-learning method that combines data augmentation, initialization attenuation, and increased input resolution. Experimental results show that, in the 5-way 5-shot setting with a ResNet-12 backbone, our method reaches 76.93% classification accuracy on mini-ImageNet at 100 × 100 resolution and 83.62% on CIFAR-FS at 80 × 80 resolution, achieving comparable or even better performance than other algorithms on some standard few-shot learning benchmarks without sacrificing MAML's simplicity or model-agnostic nature.
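
To make the shared-initialization structure that the abstract critiques concrete, here is a minimal sketch of a MAML-style bi-level update on toy 1-D regression tasks. This is an illustration, not the paper's Mix-MAML: all names (make_task, inner_lr, outer_lr) are hypothetical, and the outer update uses the first-order approximation (FOMAML) for brevity. Note that a single parameter theta is adapted per task but updated as one shared initialization, which is the source of the cross-task bias the paper targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """Sample a toy task y = w_t * x with support and query sets."""
    w_t = rng.uniform(-2.0, 2.0)
    x_s, x_q = rng.normal(size=5), rng.normal(size=5)
    return (x_s, w_t * x_s), (x_q, w_t * x_q)

def loss_grad(theta, x, y):
    """Gradient of the mean squared error 0.5*(theta*x - y)^2 w.r.t. theta."""
    return np.mean((theta * x - y) * x)

theta = 0.0                  # one initialization shared by every task
inner_lr, outer_lr = 0.1, 0.01

for step in range(1000):
    meta_grad = 0.0
    tasks = [make_task() for _ in range(4)]
    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop: adapt the shared theta with one gradient step per task.
        theta_task = theta - inner_lr * loss_grad(theta, x_s, y_s)
        # Outer loop (first-order approximation): evaluate the adapted
        # parameters on the query set and accumulate the meta-gradient.
        meta_grad += loss_grad(theta_task, x_q, y_q)
    # Every task pulls on the same theta; no per-task bias is modeled.
    theta -= outer_lr * meta_grad / len(tasks)
```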
