Abstract

Meta-learning aims to use knowledge from previous tasks to facilitate the learning of novel tasks. Many meta-learning models carefully design various forms of task-shared inductive bias and learn them from a large number of training tasks, so the generalization capability of the learned inductive bias depends on the diversity of those tasks. A common assumption in meta-learning is that the training tasks and the test tasks are drawn from the same or similar task distributions. However, this assumption is usually not strictly satisfied in practice, so meta-learning models must cope with various novel in-domain or cross-domain tasks. To this end, we propose task augmentation to increase the diversity of training tasks and thereby improve the generalization capability of meta-learning models. Concretely, we consider the worst-case problem around the base task distribution and derive an adversarial task augmentation method that can generate inductive bias-adaptive 'challenging' tasks. Our method can be used as a simple plug-and-play module for various meta-learning models to improve their generalization capability. We conduct extensive experiments under in-domain and cross-domain few-shot learning and unsupervised few-shot learning settings, and evaluate our method on different types of data (images and text). Experimental results show that our method effectively improves the generalization capability of various meta-learning models under these settings.
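The abstract only sketches the worst-case formulation. In our own notation (a distance $D$ between task distributions and a radius $\rho$, both assumptions rather than the paper's symbols), the objective can be read as $\min_\theta \sup_{p:\, D(p, p_0) \le \rho} \mathbb{E}_{T \sim p}[\mathcal{L}(\theta; T)]$, where $p_0$ is the base task distribution. The minimal PyTorch sketch below illustrates one way such 'challenging' tasks could be generated, by gradient ascent on the meta-loss over a task's inputs; the `adversarial_task` helper, the toy model, and the step sizes `ascent_steps` and `ascent_lr` are illustrative assumptions, not the authors' exact procedure.

```python
import torch


def adversarial_task(model, loss_fn, task_x, task_y, ascent_steps=5, ascent_lr=0.1):
    """Perturb a task's inputs by gradient ascent on the meta-loss,
    producing a 'challenging' task near the original one (a sketch)."""
    adv_x = task_x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        loss = loss_fn(model(adv_x), task_y)
        (grad,) = torch.autograd.grad(loss, adv_x)
        # Move the inputs toward higher meta-loss; sign ascent keeps steps bounded.
        adv_x = (adv_x + ascent_lr * grad.sign()).detach().requires_grad_(True)
    return adv_x.detach(), task_y


# Usage: interleave augmented tasks with ordinary meta-training batches.
model = torch.nn.Linear(16, 5)            # stand-in for a meta-learner
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(25, 16)                   # a toy 5-way task (inputs)
y = torch.randint(0, 5, (25,))            # its labels
adv_x, adv_y = adversarial_task(model, loss_fn, x, y)

optimizer.zero_grad()
loss_fn(model(adv_x), adv_y).backward()   # meta-update on the harder task
optimizer.step()
```

The plug-and-play character claimed in the abstract corresponds here to the fact that `adversarial_task` only needs the model and loss function, so it can wrap any episodic meta-learner's training loop.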
