Abstract
Meta-learning methods aim to build learning algorithms capable of quickly adapting to new tasks in the low-data regime. One of the most difficult benchmarks for such algorithms is the one-shot learning problem. In this setting, many algorithms face uncertainty caused by the limited number of training samples, which may result in overfitting. This problem can be resolved by providing the model with additional information, and one of the most efficient ways to do so is multi-task learning. In this paper we investigate a modification of the standard meta-learning pipeline. The proposed method simultaneously utilizes information from several meta-training tasks in a common loss function, where the contribution of each task is controlled by a per-task weight. Proper optimization of these weights can strongly influence training and the final quality of the model. We propose and investigate methods from the family of Simultaneous Perturbation Stochastic Approximation (SPSA) for optimizing the meta-training task weights, and we demonstrate the superiority of stochastic approximation over a gradient-based method. The proposed Multi-Task Modification can be applied to almost any meta-learning method. We study its application to the Model-Agnostic Meta-Learning (MAML) and Prototypical Network algorithms on the CIFAR-FS, FC100, miniImageNet, and tieredImageNet one-shot learning benchmarks. In these experiments, the Multi-Task Modification improved upon the original methods. The SPSA-Tracking algorithm, adapted in this paper for multi-task weight optimization for the first time, shows the largest accuracy boost and is competitive with state-of-the-art meta-learning methods.
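To make the weighting scheme concrete, below is a minimal NumPy sketch of an SPSA update of the per-task weights in a weighted common loss. The synthetic per-task losses, the gain schedules, and the weight normalization are illustrative assumptions, not details taken from the paper; in the actual method the losses would come from the meta-training tasks of MAML or a Prototypical Network.

```python
import numpy as np

def weighted_loss(weights, losses):
    # Common loss: weighted sum of the per-task meta-training losses.
    return float(np.dot(weights, losses))

def spsa_step(weights, losses, a_k, c_k, rng):
    # Rademacher (+-1) simultaneous perturbation of all weights at once.
    delta = rng.choice([-1.0, 1.0], size=weights.shape)
    loss_plus = weighted_loss(weights + c_k * delta, losses)
    loss_minus = weighted_loss(weights - c_k * delta, losses)
    # Two loss evaluations estimate the full gradient; 1/delta_i == delta_i for +-1.
    grad_est = (loss_plus - loss_minus) / (2.0 * c_k) * delta
    new_weights = weights - a_k * grad_est
    # Assumption: keep weights positive and normalized so every task
    # retains some contribution to the common loss.
    new_weights = np.clip(new_weights, 1e-3, None)
    return new_weights / new_weights.sum()

rng = np.random.default_rng(0)
n_tasks = 4
weights = np.full(n_tasks, 1.0 / n_tasks)  # start from uniform task weights

for k in range(1, 101):
    losses = rng.uniform(0.5, 2.0, size=n_tasks)  # synthetic stand-in losses
    a_k = 0.1 / k ** 0.602                        # standard SPSA step-size decay
    c_k = 0.1 / k ** 0.101                        # perturbation-size decay
    weights = spsa_step(weights, losses, a_k, c_k, rng)

print(weights)
```

The SPSA-Tracking variant mentioned in the abstract typically replaces the decaying gains a_k and c_k with constant ones so that the estimate can track a drifting optimum; that detail is omitted from this sketch.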