Abstract

Gradient-based meta-learning and its approximation algorithms have been widely used in few-shot scenarios. In practice, the trained meta-model commonly applies a uniform number of gradient descent steps across different tasks, yet the meta-model may be biased toward some of them: some tasks converge within a few steps while others fail to approach the optimum over the entire inner loop. This bias can make the trained meta-model perform well on some tasks but unexpectedly poorly on others, which hurts its generality. To address this issue, in this paper we formally establish an approximation between the metric-based strategy and gradient descent at meta-test time. By classifying data directly through similarity computation, the trained meta-model avoids the convergence issue. We show that metric-based methods can closely approximate gradient descent in meta-test provided that the representation capability of the derived features and the convergence of the inner loop during meta-training are both guaranteed. Based on this observation, we propose a new meta-learning model, GMT2 (Gradient-based Meta-Train with Metric-based meta-Test), which combines gradient descent in meta-training with metric-based methods in meta-test. GMT2 employs a new first-order approximation scheme with an adversarial update strategy that not only enhances the feature representation of the inner layers but also allows enough inner gradient steps without computing second-order derivatives. Experiments show that GMT2 achieves better efficiency and competitive accuracy compared with popular meta-learning models.
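The abstract does not specify GMT2's exact similarity metric, so the following is only a minimal sketch of the metric-based meta-test step under common assumptions: a prototypical-network-style nearest-centroid classifier in the learned feature space. The names `encoder` and `metric_based_predict` and the choice of Euclidean distance are illustrative, not from the paper. The sketch shows how classifying by similarity removes the inner-loop gradient steps at test time, and with them the per-task convergence issue described above.

```python
import torch

def metric_based_predict(encoder, support_x, support_y, query_x, n_way):
    """Classify query samples by similarity to class prototypes in feature
    space, replacing inner-loop gradient descent at meta-test time.
    (Illustrative sketch; not the paper's exact procedure.)"""
    with torch.no_grad():
        z_support = encoder(support_x)  # (n_support, d) feature embeddings
        z_query = encoder(query_x)      # (n_query, d)
        # Class prototypes: mean support embedding per class.
        prototypes = torch.stack(
            [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
        )  # (n_way, d)
        # Assign each query to its nearest prototype under Euclidean
        # distance; no gradient steps, hence no per-task convergence issue.
        distances = torch.cdist(z_query, prototypes)  # (n_query, n_way)
    return distances.argmin(dim=1)
```

Because prediction reduces to a single forward pass plus a distance computation, every task receives the same, step-free treatment at meta-test, which is the approximation to converged inner-loop gradient descent that the paper formalizes.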
