Abstract

Few-shot learning addresses deep learning's dependence on large amounts of data. It learns new concepts from a large number of training tasks rather than from a large amount of data, which enables it to adapt to new tasks quickly without extensive training data. Among meta-learning approaches, metric-based methods have proved particularly effective. Despite their success, however, metric-based methods adapt poorly to the data distribution, so classification results depend heavily on the accuracy of the encoder network and are susceptible to noisy features. In this paper, we filter noisy features and correct the data distribution by adaptively learning metric weights and data-distribution biases, and we propose corresponding loss functions to evaluate and update both the adaptive module and the encoder network. Experimental results on standard few-shot learning datasets demonstrate that the proposed method achieves consistent improvements.
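The core idea above — down-weighting noisy feature dimensions and shifting embeddings before a distance-based comparison — can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`weighted_distance`, `classify`) and the simple per-dimension weight and bias vectors are assumptions for illustration; in the paper these quantities are learned adaptively and trained with the proposed loss functions.

```python
def weighted_distance(query, prototype, weights):
    """Feature-wise weighted squared Euclidean distance.

    A learned weight vector can suppress noisy feature dimensions
    (small weight) and emphasize discriminative ones (large weight).
    """
    return sum(w * (q - p) ** 2 for w, q, p in zip(weights, query, prototype))


def correct_query(query, bias):
    """Shift an embedding by a learned per-dimension bias to
    compensate for distribution mismatch between tasks."""
    return [q - b for q, b in zip(query, bias)]


def classify(query, prototypes, weights, bias=None):
    """Assign the query to the nearest class prototype under the
    weighted metric, optionally after bias correction."""
    if bias is not None:
        query = correct_query(query, bias)
    return min(prototypes,
               key=lambda c: weighted_distance(query, prototypes[c], weights))
```

For example, with prototypes `{"a": [0, 0], "b": [1, 1]}` and query `[0.1, 0.9]`, the weight vector alone decides the outcome: weights `[1.0, 0.1]` (trust the first dimension) yield class `"a"`, while `[0.1, 1.0]` yield class `"b"` — which is why making the metric weights learnable, rather than fixing a plain Euclidean distance, matters.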
