Abstract

Image classification is being applied in an ever-widening range of fields, yet in many specific scenarios, such as medicine or the personalized customization of robots, it is often difficult to obtain enough training data. Few-shot image classification aims to learn the features of new classes quickly from only a few images, and meta-learning has become the mainstream approach owing to its strong performance. However, meta-learning methods still generalize poorly and are easily disturbed by low-quality images. To address these problems, this paper proposes Momentum Group Meta-Learning (MGML), which comprises a Group Meta-Learning (GML) module and an Adaptive Momentum Smoothing (AMS) module. GML builds an ensemble model by training multiple episodes in parallel and then grouping them, which reduces the interference of low-quality samples and improves the stability of meta-training. AMS applies an adaptive momentum update rule to further integrate the models from different groups, allowing the model to retain experience from more scenarios and enhancing its generalization ability. We conduct experiments on the miniImageNet and tieredImageNet datasets. The results show that MGML improves the accuracy, stability, and cross-domain transfer ability of few-shot image classification, and that it can be applied to different few-shot learning models.
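The details of the two modules are in the full paper; purely as a rough illustration of the ideas the abstract describes, the following is a minimal sketch assuming model parameters are flat NumPy vectors. The episode update `meta_update`, the grouping size `group_size`, and the adaptive coefficient formula in `adaptive_momentum_smoothing` are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def meta_update(theta, episode, lr=0.01):
    # Hypothetical placeholder for one episode of meta-training
    # (e.g., an inner/outer optimization step on the episode's tasks).
    grad = np.random.randn(*theta.shape)  # stand-in for the episode gradient
    return theta - lr * grad

def group_meta_learning(theta, episodes, group_size=4):
    """GML sketch: train several episodes in parallel, then average each
    group's models so individual low-quality episodes are smoothed out."""
    models = [meta_update(theta.copy(), ep) for ep in episodes]
    groups = [models[i:i + group_size] for i in range(0, len(models), group_size)]
    return [np.mean(g, axis=0) for g in groups]  # one ensemble model per group

def adaptive_momentum_smoothing(group_models, beta0=0.9):
    """AMS sketch: integrate the group models with a momentum (EMA-style)
    update whose coefficient adapts as the groups drift apart."""
    smoothed = group_models[0]
    for m in group_models[1:]:
        drift = np.linalg.norm(m - smoothed)
        beta = beta0 / (1.0 + drift)  # assumed adaptive rule, for illustration only
        smoothed = beta * smoothed + (1.0 - beta) * m
    return smoothed
```

The intuition, as stated in the abstract, is that averaging within groups dampens the influence of noisy episodes, while the momentum pass across groups carries experience from more scenarios into a single model.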
