Abstract

In recent years, researchers have commonly employed auxiliary tasks to enhance the training of few-shot classification models. Several methods have been proposed to exploit and optimize the training tasks, such as Curriculum Learning (CL) and Hard Example Mining (HEM). However, most existing strategies cannot leverage the training tasks elaborately and share some common drawbacks: (1) they ignore the properties of the target tasks, and (2) they neglect the relationships among samples. In this work, we propose a Self-Paced Hard tAsk-Example Mining (SP-HAEM) method to address these problems. Specifically, SP-HAEM automatically chooses hard examples via the similarity between training and target tasks to optimize the support set. To represent the properties of the target tasks, SP-HAEM obtains a representation of the dataset, called the “meta-task”. Unlike other HEM methods, which require an additional model to measure difficulty and choose hard examples, SP-HAEM selects the tasks with a large optimal transport distance to the meta-task as hard tasks. Training with such hard tasks thus not only enhances the generalization ability of the model but also eliminates the negative effect of redundant tasks. To evaluate the effectiveness of SP-HAEM, we conduct extensive experiments on a variety of datasets, including MiniImageNet, TieredImageNet, and FC100. The experimental results show that SP-HAEM achieves higher accuracy than typical few-shot classification models, e.g., Prototypical Network, MAML, FEAT, and MTL.
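
As a concrete illustration of the selection rule described above, the sketch below scores candidate training tasks by an entropic-regularized optimal transport (Sinkhorn) distance to a meta-task representation and keeps the farthest tasks as hard tasks. This is a minimal sketch under our own assumptions, not the authors' released code: the names (sinkhorn_distance, select_hard_tasks, task_features, meta_task_features) are illustrative, uniform marginals are assumed, and the paper's exact OT formulation and feature extractor may differ.

    import numpy as np

    def sinkhorn_distance(X, Y, reg=0.1, n_iters=200):
        # Entropic-regularized OT cost between point clouds X (n, d) and Y (m, d),
        # both with uniform marginals (an assumption for this sketch).
        C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
        C = C / C.max()                                     # normalize for numerical stability
        K = np.exp(-C / reg)                                # Gibbs kernel
        n, m = X.shape[0], Y.shape[0]
        a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        u = np.ones(n) / n
        for _ in range(n_iters):                            # Sinkhorn fixed-point updates
            v = b / (K.T @ u)
            u = a / (K @ v)
        P = u[:, None] * K * v[None, :]                     # transport plan
        return float((P * C).sum())                         # transport cost <P, C>

    def select_hard_tasks(task_features, meta_task_features, k):
        # Rank tasks by OT distance to the meta-task; largest distance = hardest.
        dists = [sinkhorn_distance(T, meta_task_features) for T in task_features]
        return np.argsort(dists)[-k:]

    # Usage: embed each episode's support set with the backbone, embed the whole
    # dataset once as the meta-task, then keep the k hardest episodes for training.
    rng = np.random.default_rng(0)
    tasks = [rng.normal(size=(25, 64)) for _ in range(40)]  # 40 candidate episodes
    meta = rng.normal(size=(200, 64))                       # meta-task features
    hard_idx = select_hard_tasks(tasks, meta, k=8)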
