Abstract

Few-shot learning (FSL) is a challenging problem, and prototype-based methods are a simple and effective family of approaches to it. However, due to the scarcity of labeled samples, the class prototypes learned by existing prototype-based few-shot learning methods are strongly biased and fail to capture the representative and discriminative characteristics of their corresponding classes. To address this problem, we propose few-shot learning based on prototype rectification with a self-attention mechanism (FSL-PRS). To learn less biased and more discriminative class prototypes, FSL-PRS treats the support set and the query set as a whole and uses a self-attention mechanism to learn task-related features from the features extracted by a pretrained backbone network. The task-related features are then used to compute the initial class prototypes and to predict a pseudo label and a confidence score for each query sample. Query samples with high confidence are incorporated into the support set to rectify the class prototypes. Because the learned prototypes should also highlight class significance, we further design a class significance learning module that makes them more discriminative. Unlike prior works, we treat the support set and the query set as a whole when learning task-related features with self-attention, which not only alleviates the negative effect of the distribution difference between the two sets but also fuses global context information to enhance the features for FSL. Comprehensive experiments on four widely adopted few-shot learning benchmarks demonstrate that FSL-PRS achieves state-of-the-art performance, validating its effectiveness.
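To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the core idea: joint self-attention over support and query features, prototype computation, and confidence-gated rectification. It is an illustration inferred from the abstract, not the authors' implementation; the projection-free attention stand-in, the Euclidean-distance pseudo-labeling, the threshold tau, and all function names are assumptions, and the class significance learning module is omitted.

```python
import torch
import torch.nn.functional as F

def self_attend(x):
    # Scaled dot-product self-attention over all task features; the learned
    # query/key/value projections of a full attention layer are omitted here.
    scores = x @ x.t() / x.shape[1] ** 0.5   # (N, N) similarity matrix
    return F.softmax(scores, dim=1) @ x      # each feature attends to the task

def compute_prototypes(feats, labels, num_classes):
    # Class prototype = mean embedding of the samples labeled with that class.
    return torch.stack([feats[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def rectified_prototypes(support_feats, support_labels, query_feats,
                         num_classes, tau=0.9):
    # Refine support and query features jointly, treating the task as a whole.
    n_support = support_feats.shape[0]
    feats = self_attend(torch.cat([support_feats, query_feats]))
    s_feats, q_feats = feats[:n_support], feats[n_support:]

    # Initial prototypes from the (refined) support set alone.
    protos = compute_prototypes(s_feats, support_labels, num_classes)

    # Pseudo-label queries via softmax over negative Euclidean distances.
    probs = F.softmax(-torch.cdist(q_feats, protos), dim=1)  # (Q, C)
    conf, pseudo = probs.max(dim=1)

    # Fold high-confidence queries into the support set, then recompute.
    keep = conf > tau
    all_feats = torch.cat([s_feats, q_feats[keep]])
    all_labels = torch.cat([support_labels, pseudo[keep]])
    return compute_prototypes(all_feats, all_labels, num_classes)
```

For a 5-way 5-shot episode with 15 queries per class, support_feats would have shape (25, d) and query_feats (75, d); a higher tau trades rectification coverage for pseudo-label purity.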
