Abstract

Few-shot learning (FSL) aims to learn and optimise a model from only a few examples for image classification, and it remains hampered by data scarcity. To generate more data as supplements, data augmentation is a powerful and popular technique for enhancing the robustness of few-shot models. However, applying augmentation still has weaknesses: augmented samples carry similar semantic information regardless of the transformation applied, so traditional augmentation methods are incapable of learning the property being varied. To address this challenge, we introduce multi-task learning to learn a primary few-shot classification task and an auxiliary self-supervised task simultaneously. The self-supervised task learns the transformation properties as auxiliary self-supervision signals to improve the performance of the primary few-shot classification task. Additionally, we propose a simple, flexible, and effective decision-fusion mechanism, named model-agnostic ensemble inference (MAEI), to further improve the reliability of the classifier. Specifically, MAEI eliminates the influence of outliers in FSL using non-maximum suppression. Extensive experimental results demonstrate that our method outperforms other state-of-the-art methods by large margins.
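
To make the multi-task setup concrete, below is a minimal sketch of a joint training objective: a primary few-shot classification loss plus an auxiliary self-supervised loss that predicts which transformation was applied. It assumes a PyTorch backbone and rotation-based self-supervision (a common choice; the abstract does not name the transformations), and all names here (`Encoder`, `rotate_batch`, `lambda_ssl`) are illustrative, not the authors' code.

```python
# Hedged sketch: multi-task loss = few-shot classification + rotation
# prediction as auxiliary self-supervision. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy backbone; the paper's actual architecture is not specified here."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.net(x)

def rotate_batch(x):
    """Make 4 rotated copies (0/90/180/270 deg) and the rotation labels."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rotations, dim=0), labels

encoder = Encoder()
cls_head = nn.Linear(64, 5)   # 5-way episode (assumed)
rot_head = nn.Linear(64, 4)   # 4 rotation classes

images = torch.randn(10, 3, 32, 32)      # dummy support batch
cls_labels = torch.randint(0, 5, (10,))

# Primary few-shot classification loss.
loss_cls = F.cross_entropy(cls_head(encoder(images)), cls_labels)

# Auxiliary self-supervised loss: predict the applied transformation,
# which forces the encoder to represent the property being varied.
rot_images, rot_labels = rotate_batch(images)
loss_ssl = F.cross_entropy(rot_head(encoder(rot_images)), rot_labels)

lambda_ssl = 0.5  # assumed weighting; the paper's value is not given here
loss = loss_cls + lambda_ssl * loss_ssl
loss.backward()
```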
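The abstract states only that MAEI fuses decisions and suppresses outliers via non-maximum suppression; the sketch below illustrates one plausible reading, dropping the predictions farthest from the ensemble consensus before averaging. The function name `ensemble_infer` and the `keep_ratio` parameter are hypothetical, not from the paper.

```python
# Hedged sketch of outlier-suppressing decision fusion (one plausible
# reading of MAEI; the exact procedure is not given in the abstract).
import torch

def ensemble_infer(prob_list, keep_ratio=0.75):
    """prob_list: per-model (num_classes,) probability vectors.
    Suppresses the vectors farthest from the mean prediction,
    then averages the survivors and returns the fused class."""
    probs = torch.stack(prob_list)              # (M, C)
    mean = probs.mean(dim=0, keepdim=True)      # consensus estimate
    dist = (probs - mean).norm(dim=1)           # distance to consensus
    k = max(1, int(keep_ratio * probs.size(0)))
    keep = dist.argsort()[:k]                   # suppress outlier votes
    return probs[keep].mean(dim=0).argmax().item()

# Usage on dummy predictions from 8 ensemble members:
preds = [torch.softmax(torch.randn(5), dim=0) for _ in range(8)]
print(ensemble_infer(preds))
```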
