Abstract

Few-shot classification aims to identify new categories from only a handful of labeled examples and has drawn considerable attention in machine learning. An efficient and effective approach to few-shot classification is metric learning with episodic training, which attempts to learn a task-generic embedding space over an extremely large number of episodes, each consisting of a small labeled support set and a corresponding query set. Distances between the embeddings of query-set and support-set examples are then used as a notion of similarity to perform classification. This paper first adds a margin between different classes to increase the inter-class distance in the embedding space (we call this method ML-FSC). Furthermore, observing that similarity scores that deviate far from the optimum should be emphasized, this paper introduces the circle loss into episodic metric learning; the circle loss simply re-weights each similarity to highlight the less-optimized similarity scores, contributing to more discriminative embedding learning (we call this method CL-FSC). We conduct comprehensive experiments to validate our methods and achieve new state-of-the-art performance on three popular few-shot classification benchmarks, namely Omniglot, miniImageNet, and tieredImageNet.
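The abstract does not spell out either loss, but both have standard forms. A margin of the kind ML-FSC describes is commonly realized in the additive-margin-softmax style, subtracting a margin m from the query's similarity to its own class before the softmax; the following is a sketch under that assumption, with s(q, c_k) denoting the similarity between query embedding q and the class-k prototype, γ a scale factor, and y the true class (notation introduced here for illustration, not taken from the paper):

\[
\mathcal{L}_{\mathrm{margin}} = -\log \frac{\exp\big(\gamma\,(s(q, c_y) - m)\big)}{\exp\big(\gamma\,(s(q, c_y) - m)\big) + \sum_{k \neq y} \exp\big(\gamma\, s(q, c_k)\big)}
\]

The circle loss referenced for CL-FSC, in the formulation of Sun et al. (CVPR 2020), is

\[
\mathcal{L}_{\mathrm{circle}} = \log\Big[\, 1 + \sum_{j} \exp\big(\gamma\, \alpha_n^{j} (s_n^{j} - \Delta_n)\big) \sum_{i} \exp\big(-\gamma\, \alpha_p^{i} (s_p^{i} - \Delta_p)\big) \Big],
\]

where s_p^i and s_n^j are the within-class and between-class similarity scores, the adaptive weights \(\alpha_p^{i} = [O_p - s_p^{i}]_+\) and \(\alpha_n^{j} = [s_n^{j} - O_n]_+\) emphasize scores far from their optima \(O_p = 1 + m\) and \(O_n = -m\), and \(\Delta_p = 1 - m\), \(\Delta_n = m\) are the decision margins. How exactly the paper instantiates these losses within episodic training may differ from this sketch.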
