Abstract

Few-shot learning has attracted increasing attention recently owing to its broad applications, but it remains challenging because of the difficulty of modeling from only a few examples. In this paper, we present an effective framework named Attentive Matching Network (AMN) to address the few-shot learning problem. Building on metric learning, AMN first learns robust representations via a carefully designed embedding network using only a few samples. Distances between the representations of support samples and target samples are then computed with a similarity function to form a score vector, from which classification is performed. Unlike existing algorithms, we propose a feature-level attention mechanism that encourages the similarity function to place more emphasis on the features that best reflect inter-class differences, and that helps the embedding network learn a stronger feature-extraction capability. Furthermore, to learn a discriminative embedding space that maximizes inter-class distance and minimizes intra-class distance, we introduce a novel Complementary Cosine Loss consisting of two parts: a modified Cosine Distance Loss, which measures the distance between the predicted category similarities and the true ones and directly exploits all support samples when computing gradients, and a Hardest-category Discernment Loss, which handles the similarity of the hardest incorrect class. Results demonstrate that AMN achieves competitive performance on the Omniglot and miniImageNet datasets. In addition, we conduct extensive experiments to analyze the influence of the embedding network, the attention mechanism, and the loss function.
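The abstract does not give the exact formulations, but the overall pipeline it describes — attention-weighted similarity scoring over support classes, followed by a two-part loss — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names (`attentive_scores`, `complementary_cosine_loss`), the use of a one-hot vector as the "true" category similarity, and the `margin` hyperparameter are all hypothetical choices, not the paper's actual definitions.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    # Cosine similarity between two 1-D vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def attentive_scores(support, query, attn):
    # support: (C, D) per-class prototype embeddings; query: (D,);
    # attn: (D,) non-negative feature-level attention weights that
    # up-weight features reflecting inter-class differences.
    w = attn / attn.sum()
    return np.array([cosine_sim(w * s, w * query) for s in support])

def complementary_cosine_loss(scores, label, margin=0.5):
    # Cosine-distance term (assumed form): distance between the
    # predicted score vector and a one-hot "true" similarity vector.
    onehot = np.zeros_like(scores)
    onehot[label] = 1.0
    cd = 1.0 - cosine_sim(scores, onehot)
    # Hardest-category term (assumed form): push the most similar
    # wrong class below the true class by at least `margin`.
    hardest_wrong = np.delete(scores, label).max()
    hd = max(0.0, hardest_wrong - scores[label] + margin)
    return cd + hd

# Toy 5-way episode with 8-d embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 8))              # one prototype per class
query = support[2] + 0.1 * rng.normal(size=8)  # query drawn near class 2
attn = np.ones(8)                              # uniform attention placeholder
scores = attentive_scores(support, query, attn)
pred = int(np.argmax(scores))                  # predicted class index
loss = complementary_cosine_loss(scores, 2)
```

In an actual model, `attn` would be produced by a learned attention module and the embeddings by the trained embedding network; here both are stand-ins so the scoring and loss shapes are easy to follow.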
