Abstract

Most few-shot image classification methods learn a feature space from seen classes and then simply perform classification for unseen classes with few labeled samples. They extract features from support and query samples independently and treat all feature channels equally. In this work, we apply an attention mechanism to few-shot image classification that learns the features shared between support and query samples in order to handle new tasks. We propose a selective module that selects the channels relevant for discriminating the objects in the query image and the support image. The selective module aggregates the features of both images into a channel descriptor, which is then used to generate channel-wise attention for each image. In this way, the mutual interdependencies between channels from different images are explicitly modeled, and channel-wise responses are adaptively recalibrated. The selective module is a lightweight embedding module that selects the relevant features between support and query samples along the channel dimension and learns more distinctive features.
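As a rough illustration of this mechanism, the sketch below implements a squeeze-and-excitation-style cross-image channel attention in PyTorch. The pooling choice, the averaging fusion of the two descriptors, the reduction ratio, and the use of separate excitation heads per image are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of a cross-image channel-attention ("selective") module.
# Assumed: shared backbone features of shape (B, C, H, W), average-pool squeeze,
# descriptor fusion by averaging, and one excitation head per image.
import torch
import torch.nn as nn


class SelectiveModule(nn.Module):
    """Jointly recalibrates the channels of support and query feature maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze each map to one value per channel

        def excitation() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        self.fc_support = excitation()  # attention head for the support image
        self.fc_query = excitation()    # attention head for the query image

    def forward(self, support: torch.Tensor, query: torch.Tensor):
        # support, query: (B, C, H, W) feature maps from a shared backbone
        b, c, _, _ = support.shape
        # Aggregate both images into a single channel descriptor
        desc = (self.pool(support) + self.pool(query)).view(b, c) * 0.5
        # Generate channel-wise attention for each image from the shared descriptor
        attn_s = self.fc_support(desc).view(b, c, 1, 1)
        attn_q = self.fc_query(desc).view(b, c, 1, 1)
        # Recalibrate the channel-wise responses of both feature maps
        return support * attn_s, query * attn_q


if __name__ == "__main__":
    m = SelectiveModule(channels=64)
    s = torch.randn(4, 64, 10, 10)
    q = torch.randn(4, 64, 10, 10)
    s_out, q_out = m(s, q)
    print(s_out.shape, q_out.shape)  # torch.Size([4, 64, 10, 10]) for both
```

Because the attention vectors are produced from a descriptor that mixes both images, the recalibration emphasizes channels that matter for comparing this particular support-query pair, rather than channels fixed at training time.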
