Abstract

Few-shot visual recognition has advanced remarkably with the rise of deep learning. Its goal is to learn model parameters on base categories and transfer them to novel categories with limited annotations. However, most existing few-shot visual recognition approaches focus on extracting a global feature representation of each sample, which fails to encode semantic information. To alleviate this issue, this paper presents a novel cooperative density-aware representation learning approach for few-shot visual recognition. Specifically, we first obtain high-level semantic features of the query set and the support set with a shared convolutional neural network. A cooperative density loss module is then designed to optimize the model toward discriminative features by combining a density global classification loss with a density few-shot loss: the density few-shot loss performs semantic alignment of regional features by maximizing their mutual information, while the density global classification loss supervises each regional feature to yield more precise classification. Comprehensive experiments on few-shot visual recognition benchmarks validate the effectiveness and superiority of the proposed approach, and detailed ablations explain the utility of each module.
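The abstract describes the training objective only at a high level. As an illustration, the following is a minimal PyTorch-style sketch of how a cooperative density loss might combine a per-region classification term with a mutual-information-style alignment term between query and support regional features. The function name, tensor shapes, weighting scheme, and the InfoNCE-style estimator are all assumptions for clarity, not the authors' implementation.

```python
# Hypothetical sketch of a cooperative density loss: a per-region
# ("density") classification term plus a mutual-information-style
# alignment term. Shapes, names, and loss forms are assumptions.
import torch
import torch.nn.functional as F


def cooperative_density_loss(query_regions, support_regions, classifier,
                             labels, temperature=0.1, alpha=1.0):
    """query_regions, support_regions: (B, R, D) regional features for
    B matched query/support pairs, each with R regions of dimension D.
    classifier: nn.Linear(D, num_classes); labels: (B,) class indices.
    """
    B, R, D = query_regions.shape

    # Density global classification loss: supervise every regional
    # feature with the image-level label (one CE term per region).
    logits = classifier(query_regions.reshape(B * R, D))   # (B*R, C)
    region_labels = labels.repeat_interleave(R)            # (B*R,)
    cls_loss = F.cross_entropy(logits, region_labels)

    # Density few-shot loss: align query regions with support regions
    # by maximizing an InfoNCE lower bound on mutual information. Each
    # query region's positive is the corresponding support region of
    # the same sample; all other support regions act as negatives.
    q = F.normalize(query_regions.reshape(B * R, D), dim=-1)
    s = F.normalize(support_regions.reshape(B * R, D), dim=-1)
    sim = q @ s.t() / temperature                          # (B*R, B*R)
    targets = torch.arange(B * R, device=sim.device)
    align_loss = F.cross_entropy(sim, targets)

    return cls_loss + alpha * align_loss
```

In this sketch, alpha balances the two terms; the paper's actual region-matching strategy and mutual-information estimator may differ from the one-to-one InfoNCE pairing assumed here.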
