Abstract
Few-shot learning aims to transfer knowledge learned from seen categories to unseen categories given only a few reference samples. It is also an essential step toward bridging the gap between humans and deep learning models in real-world applications. Despite extensive prior efforts to tackle this problem by finding an appropriate similarity function, we emphasize that most existing methods compute the similarity between support and query samples from only a single low-resolution representation pair. Such a representational limitation can destabilize category predictions. To achieve more stable metric learning, we present a novel method dubbed Fork Attention Adapter (FA-adapter), which seamlessly establishes dense feature similarity using newly generated nuanced features. A two-stage training phase makes the proposed method both more performant and more efficient. Extensive experiments demonstrate consistent and substantial accuracy gains on the fine-grained CUB and Aircraft benchmarks as well as the non-fine-grained mini-ImageNet and tiered-ImageNet benchmarks. By comprehensively studying and visualizing the knowledge learned from different source domains, we further present an extended version, termed FA-adapter++, that boosts performance in fine-grained scenarios.
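To illustrate the contrast the abstract draws between a single representation pair and dense feature similarity, the following is a minimal, generic sketch (not the FA-adapter itself; both functions and their pooling/matching choices are illustrative assumptions): the baseline pools each feature map into one vector and computes a single cosine similarity, while the dense variant compares every spatial location of the query map against every location of the support map.

```python
import numpy as np

def global_similarity(support, query):
    """Single-representation baseline: pool each (C, H, W) feature map
    to one C-dim vector, then take a single cosine similarity."""
    s = support.mean(axis=(1, 2))  # global average pooling -> (C,)
    q = query.mean(axis=(1, 2))
    return float(s @ q / (np.linalg.norm(s) * np.linalg.norm(q) + 1e-8))

def dense_similarity(support, query):
    """Dense alternative: compute cosine similarity between every pair of
    spatial locations, then average each query location's best match,
    so the score rests on many local correspondences rather than one."""
    C, H, W = support.shape
    s = support.reshape(C, -1)                                 # (C, H*W)
    q = query.reshape(C, -1)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)  # unit columns
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    sim = q.T @ s                       # (H*W, H*W) local cosine similarities
    return float(sim.max(axis=1).mean())
```

Because the dense score aggregates many local matches, a perturbation of a single region shifts it far less than it shifts the one pooled-vector similarity, which is the kind of prediction instability the abstract attributes to single-pair methods.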